I've recently been facing a strange issue with Hangfire and an ASP.NET app hosted on a single server (a virtual machine).
My code runs fine on the local machine; however, once it is deployed to the server, I seem to get the results of some older code that still exists somewhere.
Here is what the background job is supposed to do:
Query the database multiple times (some heavy queries are involved)
Write the results to an Excel file
Send the file in an email.
What happens is that for the same request I randomly get one of three outcomes: an empty Excel file in the email with only the header data filled in, a partially filled Excel file, or a complete Excel file.
I've disabled sending the email while debugging, but between one request and the next I am still getting emails (!)
I've restarted IIS, restarted the web app, and deleted the web app's directory and redeployed, all to no avail.
Hangfire is great, but I'm afraid I have no choice but to look for an alternative if I can't overcome this problem (it's been a couple of days of debugging). Is there a way to properly restart Hangfire?
Here is my setup for reference:
Global.asax.cs
Hangfire.GlobalConfiguration.Configuration
    .SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
    .UseSimpleAssemblyNameTypeSerializer()
    .UseRecommendedSerializerSettings()
    .UseSerilogLogProvider()
    .UseSqlServerStorage("Server=localhost; Database = hangfire; Integrated Security = SSPI;", new SqlServerStorageOptions
    {
        CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
        SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
        QueuePollInterval = TimeSpan.Zero,
        UseRecommendedIsolationLevel = true,
        DisableGlobalLocks = true
    });

var options = new BackgroundJobServerOptions
{
    WorkerCount = 1
};

yield return new BackgroundJobServer(options);
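On the restart question: one thing worth checking is that the BackgroundJobServer started at application startup is actually disposed when the app domain recycles, so a stale worker cannot keep picking up jobs with old code. A minimal sketch (the holder class and its wiring are my own, not Hangfire API):

```csharp
using System;

// Sketch: keep the server instance somewhere reachable from Application_End.
// The holder field is typed as IDisposable so this compiles without Hangfire
// references; in the real app it would hold the BackgroundJobServer, whose
// Dispose() stops the workers and waits for in-flight jobs to finish.
public static class HangfireBootstrap
{
    public static IDisposable Server;

    public static void Shutdown()
    {
        Server?.Dispose(); // stops workers; old code can no longer run jobs
        Server = null;
    }
}

// Assumed wiring in Global.asax.cs:
//   protected void Application_Start() { HangfireBootstrap.Server = new BackgroundJobServer(options); }
//   protected void Application_End()   { HangfireBootstrap.Shutdown(); }
```

Separately, if more than one deployment has ever pointed at the same hangfire database, a server running in another app pool or on another machine could still be registered and processing jobs with old binaries; the Servers page of the Hangfire dashboard shows exactly which servers are connected to the storage.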
and this is how the job is called:
BackgroundJob.Enqueue(() => (new cReport()).GetReportInBackground(itemsToReport, tbStartDate.Text, tbEndDate.Text, ME));
Thanks!
I created an image uploading app for a client and they want to host it on its own IIS server. When I publish the app to the server I get the error
HTTP Error 500.30 - ASP.NET Core app failed to start
I have installed the .NET 6 SDK, runtime, and hosting bundle needed to host the app.
After looking around on Stack Overflow and Google, I was able to run appNameHere.dll from the command prompt, and it runs just fine without showing any errors. When I do that, I can open the app locally on the server. It's only when it's public-facing that I get the error.
I have narrowed it down to these few lines of code in the Program.cs file
app.UseStaticFiles();
app.UseStaticFiles(new StaticFileOptions()
{
    FileProvider = new PhysicalFileProvider(Path.GetFullPath("\\\\12.34.56.789\\c$\\ABC\\FolderName\\ProjectName\\Images\\ItemImages\\")),
    RequestPath = new PathString("/ItemImages")
});
When I comment these out, the app shows up fine and works, but I can't get the files from the other site.
I can also set up a local folder at "C:\UnitImages" and everything works as well.
I created a shared connection to the main server to test the path as well and it works there too. So I'm a bit lost on where to go next.
Update
As stated in one of the links from @Code Maverick, I have updated the application pool identity to the user that has full access to the folders, and I still get the error stated above.
I came across this article and tried it, but I'm getting the error "'NetworkConnection' is a namespace but is used like a type".
repo for ref.
var sourceCredentials = new NetworkCredential { Domain = "12.34.56.789", UserName = "Administrator", Password = "123456" };
using (new NetworkConnection("\\\\12.34.56.789\\c$\\ABC\\FolderName\\ProjectName\\Images\\UnitImages\\", sourceCredentials))
{
    // serve static files from the \\network\shared location
    app.UseStaticFiles(new StaticFileOptions()
    {
        FileProvider = new PhysicalFileProvider(Path.GetFullPath("\\\\12.34.56.789\\c$\\ABC\\FolderName\\ProjectName\\Images\\UnitImages\\")),
        RequestPath = new PathString("/UnitImages")
    });
}
Based on this and this, you may need to grant the IIS_IUSRS group access to the share.
You personally may be able to access the share, but your web application hosted in IIS needs the same access privilege. This does have security implications, though.
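To make that distinction observable from inside the app, a small startup probe can try to enumerate the share under the app pool identity before the static-files middleware is registered. This is a sketch of mine, not part of ASP.NET Core:

```csharp
using System;
using System.IO;
using System.Linq;

// Sketch: verify that the identity the app actually runs under can read the
// UNC path, so a share-permission problem surfaces as a clear log message
// instead of an opaque 500.30 at startup.
public static class SharePermissionProbe
{
    public static bool CanRead(string path, out string error)
    {
        try
        {
            // Enumerating forces a real access check against the directory.
            Directory.EnumerateFiles(path).FirstOrDefault();
            error = null;
            return true;
        }
        catch (Exception ex) // UnauthorizedAccessException, DirectoryNotFoundException, IOException...
        {
            error = ex.Message;
            return false;
        }
    }
}
```

Calling this just before the UseStaticFiles registration and logging the error when it returns false tells you whether it is the app pool identity, rather than your interactive account, that is being denied.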
We've created a Selenium test project that starts the (ASP.NET) web application and runs a couple of tests using the ChromeDriver. Locally this all runs fine (in headless and non-headless mode).
But on the build server (an Azure DevOps agent) this fails without the tests ever starting. It looks like it fails when starting the ChromeDriver: the driver starts, but is then immediately followed by 403 errors. It never gets to the part where it actually loads a web page.
Any ideas where to look?
Answering my own question to document possible solutions.
After some rigorous investigation (which included using the source code to get to the bottom of things) we found out that the proxy server somehow got in the way. It turned out that the ChromeDriver tries to communicate over a local port (e.g. http://localhost:12345), which was redirected through the proxy server. This failed with a 403 error.
This gave us a lead on possible solutions. First we tried to use the .proxybypass file to exclude localhost addresses. This didn't work -- it turns out that this proxy bypass only works for https requests. And the ChromeDriver control commands are sent over http :-(
We then made sure that no proxy was used in our test code. We did this with the following lines:
var options = new ChromeOptions();
options.AddArgument("--no-sandbox");
options.AddArgument("headless");
options.AddArgument("ignore-certificate-errors");
options.Proxy = new Proxy()
{
    Kind = ProxyKind.Direct
};
var driver = new ChromeDriver(options);
In addition to these settings (note that some arguments were added to solve other issues and might not apply to your own situation), we also disabled the proxy for other requests:
WebRequest.DefaultWebProxy = null;
HttpClient.DefaultProxy = new WebProxy()
{
    BypassProxyOnLocal = true,
};
This allowed our tests to finally run on the build server without the 403 errors.
One last remark (which might be obvious) is to always run your tests in non-headless mode if you encounter any issues. This allowed us to see the "invalid certificate error" which would otherwise be hidden.
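One way to make that debugging switch cheap is to build the Chrome argument list from an environment variable, so a build-server run can be flipped to non-headless without a code change. The SELENIUM_HEADED variable name is an assumption of this sketch:

```csharp
using System;
using System.Collections.Generic;

// Sketch: assemble the ChromeOptions arguments; "headless" is dropped when
// SELENIUM_HEADED=1 so that failures can be watched on screen.
public static class ChromeArgs
{
    public static string[] Build()
    {
        var args = new List<string> { "--no-sandbox", "ignore-certificate-errors" };
        if (Environment.GetEnvironmentVariable("SELENIUM_HEADED") != "1")
            args.Add("headless");
        return args.ToArray();
    }
}

// Usage with the options object from the snippet above:
//   foreach (var arg in ChromeArgs.Build()) options.AddArgument(arg);
```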
I have a .NET/C# app that calls the Azure Search service. It has been working fine, finding a list of PDF files I have in Azure Storage based on submitted keywords. But a couple of days ago, the live app on Azure stopped working: no documents are returned from a search. Locally, however, the app works fine with the same code. I suspect something may have changed with firewall rules, but I can't find where that may have occurred. Hopefully someone has had something similar happen and has a solution.
Here's the code that stopped working on Live.
var indexClient = GetIndexClient(); // sets up SearchIndexClient with uri, credentials, etc.
SearchParameters sp = new SearchParameters()
{
    Select = new[] { "metadata_storage_name" },
    SearchMode = SearchMode.Any
};
var docs = indexClient.Documents.Search(searchString, sp); // this line no longer works on Live
As it turns out, it had to do with Microsoft's decommissioning of TLS 1.0 and 1.1 over the last two weeks. I was able to add the following to my code to make it work again (added to the Page_Load procedure in the default template):
System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
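Rather than setting this on every page load, the same switch can be applied once at startup; using |= also preserves any protocols that are already enabled instead of replacing them. A sketch, assuming Application_Start in Global.asax is the startup hook:

```csharp
using System.Net;

// Sketch: opt the process into TLS 1.2 once, e.g. from Application_Start.
public static class TlsConfig
{
    public static void EnableTls12()
    {
        // |= adds TLS 1.2 without switching off protocols already enabled.
        ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
    }
}
```

On .NET Framework 4.7 and later the recommended alternative is to target that framework version and let the OS choose the protocol (SecurityProtocolType.SystemDefault), but forcing Tls12 as above is the minimal fix for an older target.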
However, I'm still working on an issue where the PDF links listed in an editor window (using the CKEditor extension) no longer work. I'm assuming this is the same problem, as it works on my local machine but not from the Azure web app.
I'm using the ElasticClient C# class for connecting to an Elasticsearch instance hosted on AWS.
var pool = new SingleNodeConnectionPool(new Uri(Url));
var httpConnection = new AwsHttpConnection(Region);
var config = new ConnectionSettings(pool, httpConnection)
    .PrettyJson()
    .DisableDirectStreaming()
    .DefaultTypeName(TYPE)
    .DefaultIndex(INDEX);
_client = new ElasticClient(config);
For setting the access key and secret, I have a credentials file stored on my Windows machine at C:\Users\{username}\.aws\credentials. It has a "default" entry, so setting the profile name manually shouldn't be required. This works fine when I run my ASP.NET Core web application with Launch set to Project.
However, as soon as I change to Launch: IIS...
...then the Elasticsearch connection fails. Whenever I try to execute a query, it errors:
Message=Invalid NEST response built from a unsuccessful low level call on POST: /{url1}/{url2}/_search?pretty=true&typed_keys=true
Audit trail of this API call:
1 BadRequest: Node: https://{url1}.us-east-1.es.amazonaws.com/ Took: 00:00:00.0090414
OriginalException: System.Net.Http.HttpRequestException: A socket operation was attempted to an unreachable network ---> System.Net.Sockets.SocketException: A socket operation was attempted to an unreachable network
The IIS website is running with an app pool set to use my Windows account. Clearly, it's ignoring the .aws credentials when running under IIS. I also tried creating profiles using the AWS Explorer Visual Studio 2017 extension, both "default" as well as a custom named one.
I tried installing the AWSSDK.Extensions.NETCore.Setup nuget package in my ASP.NET Core project, and specifying the custom named profile in appsettings.json, both like this:
"AWS": {
"Profile": "local-dev-profile",
"Region": "us-east-1"
}
And like this:
"AppSettings": {
"AWSProfileName": "local-dev-profile",
},
Neither works, I still get the same "A socket operation was attempted to an unreachable network" error. I've followed all of the AWS guides and feel like I'm doing this correctly, but it just won't work under IIS. Any help would be appreciated.
I was able to get this working. For some reason, when running under IIS, the app doesn't pull in the access key and secret like it normally would, probably related to the magic that occurs in ASP.NET Core to run under IIS. I had to add the keys to my launchSettings.json file instead to get it to work in IIS (they get copied as environment variables to the web.config).
Here is what an IIS profile in launchSettings.json would look like:
"MobileApi IIS (DEV)": {
"commandName": "IIS",
"launchUrl": "{url}",
"environmentVariables": {
"AWS_SECRET_ACCESS_KEY": "{value}",
"AWS_ACCESS_KEY_ID": "{value}",
"ASPNETCORE_ENVIRONMENT": "Development"
},
"applicationUrl": "{url}"
},
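Note that launchSettings.json only applies when launching from Visual Studio, so for a deployed site the same two variables have to reach the process some other way (IIS configuration, web.config, or code). As a last-resort sketch, they can be set in code from configuration before the ElasticClient is built; the configuration key names here are assumptions, and real keys should come from a secret store rather than source control:

```csharp
using System;

// Sketch: make the AWS SDK's environment-variable credential lookup succeed
// under IIS by setting the variables in-process at startup, before any
// AWS-backed client is constructed.
public static class AwsCredentialEnv
{
    public static void Set(string accessKey, string secretKey)
    {
        Environment.SetEnvironmentVariable("AWS_ACCESS_KEY_ID", accessKey);
        Environment.SetEnvironmentVariable("AWS_SECRET_ACCESS_KEY", secretKey);
    }
}

// e.g. AwsCredentialEnv.Set(config["Aws:AccessKey"], config["Aws:SecretKey"]);
```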
We are in the process of migrating an app from a Server 2008 set of servers to Server 2016, and since this app has ~75 private MSMQ queues, I wrote a very basic C# utility (just a console app) to get the list from our production server and recreate them on the new 2016 server via the following:
// connect to the specified server to pull all existing queues
var queues = MessageQueue.GetPrivateQueuesByMachine("[production server name]");

var acl = new AccessControlList();
acl.Add(new AccessControlEntry
{
    EntryType = AccessControlEntryType.Allow,
    GenericAccessRights = GenericAccessRights.All,
    StandardAccessRights = StandardAccessRights.All,
    Trustee = new Trustee("Everyone")
});
acl.Add(new AccessControlEntry
{
    EntryType = AccessControlEntryType.Allow,
    GenericAccessRights = GenericAccessRights.All,
    StandardAccessRights = StandardAccessRights.All,
    Trustee = new Trustee("Network Service")
});

foreach (var queue in queues)
{
    var newQueue = MessageQueue.Create($".\\{queue.QueueName}", true);
    newQueue.SetPermissions(acl);
    newQueue.Label = queue.QueueName;
}
When I start running our web app on the new server and execute an action that places a message on the queue, it fails with System.Messaging.MessageQueueException: Access to Message Queuing system is denied, despite the Everyone ACL entry that is confirmed to be added to the queue.
The really strange part is that if I delete the queue in question and recreate it manually on the server with the same "Everyone has full control" permissions, the code works successfully. I've compared the properties of an auto-generated queue to a manually created one and everything is 100% identical, so it makes zero sense why this would occur.
Any suggestions? I'm at a loss, but trying not to have to create all of these queues manually if I can avoid it.
After a lot of back-and-forth testing, I reached out to Microsoft Support, and one of their engineers confirmed there is some kind of bug on the .NET side with creating queues. We confirmed everything was identical, but the only time permissions worked was when the queue was created manually via the Computer Management snap-in. Creating it in code, regardless of permissions, caused it to not work correctly for multiple accounts.
Hopefully this helps anyone else trying to do this!
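For anyone who hits the same bug, one workaround worth trying (not confirmed by Microsoft, so treat this as a sketch) is to skip the AccessControlList at create time and instead grant rights per account with the string-based SetPermissions overload, which is closer to what the Computer Management snap-in does. The MessageQueue calls are shown as comments because System.Messaging needs MSMQ installed; the path helper is usable on its own:

```csharp
// Sketch: normalize a queue name (GetPrivateQueuesByMachine returns names
// already prefixed with private$\) into a local creation path.
public static class QueueMigration
{
    public static string PrivatePath(string queueName) =>
        queueName.StartsWith("private$\\")
            ? $".\\{queueName}"
            : $".\\private$\\{queueName}";

    // Assumed workaround, using System.Messaging:
    //   var q = MessageQueue.Create(PrivatePath(name), true);
    //   q.SetPermissions("Everyone", MessageQueueAccessRights.FullControl, AccessControlEntryType.Allow);
    //   q.SetPermissions("NETWORK SERVICE", MessageQueueAccessRights.FullControl, AccessControlEntryType.Allow);
}
```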