Keep alive on shared server support - C#

I have a follow-up question to this one. I'm using HangFire to run recurring jobs from a dummy web form on a shared server (somee.com). But I discovered that IIS goes idle after x minutes, so my jobs are never executed (they only run while IIS is active).
So, is there any way to keep it alive? As it's a shared server, I don't have access to its configuration. I've read that having a service ping the website would do the trick. I've tried UptimeRobot but didn't have any luck; the site still goes to sleep...
Any ideas?

So, I managed to solve it. I asked for help on the HangFire forums and finally did it. Here's the link to the post: https://discuss.hangfire.io/t/keep-alive-on-shared-server/3723
TL;DR: I had tried ping and HTTP monitors with UptimeRobot and they didn't work for me. What did work was their keyword monitors. And now I have a simulated Windows Scheduler service. :)
Visit the HangFire forums for details.
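For anyone who lands here later: UptimeRobot's keyword monitor checks the response body for a fixed string, so all the site needs is a page that always emits that string. A minimal sketch (the handler name and keyword are mine, not from the original post):

using System.Web;

// KeepAlive.ashx (hypothetical name): always returns the keyword that
// the UptimeRobot keyword monitor is configured to search for.
public class KeepAliveHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("SITE-ALIVE"); // the monitor's keyword (placeholder)
    }

    public bool IsReusable
    {
        get { return true; }
    }
}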

Related

Azure website IE10/11 error SCRIPT7002: XMLHttpRequest: Network Error 0x2ef3, Could not complete the operation due to error 00002ef3

We are experiencing this error in IE10/11 and have spent the last 2 days researching it without finding a solution. We are running an ASP.NET MVC 5 web application using SignalR in specific areas, hosted on an Azure website that is scaled up to 2 instances. We are using the Redis backplane to make sure our SignalR messages reach all instances of the application. The error is intermittent and causes the rest of the application to hang. We do not believe this is a SignalR issue, because we removed the invocation of SignalR from the page and were still able to get the issue to occur.
We have tried the following:
GET request before POST
Setting the charset=utf-8
Making sure our JS libraries are up to date
We need to find a fix for this issue quickly, so if anyone has any ideas I would be really grateful.
Thanks in advance
One problem when scaling a SignalR application across multiple instances is that clients connected to one server only receive updates from other clients on the same server. (Messages are not automatically broadcast between all servers.)
http://www.asp.net/signalr/overview/performance/scaleout-in-signalr
One solution is to have a backplane that automatically forwards messages between servers, so that every server has every message at all times and can push updates to its connected, interested clients.
There are two implementations that Microsoft explains in the link above, one using Redis Pub/Sub and the other using Azure Service Bus.
With Redis you simply get an instance running (on Linux, Windows, or in Azure) and configure each server to publish and subscribe to messages on the same channel.
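For reference, here is roughly what the backplane wiring looks like with SignalR 2 and the Microsoft.AspNet.SignalR.Redis package (the server, port, password, and event key below are placeholders):

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Every instance publishes and subscribes through the same
        // Redis channel, so messages reach clients on all servers.
        GlobalHost.DependencyResolver.UseRedis(
            "your-redis-server", 6379, "your-password", "YourAppChannel");
        app.MapSignalR();
    }
}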
I hope this helps with your problem (since you said the problem is intermittent, I assume it appears when clients are connected to different instances of your application).
Good luck!
EDIT:
Thanks for updating your question.
Have a look at this SO post:
IE10/11 Ajax XHR error - SCRIPT7002: XMLHttpRequest: Network Error 0x2ef3
And at this post:
http://www.kewlcodes.com/posts/5/SCRIPT7002-XMLHttpRequest-Network-Error-0x2ef3-Could-not-complete-the-operation-due-to-error-00002ef3
Good luck!
I also ran into this issue. In my case, I used IOwinContext.Request.ReadFormAsync() in my server code. I found out that if I send a POST request with a JSON content type, ReadFormAsync() hangs.
I solved it by checking whether the request's content type is JSON, and if so, using a different method to parse the body.
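Something like the following sketch illustrates that workaround (the class name and the dictionary shape are illustrative, not the poster's actual code):

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Owin;
using Newtonsoft.Json;

public static class RequestBodyReader
{
    // Reads the body as key/value pairs, branching on content type so
    // that ReadFormAsync() is never called on a JSON post.
    public static async Task<IDictionary<string, string>> ReadAsync(IOwinContext context)
    {
        string contentType = context.Request.ContentType ?? "";

        if (contentType.StartsWith("application/json"))
        {
            // JSON post: read the raw stream and deserialize it ourselves.
            using (var reader = new StreamReader(context.Request.Body))
            {
                string json = await reader.ReadToEndAsync();
                return JsonConvert.DeserializeObject<Dictionary<string, string>>(json);
            }
        }

        // Form post: ReadFormAsync() behaves as expected here.
        IFormCollection form = await context.Request.ReadFormAsync();
        var result = new Dictionary<string, string>();
        foreach (KeyValuePair<string, string[]> pair in form)
            result[pair.Key] = pair.Value[0];
        return result;
    }
}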

How to keep Quartz.NET's scheduler alive?

I use Quartz in my ASP.NET website. I initialize the scheduler in the Application_Start method and shut it down in Application_End. My trigger should fire every day, but I found that the scheduler shuts down automatically if there are no requests for a while, so my background jobs are never triggered. Is there a better way to keep the scheduler alive for the life of the application and only shut it down when the server stops?
For better knowledge sharing:
There are two suggestions:
http://www.codeproject.com/Articles/12117/Simulate-a-Windows-Service-using-ASP-NET-to-run-sc
http://weblog.west-wind.com/posts/2007/May/10/Forcing-an-ASPNET-Application-to-stay-alive
In general, if you need reliable scheduling, you should not do it within a web site.
As you've found, the worker process will be shut down after a period of time. Even if you force the worker process to run all the time, there are conditions that may cause it to terminate as well. It's just not a good idea.
Instead, you should write a Windows Service and run Quartz.NET in that.
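For illustration, a minimal sketch of what such a service could look like, assuming the synchronous Quartz.NET 2.x API:

using System.ServiceProcess;
using Quartz;
using Quartz.Impl;

public class SchedulerService : ServiceBase
{
    private IScheduler scheduler;

    protected override void OnStart(string[] args)
    {
        // The service outlives any web request, so the scheduler is no
        // longer tied to an IIS worker-process lifetime.
        scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();
        // Register your jobs and triggers here.
    }

    protected override void OnStop()
    {
        if (scheduler != null)
            scheduler.Shutdown(waitForJobsToComplete: true);
    }

    public static void Main()
    {
        ServiceBase.Run(new SchedulerService());
    }
}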
If you cannot install services (say you're in a shared hosting environment), then your options are more limited.
There is an IIS configuration setting that allows worker processes to stay running all the time. I found it through another SO answer (link).
Edit C:\Windows\System32\inetsrv\config\applicationHost.config to include:
<applicationPools>
    <add name="MyAppWorkerProcess" managedRuntimeVersion="v4.0" startMode="AlwaysRunning" />
</applicationPools>
Scott Guthrie (Microsoft Product Manager for .NET) has answered a question directly related to the OP's question (link).
@Dominic Pettifer,
If I set startMode="AlwaysRunning" does this mean the web app will
'never' shut down and will always be kept running, even with no
traffic hitting the site for a long period (unless of course it's
manually shut down, or server is switched off/crashes etc.)? The
reason I ask is because I like to run background threads/services on
the IIS ASPNET worker process instead of using Windows Services (we
deal with clients with lots of security restrictions on their servers
which makes running a Windows Service difficult or impossible).
Normally I have to devise something that hits the website periodically
to keep the ASPNET worker process alive and stop it from shutting
down.
This should mean that the application and worker process is always
running - so I think that does indeed handle your scenario well for
you.
Hope this helps,
Scott
I wondered the same thing. Ultimately, whilst I agree with the general consensus, I wanted to see how it could be done, because I've been in a similar situation myself where Windows Services were not available to me.
All I did was create a new job which, when executed, sends an HTTP request to the application itself. In my case, I pointed it at a page which simply contained @DateTime.Now.ToString().
The action of sending an HTTP request to itself should be enough to keep the scheduler (and the parent worker process) alive.
It does not, however, stop the application from being stopped or recycled without warning. If you wanted a way to handle that, you'd likely need more than one site running, with each pinging both itself and the other site. That way, if one site goes down, the other can hit it (assuming it's started) to bring it back.
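For illustration, the self-ping job could look something like this (Quartz.NET 2.x API assumed; the URL and interval are placeholders):

using System.Net;
using Quartz;

// Fires periodically and requests a page on this same site, which
// resets the IIS idle timer.
public class KeepAliveJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        using (var client = new WebClient())
        {
            client.DownloadString("http://your-site.example/KeepAlive");
        }
    }
}

// Registering it, e.g. in Application_Start (keep the interval
// comfortably below the IIS idle timeout):
//   var job = JobBuilder.Create<KeepAliveJob>().Build();
//   var trigger = TriggerBuilder.Create()
//       .WithSimpleSchedule(s => s.WithIntervalInMinutes(5).RepeatForever())
//       .StartNow()
//       .Build();
//   scheduler.ScheduleJob(job, trigger);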
A much simpler way is to use a quality assurance checker. Using the tool Zapix, I was able to schedule my website to be quality-checked every 20 minutes. Zapix simply visits the site and receives an HTTP response. By using Zapix, I mimicked the functionality of manually visiting the website to trigger the emails. That way, the application pool threads are constantly kept awake.

WCF CallBack in Production Environment

I know this is going to sound stupid, but we've spent close to 4 weeks trying to implement a WCF callback system (subscription service), to no avail.
Can anyone verify that they have successfully got this working in a multiple client production environment?
All the examples I've come across work on localhost but fail miserably in production.
The specific problem is that subscriptions and un-subscriptions work perfectly up until the client is closed and re-opened, at which point multiple callbacks are made or the Publish method times out.
Again, can anyone confirm they have this working, or perhaps direct me to some documentation that provides a real-world PROVEN example.
Background info on my existing configuration found here:
WCF to WCF Communication
Thanks

Redis connection errors when using Booksleeve Redis client in Azure VM

I've recently started hosting a side project of mine on the new Azure VMs. The app uses Redis as an in-memory cache. Everything was working fine in my local environment, but now that I've moved the code to Azure, I'm seeing some weird exceptions coming out of Booksleeve.
When the app first fires up, everything works fine. However, after about 5-10 minutes of inactivity, the next request to the app hits a network exception. (I'm at work right now and don't have the exact error messages on me, so I will post them when I get home if people think they're germane to the discussion.) This causes the internal MessageQueue to close, which results in every subsequent Enqueue() throwing an exception ("The Queue Is Closed").
So after some googling I found this SO post about a DIY connection manager: Maintaining an open Redis connection using BookSleeve. I can certainly implement something similar if that's the best course of action.
So, questions:
Is it normal for the RedisConnection to close periodically after a certain amount of time?
I've seen the conn.SetKeepAlive() method but I've tried many different values and none seem to make a difference. Is there more to this or am I barking up the wrong tree?
Is the connection manager idea from the post above the best way to handle this scenario?
Can anyone shed any additional light on why hosting my Redis instance in a new Azure VM causes this issue? I can also confirm that if I run my local environment against the Azure Redis VM, I experience the same issue.
Like I said, if it's unusual for a Redis connection to die after inactivity, I will post the stack traces and exceptions from my logs when I get home.
Thanks!
UPDATE
Didier pointed out in the comments that this may be related to the load balancer that Azure uses: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/12/windows-azure-load-balancer-timeout-details.aspx
Assuming that's the case, what would be the best way to implement a connection manager that accounts for this goofy problem? I assume I shouldn't create a connection per unit of work, right?
From other answers/comments, it sounds like this is caused by the Azure infrastructure shutting down sockets that look idle. You could simply have a timer somewhere that performs some kind of operation periodically, but note that this is already built into Booksleeve: when it connects, it checks what the Redis connection timeout is and configures a heartbeat to prevent Redis from closing the socket. You might be able to piggy-back on this to prevent Azure closing the socket too. For example, in a redis-cli session:
config set timeout 30
should configure Redis (on the fly, without having to restart) to use a 30-second connection timeout. Booksleeve should then automatically take steps to ensure that there is a heartbeat shortly before the 30 seconds elapse. Note that if this is successful, you should also edit your configuration file so that the setting applies after the next restart too.
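For the connection-manager side of the question, here is a sketch in the spirit of the SO post linked above. It assumes Booksleeve's RedisConnection API (including its State property, which is my assumption about the exact enum); the host and port are placeholders:

using BookSleeve;

public static class RedisGateway
{
    private static RedisConnection connection;
    private static readonly object syncLock = new object();

    // Hand out a shared connection, recreating it if the Azure load
    // balancer (or anything else) has silently closed the socket.
    public static RedisConnection GetConnection()
    {
        var current = connection;
        if (current != null && current.State == RedisConnectionBase.ConnectionState.Open)
            return current;

        lock (syncLock)
        {
            if (connection == null || connection.State != RedisConnectionBase.ConnectionState.Open)
            {
                connection = new RedisConnection("your-redis-host", 6379);
                connection.Wait(connection.Open()); // block until the handshake completes
            }
            return connection;
        }
    }
}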
The load balancer in Windows Azure will close a connection after a certain amount of time, depending on the total connection load on the load balancer, so you will see seemingly random timeouts on your connections.
As I am not familiar with Redis connections, I can't suggest exactly how to implement this; in general, however, the suggested workaround is to have a heartbeat pulse to keep your session alive. Have you had a chance to look at the workaround suggested in the blog and try to implement it with Redis, to see if that works out for you?

SOAP - Operation Timed Out - Rule out my server

I am using SOAP in C# .NET 3.5 to consume a web service from a video game company. I am getting lots of SOAP exceptions with the error "Operation Timed Out".
While one process is timing out, others fly by with no problems. I would like to rule out a problem on my end, but I have no idea where to begin. My timeout is 5 minutes. Out of every 5,000 requests, maybe 500 fail.
Anyone have advice for diagnosing web service failures? The web service owner will probably give no support in helping me with this, as it's a free service.
Thanks
I've had to do a lot of debugging connecting to a SOAP service using PHP, and timeouts are the worst problem. Normally the problem is that the client doesn't have a high enough timeout and bombs out after something like 30s.
I test the calls using SoapUI. I keep raising the client-side timeout there until I find a value that works. Once I've found it, I apply that timeout to my client and re-test.
Your only solution may be to make sure your clients have a high enough timeout to cover everything. Five minutes should be fine for most server-side timeouts.
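For a .NET 3.5 client, where the timeout is set depends on which stack generated the proxy. A sketch of both common cases (the method names here are illustrative, and which stack the asker uses is an assumption):

using System;
using System.ServiceModel;

public static class TimeoutSetup
{
    // ASMX-style web reference: the generated proxy derives from
    // SoapHttpClientProtocol, whose Timeout is in milliseconds.
    public static void ConfigureAsmxClient(
        System.Web.Services.Protocols.SoapHttpClientProtocol client)
    {
        client.Timeout = 5 * 60 * 1000; // 5 minutes, matching the question
    }

    // WCF client: the timeouts live on the binding instead.
    public static BasicHttpBinding CreateWcfBinding()
    {
        var binding = new BasicHttpBinding();
        binding.SendTimeout = TimeSpan.FromMinutes(5);
        binding.ReceiveTimeout = TimeSpan.FromMinutes(5);
        return binding;
    }
}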
OK, this is a huge question and there is a lot it could be.
Have you tackled the HTTP two-connection limit (see the sketch after this list for one way to raise it)? http://madskristensen.net/post/Optimize-HTTP-requests-and-web-service-calls.aspx
Have you got enough IO threads to cater for the load? Use performance monitoring to check this for your app pool - I think there is an IO threads counter. A quick Google turned this up - http://www.guidanceshare.com/wiki/ASP.NET_2.0_Performance_Guidelines_-_Threading
Are you exhausting your bandwidth? Use performance monitoring again to check the usage of your network card.
This is a really hard subject to broach textually, as it is so dependent on the environment, but I hope these might help.
This also looks interesting - http://software.intel.com/en-us/articles/how-to-tune-the-machineconfig-file-on-the-aspnet-platform/
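On the two-connection limit specifically: one documented way to raise it from code is ServicePointManager.DefaultConnectionLimit; the <connectionManagement> section in app.config/web.config is the equivalent config-file setting. The value below is only an illustration:

using System.Net;

public static class ClientConnectionSetup
{
    // Call once at startup, e.g. from Application_Start, before the
    // first web service request is made.
    public static void Init()
    {
        // The default is 2 connections per host; 24 is illustrative,
        // so tune it to your request volume.
        ServicePointManager.DefaultConnectionLimit = 24;
    }
}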
