I've written a very basic HTTP-based server program that runs in the background on my computer to allow me to automate various tasks from my Android (via HTTP requests in Tasker). This all works fine (barring this problem), except that after more than about 30 minutes of inactivity, the application ends up falling asleep, as though it's being shunted out of memory, and takes a good minute or so to wake up when it receives an HTTP request or when I try to restore the window and interact with the UI.
I'm using System.Net.HttpListener to implement the server (using asynchronous calls to BeginGetContext). How should I go about keeping it on its toes?
Is it possible that your process memory is being swapped out to disk? Perhaps write a little monitor task that just pings it every 5 minutes? Note that this could also be useful for making sure it's still running - it could email you if it's down, for example :)
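Something like this might do, as a minimal sketch (the ping URL, port, and the 5-minute interval are assumptions):
using System;
using System.Net;
using System.Threading;

class KeepAlivePinger
{
    static void Main()
    {
        var timer = new Timer(_ =>
        {
            try
            {
                using (var client = new WebClient())
                    client.DownloadString("http://localhost:8080/ping"); // placeholder URL
            }
            catch (WebException)
            {
                // the server is down: this is where you could send an alert email
            }
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(5));

        Console.ReadLine();    // keep the monitor running
        GC.KeepAlive(timer);   // prevent the timer from being collected
    }
}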
What is your OS (Windows XP / 7 / Server 2008...) and which edition (Home / Pro / SBS...)?
Try giving your process a high priority (such as realtime).
You could also try having your process perform some task every 10 minutes...
Maybe you could start another thread and periodically check whether your HttpListener instance IsListening?
The problem description seems to be missing the actual problem itself. On Windows, applications do not go into a sleep mode. Inactive processes might get swapped out, but it should not take a very long time to respond when a request arrives.
In .NET 2.0 I have noticed similar behavior due to garbage collection of the listener object itself (or associated objects) after long inactivity. Try keeping a static reference to the listener object. This might help.
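A minimal sketch of that, assuming the usual BeginGetContext callback loop (the prefix and the request handling are placeholders):
using System;
using System.Net;

class Server
{
    // The static reference keeps the listener (and its associated objects)
    // reachable, so the GC never collects it during long idle periods.
    private static HttpListener _listener;

    static void Main()
    {
        _listener = new HttpListener();
        _listener.Prefixes.Add("http://+:8080/"); // placeholder prefix
        _listener.Start();
        _listener.BeginGetContext(OnRequest, null);
        Console.ReadLine(); // keep the process alive
    }

    private static void OnRequest(IAsyncResult ar)
    {
        HttpListenerContext context = _listener.EndGetContext(ar);
        _listener.BeginGetContext(OnRequest, null); // queue up the next request
        // ... handle the automation request here ...
        context.Response.StatusCode = 200;
        context.Response.Close();
    }
}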
Firstly let me apologise, as I don't really know how to phrase the question.
The issue I'm having is trying to keep my database 'alive' while users come to my site. An example being: if I build my C# ASP.NET application and publish it, then try to navigate to it, it takes a while to respond (which I get, I understand it, this isn't an issue for me). The problem is that if nobody has been to the site for a while, it seems to take a while again, as though some timeout has passed. I'm not sure if this is something to do with App Pool recycling?
I've tried running a scheduled task that hits the database every 15 minutes (trying to keep it responsive), but this doesn't seem to work: it works well every 15 minutes for, say, 5 hours, and then on a random call I receive a message that the request has taken over 4 seconds to respond and has therefore failed.
My question, then: how do I keep my connection to the database / the site responsive, so that each time a person requests it the site loads quickly rather than having to 'start up'?
Kind regards as always
I suggest increasing the connection pool size in your connection string.
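For example (a hypothetical SQL Server connection string; a non-zero Min Pool Size keeps a few warm connections open between visits):
string connStr = "Data Source=myServer;Initial Catalog=MyDb;Integrated Security=True;" +
                 "Min Pool Size=5;Max Pool Size=200";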
This looks like what you want:
Keep an ASP.NET IIS website responsive when time between visits is long
You might consider IIS application auto-start?
Some web applications need to load large amounts of data, or perform expensive initialization processing, before they are ready to process requests. Developers using ASP.NET today often do this work using the “Application_Start” event handler within the Global.asax file of an application (which fires the first time a request executes). They then either devise custom scripts to send fake requests to the application to periodically “wake it up” and execute this code before a customer hits it, or simply cause the unfortunate first customer that accesses the application to wait while this logic finishes before processing the request (which can lead to a long delay for them).
ASP.NET 4 ships with a new feature called “auto-start” that better addresses this scenario, and is available when ASP.NET 4 runs on IIS 7.5.
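The pre-load work itself goes into a class implementing System.Web.Hosting.IProcessHostPreloadClient, which you then register in applicationHost.config as a serviceAutoStartProvider. A minimal sketch (the class name and the work inside are placeholders):
public class PreWarmCache : System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // Expensive initialization goes here: open database connections,
        // prime caches, etc., before the first real request arrives.
    }
}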
We've built this app that needs to have some calculations done on a remote machine (actually a MatLab server). We're using web services to connect to the MatLab server and perform the calculations.
In order to speed things up, we've used Parallel.ForEach() in order to have multiple service calls going at the same time. If we're very conservative in setting ParallelOptions.MaxDegreeOfParallelism (DOP) to 4 or something, everything works fine and well.
However, if we let the framework decide on the DOP, it will spawn so many threads that it brings the remote machine to its knees, and timeouts start occurring (> 10 minutes).
How can we solve this issue? What I would LOVE to be able to do is use the response time to throttle the calls. If the response time is less than 30 sec, keep adding threads; as soon as it's over 30 sec, use fewer. Any suggestions?
N.B. Related to the response in this question: https://stackoverflow.com/a/20192692/896697
The simplest way would be to tune for the best number of concurrent requests and hardcode it, as you have done so far; however, there are some nicer options if you are willing to put in some effort.
You could move from Parallel.ForEach to using a thread pool. That way, as results come back from the remote server, you can manually or programmatically tune the number of available threads, reducing or increasing it as things slow down or speed up, or even killing threads if needed.
You could also do a variant of the above using Tasks, which are the newer way of doing parallel/async work in .NET.
Another option would be to use a timer and/or job model to schedule jobs every x milliseconds, which could then be throttled or relaxed as results return from the server. The easiest way to get started would be Quartz.Net.
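To illustrate the response-time idea from the question, here is a rough sketch using SemaphoreSlim, where a slow call shrinks the number of concurrent slots and a fast call grows it. The class name, the 10/30-second thresholds, and the slot limits are all assumptions:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class AdaptiveThrottler
{
    private const int MinSlots = 1;
    private const int MaxSlots = 16;
    private int _slotCount = 4; // current concurrency target, starts conservatively
    private readonly SemaphoreSlim _slots = new SemaphoreSlim(4, MaxSlots);

    // Each Action wraps one web service call.
    public void Run(IEnumerable<Action> serviceCalls)
    {
        var tasks = new List<Task>();
        foreach (var call in serviceCalls)
        {
            _slots.Wait(); // blocks while all slots are busy
            var callCopy = call; // avoid the C# 4 foreach closure pitfall
            tasks.Add(Task.Factory.StartNew(() =>
            {
                var sw = Stopwatch.StartNew();
                try { callCopy(); }
                finally { ReleaseSlot(sw.Elapsed); }
            }));
        }
        Task.WaitAll(tasks.ToArray());
    }

    private void ReleaseSlot(TimeSpan elapsed)
    {
        if (elapsed > TimeSpan.FromSeconds(30))
        {
            // Slow response: shrink by swallowing this slot (unless at the floor).
            if (Interlocked.Decrement(ref _slotCount) >= MinSlots) return;
            Interlocked.Increment(ref _slotCount); // undo, we were at the floor
        }
        else if (elapsed < TimeSpan.FromSeconds(10))
        {
            // Fast response: grow by returning this slot plus one extra.
            if (Interlocked.Increment(ref _slotCount) <= MaxSlots)
            {
                _slots.Release(2);
                return;
            }
            Interlocked.Decrement(ref _slotCount); // undo, we were at the ceiling
        }
        _slots.Release(); // steady state: just return this slot
    }
}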
I created a Windows service with C# in VS 2010. The problem is that when the computer shuts down, the service does not have time to stop, and OnStop is not always executed. I say not always because sometimes it manages to stop. I have tried using the Windows pre-shutdown notification that was introduced in Vista; the results are better, but not consistent.
Is there any way to get Windows to wait for my service to stop?
Is there any way to change the order in which Windows stops services?
According to http://msdn.microsoft.com/en-us/library/windows/desktop/ms685149(v=vs.85).aspx, a service normally has about 20 seconds to shut down before the system gives up and shuts down anyway.
You might be able to send STOP_PENDING messages back if your service needs more time to shut down, but even then you're limited to about 125 seconds before the system figures you're never going to shut down, and pulls the rug out from under you.
In C#, you would have your service call RequestAdditionalTime during shutdown. Again, there's no guarantee that you'll get the extra time, but you can ask for it. Remember, though, that if you ask for more time you had better be done before that time expires, or ask for more before it does; eventually the system will shut you down anyway (again, probably in less than 2 minutes).
In general, I've found that it's best if you construct your service so that you can shut it down quickly. If you can't shut down in a few seconds, you probably need to change your design.
This may be the answer for you (and for anybody else searching for the same thing):
Why doesn't the RequestAdditionalTime() method work on restart in Vista/7?:
... Suppose Windows allows 13 seconds for all services to shutdown. Some other service takes 12 seconds of cpu time (so other services can't execute during that time) and finally shuts down. That leaves 1 second for all the rest. Windows is going to kill them.
This is the reason why it sometimes shuts down correctly and sometimes doesn't.
By the way, Vista gives services 20 seconds before killing them, Windows 7 gives 12 seconds, and Windows 8 gives 5 seconds.
In your service class derived from ServiceBase, in the override of OnStop, you can request additional time:
base.RequestAdditionalTime(1000 * 60 * 2);
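In context, a minimal sketch (the cleanup method is a placeholder):
protected override void OnStop()
{
    // Ask the service control manager for up to two extra minutes;
    // this is a request, not a guarantee.
    RequestAdditionalTime(1000 * 60 * 2);
    CleanUpResources(); // hypothetical shutdown work
    base.OnStop();
}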
I have a web application using ASP.NET that connects to Oracle CRM as a back end. The ASP.NET code uses some business objects to call into the Oracle CRM web services, and this works fine.
Except, however, Oracle CRM has a limitation: they only allow you to make 20 web service calls per second (one call per 50 ms), and if you exceed this rate a SOAPException is returned: "The maximum rate of requests was exceeded. Please try again in X ms."
The traffic to the site has increased recently, so we are now getting a lot of these SOAPExceptions, but as the code that calls the webservice is wrapped up in a business object, I thought I would modify it to ensure that the 50ms limit is never breached.
I use the following code:
private static readonly object lock_obj = new object();

lock (lock_obj)
{
    CallWebService(); // the wrapped Oracle CRM call
    System.Threading.Thread.Sleep(50);
}
However, I am still getting some SOAP Exceptions. I did try writing the code using mutexes instead of lock(), but the performance impact proved to be a problem.
Can anyone explain to me why my solution isn't working, and perhaps suggest an alternative?
Edit: Moved to an answer. Possibly due to more than one IIS worker process. I don't think object locking spans worker processes, so simultaneous threads could still be started in other processes, but I could be wrong.
http://hectorcorrea.com/Blog/Log4net-Thread-Safe-but-not-Process-Safe
My suggestion would be an application variable that stores the tick count of the last request; from that you can work out when it's safe to fire the next one.
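A minimal sketch of that idea, assuming a single worker process (the field and method names are placeholders):
private static long _lastCallTicks;
private static readonly object _sync = new object();

private static void ThrottledCall(Action callWebService)
{
    lock (_sync)
    {
        long elapsedMs = (DateTime.UtcNow.Ticks - _lastCallTicks) / TimeSpan.TicksPerMillisecond;
        if (elapsedMs < 50)
            System.Threading.Thread.Sleep(50 - (int)elapsedMs);
        callWebService(); // the Oracle CRM call
        _lastCallTicks = DateTime.UtcNow.Ticks;
    }
}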
As long as your application is running with only one ASP.NET worker process you should be ok with what you have, but there are a few things to potentially consider.
Are you using a Web Garden? If so, this creates multiple worker processes, and therefore a lock is only held per process.
Are you in a load-balanced environment? If so, you will need a different method entirely.
OK, it turns out that a compounding issue was that we have a Windows service running on the same server that was also calling into some of the same objects every 4 minutes (running in a different process, of course). When I turned it off (having also bumped the sleep up to 100 as per Mitchel's suggestion), the problem seems to have gone away almost entirely.
I say almost, because every so often I still get the odd mysterious SoapException, but I think by and large the problem is sorted. I'm still a bit mystified as to how we can get any of these exceptions at all, but we will live with it for now.
I think Oracle should publicise this feature of Oracle CRM on Demand a little more widely.
My environment - C# 3.5 and ASP.NET 4.0 and VS 2010
Apologies, I am a bit new to some of the concepts related to threading and async methods.
My scenario is this:
My site will periodically make a couple of GETs/POSTs to an external site and collect some data
This data will be cached in a central cache
The periodic action will happen about once every 5 minutes, and will happen for every new member who registers on my site. The querying for a member will stop based on certain conditions
The user does NOT need to be logged in for these periodic queries: they register on the site, and then off my async code goes. It keeps working 24/7 and messages the user once in a while via email, depending on certain trigger conditions. So essentially it should all happen in the background, regardless of whether the user is explicitly logged in or not.
Load Expected - I anticipate about 100 total running members a day (accounting for new members + old ones leaving/stopping).
the equation is roughly 100 members/day × 4 POSTs per fetch × 12 fetches/hour × 8 hours/day, or about 38,400 requests per day
In my mind, I'm running 100 threads a day, and each 'thread' wakes up once every 5 minutes and does some work. The threads will interact with a static central cache shared among all of them.
I've read some discussions on ThreadPools, AsyncPage etc - all a bit new territory. In my scenario what would you suggest? What's the best approach to doing this so it's efficient?
In your response I would appreciate if you mention specific classes/methods/links to use so I can chase this. Thanks a bunch!
You will not be able to do this with ASP.NET as such; you will not be able to keep the "threads" running with any level of reliability. IIS could decide to restart the application pool (i.e. the whole process) at any point in time. What you really need is some kind of Windows service that runs and makes the requests. You could then use the HttpWebRequest.BeginGetResponse method to make your calls. This will fire the relevant callback when the response comes back, and .NET will manage the threading.
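A minimal sketch of that pattern, as it might run inside such a service (the URL and the cache update are placeholders):
using System;
using System.IO;
using System.Net;

class Poller
{
    public void StartFetch()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/data");
        request.Method = "GET";
        request.BeginGetResponse(OnResponse, request);
    }

    private void OnResponse(IAsyncResult ar)
    {
        var request = (HttpWebRequest)ar.AsyncState;
        try
        {
            using (var response = (HttpWebResponse)request.EndGetResponse(ar))
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string body = reader.ReadToEnd();
                // ... update the central cache with the fetched data ...
            }
        }
        catch (WebException)
        {
            // log the failure and let the next scheduled poll retry
        }
    }
}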
Agreeing with Ben, I would not use threading in IIS with ASP.NET. It's not the same as using it in a desktop application.
If you're going to use some kind of polling or timed action, I recommend having a handler (.ashx) or asp.net page (aspx) that can take the request that you want to run in the background and return XML or JSON as a response. You can then set some javascript in your pages to do an AJAX request to that URI and get whatever data you need. That handler can do the server side operations that you need. This will let you run background processes and update the front-end for your users if need be, and will take advantage of the existing IIS thread pool, which you can scale to fit the traffic you're getting.
So, for instance
ajaxRequest.ashx : Processes "background" request, takes http POST/GET parameters.
myPage.aspx : your UI
someScript.js : javascript file with functions to call ajaxRequest.ashx from myPage.aspx (or any other page) when certain actions or intervals occur.
jQuery.js : No need to write all the AJAX code or event handlers yourself :)
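For example, the skeleton of such a handler might look like this (the class name and the "action" parameter are assumptions):
public class AjaxRequest : System.Web.IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(System.Web.HttpContext context)
    {
        string action = context.Request["action"]; // which background operation to run
        // ... perform the server-side work for this action ...
        context.Response.ContentType = "application/json";
        context.Response.Write("{\"status\":\"ok\",\"action\":\"" + action + "\"}");
    }
}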
You will need to create a separate Windows service (or a console app run by the Windows scheduler) to poll the remote server.
If you need to trigger requests based on user interaction with your site, the best way is to use some kind of queueing system (e.g. MSMQ) that your service monitors.
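A minimal sketch of a service watching an MSMQ queue (the queue path and the string message type are assumptions):
using System.Messaging;

class QueueWatcher
{
    private MessageQueue _queue;

    public void Start()
    {
        _queue = new MessageQueue(@".\private$\siteRequests");
        _queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
        _queue.ReceiveCompleted += OnMessage;
        _queue.BeginReceive();
    }

    private void OnMessage(object sender, ReceiveCompletedEventArgs e)
    {
        Message msg = _queue.EndReceive(e.AsyncResult);
        string payload = (string)msg.Body; // whatever the web site enqueued
        // ... kick off the polling/email work for this request ...
        _queue.BeginReceive(); // wait for the next message
    }
}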