I have a C# Web API hosted in IIS that listens to a RabbitMQ queue. When the application pool starts, the queued messages are processed as expected. After a period of time, though, the service appears to stop picking up messages from the queue. I suspect IIS is putting the application pool's thread to sleep (or something similar), but I'm not certain. Is there a way to ensure that the thread and the RabbitMQ connection are not suspended, so I can confirm this? Also, if this is a known issue and my suspicion about IIS is incorrect, please let me know.
From a thread entitled ".NET Core 3.1 Web Consumer -IIS (RabbitMQ)", it turns out I was correct: IIS was suspending the application pool's thread. Changing the .NET CLR Version to v4.0 instead of No Managed Code, the Start Mode to AlwaysRunning, and the Idle Time-out (minutes) to 0 resolved my issue. Note that adding the Application Initialization Windows feature required a restart of the server.
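For reference, here is a minimal sketch of applying those same three settings programmatically with the Microsoft.Web.Administration API (it ships with IIS; run this elevated on the IIS machine). The pool name "MyRabbitConsumerPool" is a placeholder for your actual app pool:

```csharp
using System;
using Microsoft.Web.Administration;

class AppPoolConfigurator
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["MyRabbitConsumerPool"];

            pool.ManagedRuntimeVersion = "v4.0";           // instead of "No Managed Code"
            pool.StartMode = StartMode.AlwaysRunning;      // needs Application Initialization installed
            pool.ProcessModel.IdleTimeout = TimeSpan.Zero; // Idle Time-out (minutes) = 0

            serverManager.CommitChanges();
        }
    }
}
```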
Related
I have a WCF service hosted in IIS. During application initialization it starts listening to RabbitMQ, subscribing to a queue, say Q1. After the service has been running for a long time, we see that it fetches messages but fails to process them.
However, we also have a separate Windows service that is interested in the same events and is subscribed to a different queue, say Q2. It is able to process all the events, even after a long run.
Why does the WCF service fail after a long run? Is there a thread pool ceiling imposed on the app pool? I need help debugging this.
Note: both queues (Q1 and Q2) are bound to the same exchange with the same message routing keys.
Well, I'm not sure about the processing failure, but by default IIS shuts down an app pool after 20 minutes of inactivity, so it's entirely possible your WCF service is no longer running if its service methods have not been invoked.
Try setting your IIS app pool idle timeout to 0 to disable it.
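Separately, it's worth making the consumer resilient to dropped connections. A minimal sketch using the RabbitMQ.Client 5.x API with automatic connection recovery enabled is below; "Q1" and the host name are placeholders. Note that recovery only helps with network drops — it cannot survive an app pool shutdown, so the idle-timeout setting above is still the main fix:

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Consumer
{
    static void Main()
    {
        var factory = new ConnectionFactory
        {
            HostName = "localhost",
            AutomaticRecoveryEnabled = true,                 // re-open the connection after failures
            NetworkRecoveryInterval = TimeSpan.FromSeconds(10)
        };

        IConnection connection = factory.CreateConnection();
        IModel channel = connection.CreateModel();

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            string body = Encoding.UTF8.GetString(ea.Body);
            Console.WriteLine("Received: " + body);
            channel.BasicAck(ea.DeliveryTag, multiple: false); // ack only after successful processing
        };

        channel.BasicConsume(queue: "Q1", autoAck: false, consumer: consumer);
        Console.ReadLine(); // keep the process alive while consuming
    }
}
```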
We have a simple SignalR server and client running with the backplane enabled. When I looked into the IIS worker process, I noticed that in the Current Requests tab there is always a SignalR connect request showing.
When I connect around 100 clients, 100 current requests are shown in the worker process view. Shouldn't these be removed after connecting, or is this the expected behavior for SignalR?
The threads will close or get reused by new clients; it's basically a thread pool, so you shouldn't need to worry about it. The only time you need to worry about how many threads are open is if you have limited CPU resources and a low open-thread limit set.
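If you want to sanity-check the limits this answer refers to, a minimal sketch (plain console code, not SignalR-specific) that prints current thread pool usage:

```csharp
using System;
using System.Threading;

class ThreadPoolInfo
{
    static void Main()
    {
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
        ThreadPool.GetAvailableThreads(out int freeWorker, out int freeIo);

        // If these numbers are generous relative to your connection count,
        // the lingering "current requests" are nothing to worry about.
        Console.WriteLine($"Worker threads: {maxWorker - freeWorker} in use of {maxWorker}");
        Console.WriteLine($"IO threads:     {maxIo - freeIo} in use of {maxIo}");
    }
}
```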
I have a 4-tier .NET application which consists of:
- a Silverlight 5 client
- an MVC4 Web API controller (supplying data to the SL5 client)
- a Windows service, responsible for the majority of data processing
- Oracle DB storage
The workflow is simple: the SL5 client sends a request to the REST service, and the REST service simply stores it in the DB.
The Windows service, which periodically polls the DB for new records, detects the new records and attempts to process them accordingly. Once finished, it updates the records and their status in the DB.
In the meantime, the SL5 client also periodically polls the DB to see whether the records have been processed. When they are, the result is retrieved and rendered on screen.
So the question here is the following:
Is there a difference between spawning the same processing code (currently in the Windows service) in a new discrete process, right out of the Web API controller, versus keeping it as-is in the Windows service?
Aside from removing the constant DB polling that happens in the Windows service, it simplifies processing greatly, because the work can be done on a per-request basis as the requests arrive from the client. But are there any other drawbacks? Perhaps server or other issues with IIS?
Yes, there is a difference.
Windows services are the right tool for asynchronous processing. Operations can take a long time without producing strange effects; after all, it is a continuously running service.
IIS, on the other hand, processes requests using a thread pool. Long-running tasks have the potential to exhaust that pool, which may cause problems depending on the number of background tasks you start. Also, IIS makes no guarantee to keep long-running tasks alive: if the web site is recycled, which happens regularly in a default IIS installation, your background task may die.
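For illustration, a minimal sketch of the Windows-service side of such a design: a timer-driven poller that picks up unprocessed records. `Record`, `Database`, and `Processor` are hypothetical stand-ins for the real schema, data access, and processing code:

```csharp
using System.Collections.Generic;
using System.ServiceProcess;
using System.Timers;

public class Record { public int Id; }

// Hypothetical stand-ins for the real data-access and processing code.
static class Database
{
    public static IEnumerable<Record> FetchPendingRecords() { yield break; }
    public static void MarkProcessed(Record r) { }
}
static class Processor
{
    public static void ProcessRecord(Record r) { }
}

public class RecordProcessingService : ServiceBase
{
    private readonly Timer _pollTimer = new Timer(30000); // poll every 30 s

    protected override void OnStart(string[] args)
    {
        _pollTimer.Elapsed += (s, e) => PollOnce();
        _pollTimer.Start();
    }

    protected override void OnStop()
    {
        _pollTimer.Stop(); // finish the current batch, then stop polling
    }

    private void PollOnce()
    {
        foreach (var record in Database.FetchPendingRecords())
        {
            Processor.ProcessRecord(record);
            Database.MarkProcessed(record);
        }
    }
}
```

Because the service runs continuously and outside IIS, a batch that takes an hour is unremarkable here, whereas the same work started from the Web API controller lives at the mercy of app pool recycling.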
We have an IIS-hosted WCF service that receives a large chunk of data to work on. The service fires up several worker threads and then returns, leaving the worker threads to finish the job (which might take an hour). If the WCF service is idle long enough, IIS recycles the app pool, aborting the worker threads. This problem has been worked around by having the worker threads occasionally call a dummy service just to keep the app pool alive. If you think this whole setup is a really bad idea, I completely agree (not my code), so no need to comment on that.
The problem is we still get an occasional ThreadAbortException. Is there any way to get additional information about what/who initiated the thread abort? I know it isn't our code.
The IIS logs turned out to give the answer. AFAIK, if new binaries are deployed, IIS waits until all in-flight service calls have finished (accepting no new calls) and then recycles the app pool. However, IIS has no knowledge of the background threads still running after the service call returns, so it thinks it is free to recycle the app pool. In some cases we had been uploading a new version while the background threads were still running. In any case, a very bad architecture.
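A minimal sketch of how the background work could at least learn about the recycle instead of dying with a ThreadAbortException: register it with the hosting environment so ASP.NET calls Stop() before tearing down the AppDomain. HostingEnvironment.ShutdownReason also records why the shutdown happened (e.g. binaries changed), which speaks to the "who initiated it" question. The DoWork body here is a placeholder:

```csharp
using System.Threading;
using System.Web.Hosting;

public class BackgroundWork : IRegisteredObject
{
    private volatile bool _stopRequested;

    public BackgroundWork()
    {
        HostingEnvironment.RegisterObject(this); // tell ASP.NET we're doing background work
        new Thread(DoWork) { IsBackground = true }.Start();
    }

    // Called by ASP.NET when the app domain is shutting down (recycle,
    // new binaries, idle timeout, ...). 'immediate' is true on the second call.
    public void Stop(bool immediate)
    {
        _stopRequested = true;
        HostingEnvironment.UnregisterObject(this);
    }

    private void DoWork()
    {
        while (!_stopRequested)
        {
            // placeholder for one unit of the long-running job
            Thread.Sleep(1000);
        }
        // Could log HostingEnvironment.ShutdownReason here to record the cause.
    }
}
```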
I have a process in a system running on IIS; it takes hours to finish, so it runs on a thread.
The problem is that this thread is dropped after some time because of the IIS idle timeout (no activity). This thread can't be stopped in the middle.
How can I prevent this timeout while the thread is running?
In the settings of the application pool in IIS you can configure it not to recycle the AppDomain after a certain period of inactivity. Notice, however, that running long tasks in IIS is a bad idea, and this setting is not 100% reliable: if your server starts running low on memory or hits high CPU usage, IIS could still recycle the pool. IIRC, those thresholds can also be configured.
The best way would be to externalize those long-running tasks into a separate Windows service.
And if you cannot do either of those things and you are absolutely desperate, the last thing you could try in your total despair is to auto-ping the web application from this background thread by sending HTTP requests at regular intervals, to keep it from being recycled. But once again, that really should be the last thing you attempt.
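For completeness, a last-resort sketch of that self-ping workaround, assuming .NET 4.5+; the URL is a placeholder for one of the site's own endpoints, and the interval just needs to stay well inside the idle-timeout window:

```csharp
using System;
using System.Net.Http;
using System.Threading;

public class KeepAlivePinger : IDisposable
{
    private static readonly HttpClient Client = new HttpClient();
    private readonly Timer _timer;

    public KeepAlivePinger(string siteUrl)
    {
        // Ping every 5 minutes -- well inside the default 20-minute idle timeout.
        _timer = new Timer(_ => Ping(siteUrl), null,
                           TimeSpan.Zero, TimeSpan.FromMinutes(5));
    }

    private static void Ping(string url)
    {
        try { Client.GetAsync(url).Wait(); }
        catch { /* a failed ping is not fatal; try again on the next tick */ }
    }

    public void Dispose() => _timer.Dispose();
}

// Usage (hypothetical URL and job):
// using (var pinger = new KeepAlivePinger("http://localhost/myapp/ping"))
// {
//     RunLongJob();
// } // disposing the pinger stops the pings once the work completes
```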