We have a remoting singleton server running in a separate Windows service (let's call her RemotingService). The clients of the RemotingService are ASP.NET instances (many, many of them).
Currently, the clients make remoting calls to RemotingService and block while the call is serviced. However, the remoting service is getting complicated enough (with more RPC calls and complex algorithms) that the ASP.NET worker threads are blocked for a significantly long time (4-5 seconds).
According to this MSDN article, doing this will not scale well because an ASP.NET worker thread is blocked for each remoting RPC. It advises switching to async handlers to free up ASP.NET worker threads.
The purpose of an asynchronous handler is to free up an ASP.NET thread pool thread to service additional requests while the handler is processing the original request.
This seems fine, except the remoting call still takes up a thread from the thread pool.
Is this the same thread pool that the ASP.NET worker threads come from?
How should I go about turning my remoting singleton server into an async system so that I free up my ASP.NET worker threads?
I've probably left out some important information; please let me know if there is anything else you need to know to answer the question.
The idea behind using the ThreadPool is that it lets you control the number of concurrently running threads; if too many work items are queued, the thread pool automatically makes the newer ones wait until a thread frees up.
The ASP.NET worker thread (AFAIK) doesn't come from the ThreadPool and shouldn't be affected by your call to the remoting service (unless this is a very slow processor and your remoting function is very CPU intensive, in which case everything on your computer will be affected).
You could always host the remoting service on a different physical server. In that case, your ASP.NET worker thread will be totally independent of your remoting call (if the remoting call is made on a separate thread, that is).
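For illustration, here is a minimal sketch of the async-handler approach the quoted MSDN passage describes, with the blocking remoting call moved off the request thread via a delegate's BeginInvoke. The interface name IRemotingService, its DoWork(string) method, and the tcp:// URL are illustrative assumptions, not taken from the question:

using System;
using System.Web;

public class RemotingAsyncHandler : IHttpAsyncHandler
{
    // Delegate matching the remoting call we want to run off the ASP.NET request thread.
    private delegate string DoWorkDelegate(string input);
    private DoWorkDelegate _work;
    private HttpContext _context;

    public bool IsReusable { get { return false; } }

    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
    {
        _context = context;

        // Obtain the remoting proxy (illustrative interface and URL).
        var service = (IRemotingService)Activator.GetObject(
            typeof(IRemotingService), "tcp://remoting-host:9000/RemotingService");

        // BeginInvoke queues the blocking call to a ThreadPool thread and returns immediately,
        // so the ASP.NET request thread goes back to the pool. As noted in the question, the
        // blocking work still occupies a pool thread for its 4-5 second duration.
        _work = service.DoWork;
        return _work.BeginInvoke(context.Request["input"], cb, extraData);
    }

    public void EndProcessRequest(IAsyncResult result)
    {
        string reply = _work.EndInvoke(result);
        _context.Response.Write(reply);
    }

    public void ProcessRequest(HttpContext context)
    {
        throw new NotSupportedException();
    }
}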
Related
I'm maintaining an ASP.NET Core web application that needs to repeatedly run some background threads. I know it's not a good design, but currently I have to fix its major issues with minimum effort. Now I wonder whether I should worry about the web server's handling of users' HTTP requests or not.
The question is simple, but I can't find a clear answer for it:
What is the difference between threads that are created in the application like this:
Task.Run(() => { /* some parallel job */ });
and the worker threads of IIS that handle HTTP requests?
Do they come from the same thread pool, or do they reside in separate pools?
According to this it's all one pool: "ASP.NET Core already runs app code on normal Thread Pool threads." In other words, there isn't a separate max for threads serving requests vs. background threads.
The biggest difference is that IIS is aware of the threads it creates itself for an incoming request. IIS is not aware of any threads you create yourself.
When an app pool is recycled, or IIS is shut down, it waits until all requests have finished processing - it waits until the threads it created for each request have finished - and then it kills the process. If you create any threads that outlive the request (for example, if you create a background thread and then send the response back to the client), IIS has no idea that thread is still running and could kill the whole process at any time.
If you don't return a response until all the threads are complete, then you won't have that specific problem.
The other issue is that you may hit the maximum number of allowable threads. Then all kinds of weird performance issues would happen. But that depends on how many threads you are creating and how many HTTP requests are coming in.
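If you want to sanity-check the "one shared pool" point in a running app, a tiny sketch (uses System.Threading; the numbers vary by machine and configuration):

// Both request processing and Task.Run callbacks draw on the process-wide CLR thread pool,
// so these limits apply to both.
ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
ThreadPool.GetAvailableThreads(out int freeWorker, out int freeIo);
Console.WriteLine($"worker: {freeWorker}/{maxWorker}, completion port: {freeIo}/{maxIo}");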
We have an IIS-hosted WCF service that receives a large chunk of data to work on. The service fires up several worker threads and then returns, leaving the worker threads to finish the job (which might take an hour). If the WCF service is idle long enough, IIS recycles the app pool, aborting the worker threads. This problem has been circumvented by having the worker threads occasionally call a dummy service just to keep the app pool alive. If you think this whole setup is a really bad idea, I completely agree (not my code). So no need to comment on that.
The problem is we still get an occasional ThreadAbortException. Is there any way to get additional information about what/who initiated the thread abort? I know it isn't our code.
IIS logs turned out to give the answer. AFAIK, if new binaries are loaded IIS waits until all service calls are finished (and no new calls are accepted), then recycles the app pool. However, IIS has no knowledge of the background threads still running after the service call returns and therefore thinks it's free to recycle the app pool. In some cases we've been uploading a new version while the background threads were still running. In any case, a very bad architecture.
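One mitigation, not part of the answer above but aimed at the same failure mode, is to register the background work with the ASP.NET hosting environment so the runtime at least signals it before tearing the app domain down, giving it a chance to checkpoint. A rough sketch (class and member names are illustrative):

using System.Threading;
using System.Web.Hosting;

public class ChunkProcessor : IRegisteredObject
{
    private readonly CancellationTokenSource _shutdown = new CancellationTokenSource();

    public ChunkProcessor()
    {
        // Tell ASP.NET this work exists so it is notified before an app pool recycle.
        HostingEnvironment.RegisterObject(this);
    }

    public void Run(object workItem)
    {
        try
        {
            // Long-running processing goes here; check _shutdown.Token periodically
            // and persist progress so a recycle only costs partial rework.
        }
        finally
        {
            HostingEnvironment.UnregisterObject(this);
        }
    }

    // Called by ASP.NET when the app domain is about to go down (e.g. an app pool recycle).
    public void Stop(bool immediate)
    {
        _shutdown.Cancel();
        if (immediate)
        {
            HostingEnvironment.UnregisterObject(this);
        }
    }
}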
I'm working in an environment with a light-weight web application (ASP.NET MVC 4.5), with pretty much all of the actual work being implemented in a separate web service application (ASP.NET Web API).
In nearly all cases, all the web site proper does is deliver pages that make AJAX calls to the web service site. But I have one use case in which a page needs to do an HTTP POST to the main web site, and then have the controller action make a call against the web service and wait for a response.
Microsoft has, of course, a how-to page:
http://www.asp.net/web-api/overview/web-api-clients/calling-a-web-api-from-a-net-client
On that page, it says:
Many of the HttpClient methods are async, because they perform network I/O. In the RunAsync method, I'll show the correct way to invoke those methods asynchronously. It's OK to block the main thread in a console application, but in a GUI application, you should never block the UI thread.
So, the question: is blocking in a controller action while we're waiting for a response from the web service going to cause a problem? Yes, it means that the response to the browser is going to be delayed, but that seems inevitable given the circumstances.
My expectation would be that this wouldn't be an issue. Multiple requests come into the web service simultaneously, and for this one request to block for a bit shouldn't have any unusual impact on overall performance. But I thought I'd ask.
When you await in your action, you're actually just returning the current thread back to the app's thread pool. So it really won't make much difference if you do your server side stuff synchronously or asynchronously unless you're getting enough requests to exhaust your thread pool.
1. Request comes in and is assigned a free thread from the thread pool (if no thread is free then the request will sit and wait until one is free)
2. Web API creates your controller and executes your action
3. When your action awaits, the thread is returned back to the thread pool
4. When the awaited task is done, a thread is assigned back to the request
5. The action finishes its work and returns, or is awaited again, which will return the thread back to the thread pool again
With that in mind, it's obvious that you're not holding your API back by executing the action synchronously unless your thread pool is running out of available threads. In that case, the synchronous operation would hold you back because new requests are sitting and waiting to be assigned a thread. But if it were async, that operation would return its thread to the thread pool, so the new requests would be taken care of sooner.
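For what it's worth, a minimal sketch of the awaited version of such an action. The controller, view model, endpoint URL, and view name are illustrative assumptions; PostAsJsonAsync comes from the Web API client package used in the linked article:

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class OrderViewModel { public int Quantity { get; set; } }

public class OrdersController : Controller
{
    private static readonly HttpClient Client = new HttpClient();

    [HttpPost]
    public async Task<ActionResult> Submit(OrderViewModel model)
    {
        // While the call is in flight the request's thread goes back to the pool
        // (step 3 in the list above); a thread is picked up again when the reply arrives.
        HttpResponseMessage response = await Client.PostAsJsonAsync(
            "http://api.example.com/api/orders", model);
        response.EnsureSuccessStatusCode();

        return View("Confirmation");
    }
}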
I have a WCF server using NetNamedPipeBinding.
I can see that when the server is loaded with requests, the reply is very slow (1-7 seconds).
The application code runs very fast, but the time between the service sending the reply and the caller receiving it is long.
Is this because there are lots of messages at the pipe and they are processed sequentially? Is there a way to improve that?
There are only 2 processes involved (caller and service) and the calls are two-way; the caller process uses different threads to make the calls.
Thanks.
If you are creating a separate Thread for each request, you could be starving your system. Since both client and server are on the same machine, it may be the client's fault that the server is slow.
There are lots of ways to do multithreading in .NET, and a new Thread may be the worst. At the very least you should move your calls to the thread pool (http://msdn.microsoft.com/en-us/library/3dasc8as.aspx),
or you may want to use the async methods of the proxy (http://msdn.microsoft.com/en-us/library/ms730059.aspx).
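As a rough sketch of the second suggestion: the contract name ICalcService, its AddAsync operation, and the pipe address below are illustrative; Task-based operations are available when the contract (or the generated proxy) exposes them.

using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface ICalcService
{
    [OperationContract]
    Task<int> AddAsync(int a, int b);
}

public static class CalcClient
{
    public static async Task<int> AddViaPipeAsync(int a, int b)
    {
        var factory = new ChannelFactory<ICalcService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/calc"));

        ICalcService proxy = factory.CreateChannel();

        // The calling thread is released while the request is on the pipe instead of
        // blocking, so the client no longer needs one dedicated thread per outstanding call.
        int sum = await proxy.AddAsync(a, b);

        ((IClientChannel)proxy).Close();
        factory.Close();
        return sum;
    }
}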
I have a webpage with a button that generates some files to a server path. (It takes somewhere from 5 to 20 minutes). I want to create an async task that will continue executing even after the user closes the browser. Is it possible to do this with asp.net 4 and C#?
You do not control the thread pool in an ASP.NET application. You cannot even guarantee that a request will be completed on the same thread it started with. Threads you create yourself run in the same worker process and draw on the same thread pool the web server uses, so you can use up all the request threads, leaving your web server unable to process requests.
You should implement a windows service that hosts a WCF service that you can call from within your web application. In the service you can then fire off a thread to process the long running process. At the end of that process you can then update a status flag (e.g from Processing to Complete) that the user can view to determine if the files are done processing.
I would recommend using Topshelf to implement your windows service, it will save you much headache.
Actually, it is recommended that you not do this. Instead, the recommended approach is to create a service (e.g. a Windows service) that performs your processing asynchronously. In your web application, you create a method that starts the process, and another method that polls the service to determine whether processing has completed.
There are a number of reasons for this, but one of the biggest is that the default and recommended configuration for web servers allows the server to kill long-running requests.
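To make the start-then-poll idea concrete, the contract such a service might expose could look roughly like this (all names here are illustrative):

using System;
using System.ServiceModel;

public enum JobStatus { Processing, Complete, Failed }

[ServiceContract]
public interface IFileGenerationService
{
    // Kicks off the long-running generation on a worker thread inside the Windows
    // service and returns immediately with a job id the web app can hold on to.
    [OperationContract]
    Guid StartGeneration(string outputPath);

    // Polled by the web application to show the user whether the files are ready.
    [OperationContract]
    JobStatus GetStatus(Guid jobId);
}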
Either I didn't understand what you want to do, or you don't need to do a thing.
Once the request has been sent, processing continues on the server whether the user's browser is closed or not. You don't need to do a thing.
The fabulous nature of stateless web applications...
Creating a new thread or using the thread pool is the easiest approach to creating run-away tasks.
Note that there is no guarantee the process will stay alive for the duration of a long task, so be prepared to handle partial completion and manual restarts - e.g. an app pool recycle due to a config change.
The easiest way is to put your task on the ThreadPool. Thread pool threads will stay alive even after the web page has completed rendering. The code would look like the following:

// Inside the request-handling method that the button triggers:
object someData = new object();
ThreadPool.QueueUserWorkItem(new WaitCallback(ProcessAsync), someData);
// The method can now return and the response is sent; ProcessAsync keeps running.

// Runs on a thread pool thread, independent of the page lifetime:
static void ProcessAsync(object stateInfo)
{
    string dataToString = stateInfo.ToString();
    // ... perform the long-running file generation here ...
}
You have to create a thread that does the long-running task.
Have a look at the link below:
http://kiranpatils.wordpress.com/2010/01/01/performing-a-long-running-task-with-asp-net/
Anyway, whatever you start on the server will continue running even if the user closes the browser (until you recycle the app pool or restart the web server).
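A minimal sketch of that approach, where GenerateFiles is an illustrative method that does the file generation:

// Started from the button's action; keeps running after the response is sent,
// until an app pool recycle or a web server restart.
var worker = new System.Threading.Thread(GenerateFiles);
worker.Start();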