Thread abort exception - C#

I have an issue with a web service hosted in Mono. The web service performs a huge database operation. I have set "Timeout = 1024" in the web.config file under the "appSettings" tag.
When a call is made to the web service, I get a "thread abort exception" after 2 minutes.
Please help me overcome this problem.
Regards,
Kumaran

You want to set the request timeout also. This is something like 30 or 60 seconds by default.
In the system.web section, set something like:
<httpRuntime executionTimeout="200"/>
This will affect all the pages, so perhaps you want to put the page in a separate folder so that you can have a local web.config file for this setting.
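For the folder-scoped approach, a minimal sketch of what that local web.config could look like (the folder name is hypothetical):

```xml
<!-- web.config placed inside a hypothetical LongRunning/ folder -->
<configuration>
  <system.web>
    <!-- executionTimeout is in seconds, and is only honored
         when compilation debug="false" -->
    <httpRuntime executionTimeout="200"/>
  </system.web>
</configuration>
```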

It is bad practice to put long operations (in your case, over 2 minutes) into a synchronous web service method. Usually a web service is only a facade that starts the long-running method on a back-end server, or at least on another thread. The client can periodically check whether the operation is done (the so-called watchdog pattern). Or consider a one-way method, if the client doesn't care about the result at all.
By the way, NOTE that even a successful operation in a web request can finish with a ThreadAbortException, since ASP.NET raises it at the end of request processing.
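That end-of-request abort comes from Response.End in classic ASP.NET; a purely illustrative sketch of how it surfaces, assuming an HttpContext named context:

```csharp
try
{
    context.Response.Write("done");
    context.Response.End(); // throws ThreadAbortException by design
}
catch (System.Threading.ThreadAbortException)
{
    // Expected: this is how ASP.NET unwinds the request pipeline.
    // The exception is automatically re-thrown when this block exits,
    // unless Thread.ResetAbort() is called.
}
```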

Check the InnerException. It's supposed to be some sort of HttpApplication.CancelModuleException that contains a flag indicating whether or not it's a timeout. Either way, if you do have an InnerException it may provide more insight.
Additionally, make sure your method is set to async.

Will non-awaited async functions definitely attempt finish in ASP.NET Core Web API?

It's my understanding that controllers get destroyed after an HTTP request is made. Are there any assurances that the .NET Core runtime will wait until all threads initiated in an async action have terminated/ended before destroying the controller instance?
I have code below with an async controller action that calls an async function. I don't need to know if the async function actually succeeds or not (e.g. sending the email), I just want to make sure that it attempts to. My fear is that the .NET Core runtime will possibly kill the thread in the middle of execution.
Spoiler alert: I ran the code below in my development environment, and it does send the email every time (I used a real email address). But I don't know if the behavior would change in a production environment.
Any thoughts/guidance?
[HttpGet]
public async Task SendEmail()
{
    // If I prefixed this with 'await', the controller action
    // wouldn't terminate until the async function returned.
    this.InternalSendEmail();
}

private async Task InternalSendEmail()
{
    try
    {
        await this.Email.Send("to@example.com", "Interesting subject", "Captivating content");
    }
    catch (Exception exc)
    {
        Log(exc);
    }
}
What happens to the controller instance - nothing you can't manage
First, when we talk about destroying the controller instance, let's be more precise. The instance won't get GC'd as long as there's still a control flow that has access to this. It can't be. So your controller instance will be fine in that regard, at least until your private method finishes.
What will happen is your controller method will return and control flow will go to the next stage in the middleware chain, meaning your API consumer will likely get the http response before the email is sent. You will lose your HttpContext and all that goes with it when this happens. Thus if there's anything in your Log method or anything else in InternalSendEmail that relies on the HttpContext you need to make sure that information is extracted and provided to the background method before the controller method returns.
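A minimal sketch of extracting that information before the action returns (Email and Log are the question's own helpers; the captured values and extra parameters are illustrative):

```csharp
[HttpGet]
public Task SendEmail()
{
    // Capture request-bound data now; HttpContext will be gone
    // by the time the background work runs.
    string clientIp = HttpContext.Connection.RemoteIpAddress?.ToString();
    string userName = HttpContext.User?.Identity?.Name;

    // Fire and forget, passing the captured values as plain parameters.
    _ = this.InternalSendEmail(clientIp, userName);
    return Task.CompletedTask;
}

private async Task InternalSendEmail(string clientIp, string userName)
{
    try
    {
        await this.Email.Send("to@example.com", "Interesting subject", "Captivating content");
    }
    catch (Exception exc)
    {
        // Log only with the values captured above, never via HttpContext.
        Log(exc, clientIp, userName);
    }
}
```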
What happens to the thread - almost certainly nothing
As far as the thread goes, most likely the email will be sent on a different thread in the thread pool from that of the original controller method, but either way, no the .NET runtime isn't going to care about your controller method returning before every task it fired off has completed, let alone kill the thread. That's well above its paygrade. Moreover it's very rare for threads to be killed in any instance these days because it's not just your control flow that's affected but completely unrelated async contexts could be dependent on that thread too.
IIS Application Pool recycling and other things that COULD potentially kill your background task
The only reasonably likely thing that would cause your background task not to complete would be if the process terminated. This happens for example during an IIS Application Pool reset (or equivalent if you're using dotnet hosting), obviously a server restart, etc. It can also happen if there's a catastrophic event like running out of memory, or nasty things like memory faults unique to unsafe or native code. But these things would kill all pending HTTP requests too.
I have seen anecdotal assertions that if there are no pending HTTP requests it makes it more likely that IIS will recycle the application pool on its own even if you have other active code running. After many years of maintaining an application that uses a very similar pattern for many non-critical long-running tasks, I have not seen this happen in practice (and we log every application start to a local TXT file so we would know if this were happening). So I am personally skeptical of this, though I welcome someone providing an authoritative source proving me wrong.
That said, we do set the application pool to recycle every day at 4 AM, so to the extent that IIS would be inclined to involuntarily recycle our app pools (as does need to happen every now and then), I suspect this helps mitigate that, and would recommend it regardless. We also allow only one worker process per application, rather than letting IIS spin up processes whenever it feels like it; I suspect this also makes it less likely that IIS would kill the process involuntarily.
In sum - this is perfectly fine for non-critical tasks
I would not use this for critical tasks where unambiguous acknowledgement of success or failure is needed, such as in life critical applications. But for 99+% of real world applications what you're doing is perfectly fine as long as you account for the things discussed above and have some reasonable level of fault tolerance and failsafes in place, which the fact that you're logging the exception shows you clearly do.
PS - If you're interested in having robust progress reporting and you aren't familiar with it, I would look into SignalR, which would allow you to notify the client of a successful email send (or anything else) even after the API call returns, and is shockingly easy to use. Plus an open websocket connection would almost certainly prevent IIS from mistaking a returned API method for an opportunity to kill the process.
Are there any assurances that the .NET Core runtime will wait until all threads initiated in an async action have terminated/ended before destroying the controller instance?
No, this is absolutely not guaranteed.
I don't need to know if the async function actually succeeds or not (e.g. sending the email), I just want to make sure that it attempts to. My fear is that the .NET Core runtime will possibly kill the thread in the middle of execution.
You cannot be sure that it will attempt to do so. The thread (and entire process) may be terminated at any time after the HTTP response is sent. In general, request-extrinsic code is not guaranteed to complete.
Some people are fine with losing some work (e.g., in this case, missing some emails). I'm not, so my systems are all built on a basic distributed architecture, as described on my blog.
It's important to note that work can be lost during any shutdown, and shutdowns are normal:
Any rolling upgrade triggers shutdowns (i.e., application updates).
IIS/ASP.NET recycles the application pool every 29 hours by default.
Runtime and OS patches require shutdowns.
Cloud hosting causes shutdowns (both at the VM level and higher levels).
Bottom line: shutdowns are normal, and shutdowns cause any currently-running request-extrinsic work to be lost. If you're fine with occasionally losing work, then OK; but if you require an assurance that the work is done, then you'll need a basic distributed architecture (a durable queue with a background processor).
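The shape of that architecture, sketched with hypothetical names (_queue and _email stand in for a durable queue, such as a database table or message broker, plus an email client; this is a pattern sketch, not a drop-in implementation):

```csharp
// Request-intrinsic side: durably record the work, then respond.
[HttpPost]
public async Task<IActionResult> SendEmail([FromBody] EmailRequest request)
{
    // Persist the work item BEFORE responding, so a shutdown cannot lose it.
    await _queue.EnqueueAsync(new EmailWorkItem(request.To, request.Subject, request.Body));
    return Accepted();
}

// Request-extrinsic side: a background processor drains the durable queue.
public class EmailProcessor : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            EmailWorkItem item = await _queue.DequeueAsync(stoppingToken);
            await _email.Send(item.To, item.Subject, item.Body);
            // Acknowledge/delete only after success, so a crash mid-send
            // means the item is retried rather than lost.
            await _queue.MarkCompletedAsync(item);
        }
    }
}
```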
There are more basic control-flow issues with what you are trying to do. Your biggest problem is not the guarantee of whether it finishes or not.
The example you present is very simple, but in a real-life scenario you will need some context inside InternalSendEmail when it executes. Because the request has been completely served by that time, there will be no HttpContext, with all the consequences: for example, you can't even log the IP address of the request, not to mention more advanced context-bound things like the user (or any other security principal), etc.
Of course you can pass anything as a parameter (for example the IP address), but your logging infrastructure (or your custom log enricher) will probably not work with that. The same is true for any other pipeline component that depends on the context.

how to interrupt a long-running Web Service call

I am dealing with a web-service call that may take anywhere from a few seconds to several minutes to complete. It constructs the requested data and returns it. Right now for a long-running call into the WS the user interface (WinForms) becomes unresponsive; the user has no way to cancel the operation.
The ideal approach to solving this (I think) would be to break the operation into two web-service calls: first a request, second to get the status or available data.
But if the web-service structure cannot be changed, what is the best way to interrupt the web-service call?
UPDATE:
The WS call could be made asynchronously. If the user wants to cancel the operation, then I'd like to relieve the server of unfinished work (rather than letting the thread complete normally and throw away the response). Thread.Abort() is a possibility but I want to know if there is a better way.
The web services I am working with are WCF based. The operations are read-only, so there is nothing to undo if interrupted.
You can generate an asynchronous proxy class to implement this feature.
Please see the following link for a sample:
http://msdn.microsoft.com/en-us/library/wyd0d1e5(v=vs.100).aspx
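A hedged sketch of what that looks like on the client, assuming a generated WCF proxy named DataServiceClient with event-based async methods (all names here are hypothetical):

```csharp
DataServiceClient client = new DataServiceClient();

// Event-based async pattern: the WinForms UI thread stays responsive.
client.GetDataCompleted += (sender, args) =>
{
    if (!args.Cancelled && args.Error == null)
        DisplayResult(args.Result);
};
client.GetDataAsync();

// If the user clicks Cancel: Abort() tears down the channel immediately.
// Note this only stops the client from waiting; the service will usually
// run the operation to completion unless it checks for a dropped connection.
void OnCancelClicked() { client.Abort(); }
```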

Thread Aborted?

Hi,
I have an ASP.NET application where I have added a web service that contains a "fire and forget" method. When this method is executed it starts a loop (0-99999), and on every iteration it reads an XML file and saves its contents to the database.
The problem is that this action takes a couple of hours, and it usually ends with a ThreadAbortException.
I have seen that you can increase the executionTimeout, like this:
<httpRuntime executionTimeout="604800"/>
<compilation debug="true">
But this does not help.
I have also tried adding a Thread.Sleep within the loop. If I set it to 500 the loop gets about halfway, and if I set it below 100 it only runs a couple of thousand iterations before the ThreadAbortException.
How can I solve this?
Don't run the loop inside the web service. Instead, put it in a console app, a WinForms app, or possibly even a Windows service. Use the web service to start the other program.
A web service is basically a web page, and ASP.NET web pages are not meant to host long-running processes.
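One way to sketch that hand-off, assuming a hypothetical worker executable that does the actual loop:

```csharp
[WebMethod]
public void StartImport()
{
    // Launch the long-running work in a separate process so this
    // request can return immediately. The path is hypothetical.
    var psi = new System.Diagnostics.ProcessStartInfo
    {
        FileName = @"C:\Tools\ImportWorker.exe",
        UseShellExecute = false
    };
    System.Diagnostics.Process.Start(psi);
    // The worker process is not tied to the ASP.NET request lifetime,
    // so executionTimeout and thread aborts no longer apply to it.
}
```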
This article does not directly answer your question, but contains a snippet of info relevant to my answer:
http://msdn.microsoft.com/en-us/magazine/dd296718.aspx
However, when the duration of the operation grows longer than the typical ASP.NET session duration (20 minutes) or requires multiple actors (as in my hiring example), ASP.NET does not offer sufficient support. You may recall that the ASP.NET worker processes automatically shut down on idle and periodically recycle themselves. This will cause big problems for long-running operations, as state held within those processes will be lost.
and the article is a good read, at any rate. It may offer ideas for you.
Not sure if this is 'the answer', but when you receive the web service call you could consider dispatching the action onto another thread. That could then run until completion. You would want to consider how you ensure that someone doesn't kick off two of these processes simultaneously though.
I have a ASP.NET application where I have added a Webservice that contains a "fire and forget" method. When this method is executed it will start a loop (0-99999) and for every loop it will read a xml file and save it to the database.
Let's not go into the fact that I find this approach quite... hm... bad for many reasons (like a recycle in the middle of the run). I would queue the request, then return, and have a queue listener do the processing with transactional integrity.
Anyhow, what you CAN do is:
Queue a work item for a thread-pool thread to pick up.
Return immediately.
Besides that, web services and the like are not a good place for hours-long processes. Kick off a workflow and handle it separately.
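A minimal sketch of the queue-and-return shape (ProcessXmlFile is hypothetical; guarding against two simultaneous runs is omitted):

```csharp
[WebMethod]
public void StartProcessing()
{
    // Hand the loop to a thread-pool thread and return immediately.
    ThreadPool.QueueUserWorkItem(_ =>
    {
        for (int i = 0; i < 100000; i++)
        {
            ProcessXmlFile(i); // read the XML file, save it to the database
        }
    });
    // Caveat from above: this work still dies if the worker process
    // recycles; a transactional queue outside the process is the robust fix.
}
```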

Async webmethod without timeout

I need a console app which will call a web method.
It must be asynchronous and without a timeout (we don't know how much time the method takes to complete its task).
Is it good way:
[WebMethod]
[SoapDocumentMethod(OneWay = true)]
??
Don't use one-way if you need results
First, if you need a response from your method, you don't want [SoapDocumentMethod(OneWay = true)]. This attribute creates a "fire and forget" call which never returns a response to the caller, and such a method must return void. Instead, use a regular method and call it asynchronously.
One method or two?
If you're using ASMX, there are two basic solutions: one method with a very long timeout, or two methods (as @Aaronaught suggested above): one to kick off the operation and return an ID for it, and another that takes the ID and retrieves the results (if available).
Personally, I would not recommend the two-method approach in most cases because of the additional complexity involved, including:
client and server code need to change to support two-step invocation
ASP.NET intrinsic objects like Request and Response are not available when called from a background task launched with ThreadPool.QueueUserWorkItem.
throttling on a busy server gets much harder if there are multiple threads involved with each request.
the server must hang onto the results until the client picks them up (or you decide to throw them out), which may eat up RAM if the results are large.
you can't stream large, intermediate results back to the client
True, in some scenarios the two-method approach may scale better and will be more resilient to broken network connections between client and server. If you need to pick up results hours later, it's something to consider. But if your operations only take a few minutes and you can guarantee the client will stay connected, then given the additional dev complexity of the two-method approach I'd consider it a last resort, to be used only if the one-method solution doesn't meet your needs.
Anyway, the solution requires two pieces. First, you need to call the method asynchronously from the client. Second, you need to lengthen timeouts on both client and server. I cover both below.
Calling ASMX Web Services Asynchronously
For calling an ASMX web service asynchronously from a command-line app, take a look at this article, starting with page 2. It shows how to call a web service asynchronously from a .NET client app using the newer Event-Based Asynchronous Pattern. Note that the older .NET 1.0 approach described here, which relies on BeginXXX/EndXXX methods on the proxy, is no longer recommended, since Visual Studio's proxy generator doesn't create those methods. Better to use the event-based pattern as linked above.
Here's an excerpt/adaptation from the article above, so you can get an idea of the code involved:
void KickOffAsyncWebServiceCall(object sender, EventArgs e)
{
    HelloService service = new HelloService();
    // Hook up the async event handler
    service.HelloWorldCompleted += new HelloWorldCompletedEventHandler(this.HelloWorldCompleted);
    service.HelloWorldAsync();
}

void HelloWorldCompleted(object sender, HelloWorldCompletedEventArgs args)
{
    // Display the return value
    Console.WriteLine(args.Result);
}
Lengthen server and client timeouts
To prevent timeouts, http://www.dotnetmonster.com/Uwe/Forum.aspx/asp-net-web-services/5202/Web-Method-TimeOut has a good summary of how to adjust both client and server timeouts. You didn't specify in your question if you own the server-side method or just the client-side call, so the excerpt below covers both cases:
There are two settings that affect web service call timeout behavior:
** The ASP.NET web service's server-side httpRuntime timeout setting, configured through the following element:
httpRuntime Element (ASP.NET Settings Schema)
http://msdn2.microsoft.com/en-us/library/e1f13641.aspx
<configuration>
<system.web>
<httpRuntime ............. executionTimeout="45" ............... />
</system.web>
</configuration>
Also, make sure that you've set <compilation debug="false" /> so as to make the timeout work correctly.
** If you're using the wsdl.exe or VS IDE "Add Web Reference" generated proxy to call web service methods, there is also a timeout setting on the client proxy class (derived from the SoapHttpClientProtocol class). This is the "Timeout" property inherited from the WebClientProtocol class:
WebClientProtocol.Timeout Property
http://msdn2.microsoft.com/en-us/library/system.web.services.protocols.webclientprotocol.timeout.aspx
Therefore, you can consider adjusting these two values according to your application's scenario. Here is a former thread that also mentions this:
http://groups.google.com/group/microsoft.public.dotnet.framework.webservices/browse_thread/thread/73548848d0544bc9/bbf6737586ca3901
Note that I'd strongly recommend making your timeouts long enough to encompass your longest operation (plus enough buffer to be safe should things get slower), but I wouldn't recommend turning off timeouts altogether. It's generally bad programming practice to allow unlimited timeouts, since an errant client or server can permanently tie up the other. Instead, just make the timeouts very long, and make sure to log instances where your clients or servers time out, so you can detect and diagnose the problem when it happens!
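Concretely, the client-side knob from the excerpt above is just a property on the generated proxy (using the article's HelloService proxy as the example):

```csharp
HelloService proxy = new HelloService();

// WebClientProtocol.Timeout is in milliseconds; -1 means infinite.
// A long finite value is safer than disabling the timeout entirely.
proxy.Timeout = 30 * 60 * 1000; // 30 minutes
proxy.HelloWorldAsync();
```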
Finally, to echo the commenters above: for new code it's best to use WCF. But if you're stuck using ASMX web services, the above solution should work.
If the method is actually one-way, and you don't care about the result or ever need to follow up on the status of your request, then that is good enough.
If you do need a result (eventually), or need to check on the status of the operation, then this won't work very well. What your method should do in that case is start the work in a background thread, then immediately return an ID that can be used in a different web method to look up the status.
So something like this:
public enum JobStatus { Running, Completed, Failed };

public class MyService : WebService
{
    [WebMethod]
    public int BeginJob()
    {
        int id = GetJobID();
        // Save to a database or persistent data source
        SaveJobStatus(id, JobStatus.Running);
        ThreadPool.QueueUserWorkItem(s =>
        {
            // Do the work here
            SaveJobStatus(id, JobStatus.Completed);
        });
        return id;
    }

    [WebMethod]
    public JobStatus GetJobStatus(int id)
    {
        // Load the status from the database or other persistent data source
        return ( ... );
    }
}
That's one method to start the work, and another method to check on its status. It's up to the client to poll periodically. It's not a very good system, but you don't have a lot of options with ASMX.
Of course, if you do need a response from this operation, a much better way is to use WCF instead. WCF gives you callback contracts, which you can use to begin a one-way operation and subscribe to a notification when that operation is complete, which eliminates the need for polling above.
So, to summarize all that:
If you don't need any response or status updates, just use IsOneWay = true.
If you do need updates, and can use WCF on the service side, use that with a callback contract. You should be using WCF for new Web Service projects anyway.
If you need updates and cannot use WCF, do the work in a background thread and implement a periodic polling system with an additional status-check web method.
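For the WCF callback-contract route, a minimal sketch of the contract pair (interface and member names are hypothetical):

```csharp
[ServiceContract(CallbackContract = typeof(IJobCallback))]
public interface IJobService
{
    // One-way start: the client does not block waiting for a reply.
    [OperationContract(IsOneWay = true)]
    void BeginJob(int jobId);
}

public interface IJobCallback
{
    // The service calls back into the client when the work finishes,
    // which removes the need for the client to poll.
    [OperationContract(IsOneWay = true)]
    void JobCompleted(int jobId, bool succeeded);
}
```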

WCF Error Logging at Service Boundary

I'm trying to implement an IErrorHandler in my WCF service in order to log every exception that hits the service boundary before it's passed to the client. I already use IErrorHandlers for translating Exceptions to typed FaultExceptions, which has been very useful. According to the MSDN for IErrorHandler.HandleError(), it's also intended to be used for logging at the boundary.
The problem is, the HandleError function isn't guaranteed to be called on the operation thread, so I can't figure out how to get information about what operation triggered the exception. I can get the TargetSite out of the exception itself, but that gives me the interior method instead of the operation. I could also parse through the StackTrace string to figure out where it was thrown, but this seems a little fragile and hokey. Is there any consistent, supported way to get any state information (messages, operationdescription, anything) while in the HandleError function? Or any other ways to automatically log exceptions for service calls?
I'm looking for a solution to implement on production, using my existing logging framework, so SvcTraceViewer won't do it for me.
Thanks.
I ended up putting the logging in IErrorHandler.ProvideFault() instead of IErrorHandler.HandleError(). The ProvideFault call is made on the operation thread, so I can use OperationContext.Current to get some information to log.
I use the IErrorHandler in the same way that you describe, but not for logging. Instead, on service classes (WCF or not) I use an interceptor as described here. I believe that this technique will capture the information you are interested in.
You could stash any context information you need to log in the Exception's Data dictionary in the ProvideFault method which is called on the operation thread...then reference it in the HandleError method for logging purposes.
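A sketch of that hand-off (the Data key and the Log helper are hypothetical; the IErrorHandler member signatures are WCF's own):

```csharp
public class LoggingErrorHandler : IErrorHandler
{
    // Runs on the operation thread: OperationContext.Current is available.
    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        OperationContext ctx = OperationContext.Current;
        if (ctx != null)
        {
            // Stash the SOAP action that identifies the operation.
            error.Data["OperationName"] = ctx.IncomingMessageHeaders.Action;
        }
    }

    // May run on a worker thread: read back what ProvideFault stashed.
    public bool HandleError(Exception error)
    {
        Log(error, error.Data["OperationName"] as string);
        return false; // let WCF continue its normal fault handling
    }
}
```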
Have you used the Service Trace Viewer?
The ProvideFault() operation is called on the incoming call thread, and the client is still blocked waiting for the response. I don't think it is a good idea to add a lengthy process (like logging) inside this method. That is why they exposed another operation, HandleError, which gets called on a separate worker thread.
But I understand your situation. Please share if you find a solution other than logging inside ProvideFault.
What about creating an instance and saving the request message on that instance?
