I've looked at numerous posts on task exception handling, and I'm still not clear on how to resolve this issue. My application crashes whenever there is an exception in one of my tasks. I am unable to catch the exception, and my application is left in an inoperable state.
UPDATE: This only seems to happen when calling the ExecuteAsync method in the DataStax Cassandra C# driver, which leads me to believe it's an issue in the driver itself. When I create my own task and throw an exception, it works fine.
Most use cases seem to await all asynchronous calls, but in my case I need to fire off a group of asynchronous commands and then use WhenAll to await them together (rather than awaiting each one individually).
This is based on this post, which shows how to batch up tasks to send to a Cassandra database:
https://lostechies.com/ryansvihla/2014/08/28/cassandra-batch-loading-without-the-batch-keyword/
This is the same practice recommended by Microsoft when you want to perform multiple async requests without having to chain them:
https://social.msdn.microsoft.com/Forums/en-US/6ab8c611-6b0c-4390-933c-351e56b62526/await-multiple?forum=async
My application entry point:
public void Go()
{
    dbTest().Wait();
}
My async method...
private async Task dbTest()
{
    List<Task> tasks = new List<Task>();

    Task<RowSet> resultSetFuture = session.ExecuteAsync(bind); // spawn a db exception
    Task<RowSet> resultSetFuture2 = session.ExecuteAsync(bind);
    Task<RowSet> resultSetFuture3 = session.ExecuteAsync(bind);

    tasks.Add(resultSetFuture);
    tasks.Add(resultSetFuture2);
    tasks.Add(resultSetFuture3);

    try
    {
        await Task.WhenAll(tasks.ToArray());
    }
    catch (Exception ex)
    {
        ...
    }
}
All indications are that awaiting Task.WhenAll should surface any exceptions from the tasks so they can be caught, but in this case it just locks up my application.
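For comparison, here is a minimal self-contained sketch of the same pattern. The failing task is only a stand-in for the driver's ExecuteAsync call (not reproduced here); with ordinary tasks, the try/catch around the awaited Task.WhenAll does observe the exception:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class WhenAllDemo
{
    static void Main()
    {
        Run().Wait();
    }

    static async Task Run()
    {
        // Stand-ins for the driver's ExecuteAsync calls: one faults, two succeed.
        var tasks = new List<Task>
        {
            Task.Run(() => throw new InvalidOperationException("db error")),
            Task.Delay(10),
            Task.Delay(10)
        };

        try
        {
            await Task.WhenAll(tasks);
        }
        catch (Exception ex)
        {
            // await unwraps the AggregateException and rethrows the first inner exception.
            Console.WriteLine($"Caught: {ex.Message}");
        }
    }
}
```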
Related
I found this code snippet (simplified version provided):
using System;
using System.Threading.Tasks;

namespace TaskTest
{
    class Program
    {
        static void Main(string[] args)
        {
            var task = SendMessage();
            task.Wait();

            if (task.IsFaulted) // Never makes it to this line
            {
                Console.WriteLine("faulted!");
            }

            Console.Read();
        }

        private static async Task SendMessage()
        {
            await Task.Run(() => throw new Exception("something bad happened"));
        }
    }
}
I'm fairly sure the problem is that task.Wait() throws, and since there is no catch block, execution never reaches the IsFaulted check.
Now I'm wondering when you would need to use task.IsFaulted?
The Task is throwing the exception, not the Task.Wait().
But there is a subtle difference in how exceptions get bubbled up when using Task.Wait() versus Task.GetAwaiter().GetResult().
You should probably use
task.GetAwaiter().GetResult();
See this question for a good explanation of how the different synchronous ways to wait interact with exceptions.
Task.IsFaulted is useful when using await Task.WhenAny(), or any other time you want to check the status of a Task without awaiting it, e.g. from another synchronization context.
I often find myself checking Task.IsCompleted / IsFaulted / IsCompletedSuccessfully to determine what feedback to give a user in a WinForms scenario.
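As a small illustration of that WhenAny case (the task bodies below are placeholders, not from any particular library): WhenAny returns the first task to complete, and IsFaulted lets you inspect it before deciding whether to await it:

```csharp
using System;
using System.Threading.Tasks;

class WhenAnyDemo
{
    static async Task Main()
    {
        Task<int> fast = Task.Run(() => 42);
        Task<int> slow = Task.Run(async () => { await Task.Delay(5000); return 7; });

        // WhenAny completes as soon as the first of the two tasks completes.
        Task<int> first = await Task.WhenAny(fast, slow);

        if (first.IsFaulted)
        {
            // Inspect the failure without rethrowing.
            Console.WriteLine($"First task failed: {first.Exception?.InnerException?.Message}");
        }
        else
        {
            // Safe to await: the task has already completed.
            Console.WriteLine($"First result: {await first}");
        }
    }
}
```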
Here is an example where IsFaulted could be useful. Let's say that you have started two concurrent tasks, task1 and task2, and after both of them have completed (Task.WhenAll) you want to handle the exception of each task individually. In that case the ex in the catch (Exception ex) is not useful to you, because it contains only one of the possible exceptions, and that exception is dissociated from the originating task. So you could do something like this:
try
{
    await Task.WhenAll(task1, task2);
}
catch when (task1.IsFaulted || task2.IsFaulted)
{
    if (task1.IsFaulted) HandleException("Task 1", task1.Exception.InnerException);
    if (task2.IsFaulted) HandleException("Task 2", task2.Exception.InnerException);
}
When a task IsFaulted, it is guaranteed that its Exception property will not be null. This property returns an AggregateException, with an InnerException that is also practically guaranteed not to be null. That's because it is practically impossible to transition a task to a faulted state without providing at least one Exception object. The InnerException contains the first of the InnerExceptions that are stored inside the AggregateException.
In this particular example only the failure case is handled. If none of the tasks is faulted and at least one is canceled, the TaskCanceledException will propagate.
When you await a Task asynchronously, the program constantly switches between the calling context and the context of the awaited Task (this is over-generalized).
This means that in SendMessage() the program runs everything before the await call in Main's context, runs the awaited call in a Task, which may or may not run on another thread, and then switches back to the original context of Main.
Because you awaited the Task within SendMessage(), the Task can properly bubble errors up to the calling context, which in this case is Main, and that halts the program.
Both .Wait() and await bubble errors back to the calling context.
If, in your example, you removed the .Wait(), the Task would run in parallel (synchronously in its own context on another thread) and no errors would be able to bubble back to Main.
Think of it like you are cooking a two course meal. You could cook it asynchronously by constantly walking between two cooking stations and doing tasks at each a little at a time. Alternatively you could have a friend cook the other meal in parallel with you.
When you cook both meals yourself you will know immediately if you've burned your steak. But if you have your friend cook the steak, but he sucks at cooking steak, you won't know he's burned the steak until you check his work(.IsFaulted).
I have a question about running tasks concurrently in Azure Functions on the Consumption plan.
One part of our application allows users to connect their mail accounts; we then download messages every 15 minutes. We have one Azure Function to do this for all users. The thing is, as the user count increases, the function needs more time to execute.
To mitigate a potential timeout, I've changed our function's logic. You can find some code below. It now creates a separate task for each user and then waits for all of them to finish. There is also some exception handling implemented, but that's not the topic for today.
The problem is that when I check the logs, it looks as if the tasks weren't executed simultaneously, but rather one after another. Now I wonder if I made some mistake in my code, or whether it's a thing with Azure Functions that they cannot run in such a scenario (I haven't found anything suggesting that on the Microsoft sites; quite the opposite, actually).
PS - I do know about durable functions, however, for some reason I'd like to resolve this issue without them.
My code:
List<Task<List<MailMessage>>> tasks = new List<Task<List<MailMessage>>>();

foreach (var account in accounts)
{
    using (var cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(6)))
    {
        try
        {
            tasks.Add(GetMailsForUser(account, cancellationTokenSource.Token, log));
        }
        catch (TaskCanceledException)
        {
            log.LogInformation("Task was cancelled");
        }
    }
}

try
{
    await Task.WhenAll(tasks.ToArray());
}
catch (AggregateException aex)
{
    aex.Handle(ex =>
    {
        TaskCanceledException tcex = ex as TaskCanceledException;
        if (tcex != null)
        {
            log.LogInformation("Handling cancellation of task {0}", tcex.Task.Id);
            return true;
        }
        return false;
    });
}

log.LogInformation("Finished downloading messages.");
private async Task<List<MailMessage>> GetMailsForUser(MailAccount account, CancellationToken cancellationToken, ILogger log)
{
    log.LogInformation($"[{account.UserID}] Started downloading data for account {account.EmailAddress}");

    IEnumerable<MailMessage> mails;

    try
    {
        using (var client = _mailClientFactory.GetIncomingMailClient(account))
        {
            mails = client.GetNewest(false);
        }

        log.LogInformation($"[{account.UserID}] Downloaded {mails.Count()} messages for account {account.EmailAddress}.");

        return mails.ToList();
    }
    catch (Exception ex)
    {
        log.LogWarning($"[{account.UserID}] Failed to download messages for account {account.EmailAddress}");
        log.LogError($"[{account.UserID}] {ex.Message} {ex.StackTrace}");

        return new List<MailMessage>();
    }
}
Azure Functions on a Consumption plan scale out automatically. The problem is that the load needs to be high enough to trigger the scale-out.
What is probably happening is that the scaling is not being triggered, so everything runs on the same instance, and therefore the calls run sequentially.
There is a discussion on this with some code to test it here: https://learn.microsoft.com/en-us/answers/questions/51368/http-triggered-azure-function-not-scaling-to-extra.html
The compiler will give you a warning for GetMailsForUser:
CS1998: This async method lacks 'await' operators and will run synchronously. Consider using the 'await' operator to await non-blocking API calls, or 'await Task.Run(…)' to do CPU-bound work on a background thread.
It's telling you it will run synchronously, which is the behaviour you're seeing. In the warning message there's a couple of recommendations:
Use await. This would be the most ideal solution, since it will reduce the resources your Azure Function uses. However, this means your _mailClientFactory will need to support asynchronous APIs, which may be too much work to take on right now (many SMTP libraries still do not support async).
Use thread pool threads. Task.Run is one option, or you could use PLINQ or Parallel. This solution will consume one thread per account, and you'll eventually hit scaling issues there.
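As a sketch of that second option, applied to the question's own method (MailAccount, _mailClientFactory, and GetNewest are taken from the question's code, so this is a fragment, not a complete program): pushing the synchronous client call onto a thread-pool thread lets the per-account tasks actually overlap:

```csharp
// The method no longer needs async/await: it returns the Task produced by
// Task.Run, which runs the synchronous mail client on a thread-pool thread.
private Task<List<MailMessage>> GetMailsForUser(MailAccount account, CancellationToken cancellationToken, ILogger log)
{
    return Task.Run(() =>
    {
        log.LogInformation($"[{account.UserID}] Started downloading data for account {account.EmailAddress}");

        using (var client = _mailClientFactory.GetIncomingMailClient(account))
        {
            // GetNewest is synchronous (blocking), so it runs off the caller's thread.
            return client.GetNewest(false).ToList();
        }
    }, cancellationToken);
}
```

Note the token passed to Task.Run only prevents the work from starting once cancellation is requested; it does not abort GetNewest mid-call.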
If you want to identify which Task is running in which Function instance, etc., use the invocation id, ctx.InvocationId.ToString(). Maybe prefix all your logs with this id.
Your code isn't written such that it can be run in parallel by the runtime. See this: Executing tasks in parallel
You can also get more info about the trigger using trigger meta-data. Depends on trigger. This is just to get more insight into what function is handling what message etc.
We have a third-party method Foo which sometimes deadlocks for unknown reasons.
We are running a single-threaded TCP server and call this method every 30 seconds to check that the external system is available.
To mitigate the deadlock in the third-party code, we put the ping call in a Task.Run so that the server itself does not deadlock.
Like
async Task<bool> WrappedFoo()
{
    var timeout = 10000;
    var task = Task.Run(() => ThirdPartyCode.Foo());
    var delay = Task.Delay(timeout);

    if (delay == await Task.WhenAny(delay, task))
    {
        return false;
    }
    else
    {
        return await task;
    }
}
But this (in our opinion) has the potential to starve the application of free threads, since if one call to ThirdPartyCode.Foo deadlocks, its thread will never recover, and if this happens often enough we might run out of resources.
Is there a general approach how one should handle deadlocking third-party code?
A CancellationToken won't work because the third-party-api does not provide any cancellation options.
Update:
The method at hand is from the SAPNCO.dll provided by SAP to establish and test RFC connections to an SAP system, so the method is not a simple network ping. I renamed the method in the question to avoid further misunderstandings.
Is there a general approach how one should handle deadlocking third-party code?
Yes, but it's not easy or simple.
The problem with misbehaving code is that it can not only leak resources (e.g., threads), but it can also indefinitely hold onto important resources (e.g., some internal "handle" or "lock").
The only way to forcefully reclaim threads and other resources is to end the process. The OS is used to cleaning up misbehaving processes and is very good at it. So, the solution here is to start a child process to do the API call. Your main application can communicate with its child process by redirected stdin/stdout, and if the child process ever times out, the main application can terminate it and restart it.
This is, unfortunately, the only reliable way to cancel uncancelable code.
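A rough sketch of that child-process watchdog, under stated assumptions: PingWorker.exe is a hypothetical helper executable (not part of SAPNCO) that calls ThirdPartyCode.Foo() and prints "OK" to stdout on success:

```csharp
using System;
using System.Diagnostics;

static class FooWatchdog
{
    // Runs the hypothetical worker process and enforces a hard timeout.
    // If the third-party call hangs inside the child, killing the child
    // reclaims every resource the OS gave it.
    public static bool PingViaChildProcess(int timeoutMs)
    {
        using (var child = Process.Start(new ProcessStartInfo
        {
            FileName = "PingWorker.exe",   // hypothetical worker that calls ThirdPartyCode.Foo()
            RedirectStandardOutput = true,
            UseShellExecute = false
        }))
        {
            if (!child.WaitForExit(timeoutMs))
            {
                child.Kill();              // forcefully reclaim the hung call
                return false;
            }

            // Output is expected to be a single short line, so reading after exit is safe here.
            return child.StandardOutput.ReadToEnd().Trim() == "OK";
        }
    }
}
```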
Cancelling a task is a collaborative operation in that you pass a CancellationToken to the desired method and externally you use CancellationTokenSource.Cancel:
public void Caller()
{
    try
    {
        CancellationTokenSource cts = new CancellationTokenSource();
        Task longRunning = Task.Run(() => CancellableThirdParty(cts.Token), cts.Token);
        Thread.Sleep(3000); // or some condition / signal
        cts.Cancel();
    }
    catch (OperationCanceledException ex)
    {
        // handle the cancellation somehow
    }
}

public void CancellableThirdParty(CancellationToken token)
{
    while (true)
    {
        // token.ThrowIfCancellationRequested(); -- if you don't handle the cancellation here
        if (token.IsCancellationRequested)
        {
            // code to handle the cancellation signal
            // throw new OperationCanceledException("[Reason]");
        }
    }
}
As you can see in the code above, in order to cancel an ongoing task, the method running inside it must be structured around the CancellationToken.IsCancellationRequested flag or simply the CancellationToken.ThrowIfCancellationRequested method, so that the caller just issues CancellationTokenSource.Cancel.
Unfortunately if the third party code is not designed around CancellationToken ( it does not accept a CancellationToken parameter ), then there is not much you can do.
Your code isn't cancelling the blocked operation. Use a CancellationTokenSource and pass a cancellation token to Task.Run instead:
var cts = new CancellationTokenSource(timeout);
try
{
    await Task.Run(() => ThirdPartyCode.Ping(), cts.Token);
    return true;
}
catch (TaskCanceledException)
{
    return false;
}
It's quite possible that blocking is caused due to networking or DNS issues, not actual deadlock.
That still wastes a thread waiting for a network operation to complete. You could use .NET's own Ping.SendPingAsync to ping asynchronously and specify a timeout:
var ping = new Ping();
var reply = await ping.SendPingAsync(ip, timeout);
return reply.Status == IPStatus.Success;
The PingReply class contains far more detailed information than a simple success/failure. The Status property alone differentiates between routing problems, unreachable destinations, timeouts, etc.
I have a low-level CAN device class that I would like to create an onMessageReceive event for. I have several high-level device classes that could each use an instance of this CAN class. I would like to attach the high-level device class's message parser to the low-level CAN device's onMessageReceive event, such that when the low-level class receives a packet, it is parsed into the high-level class by the low-level reader task. Put into code, it would look like the following.
void Main()
{
    try
    {
        using (HighLevelDevice highLevelDevice = new HighLevelDevice())
        {
            while (true)
            {
                // Use the properties/fields in highLevelDevice to make testing decisions.
            }
        }
    }
    catch (Exception)
    {
        // If the low level CAN reader task encounters an error I would like for it to asynchronously propagate up to here.
        throw;
    }
}

public class HighLevelDevice
{
    private LowLevelCan lowLevelCanInstance;

    public HighLevelDevice()
    {
        lowLevelCanInstance = new LowLevelCan(this.ProcessPacket);
    }

    private void ProcessPacket(Packet packet)
    {
        // Convert packet contents into high level device properties/fields.
    }
}

public class LowLevelCan
{
    private delegate void ProcessPacketDelegate(Packet packet);

    private ProcessPacketDelegate processPacket;

    private Task readerTask;

    public LowLevelCan(Action<Packet> processPacketFunction)
    {
        processPacket = new ProcessPacketDelegate(processPacketFunction);
        readerTask = Task.Run(() => readerMethod());
    }

    private async Task readerMethod()
    {
        while (notCancelled) // This would be a cancellation token, but I left that out for simplicity.
        {
            processPacket(await Task.Run(() => getNextPacket()));
        }
    }

    private Packet getNextPacket()
    {
        // Wait for next packet and then return it.
        return new Packet();
    }
}

public class Packet
{
    // Data packet fields would go here.
}
If an exception is thrown in getNextPacket I would like that to be caught in main. Is this possible in any way? If I am way off base and completely misunderstanding async I apologize. If something like this is possible how could I change my approach to achieve it? I could check the state of the reader periodically, but I would like to avoid that if possible.
This implementation will kill the reader, but the highLevelDevice thread continues obliviously. This would be okay if I stored the error and checked the status occasionally on the main thread. I would just like to find a solution that avoid that, if possible.
I have tried variations of error-reporting events and progress reporting created on the thread that the highLevelDevice exits on. These do not work as expected, or I do not understand what they are doing.
Your title question applies when you want to start a method asynchronously and at a later time synchronize with it to get the result. However, what the body of your question describes is really concurrent access to shared state (the high-level device.) The state is read from your Main thread but written to on a background thread by your low-level device.
The solution is to create an Error property in the high-level device which can be used to coordinate error-handling across the two threads:
catch any exceptions thrown by your low-level device and propagate them to the high-level device (see below), which will store the error in a property Error.
encapsulate all reads of the HL device in properties so that on read you can check Error. If an error has occurred, throw an exception with the details (to be caught and dealt with in Main.)
The net effect is that exceptions from the low-level device have been propagated to Main.
As a side note, your question implies the task-based asynchronous pattern but your low-level device is actually written in an event-based manner. See Asynchronous programming patterns.
Each pattern has a specific method for propagating errors:
For the event-based pattern (EAP) you propagate errors through the event args of any events you raise. See Best Practices for Implementing the Event-based Asynchronous Pattern.
For the task-based pattern (TAP) you propagate errors/exceptions when you await the Task
Your Task.Run really only has the effect of putting the low-level device loop on a different thread from Main. You can't await readerTask because it represents the processing loop as a whole and not an individual packet update. The individual packet updates are notified through events instead.
To summarise, when the low-level device catches an exception, it should raise an event and pass the details of the exception in the event's event args. The high-level device will receive the event and store the details in its Error property. This all happens on your background thread. On the main thread, when an error is detected during a property read on the high-level device, the getter should throw an exception with the Error details to be handled in Main.
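A minimal sketch of that flow, with illustrative names (not the question's exact classes): the background thread reports failures via an event, the high-level device stores the details, and main-thread property reads rethrow them:

```csharp
using System;

// The low-level reader catches exceptions on its background thread and
// raises an event instead of letting the task fault silently.
public class LowLevelReader
{
    public event Action<Exception> ErrorOccurred;

    // Body of the reader loop iteration; the throw stands in for a real failure
    // in getNextPacket().
    public void ReadOnce()
    {
        try
        {
            throw new InvalidOperationException("CAN bus fault");
        }
        catch (Exception ex)
        {
            ErrorOccurred?.Invoke(ex); // runs on the background thread
        }
    }
}

public class HighLevelState
{
    private volatile Exception _error;
    private int _latestValue;

    // Subscribed to LowLevelReader.ErrorOccurred.
    public void OnLowLevelError(Exception ex) => _error = ex;

    public void Update(int value) => _latestValue = value;

    public int LatestValue
    {
        get
        {
            // Every read on the main thread first checks for a stored error,
            // which propagates the background failure to the caller.
            if (_error != null)
                throw new InvalidOperationException("Low-level device failed", _error);
            return _latestValue;
        }
    }
}
```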
No. I've tested this and you do have to wait for the task to complete in order to throw the exception. Either await the Task or use Task.Wait() to wait for the task to complete.
I tried using this code and it didn't catch the exception.
try
{
    var task = Task.Run(() => WaitASecond());
    task.ContinueWith(failedTask => throw failedTask.Exception, TaskContinuationOptions.OnlyOnFaulted);
}
catch (Exception ex)
{
    throw;
}
When I added task.Wait() under var task = Task.Run(() => WaitASecond()); it caught an aggregate exception and threw it.
You'll have to wait for all your tasks to complete to catch the exception and throw it up to Main().
The general answer to fire-and-forget questions, such as here and here, is not to use async/await, but to use Task.Run or TaskFactory.StartNew, passing in the synchronous method instead. However, sometimes the method that I want to fire-and-forget is async and there is no equivalent sync method.
Update Note/Warning: As Stephen Cleary pointed out below, it is dangerous to continue working on a request after you have sent the response. The reason is because the AppDomain may be shut down while that work is still in progress. See the link in his response for more information. Anyways, I just wanted to point that out upfront, so that I don't send anyone down the wrong path.
I think my case is valid because the actual work is done by a different system (a different computer on a different server), so I only need to know that the message has left for that system. If there is an exception, there is nothing that the server or user can do about it, and it does not affect the user; all I need to do is refer to the exception log and clean up manually (or implement some automated mechanism). If the AppDomain is shut down, I will have a residual file in a remote system, but I will pick that up as part of my usual maintenance cycle, and since its existence is no longer known by my web server (database) and its name is uniquely timestamped, it will not cause any issues while it still lingers.
It would be ideal if I had access to a persistence mechanism as Stephen Cleary pointed out, but unfortunately I don't at this time.
I considered just pretending that the DeleteFoo request has completed fine on the client side (javascript) while keeping the request open, but I need information in the response to continue, so it would hold things up.
So, the original question...
for example:
//External library
public async Task DeleteFooAsync();
In my asp.net mvc code I want to call DeleteFooAsync in a fire-and-forget fashion - I don't want to hold up the response waiting for DeleteFooAsync to complete. If DeleteFooAsync fails (or throws an exception) for some reason, there is nothing that the user or the program can do about it so I just want to log an error.
Now, I know that any exceptions will result in unobserved exceptions, so the simplest case I can think of is:
//In my code
Task deleteTask = DeleteFooAsync();
//In my App_Start
TaskScheduler.UnobservedTaskException += (sender, e) =>
{
    m_log.Debug("Unobserved exception! This exception would have been unobserved: {0}", e.Exception);
    e.SetObserved();
};
Are there any risks in doing this?
The other option that I can think of is to make my own wrapper such as:
private async void DeleteFooWrapperAsync()
{
    try
    {
        await DeleteFooAsync();
    }
    catch (Exception exception)
    {
        m_log.Error("DeleteFooAsync failed: " + exception.ToString());
    }
}
and then call that with TaskFactory.StartNew (probably wrapping in an async action). However this seems like a lot of wrapper code each time I want to call an async method in a fire-and-forget fashion.
My question is, what it the correct way to call an async method in a fire-and-forget fashion?
UPDATE:
Well, I found that the following in my controller (note that the controller action needs to be async because there are other async calls that are awaited):
[AcceptVerbs(HttpVerbs.Post)]
public async Task<JsonResult> DeleteItemAsync()
{
    Task deleteTask = DeleteFooAsync();
    ...
}
caused an exception of the form:
Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object.
   at System.Web.ThreadContext.AssociateWithCurrentThread(Boolean setImpersonationContext)
This is discussed here and seems to be to do with the SynchronizationContext and 'the returned Task was transitioned to a terminal state before all async work completed'.
So, the only method that worked was:
Task foo = Task.Run(() => DeleteFooAsync());
My understanding of why this works is that Task.Run starts DeleteFooAsync on a thread-pool thread, outside the request's context.
Sadly, Scott's suggestion below does not work for handling exceptions in this case, because foo is not a DeleteFooAsync task anymore, but rather the task from Task.Run, so does not handle the exceptions from DeleteFooAsync. My UnobservedTaskException does eventually get called, so at least that still works.
So, I guess the question still stands, how do you do fire-and-forget an async method in asp.net mvc?
First off, let me point out that "fire and forget" is almost always a mistake in ASP.NET applications. "Fire and forget" is only an acceptable approach if you don't care whether DeleteFooAsync actually completes.
If you're willing to accept that limitation, I have some code on my blog that will register tasks with the ASP.NET runtime, and it accepts both synchronous and asynchronous work.
You can write a one-time wrapper method for logging exceptions as such:
private async Task LogExceptionsAsync(Func<Task> code)
{
    try
    {
        await code();
    }
    catch (Exception exception)
    {
        m_log.Error("Call failed: " + exception.ToString());
    }
}
And then use the BackgroundTaskManager from my blog as such:
BackgroundTaskManager.Run(() => LogExceptionsAsync(() => DeleteFooAsync()));
Alternatively, you can keep TaskScheduler.UnobservedTaskException and just call it like this:
BackgroundTaskManager.Run(() => DeleteFooAsync());
As of .NET 4.5.2, you can do the following
HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken => await LongMethodAsync());
But it only works within an ASP.NET app domain.
The HostingEnvironment.QueueBackgroundWorkItem method lets you schedule small background work items. ASP.NET tracks these items and prevents IIS from abruptly terminating the worker process until all background work items have completed. This method can't be called outside an ASP.NET managed app domain.
More here: https://msdn.microsoft.com/en-us/library/ms171868(v=vs.110).aspx#v452
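Combining QueueBackgroundWorkItem with the logging-wrapper idea from earlier in this thread, a sketch (LongMethodAsync and m_log are placeholder names carried over from the surrounding answers, not real APIs):

```csharp
// Queue an async background work item that observes and logs its own
// exceptions, so nothing surfaces as an unobserved task exception.
HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
{
    try
    {
        await LongMethodAsync();
    }
    catch (Exception ex)
    {
        m_log.Error("Background work failed: " + ex);
    }
});
```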
The best way to handle it is to use the ContinueWith method and pass in the OnlyOnFaulted option.
private void button1_Click(object sender, EventArgs e)
{
    var deleteFooTask = DeleteFooAsync();
    deleteFooTask.ContinueWith(ErrorHandler, TaskContinuationOptions.OnlyOnFaulted);
}

private void ErrorHandler(Task obj)
{
    MessageBox.Show(String.Format("Exception happened in the background of DeleteFooAsync.\n{0}", obj.Exception));
}

public async Task DeleteFooAsync()
{
    await Task.Delay(5000);
    throw new Exception("Oops");
}
Where I put my message box you would put your logger.