Timeout in Async Method - C#

I have an async method named "RequestMessage()". Within this method, I send a message to a message broker. Since I don't know when to expect the result, I'm using "TaskCompletionSource". I want the async method to complete when the reply message arrives (I'll receive an event from the broker).
This works fine so far. My issue is that the message might never be answered, or only far too late.
I'm looking for a way to implement my own timeout. To do so, I tried a Timer as well as an Observable from Reactive Extensions. The issue is always the same - I can't get my main thread and the timer thread synchronized, because I'm using .NET Core 2 and there is no SynchronizationContext.
So, in my code there is an observer ..
Observable
    .Interval(TimeSpan.FromSeconds(timeOutInSeconds))
    .Subscribe(x =>
    {
        timeoutCallback();
    });
If the time expires, a callback should be invoked. In my calling method, I handle the callback this way:
TimeoutDelegate timeoutHandler = () => throw new WorkcenterRepositoryCommunicationException("Broker communication timed out.", null);
As you have probably realized already, this exception will never be caught, as it is not thrown on the main thread.
How can I sync threads here?
Thanks in advance!

The best way to "fail upon some problem" IMHO would be to throw the appropriate exception, but you can definitely just use return; if you prefer to avoid exceptions.
This will create a completed/faulted task that was completed synchronously, so the caller using await will get a finished task and continue on using the same thread.
CancellationToken allows the caller to cancel the operation, which isn't the scenario you are describing.
Task.Yield doesn't terminate any operation, it just enables other tasks to run for some time and reschedules itself for later.
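For the original timeout question: a common pattern that avoids timers and SynchronizationContext entirely is to race the TaskCompletionSource's task against Task.Delay using Task.WhenAny. The sketch below is only an illustration under assumptions - SendToBroker, pendingReply, Reply and Message are hypothetical stand-ins for the asker's broker plumbing; only Task.WhenAny/Task.Delay and the exception type come from the question:
public async Task<Reply> RequestMessageAsync(Message message, int timeOutInSeconds)
{
    var tcs = new TaskCompletionSource<Reply>();
    pendingReply = tcs;        // hypothetical field that the broker's reply event handler completes
    SendToBroker(message);     // hypothetical call that publishes the message

    // Race the reply against a timeout; no timer thread or SynchronizationContext involved.
    var completed = await Task.WhenAny(tcs.Task, Task.Delay(TimeSpan.FromSeconds(timeOutInSeconds)));
    if (completed != tcs.Task)
        throw new WorkcenterRepositoryCommunicationException("Broker communication timed out.", null);

    return await tcs.Task;     // propagates the reply (or any fault/cancellation) to the caller
}
Because the exception is thrown inside the async method itself, it surfaces on the awaiting caller like any other faulted task - no cross-thread signalling required.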

Related

How does the awaiting part of code get called back/executed when an asynchronous method finishes

I've read about asynchronous programming in C#, but I still do not fully understand how the continuation of an async method is executed.
From my understanding, asynchronous programming is not about multithreading. We can run an async method on the UI thread, and it later continues on that same UI thread (while not blocking it, so it keeps responding to other messages from the message loop).
This is the basic message loop most GUI apps have:
while (1)
{
    bRet = GetMessage(&msg, NULL, 0, 0);
    if (bRet > 0) // (bRet > 0 indicates a message that must be processed.)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    ...
}
DispatchMessage() calls the UI event handlers, and the code inside those handlers should not block the main thread. For that reason, if we want to, e.g., create a button that loads heavy data from disk, we can use an async method like this (simplified pseudocode):
public async Task ButtonClicked()
{
    loadingBar.Show();
    await AsyncLoadData();
    loadingBar.Hide();
}
When execution reaches the await AsyncLoadData(); line, it stores the context and returns a Task object. DispatchMessage() finishes, and the message loop comes back around to the bRet = GetMessage(&msg, NULL, 0, 0); line.
So my question is: how does the rest of the code get executed? Does the finished async operation trigger a new message, which is then handled by DispatchMessage() again? Or does the message loop have another step (after dispatch) that checks for finished async operations?
await by default will capture a "context" and use that to resume the execution of the method. This "context" is SynchronizationContext.Current, falling back on TaskScheduler.Current. UI apps provide a SynchronizationContext, e.g., WindowsFormsSynchronizationContext or DispatcherSynchronizationContext. When the await completes, it schedules the continuation of the method onto that context (in this case, onto the SynchronizationContext).
For WinForms, the syncctx uses Control.BeginInvoke, which will post a Win32 message which is handled by the WinProc.
For WPF, the syncctx posts to its Dispatcher, which adds the callback to the dispatch queue. This queue is also processed by a Win32 WinProc loop.
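To make that concrete, here is a rough, hand-written sketch of what the machinery does conceptually (the real compiler-generated state machine is more involved; loadingBar and AsyncLoadData are the names from the question):
// Conceptual sketch only - not the actual generated code.
SynchronizationContext capturedContext = SynchronizationContext.Current; // captured at the await
Task dataTask = AsyncLoadData();

dataTask.ContinueWith(_ =>
{
    if (capturedContext != null)
        capturedContext.Post(state => loadingBar.Hide(), null); // posts back to the UI message loop
    else
        loadingBar.Hide(); // no context: the continuation runs on a ThreadPool thread
});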
Alex Davies wrote an excellent book on this, it is called "Async in C# 5", I strongly recommend reading it.
I can't speak to the low-level details behind this, but at a high level the CLR will create something like this:
void __buttoncliked_remaining_code_1(...) {
    loadingBar.Hide();
}
So, a specific event will be triggered indicating that the async job has completed.
Then __buttoncliked_remaining_code_1() will be executed, synchronously, just like any regular C# function.
The CLR can use any thread for that, but most likely it will reuse the one that encountered the await keyword, which in your case would be the GUI thread.

Test run stops when doing several multi-threaded tests in a row

I've got a class with a static ConcurrentQueue. One class receives messages and puts them on the queue, whilst a different thread reads them from that queue and processes them one at a time. That method is stopped with a CancellationToken.
The method that empties the queue looks like this:
public async Task HandleEventsFromQueueAsync(CancellationToken ct, int pollDelay = 25)
{
    while (true)
    {
        if (ct.IsCancellationRequested)
        {
            return;
        }
        if (messageQueue.TryDequeue(out ConsumeContext newMessage))
        {
            handler.Handle(newMessage);
        }
        try
        {
            await Task.Delay(pollDelay, ct).ConfigureAwait(true);
        }
        catch (TaskCanceledException)
        {
            return;
        }
    }
}
My testing methods look like this:
CancellationToken ct = source.Token;
Thread thread = new Thread(async () => await sut.HandleEventsFromQueueAsync(ct));
thread.Start();
EventListener.messageQueue.Enqueue(message1);
EventListener.messageQueue.Enqueue(message2);
await Task.Delay(1000);
source.Cancel(false);
mockedHandler.Verify(x => x.Handle(It.IsAny<ConsumeContext>()), Times.Exactly(2));
So I start my dequeueing method in its own thread, with a fresh cancellation token. Then I enqueue a couple of messages, give the process a second to handle them, and then use source.Cancel(false) to put an end to the thread and make the method return. Finally, I check that the handler was called the right number of times. Of course, I'm testing this in a couple of variations, with different message types and different points at which I cancel the dequeueing method.
The issue is that when I run any of my tests individually, they all succeed. But when I try to run them as a group, Visual Studio does not run every test. There's no error message, and the tests it does run succeed fine, but the run just stops after the second test.
I have no idea why this happens. My tests are all identical in structure, and I'm stopping the dequeueing thread properly every time.
What could compel Visual Studio to stop a test run, without throwing any kind of error?
You are passing an async lambda to the Thread constructor. The Thread constructor doesn't understand async delegates (it does not accept a Func&lt;Task&gt; argument), so you end up with an async void lambda. Async void methods should be avoided for anything that is not an event handler. What happens in your case is that the explicitly created thread terminates when the code hits the first await, and the rest of the body runs on ThreadPool threads. It seems the code never fails with an exception, otherwise the process would crash (this is the default behavior of async void methods).
Suggestions:
Use a Task instead of a Thread. This way you'll have something to await before exiting the test.
CancellationToken ct = source.Token;
Task consumerTask = Task.Run(() => sut.HandleEventsFromQueueAsync(ct));
EventListener.messageQueue.Enqueue(message1);
EventListener.messageQueue.Enqueue(message2);
await Task.Delay(1000);
source.Cancel(false);
await consumerTask; // Wait for the task to complete
mockedHandler.Verify(x => x.Handle(It.IsAny<ConsumeContext>()), Times.Exactly(2));
Consider using a BlockingCollection or an asynchronous queue like a Channel instead of a ConcurrentQueue. Polling is an awkward and inefficient technique. With a blocking or async queue you won't be obliged to loop while waiting for new messages to arrive; you'll be able to enter a waiting state and be notified instantly when a new message arrives, as in the sketch below.
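A minimal sketch of the Channel-based variant, assuming the same handler and ConsumeContext types from the question (the messageChannel field name is illustrative):
using System.Threading.Channels;

private readonly Channel<ConsumeContext> messageChannel = Channel.CreateUnbounded<ConsumeContext>();

// Producer side: called whenever a message arrives from the broker.
public void EnqueueMessage(ConsumeContext message) => messageChannel.Writer.TryWrite(message);

// Consumer side: no polling delay; the await completes as soon as a message is available.
// Cancellation surfaces as an OperationCanceledException from WaitToReadAsync.
public async Task HandleEventsFromQueueAsync(CancellationToken ct)
{
    while (await messageChannel.Reader.WaitToReadAsync(ct).ConfigureAwait(false))
    {
        while (messageChannel.Reader.TryRead(out ConsumeContext newMessage))
        {
            handler.Handle(newMessage);
        }
    }
}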
Configure the awaiting with ConfigureAwait(false). ConfigureAwait(true) is the default and does nothing.
Consider propagating cancellation by throwing an OperationCanceledException. This is the standard way of propagating cancellation in .NET. So instead of:
if (ct.IsCancellationRequested) return;
...it is preferable to do this:
ct.ThrowIfCancellationRequested();
I have solved my own issue. It turns out that the newly created thread threw an exception, and when threads throw exceptions those are ignored, but they still stop the test run. After fixing the issue that caused the exception, the tests work fine.

Background started task does not finish/gets terminated after the first encountered await

In an ASP.NET application, I have an action which when hit, starts a new background task in the following way:
Task.Factory.StartNew(
    async () => await somethingWithCpuAndIo(input),
    CancellationToken.None,
    TaskCreationOptions.DenyChildAttach | TaskCreationOptions.LongRunning,
    TaskScheduler.FromCurrentSynchronizationContext());
I'm not awaiting it; I just want to start it and let it continue doing its work in the background.
Immediately after that, I return a response to the client.
For some reason though, once the background work hits an await on a method call, I can see while debugging that the awaited method completes successfully, but upon returning, execution just stops and never continues below that point.
Interestingly, if I fully await this task (using double await), everything works as expected.
Is this due to the SynchronizationContext? Is the SynchronizationContext disposed/removed the moment I return a response? (The SynchronizationContext is being used inside the method.)
If it is due to that, where exactly does the issue happen?
A) When the Scheduler attempts to assign the work on the given synchronizationContext, it will already be disposed, so nothing will be provided
B) Somewhere down the lines in the method executing, when I return a response to the client, the synchronizationContext is lost, regardless of anything else.
C) Something else entirely?
If it's A), I should be able to fix this by simply doing Thread.Sleep() between scheduling the work and returning a response. (Tried that, it didn't work.)
If it's B) I have no idea how I can resolve this. Help will be appreciated.
As Gabriel Luci has pointed out, it is due to the first awaited incomplete Task returning immediately, but there's a wider point to be made about Task.Factory.StartNew.
Task.Factory.StartNew should not be used with async code, and neither should TaskCreationOptions.LongRunning. TaskCreationOptions.LongRunning is meant for scheduling long-running CPU-bound work. An async method may be logically long-running, but Task.Factory.StartNew only schedules the synchronous part of the work - for an async method, that is the bit before the first await, which is usually very short.
Here is the guidance from David Fowler (Partner Software Architect at Microsoft on the ASP.NET team) on the matter:
https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/86b502e88c752e42f68229afb9f1ac58b9d1fef7/AsyncGuidance.md#avoid-using-taskrun-for-long-running-work-that-blocks-the-thread
See the third bullet:
Don't use TaskCreationOptions.LongRunning with async code as this will
create a new thread which will be destroyed after first await.
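That also explains why the asker's "double await" workaround appears to fix things: Task.Factory.StartNew does not unwrap async delegates, so it hands back a nested task. A hedged sketch, reusing somethingWithCpuAndIo and input from the question:
// Sketch of the nesting behaviour (to be placed inside an async method).
Task<Task> outer = Task.Factory.StartNew(async () => await somethingWithCpuAndIo(input));

await outer;            // only waits for the synchronous part, up to the first await
await await outer;      // the "double await": waits for the whole async operation
await outer.Unwrap();   // equivalent to the double await
await Task.Run(() => somethingWithCpuAndIo(input)); // Task.Run unwraps the inner task for you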
Your comments made your intentions a little clearer. What I think you want to do is:
Start the task and don't wait for it. Return a response to the client before the background task completes.
Make sure that the somethingWithCpuAndIo method has access to the request context.
But,
A different thread won't be in the same context, and
As soon as the first await is hit, a Task is returned, which also means that Task.Factory.StartNew returns and execution of the calling method continues. That means that the response is returned to the client. When the request completes, the context is disposed.
So you can't really do both things you want. There are a couple of ways to work around this:
First, you might be able to not start it on a different thread at all. This depends on when somethingWithCpuAndIo needs access to the context. If it only needs the context before the first await, then you can do something like this:
public IActionResult MyAction(SomeThing input) {
    somethingWithCpuAndIo(input); // no await
    return Ok();
}

private async Task somethingWithCpuAndIo(SomeThing input) {
    // You can read from the request context here
    await SomeIoRequest().ConfigureAwait(false);
    // Everything after here will run on a ThreadPool thread with no access
    // to the request context.
}
Every asynchronous method starts running synchronously. The magic happens when await is given an incomplete Task. So in this example, somethingWithCpuAndIo will start executing on the same thread, in the request context. When it hits the await, a Task is returned to MyAction, but it is not awaited, so MyAction completes executing and a response gets sent to the client before SomeIoRequest() has completed. But ConfigureAwait(false) tells it that we don't need to resume execution in the same context, so somethingWithCpuAndIo resumes execution on a ThreadPool thread.
But that will only help you if you don't need the context after the first await in somethingWithCpuAndIo.
Your best option is to still execute on a different thread, but pass the values you need from the context into somethingWithCpuAndIo.
But also, use Task.Run instead of Task.Factory.StartNew for reasons described in detail here.
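A minimal sketch of that approach, under assumptions: the two-argument overload of somethingWithCpuAndIo is hypothetical, and the controller plumbing (Ok(), HttpContext.Current) mirrors the mixed snippets above, so adjust it to your framework version:
public IActionResult MyAction(SomeThing input) {
    // Copy whatever you need out of the request context while the request is still alive.
    string userName = HttpContext.Current.User?.Identity?.Name;

    // Fire and forget on a ThreadPool thread; the background work only receives plain values,
    // so it no longer matters that the request context is disposed after the response is sent.
    Task.Run(() => somethingWithCpuAndIo(input, userName));

    return Ok();
}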
Update: This can very likely cause unpredictable results, but you can also try passing a reference to HttpContext.Current to the thread and setting HttpContext.Current in the new thread, like this:
var ctx = HttpContext.Current;
Task.Run(async () => {
    HttpContext.Current = ctx;
    await SomeIoRequest();
});
However, it all depends on how you are using the context. HttpContext itself doesn't implement IDisposable, so it can't be disposed itself, and the garbage collector won't get rid of it as long as you're holding a reference to it. But the context isn't designed to live longer than the request, so after the response is returned to the client, many parts of the context may be disposed or otherwise unavailable. Test it out and see what explodes. But even if nothing explodes right now, you (or someone else) might come back to that code later, try to use something else in the context, and get really confused when it blows up. It could make for some difficult-to-debug scenarios.

'await' does not return, when my Task is started from a custom TaskScheduler

Background:
I have a "Messenger" class. It sends messages. But due to limitations, let's say it can only send - at most - 5 messages at a time.
I have a WPF application which queues messages as needed, and waits for the queued message to be handled before continuing. Due to the asynchronous nature of the application, any number of messages could be awaited at any given time.
Current Implementation:
To accomplish this, I've implemented a Task<Result> SendMessage(Message message) API within my messaging class. Internal to the messaging class is a custom TaskScheduler (the LimitedConcurrencyTaskScheduler from MSDN), with its concurrency level set to 5. In this way, I would expect that no matter how many messages are queued, only 5 will be sent out at a time, and my client application will patiently wait until its respective message has been handled.
Problem:
When I await the SendMessage method, I can see via the debugger that the message was completed and the result returned, but my code never executes beyond the awaited method call!
Are there some special considerations that need to be made when awaiting a Task that was scheduled using a different TaskScheduler?
Snipped Code:
From my client/consuming function:
public async Task Frobulate()
{
    Message myMessage = new Message(x, y, z);
    await messenger.SendMessage(myMessage);
    //Code down here never executes!
}
From my messenger class:
private TaskScheduler _messengerTaskScheduler = new LimitedConcurrencyLevelTaskScheduler(5);
private TaskFactory _messengerTaskFactory = new TaskFactory(_messengerTaskScheduler);

public Task<Result> SendMessage(Message message)
{
    //My debugger has verified that "InternalSendMessage" has completed,
    //but the caller's continuation appears to never execute
    return _messengerTaskFactory.StartNew(() => InternalSendMessage(message));
}
Update:
The 'freeze' does not actually appear to be caused by my custom TaskScheduler; when I queue up the Task with the default TaskFactory, the same behavior occurs! There must be something else happening at a more fundamental level, likely due to my own stupidity.
Based on the comments, you probably have a deadlock because you're blocking on async code.
When using async, whenever there are thread restrictions on the SynchronizationContext or TaskScheduler and the code blocks using Task.Result or Task.Wait, there's a possibility of a deadlock. The asynchronous operation needs a thread to finish execution, which it can't get because the SynchronizationContext (or the TaskScheduler, in your case) is waiting for that same operation to complete before allowing "new" ones to run.
For a deeper dive, see Stephen Cleary's blog post: Don't Block on Async Code
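A hedged illustration of that pattern (not the asker's actual code): if somewhere up the call chain the returned task is blocked on with .Result or .Wait() from a thread the continuation needs, the operation can never finish.
// Anti-pattern: blocking on an async operation from a thread that the operation's
// continuation needs (e.g. a UI thread, or one of the five threads of a saturated
// LimitedConcurrencyLevelTaskScheduler).
public Result SendAndBlock(Message message)
{
    Task<Result> task = messenger.SendMessage(message);
    return task.Result; // blocks here; the continuation never gets a thread -> deadlock
}

// Fix: stay asynchronous all the way up and await instead of blocking.
public async Task<Result> SendAndAwaitAsync(Message message)
{
    return await messenger.SendMessage(message);
}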

Preventing task from running on certain thread

I have been struggling a bit with some async await stuff. I am using RabbitMQ for sending/receiving messages between some programs.
As a bit of background, the RabbitMQ client uses 3 or so threads that I can see: a connection thread and two heartbeat threads. Whenever a message is received via TCP, the connection thread handles it and calls a callback which I have supplied via an interface. The documentation says that it is best to avoid doing lots of work during this call, since it's done on the same thread as the connection and things need to continue on. They supply a QueueingBasicConsumer which has a blocking 'Dequeue' method which is used to wait for a message to be received.
I wanted my consumers to be able to actually release their thread context during this waiting time so somebody else could do some work, so I decided to use async/await tasks. I wrote an AwaitableBasicConsumer class which uses TaskCompletionSources in the following fashion:
I have an awaitable Dequeue method:
public Task<RabbitMQ.Client.Events.BasicDeliverEventArgs> DequeueAsync(CancellationToken cancellationToken)
{
    //we are enqueueing a TCS. This is a "read"
    rwLock.EnterReadLock();
    try
    {
        TaskCompletionSource<RabbitMQ.Client.Events.BasicDeliverEventArgs> tcs = new TaskCompletionSource<RabbitMQ.Client.Events.BasicDeliverEventArgs>();

        //if we are cancelled before we finish, this will cause the tcs to become cancelled
        cancellationToken.Register(() =>
        {
            tcs.TrySetCanceled();
        });

        //if there is something in the undelivered queue, the task will be immediately completed
        //otherwise, we queue the task into deliveryTCS
        if (!TryDeliverUndelivered(tcs))
            deliveryTCS.Enqueue(tcs);

        return tcs.Task;
    }
    finally
    {
        rwLock.ExitReadLock();
    }
}
The callback which the RabbitMQ client calls fulfills the tasks. This is called from the context of the AMQP connection thread:
public void HandleBasicDeliver(string consumerTag, ulong deliveryTag, bool redelivered, string exchange, string routingKey, RabbitMQ.Client.IBasicProperties properties, byte[] body)
{
    //we want nothing added while we remove. We also block until everybody is done.
    rwLock.EnterWriteLock();
    try
    {
        RabbitMQ.Client.Events.BasicDeliverEventArgs e = new RabbitMQ.Client.Events.BasicDeliverEventArgs(consumerTag, deliveryTag, redelivered, exchange, routingKey, properties, body);
        bool sent = false;
        TaskCompletionSource<RabbitMQ.Client.Events.BasicDeliverEventArgs> tcs;
        while (deliveryTCS.TryDequeue(out tcs))
        {
            //once we manage to actually set somebody's result, we are done with handling this
            if (tcs.TrySetResult(e))
            {
                sent = true;
                break;
            }
        }

        //if nothing was sent, we queue up what we got so that somebody can get it later.
        /**
         * Without the rwlock, this logic would cause concurrency problems in the case where, after the while block
         * completes without sending, somebody enqueues themselves. They would get the next message, and the person
         * who enqueues after them would get the message received now. Locking prevents that from happening since
         * nobody can add to the queue while we are doing our thing here.
         */
        if (!sent)
        {
            undelivered.Enqueue(e);
        }
    }
    finally
    {
        rwLock.ExitWriteLock();
    }
}
rwLock is a ReaderWriterLockSlim. The two queues (deliveryTCS and undelivered) are ConcurrentQueues.
The problem:
Every once in a while, the method that awaits the dequeue method throws an exception. This would not normally be an issue, since that method is also async and so it enters the "Exception" completion state that tasks enter. The problem comes in the situation where the task that calls DequeueAsync is resumed after the await on the AMQP connection thread that the RabbitMQ client creates. Normally I have seen tasks resume onto the main thread or one of the worker threads floating around. However, when it resumes onto the AMQP thread and an exception is thrown, everything stalls. The task does not enter its "Exception" state, and the AMQP connection thread is left reporting that it is still executing the method where the exception occurred.
My main confusion here is why this doesn't work:
var task = c.RunAsync(); //<-- This method awaits the DequeueAsync and throws an exception afterwards
ConsumerTaskState state = new ConsumerTaskState()
{
    Connection = connection,
    CancellationToken = cancellationToken
};

//if there is a problem, we execute our faulted method
//PROBLEM: If the task fails when it's resumed onto the AMQP thread, this method is never called
task.ContinueWith(this.OnFaulted, state, TaskContinuationOptions.OnlyOnFaulted);
Here is the RunAsync method, set up for the test:
public async Task RunAsync()
{
    using (var channel = this.Connection.CreateModel())
    {
        ...
        AwaitableBasicConsumer consumer = new AwaitableBasicConsumer(channel);
        var result = consumer.DequeueAsync(this.CancellationToken);
        //wait until we find something to eat
        await result;
        throw new NotImplementedException(); //<-- the test exception. Normally this causes OnFaulted to be called, but sometimes, it stalls
        ...
    } //<-- This is where the debugger says the thread is sitting at when I find it in the stalled state
}
Reading what I have written, I see that I may not have explained my problem very well. If clarification is needed, just ask.
My solutions that I have come up with are as follows:
Remove all Async/Await code and just use straight up threads and block. Performance will be decreased, but at least it won't stall sometimes
Somehow exempt the AMQP threads from being used for resuming tasks. I assume that they were sleeping or something and then the default TaskScheduler decided to use them. If I could find a way to tell the task scheduler that those threads are off limits, that would be great.
Does anyone have an explanation for why this is happening or any suggestions to solving this? Right now I am removing the async code just so that the program is reliable, but I really want to understand what is going on here.
I first recommend that you read my async intro, which explains in precise terms how await will capture a context and use that to resume execution. In short, it will capture the current SynchronizationContext (or the current TaskScheduler if SynchronizationContext.Current is null).
The other important detail is that async continuations are scheduled with TaskContinuationOptions.ExecuteSynchronously (as #svick pointed out in a comment). I have a blog post about this but AFAIK it is not officially documented anywhere. This detail does make writing an async producer/consumer queue difficult.
The reason await isn't "switching back to the original context" is (probably) because the RabbitMQ threads don't have a SynchronizationContext or TaskScheduler - thus, the continuation is executed directly when you call TrySetResult because those threads look just like regular thread pool threads.
BTW, reading through your code, I suspect your use of a reader/writer lock and concurrent queues is incorrect. I can't be sure without seeing the whole code, but that's my impression.
I strongly recommend you use an existing async queue and build a consumer around that (in other words, let someone else do the hard part :). The BufferBlock<T> type in TPL Dataflow can act as an async queue; that would be my first recommendation if you have Dataflow available on your platform. Otherwise, I have an AsyncProducerConsumerQueue type in my AsyncEx library, or you could write your own (as I describe on my blog).
Here's an example using BufferBlock<T>:
private readonly BufferBlock<RabbitMQ.Client.Events.BasicDeliverEventArgs> _queue = new BufferBlock<RabbitMQ.Client.Events.BasicDeliverEventArgs>();
public void HandleBasicDeliver(string consumerTag, ulong deliveryTag, bool redelivered, string exchange, string routingKey, RabbitMQ.Client.IBasicProperties properties, byte[] body)
{
RabbitMQ.Client.Events.BasicDeliverEventArgs e = new RabbitMQ.Client.Events.BasicDeliverEventArgs(consumerTag, deliveryTag, redelivered, exchange, routingKey, properties, body);
_queue.Post(e);
}
public Task<RabbitMQ.Client.Events.BasicDeliverEventArgs> DequeueAsync(CancellationToken cancellationToken)
{
return _queue.ReceiveAsync(cancellationToken);
}
In this example, I'm keeping your DequeueAsync API. However, once you start using TPL Dataflow, consider using it elsewhere as well. When you need a queue like this, it's common to find other parts of your code that would also benefit from a dataflow approach. E.g., instead of having a bunch of methods calling DequeueAsync, you could link your BufferBlock to an ActionBlock.
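For example, a hedged sketch of what that linking could look like (HandleMessage is a hypothetical processing method, not part of the original code):
// Each delivered message flows straight into the handler; no explicit dequeue loop needed.
var handlerBlock = new ActionBlock<RabbitMQ.Client.Events.BasicDeliverEventArgs>(
    e => HandleMessage(e));

_queue.LinkTo(handlerBlock, new DataflowLinkOptions { PropagateCompletion = true });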
