I need to keep track of a task and potentially queue up another task after some delay, so the way I'm thinking of doing it looks something like this:
private Task lastTask;
public void DoSomeTask()
{
if (lastTask == null)
{
lastTask = Task.FromResult(false);
}
lastTask = lastTask.ContinueWith(t =>
{
// do some task
}).ContinueWith(t => Task.Delay(250).Wait());
}
My question is: if I do something like this, creating potentially long chains of tasks, will the older tasks be disposed, or will they end up sticking around forever because ContinueWith takes the last task as a parameter (so it's a closure)? If so, how can I chain tasks while avoiding this problem?
Is there a better way to do this?
Task.Delay(250).Wait()
You know you're doing something wrong when you use Wait in code you're trying to make asynchronous. That's one wasted thread doing nothing.
The following would be much better:
lastTask = lastTask.ContinueWith(t =>
{
// do some task
}).ContinueWith(t => Task.Delay(250)).Unwrap();
ContinueWith returns a Task<Task>, and the Unwrap call turns that into a Task which will complete when the inner task does.
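To make the difference concrete, here is a small illustration (someTask is just a stand-in for any existing task):
Task someTask = Task.FromResult(false);
// Without Unwrap: the outer task completes as soon as Task.Delay(250) is *created*, not when it finishes.
Task<Task> wrapped = someTask.ContinueWith(t => Task.Delay(250));
// With Unwrap: this task completes only after the 250 ms delay has actually elapsed.
Task unwrapped = wrapped.Unwrap();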
Now, to answer your question, let's take a look at what the compiler generates:
public void DoSomeTask()
{
if (this.lastTask == null)
this.lastTask = (Task) Task.FromResult<bool>(false);
// ISSUE: method pointer
// ISSUE: method pointer
this.lastTask = this.lastTask
.ContinueWith(
Program.<>c.<>9__2_0
?? (Program.<>c.<>9__2_0 = new Action<Task>((object) Program.<>c.<>9, __methodptr(<DoSomeTask>b__2_0))))
.ContinueWith<Task>(
Program.<>c.<>9__2_1
?? (Program.<>c.<>9__2_1 = new Func<Task, Task>((object) Program.<>c.<>9, __methodptr(<DoSomeTask>b__2_1))))
.Unwrap();
}
[CompilerGenerated]
[Serializable]
private sealed class <>c
{
public static readonly Program.<>c <>9;
public static Action<Task> <>9__2_0;
public static Func<Task, Task> <>9__2_1;
static <>c()
{
Program.<>c.<>9 = new Program.<>c();
}
public <>c()
{
base.\u002Ector();
}
internal void <DoSomeTask>b__2_0(Task t)
{
}
internal Task <DoSomeTask>b__2_1(Task t)
{
return Task.Delay(250);
}
}
This was decompiled with dotPeek in "show me all the guts" mode.
Look at this part:
.ContinueWith<Task>(
Program.<>c.<>9__2_1
?? (Program.<>c.<>9__2_1 = new Func<Task, Task>((object) Program.<>c.<>9, __methodptr(<DoSomeTask>b__2_1))))
The ContinueWith function is given a singleton delegate. So, there's no closing over any variable there.
Now, there's this function:
internal Task <DoSomeTask>b__2_1(Task t)
{
return Task.Delay(250);
}
The t here is a reference to the previous task. Notice something? It's never used. The JIT will mark this local as unreachable, and the GC will be able to collect it. With optimizations enabled, the JIT will aggressively mark locals that are eligible for collection, even to the point that an instance method can still be executing while the instance is being collected by the GC, if said instance method doesn't reference this in the code left to execute.
Now, one last thing: there's the m_parent field in the Task class, which is not good for your scenario. But as long as you're not using TaskCreationOptions.AttachedToParent, you should be fine. You could always add the DenyChildAttach flag for extra safety and self-documentation.
Here's the function which deals with that:
internal static Task InternalCurrentIfAttached(TaskCreationOptions creationOptions)
{
return (creationOptions & TaskCreationOptions.AttachedToParent) != 0 ? InternalCurrent : null;
}
So, you should be safe here. If you want to be sure, run a memory profiler on a long chain, and see for yourself.
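For reference, here is a sketch of the chain from the question with the DenyChildAttach flag mentioned above added to each continuation:
lastTask = lastTask.ContinueWith(t =>
{
// do some task
}, TaskContinuationOptions.DenyChildAttach)
.ContinueWith(t => Task.Delay(250), TaskContinuationOptions.DenyChildAttach)
.Unwrap();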
if I do something like this, creating potentially long chains of tasks, will the older tasks be disposed
Tasks do not require explicit disposal, as they don't contain unmanaged resources.
will they end up sticking around forever because ContinueWith takes the last task as a parameter (so it's a closure)
It's not a closure. A closure is an anonymous method using a variable from outside the scope of that anonymous method in its body. You're not doing that, so you're not closing over it. Each Task does however have a field where it keeps track of its parent, so the managed Task object will still be accessible if you're using this pattern.
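For contrast, the following hypothetical snippet would involve a closure, because the lambda captures the local variable previous from the enclosing method:
Task previous = lastTask;
lastTask = Task.Run(() => Console.WriteLine(previous.Status)); // 'previous' is captured by the lambda, keeping the old task reachable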
Take a look at the source code of the ContinuationTaskFromTask class. It has the following code:
internal override void InnerInvoke()
{
// Get and null out the antecedent. This is crucial to avoid a memory
// leak with long chains of continuations.
var antecedent = m_antecedent;
Contract.Assert(antecedent != null,
"No antecedent was set for the ContinuationTaskFromTask.");
m_antecedent = null;
m_antecedent is the field that holds a reference to the antecedent task. The developers here explicitly set it to null (after it is no longer needed) to make sure that there is no memory leak with long chains of continuations, which I guess is your concern.
Related
The question asked here is the same as the one here, and is aimed at creating a definitive solution to it.
The most accurate answer is by Stephen Toub himself in this issue that is exactly about this question. The "recommended code" is the following one:
public static ValueTask AsValueTask<T>(this ValueTask<T> valueTask)
{
if (valueTask.IsCompletedSuccessfully)
{
valueTask.GetResult();
return default;
}
return new ValueTask(valueTask.AsTask());
}
This answer is not up-to-date - a ValueTask doesn't expose a GetResult() (only a Result property) - and THE question is:
Do we need to "pull" the Result out of the ValueTask (to "release" the IValueTaskSource that may operate under this ValueTask)?
If yes:
is it the .GetAwaiter() call that is missing above?
OR is a fake read of the property guaranteed to work: var fake = valueTask.Result;? Always? (I'm afraid of dead-code elimination.)
If not, is a straight implementation like the following one enough (and optimal)?
public static ValueTask AsNonGenericValueTask<T>( in this ValueTask<T> valueTask )
{
return valueTask.IsCompletedSuccessfully ? default : new ValueTask( valueTask.AsTask() );
}
What's missing in that code is .GetAwaiter():
public static ValueTask AsValueTask<T>(this ValueTask<T> valueTask)
{
if (valueTask.IsCompletedSuccessfully)
{
valueTask.GetAwaiter().GetResult();
return default;
}
return new ValueTask(valueTask.AsTask());
}
You're partially right in that you don't care about the result. But you might care about a thrown exception or cancellation that you'll miss if you don't query the result.
Or you can write it like this:
public static async ValueTask AsValueTask<T>(this ValueTask<T> valueTask)
=> await valueTask;
You can use either:
valueTask.GetAwaiter().GetResult();
...or:
_ = valueTask.Result;
Both of those delegate to the underlying IValueTaskSource<T>.GetResult method, assuming that the ValueTask<T> is backed by an IValueTaskSource<T>. Using the shorter (second) approach should be slightly more efficient, since it involves one less method invocation.
You could also omit getting the result entirely. There is no requirement that a ValueTask<T> must be awaited at least once, or that its result must be retrieved at least once. It is entirely valid to be handed a ValueTask<T> and then forget about it. The documented restriction is:
A ValueTask<TResult> instance may only be awaited once, [...]
It's a "may only", not a "must".
It is still a good idea to get the result though. By retrieving the result you are signaling that the ValueTask<TResult> has been consumed, and so the underlying IValueTaskSource<T> can be reused. Reusing IValueTaskSource<T> instances makes ValueTask<T>-based implementations more efficient, because they allocate memory less frequently. To get an idea, take a look at the internal System.Threading.Channels.AsyncOperation<TResult> class. This class implements the IValueTaskSource<TResult> interface. Here is how the GetResult is implemented:
/// <summary>Gets the result of the operation.</summary>
/// <param name="token">The token that must match <see cref="_currentId"/>.</param>
public TResult GetResult(short token)
{
if (_currentId != token)
{
ThrowIncorrectCurrentIdException();
}
if (!IsCompleted)
{
ThrowIncompleteOperationException();
}
ExceptionDispatchInfo? error = _error;
TResult? result = _result;
_currentId++;
if (_pooled)
{
Volatile.Write(ref _continuation, s_availableSentinel); // only after fetching all needed data
}
error?.Throw();
return result!;
}
Setting the private field Action<object> _continuation to the static readonly value s_availableSentinel allows the AsyncOperation<TResult> to be reused by a subsequent asynchronous operation (like the ChannelReader.ReadAsync for example). Otherwise the next asynchronous operation will allocate a new AsyncOperation<TResult> instance.
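For example, a small consumption sketch using System.Threading.Channels: each ReadAsync hands back a ValueTask<int>, and awaiting it both retrieves the value and signals that the value task has been consumed.
var channel = Channel.CreateUnbounded<int>();
channel.Writer.TryWrite(42);
// Awaiting the ValueTask<int> consumes the result; when a read has to wait
// asynchronously, this is what lets the channel reuse its pooled
// IValueTaskSource-backed operation instead of allocating a new one.
int item = await channel.Reader.ReadAsync();
Console.WriteLine(item);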
I have already implemented a polling worker based on a Timer. As an example, think of TryConnect on the client side -- I call TryConnect and it connects eventually, after some time. It handles multiple threads: if connecting is already in progress, all subsequent TryConnect calls return immediately without any extra action. Internally I simply create a timer and try to connect at intervals -- if the connection fails, I try again. And so on.
A small downside is that it is a "fire & forget" pattern, and now I would like to combine it with the "async/await" pattern, i.e. instead of calling:
client.TryConnect(); // returns immediately
// cannot tell if I am connected at this point
I would like to call it like this:
await client.TryConnect();
// I am connected for sure
How can I change my implementation to support "async/await"? I was thinking about creating an empty Task (just for awaiting) and then completing it with FromResult, but that method creates a new task; it does not complete a given instance.
For the record, current implementation looks like this (just a sketch of the code):
public void TryConnect()
{
if (this.timer == null)
{
this.timer = new Timer(_ => tryConnect(),null,-1,-1);
this.timer.Change(0,-1);
}
}
private void tryConnect()
{
if (/*connection failed*/)
this.timer.Change(interval,-1);
else
this.timer = null;
}
Lacking a good Minimal, Complete, and Verifiable code example it's impossible to offer any specific suggestions. Given what you've written, it's possible what you're looking for is TaskCompletionSource. For example:
private TaskCompletionSource<bool> _tcs;
public async Task TryConnect()
{
if (/* no connection exists */)
{
if (_tcs == null)
{
this.timer = new Timer(_ => tryConnect(),null,-1,-1);
this.timer.Change(0,-1);
_tcs = new TaskCompletionSource<bool>();
}
await _tcs.Task;
}
}
private void tryConnect()
{
if (/*connection failed*/)
this.timer.Change(interval,-1);
else
{
_tcs.SetResult(true);
_tcs = null;
this.timer = null;
}
}
Notes:
Your original code example would retry the connection logic if TryConnect() is called again after a connection is made. I expect what you really want is to also check for the presence of a valid connection, so I modified the above slightly to check for that. You can of course remove that part if you actually always want to attempt a new connection, even if one already exists.
The code sets _tcs to null immediately after setting its result. Note though that any code awaiting or otherwise having stored the Task value for the _tcs object will implicitly reference the current _tcs object, so it's not a problem to discard the field's reference here.
There is no non-generic TaskCompletionSource (one was added later, in .NET 5). So for scenarios where you just need a Task, you can use the generic type with a placeholder type, like bool as I've done here, or object, or whatever. I could've called SetResult(false) just as well as SetResult(true), and it wouldn't matter in this example. All that matters is that the Task is completed, not what value is returned.
The above uses the async keyword to make TryConnect() an async method. IMHO this is a bit more readable, but of course does incur slight overhead in the extra Task to represent the method's operation. If you prefer, you can do the same thing directly without the async method:
public Task TryConnect()
{
if (/* no connection exists */)
{
if (_tcs == null)
{
this.timer = new Timer(_ => tryConnect(),null,-1,-1);
this.timer.Change(0,-1);
_tcs = new TaskCompletionSource<bool>();
}
return _tcs.Task;
}
return Task.CompletedTask;
}
I have an async method that fetches some data from a database. This operation is fairly expensive, and takes a long time to complete. As a result, I'd like to cache the method's return value. However, it's possible that the async method will be called multiple times before its initial execution has a chance to return and save its result to the cache, resulting in multiple calls to this expensive operation.
To avoid this, I'm currently reusing a Task, like so:
public class DataAccess
{
private Task<MyData> _getDataTask;
public async Task<MyData> GetDataAsync()
{
if (_getDataTask == null)
{
_getDataTask = Task.Run(() => synchronousDataAccessMethod());
}
return await _getDataTask;
}
}
My thought is that the initial call to GetDataAsync will kick off the synchronousDataAccessMethod method in a Task, and any subsequent calls to this method before the Task has completed will simply await the already running Task, automatically avoiding calling synchronousDataAccessMethod more than once. Calls made to GetDataAsync after the private Task has completed will cause the Task to be awaited, which will immediately return the data from its initial execution.
This seems to be working, but I'm having some strange performance issues that I suspect may be tied to this approach. Specifically, awaiting _getDataTask after it has completed takes several seconds (and locks the UI thread), even though synchronousDataAccessMethod is not called.
Am I misusing async/await? Is there a hidden gotcha that I'm not seeing? Is there a better way to accomplish the desired behavior?
EDIT
Here's how I call this method:
var result = (await myDataAccessObject.GetDataAsync()).ToList();
Maybe it has something to do with the fact that the result is not immediately enumerated?
If you want to await it further up the call stack, I think you want this:
public class DataAccess
{
private Task<MyData> _getDataTask;
private readonly object lockObj = new Object();
public async Task<MyData> GetDataAsync()
{
lock(lockObj)
{
if (_getDataTask == null)
{
_getDataTask = Task.Run(() => synchronousDataAccessMethod());
}
}
return await _getDataTask;
}
}
Your original code has the potential for this happening:
Thread 1 sees that _getDataTask == null, and begins constructing the task
Thread 2 sees that _getDataTask == null, and begins constructing the task
Thread 1 finishes constructing the task, which starts, and Thread 1 waits on that task
Thread 2 finishes constructing a task, which starts, and Thread 2 waits on that task
You end up with two instances of the task running.
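As a side note, here is a sketch of the same once-only behavior using Lazy<Task<MyData>> instead of an explicit lock; Lazy<T> supplies the thread safety, so concurrent first callers still trigger synchronousDataAccessMethod only once (same types as in the question):
public class DataAccess
{
    private readonly Lazy<Task<MyData>> _getDataTask;
    public DataAccess()
    {
        // The factory runs at most once, even under concurrent first calls.
        _getDataTask = new Lazy<Task<MyData>>(
            () => Task.Run(() => synchronousDataAccessMethod()));
    }
    public Task<MyData> GetDataAsync()
    {
        // Every caller gets (and can await) the same cached task.
        return _getDataTask.Value;
    }
}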
Use the lock statement to prevent multiple calls to the database query section. Lock will make it thread-safe, so that once the result has been cached all the other calls will use it instead of going to the database for fulfillment.
lock(StaticObject) // Create a static object so there is only one value defined for this routine
{
if(_getDataTask == null)
{
// Get data code here
}
return _getDataTask;
}
Please rewrite your function as:
public Task<MyData> GetDataAsync()
{
if (_getDataTask == null)
{
_getDataTask = Task.Run(() => synchronousDataAccessMethod());
}
return _getDataTask;
}
This should not change anything about what can be done with this function - you can still await the returned task!
Please tell me if that changes anything.
A bit late to answer this, but there is an open-source library called LazyCache that will do this for you in two lines of code, and it was recently updated to handle caching Tasks for just this sort of situation. It is also available on NuGet.
Example:
Func<Task<MyData>> cacheableAsyncFunc = () => myDataAccessObject.GetDataAsync();
var cachedData = await cache.GetOrAddAsync("myDataAccessObject.GetData", cacheableAsyncFunc);
return cachedData;
// Or instead just do it all in one line if you prefer
// return await cache.GetOrAddAsync("myDataAccessObject.GetData", myDataAccessObject.GetDataAsync);
It has built-in locking by default, so the cacheable method will only execute once per cache miss, and it uses a lambda so you can do "get or add" in one go. It defaults to 20 minutes sliding expiration, but you can set whatever caching policy you like on it.
More info on caching tasks is in the API docs, and you may find the sample app demonstrating caching tasks useful.
(Disclaimer: I am the author of LazyCache)
I'm calling a service over HTTP (ultimately using the HttpClient.SendAsync method) from within my code. This code is then called into from a WebAPI controller action. Mostly it works fine (tests pass), but when I deploy to, say, IIS, I experience a deadlock, because the caller of the async method has been blocked and the continuation cannot proceed on that thread until it finishes (which it won't).
While I could make most of my methods async, I don't feel as if I have a basic understanding of when I must do this.
For example, let's say I did make most of my methods async (since they ultimately call other async service methods), how would I then invoke the first async method of my program if I built, say, a message loop where I want some control of the degree of parallelism?
Since HttpClient doesn't have any synchronous methods, what can I safely do if I have an abstraction that isn't async-aware? I've read about ConfigureAwait(false) but I don't really understand what it does. It's strange to me that it's applied after the async invocation. To me that feels like a race waiting to happen... however unlikely...
WebAPI example:
public HttpResponseMessage Get()
{
var userContext = contextService.GetUserContext(); // <-- synchronous
return ...
}
// Some IUserContextService implementation
public IUserContext GetUserContext()
{
var httpClient = new HttpClient();
var result = httpClient.GetAsync(...).Result; // <-- I really don't care if this is asynchronous or not
return new HttpUserContext(result);
}
Message loop example:
var mq = new MessageQueue();
// we then run say 8 tasks that do this
for (;;)
{
var m = mq.Get();
var c = GetCommand(m);
c.InvokeAsync().Wait();
m.Delete();
}
When you have a message loop that allows things to happen in parallel and you have asynchronous methods, there's an opportunity to minimize latency. Basically, what I want to accomplish in this instance is to minimize latency and idle time. Though I'm actually unsure how to invoke the command that's associated with the message that arrives off the queue.
To be more specific, if the command invocation needs to do service requests, there's going to be latency in the invocation that could be used to get the next message. Stuff like that. I can totally do this simply by wrapping things up in queues and coordinating this myself, but I'd like to see this work with just some async/await stuff.
While I could make most of my methods async, I don't feel as if I have a basic understanding of when I must do this.
Start at the lowest level. It sounds like you've already got a start, but if you're looking for more at the lowest level, then the rule of thumb is anything I/O-based should be made async (e.g., HttpClient).
Then it's a matter of repeating the async infection. You want to use async methods, so you call them with await. So that method must be async. So all of its callers must use await, so they must also be async, etc.
how would I then invoke the first async method of my program if I built, say, a message loop where I want some control of the degree of parallelism?
It's easiest to put the framework in charge of this. E.g., you can just return a Task<T> from a WebAPI action, and the framework understands that. Similarly, UI applications have a message loop built-in that async will work naturally with.
If you have a situation where the framework doesn't understand Task or have a built-in message loop (usually a Console application or a Win32 service), you can use the AsyncContext type in my AsyncEx library. AsyncContext just installs a "main loop" (that is compatible with async) onto the current thread.
Since HttpClient doesn't have any synchronous methods, what can I safely do if I have an abstraction that isn't async-aware?
The correct approach is to change the abstraction. Do not attempt to block on asynchronous code; I describe that common deadlock scenario in detail on my blog.
You change the abstraction by making it async-friendly. For example, change IUserContext IUserContextService.GetUserContext() to Task<IUserContext> IUserContextService.GetUserContextAsync().
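A sketch of what that async-friendly version might look like, assuming a hypothetical serviceUri placeholder and the HttpUserContext type from the question; callers then await GetUserContextAsync() instead of blocking on Result:
// Some IUserContextService implementation, now async-friendly
public async Task<IUserContext> GetUserContextAsync()
{
    // serviceUri is a placeholder for whatever endpoint the original code was calling
    var httpClient = new HttpClient();
    var result = await httpClient.GetAsync(serviceUri).ConfigureAwait(false);
    return new HttpUserContext(result);
}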
I've read about ConfigureAwait(false) but I don't really understand what it does. It's strange to me that it's applied after the async invocation.
You may find my async intro helpful. I won't say much more about ConfigureAwait in this answer because I think it's not directly applicable to a good solution for this question (but I'm not saying it's bad; it actually should be used unless you can't use it).
Just bear in mind that await is an operator with precedence rules and all that. It feels magical at first, but it's really not so much. This code:
var result = await httpClient.GetAsync(url).ConfigureAwait(false);
is exactly the same as this code:
var asyncOperation = httpClient.GetAsync(url).ConfigureAwait(false);
var result = await asyncOperation;
There are usually no race conditions in async code because - even though the method is asynchronous - it is also sequential. The method can be paused at an await, and it will not be resumed until that await completes.
When you have a message loop that allows things to happen in parallel and you have asynchronous methods, there's an opportunity to minimize latency.
This is the second time you've mentioned a "message loop" "in parallel", but I think what you actually want is to have multiple (asynchronous) consumers working off the same queue, correct? That's easy enough to do with async (note that there is just a single message loop on a single thread in this example; when everything is async, that's usually all you need):
await Task.WhenAll(ConsumerAsync(), ConsumerAsync(), ConsumerAsync());
async Task ConsumerAsync()
{
for (;;) // TODO: consider a CancellationToken for orderly shutdown
{
var m = await mq.ReceiveAsync();
var c = GetCommand(m);
await c.InvokeAsync();
m.Delete();
}
}
// Extension method
public static Task<Message> ReceiveAsync(this MessageQueue mq)
{
return Task<Message>.Factory.FromAsync(mq.BeginReceive, mq.EndReceive, null);
}
You'd probably also be interested in TPL Dataflow. Dataflow is a library that understands and works well with async code, and has nice parallel options built-in.
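As an illustration, here is a hedged sketch of the same consumer built on an ActionBlock from the System.Threading.Tasks.Dataflow package (it assumes the mq.ReceiveAsync extension and GetCommand shown above); the degree of parallelism becomes an explicit option:
// Each message is processed by the block; up to 3 at a time in this sketch.
var block = new ActionBlock<Message>(async m =>
{
    var c = GetCommand(m);
    await c.InvokeAsync();
    m.Delete();
}, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 3 });
// A single pump feeds messages from the queue into the block.
while (true) // TODO: consider a CancellationToken for orderly shutdown
{
    var m = await mq.ReceiveAsync();
    await block.SendAsync(m);
}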
While I appreciate the insight from community members, it's always difficult to express the intent of what I'm trying to do, but it's tremendously helpful to get advice about the circumstances surrounding the problem. With that, I eventually arrived at the following code.
public class AsyncOperatingContext
{
struct Continuation
{
private readonly SendOrPostCallback d;
private readonly object state;
public Continuation(SendOrPostCallback d, object state)
{
this.d = d;
this.state = state;
}
public void Run()
{
d(state);
}
}
class BlockingSynchronizationContext : SynchronizationContext
{
readonly BlockingCollection<Continuation> _workQueue;
public BlockingSynchronizationContext(BlockingCollection<Continuation> workQueue)
{
_workQueue = workQueue;
}
public override void Post(SendOrPostCallback d, object state)
{
_workQueue.TryAdd(new Continuation(d, state));
}
}
/// <summary>
/// Gets the recommended max degree of parallelism. (Your main program message loop could use this value.)
/// </summary>
public static int MaxDegreeOfParallelism { get { return Environment.ProcessorCount; } }
#region Helper methods
/// <summary>
/// Run an async task. This method will block execution (and use the calling thread as a worker thread) until the async task has completed.
/// </summary>
public static T Run<T>(Func<Task<T>> main, int degreeOfParallelism = 1)
{
var asyncOperatingContext = new AsyncOperatingContext();
asyncOperatingContext.DegreeOfParallelism = degreeOfParallelism;
return asyncOperatingContext.RunMain(main);
}
/// <summary>
/// Run an async task. This method will block execution (and use the calling thread as a worker thread) until the async task has completed.
/// </summary>
public static void Run(Func<Task> main, int degreeOfParallelism = 1)
{
var asyncOperatingContext = new AsyncOperatingContext();
asyncOperatingContext.DegreeOfParallelism = degreeOfParallelism;
asyncOperatingContext.RunMain(main);
}
#endregion
private readonly BlockingCollection<Continuation> _workQueue;
public int DegreeOfParallelism { get; set; }
public AsyncOperatingContext()
{
_workQueue = new BlockingCollection<Continuation>();
}
/// <summary>
/// Initialize the current thread's SynchronizationContext so that work is scheduled to run through this AsyncOperatingContext.
/// </summary>
protected void InitializeSynchronizationContext()
{
SynchronizationContext.SetSynchronizationContext(new BlockingSynchronizationContext(_workQueue));
}
protected void RunMessageLoop()
{
while (!_workQueue.IsCompleted)
{
Continuation continuation;
if (_workQueue.TryTake(out continuation, Timeout.Infinite))
{
continuation.Run();
}
}
}
protected T RunMain<T>(Func<Task<T>> main)
{
var degreeOfParallelism = DegreeOfParallelism;
if (!(1 <= degreeOfParallelism && degreeOfParallelism <= 5000)) // sanity check
{
throw new ArgumentOutOfRangeException("DegreeOfParallelism", "DegreeOfParallelism must be between 1 and 5000.");
}
var currentSynchronizationContext = SynchronizationContext.Current;
InitializeSynchronizationContext(); // must set SynchronizationContext before main() task is scheduled
var mainTask = main(); // schedule "main" task
mainTask.ContinueWith(task => _workQueue.CompleteAdding());
// for single threading we don't need worker threads so we don't use any
// otherwise (for increased parallelism) we simply launch X worker threads
if (degreeOfParallelism > 1)
{
for (int i = 1; i < degreeOfParallelism; i++)
{
ThreadPool.QueueUserWorkItem(_ => {
// do we really need to restore the SynchronizationContext here as well?
InitializeSynchronizationContext();
RunMessageLoop();
});
}
}
RunMessageLoop();
SynchronizationContext.SetSynchronizationContext(currentSynchronizationContext); // restore
return mainTask.Result;
}
protected void RunMain(Func<Task> main)
{
// The return value doesn't matter here
RunMain(async () => { await main(); return 0; });
}
}
This class is complete and it does a couple of things that I found difficult to grasp.
As general advice, you should allow the TAP (task-based asynchronous pattern) to propagate through your code. This may imply quite a bit of refactoring (or redesign). Ideally you should be able to break this up into pieces and make progress as you work towards the overall goal of making your program more asynchronous.
Something that's inherently dangerous is to carelessly call asynchronous code in a synchronous fashion, by which I mean invoking the Wait method or the Result property. These can lead to deadlocks. One way to work around something like that is to use the AsyncOperatingContext.Run method. It will use the current thread to run a message loop until the asynchronous call is complete, temporarily swapping out whatever SynchronizationContext is associated with the current thread to do so.
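For example, a usage sketch, using the GetUserContextAsync shape suggested earlier in this thread as a stand-in for any async call you would otherwise have blocked on:
// The calling thread runs the message loop until the task completes,
// so the continuations have somewhere to run and nothing deadlocks.
IUserContext userContext = AsyncOperatingContext.Run(() => contextService.GetUserContextAsync());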
Note: I don't know if this is enough, or if you are allowed to swap back the SynchronizationContext this way; assuming that you can, this should work. I've already been bitten by the ASP.NET deadlock issue and this could possibly function as a workaround.
Lastly, I found myself asking the question, what is the corresponding equivalent of Main(string[]) in an async context? Turns out that's the message loop.
What I've found is that there are two things that make up this async machinery: SynchronizationContext.Post and the message loop. In my AsyncOperatingContext I provide a very simple message loop:
protected void RunMessageLoop()
{
while (!_workQueue.IsCompleted)
{
Continuation continuation;
if (_workQueue.TryTake(out continuation, Timeout.Infinite))
{
continuation.Run();
}
}
}
My SynchronizationContext.Post thus becomes:
public override void Post(SendOrPostCallback d, object state)
{
_workQueue.TryAdd(new Continuation(d, state));
}
And our entry point, basically the equivalent of an async main from a synchronous context (simplified version of the original source):
SynchronizationContext.SetSynchronizationContext(new BlockingSynchronizationContext(_workQueue));
var mainTask = main(); // schedule "main" task
mainTask.ContinueWith(task => _workQueue.CompleteAdding());
RunMessageLoop();
return mainTask.Result;
All of this is costly, and we can't just go and replace calls to async methods with this, but it does allow us to rather quickly create the facilities required to keep writing async code where needed without having to deal with the whole program. It's also very clear from this implementation where the worker threads go and how they impact the concurrency of your program.
I look at this and think to myself, yep, that's how Node.js does it, though JavaScript does not have the nice async/await language support that C# currently does.
As an added bonus, I have complete control of the degree of parallelism, and if I want, I can run my async tasks completely single-threaded. Though, if I do so and call Wait or Result on any task, it will deadlock the program, because it will block the only message loop available.
The following code compiles and runs well. But where is the return statement for the Consumer() and Producer() methods?
class Program
{
static BufferBlock<Int32> m_buffer = new BufferBlock<int>(
new DataflowBlockOptions { BoundedCapacity = 10 });
public static async Task Producer() <----- How is a Task object returned?
{
while (true)
{
await m_buffer.SendAsync<Int32>(DateTime.Now.Second);
Thread.Sleep(1000);
}
}
public static async Task Consumer() <----- How is a Task object returned?
{
while (true)
{
Int32 n = await m_buffer.ReceiveAsync<Int32>();
Console.WriteLine(n);
}
}
static void Main(string[] args)
{
Task.WaitAll(Consumer(), Producer());
}
}
While your question states the obvious - the code compiles - and the other answers try to explain by example, I think the answer is best described in the following two articles:
"Above-the-surface" answer - full article is here: http://msdn.microsoft.com/en-us/magazine/hh456401.aspx
[...] C# and Visual Basic [...] giving enough hints to the compilers
to build the necessary mechanisms for you behind the scenes. The
solution has two parts: one in the type system, and one in the
language.
The CLR 4 release defined the type Task<T> [...] to represent the
concept of “some work that’s going to produce a result of type T in
the future.” The concept of “work that will complete in the future but
returns no result” is represented by the non-generic Task type.
Precisely how the result of type T is going to be produced in the
future is an implementation detail of a particular task; [...]
The language half of the solution is the new await keyword. A regular
method call means “remember what you’re doing, run this method until
it’s completely finished, and then pick up where you left off, now
knowing the result of the method.” An await expression, in contrast,
means “evaluate this expression to obtain an object representing work
that will in the future produce a result. Sign up the remainder of the
current method as the callback associated with the continuation of
that task. Once the task is produced and the callback is signed up,
immediately return control to my caller.”
2. The under-the-hood explanation is found here: http://msdn.microsoft.com/en-us/magazine/hh456403.aspx
[...] Visual Basic and C# [...] let you express discontinuous
sequential code. [...] When the Visual Basic or C# compiler gets hold
of an asynchronous method, it mangles it quite a bit during
compilation: the discontinuity of the method is not directly supported
by the underlying runtime and must be emulated by the compiler. So
instead of you having to pull the method apart into bits, the compiler
does it for you. [...]
The compiler turns your asynchronous method into a state machine. The
state machine keeps track of where you are in the execution and what
your local state is. [...]
Asynchronous methods produce Tasks. More specifically, an asynchronous
method returns an instance of one of the types Task or Task<T> from
System.Threading.Tasks, and that instance is automatically generated.
It doesn’t have to be (and can’t be) supplied by the user code. [...]
From the compiler’s point of view, producing Tasks is the easy part.
It relies on a framework-supplied notion of a Task builder, found in
System.Runtime.CompilerServices [...] The builder lets the compiler
obtain a Task, and then lets it complete the Task with a result or an
Exception. [...] Task builders are special helper types meant only for
compiler consumption. [...]
[...] build up a state machine around the production and consumption
of the Tasks. Essentially, all the user logic from the original method
is put into the resumption delegate, but the declarations of locals
are lifted out so they can survive multiple invocations. Furthermore,
a state variable is introduced to track how far things have gotten,
and the user logic in the resumption delegate is wrapped in a big
switch that looks at the state and jumps to a corresponding label. So
whenever resumption is called, it will jump right back to where it
left off the last time.
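To make that description a little more concrete, here is a rough, hand-written sketch (not actual compiler output, which uses builder types and differs in many details) of the "state variable plus big switch" shape described above, for a hypothetical method containing a single await:
// needs System.Threading.Tasks and System.Runtime.CompilerServices (for TaskAwaiter<int>)
sealed class ExampleStateMachine
{
    private int state = 0;                              // tracks how far execution has gotten
    private TaskAwaiter<int> awaiter;                   // the awaited operation
    private readonly TaskCompletionSource<int> tcs = new TaskCompletionSource<int>();
    public Task<int> ResultTask { get { return tcs.Task; } }   // the Task handed back to the caller
    public void MoveNext()
    {
        switch (state)
        {
            case 0:
                awaiter = SomeAsyncCallAsync().GetAwaiter();    // hypothetical awaited call
                if (!awaiter.IsCompleted)
                {
                    state = 1;                          // remember where to resume
                    awaiter.OnCompleted(MoveNext);      // sign up the rest of the method as the callback
                    return;                             // give control back to the caller immediately
                }
                goto case 1;
            case 1:
                int result = awaiter.GetResult();       // pick up the awaited result
                // ...the rest of the original method body would continue here...
                tcs.SetResult(result);                  // complete the Task seen by the caller
                return;
        }
    }
    private static Task<int> SomeAsyncCallAsync()
    {
        return Task.FromResult(42);                     // stands in for any real asynchronous work
    }
}
Calling MoveNext() once starts the "method"; whenever the awaited operation isn't already complete, the rest of the work is signed up as a callback and control returns to the caller immediately, exactly as the quote describes.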
When the method has no return statement, its return type is Task. Have a look at the MSDN page on async and await.
Async methods have three possible return types: Task<TResult>, Task, void
if you need a Task<TResult>, you must specify a return statement. Taken from the MSDN page:
// TASK<T> EXAMPLE
async Task<int> TaskOfT_MethodAsync()
{
// The body of the method is expected to contain an awaited asynchronous
// call.
// Task.FromResult is a placeholder for actual work that returns a string.
var today = await Task.FromResult<string>(DateTime.Now.DayOfWeek.ToString());
// The method then can process the result in some way.
int leisureHours;
if (today.First() == 'S')
leisureHours = 16;
else
leisureHours = 5;
// Because the return statement specifies an operand of type int, the
// method must have a return type of Task<int>.
return leisureHours;
}
The async keyword kind of tells the compiler that the method body should be used as a Task body. In a simple way, we can say that these are rough equivalents of your examples:
public static Task Producer() <----- How is a Task object returned?
{
return Task.Run(() =>
{
while (true)
{
m_buffer.SendAsync<Int32>(DateTime.Now.Second).Wait();
Thread.Sleep(1000);
}
});
}
public static Task Consumer() <----- How is a Task object returned?
{
return Task.Run(() =>
{
while (true)
{
Int32 n = m_buffer.ReceiveAsync<Int32>().Result;
Console.WriteLine(n);
}
});
}
Of course, the compiled result of your methods will be completely different from my examples, because the compiler is smart enough to generate code in such a way that some of the lines in your method are invoked on the context (thread) which calls the method, and some of them on a background context. This is actually why Thread.Sleep(1000); is not recommended in the body of async methods: that code can be invoked on the thread which calls the method. Task has an equivalent which can replace Thread.Sleep(1000): await Task.Delay(1000). As you can see, the await keyword guarantees that the wait will not block the calling context.
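For example, here is a sketch of the Producer loop from the question with the sleep replaced by an awaited delay, so the pause itself no longer ties up a thread:
public static async Task Producer()
{
    while (true)
    {
        await m_buffer.SendAsync<Int32>(DateTime.Now.Second);
        await Task.Delay(1000); // asynchronous pause; no thread is blocked while waiting
    }
}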
Let's take a look at one more example:
async Task Test1()
{
int a = 0; // Will be called on the calling thread.
int b = await this.GetValueAsync(); // Will be called on a background thread
int c = GetC(); // Method execution will come back to the calling thread again on this line.
int d = await this.GetValueAsync(); // Going to a background thread again
}
So, roughly speaking, we can say that the generated code will look something like this:
Task Test1()
{
int a = 0; // Will be called on the calling thread.
var syncContext = SynchronizationContext.Current; // This will help us go back to the calling thread
return Task.Run(() =>
{
// We already on background task, so we safe here to wait tasks
var bTask = this.GetValueAsync();
bTask.Wait();
int b = bTask.Result;
// syncContext helps us to invoke something on the main thread,
// because 'int c = GetC();' was probably expected to be called on
// the caller thread
int c = 0;
syncContext.Send(_ => c = GetC(), null);
// This one was with 'await' - calling without syncContext,
// not on the thread, which calls our method.
var dTask = this.GetValueAsync();
dTask.Wait();
int d = dTask.Result;
});
}
Again, this is not the same code you will get from the compiler, but it should give you some idea of how this works. If you really want to see what ends up in the produced library, use a tool such as ILSpy to look at the generated code.
I also really recommend reading the article Best Practices in Asynchronous Programming.
That's exactly what the async keyword means. It takes a method and (usually) converts it to a Task-returning method. If the normal method would have a return type of void, the async method will have Task. If the return type would be some other type T, the new return type will be Task<T>.
For example, to download and process some data synchronously, you could write something like:
Foo GetFoo()
{
string s = DownloadFoo();
Foo foo = ParseFoo(s);
return foo;
}
The asynchronous version would then look like this:
async Task<Foo> GetFoo()
{
string s = await DownloadFoo();
Foo foo = ParseFoo(s);
return foo;
}
Notice that the return type changed from Foo to Task<Foo>, but you're still returning just Foo. And similarly with void-returning methods: since they don't have to contain a return statement, async Task (without any <T>) methods also don't.
The Task object is constructed by the compiler-generated code, so you don't have to do that.