How a program is executed asynchronously: [duplicate] - c#

I thought that they were basically the same thing — writing programs that split tasks between processors (on machines that have 2+ processors). Then I'm reading this, which says:
Async methods are intended to be non-blocking operations. An await
expression in an async method doesn’t block the current thread while
the awaited task is running. Instead, the expression signs up the rest
of the method as a continuation and returns control to the caller of
the async method.
The async and await keywords don't cause additional threads to be
created. Async methods don't require multithreading because an async
method doesn't run on its own thread. The method runs on the current
synchronization context and uses time on the thread only when the
method is active. You can use Task.Run to move CPU-bound work to a
background thread, but a background thread doesn't help with a process
that's just waiting for results to become available.
and I'm wondering whether someone can translate that to English for me. It seems to draw a distinction between asynchronicity (is that a word?) and threading and imply that you can have a program that has asynchronous tasks but no multithreading.
Now I understand the idea of asynchronous tasks such as the example on pg. 467 of Jon Skeet's C# In Depth, Third Edition
async void DisplayWebsiteLength(object sender, EventArgs e)
{
    label.Text = "Fetching ...";
    using (HttpClient client = new HttpClient())
    {
        Task<string> task = client.GetStringAsync("http://csharpindepth.com");
        string text = await task;
        label.Text = text.Length.ToString();
    }
}
The async keyword means "This function, whenever it is called, will not be called in a context in which its completion is required for everything after its call to be called."
In other words, writing it in the middle of some task
int x = 5;
DisplayWebsiteLength();
double y = Math.Pow((double)x,2000.0);
, since DisplayWebsiteLength() has nothing to do with x or y, will cause DisplayWebsiteLength() to be executed "in the background", like
processor 1 | processor 2
-------------------------------------------------------------------
int x = 5; | DisplayWebsiteLength()
double y = Math.Pow((double)x,2000.0); |
Obviously that's a stupid example, but am I correct or am I totally confused or what?
(Also, I'm confused about why sender and e aren't ever used in the body of the above function.)

Your misunderstanding is extremely common. Many people are taught that multithreading and asynchrony are the same thing, but they are not.
An analogy usually helps. You are cooking in a restaurant. An order comes in for eggs and toast.
Synchronous: you cook the eggs, then you cook the toast.
Asynchronous, single threaded: you start the eggs cooking and set a timer. You start the toast cooking, and set a timer. While they are both cooking, you clean the kitchen. When the timers go off you take the eggs off the heat and the toast out of the toaster and serve them.
Asynchronous, multithreaded: you hire two more cooks, one to cook eggs and one to cook toast. Now you have the problem of coordinating the cooks so that they do not conflict with each other in the kitchen when sharing resources. And you have to pay them.
Now does it make sense that multithreading is only one kind of asynchrony? Threading is about workers; asynchrony is about tasks. In multithreaded workflows you assign tasks to workers. In asynchronous single-threaded workflows you have a graph of tasks where some tasks depend on the results of others; as each task completes it invokes the code that schedules the next task that can run, given the results of the just-completed task. But you (hopefully) only need one worker to perform all the tasks, not one worker per task.
It will help to realize that many tasks are not processor-bound. For processor-bound tasks it makes sense to hire as many workers (threads) as there are processors, assign one task to each worker, assign one processor to each worker, and have each processor do the job of nothing else but computing the result as quickly as possible. But for tasks that are not waiting on a processor, you don't need to assign a worker at all. You just wait for the message to arrive that the result is available and do something else while you're waiting. When that message arrives then you can schedule the continuation of the completed task as the next thing on your to-do list to check off.
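The egg-and-toast analogy can be made concrete in code. Python's asyncio has the same single-threaded await model as C#, so here is a runnable sketch (all of the names are mine): one worker, one thread, starts two "timers", does other work while both are cooking, and then collects the results.

```python
import asyncio

async def cook_eggs():
    await asyncio.sleep(0.2)   # the "timer": waiting, not working
    return "eggs"

async def cook_toast():
    await asyncio.sleep(0.2)
    return "toast"

async def breakfast():
    # Start both timers, then clean the kitchen while both are cooking.
    eggs = asyncio.create_task(cook_eggs())
    toast = asyncio.create_task(cook_toast())
    cleaned = "kitchen cleaned"            # other work done while waiting
    return await eggs, await toast, cleaned

print(asyncio.run(breakfast()))   # ('eggs', 'toast', 'kitchen cleaned')
```

asyncio.run drives all three pieces of work on a single thread; nothing here hires a worker per task.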
So let's look at Jon's example in more detail. What happens?
Someone invokes DisplayWebsiteLength. Who? We don't care.
It sets a label, creates a client, and asks the client to fetch something. The client returns an object representing the task of fetching something. That task is in progress.
Is it in progress on another thread? Probably not. Read Stephen's article on why there is no thread.
Now we await the task. What happens? We check to see whether the task has completed between the time we created it and the time we awaited it. If yes, then we fetch the result and keep running. Let's suppose it has not completed. We sign up the remainder of this method as the continuation of that task and return.
Now control has returned to the caller. What does it do? Whatever it wants.
Now suppose the task completes. How did it do that? Maybe it was running on another thread, or maybe the caller that we just returned to allowed it to run to completion on the current thread. Regardless, we now have a completed task.
The completed task asks the correct thread -- again, likely the only thread -- to run the continuation of the task.
Control passes immediately back into the method we just left at the point of the await. Now there is a result available so we can assign text and run the rest of the method.
It's just like in my analogy. Someone asks you for a document. You send away in the mail for the document, and keep on doing other work. When it arrives in the mail you are signalled, and when you feel like it, you do the rest of the workflow -- open the envelope, pay the delivery fees, whatever. You don't need to hire another worker to do all that for you.
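The walkthrough above can be traced in code. Here is a rough Python/asyncio analogue of Jon's example (identifiers invented; one caveat: unlike a C# async method, a Python task does not run synchronously up to its first await, it starts when the caller next yields to the event loop, so the caller's "other work" appears first in the log).

```python
import asyncio

events = []

async def fetch():                        # stands in for GetStringAsync
    await asyncio.sleep(0.05)             # the "download" is in flight; no thread waits on it
    return "<html>...</html>"

async def display_length():
    events.append("label = Fetching ...")
    text = await fetch()                  # sign up the rest of this method as a continuation
    events.append(f"label = {len(text)}")

async def main():
    task = asyncio.create_task(display_length())
    events.append("caller does other work")   # control is with the caller
    await task

asyncio.run(main())
print(events)
```

One thread does everything: the caller runs, the handler runs up to its await, the "download" completes, and the continuation resumes, all interleaved on the same thread.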

In-browser Javascript is a great example of an asynchronous program that has no multithreading.
You don't have to worry about multiple pieces of code touching the same objects at the same time: each function will finish running before any other JavaScript is allowed to run on the page. (Update: Since this was written, JavaScript has added async functions and generator functions. These functions do not always run to completion before any other JavaScript is executed: whenever they reach a yield or await keyword, they yield execution to other JavaScript, and can continue execution later, similar to C#'s async methods.)
However, when doing something like an AJAX request, no code is running at all, so other JavaScript can respond to things like click events until that request comes back and invokes the callback associated with it. If one of those other event handlers is still running when the AJAX request gets back, the request's callback won't be invoked until that handler is done. There's only one JavaScript "thread" running, even though it's possible for you to effectively pause the thing you were doing until you have the information you need.
In C# applications, the same thing happens any time you're dealing with UI elements--you're only allowed to interact with UI elements when you're on the UI thread. If the user clicked a button, and you wanted to respond by reading a large file from the disk, an inexperienced programmer might make the mistake of reading the file within the click event handler itself, which would cause the application to "freeze" until the file finished loading because it's not allowed to respond to any more clicking, hovering, or any other UI-related events until that thread is freed.
One option programmers might use to avoid this problem is to create a new thread to load the file, and then tell that thread's code that when the file is loaded it needs to run the remaining code on the UI thread again so it can update UI elements based on what it found in the file. Until recently, this approach was very popular because it was what the C# libraries and language made easy, but it's fundamentally more complicated than it has to be.
If you think about what the CPU is doing when it reads a file at the level of the hardware and Operating System, it's basically issuing an instruction to read pieces of data from the disk into memory, and to hit the operating system with an "interrupt" when the read is complete. In other words, reading from disk (or any I/O really) is an inherently asynchronous operation. The concept of a thread waiting for that I/O to complete is an abstraction that the library developers created to make it easier to program against. It's not necessary.
Now, most I/O operations in .NET have a corresponding ...Async() method you can invoke, which returns a Task almost immediately. You can add callbacks to this Task to specify code that you want to have run when the asynchronous operation completes. You can also specify which thread you want that code to run on, and you can provide a token which the asynchronous operation can check from time to time to see if you decided to cancel the asynchronous task, giving it the opportunity to stop its work quickly and gracefully.
Until the async/await keywords were added, C# was much more obvious about how callback code gets invoked, because those callbacks were in the form of delegates that you associated with the task. In order to still give you the benefit of using the ...Async() operation, while avoiding complexity in code, async/await abstracts away the creation of those delegates. But they're still there in the compiled code.
So you can have your UI event handler await an I/O operation, freeing up the UI thread to do other things, and more-or-less automatically returning to the UI thread once you've finished reading the file--without ever having to create a new thread.

Related

In "truly asynchronous" code that uses async-await, isn't there still SOME "thread" that is stuck waiting?

Let's say I properly use async-await, like
await client.GetStringAsync("http://stackoverflow.com");
I understand that the thread that invokes the await becomes "free", that is, something further up the call chain isn't stuck executing some loop equivalent to
bool done = false;
string html = null;
for(; !done; done = GetStringIfAvailable(ref html));
which is what it would be doing if I called the synchronous version of GetStringAsync (probably called GetString by convention).
However, here's where I get confused. Even if the calling thread, or any other thread in the application's pool of available threads, isn't blocked with such a loop, then something is, because, as I understand it, at a low level there is always polling going on. So, instead of lowering the total amount of work, I'm simply pushing work to something "beneath" my application's threads ... or something like that.
Can someone clear this up for me?
No.
The compiler will convert methods that use async/await into state machines that can be broken up into multiple steps. Once an await is hit, the state of the method is stored and execution is "offloaded" back to the thread that called it. If the task is waiting on things like disk IO, the OS kernel will end up relying on physical CPU interrupts to let the kernel know when to signal the application to resume processing. The state of the pending method is then loaded and queued up to run: on the captured context if you use ConfigureAwait(true), or on any free thread pool thread if you use ConfigureAwait(false). (This is a simplification; the details are subtler.) Think of it like an event, where the application asks the hardware to "ping" it once the work is done, while the application gets back to doing whatever it was doing before.
There are some cases where a new thread is spun up to do the work, such as Task.Run, which does the work on a ThreadPool thread, but no thread blocks while awaiting it to complete.
It is important to keep in mind that asynchronous operations using async/await are all about pausing, storing, retrieving, and resuming that state machine. async/await doesn't really care what happens inside the Task; what happens there, and how it happens, isn't directly related to async/await.
I was very confused by async / await too, until I really understood how the method is converted to a state-machine. Reading up on exactly what your async methods get converted to by the compiler might help.
You're pushing it off onto the operating system--which will run some other thread if it can rather than simply wait. It only ends up in a busy-wait when it can't find any thread that wants to run.

Confusion regarding threads and if asynchronous methods are truly asynchronous in C#

I was reading up on async/await and when Task.Yield might be useful and came across this post. I had a question regarding the below from that post:
When you use async/await, there is no guarantee that the method you
call when you do await FooAsync() will actually run asynchronously.
The internal implementation is free to return using a completely
synchronous path.
This is a little unclear to me probably because the definition of asynchronous in my head is not lining up.
In my mind, since I do mainly UI dev, async code is code that does not run on the UI thread, but on some other thread. I guess in the text I quoted, a method is not truly async if it blocks on any thread (even if it's a thread pool thread for example).
Question:
If I have a long running task that is CPU bound (let's say it is doing a lot of hard math), then running that task asynchronously must be blocking some thread right? Something has to actually do the math. If I await it then some thread is getting blocked.
What is an example of a truly asynchronous method and how would they actually work? Are those limited to I/O operations which take advantage of some hardware capabilities so no thread is ever blocked?
This is a little unclear to me probably because the definition of asynchronous in my head is not lining up.
Good on you for asking for clarification.
In my mind, since I do mainly UI dev, async code is code that does not run on the UI thread, but on some other thread.
That belief is common but false. There is no requirement that asynchronous code run on any second thread.
Imagine that you are cooking breakfast. You put some toast in the toaster, and while you are waiting for the toast to pop, you go through your mail from yesterday, pay some bills, and hey, the toast popped up. You finish paying that bill and then go butter your toast.
Where in there did you hire a second worker to watch your toaster?
You didn't. Threads are workers. Asynchronous workflows can happen all on one thread. The point of the asynchronous workflow is to avoid hiring more workers if you can possibly avoid it.
If I have a long running task that is CPU bound (let's say it is doing a lot of hard math), then running that task asynchronously must be blocking some thread right? Something has to actually do the math.
Here, I'll give you a hard problem to solve. Here's a column of 100 numbers; please add them up by hand. So you add the first to the second and make a total. Then you add the running total to the third and get a total. Then, oh, hell, the second page of numbers is missing. Remember where you were, and go make some toast. Oh, while the toast was toasting, a letter arrived with the remaining numbers. When you're done buttering the toast, go keep on adding up those numbers, and remember to eat the toast the next time you have a free moment.
Where is the part where you hired another worker to add the numbers? Computationally expensive work need not be synchronous, and need not block a thread. The thing that makes computational work potentially asynchronous is the ability to stop it, remember where you were, go do something else, remember what to do after that, and resume where you left off.
Now it is certainly possible to hire a second worker who does nothing but add numbers, and then is fired. And you could ask that worker "are you done?" and if the answer is no, you could go make a sandwich until they are done. That way both you and the worker are busy. But there is not a requirement that asynchrony involve multiple workers.
If I await it then some thread is getting blocked.
NO NO NO. This is the most important part of your misunderstanding. await does not mean "go start this job asynchronously". await means "I have an asynchronously produced result here that might not be available. If it is not available, find some other work to do on this thread so that we are not blocking the thread." Await is the opposite of what you just said.
What is an example of a truly asynchronous method and how would they actually work? Are those limited to I/O operations which take advantage of some hardware capabilities so no thread is ever blocked?
Asynchronous work often involves custom hardware or multiple threads, but it need not.
Don't think about workers. Think about workflows. The essence of asynchrony is breaking up workflows into little parts such that you can determine the order in which those parts must happen, and then executing each part in turn, but allowing parts that do not have dependencies with each other to be interleaved.
In an asynchronous workflow you can easily detect places in the workflow where a dependency between parts is expressed. Such parts are marked with await. That's the meaning of await: the code which follows depends upon this portion of the workflow being completed, so if it is not completed, go find some other task to do, and come back here later when the task is completed. The whole point is to keep the worker working, even in a world where needed results are being produced in the future.
I was reading up on async/await
May I recommend my async intro?
and when Task.Yield might be useful
Almost never. I find it occasionally useful when doing unit testing.
In my mind, since I do mainly UI dev, async code is code that does not run on the UI thread, but on some other thread.
Asynchronous code can be threadless.
I guess in the text I quoted, a method is not truly async if it blocks on any thread (even if it's a thread pool thread for example).
I would say that's correct. I use the term "truly async" for operations that do not block any threads (and that are not synchronous). I also use the term "fake async" for operations that appear asynchronous but only work that way because they run on or block a thread pool thread.
If I have a long running task that is CPU bound (let's say it is doing a lot of hard math), then running that task asynchronously must be blocking some thread right? Something has to actually do the math.
Yes; in this case, you would want to define that work with a synchronous API (since it is synchronous work), and then you can call it from your UI thread using Task.Run, e.g.:
var result = await Task.Run(() => MySynchronousCpuBoundCode());
If I await it then some thread is getting blocked.
No; the thread pool thread would be used to run the code (not actually blocked), and the UI thread is asynchronously waiting for that code to complete (also not blocked).
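A Python/asyncio analogue of that pattern, assuming asyncio.to_thread (Python 3.9+) as the stand-in for Task.Run (names are mine): the synchronous math runs on a worker thread while the event-loop thread, playing the role of the UI thread, awaits without blocking.

```python
import asyncio
import threading

def cpu_bound_math(n):
    # Synchronous work gets a synchronous API; note which thread runs it.
    on_main = threading.current_thread() is threading.main_thread()
    return sum(i * i for i in range(n)), on_main

async def main():
    # The event-loop thread awaits here, unblocked, while a worker
    # thread does the math (the analogue of await Task.Run(...)).
    return await asyncio.to_thread(cpu_bound_math, 1000)

result, ran_on_main = asyncio.run(main())
print(result, ran_on_main)   # 332833500 False
```

The False confirms the math ran off the main (event-loop) thread, which stayed free the whole time.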
What is an example of a truly asynchronous method and how would they actually work?
NetworkStream.WriteAsync (indirectly) asks the network card to write out some bytes. There is no thread responsible for writing out the bytes one at a time and waiting for each byte to be written. The network card handles all of that. When the network card is done writing all the bytes, it (eventually) completes the task returned from WriteAsync.
Are those limited to I/O operations which take advantage of some hardware capabilities so no thread is ever blocked?
Not entirely, although I/O operations are the easy examples. Another fairly easy example is timers (e.g., Task.Delay). Though you can build a truly asynchronous API around any kind of "event".
When you use async/await, there is no guarantee that the method you call when you do await FooAsync() will actually run asynchronously. The internal implementation is free to return using a completely synchronous path.
This is a little unclear to me probably because the definition of
asynchronous in my head is not lining up.
This simply means there are two cases when calling an async method.
The first is that, upon returning the task to you, the operation is already completed -- this would be a synchronous path. The second is that the operation is still in progress -- this is the async path.
Consider this code, which should show both of these paths. If the key is in a cache, it is returned synchronously. Otherwise, an async op is started which calls out to a database:
Task<T> GetCachedDataAsync(string key)
{
    if (cache.TryGetValue(key, out T value))
    {
        return Task.FromResult(value); // synchronous: no awaits here.
    }

    // Start a fully async op.
    return GetDataImpl();

    async Task<T> GetDataImpl()
    {
        value = await database.GetValueAsync(key);
        cache[key] = value;
        return value;
    }
}
So by understanding that, you can deduce that in theory the call to database.GetValueAsync() may contain similar code and itself be able to return synchronously: even your async path may end up running 100% synchronously. But your code doesn't need to care: async/await handles both cases seamlessly.
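The same cached-synchronous-path/async-path shape can be sketched in Python's asyncio (names are mine; an already-completed Future plays the role of Task.FromResult):

```python
import asyncio

cache = {}

async def get_value_from_db(key):            # pretend database call
    await asyncio.sleep(0.05)
    return f"value-for-{key}"

def get_cached_data(key):
    if key in cache:                          # synchronous path: no awaits here
        fut = asyncio.get_running_loop().create_future()
        fut.set_result(cache[key])            # like Task.FromResult
        return fut
    async def impl():                         # fully async path
        value = await get_value_from_db(key)
        cache[key] = value
        return value
    return asyncio.ensure_future(impl())

async def main():
    first = await get_cached_data("k")        # misses the cache: awaits the "database"
    second = await get_cached_data("k")       # hits the cache: already completed
    return first, second

print(asyncio.run(main()))   # ('value-for-k', 'value-for-k')
```

The caller awaits both results identically; it never needs to know which path produced the value.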
If I have a long running task that is CPU bound (let's say it is doing a lot of hard math), then running that task asynchronously must be blocking some thread right? Something has to actually do the math. If I await it then some thread is getting blocked.
Blocking is a well-defined term -- it means your thread has yielded its execution window while it waits for something (I/O, mutex, and so on). So your thread doing the math is not considered blocked: it is actually performing work.
What is an example of a truly asynchronous method and how would they actually work? Are those limited to I/O operations which take advantage of some hardware capabilities so no thread is ever blocked?
A "truly async method" would be one that simply never blocks. It typically ends up involving I/O, but it can also mean awaiting your heavy math code when you want to free up your current thread for something else (as in UI development) or when you're trying to introduce parallelism:
async Task<double> DoSomethingAsync()
{
    double x = await ReadXFromFile();
    Task<double> a = LongMathCodeA(x);
    Task<double> b = LongMathCodeB(x);
    await Task.WhenAll(a, b);
    return a.Result + b.Result;
}
This topic is fairly vast, and several discussions may arise from it. Using async and await in C# is asynchronous programming; how that asynchrony is achieved under the hood is a different discussion. Before .NET 4.5 there were no async and await keywords, and developers worked directly against the Task Parallel Library (TPL). There the developer had full control over when and how to create tasks and even threads. The downside was that, without real expertise in the topic, applications could suffer heavy performance problems and bugs caused by race conditions between threads.
Starting with .NET 4.5 the async and await keywords were introduced, with a new approach to asynchronous programming. The async and await keywords don't cause additional threads to be created. Async methods don't require multithreading because an async method doesn't run on its own thread. The method runs on the current synchronization context and uses time on the thread only when the method is active. You can use Task.Run to move CPU-bound work to a background thread, but a background thread doesn't help with a process that's just waiting for results to become available.
The async-based approach to asynchronous programming is preferable to existing approaches in almost every case. In particular, this approach is better than BackgroundWorker for IO-bound operations because the code is simpler and you don't have to guard against race conditions.
I don't consider myself a C# black belt and some more experienced developers may raise some further discussions, but as a principle I hope that I managed to answer your question.
Asynchronous does not imply Parallel
Asynchronous only implies concurrency. In fact, even using explicit threads doesn't guarantee that they will execute simultaneously (for example, when the threads have affinity for the same single core, or, more commonly, when there is only one core in the machine to begin with).
Therefore, you should not expect an asynchronous operation to happen simultaneously with something else. Asynchronous only means that it will happen, eventually, at another time (a- (Greek) = without, syn- (Greek) = together, khronos (Greek) = time; asynchronous = not happening together in time).
Note: The idea of asynchronicity is that on the invocation you do not care when the code will actually run. This allows the system to take advantage of parallelism, if possible, to execute the operation. It may even run immediately. It could even happen on the same thread... more on that later.
When you await the asynchronous operation, you are creating concurrency (com- (Latin) = together, currere (Latin) = to run; "concurrent" = running together). That is because you are asking for the asynchronous operation to reach completion before moving on. We can say the execution converges. This is similar to the concept of joining threads.
When asynchronous cannot be Parallel
When you use async/await, there is no guarantee that the method you call when you do await FooAsync() will actually run asynchronously. The internal implementation is free to return using a completely synchronous path.
This can happen in three ways:
It is possible to use await on anything that returns Task. When you receive the Task it could have already been completed.
Yet that alone does not imply it ran synchronously. In fact, it suggests it ran asynchronously and finished before you got the Task instance.
Keep in mind that you can await on an already completed task:
private static async Task CallFooAsync()
{
    await FooAsync();
}

private static Task FooAsync()
{
    return Task.CompletedTask;
}

private static void Main()
{
    CallFooAsync().Wait();
}
Also, if an async method has no await it will run synchronously.
Note: As you already know, a method that returns a Task may be waiting on the network, or on the file system, etc… doing so does not imply to start a new Thread or enqueue something on the ThreadPool.
Under a synchronization context that is handled by a single thread, the result will be to execute the Task synchronously, with some overhead. This is the case of the UI thread, I'll talk more about what happens below.
It is possible to write a custom TaskScheduler that always runs tasks synchronously, on the same thread that does the invocation.
Note: recently I wrote a custom SynchronizationContext that runs tasks on a single thread. You can find it in "Creating a (System.Threading.Tasks.)Task scheduler". Installing it as the current context and calling TaskScheduler.FromCurrentSynchronizationContext would result in such a TaskScheduler.
The default TaskScheduler will enqueue the invocations to the ThreadPool. Yet when you await on the operation, if it has not run on the ThreadPool it will try to remove it from the ThreadPool and run it inline (on the same thread that is waiting... the thread is waiting anyway, so it is not busy).
Note: One notable exception is a Task marked with LongRunning. LongRunning Tasks will run on a separate thread.
Your question
If I have a long running task that is CPU bound (let's say it is doing a lot of hard math), then running that task asynchronously must be blocking some thread right? Something has to actually do the math. If I await it then some thread is getting blocked.
If you are doing computations, they must happen on some thread; that part is right.
Yet the beauty of async and await is that the waiting thread does not have to be blocked (more on that later). However, it is very easy to shoot yourself in the foot by having the awaited task scheduled to run on the same thread that is waiting, resulting in synchronous execution (an easy mistake to make on the UI thread).
One of the key characteristics of async and await is that they capture the SynchronizationContext of the caller. For most threads that results in using the default TaskScheduler (which, as mentioned earlier, uses the ThreadPool). For the UI thread, however, it means posting the continuations into the message queue, so they run on the UI thread. The advantage of this is that you don't have to use Invoke or BeginInvoke to access UI components.
Before I go into how to await a Task from the UI thread without blocking it, I want to note that it is possible to implement a TaskScheduler where if you await on a Task, you don’t block your thread or have it go idle, instead you let your thread pick another Task that is waiting for execution. When I was backporting Tasks for .NET 2.0 I experimented with this.
What is an example of a truly asynchronous method and how would they actually work? Are those limited to I/O operations which take advantage of some hardware capabilities so no thread is ever blocked?
You seem to confuse asynchronous with not blocking a thread. If what you want is an example of asynchronous operations in .NET that do not require blocking a thread, a way to do it that you may find easy to grasp is to use continuations instead of await. And for the continuations that you need to run on the UI thread, you can use TaskScheduler.FromCurrentSynchronizationContext.
Do not implement fancy spin waiting. And by that I mean using a Timer, Application.Idle or anything like that.
When you use async you are telling the compiler to rewrite the code of the method in a way that allows breaking it. The result is similar to continuations, with a much more convenient syntax. When the thread reaches an await the Task will be scheduled, and the thread is free to continue after the current async invocation (out of the method). When the Task is done, the continuation (after the await) is scheduled.
For the UI thread this means that once it reaches await, it is free to continue to process messages. Once the awaited Task is done, the continuation (after the await) will be scheduled. As a result, reaching await doesn’t imply to block the thread.
Yet blindly adding async and await won’t fix all your problems.
I submit to you an experiment. Get a new Windows Forms application, drop in a Button and a TextBox, and add the following code:
private async void button1_Click(object sender, EventArgs e)
{
    await WorkAsync(5000);
    textBox1.Text = "DONE";
}

private async Task WorkAsync(int milliseconds)
{
    Thread.Sleep(milliseconds);
}
It blocks the UI. WorkAsync is invoked like any other method: it starts running synchronously on the UI thread, and because it contains no await of its own, Thread.Sleep runs to completion right there. await only matters once the called method hands back its task; the continuation of an incomplete task is posted to the caller's SynchronizationContext, which here is the UI thread's.
This is what happens:
The UI thread gets the click message and calls the click event handler
In the click event handler, the UI thread calls WorkAsync(5000)
WorkAsync runs Thread.Sleep on the UI thread. The UI is now unresponsive for 5 seconds
WorkAsync returns an already-completed task, so the await finds its result immediately available
The rest of the click event handler runs and updates the textbox
The result is synchronous execution, with overhead.
Yes, you should use Task.Delay instead. That is not the point; consider Sleep a stand-in for some computation. The point is that just using async and await everywhere won't give you an application that is automatically parallel. It is much better to pick what you want to run on a background thread (e.g. on the ThreadPool) and what you want to run on the UI thread.
Now, try the following code:
private async void button1_Click(object sender, EventArgs e)
{
    await Task.Run(() => Work(5000));
    textBox1.Text = "DONE";
}

private void Work(int milliseconds)
{
    Thread.Sleep(milliseconds);
}
You will find that await does not block the UI. This is because in this case Thread.Sleep is now running on the ThreadPool thanks to Task.Run. And thanks to button1_Click being async, once the code reaches await the UI thread is free to continue working. After the Task is done, the code will resume after the await thanks to the compiler rewriting the method to allow precisely that.
This is what happens:
The UI thread gets the click message and calls the click event handler
In the click event handler, the UI thread reaches await Task.Run(() => Work(5000))
Task.Run(() => Work(5000)) (and scheduling its continuation) is scheduled to run on the current synchronization context, which is the UI thread synchronization context… meaning that it posts a message to execute it
The UI thread is now free to process further messages
The UI thread picks the message to execute Task.Run(() => Work(5000)) and schedule its continuation when done
The UI thread calls Task.Run(() => Work(5000)) with continuation, this will run on the ThreadPool
The UI thread is now free to process further messages
When the ThreadPool finishes, the continuation will schedule the rest of the click event handler to run; this is done by posting another message for the UI thread. When the UI thread picks the message to continue in the click event handler, it updates the textbox.
Here's asynchronous code which shows how async / await allows code to pause, release control to another flow, and then resume, without needing a thread.
public static async Task<string> Foo()
{
    Console.WriteLine("In Foo");
    await Task.Yield();   // release control; the rest of the method becomes a continuation
    Console.WriteLine("I'm Back");
    return "Foo";
}

static void Main(string[] args)
{
    var t = new Task(async () =>
    {
        Console.WriteLine("Start");
        var f = Foo();
        Console.WriteLine("After Foo");
        var r = await f;
        Console.WriteLine(r);
    });
    t.RunSynchronously();
    Console.ReadLine();
}
So it's that releasing of control, and resynchronizing when you want results, that's key with async/await (which also works well with threading).
NOTE: No Threads were blocked in the making of this code :)
I think the confusion sometimes comes from "Task", which doesn't mean something running on its own thread. A task is just a thing to do; async / await allows tasks to be broken up into stages and coordinates those stages into a flow.
It's kind of like cooking, you follow the recipe. You need to do all the prep work before assembling the dish for cooking. So you turn on the oven, start cutting things, grating things, etc. Then you await the temp of oven and await the prep work. You could do it by yourself swapping between tasks in a way that seems logical (tasks / async / await), but you can get someone else to help grate cheese while you chop carrots (threads) to get things done faster.
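The cooking analogy can be sketched in code. This is a hypothetical example (the method names are made up): preheating the oven is pure waiting that consumes no thread, grating cheese is CPU work handed to a helper (a ThreadPool thread) via Task.Run, and Task.WhenAll is where the flows resynchronize.

```csharp
using System;
using System.Threading.Tasks;

class Kitchen
{
    static void ChopCarrots() => Console.WriteLine("carrots chopped");
    static void GrateCheese() => Console.WriteLine("cheese grated");

    static async Task CookAsync()
    {
        // "Turn on the oven" - pure waiting, no thread is consumed by it.
        Task oven = Task.Delay(500);
        // "Get someone else to grate cheese" - CPU work on a ThreadPool thread.
        Task cheese = Task.Run(GrateCheese);
        // Meanwhile, we chop carrots ourselves on the current thread.
        ChopCarrots();
        // Resynchronize: everything must be ready before assembling the dish.
        await Task.WhenAll(oven, cheese);
        Console.WriteLine("assemble and cook");
    }

    static async Task Main() => await CookAsync();
}
```

"assemble and cook" always prints last; the carrot and cheese lines can appear in either order, since they genuinely run concurrently.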
Stephen's answer is already great, so I'm not going to repeat what he said; I've done my fair share of repeating the same arguments many times on Stack Overflow (and elsewhere).
Instead, let me focus on one important abstract aspect of asynchronous code: it's not an absolute qualifier. There is no point in saying a piece of code is asynchronous - it's always asynchronous with respect to something else. This is quite important.
The purpose of await is to build synchronous workflows on top of asynchronous operations and some connecting synchronous code. Your code appears perfectly synchronous [1] to the code itself.
var a = await A();
await B(a);
The ordering of events is specified by the await invocations. B uses the return value of A, which means A must have run before B. The method containing this code has a synchronous workflow, and the two methods A and B are synchronous with respect to each other.
This is very useful, because synchronous workflows are usually easier to think about, and more importantly, a lot of workflows simply are synchronous. If B needs the result of A to run, it must run after A [2]. If you need to make an HTTP request to get the URL for another HTTP request, you must wait for the first request to complete; it has nothing to do with thread/task scheduling. Perhaps we could call this "inherent synchronicity", as opposed to "accidental synchronicity", where you force order on things that do not need to be ordered.
You say:
In my mind, since I do mainly UI dev, async code is code that does not run on the UI thread, but on some other thread.
You're describing code that runs asynchronously with respect to the UI. That is certainly a very useful case for asynchrony (people don't like UI that stops responding). But it's just a specific case of a more general principle - allowing things to happen out of order with respect to one another. Again, it's not an absolute - you want some events to happen out of order (say, when the user drags the window or the progress bar changes, the window should still redraw), while others must not happen out of order (the Process button must not be clicked before the Load action finishes). await in this use case isn't that different from using Application.DoEvents in principle - it introduces many of the same problems and benefits.
This is also the part where the original quote gets interesting. The UI needs a thread to be updated. That thread invokes an event handler, which may be using await. Does it mean that the line where await is used will allow the UI to update itself in response to user input? No.
First, you need to understand that await uses its argument, just as if it were a method call. In my sample, A must have already been invoked before the code generated by await can do anything, including "releasing control back to the UI loop". The return value of A is Task<T> instead of just T, representing a "possible value in the future" - and await-generated code checks to see if the value is already there (in which case it just continues on the same thread) or not (which means we get to release the thread back to the UI loop). But in either case, the Task<T> value itself must have been returned from A.
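That check can be made visible by driving an awaiter by hand. This is a simplified sketch of what the compiler-generated code does (the real code also flows the execution context and handles exceptions), not the actual generated state machine:

```csharp
using System;
using System.Threading.Tasks;

class FastPath
{
    static void AwaitByHand(Task<int> task)
    {
        var awaiter = task.GetAwaiter();
        if (awaiter.IsCompleted)
        {
            // Fast path: the value is already there, so execution just
            // continues on the current thread - no control is released.
            Console.WriteLine("sync result: " + awaiter.GetResult());
        }
        else
        {
            // Slow path: sign up the rest of the method as a continuation
            // and return, releasing the current thread.
            awaiter.OnCompleted(() =>
                Console.WriteLine("async result: " + awaiter.GetResult()));
        }
    }

    static void Main()
    {
        AwaitByHand(Task.FromResult(42));                   // fast path
        AwaitByHand(Task.Delay(100).ContinueWith(_ => 7));  // slow path
        Task.Delay(300).Wait();  // give the slow path time to finish
    }
}
```

Either way, note that the Task<int> itself had to be produced first - exactly the point made above about A having to be invoked before await can do anything.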
Consider this implementation:
public async Task<int> A()
{
    Thread.Sleep(1000);
    return 42;
}
The caller needs A to return a value (a task of int); since there are no awaits in the method, that means reaching the return 42;. But that cannot happen before the sleep finishes, because the two operations are synchronous with respect to the thread. The caller thread will be blocked for a second, regardless of whether it uses await or not - the blocking is in A() itself, not in await theTaskResultOfA.
In contrast, consider this:
public async Task<int> A()
{
    await Task.Delay(1000);
    return 42;
}
As soon as the execution gets to the await, it sees that the task being awaited isn't finished yet and returns control back to its caller; and the await in the caller consequently returns control back to its caller. We've managed to make some of the code asynchronous with respect to the UI. The synchronicity between the UI thread and A was accidental, and we removed it.
The important part here is: there's no way to distinguish between the two implementations from the outside without inspecting the code. Only the return type is part of the method signature - it doesn't say the method will execute asynchronously, only that it may. This may be for any number of good reasons, so there's no point in fighting it - for example, there's no point in breaking the thread of execution when the result is already available:
var responseTask = GetAsync("http://www.google.com");
// Do some CPU intensive task
ComputeAllTheFuzz();
response = await responseTask;
We need to do some work. Some events can run asynchronously with respect to others - in this case, ComputeAllTheFuzz is independent of the HTTP request. But at some point, we need to get back to a synchronous workflow (for example, something that requires both the result of ComputeAllTheFuzz and the HTTP request). That's the await point, which synchronizes the execution again (if you had multiple asynchronous workflows, you'd use something like Task.WhenAll). However, if the HTTP request managed to complete before the computation, there's no point in releasing control at the await point - we can simply continue on the same thread. There's been no waste of CPU and no blocking of the thread; it did useful CPU work. But we didn't give the UI any opportunity to update.
This is of course why this pattern is usually avoided in more general asynchronous methods. It is useful for some uses of asynchronous code (avoiding wasting threads and CPU time), but not others (keeping the UI responsive). If you expect such a method to keep the UI responsive, you're not going to be happy with the result. But if you use it as part of a web service, for example, it will work great - the focus there is on avoiding wasting threads, not keeping the UI responsive (that's already provided by asynchronously invoking the service endpoint - there's no benefit from doing the same thing again on the service side).
In short, await allows you to write code that is asynchronous with respect to its caller. It doesn't invoke a magical power of asynchronicity, it isn't asynchronous with respect to everything, it doesn't prevent you from using the CPU or blocking threads. It just gives you the tools to easily make a synchronous workflow out of asynchronous operations, and present part of the whole workflow as asynchronous with respect to its caller.
Let's consider a UI event handler. If the individual asynchronous operations happen to not need a thread to execute (e.g. asynchronous I/O), part of the asynchronous method may allow other code to execute on the original thread (and the UI stays responsive in those parts). When the operation needs the CPU/thread again, it may or may not require the original thread to continue the work. If it does, the UI will be blocked again for the duration of the CPU work; if it doesn't (the awaiter specifies this using ConfigureAwait(false)), the UI code will run in parallel - assuming there are enough resources to handle both, of course. If you need the UI to stay responsive at all times, you cannot use the UI thread for any execution long enough to be noticeable - even if that means you have to wrap an unreliable "usually asynchronous, but sometimes blocks for a few seconds" async method in a Task.Run. There are costs and benefits to both approaches - it's a trade-off, as with all engineering :)
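That last point can be sketched as follows. SometimesBlocksAsync is a made-up stand-in for an unreliable async method that blocks synchronously before ever returning its task; wrapping the call in Task.Run moves that synchronous prefix onto a ThreadPool thread, so the calling thread is released at the await immediately:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Wrapping
{
    // Hypothetical "usually asynchronous, but sometimes blocks" method:
    // it sleeps synchronously before ever returning its task.
    static async Task<int> SometimesBlocksAsync()
    {
        Thread.Sleep(200);     // the blocking part, before the first await
        await Task.Delay(50);  // the genuinely asynchronous part
        return 42;
    }

    static async Task Main()
    {
        // Awaiting the method directly would block the caller for 200 ms,
        // because that sleep runs before the method returns its Task.
        // Task.Run pushes the whole invocation onto a ThreadPool thread,
        // so the caller pays nothing for the synchronous prefix.
        int result = await Task.Run(() => SometimesBlocksAsync());
        Console.WriteLine(result);
    }
}
```

Note that Task.Run(Func&lt;Task&lt;T&gt;&gt;) unwraps the inner task automatically, so the await yields the int, not a nested task.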
[1] Of course, only as far as the abstraction holds - every abstraction leaks, and there are plenty of leaks in await and other approaches to asynchronous execution.
[2] A sufficiently smart optimizer might allow some part of B to run, up to the point where the return value of A is actually needed; this is what your CPU does with normal "synchronous" code (out-of-order execution). Such optimizations must preserve the appearance of synchronicity, though - if the CPU misjudges the ordering of operations, it must discard the results and present a correct ordering.

How does a thread that launches a blocking I/O request under TPL return immediately?

I would like to preface this question with the following:
I'm familiar with the IAsyncStateMachine implementation that the await keyword in C# generates.
My question is not about the basic flow of control that ensues when you use the async and await keywords.
Assumption A
The default threading behaviour in any threading environment, whether it be at the Windows operating system level or in POSIX systems or in the .NET thread pool, has been that when a thread makes a request for an I/O bound operation, say for a disk read, it issues the request to the disk device driver and enters a waiting state. Of course, I am glossing over the details because they are not of moment to our discussion.
Importantly, that thread can do nothing useful until it is unblocked by an interrupt from the device driver notifying it of completion. During this time, the thread remains on the wait queue and cannot be re-used for any other work.
I would first like a confirmation of the above description.
Assumption B
Secondly, even with the introduction of TPL, and its enhancements done in v4.5 of the .NET framework, and with the language level support for asynchronous operations involving tasks, this default behaviour described in Assumption A has not changed.
Question
Then, I'm at a loss trying to reconcile Assumptions A and B with the claim that suddenly emerged in all TPL literature that:
When the, say, main thread, starts this request for this I/O bound work, it immediately returns and continues executing the rest of the queued-up messages in the message pump.
Well, what makes that thread return back to do other work? Isn't that thread supposed to be in the waiting state in the wait queue?
You might be tempted to reply that the code in the state machine launches the task awaiter and if the awaiter hasn't completed, the main thread returns.
That begs the question - what thread does the awaiter run on?
And the answer that springs to mind is: whatever thread runs the implementation of the method whose task is being awaited.
That drives us down the rabbit hole further until we reach the last of such implementations that actually delivers the I/O request.
Where is that part of the source code in the .NET framework that changes this underlying fundamental mechanism about how threads work?
Side Note
Some blocking asynchronous methods, such as WebClient.DownloadDataTaskAsync - if one were to follow their code through their (the method's, not one's own) oral tract into their intestines - ultimately either execute the download synchronously, blocking the current thread if the operation was requested to be performed synchronously (Task.RunSynchronously()), or, if requested asynchronously, offload the blocking I/O-bound call to a thread pool thread using the Asynchronous Programming Model (APM) Begin and End methods.
This surely will cause the main thread to return immediately, because it just offloaded blocking I/O work to a thread pool thread, thereby adding approximately diddly-squat to the application's scalability.
But this was a case where, within the bowels of the beast, the work was secretly offloaded to a thread pool thread. In the case of an API that doesn't do that, say an API that looks like this:
public async Task<string> GetDataAsync()
{
    var tcs = new TaskCompletionSource<string>();

    // If GetDataInternalAsync makes the network request
    // on the same thread as the calling thread, it will block, right?
    // How then do they claim that the thread will return immediately?
    // If you look inside the state machine, it just asks the TaskAwaiter
    // if it completed the task, and if it hasn't, it registers a continuation
    // and comes back. But that implies that the awaiter is on another thread
    // and that thread is happily sleeping until it gets a kick in the butt
    // from a wait handle, right?
    // So, the only way would be to delegate the making of the request
    // to a thread pool thread, in which case, we have not really improved
    // scalability but only improved responsiveness of the main/UI thread.
    var s = await GetDataInternalAsync();

    tcs.SetResult(s); // omitting SetException and
                      // cancellation for the sake of brevity
    return await tcs.Task;
}
Please be gentle with me if my question appears to be nonsensical. My knowledge of almost all of these matters is limited; I am still learning.
When you are talking about an async I/O operation, the truth, as pointed out here by Stephen Cleary (http://blog.stephencleary.com/2013/11/there-is-no-thread.html) is that there is no thread. An async I/O operation is completed at a lower level than the threading model. It generally occurs within interrupt handler routines. Therefore, there is no I/O thread handling the request.
You ask how a thread that launches a blocking I/O request returns immediately. The answer is because an I/O request is not at its core actually blocking. You could block a thread such that you are intentionally saying not to do anything else until that I/O request finishes, but it was never the I/O that was blocking, it was the thread deciding to spin (or possibly yield its time slice).
The thread returns immediately because nothing has to sit there polling or querying the I/O operation. That is the core of true asynchronicity. An I/O request is made, and ultimately the completion bubbles up from an ISR. Yes, this may bubble up into the thread pool to set the task completion, but that happens in a nearly imperceptible amount of time. The work itself never had to run on a thread. The request itself may have been issued from a thread, but as it is an asynchronous request, the thread can immediately return.
Let's forget C# for a moment. Let's say I am writing some embedded code and I request data from an SPI bus. I send the request, continue my main loop, and when the SPI data is ready, an ISR is triggered. My main loop resumes immediately, precisely because my request is asynchronous. All it has to do is push some data into a shift register and continue on. When data is ready for me to read back, an interrupt triggers. This is not running on a thread. It may interrupt a thread to complete the ISR, but you could not say that it actually ran on that thread. Just because it's C#, this process is not ultimately any different.
Similarly, let's say I want to transfer data over USB. I place the data in a DMA location, set a flag to tell the bus to transfer my URB, and then immediately return. When I get a response back, it also is moved into memory, an interrupt occurs and sets a flag to let the system know: hey, here's a packet of data sitting in a buffer for you.
So once again, I/O is never truly blocking. It could appear to block, but that is not what is happening at the low level. It is higher level processes that may decide that an I/O operation has to happen synchronously with some other code. This is not to say of course that I/O is instant. Just that the CPU is not stuck doing work to service the I/O. It COULD block if implemented that way, and this COULD involve threads. But that is not how async I/O is implemented.
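The same shape can be sketched in C# with a TaskCompletionSource: a task gets completed by a callback firing later, and no thread sits blocked in between. Here a System.Threading.Timer callback stands in for the hardware interrupt / I/O completion (a simplified sketch, not how real I/O completions are delivered):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class NoThread
{
    static Task<string> FakeIoAsync()
    {
        var tcs = new TaskCompletionSource<string>();
        // The timer callback plays the role of the ISR: it fires later
        // and completes the task. Between issuing the request and the
        // callback firing, no thread is waiting on this operation.
        Timer timer = null;
        timer = new Timer(_ =>
        {
            tcs.TrySetResult("data ready");
            timer.Dispose();
        }, null, 100, Timeout.Infinite);
        return tcs.Task;
    }

    static async Task Main()
    {
        Console.WriteLine("request issued; caller is free");
        string result = await FakeIoAsync(); // no thread blocks here
        Console.WriteLine(result);
    }
}
```

Real asynchronous I/O in .NET is delivered through I/O completion ports rather than timers, but the pattern - complete a task from a completion callback - is the same.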

What's the point of await DoSomethingAsync [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 7 years ago.
I'm trying to wrap my head around all of the Async stuff that's been added into the .NET framework with the more recent versions. I understand some of it, but to be honest, personally I don't think it makes writing asynchronous code easier. I find it rather confusing most of the time and actually harder to read than the more conventional approaches that we used before the advent of async/await.
Anyway, my question is a simple one. I see a lot of code like this:
var stream = await file.readAsStreamAsync()
What's going on here? Isn't this equivalent to just calling the blocking variant of the method, i.e.
var stream = file.readAsStream()
If so, what's the point in using it here like this? It doesn't make the code any easier to read so please tell me what I am missing.
The result of both calls is the same.
The difference is that var stream = file.readAsStream() will block the calling thread until the operation completes.
If the call was made in a GUI app from the UI thread, the application will freeze until the IO completes.
If the call was made in a server application, the blocked thread will not be able to handle other incoming requests. The thread pool will have to create a new thread to 'replace' the blocked one, which is expensive. Scalability will suffer.
On the other hand, var stream = await file.readAsStreamAsync() will not block any thread. The UI thread in a GUI application can keep the application responding, a worker thread in a server application can handle other requests.
When the async operation completes, the OS will notify the thread pool and the rest of the method will be executed.
To make all this 'magic' possible, a method with async/await is compiled into a state machine. Async/await allows complicated asynchronous code to look as simple as synchronous code.
It makes writing asynchronous code enormously easier. As you noted in your own question, it looks as if you were writing the synchronous variant - but it's actually asynchronous.
To understand this, you need to really know what asynchronous and synchronous means. The meaning is really simple - synchronous means in a sequence, one after another. Asynchronous means out of sequence. But that's not the whole picture here - the two words are pretty much useless on their own, most of their meaning comes from context. You need to ask: synchronous with respect to what, exactly?
Let's say you have a Winforms application that needs to read a file. In the button click, you do a File.ReadAllText, and put the results in some textbox - all fine and dandy. The I/O operation is synchronous with respect to your UI - the UI can do nothing while you wait for the I/O operation to complete. Now, the customers start complaining that the UI seems hung for seconds at a time when it reads the file - and Windows flags the application as "Not responding". So you decide to delegate the file reading to a background worker - for example, using BackgroundWorker, or Thread. Now your I/O operation is asynchronous with respect to your UI and everyone is happy - all you had to do is extract your work and run it in its own thread, yay.
Now, this is actually perfectly fine - as long as you're only really doing one such asynchronous operation at a time. However, it does mean you have to explicitly define where the UI thread boundaries are - you need to handle the proper synchronization. Sure, this is pretty simple in Winforms, since you can just use Invoke to marshal UI work back to the UI thread - but what if you need to interact with the UI repeatedly, while doing your background work? Sure, if you just want to publish results continuously, you're fine with BackgroundWorker's ReportProgress - but what if you also want to handle user input?
The beauty of await is that you can easily manage when you're on a background thread, and when you're on a synchronization context (such as the windows forms UI thread):
string line;
while ((line = await streamReader.ReadLineAsync()) != null)
{
    if (line.StartsWith("ERROR:")) tbxLog.AppendLine(line);
    if (line.StartsWith("CRITICAL:"))
    {
        if (MessageBox.Show(line + "\r\n" + "Do you want to continue?",
                            "Critical error", MessageBoxButtons.YesNo) == DialogResult.No)
        {
            return;
        }
    }

    await httpClient.PostAsync(...);
}
This is wonderful - you're basically writing synchronous code as usual, but it's still asynchronous with respect to the UI thread. And the error handling is again exactly the same as with any synchronous code - using, try-finally and friends all work great.
Okay, so you don't need to sprinkle BeginInvoke here and there, what's the big deal? The real big deal is that, without any effort on your part, you actually started using the real asynchronous APIs for all those I/O operations. The thing is, there aren't really any synchronous I/O operations as far as the OS is concerned - when you do that "synchronous" File.ReadAllText, the OS simply posts an asynchronous I/O request, and then blocks your thread until the response comes back. As should be evident, the thread is wasted doing nothing in the meantime - it still uses system resources, it adds a tiny amount of work for the scheduler etc.
Again, in a typical client application, this isn't a big deal. The user doesn't care whether you have one thread or two - the difference isn't really that big. Servers are a different beast entirely, though; where a typical client only has one or two I/O operations at the same time, you want your server to handle thousands! On a typical 32-bit system, you could only fit about 2000 threads with the default stack size in your process - not because of the physical memory requirements, but just by exhausting the virtual address space. 64-bit processes are not as limited, but there's still the fact that starting up new threads and destroying them is rather pricey, and you are now adding considerable work to the OS thread scheduler - just to keep those threads waiting.
But the await-based code doesn't have this problem. It only takes up a thread when it's doing CPU work - waiting on an I/O operation to complete is not CPU work. So you issue that asynchronous I/O request, and your thread goes back to the thread pool. When the response comes, another thread is taken from the thread pool. Suddenly, instead of using thousands of threads, your server is only using a couple (usually about two per CPU core). The memory requirements are lower, the multi-threading overheads are significantly lowered, and your total throughput increases quite a bit.
So - in a client application, await is only really a thing of convenience. In any larger server application, it's a necessity - because suddenly your "start a new thread" approach simply doesn't scale. And the alternatives to using await are all those old-school asynchronous APIs, which read nothing like synchronous code, and where handling errors is very tedious and tricky.
var stream = await file.readAsStreamAsync();
DoStuff(stream);
is conceptually more like
file.readAsStreamAsync(stream => {
    DoStuff(stream);
});
where the lambda is automatically called when the stream has been fully read. You can see this is quite different from the blocking code.
If you're building a UI application for example, and implementing a button handler:
private async void HandleClick(object sender, EventArgs e)
{
    ShowProgressIndicator();
    var response = await GetStuffFromTheWebAsync();
    DoStuff(response);
    HideProgressIndicator();
}
This is drastically different from the similar synchronous code:
private void HandleClick(object sender, EventArgs e)
{
    ShowProgressIndicator();
    var response = GetStuffFromTheWeb();
    DoStuff(response);
    HideProgressIndicator();
}
Because in the second code the UI will lock up and you'll never see the progress indicator (or at best it'll flash briefly), since the UI thread will be blocked until the entire click handler is completed. In the first code the progress indicator shows, and then the UI thread gets to run again while the web call happens in the background. When the web call completes, the DoStuff(response); HideProgressIndicator(); code gets scheduled on the UI thread, and it nicely finishes its work and hides the progress indicator.
What's going on here? Isn't this equivalent to just calling the
blocking variant of the method, i.e.
No, it is not a blocking call. It is syntactic sugar: the compiler turns the method into a state machine, which at runtime is used to execute your code asynchronously.
It makes your code more readable and almost identical to code that runs synchronously.
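A rough picture of that rewrite, driven by hand with ContinueWith (a simplified sketch - the real state machine also captures the synchronization context, propagates exceptions, and supports multiple awaits per method; ReadAsync here is a made-up stand-in for file.readAsStreamAsync()):

```csharp
using System;
using System.Threading.Tasks;

class Rewrite
{
    // Stands in for the awaited operation in the question.
    static Task<int> ReadAsync() => Task.FromResult(42);

    // What you write:
    static async Task WithAwait()
    {
        int value = await ReadAsync();
        Console.WriteLine(value);  // the "rest of the method"
    }

    // Roughly what the compiler produces: sign up the rest of the
    // method as a continuation, then return to the caller immediately.
    static Task WithoutAwait() =>
        ReadAsync().ContinueWith(t => Console.WriteLine(t.Result));

    static void Main()
    {
        WithAwait().Wait();
        WithoutAwait().Wait();
    }
}
```

Both methods print 42; the await version simply lets the compiler build the continuation plumbing for you.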
It looks like you're missing what this whole async / await concept is about.
The async keyword lets the compiler know that the method may perform asynchronous operations and therefore shouldn't be compiled like an ordinary method; instead it is turned into a state machine. The compiler executes the first part of the method (let's call it Part 1) synchronously; when an asynchronous operation is started and awaited, the calling thread is released, and the rest of the method (Part 2) is scheduled as a continuation to run when the operation completes - typically on the captured context, or on a ThreadPool thread. If an asynchronous operation is started but not marked with the await keyword, it is not awaited, and the calling thread continues running until the method finishes. In most cases this is not desirable. That's when we need to use the await keyword.
So a typical scenario is:
Thread 1 enters the async method and executes Part 1 ->
Thread 1 starts the async operation ->
Thread 1 is released; while the operation is underway, Part 2 is scheduled ->
Some thread (quite possibly the same Thread 1, if it's free) runs the method to its end (Part 2)

.NET Task Parallel Library

I have read the documentation and many tutorials on the TPL, but none covers the model I want to achieve.
There is always a fixed number of iterations for some algorithm.
I need constantly running threads (as many as possible):
while(true)
get data from MAIN thread
perform heavy time-consuming task (in separate thread)
update MAIN thread information
Additionally, I need a mechanism which is able to set an alarm clock (e.g. 5 seconds). After five seconds all work must be suspended for a while and then resumed.
Should I use Task.ContinueWith on the same task? But I am not processing the result of the previous task launch; instead I update a data structure in the MAIN thread and then decide what the input of the new task iteration will be...
How can I leave it to the TPL to decide how many tasks should be created for best efficiency?
Now I am using BackgroundWorkers, because they have a nice RunWorkerCompleted event - inside it I am on my main thread, so I can update my MAIN structure, check time constraints, and then eventually call RunWorkerAsync again on the BackgroundWorker which completed. It is nice and clear, but probably very inefficient.
I need to make it highly efficient on multi-processor, multi-core servers.
One problem is that the computation is always online; it never stops. There is also some networking, which makes it possible to ask remotely for the current state of the MAIN structure.
The second problem is critical time control (I must have a precise timer - when it stops, no thread can be restarted). Then comes a special high-priority task; after it ends, all work is resumed.
Third problem is that there is no upper bound for operations to do.
These three constraints, from what I have observed, do not go along well with the TPL - I can't use something like Parallel.For, because the collection is modified in real time by the results of the task itself...
I don't know also how to combine:
the ability to let the TPL decide how many threads should be created
with a sort of lifetime running of threads (with pauses and synchronization points between consecutive restarts)
creating threads only once at the beginning (they should only be restarted with new parameters)
Can someone give me clues?
I know how to do it the bad, inefficient way. There are some small requirements, which I described, that prevent me from doing this right. I am a little bit confused.
You need to use messaging + actors + a scheduler, imo. And then you need to use a language capable of it. Have a look at this code that asynchronously receives from Azure Service Bus, enqueues in a shared queue and manages runtime state through an actor.
Inline:
Should I use Task.ContinueWith on the same task?
No; ContinueWith can get your program killed, because of how exceptions are handled inside each continuation; there's no good way in the TPL to marshal failed state into the call side/main thread.
But I am not processing the result of the previous task launch; instead I update a data structure in the MAIN thread and then decide what the input of the new task iteration will be...
You need to move beyond threading for this, unless you're willing to spend A LOT of time on the problem.
How can I leave it to the TPL to decide how many tasks should be created for best efficiency?
That's handled by the framework that runs your async workflows.
No, I am using BackgroundWorkers, because they have a nice RunWorkerCompleted event - inside it I am on my main thread, so I can update my MAIN structure, check the time constraints and then eventually call RunWorkerAsync again on the BackgroundWorker that completed. It is nice and clear, but probably very inefficient. I need it to be highly efficient on multi-processor, multi-core servers.
One problem is that the computation is always online; it never stops. There is also some networking, which makes it possible to query the current state of the MAIN structure remotely. The second problem is strict time control (I must have a precise timer; when it fires, no thread may be restarted).
If you run everything asynchronously, you can pass a message to your actor that suspends it. Your scheduling actor is responsible for calling all its subscribers with their scheduled messages; have a look at the paused state in the linked code. If you have outstanding requests, you can pass them a cancellation token and handle a 'hard' cancellation/socket abort that way.
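As a rough sketch of what such a pause message could look like (the message names and the Actor class are illustrative, not from the linked code), an actor can be modelled as a message loop over an inbox, with pausing just another state flipped by a control message:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative only: control messages share the inbox with work messages.
abstract record Msg;
record Work(int Payload) : Msg;
record Pause : Msg;
record Resume : Msg;
record Stop : Msg;

class Actor
{
    private readonly BlockingCollection<Msg> inbox = new();

    public void Post(Msg m) => inbox.Add(m);

    public Task Run() => Task.Run(() =>
    {
        bool paused = false;
        foreach (var msg in inbox.GetConsumingEnumerable())
        {
            switch (msg)
            {
                case Pause:  paused = true;  break;   // suspend: stop handling work
                case Resume: paused = false; break;   // high-priority work done, resume
                case Stop:   return;
                case Work w when !paused:
                    Console.WriteLine($"processing {w.Payload}");
                    break;
                // Work received while paused is dropped here;
                // a real actor would stash it for later.
            }
        }
    });
}
```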
A special high-priority task then runs, and after it ends all work is resumed. From what I have observed, these two constraints do not go well with the TPL - I can't use something like Parallel.For, because the collection is modified in real time by the results of the task itself...
You probably need a pattern called pipes-and-filters. You pipe your input into a chain of workers (actors); each worker consumes the previous worker's output. Signalling is done over a control channel (in my case, that is the actor's inbox).
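As a rough illustration, TPL Dataflow can express a pipes-and-filters chain directly: each TransformBlock is a filter and LinkTo forms the pipes. This is a minimal sketch assuming the System.Threading.Tasks.Dataflow NuGet package; the stage functions are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // NuGet: System.Threading.Tasks.Dataflow

class Pipeline
{
    static async Task Main()
    {
        // Filters: each block transforms its input and passes it downstream.
        var parse  = new TransformBlock<string, int>(s => int.Parse(s));
        var square = new TransformBlock<int, int>(n => n * n);
        var print  = new ActionBlock<int>(n => Console.WriteLine(n));

        // Pipes: PropagateCompletion lets Complete() flow down the chain.
        var opts = new DataflowLinkOptions { PropagateCompletion = true };
        parse.LinkTo(square, opts);
        square.LinkTo(print, opts);

        foreach (var s in new[] { "1", "2", "3" })
            await parse.SendAsync(s);

        parse.Complete();        // signal "no more input" at the head
        await print.Completion;  // wait for the last stage to drain
    }
}
```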
I think you should read
MSDN: How to implement a producer / consumer dataflow pattern
I had the same problem: one producer produced items, while several consumers consumed them and decided whether to send them on to other consumers. Each consumer worked asynchronously and independently of the other consumers.
Your main task is the producer. It produces items that your other tasks should process. The class with your main task's code has a function:
public async Task ProduceOutputAsync(...)
Your main program starts this task using:
var producerTask = Task.Run(() => MyProducer.ProduceOutputAsync(...));
Once this is called, the producer task starts producing output. Meanwhile your main program can continue doing other things, for instance starting the consumers.
But let's first focus on the Producer task.
The producer task produces items of type T to be processed by other tasks. They are carried over to the other tasks using objects that implement ITargetBlock&lt;T&gt;.
Every time the producer task has finished creating an object of type T, it sends it to the target block using ITargetBlock&lt;T&gt;.Post, or preferably the async version:
while (continueProducing())
{
    T product = await CreateProduct(...);
    bool accepted = await this.TargetBlock.SendAsync(product);
    // process the return value
}
// if here, nothing to produce anymore. Notify the consumers:
this.TargetBlock.Complete();
The producer needs an ITargetBlock<T>. In my application a BufferBlock<T> was enough. Check MSDN for the other possible targets.
Anyway, the dataflow block should also implement ISourceBlock&lt;T&gt;. Your receiver waits for input to arrive at the source, fetches it and processes it. Once finished, it can send the result to its own target block and wait for the next input, until no more input is expected. Of course, if your consumer doesn't produce output, it doesn't have to send anything to a target.
Waiting for input is done as follows:
IReceivableSourceBlock&lt;T&gt; mySource = ...; // e.g. a BufferBlock&lt;T&gt;
while (await mySource.OutputAvailableAsync())
{
    // an object of type T may be available at the source,
    // but keep in mind that another consumer might have fetched it first,
    // so only process it if TryReceive succeeds:
    if (mySource.TryReceive(out T objectToProcess))
    {
        await ProcessAsync(objectToProcess);
        // if your processing produces output, send the output to your target:
        var myOutput = await ProduceOutput(objectToProcess);
        await myTarget.SendAsync(myOutput);
    }
}
// if here, no input is expected anymore; notify my consumers:
myTarget.Complete();
- construct your producer
- construct all consumers
- give the producer a BufferBlock to send its output to
- start the producer: MyProducer.ProduceOutputAsync(...)
- while the producer produces output and sends it to the buffer block:
  - give the consumers the same BufferBlock
  - start each consumer as a separate task
- await Task.WhenAll(...) to wait for all tasks to complete
Each consumer will stop as soon as it hears that no more input is expected.
After all tasks have completed, your main function can read the results and return.
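The steps above can be sketched end to end. This is a minimal self-contained version with hypothetical Produce/Consume methods (the real producer and consumer classes are whatever your application defines), again assuming the System.Threading.Tasks.Dataflow NuGet package:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // NuGet: System.Threading.Tasks.Dataflow

class Program
{
    static async Task Produce(ITargetBlock<int> target)
    {
        for (int i = 0; i < 10; i++)
            await target.SendAsync(i);
        target.Complete();                         // tell the consumers we are done
    }

    static async Task<int> Consume(IReceivableSourceBlock<int> source)
    {
        int sum = 0;
        while (await source.OutputAvailableAsync())
            if (source.TryReceive(out int item))   // another consumer may have taken it
                sum += item;
        return sum;
    }

    static async Task Main()
    {
        var buffer = new BufferBlock<int>();       // shared by producer and all consumers

        var producer = Produce(buffer);
        var consumers = new List<Task<int>>
        {
            Consume(buffer),                       // both consumers read the same block
            Consume(buffer),
        };

        await Task.WhenAll(producer, Task.WhenAll(consumers));
        // each item is received by exactly one consumer, so the partial sums add up
        Console.WriteLine($"total = {consumers[0].Result + consumers[1].Result}"); // total = 45
    }
}
```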
