What's the point of await DoSomethingAsync [closed] - c#

I'm trying to wrap my head around all of the Async stuff that's been added into the .NET framework with the more recent versions. I understand some of it, but to be honest, personally I don't think it makes writing asynchronous code easier. I find it rather confusing most of the time and actually harder to read than the more conventional approaches that we used before the advent of async/await.
Anyway, my question is a simple one. I see a lot of code like this:
var stream = await file.readAsStreamAsync()
What's going on here? Isn't this equivalent to just calling the blocking variant of the method, i.e.
var stream = file.readAsStream()
If so, what's the point in using it here like this? It doesn't make the code any easier to read so please tell me what I am missing.

The result of both calls is the same.
The difference is that var stream = file.readAsStream() will block the calling thread until the operation completes.
If the call was made in a GUI app from the UI thread, the application will freeze until the IO completes.
If the call was made in a server application, the blocked thread will not be able to handle other incoming requests. The thread pool will have to create a new thread to 'replace' the blocked one, which is expensive. Scalability will suffer.
On the other hand, var stream = await file.readAsStreamAsync() will not block any thread. The UI thread in a GUI application can keep the application responsive, and a worker thread in a server application can handle other requests.
When the async operation completes, the OS will notify the thread pool and the rest of the method will be executed.
To make all this 'magic' possible, a method with async/await will be compiled into a state machine. Async/await lets you make complicated asynchronous code look almost as simple as synchronous code.

It makes writing asynchronous code enormously easier. As you noted in your own question, it looks as if you were writing the synchronous variant - but it's actually asynchronous.
To understand this, you need to really know what asynchronous and synchronous means. The meaning is really simple - synchronous means in a sequence, one after another. Asynchronous means out of sequence. But that's not the whole picture here - the two words are pretty much useless on their own, most of their meaning comes from context. You need to ask: synchronous with respect to what, exactly?
Let's say you have a Winforms application that needs to read a file. In the button click, you do a File.ReadAllText, and put the results in some textbox - all fine and dandy. The I/O operation is synchronous with respect to your UI - the UI can do nothing while you wait for the I/O operation to complete. Now, the customers start complaining that the UI seems hung for seconds at a time when it reads the file - and Windows flags the application as "Not responding". So you decide to delegate the file reading to a background worker - for example, using BackgroundWorker, or Thread. Now your I/O operation is asynchronous with respect to your UI and everyone is happy - all you had to do is extract your work and run it in its own thread, yay.
Now, this is actually perfectly fine - as long as you're only really doing one such asynchronous operation at a time. However, it does mean you have to explicitly define where the UI thread boundaries are - you need to handle the proper synchronization. Sure, this is pretty simple in Winforms, since you can just use Invoke to marshal UI work back to the UI thread - but what if you need to interact with the UI repeatedly, while doing your background work? Sure, if you just want to publish results continuously, you're fine with BackgroundWorker's ReportProgress - but what if you also want to handle user input?
The beauty of await is that you can easily manage when you're on a background thread, and when you're on a synchronization context (such as the windows forms UI thread):
string line;
while ((line = await streamReader.ReadLineAsync()) != null)
{
    if (line.StartsWith("ERROR:")) tbxLog.AppendLine(line);
    if (line.StartsWith("CRITICAL:"))
    {
        if (MessageBox.Show(line + "\r\n" + "Do you want to continue?",
                            "Critical error", MessageBoxButtons.YesNo) == DialogResult.No)
        {
            return;
        }
    }

    await httpClient.PostAsync(...);
}
This is wonderful - you're basically writing synchronous code as usual, but it's still asynchronous with respect to the UI thread. And the error handling is again exactly the same as with any synchronous code - using, try-finally and friends all work great.
Okay, so you don't need to sprinkle BeginInvoke here and there, what's the big deal? The real big deal is that, without any effort on your part, you actually started using the real asynchronous APIs for all those I/O operations. The thing is, there aren't really any synchronous I/O operations as far as the OS is concerned - when you do that "synchronous" File.ReadAllText, the OS simply posts an asynchronous I/O request, and then blocks your thread until the response comes back. As should be evident, the thread is wasted doing nothing in the meantime - it still uses system resources, it adds a tiny amount of work for the scheduler etc.
Again, in a typical client application, this isn't a big deal. The user doesn't care whether you have one thread or two - the difference isn't really that big. Servers are a different beast entirely, though; where a typical client only has one or two I/O operations at the same time, you want your server to handle thousands! On a typical 32-bit system, you could only fit about 2000 threads with the default stack size in your process - not because of the physical memory requirements, but just by exhausting the virtual address space. 64-bit processes are not as limited, but there's still the fact that starting up new threads and destroying them is rather pricey, and you are now adding considerable work to the OS thread scheduler - just to keep those threads waiting.
But the await-based code doesn't have this problem. It only takes up a thread when it's doing CPU work - waiting on an I/O operation to complete is not CPU work. So you issue that asynchronous I/O request, and your thread goes back to the thread pool. When the response comes, another thread is taken from the thread pool. Suddenly, instead of using thousands of threads, your server is only using a couple (usually about two per CPU core). The memory requirements are lower, the multi-threading overheads are significantly lowered, and your total throughput increases quite a bit.
So - in a client application, await is only really a thing of convenience. In any larger server application, it's a necessity - because suddenly your "start a new thread" approach simply doesn't scale. And the alternative to using await is all those old-school asynchronous APIs, which look nothing like synchronous code, and where handling errors is very tedious and tricky.
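To make that concrete, here is a rough sketch of the older Begin/End (APM) style that paragraph refers to; the stream variable and the processing inside the callback are hypothetical, but the shape is the real FileStream.BeginRead/EndRead pair:
byte[] buffer = new byte[4096];
stream.BeginRead(buffer, 0, buffer.Length, asyncResult =>
{
    try
    {
        // EndRead must be called inside the callback to obtain the result (or the exception).
        int bytesRead = stream.EndRead(asyncResult);
        // ...continue the workflow here, on whatever thread happened to run the callback...
    }
    catch (IOException)
    {
        // Errors surface here, far away from the code that started the read,
        // so an ordinary try/finally around the original call no longer helps.
    }
}, null);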

var stream = await file.readAsStreamAsync();
DoStuff(stream);
is conceptually more like
file.readAsStreamAsync(stream => {
    DoStuff(stream);
});
where the lambda is automatically called when the stream has been fully read. You can see this is quite different from the blocking code.
If you're building a UI application for example, and implementing a button handler:
private async void HandleClick(object sender, EventArgs e)
{
    ShowProgressIndicator();
    var response = await GetStuffFromTheWebAsync();
    DoStuff(response);
    HideProgressIndicator();
}
This is drastically different from the similar synchronous code:
private void HandleClick(object sender, EventArgs e)
{
    ShowProgressIndicator();
    var response = GetStuffFromTheWeb();
    DoStuff(response);
    HideProgressIndicator();
}
Because in the second code the UI will lock up and you'll never see the progress indicator (or at best it'll flash briefly) since the UI thread will be blocked until the entire click handler is completed. In the first code the progress indicator shows and then the UI thread gets to run again while the web call happens in the background, and then when the web call completes the DoStuff(response); HideProgressIndicator(); code gets scheduled on the UI thread and it nicely finishes its work and hides the progress indicator.

What's going on here? Isn't this equivalent to just calling the
blocking variant of the method, i.e.
No, it is not a blocking call. It is syntactic sugar: the compiler turns the method into a state machine, which is used at runtime to execute your code asynchronously.
It makes your code more readable and almost identical to code that runs synchronously.

It looks like you're missing what this whole async/await concept is about.
The async keyword tells the compiler that the method may need to perform asynchronous operations and therefore can't simply be executed like any other method; instead it is compiled into a state machine. The effect is that only part of the method runs first (call it Part 1); then an asynchronous operation is started and the calling thread is released, with Part 2 scheduled to run on the first available ThreadPool thread once the operation completes. If the asynchronous operation is not marked with the await keyword, it is not awaited, and the calling thread just keeps going until the method finishes. In most cases this is not what you want - that's when you need the await keyword.
So the typical scenario is:
Thread 1 enters the async method and executes Part 1 ->
Thread 1 starts the async operation ->
Thread 1 is released; while the operation is underway, Part 2 is scheduled on the ThreadPool ->
Some thread (most likely the same Thread 1, if it's free) continues running the method to its end (Part 2)
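As a minimal sketch of that scenario (BuildRequest, SendAsync and Handle are hypothetical placeholders):
async Task ProcessAsync()
{
    // Part 1: runs synchronously on the calling thread (Thread 1).
    var request = BuildRequest();

    // The await starts the asynchronous operation and releases Thread 1.
    var response = await SendAsync(request);

    // Part 2: runs after the operation completes, typically on a thread-pool thread
    // (or back on the captured context, e.g. the UI thread).
    Handle(response);
}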

Related

Is it bad practice to have async call within Task.Run?

I have a C# console app processing around 100,000 JSON messages from RabbitMQ every 1 min
After getting each/a bunch of messages from RabbitMQ I then call
await Task.Run(async () =>
{
    // do lots of CPU stuff here, including 2 external API calls using await
});
Everything I've read says use await Task.Run for CPU bound operations. And use await async for the HTTP external calls.
If I change it to:
await Task.Run(() =>
Then it complains as I have an async API call in the lines below, so it needs the async keyword in the Task.Run statement.
There are about 2000+ lines of code in this section (complex if-then business rules), and sometimes the API call is not needed.
So I'm faced with either a massive restructure of the application, with lots of testing needed, or, if it's OK to do API calls alongside the CPU-bound operations, I'll leave it as is.
To summarise, is this bad practice, or is it ok to have CPU bound work and API calls inside the same task? The task is processing one JSON message.
Everything I've read says use await task.run for cpu bound operations . And use await async for the http external calls
The general guideline is to use async/await for I/O. Task.Run is useful for CPU-bound operations if you need to offload them from a UI thread. For example, in server scenarios such as ASP.NET, you wouldn't want to use Task.Run for CPU-bound code. This is because ASP.NET already schedules your code on a separate thread pool thread.
In your case, you have a Console application, which doesn't have a UI thread. But it also doesn't have that automatic scheduling onto a thread pool thread that ASP.NET gives you.
if its ok to do api calls alongside the cpu bound operations then i'll leave it as is.
This is fine either way. Since the code is awaiting the Task.Run, it won't continue (presumably processing the next message) until the operation completes on another thread pool thread. So the Task.Run isn't helping much, but it isn't hurting much, either.
If you need more performance - specifically, processing messages concurrently - then you should look into something like TPL Dataflow or System.Threading.Channels that would allow you to replace the Task.Run with a queue of work that can run in parallel. That would give you something more like what ASP.NET provides out of the box.
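As a rough sketch of the Channels idea (assuming .NET Core 3.0+ for ReadAllAsync/await foreach; ProcessMessageAsync is a hypothetical stand-in for the existing per-message work):
var channel = Channel.CreateBounded<string>(1000);

// Producer side: the RabbitMQ handler just writes each JSON message into the channel.
async Task EnqueueAsync(string json) => await channel.Writer.WriteAsync(json);

// Consumer side: a few long-running readers processing messages concurrently -
// roughly the kind of scheduling ASP.NET would otherwise give you for free.
Task[] consumers = Enumerable.Range(0, Environment.ProcessorCount)
    .Select(_ => Task.Run(async () =>
    {
        await foreach (string json in channel.Reader.ReadAllAsync())
        {
            await ProcessMessageAsync(json); // the existing CPU work plus the awaited API calls
        }
    }))
    .ToArray();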
General
(...) use await Task.Run for CPU bound operations. And use await async for the HTTP external calls.
This advice comes from the fact that if you run code that doesn't 'let go' enough, you may not get much of the benefit Tasks give, because the current thread / thread pool just cannot handle the work. In the extreme case, when you run 100% synchronous code you won't get any parallelism, because the current thread cannot let go and cannot do any other work - your tasks would be executed sequentially. It's important to remember that this is not a style issue; running busy synchronous code with Tasks just doesn't work well in some scenarios. In this sense the problem polices itself: if you structure the solution incorrectly, it doesn't do what you need.
If you run a mixture of busy and waiting then Task.Run may or may not be great and it will depend on the specific workload. If it works for you, it's fine - you're not doing anything incorrect.
Generally the picture is nuanced, and tasks can be and are used to do all kinds of jobs. In certain circumstances the situation is clear-cut - e.g. if you run long-running work on the UI thread you will lock up the UI (which is bad), or if you have long-running, busy synchronous code. It's worth keeping in mind that this was a problem before C# had Tasks, too.
BTW, if you look at the reference documentation for the Task.WhenAll method, it contains examples with both I/O (ping) and CPU (a dummy for loop) style work. Yes, these are toy examples, but they show it isn't incorrect to run both types of work with tasks.
Parallel.ForEachAsync?
If you can use .NET 6, Parallel.ForEachAsync could improve performance of your solution and/or make the code look cleaner. Example on Twitter (as picture!).
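A minimal sketch of that approach on .NET 6 (messages and ProcessMessageAsync are hypothetical placeholders for the scenario above):
await Parallel.ForEachAsync(
    messages,
    new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount },
    async (json, cancellationToken) =>
    {
        // Each message gets its own worker; the CPU-bound rules and the awaited API calls both fit here.
        await ProcessMessageAsync(json);
    });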

How a program is executed asynchronously: [duplicate]

I thought that they were basically the same thing — writing programs that split tasks between processors (on machines that have 2+ processors). Then I'm reading this, which says:
Async methods are intended to be non-blocking operations. An await
expression in an async method doesn’t block the current thread while
the awaited task is running. Instead, the expression signs up the rest
of the method as a continuation and returns control to the caller of
the async method.
The async and await keywords don't cause additional threads to be
created. Async methods don't require multithreading because an async
method doesn't run on its own thread. The method runs on the current
synchronization context and uses time on the thread only when the
method is active. You can use Task.Run to move CPU-bound work to a
background thread, but a background thread doesn't help with a process
that's just waiting for results to become available.
and I'm wondering whether someone can translate that to English for me. It seems to draw a distinction between asynchronicity (is that a word?) and threading and imply that you can have a program that has asynchronous tasks but no multithreading.
Now I understand the idea of asynchronous tasks such as the example on pg. 467 of Jon Skeet's C# In Depth, Third Edition
async void DisplayWebsiteLength(object sender, EventArgs e)
{
    label.Text = "Fetching ...";
    using (HttpClient client = new HttpClient())
    {
        Task<string> task = client.GetStringAsync("http://csharpindepth.com");
        string text = await task;
        label.Text = text.Length.ToString();
    }
}
The async keyword means "This function, whenever it is called, will not be called in a context in which its completion is required for everything after its call to be called."
In other words, writing it in the middle of some task
int x = 5;
DisplayWebsiteLength();
double y = Math.Pow((double)x,2000.0);
, since DisplayWebsiteLength() has nothing to do with x or y, will cause DisplayWebsiteLength() to be executed "in the background", like
processor 1 | processor 2
-------------------------------------------------------------------
int x = 5; | DisplayWebsiteLength()
double y = Math.Pow((double)x,2000.0); |
Obviously that's a stupid example, but am I correct or am I totally confused or what?
(Also, I'm confused about why sender and e aren't ever used in the body of the above function.)
Your misunderstanding is extremely common. Many people are taught that multithreading and asynchrony are the same thing, but they are not.
An analogy usually helps. You are cooking in a restaurant. An order comes in for eggs and toast.
Synchronous: you cook the eggs, then you cook the toast.
Asynchronous, single threaded: you start the eggs cooking and set a timer. You start the toast cooking, and set a timer. While they are both cooking, you clean the kitchen. When the timers go off you take the eggs off the heat and the toast out of the toaster and serve them.
Asynchronous, multithreaded: you hire two more cooks, one to cook eggs and one to cook toast. Now you have the problem of coordinating the cooks so that they do not conflict with each other in the kitchen when sharing resources. And you have to pay them.
Now does it make sense that multithreading is only one kind of asynchrony? Threading is about workers; asynchrony is about tasks. In multithreaded workflows you assign tasks to workers. In asynchronous single-threaded workflows you have a graph of tasks where some tasks depend on the results of others; as each task completes it invokes the code that schedules the next task that can run, given the results of the just-completed task. But you (hopefully) only need one worker to perform all the tasks, not one worker per task.
It will help to realize that many tasks are not processor-bound. For processor-bound tasks it makes sense to hire as many workers (threads) as there are processors, assign one task to each worker, assign one processor to each worker, and have each processor do the job of nothing else but computing the result as quickly as possible. But for tasks that are not waiting on a processor, you don't need to assign a worker at all. You just wait for the message to arrive that the result is available and do something else while you're waiting. When that message arrives then you can schedule the continuation of the completed task as the next thing on your to-do list to check off.
So let's look at Jon's example in more detail. What happens?
Someone invokes DisplayWebSiteLength. Who? We don't care.
It sets a label, creates a client, and asks the client to fetch something. The client returns an object representing the task of fetching something. That task is in progress.
Is it in progress on another thread? Probably not. Read Stephen's article on why there is no thread.
Now we await the task. What happens? We check to see if the task has completed between the time we created it and we awaited it. If yes, then we fetch the result and keep running. Let's suppose it has not completed. We sign up the remainder of this method as the continuation of that task and return.
Now control has returned to the caller. What does it do? Whatever it wants.
Now suppose the task completes. How did it do that? Maybe it was running on another thread, or maybe the caller that we just returned to allowed it to run to completion on the current thread. Regardless, we now have a completed task.
The completed task asks the correct thread -- again, likely the only thread -- to run the continuation of the task.
Control passes immediately back into the method we just left at the point of the await. Now there is a result available so we can assign text and run the rest of the method.
It's just like in my analogy. Someone asks you for a document. You send away in the mail for the document, and keep on doing other work. When it arrives in the mail you are signalled, and when you feel like it, you do the rest of the workflow -- open the envelope, pay the delivery fees, whatever. You don't need to hire another worker to do all that for you.
In-browser Javascript is a great example of an asynchronous program that has no multithreading.
You don't have to worry about multiple pieces of code touching the same objects at the same time: each function will finish running before any other javascript is allowed to run on the page. (Update: Since this was written, JavaScript has added async functions and generator functions. These functions do not always run to completion before any other javascript is executed: whenever they reach a yield or await keyword, they yield execution to other javascript, and can continue execution later, similar to C#'s async methods.)
However, when doing something like an AJAX request, no code is running at all, so other javascript can respond to things like click events until that request comes back and invokes the callback associated with it. If one of these other event handlers is still running when the AJAX request gets back, its handler won't be called until they're done. There's only one JavaScript "thread" running, even though it's possible for you to effectively pause the thing you were doing until you have the information you need.
In C# applications, the same thing happens any time you're dealing with UI elements--you're only allowed to interact with UI elements when you're on the UI thread. If the user clicked a button, and you wanted to respond by reading a large file from the disk, an inexperienced programmer might make the mistake of reading the file within the click event handler itself, which would cause the application to "freeze" until the file finished loading because it's not allowed to respond to any more clicking, hovering, or any other UI-related events until that thread is freed.
One option programmers might use to avoid this problem is to create a new thread to load the file, and then tell that thread's code that when the file is loaded it needs to run the remaining code on the UI thread again so it can update UI elements based on what it found in the file. Until recently, this approach was very popular because it was what the C# libraries and language made easy, but it's fundamentally more complicated than it has to be.
If you think about what the CPU is doing when it reads a file at the level of the hardware and Operating System, it's basically issuing an instruction to read pieces of data from the disk into memory, and to hit the operating system with an "interrupt" when the read is complete. In other words, reading from disk (or any I/O really) is an inherently asynchronous operation. The concept of a thread waiting for that I/O to complete is an abstraction that the library developers created to make it easier to program against. It's not necessary.
Now, most I/O operations in .NET have a corresponding ...Async() method you can invoke, which returns a Task almost immediately. You can add callbacks to this Task to specify code that you want to have run when the asynchronous operation completes. You can also specify which thread you want that code to run on, and you can provide a token which the asynchronous operation can check from time to time to see if you decided to cancel the asynchronous task, giving it the opportunity to stop its work quickly and gracefully.
Until the async/await keywords were added, C# was much more obvious about how callback code gets invoked, because those callbacks were in the form of delegates that you associated with the task. In order to still give you the benefit of using the ...Async() operation, while avoiding complexity in code, async/await abstracts away the creation of those delegates. But they're still there in the compiled code.
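For illustration, a rough sketch of that delegate/callback form (the file path and OnFileLoaded are hypothetical; File.ReadAllTextAsync assumes .NET Core / .NET 5+):
var cts = new CancellationTokenSource();

Task<string> readTask = File.ReadAllTextAsync(@"C:\data\big.txt", cts.Token);

readTask.ContinueWith(
    t => OnFileLoaded(t.Result),                    // the callback delegate
    cts.Token,
    TaskContinuationOptions.OnlyOnRanToCompletion,  // only if the read succeeded
    TaskScheduler.Default);                         // or a UI scheduler, to choose the thread

// With async/await the compiler builds an equivalent continuation for you:
// string text = await File.ReadAllTextAsync(@"C:\data\big.txt", cts.Token);
// OnFileLoaded(text);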
So you can have your UI event handler await an I/O operation, freeing up the UI thread to do other things, and more-or-less automatically returning to the UI thread once you've finished reading the file--without ever having to create a new thread.

In "truly asynchronous" code that uses async-await, isn't there still SOME "thread" that is stuck waiting?

Let's say I properly use async-await, like
await client.GetStringAsync("http://stackoverflow.com");
I understand that the thread that invokes the await becomes "free", that is, something further up the call chain isn't stuck executing some loop equivalent to
bool done = false;
string html = null;
for(; !done; done = GetStringIfAvailable(ref html));
which is what it would be doing if I called the synchronous version of GetStringAsync (probably called GetString by convention).
However, here's where I get confused. Even if the calling thread or any other thread in application's pool of available threads isn't blocked with such a loop, then something is, because, as I understand, at a low level there is always polling going on. So, instead of lowering the total amount of work, I'm simply pushing work to something "beneath" my application's threads ... or something like that.
Can someone clear this up for me?
No.
The compiler will convert methods that use async / await in to state machines that can be broken up in to multiple steps. Once an await is hit, the state of the method is stored and execution is "offloaded" back to the thread that called it. If the task is waiting on things like disk IO, the OS kernel will end up relying on physical CPU interrupts to let the kernel know when to signal the application to resume processing. The state of the pending method is loaded, and queued up on an available thread (the same thread that hit the await if ConfigureAwait is true, or any free thread if false) (This last part isn't exactly right, please see Scott Chamberlain's comments below.). Think of it like an event, where the application asks the hardware to "ping" it once the work is done, while the application gets back to doing whatever it was doing before.
There are some cases where a new thread is spun up to do the work, such as Task.Run which does the work on a ThreadPool thread, but no thread is blocking while awaiting it to complete.
It is important to keep in mind that asynchronous operations using async/await are all about pausing, storing, retrieving, and resuming that state machine. It doesn't really care what happens inside the Task; what happens there, and how it happens, isn't directly related to async/await.
I was very confused by async / await too, until I really understood how the method is converted to a state-machine. Reading up on exactly what your async methods get converted to by the compiler might help.
You're pushing it off onto the operating system--which will run some other thread if it can rather than simply wait. It only ends up in a busy-wait when it can't find any thread that wants to run.

Confusion regarding threads and if asynchronous methods are truly asynchronous in C#

I was reading up on async/await and when Task.Yield might be useful and came across this post. I had a question regarding the below from that post:
When you use async/await, there is no guarantee that the method you
call when you do await FooAsync() will actually run asynchronously.
The internal implementation is free to return using a completely
synchronous path.
This is a little unclear to me probably because the definition of asynchronous in my head is not lining up.
In my mind, since I do mainly UI dev, async code is code that does not run on the UI thread, but on some other thread. I guess in the text I quoted, a method is not truly async if it blocks on any thread (even if it's a thread pool thread for example).
Question:
If I have a long running task that is CPU bound (let's say it is doing a lot of hard math), then running that task asynchronously must be blocking some thread right? Something has to actually do the math. If I await it then some thread is getting blocked.
What is an example of a truly asynchronous method and how would they actually work? Are those limited to I/O operations which take advantage of some hardware capabilities so no thread is ever blocked?
This is a little unclear to me probably because the definition of asynchronous in my head is not lining up.
Good on you for asking for clarification.
In my mind, since I do mainly UI dev, async code is code that does not run on the UI thread, but on some other thread.
That belief is common but false. There is no requirement that asynchronous code run on any second thread.
Imagine that you are cooking breakfast. You put some toast in the toaster, and while you are waiting for the toast to pop, you go through your mail from yesterday, pay some bills, and hey, the toast popped up. You finish paying that bill and then go butter your toast.
Where in there did you hire a second worker to watch your toaster?
You didn't. Threads are workers. Asynchronous workflows can happen all on one thread. The point of the asynchronous workflow is to avoid hiring more workers if you can possibly avoid it.
If I have a long running task that is CPU bound (let's say it is doing a lot of hard math), then running that task asynchronously must be blocking some thread right? Something has to actually do the math.
Here, I'll give you a hard problem to solve. Here's a column of 100 numbers; please add them up by hand. So you add the first to the second and make a total. Then you add the running total to the third and get a total. Then, oh, hell, the second page of numbers is missing. Remember where you were, and go make some toast. Oh, while the toast was toasting, a letter arrived with the remaining numbers. When you're done buttering the toast, go keep on adding up those numbers, and remember to eat the toast the next time you have a free moment.
Where is the part where you hired another worker to add the numbers? Computationally expensive work need not be synchronous, and need not block a thread. The thing that makes computational work potentially asynchronous is the ability to stop it, remember where you were, go do something else, remember what to do after that, and resume where you left off.
Now it is certainly possible to hire a second worker who does nothing but add numbers, and then is fired. And you could ask that worker "are you done?" and if the answer is no, you could go make a sandwich until they are done. That way both you and the worker are busy. But there is not a requirement that asynchrony involve multiple workers.
If I await it then some thread is getting blocked.
NO NO NO. This is the most important part of your misunderstanding. await does not mean "go start this job asynchronously". await means "I have an asynchronously produced result here that might not be available. If it is not available, find some other work to do on this thread so that we are not blocking the thread." Await is the opposite of what you just said.
What is an example of a truly asynchronous method and how would they actually work? Are those limited to I/O operations which take advantage of some hardware capabilities so no thread is ever blocked?
Asynchronous work often involves custom hardware or multiple threads, but it need not.
Don't think about workers. Think about workflows. The essence of asynchrony is breaking up workflows into little parts such that you can determine the order in which those parts must happen, and then executing each part in turn, but allowing parts that do not have dependencies with each other to be interleaved.
In an asynchronous workflow you can easily detect places in the workflow where a dependency between parts is expressed. Such parts are marked with await. That's the meaning of await: the code which follows depends upon this portion of the workflow being completed, so if it is not completed, go find some other task to do, and come back here later when the task is completed. The whole point is to keep the worker working, even in a world where needed results are being produced in the future.
I was reading up on async/await
May I recommend my async intro?
and when Task.Yield might be useful
Almost never. I find it occasionally useful when doing unit testing.
In my mind, since I do mainly UI dev, async code is code that does not run on the UI thread, but on some other thread.
Asynchronous code can be threadless.
I guess in the text I quoted, a method is not truly async if it blocks on any thread (even if it's a thread pool thread for example).
I would say that's correct. I use the term "truly async" for operations that do not block any threads (and that are not synchronous). I also use the term "fake async" for operations that appear asynchronous but only work that way because they run on or block a thread pool thread.
If I have a long running task that is CPU bound (let's say it is doing a lot of hard math), then running that task asynchronously must be blocking some thread right? Something has to actually do the math.
Yes; in this case, you would want to define that work with a synchronous API (since it is synchronous work), and then you can call it from your UI thread using Task.Run, e.g.:
var result = await Task.Run(() => MySynchronousCpuBoundCode());
If I await it then some thread is getting blocked.
No; the thread pool thread would be used to run the code (not actually blocked), and the UI thread is asynchronously waiting for that code to complete (also not blocked).
What is an example of a truly asynchronous method and how would they actually work?
NetworkStream.WriteAsync (indirectly) asks the network card to write out some bytes. There is no thread responsible for writing out the bytes one at a time and waiting for each byte to be written. The network card handles all of that. When the network card is done writing all the bytes, it (eventually) completes the task returned from WriteAsync.
Are those limited to I/O operations which take advantage of some hardware capabilities so no thread is ever blocked?
Not entirely, although I/O operations are the easy examples. Another fairly easy example is timers (e.g., Task.Delay). Though you can build a truly asynchronous API around any kind of "event".
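For example, one common pattern is to wrap an event in a TaskCompletionSource; a minimal sketch, where Receiver, MessageReceived and MessageEventArgs are hypothetical stand-ins for any event source:
Task<string> WhenMessageReceivedAsync(Receiver receiver)
{
    var tcs = new TaskCompletionSource<string>(TaskCreationOptions.RunContinuationsAsynchronously);

    void Handler(object sender, MessageEventArgs e)
    {
        receiver.MessageReceived -= Handler; // one-shot: unsubscribe, then complete the task
        tcs.TrySetResult(e.Text);
    }

    receiver.MessageReceived += Handler;
    return tcs.Task; // no thread is blocked; callers can simply await this task
}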
When you use async/await, there is no guarantee that the method you call when you do await FooAsync() will actually run asynchronously. The internal implementation is free to return using a completely synchronous path.
This is a little unclear to me probably because the definition of
asynchronous in my head is not lining up.
This simply means there are two cases when calling an async method.
The first is that, upon returning the task to you, the operation is already completed -- this would be a synchronous path. The second is that the operation is still in progress -- this is the async path.
Consider this code, which should show both of these paths. If the key is in a cache, it is returned synchronously. Otherwise, an async op is started which calls out to a database:
Task<T> GetCachedDataAsync(string key)
{
    if (cache.TryGetValue(key, out T value))
    {
        return Task.FromResult(value); // synchronous: no awaits here.
    }

    // start a fully async op.
    return GetDataImpl();

    async Task<T> GetDataImpl()
    {
        value = await database.GetValueAsync(key);
        cache[key] = value;
        return value;
    }
}
So, understanding that, you can deduce that in theory the call to database.GetValueAsync() may contain similar code and itself be able to return synchronously: even your async path may end up running 100% synchronously. But your code doesn't need to care: async/await handles both cases seamlessly.
If I have a long running task that is CPU bound (let's say it is doing a lot of hard math), then running that task asynchronously must be blocking some thread right? Something has to actually do the math. If I await it then some thread is getting blocked.
Blocking is a well-defined term -- it means your thread has yielded its execution window while it waits for something (I/O, mutex, and so on). So your thread doing the math is not considered blocked: it is actually performing work.
What is an example of a truly asynchronous method and how would they actually work? Are those limited to I/O operations which take advantage of some hardware capabilities so no thread is ever blocked?
A "truly async method" would be one that simply never blocks. It typically ends up involving I/O, but it can also mean awaiting your heavy math code when you want to your current thread for something else (as in UI development) or when you're trying to introduce parallelism:
async Task<double> DoSomethingAsync()
{
    double x = await ReadXFromFile();
    Task<double> a = LongMathCodeA(x);
    Task<double> b = LongMathCodeB(x);
    await Task.WhenAll(a, b);
    return a.Result + b.Result;
}
This topic is fairly vast and several discussions may arise. However, using async and await in C# is considered asynchronous programming; how asynchrony actually works is a totally different discussion. Until .NET 4.5 there were no async and await keywords, and developers had to program directly against the Task Parallel Library (TPL). There the developer had full control over when and how to create new tasks and even threads. However, this had a downside: unless you were really an expert on the topic, applications could suffer from heavy performance problems and bugs due to race conditions between threads and so on.
Starting with .NET 4.5 the async and await keywords were introduced, with a new approach to asynchronous programming. The async and await keywords don't cause additional threads to be created. Async methods don't require multithreading because an async method doesn't run on its own thread. The method runs on the current synchronization context and uses time on the thread only when the method is active. You can use Task.Run to move CPU-bound work to a background thread, but a background thread doesn't help with a process that's just waiting for results to become available.
The async-based approach to asynchronous programming is preferable to existing approaches in almost every case. In particular, this approach is better than BackgroundWorker for IO-bound operations because the code is simpler and you don't have to guard against race conditions. You can read more about this topic HERE.
I don't consider myself a C# black belt and some more experienced developers may raise some further discussions, but as a principle I hope that I managed to answer your question.
Asynchronous does not imply Parallel
Asynchronous only implies concurrency. In fact, even using explicit threads doesn't guarantee that they will execute simultaneously (for example when the threads have affinity for the same single core, or, more commonly, when there is only one core in the machine to begin with).
Therefore, you should not expect an asynchronous operation to happen simultaneously with something else. Asynchronous only means that it will happen, eventually, at another time (a (Greek) = without, syn (Greek) = together, khronos (Greek) = time => asynchronous = not happening at the same time).
Note: The idea of asynchronicity is that on the invocation you do not care when the code will actually run. This allows the system to take advantage of parallelism, if possible, to execute the operation. It may even run immediately. It could even happen on the same thread... more on that later.
When you await the asynchronous operation, you are creating concurrency (com (latin) = together, currere (latin) = run. => "Concurrent" = to run together). That is because you are asking for the asynchronous operation to reach completion before moving on. We can say the execution converges. This is similar to the concept of joining threads.
When asynchronous cannot be Parallel
When you use async/await, there is no guarantee that the method you call when you do await FooAsync() will actually run asynchronously. The internal implementation is free to return using a completely synchronous path.
This can happen in three ways:
It is possible to use await on anything that returns Task. When you receive the Task it could have already been completed.
Yet, that alone does not imply it ran synchronously. In fact, it suggests it ran asynchronously and finished before you got the Task instance.
Keep in mind that you can await on an already completed task:
private static async Task CallFooAsync()
{
    await FooAsync();
}

private static Task FooAsync()
{
    return Task.CompletedTask;
}

private static void Main()
{
    CallFooAsync().Wait();
}
Also, if an async method has no await it will run synchronously.
Note: As you already know, a method that returns a Task may be waiting on the network, or on the file system, etc… doing so does not imply to start a new Thread or enqueue something on the ThreadPool.
Under a synchronization context that is handled by a single thread, the result will be to execute the Task synchronously, with some overhead. This is the case of the UI thread, I'll talk more about what happens below.
It is possible to write a custom TaskScheduler that always runs tasks synchronously, on the same thread that does the invocation.
Note: recently I wrote a custom SynchronizationContext that runs tasks on a single thread. You can find it at Creating a (System.Threading.Tasks.)Task scheduler. It would result in such a TaskScheduler with a call to FromCurrentSynchronizationContext.
The default TaskScheduler will enqueue the invocations to the ThreadPool. Yet when you await on the operation, if it has not run on the ThreadPool it will try to remove it from the ThreadPool and run it inline (on the same thread that is waiting... the thread is waiting anyway, so it is not busy).
Note: One notable exception is a Task marked with LongRunning. LongRunning Tasks will run on a separate thread.
Your question
If I have a long running task that is CPU bound (let's say it is doing a lot of hard math), then running that task asynchronously must be blocking some thread right? Something has to actually do the math. If I await it then some thread is getting blocked.
If you are doing computations, they must happen on some thread, that part is right.
Yet the beauty of async and await is that the waiting thread does not have to be blocked (more on that later). Still, it is very easy to shoot yourself in the foot by having the awaited task scheduled to run on the same thread that is waiting, resulting in synchronous execution (an easy mistake to make on the UI thread).
One of the key characteristics of async and await is that they take the SynchronizationContext from the caller. For most threads that results in using the default TaskScheduler (which, as mentioned earlier, uses the ThreadPool). However, for the UI thread it means posting the tasks into the message queue, which means they will run on the UI thread. The advantage of this is that you don't have to use Invoke or BeginInvoke to access UI components.
Before I go into how to await a Task from the UI thread without blocking it, I want to note that it is possible to implement a TaskScheduler where, if you wait on a Task, you don't block your thread or let it go idle; instead you let your thread pick another Task that is waiting for execution. When I was backporting Tasks to .NET 2.0 I experimented with this.
What is an example of a truly asynchronous method and how would they actually work? Are those limited to I/O operations which take advantage of some hardware capabilities so no thread is ever blocked?
You seem to confuse asynchronous with not blocking a thread. If what you want is an example of asynchronous operations in .NET that do not require blocking a thread, a way to do it that you may find easy to grasp is to use continuations instead of await. And for the continuations that you need to run on the UI thread, you can use TaskScheduler.FromCurrentSynchronizationContext.
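A minimal sketch of that idea (LoadDataAsync and label1 are hypothetical); the continuation is explicitly scheduled back onto the UI thread instead of being awaited:
// Capture the scheduler while still on the UI thread.
var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

LoadDataAsync().ContinueWith(
    t => label1.Text = t.Result,  // touches the UI, so it must run on the UI thread
    uiScheduler);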
Do not implement fancy spin waiting. And by that I mean using a Timer, Application.Idle or anything like that.
When you use async you are telling the compiler to rewrite the code of the method in a way that allows breaking it. The result is similar to continuations, with a much more convenient syntax. When the thread reaches an await the Task will be scheduled, and the thread is free to continue after the current async invocation (out of the method). When the Task is done, the continuation (after the await) is scheduled.
For the UI thread this means that once it reaches await, it is free to continue to process messages. Once the awaited Task is done, the continuation (after the await) will be scheduled. As a result, reaching await doesn’t imply to block the thread.
Yet blindly adding async and await won’t fix all your problems.
I submit to you an experiment. Get a new Windows Forms application, drop in a Button and a TextBox, and add the following code:
private async void button1_Click(object sender, EventArgs e)
{
    await WorkAsync(5000);
    textBox1.Text = "DONE";
}

private async Task WorkAsync(int milliseconds)
{
    Thread.Sleep(milliseconds);
}
It blocks the UI. What happens is that, as mentioned earlier, await automatically uses the SynchronizationContext of the caller thread. In this case, that is the UI thread. Therefore, WorkAsync will run on the UI thread.
This is what happens:
The UI thread gets the click message and calls the click event handler
In the click event handler, the UI thread reaches await WorkAsync(5000)
WorkAsync(5000) (and scheduling its continuation) is scheduled to run on the current synchronization context, which is the UI thread synchronization context… meaning that it posts a message to execute it
The UI thread is now free to process further messages
The UI thread picks the message to execute WorkAsync(5000) and schedules its continuation
The UI thread calls WorkAsync(5000) with continuation
In WorkAsync, the UI thread runs Thread.Sleep. The UI is now unresponsive for 5 seconds.
The continuation schedules the rest of the click event handler to run, this is done by posting another message for the UI thread
The UI thread is now free to process further messages
The UI thread picks the message to continue in the click event handler
The UI thread updates the textbox
The result is synchronous execution, with overhead.
Yes, you should use Task.Delay instead. That is not the point; consider Sleep a stand-in for some computation. The point is that just using async and await everywhere won't give you an application that is automatically parallel. It is much better to pick what you want to run on a background thread (e.g. on the ThreadPool) and what you want to run on the UI thread.
Now, try the following code:
private async void button1_Click(object sender, EventArgs e)
{
    await Task.Run(() => Work(5000));
    textBox1.Text = "DONE";
}

private void Work(int milliseconds)
{
    Thread.Sleep(milliseconds);
}
You will find that await does not block the UI. This is because in this case Thread.Sleep is now running on the ThreadPool thanks to Task.Run. And thanks to button1_Click being async, once the code reaches await the UI thread is free to continue working. After the Task is done, the code will resume after the await thanks to the compiler rewriting the method to allow precisely that.
This is what happens:
The UI thread gets the click message and calls the click event handler
In the click event handler, the UI thread reaches await Task.Run(() => Work(5000))
Task.Run(() => Work(5000)) (and scheduling its continuation) is scheduled to run on the current synchronization context, which is the UI thread synchronization context… meaning that it posts a message to execute it
The UI thread is now free to process further messages
The UI thread picks the message to execute Task.Run(() => Work(5000)) and schedules its continuation when done
The UI thread calls Task.Run(() => Work(5000)) with continuation, this will run on the ThreadPool
The UI thread is now free to process further messages
When the ThreadPool finishes, the continuation schedules the rest of the click event handler to run; this is done by posting another message to the UI thread. When the UI thread picks up that message to continue in the click event handler, it updates the textbox.
Here's asynchronous code which shows how async/await allows code to yield and release control to another flow, then resume control, without needing a thread.
public static async Task<string> Foo()
{
    Console.WriteLine("In Foo");
    await Task.Yield();
    Console.WriteLine("I'm Back");
    return "Foo";
}

static void Main(string[] args)
{
    var t = new Task(async () =>
    {
        Console.WriteLine("Start");
        var f = Foo();
        Console.WriteLine("After Foo");
        var r = await f;
        Console.WriteLine(r);
    });

    t.RunSynchronously();
    Console.ReadLine();
}
So it's that releasing of control, and resyncing when you want results, that's key with async/await (which works well with threading).
NOTE: No Threads were blocked in the making of this code :)
I think sometimes the confusion comes from "Tasks", which doesn't mean something running on its own thread. A task is just a thing to do; async/await allows tasks to be broken up into stages and coordinates those stages into a flow.
It's kind of like cooking: you follow the recipe. You need to do all the prep work before assembling the dish for cooking. So you turn on the oven, start cutting things, grating things, etc. Then you await the temperature of the oven and await the prep work. You could do it all by yourself, swapping between tasks in a way that seems logical (tasks / async / await), but you can get someone else to help grate cheese while you chop carrots (threads) to get things done faster.
Stephen's answer is already great, so I'm not going to repeat what he said; I've done my fair share of repeating the same arguments many times on Stack Overflow (and elsewhere).
Instead, let me focus on one important abstract thing about asynchronous code: it's not an absolute qualifier. There is no point in saying a piece of code is asynchronous - it's always asynchronous with respect to something else. This is quite important.
The purpose of await is to build synchronous workflows on top of asynchronous operations and some connecting synchronous code. Your code appears perfectly synchronous[1] to the code itself.
var a = await A();
await B(a);
The ordering of events is specified by the await invocations. B uses the return value of A, which means A must have run before B. The method containing this code has a synchronous workflow, and the two methods A and B are synchronous with respect to each other.
This is very useful, because synchronous workflows are usually easier to think about, and more importantly, a lot of workflows simply are synchronous. If B needs the result of A to run, it must run after A[2]. If you need to make an HTTP request to get the URL for another HTTP request, you must wait for the first request to complete; it has nothing to do with thread/task scheduling. Perhaps we could call this "inherent synchronicity", apart from "accidental synchronicity" where you force order on things that do not need to be ordered.
You say:
In my mind, since I do mainly UI dev, async code is code that does not run on the UI thread, but on some other thread.
You're describing code that runs asynchronously with respect to the UI. That is certainly a very useful case for asynchrony (people don't like UI that stops responding). But it's just a specific case of a more general principle - allowing things to happen out of order with respect to one another. Again, it's not an absolute - you want some events to happen out of order (say, when the user drags the window or the progress bar changes, the window should still redraw), while others must not happen out of order (the Process button must not be clicked before the Load action finishes). await in this use case isn't that different from using Application.DoEvents in principle - it introduces many of the same problems and benefits.
This is also the part where the original quote gets interesting. The UI needs a thread to be updated. That thread invokes an event handler, which may be using await. Does it mean that the line where await is used will allow the UI to update itself in response to user input? No.
First, you need to understand that await uses its argument, just as if it were a method call. In my sample, A must have already been invoked before the code generated by await can do anything, including "releasing control back to the UI loop". The return value of A is Task<T> instead of just T, representing a "possible value in the future" - and await-generated code checks to see if the value is already there (in which case it just continues on the same thread) or not (which means we get to release the thread back to the UI loop). But in either case, the Task<T> value itself must have been returned from A.
Consider this implementation:
public async Task<int> A()
{
    Thread.Sleep(1000);
    return 42;
}
The caller needs A to return a value (a task of int); since there are no awaits in the method, that means the return 42;. But that cannot happen before the sleep finishes, because the two operations are synchronous with respect to the thread. The caller's thread will be blocked for a second, regardless of whether it uses await or not - the blocking is in A() itself, not in await theTaskResultOfA.
In contrast, consider this:
public async Task<int> A()
{
    await Task.Delay(1000);
    return 42;
}
As soon as the execution gets to the await, it sees that the task being awaited isn't finished yet and returns control back to its caller; and the await in the caller consequently returns control back to its caller. We've managed to make some of the code asynchronous with respect to the UI. The synchronicity between the UI thread and A was accidental, and we removed it.
The important part here is: there's no way to distinguish between the two implementations from the outside without inspecting the code. Only the return type is part of the method signature - it doesn't say the method will execute asynchronously, only that it may. This may be for any number of good reasons, so there's no point in fighting it - for example, there's no point in breaking the thread of execution when the result is already available:
var responseTask = GetAsync("http://www.google.com");
// Do some CPU intensive task
ComputeAllTheFuzz();
var response = await responseTask;
We need to do some work. Some events can run asynchronously with respect to others (in this case, ComputeAllTheFuzz is independent of the HTTP request) and are asynchronous. But at some point, we need to get back to a synchronous workflow (for example, something that requires both the result of ComputeAllTheFuzz and the HTTP request). That's the await point, which synchronizes the execution again (if you had multiple asynchronous workflows, you'd use something like Task.WhenAll). However, if the HTTP request managed to complete before the computation, there's no point in releasing control at the await point - we can simply continue on the same thread. There's been no waste of the CPU - no blocking of the thread; it does useful CPU work. But we didn't give any opportunity for the UI to update.
This is of course why this pattern is usually avoided in more general asynchronous methods. It is useful for some uses of asynchronous code (avoiding wasting threads and CPU time), but not others (keeping the UI responsive). If you expect such a method to keep the UI responsive, you're not going to be happy with the result. But if you use it as part of a web service, for example, it will work great - the focus there is on avoiding wasting threads, not keeping the UI responsive (that's already provided by asynchronously invoking the service endpoint - there's no benefit from doing the same thing again on the service side).
In short, await allows you to write code that is asynchronous with respect to its caller. It doesn't invoke a magical power of asynchronicity, it isn't asynchronous with respect to everything, it doesn't prevent you from using the CPU or blocking threads. It just gives you the tools to easily make a synchronous workflow out of asynchronous operations, and present part of the whole workflow as asynchronous with respect to its caller.
Let's consider a UI event handler. If the individual asynchronous operations happen to not need a thread to execute (e.g. asynchronous I/O), part of the asynchronous method may allow other code to execute on the original thread (and the UI stays responsive in those parts). When the operation needs the CPU/thread again, it may or may not require the original thread to continue the work. If it does, the UI will be blocked again for the duration of the CPU work; if it doesn't (the awaiter specifies this using ConfigureAwait(false)), the UI code will run in parallel. Assuming there are enough resources to handle both, of course. If you need the UI to stay responsive at all times, you cannot use the UI thread for any execution long enough to be noticeable - even if that means you have to wrap an unreliable "usually asynchronous, but sometimes blocks for a few seconds" async method in a Task.Run. There are costs and benefits to both approaches - it's a trade-off, as with all engineering :)
[1] Perfectly, that is, as far as the abstraction holds - every abstraction leaks, and there are plenty of leaks in await and other approaches to asynchronous execution.
[2] A sufficiently smart optimizer might allow some part of B to run, up to the point where the return value of A is actually needed; this is what your CPU does with normal "synchronous" code (out-of-order execution). Such optimizations must preserve the appearance of synchronicity, though - if the CPU misjudges the ordering of operations, it must discard the results and present a correct ordering.

With C# and .NET, why doesn't the runtime automatically operate asynchronously?

This is such a basic (read noob) question regarding the .NET async/await library but I thought I'd ask it anyhow before rewriting our api's to be awaitable.
The Question
Why wouldn't the runtime simply evaluate any given thread that has a lot of idle time and automatically operate asynchronously whenever it gets to a blocking call?
Example of some routine
Web request to some app...
App starts database call...
Wait for response...(idle and long)
Receive recordset...
Return to client...
If I were the runtime environment, wouldn't it be wise to simply jot down that step 3 takes a while so I should use the current thread at that point, during its idle moments, to help out other routines that would normally be waiting for our current thread to be available?
Isn't it possible that at some point in the future we'll be able to toggle a flag in the app.config (or web.config) that says <system.runtime><asyncBehavior enableAsynchronousWhenIdle="true" /></system.runtime>?
Sure it's possible but it completely breaks the current programming model. Before when you had a blocking call you were guaranteed that no other code would run on your thread. This change now allows re-entrant calls on the same thread.
For instance consider this case:
static int _processCount;
static object _lockObj = new object();

public Response ProcessRequest(Request request)
{
    lock (_lockObj)
    {
        _processCount++;
        var savedCount = _processCount;

        // Make long running request

        if (savedCount != _processCount)
            throw new InvalidOperationException("Is my lock broken?");
    }

    return new Response(); // remainder of the handler omitted in the original
}
Before we allow processing requests during the long running process this code is fine, but if we allow new requests to be processed on the thread while it is making a long running request we open up the possibility of this case.
Process Request A
Process Request A waits for the long running operation
The idle processing uses the thread to process Request B.
Request B enters the lock because locks have thread affinity
Request B waits for the long running operation
Request A returns from the long running operation and throws an exception because its state has been corrupted.
So the code needs to be written in such a way that it is aware of the reentrancy potential. There is no way for the Framework to know if your code will break so that change will never happen.
.NET isn't that smart (nor is any other framework). Unless you explicitly program code to run asynchronously, it has no way of knowing whether any particular code should run asynchronously. It can't look at your code and say, "Oh, here's some code that runs a while and blocks the UI thread, so I should run this on a separate thread so that the UI can update." As far as the framework is concerned, you intended it to operate that way, and it has no intelligence to override the way you coded it into something more efficient.
Basically because both synchronous and asynchronous approaches are useful, and sometimes blocking is fine. Asynchrony isn't a silver bullet.
On the other hand, turning all synchronous code into asynchronous operations could break a lot of existing code, both in the base class library (BCL) and in third-party code, because asynchronous operations have to synchronize access to shared resources and objects, and current synchronous code could turn out to be an actual bomb!
