I need someone to explain the following terms:
Asynchronous delegates
Asynchronous methods
Asynchronous events
I'm currently going over this for my 70-536 exam and I am covering all my bases so far.
The threading chapter and online resources have been good to me on my second read through.
Still, though, the terms above mean absolutely nothing to me. I would really appreciate an explanation of the word 'asynchronous' and its relevance to delegates, methods and events.
Feel free to go into as much detail as you like.
'Asynchronous' describes a type of execution flow.
Synchronous instructions execute linearly and prevent subsequent instructions from executing until complete (that is, they block). So given the following synchronous code:
DoOneThing();
DoAnotherThing();
DoAnotherThing doesn't execute until DoOneThing is finished.
Asynchronous instructions differ in that you don't know (or sometimes even care) when they start or finish executing. In a case like this:
DoOneAsynchronousThing();
DoAnotherThing();
The first statement initiates the asynchronous operation and returns; the second statement then does another thing immediately, possibly before the first operation has completed (or perhaps even started).
There are many different mechanisms for providing asynchronous execution: the most common ones (at least in the .NET world) are probably the ThreadPool (for in-process asynchronous execution) and Microsoft Message Queue (for inter-process asynchronous execution). For a .NET-specific introduction, you might start with this MSDN topic, "Including Asynchronous Calls".
So asynchronous delegates, methods, and events all run (and complete) at indeterminate times and do not block the main thread of execution.
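To make that concrete, here is a minimal sketch of one way DoOneAsynchronousThing might be implemented, using the ThreadPool mentioned above (the method names are just reused from the snippet, and Thread.Sleep stands in for real work):

using System;
using System.Threading;

class Demo
{
    static void DoOneAsynchronousThing()
    {
        // Hand the work to a thread-pool thread and return to the caller immediately.
        ThreadPool.QueueUserWorkItem(delegate
        {
            Thread.Sleep(2000);                        // stand-in for a lengthy operation
            Console.WriteLine("One thing finished");
        });
    }

    static void DoAnotherThing()
    {
        Console.WriteLine("Another thing finished");
    }

    static void Main()
    {
        DoOneAsynchronousThing();   // starts the work but does not block
        DoAnotherThing();           // runs immediately, before (or while) the first thing completes
        Thread.Sleep(3000);         // keep the console process alive long enough to see both lines
    }
}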
I am a believer in studying and finding your answers when it comes to an exam.
Here are some articles
Read the wiki on it:
http://en.wikipedia.org/wiki/Asynchronous_communication
Or here on "What is async", this one is short and to the point:
http://www.webopedia.com/TERM/A/asynchronous.html
One example: say in my code I have a serial port. One thread reads from the port and one thread writes to it. I can read and write at the same time (sort of), so this is asynchronous. If I were blocking the incoming data while I am writing, then I would be synchronous.
See the section in the .NET documentation on Asynchronous Programming Using Delegates.
In summary, delegates have a BeginInvoke method that lets you call them asynchronously. When called, the target method is started in a separate thread. The invoking thread can receive a call back when the target method completes and can call EndInvoke on the delegate to retrieve the results.
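As a rough illustration of that pattern (the delegate and method names here are made up for the example, and note that asynchronous delegate invocation via BeginInvoke is a .NET Framework feature; it is not supported on .NET Core / .NET 5+):

using System;
using System.Threading;

class Program
{
    // The delegate whose target method we will invoke asynchronously.
    delegate int LengthyOperation(int input);

    static int DoWork(int input)
    {
        Thread.Sleep(1000);        // simulate a lengthy computation
        return input * 2;
    }

    static void Main()
    {
        LengthyOperation op = DoWork;

        // BeginInvoke queues DoWork onto a thread-pool thread and returns at once.
        IAsyncResult ar = op.BeginInvoke(21, null, null);

        Console.WriteLine("Main thread keeps going...");

        // EndInvoke blocks (if necessary) until DoWork finishes, then hands back its result.
        int result = op.EndInvoke(ar);
        Console.WriteLine("Result: " + result);   // 42
    }
}

Passing an AsyncCallback as the second-to-last argument of BeginInvoke, instead of null, gives you the callback-on-completion behaviour described above rather than blocking in EndInvoke.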
Given what you posted, I'll assume that you know the distinction between asynchronous and synchronous execution.
An asynchronous delegate (and, by extension, an asynchronous event) simply means that the underlying method is (or methods are!) invoked in an asynchronous manner.
An asynchronous method is one that performs an asynchronous operation (heh).
Sorry for being vague, but if you understand what asynchronous means then this should point you in the right direction.
If you execute something synchronously, your app waits on the result: for example, ordering a burger at a drive-through. You're pretty much stuck in line until the task (burger prep, billing and delivery) is complete.
If you execute asynchronously, you do other things instead of waiting: for example, ordering a pizza and watching a movie while waiting on it to be delivered.
Related
What is an asynchronous method? I think I know, but I keep confusing it with parallelism. I'm not sure what the difference is between an asynchronous method and parallelism.
Also, what is the difference between using threading classes and asynchronous classes?
EDIT
Some code demonstrating the difference between async, threading and parallelism would be useful.
What are asynchronous methods?
Asynchronous methods come into the discussion when we are talking about potentially lengthy operations. Typically we need such an operation to complete in order to meaningfully continue program execution, but we don't want to "pause" until the operation completes (because pausing might mean e.g. that the UI stops responding, which is clearly undesirable).
An asynchronous method is one that we call to start the lengthy operation. The method should do what it needs to start the operation and return "very quickly" so that there are no processing delays.
Async methods typically return a token that the caller can use to query if the operation has completed yet and what its result was. In some cases they take a callback (delegate) as an argument; when the operation is complete the callback is invoked to signal the caller that their results are ready and pass them back. This is a commonly used callback signature, although of course in general the callback can look like anything.
So who actually runs the lengthy operation?
I said above that an async method starts a lengthy operation, but what does "start" mean in this context? Since the method returns immediately, where is the actual work being done?
In the general case, an execution thread needs to keep watch over the operation. Since the thread that called the async method doesn't pause, which thread does? The answer: a thread picked for this purpose from the managed thread pool.
What's the connection with threading?
In this context my interpretation of "threading" is simply that you explicitly spin up a thread of your own and delegate it to execute the task in question synchronously. This thread will block for a time and presumably will signal your "main" thread (which is free to continue executing) when the operation is complete.
This designated worker thread might be pulled out of the thread pool (beware: doing very lengthy processing in a thread pool thread is not recommended!) or it might be one that you started just for this purpose.
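A rough sketch of that "spin up your own thread and have it signal you" approach; here a ManualResetEvent plays the role of the completion signal and Thread.Sleep stands in for the lengthy work:

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        var done = new ManualResetEvent(false);

        // A dedicated worker thread runs the lengthy operation synchronously.
        var worker = new Thread(() =>
        {
            Thread.Sleep(2000);                 // the lengthy operation itself
            Console.WriteLine("Worker finished");
            done.Set();                         // signal the "main" thread
        });
        worker.IsBackground = true;
        worker.Start();

        Console.WriteLine("Main thread is free to do other work...");

        done.WaitOne();                         // block only when we finally need the result
        Console.WriteLine("Main thread observed completion");
    }
}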
First off, what is a method and what is a thread? A method is a unit of work that either (1) performs a useful side effect, like writing to a file, or (2) computes a result, like making a bitmap of a fractal. A thread is a worker that performs that work.
A method is synchronous if in order to use the method -- to get the side effect or the result -- your thread must do nothing else from the point where you request the work to be done until the point where it is finished.
A method is asynchronous if your thread tells the method that it needs the work to be done, and the method says "OK, I'll do that and I'll call you when it is finished".
Usually the way an asynchronous method does that is it makes another worker -- it grabs a thread from the pool. This is particularly true if the method needs to make heavy use of a CPU. But not always; there is no requirement that an asynchronous method spins up another thread.
Does that make sense?
Say you need to clean the house, cook the dinner and put the children to bed.
Synchronous:
You clean the house, then cook dinner, then put the children to bed.
Parallel:
You hire 3 people to clean the house, cook dinner and put the children to bed. But you don't trust them, so you keep a supervisory role, looking over them and waiting for them to finish. Only when they've all finished do they get paid.
Asynchronous:
You tell one child to clean the house and another to cook dinner. When each has finished their chores they put themselves to bed, while you put your feet up with a glass of wine in front of the TV.
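Since the question asked for code demonstrating the difference, here is a rough .NET 4 sketch of the three scenarios; the chore methods are made up and Thread.Sleep stands in for the actual work:

using System;
using System.Threading;
using System.Threading.Tasks;

class Chores
{
    static void Clean()   { Thread.Sleep(1000); Console.WriteLine("House cleaned"); }
    static void Cook()    { Thread.Sleep(1000); Console.WriteLine("Dinner cooked"); }
    static void BedTime() { Console.WriteLine("Off to bed"); }

    static void Main()
    {
        // Synchronous: one chore after the other, blocking throughout.
        Clean();
        Cook();
        BedTime();

        // Parallel: the chores run at the same time, but you supervise and wait for all of them.
        Parallel.Invoke(Clean, Cook);
        BedTime();

        // Asynchronous: start each chore with "put yourself to bed" attached as a continuation,
        // then get on with the movie without waiting.
        Task.Factory.StartNew(Clean).ContinueWith(t => BedTime());
        Task.Factory.StartNew(Cook).ContinueWith(t => BedTime());
        Console.WriteLine("Feet up, watching a movie...");

        Console.ReadLine();   // keep the console alive so the continuations can print
    }
}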
First, you have to understand that if you want parallelism, the whole structure needs to be parallel: if you have an asynchronous method, you need an asynchronous call.
In web services or web code generally, asynchronous methods can be called (to name just one of many ways) with AJAX, which is asynchronous. Within one method you can have multiple threads; this is the key difference between async methods and multiple threads.
And the main point: the difference between a standard method and an async method is that if you make two calls to a standard method on the same controller at the same time from an asynchronous caller (like AJAX), the second call will only begin once the first call has completed. If the methods you called were asynchronous, both calls would begin at the same time; on multi-core servers this can achieve twice the standard speed (for two calls).
The speed of the parallelism is measured by this law.
I've been reading a lot about this topic lately, and still I need to clarify something.
The whole idea with asynchronous methods is thread economy:
Allow many tasks to run on a few threads. This is done by using the hardware driver to do the job while releasing the thread back to the thread pool so it can serve other jobs.
Please notice: I'm not talking about asynchronous delegates, which tie up another thread (executing a task in parallel with the caller).
However, I've seen two main types of asynchronous method examples:
Code samples (from books) that only use existing asynchronous I/O operations such as BeginXXX / EndXXX, e.g. Stream.BeginRead. (And I couldn't find any asynchronous method samples that don't use existing .NET I/O operations such as Stream.BeginRead.)
Code samples like this (and this), which don't actually invoke an asynchronous operation (although the author thinks he does - he actually causes a thread to block!)
Question:
Are asynchronous methods used only with existing .NET I/O methods like BeginXXX / EndXXX?
I mean, if I want to create my own asynchronous methods like BeginMyDelay(int ms, ...) { .. } / EndMyDelay(...), I couldn't do it without tying a blocked thread to it... correct?
Thank you very much.
P.S. Please notice this question is tagged as .NET 4 and not .NET 4.5.
You're talking about APM (the Asynchronous Programming Model).
APM makes wide use of an OS concept known as IO completion ports. That's why the various IO operations are the best candidates for using APM.
You could write your own APM methods.
But, in fact, these methods will either be layered over existing APM methods, or they will be IO-bound and will use some native OS mechanism (like FileStream, which uses overlapped file IO).
For compute-bound asynchronous operations, APM will only increase complexity, IMO.
A bit more clarification.
Working with hardware is asynchronous by nature. Hardware needs time to perform a request - the network card must send or receive data, the HDD must read/write, etc. If the IO is synchronous, the thread that generated the IO request waits for the response. This is where APM helps - you shouldn't wait; just execute something else, and when the IO is complete, "I'll call you", says APM.
The main point: the operation is performed outside of the CPU.
When you're writing a compute-bound operation, which uses the CPU for its execution without any IO, there's nothing to wait for. So APM can't help - if you need the CPU, you need a thread, so you need the thread pool.
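To make the IO-bound case concrete, here is a minimal .NET 4 sketch using FileStream with overlapped IO (the file name "data.txt" is just a placeholder):

using System;
using System.IO;
using System.Text;

class Program
{
    static void Main()
    {
        // FileOptions.Asynchronous asks for overlapped (truly asynchronous) IO.
        var fs = new FileStream("data.txt", FileMode.Open, FileAccess.Read,
                                FileShare.Read, 4096, FileOptions.Asynchronous);
        var buffer = new byte[4096];

        // BeginRead returns immediately; no managed thread sits blocked while the disk works.
        fs.BeginRead(buffer, 0, buffer.Length, ar =>
        {
            int bytesRead = fs.EndRead(ar);   // collect the result in the completion callback
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, bytesRead));
            fs.Dispose();
        }, null);

        Console.WriteLine("Read started; the calling thread is free in the meantime.");
        Console.ReadLine();                   // keep the process alive until the callback fires
    }
}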
I think, but I'm not sure, that you can create your own asynchronous methods. For example, creating a new thread and waiting for it to finish some work (a DB query, ...).
In terms of overall system performance it's probably not useful, since, as you say, you just create another thread. But if, for example, you work on IIS, the original request thread can be used for other requests while you are waiting for the 'background' operation.
I think that IIS has a fixed number of threads (a thread pool), so in that case it can be useful.
I mean, if I want to create my own asynchronous methods like
BeginMyDelay(int ms, ...) { .. } / EndMyDelay(...), I couldn't do it
without tying a blocked thread to it... correct?
While I've not dug into the implementation of async, I can't see any reason why one couldn't do this.
The simplest way would be to use existing libraries that help [e.g. timers] or some sort of event system IIRC.
However, even if you don't want to use any library helpers, you're stuck with a problem... the 'blocked thread'.
Sure, the code does look something like this:
while (true)
{
    foreach (var item in WaitingTasks)
        if (item.Ready())
            /*fire item, and remove it from tasks*/;

    /*Some blocking action*/
}
Thing is - 'Some blocking action' doesn't have to be 'blocking'. You could yield/sleep the thread, or use it to process some data. For example, the Unity Game Engine does a similar thing with Coroutines - where the same thread that processes all the code also checks to see if various coroutines [that have been delayed due to time] need to be updated. Replace /*Some blocking action*/ with ProcessGameLoop().
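Alternatively, here is a rough .NET 4 sketch of the BeginMyDelay idea from the question, completing via a System.Threading.Timer callback so that no thread sits blocked during the wait. The names are the question's hypothetical ones, and a full APM implementation would also hand back a proper IAsyncResult:

using System;
using System.Threading;

static class MyDelay
{
    // Starts the "delay" and returns at once; the Timer fires the callback
    // on a thread-pool thread when the time is up.
    public static IDisposable BeginMyDelay(int ms, Action onCompleted)
    {
        Timer timer = null;
        timer = new Timer(delegate
        {
            timer.Dispose();      // one-shot: release the timer
            onCompleted();        // notify the caller that the delay has elapsed
        }, null, Timeout.Infinite, Timeout.Infinite);
        timer.Change(ms, Timeout.Infinite);   // arm it only after the variable is assigned
        return timer;                         // caller may Dispose early to cancel
    }
}

class Program
{
    static void Main()
    {
        MyDelay.BeginMyDelay(500, () => Console.WriteLine("Delay finished - no thread was blocked"));
        Console.WriteLine("Caller continued immediately");
        Console.ReadLine();       // keep the process alive until the callback has fired
    }
}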
Hope that helps; feel free to ask questions/post corrections, etc.
In MSDN, there is a paragraph like this:
The async and await keywords don't cause additional threads to be
created. Async methods don't require multithreading because an async
method doesn't run on its own thread. The method runs on the current
synchronization context and uses time on the thread only when the
method is active. You can use Task.Run to move CPU-bound work to a
background thread, but a background thread doesn't help with a process
that's just waiting for results to become available.
But it looks like I need a little more help with the bold text, since I am not sure what it means exactly. So how does it become async without using threads?
Source: http://msdn.microsoft.com/en-us/library/hh191443.aspx
There are many asynchronous operations which don't require the use of multiple threads. Things like Asynchronous IO work by having interrupts which signal when data is available. This allows you to have an asynchronous call which isn't using extra threads - when the signal occurs, the operation completes.
Task.Run can be used to make your own CPU-based async methods, which will run on their own separate threads. The paragraph was intended to show that this isn't the only option, however.
async/await is not just about using more threads. It's about using the threads you have more effectively. When operations block, such as waiting on a download or file read, the async/await pattern allows you to use that existing thread for something else. The compiler handles all the magic plumbing underneath, making it much easier to develop with.
See http://msdn.microsoft.com/en-us/magazine/hh456401.aspx for the problem description and the whitepaper at http://www.microsoft.com/en-us/download/details.aspx?id=14058.
Not the code generated by the async and await keywords themselves, no. They create code that runs on the current thread, assuming it has a synchronization context. If it doesn't, then you actually do get threads, but that's using the pattern for no good reason. The await expression - what you write on the right side of the await keyword - is what causes threads to run.
But that thread is often not observable; it may be a device driver thread, which reports that it is done through an I/O completion port. That's pretty common; I/O is always a good reason to use await, if it isn't already forced on you by WinRT - the real reason that async/await got added.
A note about "having a synchronization context": you have one on a thread if the SynchronizationContext.Current property is not null. This is almost only ever the case on the main thread of a GUI app, which is also the only place where you normally worry about keeping delays from freezing your user interface.
Essentially, what it does when you run an async method without awaiting it is this:
Start the method and do as much as possible synchronously.
When necessary, pause the method and put the rest of it into a continuation.
When the async part is completed (is no longer being waited on), schedule the continuation to run on the same thread.
Whatever you want can run on this thread as normal. You can even examine/manipulate the Task returned from the async method.
When the thread becomes available, it will run the rest of your method.
The 'async part' could be file IO, a web request, or pretty much anything, as long as calling code can wait on this task to complete. This includes, but is not limited to, a separate thread. As Reed Copsey pointed out, there are other ways of performing async operations, like interrupts.
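A minimal sketch of that behaviour (this assumes .NET 4.5 / C# 5; Task.Delay stands in for any IO-style wait):

using System;
using System.Threading.Tasks;

class Program
{
    static async Task<string> GetGreetingAsync()
    {
        Console.WriteLine("Before await");       // runs synchronously on the caller's thread
        await Task.Delay(1000);                  // no thread is blocked during this wait
        return "Hello after the delay";          // the continuation, scheduled when the delay completes
    }

    static void Main()
    {
        Task<string> task = GetGreetingAsync();  // returns at the first await
        Console.WriteLine("Caller keeps running while the delay is pending");
        Console.WriteLine(task.Result);          // blocking here only to keep the demo simple
    }
}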
In the asynchronous programming model, there appear to be four ways (as stated in Calling Synchronous Methods Asynchronously) of making asynchronous method calls.
Calling the EndInvoke() method makes the calling thread wait for the method completion and returns the result.
Going through IAsyncResult.AsyncWaitHandle.WaitOne() also seems to do the same. AsyncWaitHandle gets a signal of completion (in other words, the main thread waits for the asynchronous method's completion). Then we can execute EndInvoke() to get the result.
What is the difference between calling the EndInvoke() directly and calling it after WaitOne()/WaitAll()?
In the polling technique we provide time for other threads to utilize the system resources by calling Thread.Sleep().
Does AsyncWaitHandle.WaitOne() or EndInvoke() make the main thread go to sleep while waiting?
Q1. There is no difference in the way your code or your application runs, but there might be some runtime differences (again, not sure, but a guess based on my understanding of async delegates).
IAsyncResult.AsyncWaitHandle is provided mainly as a synchronization mechanism for use with WaitAll() or WaitAny(); if you don't have that synchronization need, you shouldn't read the AsyncWaitHandle property. The reason: AsyncWaitHandle doesn't have to be implemented (created) by the delegate while it runs asynchronously until external code reads it. I'm not sure how the CLR handles async delegates and whether it creates a wait handle or not, but ideally, if it can run your async delegate without creating another wait handle, it will not; your call to WaitOne(), however, forces that handle to be created, and you then have the extra responsibility of disposing (closing) it for efficient resource release. The recommendation, therefore, is: when there is no synchronization requirement that calls for WaitAll() or WaitAny(), don't read this property.
Q2. This Question answers the difference between Sleep and Wait.
Simple things first. For your second question, yes, WaitOne and EndInvoke do indeed make the current thread sleep while waiting.
For your first question, I can immediately identify two differences.
Using WaitOne requires the wait handle to be released, while using EndInvoke directly doesn't require any cleanup.
In return, using WaitOne allows for something to be done before EndInvoke, but after the task has been completed.
As for what that "something" might be, I don't really know. I suspect allocating resources to receive the output might be something that would need to be done before EndInvoke. If you really have no reason to do something at that moment, try not to bother yourself with WaitOne.
You can pass a timeout to WaitOne, so you could, for instance want to perform some other activities on a regular basis whilst waiting for the operation to complete:
do {
    // Something else
} while (!waitHandle.WaitOne(100));
This would do something roughly every 100 milliseconds (plus however long the "something else" takes), until the operation completes.
C# 3.0 in a Nutshell says asynchronous methods and asynchronous delegates look similar, but their behavior is very different.
Here is what the book says about both.
Asynchronous methods
Rarely or never blocks any thread.
Begin method may not immediately return to the caller.
An agreed protocol with no C# language support.
Asynchronous delegates
May block for any length of time
BeginInvoke returns immediately to the caller.
Built-in compiler support.
The book also says, The purpose of asynchronous methods is to allow many tasks to run on few threads; the purpose of asynchronous delegates is to execute a task in parallel with the caller.
When I looked into the BeginRead() method of the System.IO.Stream class through Reflector, it uses a delegate and calls BeginInvoke on it. So an asynchronous method is using an asynchronous delegate internally.
In that case, how can one say their behaviors are different? Since it uses delegates internally, how is a comparison like the above possible?
Do you think working with a delegate's BeginXXX method is the way to execute a function in parallel to the caller?
What is the proper way to implement asynchronous methods by maintaining all the advantages like making good use of CPU?
Any thoughts?
At the core, there are two main behaviors you may see when you call BeginFoo() with a callback.
Work is started on a background thread, and that thread will be used the entire time up until the work is completed and the callback is invoked (e.g. because the work is synchronous).
Though some work happens on a background thread, the thread need not be in use the entire time (e.g. because the work involves System IO which can schedule callbacks on e.g. the IOCompletionPort).
When you use a delegate, behavior #1 above happens.
Some APIs (that have underlying support for non-blocking IO calls) support behavior #2.
In the specific case of 'Stream', I am not sure, but my guess is it is an abstract base class and so this is merely the default behavior for a subclass that implements only a synchronous version of Read. A 'good' subclass would override BeginRead/EndRead to have a non-blocking implementation.
The advantage of #2, as you said, is that you can have e.g. 100 pending IO calls without consuming 100 threads (threads are expensive).
The implementation can be different; for example, an async IO call may choose to make use of completion ports to minimise the cost to the system while not doing anything.
It is certainly a way; you could also use BackgroundWorker, ThreadPool.QueueUserWorkItem, or Parallel.For (etc) in .NET 4.0
Varies per implementation
I think what the book is trying to highlight is that delegates always include this pattern:
a synchronous call (Invoke) that can block
an async call (BeginInvoke) that shouldn't really block unless the thread-pool is saturated
but it isn't the only pattern. Also, more recently (for example, with the async IO methods in Silverlight, or in WebClient), rather than an IAsyncResult, an event is used to signal completion.
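A small sketch of that event-based pattern using WebClient (the URL is just a placeholder):

using System;
using System.Net;

class Program
{
    static void Main()
    {
        var client = new WebClient();

        // Completion is signalled by an event rather than an IAsyncResult.
        client.DownloadStringCompleted += (sender, e) =>
        {
            if (e.Error == null)
                Console.WriteLine("Downloaded {0} characters", e.Result.Length);
        };

        client.DownloadStringAsync(new Uri("http://example.com/"));
        Console.WriteLine("Download started; the caller is not blocked.");
        Console.ReadLine();   // keep the process alive until the event fires
    }
}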