Unexpected behavior with await inside a ContinueWith block - c#

I have a slightly complex requirement: I need to perform some tasks in parallel and wait for some of them to finish before continuing. I am encountering unexpected behavior when I have a number of tasks that I want executed in parallel, but inside a ContinueWith handler. I have whipped up a small sample to illustrate the problem:
var task1 = Task.Factory.StartNew(() =>
{
    Console.WriteLine("11");
    Thread.Sleep(1000);
    Console.WriteLine("12");
}).ContinueWith(async t =>
{
    Console.WriteLine("13");
    var innerTasks = new List<Task>();
    for (var i = 0; i < 10; i++)
    {
        var j = i;
        innerTasks.Add(Task.Factory.StartNew(() =>
        {
            Console.WriteLine("1_" + j + "_1");
            Thread.Sleep(500);
            Console.WriteLine("1_" + j + "_2");
        }));
    }
    await Task.WhenAll(innerTasks.ToArray());
    //Task.WaitAll(innerTasks.ToArray());
    Thread.Sleep(1000);
    Console.WriteLine("14");
});
var task2 = Task.Factory.StartNew(() =>
{
    Console.WriteLine("21");
    Thread.Sleep(1000);
    Console.WriteLine("22");
}).ContinueWith(t =>
{
    Console.WriteLine("23");
    Thread.Sleep(1000);
    Console.WriteLine("24");
});
Console.WriteLine("1");
await Task.WhenAll(task1, task2);
Console.WriteLine("2");
The basic pattern is:
- Task 1 should be executed in parallel with Task 2.
- Once the first part of task 1 is done, it should do some more things in parallel, and it should only complete once everything is done.
I expect the following result:
1 <- Start
11 / 21 <- The initial task start
12 / 22 <- The initial task end
13 / 23 <- The continuation task start
Some combinations of "1_[0..9]_[1..2]" and 24 <- the "inner" tasks of task 1 + the continuation of task 2 end
14 <- The end of the task 1 continuation
2 <- The end
Instead, what happens is that the await Task.WhenAll(innerTasks.ToArray()); does not "block" the continuation task from completing. So the inner tasks execute after the outer await Task.WhenAll(task1, task2); has completed. The result is something like:
1 <- Start
11 / 21 <- The initial task start
12 / 22 <- The initial task end
13 / 23 <- The continuation task start
Some combinations of "1_[0..9]_[1..2]" and 24 <- the "inner" tasks of task 1 + the continuation of task 2 end
2 <- The end
Some more combinations of "1_[0..9]_[1..2]" <- the "inner" tasks of task 1
14 <- The end of the task 1 continuation
If, instead, I use Task.WaitAll(innerTasks.ToArray()), everything seems to work as expected. Of course, I would rather not use WaitAll, so that I don't block any threads.
My questions are:
Why is this unexpected behavior occurring?
How can I remedy the situation without blocking any threads?
Thanks a lot in advance for any pointers!

You're using the wrong tools. Instead of StartNew, use Task.Run. Instead of ContinueWith, use await:
var task1 = Task1();
var task2 = Task2();
Console.WriteLine("1");
await Task.WhenAll(task1, task2);
Console.WriteLine("2");
private async Task Task1()
{
    await Task.Run(() =>
    {
        Console.WriteLine("11");
        Thread.Sleep(1000);
        Console.WriteLine("12");
    });
    Console.WriteLine("13");
    var innerTasks = new List<Task>();
    for (var i = 0; i < 10; i++)
    {
        var j = i; // copy the loop variable so each task captures its own value
        innerTasks.Add(Task.Run(() =>
        {
            Console.WriteLine("1_" + j + "_1");
            Thread.Sleep(500);
            Console.WriteLine("1_" + j + "_2");
        }));
    }
    await Task.WhenAll(innerTasks);
    Thread.Sleep(1000);
    Console.WriteLine("14");
}
private async Task Task2()
{
    await Task.Run(() =>
    {
        Console.WriteLine("21");
        Thread.Sleep(1000);
        Console.WriteLine("22");
    });
    Console.WriteLine("23");
    Thread.Sleep(1000);
    Console.WriteLine("24");
}
Task.Run and await are superior here because they correct a lot of unexpected behavior in StartNew/ContinueWith. In particular, they understand asynchronous delegates, and Task.Run always uses the thread pool.
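To make the difference with asynchronous delegates concrete, here is a minimal illustration (not part of the original answer; assume it runs inside an async method):
// StartNew does not understand async delegates, so it returns a nested Task<Task>;
// Task.Run unwraps the inner task for you.
Task<Task> wrapped = Task.Factory.StartNew(async () => await Task.Delay(100));
Task unwrapped = Task.Run(async () => await Task.Delay(100));
await await wrapped; // two awaits (or Unwrap) are needed to wait for the delay
await unwrapped;     // a single await waits for the delay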
I have more detailed info on my blog regarding why you shouldn't use StartNew and why you shouldn't use ContinueWith.

As noted in the comments, what you're seeing is normal. The Task returned by ContinueWith() completes when the delegate passed to and invoked by ContinueWith() finishes executing. This happens the first time the anonymous method awaits something that hasn't already completed; at that point the delegate returns a Task object that itself represents the eventual completion of the entire anonymous method.
Since you are only waiting on the ContinueWith() task, and this task only represents the availability of the task that represents the anonymous method, not the completion of that task, your code doesn't wait.
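A minimal sketch of that distinction (someTask here is a placeholder for any antecedent task):
// With an async delegate, ContinueWith returns a nested Task<Task>.
Task<Task> outer = someTask.ContinueWith(async t => { await Task.Delay(100); });
Task inner = await outer; // completes as soon as the delegate has returned its task
await inner;              // completes only when the async delegate has actually finished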
From your example, it's not clear what the best fix is. But if you make this small change, it will do what you want:
await Task.WhenAll(await task1, task2);
I.e. in the WhenAll() call, don't wait on the ContinueWith() task itself, but rather on the task that task will eventually return. Use await here to avoid blocking the thread while you wait for that task to be available.

When using async methods/lambdas with StartNew, you either wait on both the returned task and the task it contains:
var task = Task.Factory.StartNew(async () => { /* ... */ });
task.Wait();
task.Result.Wait();
// consume task.Result.Result
Or you use the extension method Unwrap on the result of StartNew and wait on the task it returns.
var task = Task.Factory.StartNew(async () => { /* ... */ })
    .Unwrap();
task.Wait();
// consume task.Result
The discussion that follows centers on the idea that Task.Factory.StartNew and ContinueWith should be avoided in specific cases, such as when you don't provide creation or continuation options, or when you don't provide a task scheduler.
I don't agree that Task.Factory.StartNew shouldn't be used; I do agree that you should use (or consider using) Task.Run wherever you use a Task.Factory.StartNew overload that doesn't take TaskCreationOptions or a TaskScheduler.
Note that this only applies to the default Task.Factory. I've used custom task factories where I chose to use the StartNew overloads without options and task scheduler, because I had configured those factories with the specific defaults I needed.
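For example, such a factory might be configured roughly like this (a sketch; the chosen options are only an illustration, and DoWork is a placeholder):
// A factory configured once with explicit defaults, so the short StartNew
// overloads no longer rely on the framework-wide defaults:
var factory = new TaskFactory(
    CancellationToken.None,
    TaskCreationOptions.DenyChildAttach,
    TaskContinuationOptions.DenyChildAttach,
    TaskScheduler.Default);

var task = factory.StartNew(() => DoWork()); // uses the configured defaults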
Likewise, I don't agree that ContinueWith shouldn't be used; I do agree that you should use (or consider using) async/await wherever you use a ContinueWith overload that doesn't take TaskContinuationOptions or a TaskScheduler.
For instance, up to C# 5, the most practical way to work around the limitation of await not being supported in catch and finally blocks was to use ContinueWith.
C# 6:
try
{
    return await something;
}
catch (SpecificException ex)
{
    await somethingElse;
    // throw;
}
finally
{
    await cleanup;
}
Equivalent before C# 6:
return await something
    .ContinueWith(async somethingTask =>
    {
        var ex = somethingTask.Exception.InnerException as SpecificException;
        if (ex != null)
        {
            await somethingElse;
            // await somethingTask;
        }
    },
    CancellationToken.None,
    TaskContinuationOptions.DenyChildAttach | TaskContinuationOptions.NotOnRanToCompletion,
    TaskScheduler.Default)
    .Unwrap()
    .ContinueWith(async catchTask =>
    {
        await cleanup;
        await catchTask;
    },
    CancellationToken.None,
    TaskContinuationOptions.DenyChildAttach,
    TaskScheduler.Default)
    .Unwrap();
Since, as I said, in some cases I have a TaskFactory with specific defaults, I've defined a few extension methods that take a TaskFactory, reducing the chance of forgetting to pass one of the arguments (I know I can still forget to pass the factory itself):
public static Task ContinueWhen(this TaskFactory taskFactory, Task task, Action<Task> continuationAction)
{
    return task.ContinueWith(continuationAction, taskFactory.CancellationToken, taskFactory.ContinuationOptions, taskFactory.Scheduler);
}

public static Task<TResult> ContinueWhen<TResult>(this TaskFactory taskFactory, Task task, Func<Task, TResult> continuationFunction)
{
    return task.ContinueWith(continuationFunction, taskFactory.CancellationToken, taskFactory.ContinuationOptions, taskFactory.Scheduler);
}

// Repeat with argument combinations:
// - Task<TResult> task (instead of non-generic Task task)
// - object state
// - bool notOnRanToCompletion (useful in C# before 6)
Usage:
// using namespace that contains static task extensions class
var task = taskFactory.ContinueWhen(existingTask, t => Continue(a, b, c));
var asyncTask = taskFactory.ContinueWhen(existingTask, async t => await ContinueAsync(a, b, c))
    .Unwrap();
I decided not to mimic Task.Run by overloading the same method name to unwrap task-returning delegates, because that's not always what you want. In fact, I didn't even implement ContinueWhenAsync extension methods, so you need to use Unwrap or two awaits.
Often, these continuations are I/O asynchronous operations, and the pre- and post-processing overhead should be so small that you shouldn't care if it starts running synchronously up to the first yielding point, or even if it completes synchronously (e.g. using an underlying MemoryStream or a mocked DB access). Also, most of them don't depend on a synchronization context.
Whenever you apply the Unwrap extension method or two awaits, you should check if the task falls in this category. If so, async/await is most probably a better choice than starting a task.
For asynchronous operations with a non-negligible synchronous overhead, starting a new task may be preferable. Even so, a notable exception where async/await is still a better choice is if your code is async from the start, such as an async method invoked by a framework or host (ASP.NET, WCF, NServiceBus 6+, etc.), since that overhead is your actual business logic. For long processing, you may consider using Task.Yield, with care. One of the tenets of asynchronous code is not to be too fine-grained; however, too coarse-grained is just as bad: a set of heavy-duty tasks may prevent the processing of queued lightweight tasks.
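As a rough sketch of the Task.Yield approach (the method names here are placeholders, not from the original text):
async Task ProcessAsync()
{
    await Task.Yield();        // return to the caller before the heavy section starts
    HeavyCpuWork();            // placeholder for a non-trivial synchronous section
    await SaveResultsAsync();  // placeholder for the actual asynchronous I/O
}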
If the asynchronous operation depends on a synchronization context, you can still use async/await if you're within that context (in this case, think twice or more before using .ConfigureAwait(false)); otherwise, start a new task using a task scheduler from the respective synchronization context.
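For that last case, the usual way to target a synchronization context from task-based code is a scheduler created from it. A sketch, assuming this runs where the relevant SynchronizationContext is current (UpdateUi is a placeholder):
// Capture a scheduler tied to the current synchronization context (e.g. a UI thread).
var contextScheduler = TaskScheduler.FromCurrentSynchronizationContext();

Task.Factory.StartNew(
    () => UpdateUi(), // placeholder for work that must run on that context
    CancellationToken.None,
    TaskCreationOptions.DenyChildAttach,
    contextScheduler);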

Related

Why does Task.WhenAll require a manually created Task to be started, when the same isn't required for async methods?

For example, the following code manually instantiates a Task and passes it to Task.WhenAll inside a List<Task>:
public async Task Do3()
{
    var task1 = new Task(async () => { await Task.Delay(2000); Console.WriteLine("########## task1"); });
    var taskList = new List<Task>() { task1 };
    taskList[0].Start();
    var taskDone = Task.WhenAll(taskList);
    await taskDone;
}
Without starting the Task, it doesn't work; it hangs forever when called from a console app. But the code below works just fine without starting anything:
public async Task Do3()
{
    //var task1 = new Task(async () => { await Task.Delay(2000); Console.WriteLine("########## task1"); });
    var taskList = new List<Task>() { SubDo1() };
    //taskList[0].Start();
    var taskDone = Task.WhenAll(taskList);
    await taskDone;
}

public async Task SubDo1()
{
    await Task.Delay(2000);
    Console.WriteLine("########## task1");
}
Task is used in two completely different ways here. When you call an async method, you are starting it yourself; at that point, two things can happen:
- it can run to completion (eventually) without ever reaching a truly asynchronous state, and return a completed (or faulted) task to the caller
- it can reach an incomplete awaitable (in this case await Task.Delay), at which point it creates a state machine that represents the current position, schedules a completion operation on that incomplete awaitable (to do whatever comes next), and then returns an incomplete task to the caller
It is not "not started"; in order to return anything to the caller, we have started it. However, unlike Task.Start(), we start that work on our current thread - not an external worker thread - with other threads only getting involved based on how that incomplete awaitable schedules the completion callbacks that the compiler gives it.
This is very different to the new Task(...) scenario, where nothing is initially started. That's why they behave differently. Note also the Remarks section of the Task constructor here - it is a very niche API, and honestly: not hugely recommended.
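If you do need to wrap a delegate in a task yourself, the usual suggestion is Task.Run, which returns a hot (already started) task and also unwraps async delegates; as a sketch, the first snippet could be written as:
public async Task Do3()
{
    // Task.Run starts the work immediately and unwraps the async lambda,
    // so no Start() call is needed and WhenAll observes the real completion.
    var task1 = Task.Run(async () => { await Task.Delay(2000); Console.WriteLine("########## task1"); });
    var taskList = new List<Task>() { task1 };
    await Task.WhenAll(taskList);
}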
Additionally: when you don't immediately await an async method, you're essentially going into concurrent territory (assuming the awaitable won't always complete synchronously). In some cases this matters, and may cause threading problems such as race conditions. It shouldn't matter much in this case, though.

How to synchronize the recurrent execution of three tasks that depend on each other?

I would like to ask expert developers in C#. I have three recurrent tasks that my program needs to do. Task 2 depends on task 1 and task 3 depends on task 2, but task 1 doesn't need to wait for the other two tasks to finish in order to start again (the program is continuously running). Since each task takes some time, I would like to run each task in its own thread or C# Task. Once task 1 finishes, task 2 starts and task 1 starts again, and so on.
I'm not sure what is the best way to implement this. I hope someone can guide me on this.
One way to achieve this is to use the TPL Dataflow library. This provides a set of classes that allow you to arrange your work into "blocks". You create a method that does tasks 1, 2 and 3 sequentially, and Dataflow takes care of running multiple invocations of that method simultaneously. Here's a small example:
async Task Main()
{
    var actionBlock = new ActionBlock<int>(DoTasksAsync, new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 2 // This is the number of simultaneous executions of DoTasksAsync that will be run
    });

    await actionBlock.SendAsync(1);
    await actionBlock.SendAsync(2);

    actionBlock.Complete();
    await actionBlock.Completion;
}

async Task DoTasksAsync(int input)
{
    await DoTaskAAsync();
    await DoTaskBAsync();
    await DoTaskCAsync();
}
I would probably use some kind of queue pattern.
I am not sure what the requirements are regarding whether task 1 is thread-safe, so I will keep it simple:
- Task 1 is always executing. As soon as it finishes, it posts a message on some queue and starts over.
- Task 2 is listening to the queue. Whenever a message is available, it starts working on it.
- Whenever task 2 finishes working, it calls task 3, so that it can do its work.
As one of the comments mentioned, you should probably be able to use async/await successfully in your code, especially between tasks 2 and 3. Note that task 1 can run in parallel with tasks 2 and 3, since it does not depend on any of the other tasks.
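A minimal sketch of that queue idea, using System.Threading.Channels (all of the method names below are placeholders, not from the question):
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

class PipelineSketch
{
    // Placeholder implementations; the real signatures depend on your tasks.
    static async Task<string> Task1Async() { await Task.Delay(1000); return "data"; }
    static async Task<string> Task2Async(string input) { await Task.Delay(1000); return input + "!"; }
    static async Task Task3Async(string input) { await Task.Delay(1000); Console.WriteLine(input); }

    static async Task Main()
    {
        var queue = Channel.CreateUnbounded<string>();

        // Task 1 runs continuously and posts each result to the queue.
        var producer = Task.Run(async () =>
        {
            while (true)
            {
                var result = await Task1Async();
                await queue.Writer.WriteAsync(result);
            }
        });

        // Task 2 picks up each item as it becomes available, then task 3 follows.
        var consumer = Task.Run(async () =>
        {
            await foreach (var item in queue.Reader.ReadAllAsync())
            {
                var r2 = await Task2Async(item);
                await Task3Async(r2);
            }
        });

        await Task.WhenAll(producer, consumer); // runs until the process is stopped
    }
}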
You could use the ParallelLoop method below. This method starts an asynchronous workflow, where the three tasks are invoked in parallel to each other, but sequentially to themselves. So you don't need to add synchronization inside each task, unless some task produces global side-effects that are visible from some other task.
The tasks are invoked on the ThreadPool, with the Task.Run method.
/// <summary>
/// Invokes three actions repeatedly in parallel on the ThreadPool, with the
/// action2 depending on the action1, and the action3 depending on the action2.
/// Each action is invoked sequentially to itself.
/// </summary>
public static async Task ParallelLoop<TResult1, TResult2>(
    Func<TResult1> action1,
    Func<TResult1, TResult2> action2,
    Action<TResult2> action3,
    CancellationToken cancellationToken = default)
{
    // Arguments validation omitted
    var task1 = Task.FromResult<TResult1>(default);
    var task2 = Task.FromResult<TResult2>(default);
    var task3 = Task.CompletedTask;
    try
    {
        int counter = 0;
        while (true)
        {
            counter++;
            var result1 = await task1.ConfigureAwait(false);
            cancellationToken.ThrowIfCancellationRequested();
            task1 = Task.Run(action1); // Restart the task1
            if (counter <= 1) continue; // In the first loop result1 is undefined
            var result2 = await task2.ConfigureAwait(false);
            cancellationToken.ThrowIfCancellationRequested();
            task2 = Task.Run(() => action2(result1)); // Restart the task2
            if (counter <= 2) continue; // In the second loop result2 is undefined
            await task3.ConfigureAwait(false);
            cancellationToken.ThrowIfCancellationRequested();
            task3 = Task.Run(() => action3(result2)); // Restart the task3
        }
    }
    finally
    {
        // Prevent fire-and-forget
        Task allTasks = Task.WhenAll(task1, task2, task3);
        try { await allTasks.ConfigureAwait(false); } catch { allTasks.Wait(); }
        // Propagate all errors in an AggregateException
    }
}
There is an obvious pattern in the implementation, that makes it trivial to add overloads having more than three actions. Each added action will require its own generic type parameter (TResult3, TResult4 etc).
Usage example:
var cts = new CancellationTokenSource();
Task loopTask = ParallelLoop(() =>
{
    // First task
    Thread.Sleep(1000); // Simulates synchronous work
    return "OK"; // The result that is passed to the second task
}, result =>
{
    // Second task
    Thread.Sleep(1000); // Simulates synchronous work
    return result + "!"; // The result that is passed to the third task
}, result =>
{
    // Third task
    Thread.Sleep(1000); // Simulates synchronous work
}, cts.Token);
In case any of the tasks fails, the whole loop will stop (with the loopTask.Exception containing the error). Since the tasks depend on each other, recovering from a single failed task is not possible¹. What you could do is to execute the whole loop through a Polly Retry policy, to make sure that the loop will be reincarnated in case of failure. If you are unfamiliar with the Polly library, you could use the simple and featureless RetryUntilCanceled method below:
public static async Task RetryUntilCanceled(Func<Task> action,
    CancellationToken cancellationToken)
{
    while (true)
    {
        cancellationToken.ThrowIfCancellationRequested();
        try { await action().ConfigureAwait(false); }
        catch { if (cancellationToken.IsCancellationRequested) throw; }
    }
}
Usage:
Task loopTask = RetryUntilCanceled(() => ParallelLoop(() =>
{
    //...
}, cts.Token), cts.Token);
Before exiting the process you are advised to Cancel() the CancellationTokenSource and Wait() (or await) the loopTask, in order for the loop to terminate gracefully. Otherwise some tasks may be aborted in the middle of their work.
¹ It is actually possible, and probably preferable, to execute each individual task through a Polly Retry policy. The parallel loop will be suspended until the failed task is retried successfully.

ContinueWith chaining not working as expected

I have this example code:
static void Main(string[] args) {
    var t1 = Task.Run(async () => {
        Console.WriteLine("Putting in fake processing 1.");
        await Task.Delay(300);
        Console.WriteLine("Fake processing finished 1.");
    });
    var t2 = t1.ContinueWith(async (c) => {
        Console.WriteLine("Putting in fake processing 2.");
        await Task.Delay(200);
        Console.WriteLine("Fake processing finished 2.");
    });
    var t3 = t2.ContinueWith(async (c) => {
        Console.WriteLine("Putting in fake processing 3.");
        await Task.Delay(100);
        Console.WriteLine("Fake processing finished 3.");
    });
    Console.ReadLine();
}
The console output baffles me:
Putting in fake processing 1.
Fake processing finished 1.
Putting in fake processing 2.
Putting in fake processing 3.
Fake processing finished 3.
Fake processing finished 2.
I am trying to chain the tasks so they execute one after another; what am I doing wrong? And I can't use await: this is just example code. In reality I am queueing incoming tasks (some asynchronous, some not) and want to execute them in the same order they came in, but with no parallelism. ContinueWith seemed better than creating a ConcurrentQueue and handling everything myself, but it just doesn't work...
Take a look at the type of t2. It's a Task<Task>. t2 will be completed when it finishes starting the task that does the actual work, not when that work actually finishes.
The smallest change to your code to get it to work would be to add an unwrap after both your second and third calls to ContinueWith, so that you get out the task that represents the completion of your work.
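For example (only the relevant part of the original code shown, with Unwrap added):
var t2 = t1.ContinueWith(async (c) => {
    Console.WriteLine("Putting in fake processing 2.");
    await Task.Delay(200);
    Console.WriteLine("Fake processing finished 2.");
}).Unwrap(); // t2 is now the inner task, not a Task<Task>

var t3 = t2.ContinueWith(async (c) => {
    Console.WriteLine("Putting in fake processing 3.");
    await Task.Delay(100);
    Console.WriteLine("Fake processing finished 3.");
}).Unwrap();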
The more idiomatic solution would be to simply remove the ContinueWith calls entirely and just use await to add continuations to tasks.
Interestingly enough, you would see the same behavior for t1 if you used Task.Factory.StartNew, but Task.Run is specifically designed to work with async lambdas: it internally unwraps Func<Task> delegates and returns a task representing the completion of the returned task, rather than a task that merely represents starting it, which is why you don't need to unwrap that one.
in reality I am queueing incoming tasks (some asynchronous, some not) and want to execute them in the same order they came in but with no parallelism
You probably want to use TPL Dataflow for that. Specifically, ActionBlock.
var block = new ActionBlock<object>(async item =>
{
    // Handle synchronous item
    var action = item as Action;
    if (action != null)
        action();

    // Handle asynchronous item
    var func = item as Func<Task>;
    if (func != null)
        await func();
});

// To queue a synchronous item
Action synchronous = () => Thread.Sleep(1000);
block.Post(synchronous);

// To queue an asynchronous item
Func<Task> asynchronous = async () => { await Task.Delay(1000); };
block.Post(asynchronous);

Running multiple async tasks and waiting for them all to complete

I need to run multiple async tasks in a console application, and wait for them all to complete before further processing.
There's many articles out there, but I seem to get more confused the more I read. I've read and understand the basic principles of the Task library, but I'm clearly missing a link somewhere.
I understand that it's possible to chain tasks so that they start after another completes (which is pretty much the scenario for all the articles I've read), but I want all my Tasks running at the same time, and I want to know once they're all completed.
What's the simplest implementation for a scenario like this?
Neither of the existing answers mentioned the awaitable Task.WhenAll:
var task1 = DoWorkAsync();
var task2 = DoMoreWorkAsync();
await Task.WhenAll(task1, task2);
The main difference between Task.WaitAll and Task.WhenAll is that the former will block (similar to using Wait on a single task) while the latter will not and can be awaited, yielding control back to the caller until all tasks finish.
Moreover, exception handling differs, as the quotes and the short sketch below show:
Task.WaitAll:
At least one of the Task instances was canceled -or- an exception was thrown during the execution of at least one of the Task instances. If a task was canceled, the AggregateException contains an OperationCanceledException in its InnerExceptions collection.
Task.WhenAll:
If any of the supplied tasks completes in a faulted state, the returned task will also complete in a Faulted state, where its exceptions will contain the aggregation of the set of unwrapped exceptions from each of the supplied tasks.
If none of the supplied tasks faulted but at least one of them was canceled, the returned task will end in the Canceled state.
If none of the tasks faulted and none of the tasks were canceled, the resulting task will end in the RanToCompletion state.
If the supplied array/enumerable contains no tasks, the returned task will immediately transition to a RanToCompletion state before it's returned to the caller.
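To see that difference in code, a small sketch (task1 and task2 are placeholders for any tasks, and the second block must run inside an async method):
try
{
    Task.WaitAll(task1, task2); // blocks; failures surface as an AggregateException
}
catch (AggregateException ex)
{
    foreach (var inner in ex.InnerExceptions) { /* inspect every failure */ }
}

try
{
    await Task.WhenAll(task1, task2); // doesn't block; await rethrows only the first exception
}
catch (Exception ex)
{
    // ex is the first faulted task's exception; check the tasks themselves
    // (or the WhenAll task's Exception property) to see all of them.
}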
You could create many tasks like:
List<Task> TaskList = new List<Task>();
foreach(...)
{
    var LastTask = new Task(SomeFunction);
    LastTask.Start();
    TaskList.Add(LastTask);
}
Task.WaitAll(TaskList.ToArray());
You can use WhenAll, which returns an awaitable Task, or WaitAll, which has no return value and blocks further code execution (similar to Thread.Sleep) until all tasks are completed, canceled or faulted.
- Any of the supplied tasks completes in a faulted state:
  WhenAll: A task with the faulted state will be returned. The exceptions will contain the aggregation of the set of unwrapped exceptions from each of the supplied tasks.
  WaitAll: An AggregateException will be thrown.
- None of the supplied tasks faulted but at least one of them was canceled:
  WhenAll: The returned task will end in the TaskStatus.Canceled state.
  WaitAll: An AggregateException will be thrown which contains an OperationCanceledException in its InnerExceptions collection.
- An empty list was given:
  WhenAll: The returned task will immediately transition to the TaskStatus.RanToCompletion state before it's returned to the caller.
  WaitAll: An ArgumentException will be thrown.
- Blocking:
  WhenAll: Doesn't block the current thread.
  WaitAll: Blocks the current thread.
Example
var tasks = new Task[]
{
    TaskOperationOne(),
    TaskOperationTwo()
};
Task.WaitAll(tasks);
// or
await Task.WhenAll(tasks);
If you want to run the tasks in a particular/specific order you can get inspiration from this answer.
The best option I've seen is the following extension method:
public static Task ForEachAsync<T>(this IEnumerable<T> sequence, Func<T, Task> action) {
    return Task.WhenAll(sequence.Select(action));
}
Call it like this:
await sequence.ForEachAsync(item => item.SomethingAsync(blah));
Or with an async lambda:
await sequence.ForEachAsync(async item => {
    var more = await GetMoreAsync(item);
    await more.FrobbleAsync();
});
Yet another answer... but I usually find myself in a case where I need to load data simultaneously and put it into variables, like this:
var cats = new List<Cat>();
var dog = new Dog();

var loadDataTasks = new Task[]
{
    Task.Run(async () => cats = await LoadCatsAsync()),
    Task.Run(async () => dog = await LoadDogAsync())
};

try
{
    await Task.WhenAll(loadDataTasks);
}
catch (Exception ex)
{
    // handle exception
}
Do you want to chain the Tasks, or can they be invoked in a parallel manner?
For chaining
Just do something like
Task.Run(...).ContinueWith(...).ContinueWith(...).ContinueWith(...);
Task.Factory.StartNew(...).ContinueWith(...).ContinueWith(...).ContinueWith(...);
and don't forget to check the previous Task instance in each ContinueWith as it might be faulted.
For the parallel manner
The simplest method I came across: Parallel.Invoke
Otherwise there's Task.WaitAll or you can even use WaitHandles for doing a countdown to zero actions left (wait, there's a new class: CountdownEvent), or ...
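For instance, Parallel.Invoke runs a set of delegates in parallel and blocks the caller until they have all finished (a sketch with placeholder methods):
Parallel.Invoke(
    () => DoWorkA(), // placeholder methods standing in for your actual work
    () => DoWorkB(),
    () => DoWorkC());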
This is how I do it with an array Func<>:
var tasks = new Func<Task>[]
{
    () => myAsyncWork1(),
    () => myAsyncWork2(),
    () => myAsyncWork3()
};

await Task.WhenAll(tasks.Select(task => task()).ToArray()); // Async
Task.WaitAll(tasks.Select(task => task()).ToArray()); // Or use WaitAll for sync
I prepared a piece of code to show you how to use the task for some of these scenarios.
// method to run tasks in parallel
public async Task RunMultipleTaskParallel(Task[] tasks)
{
    await Task.WhenAll(tasks);
}

// method to run tasks one by one
public async Task RunMultipleTaskOneByOne(Task[] tasks)
{
    for (int i = 0; i < tasks.Length; i++)
        await tasks[i];
}

// method to run i tasks in parallel
public async Task RunMultipleTaskParallel(Task[] tasks, int i)
{
    var countTask = tasks.Length;
    var remainTasks = 0;
    do
    {
        int toTake = (countTask < i) ? countTask : i;
        var limitedTasks = tasks.Skip(remainTasks)
                                .Take(toTake);
        remainTasks += toTake;
        await RunMultipleTaskParallel(limitedTasks.ToArray());
    } while (remainTasks < countTask);
}
There should be a more succinct solution than the accepted answer. It shouldn't take three steps to run multiple tasks simultaneously and get their results.
- Create tasks
- await Task.WhenAll(tasks)
- Get task results (e.g., task1.Result)
Here's a method that cuts this down to two steps:
public async Task<Tuple<T1, T2>> WhenAllGeneric<T1, T2>(Task<T1> task1, Task<T2> task2)
{
    await Task.WhenAll(task1, task2);
    return Tuple.Create(task1.Result, task2.Result);
}
You can use it like this:
var taskResults = await WhenAllGeneric(DoWorkAsync(), DoMoreWorkAsync());
var DoWorkResult = taskResults.Item1;
var DoMoreWorkResult = taskResults.Item2;
This removes the need for the temporary task variables. The problem with using this is that while it works for two tasks, you'd need to update it for three tasks, or any other number of tasks. Also, it doesn't work well if one of the tasks doesn't return anything. Really, the .NET library should provide something that can do this.
If you're using the async/await pattern, you can run several tasks in parallel like this:
public async Task DoSeveralThings()
{
    // Start all the tasks
    Task first = DoFirstThingAsync();
    Task second = DoSecondThingAsync();

    // Then wait for them to complete
    await first;
    await second;
}

Thread.Sleep vs Task.Delay?

I know that Thread.Sleep blocks a thread.
But does Task.Delay also block? Or is it just like Timer which uses one thread for all callbacks (when not overlapping)?
(this question doesn't cover the differences)
The documentation on MSDN is disappointing, but decompiling Task.Delay using Reflector gives more information:
public static Task Delay(int millisecondsDelay, CancellationToken cancellationToken)
{
    if (millisecondsDelay < -1)
    {
        throw new ArgumentOutOfRangeException("millisecondsDelay", Environment.GetResourceString("Task_Delay_InvalidMillisecondsDelay"));
    }
    if (cancellationToken.IsCancellationRequested)
    {
        return FromCancellation(cancellationToken);
    }
    if (millisecondsDelay == 0)
    {
        return CompletedTask;
    }
    DelayPromise state = new DelayPromise(cancellationToken);
    if (cancellationToken.CanBeCanceled)
    {
        state.Registration = cancellationToken.InternalRegisterWithoutEC(delegate (object state) {
            ((DelayPromise) state).Complete();
        }, state);
    }
    if (millisecondsDelay != -1)
    {
        state.Timer = new Timer(delegate (object state) {
            ((DelayPromise) state).Complete();
        }, state, millisecondsDelay, -1);
        state.Timer.KeepRootedWhileScheduled();
    }
    return state;
}
Basically, this method is just a timer wrapped inside of a task. So yes, you can say it's just like a timer.
No, Task.Delay doesn't block the current thread. It can be used to block it, but it doesn't do so by itself, and it's rarely used as a synchronous blocker in practice. All it actually does is return a Task that will complete after the specified amount of time:
Task task = Task.Delay(1000); // The task will complete after 1,000 milliseconds.
Typically this task is then waited asynchronously with the await keyword, inside an async method:
await task; // Suspends the async method, but doesn't block the thread.
The await keyword suspends the current execution flow (async method) until the awaitable completes. No thread is blocked while the execution flow is suspended.
It is also possible to block the current thread until the task completes, by using the synchronous Wait method.
task.Wait(); // Blocks the thread.
If you would like to see an experimental demonstration that await Task.Delay() doesn't block a thread, here is one. The program below creates a huge number of tasks, where each task internally awaits a Task.Delay(1000). Then the number of threads used by the current process is printed in the console, and finally all of the tasks are awaited:
Task[] tasks = Enumerable.Range(1, 100_000).Select(async _ =>
{
    await Task.Delay(1000);
}).ToArray();

Console.WriteLine($"Tasks: {tasks.Count(t => t.IsCompleted):#,0} / {tasks.Length:#,0}");
Thread.Sleep(500);
Console.WriteLine($"Threads.Count: {Process.GetCurrentProcess().Threads.Count:#,0}");
await Task.WhenAll(tasks);
Console.WriteLine($"Tasks: {tasks.Count(t => t.IsCompleted):#,0} / {tasks.Length:#,0}");
Output:
Tasks: 0 / 100,000
Threads.Count: 9
Tasks: 100,000 / 100,000
The program completes after just 1 second, and reports that during its peak it used a total of 9 threads. If each of the 100,000 tasks blocked a thread, we would expect to see 100,000 threads used at that point. Apparently this didn't happen.
