I am failing to understand why this doesn't seem to run the tasks in Parallel:
var tasks = new Task<MyReturnType>[mbis.Length];
for (int i = 0; i < tasks.Length; i++)
{
tasks[i] = CAS.Service.GetAllRouterInterfaces(mbis[i], 3);
}
Parallel.ForEach(tasks, task => task.Start());
By stepping through the execution, I see that as soon as this line is evaluated:
tasks[i] = CAS.Service.GetAllRouterInterfaces(mbis[i], 3);
The task starts. I want to add all the new tasks to the list, and then execute them in parallel.
If GetAllRouterInterfaces is an async method, the resulting Task will already be started (see this answer for further explanation).
This means that tasks will contain multiple tasks, all of which are already running, without the subsequent call to Parallel.ForEach.
You may wish to wait for all the entries in tasks to complete; you can do this with await Task.WhenAll(tasks).
So you should end up with:
var tasks = new Task<MyReturnType>[mbis.Length];
for (int i = 0; i < tasks.Length; i++)
{
tasks[i] = CAS.Service.GetAllRouterInterfaces(mbis[i], 3);
}
await Task.WhenAll(tasks);
Update from comments
It seems that despite GetAllRouterInterfaces being async and returning a Task, it is still making synchronous POST requests (presumably before any other await). This would explain why you are getting minimal concurrency, as each call to GetAllRouterInterfaces blocks while this request is made. The ideal solution would be to make an asynchronous POST request, e.g.:
await webclient.PostAsync(request).ConfigureAwait(false);
This will ensure your for loop is not blocked and the requests are made concurrently.
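For illustration, here is a hypothetical sketch of what an asynchronous GetAllRouterInterfaces could look like using HttpClient (the endpoint URL, the payload shape, and the Mbi parameter type are assumptions, not taken from your code):

// requires System.Net.Http and System.Text.Json
public async Task<MyReturnType> GetAllRouterInterfaces(Mbi mbi, int depth)
{
    using var client = new HttpClient();
    var payload = new StringContent(JsonSerializer.Serialize(new { mbi, depth }));

    // PostAsync does not block the calling thread, so many of these calls
    // can be in flight at the same time when started from the for loop.
    var response = await client.PostAsync("https://example.invalid/api/interfaces", payload)
        .ConfigureAwait(false);

    var json = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    return JsonSerializer.Deserialize<MyReturnType>(json);
}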
Further update after conversation
It seems you are unable to make the POST requests asynchronous, and GetAllRouterInterfaces does not actually do any asynchronous work. Given this, I have advised the following:
Remove async from GetAllRouterInterfaces and change the return type to MyReturnType
Call GetAllRouterInterfaces in parallel like so
var routerInterfaces = mbis.AsParallel()
.Select(mbi => CAS.Service.GetAllRouterInterfaces(mbi, 3));
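Note that PLINQ queries are lazy: nothing actually runs until the query is enumerated. A minimal sketch of forcing evaluation (the degree of parallelism of 4 is only an example value):

var routerInterfaces = mbis.AsParallel()
    .WithDegreeOfParallelism(4)
    .Select(mbi => CAS.Service.GetAllRouterInterfaces(mbi, 3))
    .ToList(); // ToList() enumerates the query, so the parallel calls actually execute here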
I don't know if I understand you correctly.
First of all, if GetAllRouterInterfaces returns a Task, you have to await the result.
With Parallel.ForEach you can't await tasks as such, but you can do something similar like this:
public async Task RunInParallel(IEnumerable<TWhatEver> mbisItems)
{
//mbisItems == your parameter that you want to pass to GetAllRouterInterfaces
//degree of concurrency
var concurrentTasks = 3;
//Parallel.ForEach internally does something like this:
await Task.WhenAll(
from partition in Partitioner.Create(mbisItems).GetPartitions(concurrentTasks)
select Task.Run(async delegate
{
using (partition)
while (partition.MoveNext())
{
var currentMbis = partition.Current;
var yourResult = await GetAllRouterInterfaces(currentMbis,3);
}
}
));
}
We had something like below
List<string> uncheckItems = new List<string>();
for (int i = 0; i < 100; i++)
{
uncheckItems.Add($"item {i + 1}");
}
var tasks = uncheckItems.Select(item =>
new Task(async () => await ProcessItem(item))
);
// Do some preparations
foreach (var task in tasks)
{
task.Start();
}
Task.WaitAll(tasks.ToArray());
Console.WriteLine("=====================================================All finished");
It seems to make sense, but the program is never able to reach the "All finished" line.
And if I adjust the workflow to run the tasks immediately, i.e. remove the task.Start() loop and change it to
var tasks = uncheckItems.Select(async item =>
await ProcessItem(item)
);
Then it works.
However, I wonder:
Why does it get stuck?
Is there any way we can keep the workflow (create tasks without triggering them directly and start them later on) and still be able to utilize WaitAll()?
The reason is lazy enumeration evaluation: you are starting different tasks than the ones you are waiting on with Task.WaitAll. This can be fixed, for example, like this:
var tasks = uncheckItems.Select(item =>
new Task(async () => await ProcessItem(item))
)
.ToArray();
Though it will not achieve your goal (as I understand it) of waiting for all ProcessItem calls to finish. You could do something like new Task(() => ProcessItem(item).GetAwaiter().GetResult()), but I think it would be better to change your approach: for example, make ProcessItem return a "cold" task, or use your second snippet and move the task creation to the point where the tasks need to be started (see the sketch below).
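For example, a minimal sketch of that last option, creating the tasks only at the point where they should start, so WaitAll waits on the same already-running instances:

List<string> uncheckItems = new List<string>();
for (int i = 0; i < 100; i++)
{
    uncheckItems.Add($"item {i + 1}");
}

// Do some preparations...

// Create (and thereby start) the tasks only when they should actually run,
// and materialize them so WaitAll sees the same task instances.
Task[] tasks = uncheckItems.Select(item => ProcessItem(item)).ToArray();
Task.WaitAll(tasks);
Console.WriteLine("All finished");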
You would need to be next to a world expert in Task to be using its constructor. The documentation warns against it:
This constructor should only be used in advanced scenarios where it is required that the creation and starting of the task is separated.
Rather than calling this constructor, the most common way to instantiate a Task object and launch a task is by calling the static Task.Run(Action) or TaskFactory.StartNew(Action) method.
If a task with no action is needed just for the consumer of an API to have something to await, a TaskCompletionSource should be used.
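A minimal sketch of the TaskCompletionSource idea: the consumer gets a plain task to await, and the producing code decides later when, and how, it completes.

var tcs = new TaskCompletionSource<string>();

// Hand tcs.Task to whoever needs something to await.
Task<string> consumerTask = tcs.Task;

// ...later, wherever the real work actually finishes:
tcs.SetResult("done");       // completes consumerTask successfully
// tcs.SetException(ex);     // or fault it
// tcs.SetCanceled();        // or cancel it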
The Task constructor produces a non-started Task that will only start when Task.Start() is invoked, as you discovered.
The Task constructor also takes an Action (or Action<object>), so the return value of the async lambda is ignored. That means that, once started, the task will end as soon as async () => await ProcessItem(item) yields at its first await.
What you need is:
await Task.WhenAll(Enumerable.Range(0, 100).Select(i => ProcessItem($"item {i + 1}")));
Or, if you really have to block:
Task
.WhenAll(Enumerable.Range(0, 100).Select(i => ProcessItem($"item {i + 1}")))
.GetAwaiter().GetResult();
Get the select out of there.
List<string> uncheckItems = new List<string>();
for (int i = 0; i < 100; i++)
{
uncheckItems.Add($"item {i + 1}");
}
var tasks = new List<Task>();
foreach (var item in uncheckItems) {
tasks.Add(Task.Run(() => ProcessItem(item)));
}
Task.WaitAll(tasks.ToArray());
Console.WriteLine("========All finished");
https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.waitall?view=net-6.0
I have a list of ids, and I want to get data for each of those ids in parallel from the database. My ExecuteAsync method below is called at very high throughput, and for each request we have around 500 ids for which I need to extract data.
So I have the below code where I loop over the list of ids, making async calls for each of those ids in parallel, and it works fine.
private async Task<List<T>> ExecuteAsync<T>(IList<int> ids, IPollyPolicy policy,
Func<CancellationToken, int, Task<T>> mapper) where T : class
{
var tasks = new List<Task<T>>(ids.Count);
// invoking multiple id in parallel to get data for each id from database
for (int i = 0; i < ids.Count; i++)
{
tasks.Add(Execute(policy, ct => mapper(ct, ids[i])));
}
// wait for all id response to come back
var responses = await Task.WhenAll(tasks);
var excludeNull = new List<T>(ids.Count);
for (int i = 0; i < responses.Length; i++)
{
var response = responses[i];
if (response != null)
{
excludeNull.Add(response);
}
}
return excludeNull;
}
private async Task<T> Execute<T>(IPollyPolicy policy,
Func<CancellationToken, Task<T>> requestExecuter) where T : class
{
var response = await policy.Policy.ExecuteAndCaptureAsync(
ct => requestExecuter(ct), CancellationToken.None);
if (response.Outcome == OutcomeType.Failure)
{
if (response.FinalException != null)
{
// log error
throw response.FinalException;
}
}
return response?.Result;
}
Question:
Now, as you can see, I am looping over all ids and making a bunch of async calls to the database in parallel for each id, which can put a lot of load on the database (depending on how many requests are coming in). So I want to limit the number of async calls we are making to the database. I modified ExecuteAsync to use SemaphoreSlim as shown below, but it doesn't look like it does what I want it to do:
private async Task<List<T>> ExecuteAsync<T>(IList<int> ids, IPollyPolicy policy,
Func<CancellationToken, int, Task<T>> mapper) where T : class
{
var throttler = new SemaphoreSlim(250);
var tasks = new List<Task<T>>(ids.Count);
// invoking multiple id in parallel to get data for each id from database
for (int i = 0; i < ids.Count; i++)
{
await throttler.WaitAsync().ConfigureAwait(false);
try
{
tasks.Add(Execute(policy, ct => mapper(ct, ids[i])));
}
finally
{
throttler.Release();
}
}
// wait for all id response to come back
var responses = await Task.WhenAll(tasks);
// same excludeNull code check here
return excludeNull;
}
Does a semaphore work on threads or tasks? Reading it here, it looks like Semaphore is for threads and SemaphoreSlim is for tasks.
Is this correct? If yes, then what's the best way to fix this and limit the number of async IO tasks we make to the database here?
Task is an abstraction on threads and doesn't necessarily create a new thread. Semaphore limits the number of threads that can access that for loop. Execute returns a Task, which isn't a thread. If there's only 1 request, there will be only 1 thread inside that for loop, even if it is asking for 500 ids. That 1 thread sends off all the async IO tasks itself.
Sort of. I would not say that tasks are related to threads at all. There are actually two kinds of tasks: a delegate task (which is kind of an abstraction of a thread), and a promise task (which has nothing to do with threads).
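A rough sketch of that distinction (the method names are illustrative only):

// A delegate task wraps code that runs on a thread pool thread:
Task<int> delegateTask = Task.Run(() => CountPrimesUpTo(1_000_000));

// A promise task represents a future result with no dedicated thread behind it,
// e.g. I/O completion or a TaskCompletionSource:
var httpClient = new HttpClient();
Task<string> promiseTask = httpClient.GetStringAsync("https://example.com");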
Regarding the SemaphoreSlim, it does limit the concurrency of a block of code (not threads).
I recently started playing with C#, so it looks like my understanding of threads and tasks is not right.
I recommend reading my async intro and best practices. Follow up with There Is No Thread if you're interested in more detail about how threads aren't really involved.
I modified ExecuteAsync to use SemaphoreSlim as shown below, but it doesn't look like it does what I want it to do
The current code is only throttling the adding of the tasks to the list, which is only done one at a time anyway. What you want to do is throttle the execution itself:
private async Task<List<T>> ExecuteAsync<T>(IList<int> ids, IPollyPolicy policy, Func<CancellationToken, int, Task<T>> mapper) where T : class
{
var throttler = new SemaphoreSlim(250);
var tasks = new List<Task<T>>(ids.Count);
// invoking multiple id in parallel to get data for each id from database
for (int i = 0; i < ids.Count; i++)
tasks.Add(ThrottledExecute(ids[i]));
// wait for all id response to come back
var responses = await Task.WhenAll(tasks);
// same excludeNull code check here
return excludeNull;
async Task<T> ThrottledExecute(int id)
{
await throttler.WaitAsync().ConfigureAwait(false);
try {
return await Execute(policy, ct => mapper(ct, id)).ConfigureAwait(false);
} finally {
throttler.Release();
}
}
}
Your colleague probably has in mind the Semaphore class, which is indeed a thread-centric throttler with no asynchronous capabilities.
Limits the number of threads that can access a resource or pool of resources concurrently.
The SemaphoreSlim class is a lightweight alternative to Semaphore, which includes the asynchronous method WaitAsync, and that makes all the difference in the world. WaitAsync doesn't block a thread; it blocks an asynchronous workflow. Asynchronous workflows are cheap (usually less than 1000 bytes each). You can have millions of them "running" concurrently at any given moment. This is not the case with threads, because of the 1 MB of memory that each thread reserves for its stack.
As for the ExecuteAsync method, here is how you could refactor it by using the LINQ methods Select, Where, ToArray and ToList:
Update: The Polly library supports capturing and continuing on the current synchronization context, so I added a bool executeOnCurrentContext
argument to the API. I also renamed the asynchronous Execute method to ExecuteAsync, to be on par with the guidelines.
private async Task<List<T>> ExecuteAsync<T>(IList<int> ids, IPollyPolicy policy,
Func<CancellationToken, int, Task<T>> mapper,
int concurrencyLevel = 1, bool executeOnCurrentContext = false) where T : class
{
var throttler = new SemaphoreSlim(concurrencyLevel);
Task<T>[] tasks = ids.Select(async id =>
{
await throttler.WaitAsync().ConfigureAwait(executeOnCurrentContext);
try
{
return await ExecuteAsync(policy, ct => mapper(ct, id),
executeOnCurrentContext).ConfigureAwait(false);
}
finally
{
throttler.Release();
}
}).ToArray();
T[] results = await Task.WhenAll(tasks).ConfigureAwait(false);
return results.Where(r => r != null).ToList();
}
private async Task<T> ExecuteAsync<T>(IPollyPolicy policy,
Func<CancellationToken, Task<T>> function,
bool executeOnCurrentContext = false) where T : class
{
var response = await policy.Policy.ExecuteAndCaptureAsync(
ct => executeOnCurrentContext ? function(ct) : Task.Run(() => function(ct)),
CancellationToken.None, continueOnCapturedContext: executeOnCurrentContext)
.ConfigureAwait(executeOnCurrentContext);
if (response.Outcome == OutcomeType.Failure)
{
if (response.FinalException != null)
{
ExceptionDispatchInfo.Throw(response.FinalException);
}
}
return response?.Result;
}
You are throttling the rate at which you add tasks to the list. You are not throttling the rate at which tasks are executed. To do that, you'd probably have to implement your semaphore calls inside the Execute method itself (see the sketch after the snippet below).
If you can't modify Execute, another way to do it is to poll for completed tasks, sort of like this:
for (int i = 0; i < ids.Count; i++)
{
// re-check the pending count on every iteration, otherwise the loop never exits
while (tasks.Count(t => !t.IsCompleted) >= 500) await Task.Yield();
tasks.Add(Execute(policy, ct => mapper(ct, ids[i])));
}
await Task.WhenAll( tasks );
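If Execute can be modified, a hedged sketch of the first suggestion above is to move the throttling into Execute itself, with the SemaphoreSlim held as a shared field rather than a local (the limit of 250 mirrors the question):

private static readonly SemaphoreSlim throttler = new SemaphoreSlim(250);

private async Task<T> Execute<T>(IPollyPolicy policy,
    Func<CancellationToken, Task<T>> requestExecuter) where T : class
{
    // The semaphore now brackets the actual database call,
    // not just the Add to the task list.
    await throttler.WaitAsync().ConfigureAwait(false);
    try
    {
        var response = await policy.Policy.ExecuteAndCaptureAsync(
            ct => requestExecuter(ct), CancellationToken.None);
        if (response.Outcome == OutcomeType.Failure && response.FinalException != null)
        {
            throw response.FinalException;
        }
        return response?.Result;
    }
    finally
    {
        throttler.Release();
    }
}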
Actually, the TPL is capable of controlling the task execution and limiting the concurrency. You can test how many parallel tasks are suitable for your use case. No need to think about threads; the TPL will manage everything fine for you.
To use limited concurrency, see this answer (credits to @panagiotis-kanavos):
.Net TPL: Limited Concurrency Level Task scheduler with task priority?
The example code is as follows (it even uses different priorities; you can strip that out):
QueuedTaskScheduler qts = new QueuedTaskScheduler(TaskScheduler.Default,4);
TaskScheduler pri0 = qts.ActivateNewQueue(priority: 0);
TaskScheduler pri1 = qts.ActivateNewQueue(priority: 1);
Task.Factory.StartNew(()=>{ },
CancellationToken.None,
TaskCreationOptions.None,
pri0);
Just throw all your tasks to the queue and with Task.WhenAll you can wait till everything is done.
Let's say I have the following code, for example:
private async Task ManageClients()
{
for (int i =0; i < listClients.Count; i++)
{
if (list[i] == 0)
await DoSomethingWithClientAsync();
else
await DoOtherThingAsync();
}
DoOtherWork();
}
My questions are:
1. Will the for() loop continue and process other clients in the list, or will it await until one of the tasks finishes?
2. Is it even a good practice to use async/await inside a loop?
3. Can it be done in a better way?
I know it was a really simple example, but I'm trying to imagine what would happen if that code was a server with thousands of clients.
In your code example, the code will "block" when the loop reaches await, meaning the other clients will not be processed until the first one is complete. That is because, while the code uses asynchronous calls, it was written using a synchronous logic mindset.
An asynchronous approach should look more like this:
private async Task ManageClients()
{
var tasks = listClients.Select( client => DoSomethingWithClientAsync() );
await Task.WhenAll(tasks);
DoOtherWork();
}
Notice there is only one await, which simultaneously awaits all of the clients, and allows them to complete in any order. This avoids the situation where the loop is blocked waiting for the first client.
If a thread that is executing an async function calls another async function, this other function is executed as if it were not async, until it reaches a call to a third async function, which is in turn also executed as if it were not async.
This goes on until the thread encounters an await.
Instead of really doing nothing at that point, the thread goes up the call stack to see whether the caller was awaiting the result of the called function. If not, the thread continues with the statements in the caller function until it sees an await, and then goes up the call stack again to see if it can continue there.
This can be seen in the following code:
var taskDoSomething = DoSomethingAsync(...);
// because we are not awaiting, the following is done as soon as DoSomethingAsync has to await:
DoSomethingElse();
// from here we need the result from DoSomethingAsync. await for it:
var someResult = await taskDoSomething;
You can even call several sub-procedures without awaiting:
var taskDoSomething = DoSomethingAsync(...);
var taskDoSomethingElse = DoSomethingElseAsync(...);
// by the time we are here, both tasks are already running
DoSomethingElse();
Once you need the results of the tasks, it depends on what you want to do with them. Can you continue processing if one task is completed but the other is not?
var someResult = await taskDoSomething;
ProcessResult(someResult);
var someOtherResult = await taskDoSomethingelse;
ProcessBothResults(someResult, someOtherResult);
If you need the result of all tasks before you can continue, use Task.WhenAll:
Task[] allTasks = new Task[] {taskDoSomething, taskDoSomethingElse};
await Task.WhenAll(allTasks);
var someResult = taskDoSomething.Result;
var someOtherResult = taskDoSomethingElse.Result;
ProcessBothResults(someResult, someOtherResult);
Back to your question
If you have a sequence of items where you need to start awaitable tasks, it depends on whether the tasks need the results of other tasks or not. In other words, can task[2] start if task[1] has not been completed yet? Do task[1] and task[2] interfere with each other if they both run at the same time?
If they are independent, then start all tasks without awaiting, and then use Task.WhenAll to wait until all are finished. The task scheduler will take care that not too many tasks are started at the same time. Be aware, though, that starting several tasks could lead to deadlocks; check carefully whether you need critical sections.
var clientTasks = new List<Task>();
foreach (var client in listClients)
{
if (client == 0)
clientTasks.Add(DoSomethingWithClientAsync());
else
clientTasks.Add(DoOtherThingAsync());
}
// if here: all tasks started. If desired you can do other things:
AndNowForSomethingCompletelyDifferent();
// later we need the other tasks to be finished:
var taskWaitAll = Task.WhenAll(clientTasks);
// did you notice we still did not await yet, we are still in business:
MontyPython();
// okay, done with frolicking, we need the results:
await taskWaitAll;
DoOtherWork();
This was the scenario where all tasks were independent: no task needed another one to be completed before it could start. However, if you need task[2] to be completed before you can start task[3], you should await:
foreach (var client in listClients)
{
if (client == 0)
await DoSomethingWithClientAsync();
else
await DoOtherThingAsync();
}
I need to run multiple async tasks in a console application, and wait for them all to complete before further processing.
There are many articles out there, but I seem to get more confused the more I read. I've read and understand the basic principles of the Task library, but I'm clearly missing a link somewhere.
I understand that it's possible to chain tasks so that they start after another completes (which is pretty much the scenario for all the articles I've read), but I want all my Tasks running at the same time, and I want to know once they're all completed.
What's the simplest implementation for a scenario like this?
Neither answer mentioned the awaitable Task.WhenAll:
var task1 = DoWorkAsync();
var task2 = DoMoreWorkAsync();
await Task.WhenAll(task1, task2);
The main difference between Task.WaitAll and Task.WhenAll is that the former will block (similar to using Wait on a single task) while the latter will not and can be awaited, yielding control back to the caller until all tasks finish.
More so, exception handling differs:
Task.WaitAll:
At least one of the Task instances was canceled -or- an exception was thrown during the execution of at least one of the Task instances. If a task was canceled, the AggregateException contains an OperationCanceledException in its InnerExceptions collection.
Task.WhenAll:
If any of the supplied tasks completes in a faulted state, the returned task will also complete in a Faulted state, where its exceptions will contain the aggregation of the set of unwrapped exceptions from each of the supplied tasks.
If none of the supplied tasks faulted but at least one of them was canceled, the returned task will end in the Canceled state.
If none of the tasks faulted and none of the tasks were canceled, the resulting task will end in the RanToCompletion state.
If the supplied array/enumerable contains no tasks, the returned task will immediately transition to a RanToCompletion state before it's returned to the caller.
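A small sketch of that exception-handling difference (the failing task is just a placeholder):

async Task FailAsync()
{
    await Task.Delay(10);
    throw new InvalidOperationException("boom");
}

var faulting = FailAsync();

try
{
    Task.WaitAll(faulting);        // blocking; the failure surfaces as an AggregateException
}
catch (AggregateException ex)
{
    Console.WriteLine(ex.InnerExceptions[0].Message);   // "boom"
}

try
{
    await Task.WhenAll(faulting);  // awaiting rethrows the original exception directly
}
catch (InvalidOperationException ex)
{
    Console.WriteLine(ex.Message);                      // "boom"
}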
You could create many tasks like:
List<Task> TaskList = new List<Task>();
foreach(...)
{
var LastTask = new Task(SomeFunction);
LastTask.Start();
TaskList.Add(LastTask);
}
Task.WaitAll(TaskList.ToArray());
You can use WhenAll, which will return an awaitable Task, or WaitAll, which has no return type and will block further code execution (similar to Thread.Sleep) until all tasks are completed, canceled or faulted.
WhenAll vs. WaitAll:

Any of the supplied tasks completes in a faulted state
WhenAll: A task with the faulted state will be returned. The exceptions will contain the aggregation of the set of unwrapped exceptions from each of the supplied tasks.
WaitAll: An AggregateException will be thrown.

None of the supplied tasks faulted but at least one of them was canceled
WhenAll: The returned task will end in the TaskStatus.Canceled state.
WaitAll: An AggregateException will be thrown which contains an OperationCanceledException in its InnerExceptions collection.

An empty list was given
WhenAll: The returned task will immediately transition to a TaskStatus.RanToCompletion state before it's returned to the caller.
WaitAll: An ArgumentException will be thrown.

Blocking
WhenAll: Doesn't block the current thread.
WaitAll: Blocks the current thread.
Example
var tasks = new Task[] {
TaskOperationOne(),
TaskOperationTwo()
};
Task.WaitAll(tasks);
// or
await Task.WhenAll(tasks);
If you want to run the tasks in a particular order, you can get inspiration from this answer.
The best option I've seen is the following extension method:
public static Task ForEachAsync<T>(this IEnumerable<T> sequence, Func<T, Task> action) {
return Task.WhenAll(sequence.Select(action));
}
Call it like this:
await sequence.ForEachAsync(item => item.SomethingAsync(blah));
Or with an async lambda:
await sequence.ForEachAsync(async item => {
var more = await GetMoreAsync(item);
await more.FrobbleAsync();
});
Yet another answer... but I usually find myself in a case where I need to load data simultaneously and put it into variables, like:
var cats = new List<Cat>();
var dog = new Dog();
var loadDataTasks = new Task[]
{
Task.Run(async () => cats = await LoadCatsAsync()),
Task.Run(async () => dog = await LoadDogAsync())
};
try
{
await Task.WhenAll(loadDataTasks);
}
catch (Exception ex)
{
// handle exception
}
Do you want to chain the Tasks, or can they be invoked in a parallel manner?
For chaining
Just do something like
Task.Run(...).ContinueWith(...).ContinueWith(...).ContinueWith(...);
Task.Factory.StartNew(...).ContinueWith(...).ContinueWith(...).ContinueWith(...);
and don't forget to check the previous Task instance in each ContinueWith as it might be faulted.
For the parallel manner
The simplest method I came across: Parallel.Invoke
Otherwise there's Task.WaitAll or you can even use WaitHandles for doing a countdown to zero actions left (wait, there's a new class: CountdownEvent), or ...
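A minimal Parallel.Invoke sketch (the three Do* methods are placeholders; note that Parallel.Invoke takes synchronous delegates and blocks until all of them have finished):

Parallel.Invoke(
    () => DoFirstThing(),
    () => DoSecondThing(),
    () => DoThirdThing());

Console.WriteLine("All three actions completed");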
This is how I do it with an array Func<>:
var tasks = new Func<Task>[]
{
() => myAsyncWork1(),
() => myAsyncWork2(),
() => myAsyncWork3()
};
await Task.WhenAll(tasks.Select(task => task()).ToArray()); //Async
Task.WaitAll(tasks.Select(task => task()).ToArray()); //Or use WaitAll for Sync
I prepared a piece of code to show you how to use tasks for some of these scenarios.
// method to run tasks in parallel
public async Task RunMultipleTaskParallel(Task[] tasks) {
await Task.WhenAll(tasks);
}
// method to run tasks one by one
public async Task RunMultipleTaskOneByOne(Task[] tasks)
{
for (int i = 0; i < tasks.Length; i++)
await tasks[i];
}
// method to run at most i tasks in parallel (in batches of i)
public async Task RunMultipleTaskParallel(Task[] tasks, int i)
{
var countTask = tasks.Length;
var remainTasks = 0;
do
{
int toTake = (countTask < i) ? countTask : i;
var limitedTasks = tasks.Skip(remainTasks)
.Take(toTake);
remainTasks += toTake;
await RunMultipleTaskParallel(limitedTasks.ToArray());
} while (remainTasks < countTask);
}
There should be a more succinct solution than the accepted answer. It shouldn't take three steps to run multiple tasks simultaneously and get their results.
1. Create tasks
2. await Task.WhenAll(tasks)
3. Get task results (e.g., task1.Result)
Here's a method that cuts this down to two steps:
public async Task<Tuple<T1, T2>> WhenAllGeneric<T1, T2>(Task<T1> task1, Task<T2> task2)
{
await Task.WhenAll(task1, task2);
return Tuple.Create(task1.Result, task2.Result);
}
You can use it like this:
var taskResults = await WhenAllGeneric(DoWorkAsync(), DoMoreWorkAsync());
var DoWorkResult = taskResults.Item1;
var DoMoreWorkResult = taskResults.Item2;
This removes the need for the temporary task variables. The problem with using this is that while it works for two tasks, you'd need to update it for three tasks, or any other number of tasks. Also, it doesn't work well if one of the tasks doesn't return anything. Really, the .NET library should provide something that can do this.
If you're using the async/await pattern, you can run several tasks in parallel like this:
public async Task DoSeveralThings()
{
// Start all the tasks
var first = DoFirstThingAsync();   // assuming these return Task<T>, so their results can be awaited below
var second = DoSecondThingAsync();
// Then wait for them to complete
var firstResult = await first;
var secondResult = await second;
}
I've been trying to execute tasks sequentially, but they are executed in a random order instead.
Appending .Unwrap after .ContinueWith doesn't help.
Returning a Task<T> from these methods instead of Task and assigning their result in the caller doesn't work either.
I'm not sure about the signature of my methods, i.e. whether they should contain async/await or not.
Sequencing tasks:
Task biographies = LoadArtistBiographies(apiKey);
Task blogs = LoadArtistBlogs(apiKey);
Task familiarity = LoadArtistFamiliarity(apiKey);
Task hottness = LoadArtistHottness(apiKey);
Task images = LoadArtistImages(apiKey);
await biographies.ContinueWith(b => blogs);
await blogs.ContinueWith(f => familiarity);
await familiarity.ContinueWith(h => hottness);
await hottness.ContinueWith(i => images);
await images;
Sample of the executed methods:
private async Task LoadArtistBiographies(string apiKey)
{
var parameters = new ArtistBiographiesParameters();
parameters.SetDefaultValues();
parameters.ApiKey = apiKey;
parameters.Id = _artistId;
ArtistBiographies biographies = await Queries.ArtistBiographies(parameters);
ItemsControlBiographies.ItemsSource = biographies.Biographies;
}
The Queries.* methods are also asynchronous:
public static async Task<ArtistBlogs> ArtistBlogs(ArtistBlogsParameters parameters)
What is the correct syntax for chaining tasks that are themselves executing asynchronous tasks?
If you want to execute the tasks in a specific order, you should await them directly:
await LoadArtistBiographies(apiKey);
await LoadArtistBlogs(apiKey);
await LoadArtistFamiliarity(apiKey);
await LoadArtistHottness(apiKey);
await LoadArtistImages(apiKey);
This will cause the second task (LoadArtistBlogs) to be scheduled after the first task completes.
Right now, the tasks are executing "in random order" because each Load* method starts running as soon as it is called; assigning the returned Task instances to variables just lets them all run concurrently.
That being said, I would actually recommend changing your methods around to returning the values, instead of assigning them to the datasource within the method:
private async Task<Biographies> LoadArtistBiographiesAsync(string apiKey)
{
var parameters = new ArtistBiographiesParameters();
parameters.SetDefaultValues();
parameters.ApiKey = apiKey;
parameters.Id = _artistId;
var bio = await Queries.ArtistBiographies(parameters);
return bio.Biographies;
}
You could then write these as:
ItemsControlBiographies.ItemsSource = await LoadArtistBiographiesAsync(apiKey);
// Other methods below, with await as this example
This makes the intent as the logic flows through the async methods a bit more clear, in my opinion.
Your example code will start executing all the tasks without waiting for each one to complete. It then waits for them to complete in order.
The key is that an async method starts when you call it. So if you don't want to start it yet, don't call the method yet:
await LoadArtistBiographies(apiKey);
await LoadArtistBlogs(apiKey);
await LoadArtistFamiliarity(apiKey);
await LoadArtistHottness(apiKey);
await LoadArtistImages(apiKey);
await will wait for the given task to complete; it will not start the task. Your Load* methods most likely all start a task when called, so all five tasks are running in an arbitrary order.
By the time you get to the await, your task may or may not have already finished; it does not matter. You call ContinueWith on it, telling your task it should continue with the given delegate once finished. This returns a new Task, which you finally await.
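For completeness, a hedged sketch of sequencing with ContinueWith (assuming a UI synchronization context, since the Load* methods touch UI controls): the next Load* call must be made inside the continuation so it does not start early, and Unwrap() flattens the resulting Task<Task> so it can be awaited. Awaiting the methods directly, as shown above, is much simpler.

await LoadArtistBiographies(apiKey)
    .ContinueWith(_ => LoadArtistBlogs(apiKey),
        TaskScheduler.FromCurrentSynchronizationContext())
    .Unwrap()
    .ContinueWith(_ => LoadArtistFamiliarity(apiKey),
        TaskScheduler.FromCurrentSynchronizationContext())
    .Unwrap();
// ...and so on for LoadArtistHottness and LoadArtistImages.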
Actually, I've just found a way, but without ContinueWith:
ArtistBiographies biographies = await LoadArtistBiographies(apiKey);
ItemsControlBiographies.ItemsSource = biographies.Biographies;
ArtistBlogs blogs = await LoadArtistBlogs(apiKey);
ItemsControlBlogs.ItemsSource = blogs.Blogs;
ArtistFamiliarity familiarity = await LoadArtistFamiliarity(apiKey);
ContentControlFamiliarity.Content = familiarity.artist;
ArtistHotttnesss hottness = await LoadArtistHottness(apiKey);
ContentControlHottness.Content = hottness.Artist;
ArtistImages images = await LoadArtistImages(apiKey);
ItemsControlImages.ItemsSource = images.Images;
Curious if someone could provide the answer using ContinueWith.