Say I had a list of algorithms, each containing an async call somewhere in its body. The order in which I execute the algorithms is the order in which I want to receive the results. That is, I want the AlgorithmResult list to look like {Algorithm1Result, Algorithm2Result, Algorithm3Result} after all the algorithms have executed. Would I be right in saying that if Algorithm1 and 3 finished before 2, my results would actually be in the order {Algorithm1Result, Algorithm3Result, Algorithm2Result}?
var algorithms = new List<Algorithm>(){Algorithm1, Algorithm2, Algorithm3};
var algorithmResults = new List<AlgorithmResult>();
foreach (var algorithm in algorithms)
{
algorithmResults.Add(await algorithm.Execute());
}
No, the results would be in the same order you added them to the list, since each operation is awaited separately before the next one starts.
class Program
{
public static async Task<int> GetResult(int timing)
{
return await Task.Run(() =>
{
Thread.Sleep(timing * 1000);
return timing;
});
}
public static async Task<List<int>> GetAll()
{
List<int> results = new List<int>();
results.Add(await GetResult(3));
results.Add(await GetResult(2));
results.Add(await GetResult(1));
return results;
}
static void Main(string[] args)
{
var res = GetAll().Result;
}
}
res still contains the values in the order they were added (3, 2, 1). Note that this is not parallel execution; each call completes before the next one starts.
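If you did want the calls to run concurrently while still keeping the results in a fixed order, a minimal sketch (reusing the GetResult helper above) could rely on Task.WhenAll, which returns the results in the order the tasks were passed in, not the order they finish:
public static async Task<List<int>> GetAllConcurrently()
{
    // All three tasks start immediately and run at the same time.
    // Task.WhenAll preserves the input order, so this returns { 3, 2, 1 }
    // even though GetResult(1) finishes first.
    int[] results = await Task.WhenAll(GetResult(3), GetResult(2), GetResult(1));
    return new List<int>(results);
}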
Since you don't add the tasks themselves but the result of each task after awaiting it, they will be in the order you want.
Even if you did not await the result before adding the next task you could get the order you want:
In small steps, with explicit types for clarity:
List<Task<AlgorithmResult>> tasks = new List<Task<AlgorithmResult>>();
foreach (Algorithm algorithm in algorithms)
{
Task<AlgorithmResult> task = algorithm.Execute();
// don't wait until task Completes, just remember the Task
// and continue executing the next Algorithm
tasks.Add(task);
}
Now some Tasks may be running, some may already have completed. Let's wait until they are all complete, and fetch the results:
await Task.WhenAll(tasks);
List<AlgorithmResult> results = tasks.Select(task => task.Result).ToList();
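As a side note, Task.WhenAll already returns the results in the same order as the input tasks, so those two lines could be collapsed into one (a sketch, assuming the surrounding method is async):
AlgorithmResult[] results = await Task.WhenAll(tasks);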
Related
I have an IEnumerable<Task>, where each Task will call the same endpoint. However, the endpoint can only handle so many calls per second. How can I put, say, a half second delay between each call?
I have tried adding Task.Delay(), but of course awaiting them simply means that the app waits a half second before sending all the calls at once.
Here is a code snippet:
var resultTasks = orders
.Select(async order =>
{
var result = new VendorTaskResult();
try
{
result.Response = await order.CallVendorAsync();
}
catch(Exception ex)
{
result.Exception = ex;
}
return result;
} );
var results = await Task.WhenAll(resultTasks);
I feel like I should do something like
Task.WhenAll(resultTasks.EmitOverTime(500));
... but how exactly do I do that?
What you describe in your question is, in other words, rate limiting. You'd like to apply a rate limiting policy to your client, because the API you use enforces such a policy on the server to protect itself from abuse.
While you could implement rate limiting yourself, I'd recommend going with some well-established solution. Rate Limiter from Davis Desmaisons was the one that I picked at random, and I instantly liked it. It had solid documentation, good test coverage and was easy to use. It is also available as a NuGet package.
Check out the simple snippet below that demonstrates running semi-overlapping tasks in sequence while deferring each task's start until half a second after the immediately preceding task started. Each task lasts at least 750 ms.
using ComposableAsync;
using RateLimiter;
using System;
using System.Threading.Tasks;
namespace RateLimiterTest
{
class Program
{
static void Main(string[] args)
{
Log("Starting tasks ...");
var constraint = TimeLimiter.GetFromMaxCountByInterval(1, TimeSpan.FromSeconds(0.5));
var tasks = new[]
{
DoWorkAsync("Task1", constraint),
DoWorkAsync("Task2", constraint),
DoWorkAsync("Task3", constraint),
DoWorkAsync("Task4", constraint)
};
Task.WaitAll(tasks);
Log("All tasks finished.");
Console.ReadLine();
}
static void Log(string message)
{
Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff ") + message);
}
static async Task DoWorkAsync(string name, IDispatcher constraint)
{
await constraint;
Log(name + " started");
await Task.Delay(750);
Log(name + " finished");
}
}
}
Sample output:
10:03:27.121 Starting tasks ...
10:03:27.154 Task1 started
10:03:27.658 Task2 started
10:03:27.911 Task1 finished
10:03:28.160 Task3 started
10:03:28.410 Task2 finished
10:03:28.680 Task4 started
10:03:28.913 Task3 finished
10:03:29.443 Task4 finished
10:03:29.443 All tasks finished.
If you change the constraint to allow maximum two tasks per second (var constraint = TimeLimiter.GetFromMaxCountByInterval(2, TimeSpan.FromSeconds(1));), which is not the same as one per half a second, then the output could be like:
10:06:03.237 Starting tasks ...
10:06:03.264 Task1 started
10:06:03.268 Task2 started
10:06:04.026 Task2 finished
10:06:04.031 Task1 finished
10:06:04.275 Task3 started
10:06:04.276 Task4 started
10:06:05.032 Task4 finished
10:06:05.032 Task3 finished
10:06:05.033 All tasks finished.
Note that the current version of Rate Limiter targets .NET Framework 4.7.2+ or .NET Standard 2.0+.
This is just a thought, but another approach could be to create a queue and add another thread that polls the queue for calls that need to go out to your endpoint.
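A rough sketch of that idea, using a BlockingCollection as the queue and a single background worker that spaces out the calls (the half-second gap and the use of CallVendorAsync from the question are assumptions; error handling is omitted):
// Requires System.Collections.Concurrent and System.Threading.Tasks.
var queue = new BlockingCollection<Func<Task>>();
var startedCalls = new List<Task>();

// One background worker drains the queue and starts a call every 500 ms.
var worker = Task.Run(async () =>
{
    foreach (var startCall in queue.GetConsumingEnumerable())
    {
        startedCalls.Add(startCall());   // start the call, don't wait for it
        await Task.Delay(500);           // half-second gap before the next one
    }
});

// Producer: enqueue one delegate per order, then mark the queue as complete.
foreach (var order in orders)
    queue.Add(() => order.CallVendorAsync());
queue.CompleteAdding();

await worker;                     // all calls have been started
await Task.WhenAll(startedCalls); // all calls have completed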
Have you considered just turning that into a foreach loop with a Task.Delay call? You seem to want to call them sequentially anyway, and it won't hurt if that is obvious from your code.
var results = new List<VendorTaskResult>();
foreach (var order in orders)
{
    var result = new VendorTaskResult();
    try
    {
        result.Response = await order.CallVendorAsync();
    }
    catch (Exception ex)
    {
        result.Exception = ex;
    }
    results.Add(result);
    await Task.Delay(500); // wait half a second before the next call
}
Instead of selecting from orders you could loop over them, start each call (wrapped in a small helper so the try/catch is preserved), put the resulting task into a list with a delay between starts, and then await Task.WhenAll.
It would look something like:
var resultTasks = new List<Task<VendorTaskResult>>(orders.Count);
foreach (var order in orders)
{
    resultTasks.Add(CallVendorWrappedAsync(order));
    await Task.Delay(x); // non-blocking pause before starting the next call
}
var results = await Task.WhenAll(resultTasks);

// Helper that wraps a single call; "Order" stands in for whatever element type orders contains.
async Task<VendorTaskResult> CallVendorWrappedAsync(Order order)
{
    var result = new VendorTaskResult();
    try
    {
        result.Response = await order.CallVendorAsync();
    }
    catch (Exception ex)
    {
        result.Exception = ex;
    }
    return result;
}
If you want to control the number of requests executed simultaneously, you have to use a semaphore.
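For completeness, a rough sketch of the semaphore approach using SemaphoreSlim, keeping the shape of the code from the question (the limit of 4 concurrent calls is an arbitrary assumption; note this caps concurrency rather than calls per second):
var throttle = new SemaphoreSlim(4); // at most 4 vendor calls in flight at once

var resultTasks = orders.Select(async order =>
{
    await throttle.WaitAsync();
    try
    {
        var result = new VendorTaskResult();
        try
        {
            result.Response = await order.CallVendorAsync();
        }
        catch (Exception ex)
        {
            result.Exception = ex;
        }
        return result;
    }
    finally
    {
        throttle.Release();
    }
});

var results = await Task.WhenAll(resultTasks);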
I have something very similar, and it works fine for me. Please note that I call ToArray() after building the LINQ query; that is what actually starts the tasks:
using (HttpClient client = new HttpClient()) {
IEnumerable<Task<string>> _downloads = _group
.Select(async job => {
await Task.Delay(300);
return await client.GetStringAsync(<url with variable job>);
});
Task<string>[] _downloadTasks = _downloads.ToArray();
_pages = await Task.WhenAll(_downloadTasks);
}
Now please note that this will create n number of tasks, all in parallel, and the Task.Delay does essentially nothing to space them out (every task simply waits 300 ms before its own call). If you want to call the pages sequentially (which is what putting a delay between the calls suggests), then this code may be better:
using (HttpClient client = new HttpClient()) {
foreach (string job in _group) {
await Task.Delay(300);
_pages.Add(await client.GetStringAsync(<url with variable job>));
}
}
The download of the pages is still asynchronous (other work can run while a page downloads), but the calls are made sequentially: each one is awaited before the next one starts.
The code can easily be changed to download the pages asynchronously in chunks, for example 20 pages at a time with a pause between chunks, as in this sample:
IEnumerable<string[]> toParse = myData
.Select((v, i) => new { v.code, group = i / 20 })
.GroupBy(x => x.group)
.Select(g => g.Select(x => x.code).ToArray());
using (HttpClient client = new HttpClient()) {
foreach (string[] _group in toParse) {
string[] _pages = null;
IEnumerable<Task<string>> _downloads = _group
.Select(job => {
return client.GetStringAsync(<url with job>);
});
Task<string>[] _downloadTasks = _downloads.ToArray();
_pages = await Task.WhenAll(_downloadTasks);
await Task.Delay(5000);
}
}
All this does is group your pages in chunks of 20, iterate through the chunks, download all pages of the chunk asynchronously, wait 5 seconds, move on to the next chunk.
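As a side note, on .NET 6 or later the grouping boilerplate can be replaced with Enumerable.Chunk, e.g. (a sketch):
foreach (string[] _group in myData.Select(x => x.code).Chunk(20))
{
    // same body as above
}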
I hope that is what you were waiting for :)
The proposed method EmitOverTime is doable, but only by blocking the current thread:
public static IEnumerable<Task<TResult>> EmitOverTime<TResult>(
this IEnumerable<Task<TResult>> tasks, int delay)
{
foreach (var item in tasks)
{
Thread.Sleep(delay); // Delay by blocking
yield return item;
}
}
Usage:
var results = await Task.WhenAll(resultTasks.EmitOverTime(500));
Probably better is to create a variant of Task.WhenAll that accepts a delay argument, and delays asynchronously:
public static async Task<TResult[]> WhenAllWithDelay<TResult>(
IEnumerable<Task<TResult>> tasks, int delay)
{
var tasksList = new List<Task<TResult>>();
foreach (var task in tasks)
{
await Task.Delay(delay).ConfigureAwait(false);
tasksList.Add(task);
}
return await Task.WhenAll(tasksList).ConfigureAwait(false);
}
Usage:
var results = await WhenAllWithDelay(resultTasks, 500);
This design implies that the enumerable of tasks should be enumerated only once. It is easy to forget this during development and start enumerating it again, spawning a new set of tasks. For this reason I propose to make it an OnlyOnce enumerable, as shown in this question.
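An illustrative sketch of such a guard (this is not the exact implementation from the linked question; the OnlyOnceEnumerable name is mine):
public class OnlyOnceEnumerable<T> : IEnumerable<T>
{
    private IEnumerable<T> _source;

    public OnlyOnceEnumerable(IEnumerable<T> source) => _source = source;

    public IEnumerator<T> GetEnumerator()
    {
        // Atomically take the source; a second enumeration finds null and throws.
        var source = Interlocked.Exchange(ref _source, null);
        if (source == null)
            throw new InvalidOperationException("This sequence can only be enumerated once.");
        return source.GetEnumerator();
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() => GetEnumerator();
}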
Update: I should mention why the above methods work, and under what premise. The premise is that the supplied IEnumerable<Task<TResult>> is deferred, in other words non-materialized. At the method's start there are no tasks created yet. The tasks are created one after the other during the enumeration of the enumerable, and the trick is that the enumeration is slow and controlled. The delay inside the loop ensures that the tasks are not created all at once. They are created hot (in other words already started), so at the time the last task has been created some of the first tasks may have already been completed. The materialized list of half-running/half-completed tasks is then passed to Task.WhenAll, that waits for all to complete asynchronously.
I'm not sure if I'm misunderstanding the usage of Task.WhenAny but in the following code only "0" gets printed when it should print "1" and "2" and then "mx.Name" every time a task finishes:
public async void PopulateMxRecords(List<string> nsRecords, int threads)
{
ThreadPool.SetMinThreads(threads, threads);
var resolver = new DnsStubResolver();
var tasks = nsRecords.Select(ns => resolver.ResolveAsync<MxRecord>(ns, RecordType.Mx));
Console.WriteLine("0");
var finished = Task.WhenAny(tasks);
Console.WriteLine("1");
while (mxNsRecords.Count < nsRecords.Count)
{
Console.WriteLine("2");
var task = await finished;
var mxRecords = await task;
foreach(var mx in mxRecords)
Console.WriteLine(mx.Name);
}
}
The DnsStubResolver is part of ARSoft.Tools.Net.Dns. The nsRecords list contains up to 2 million strings.
I'm not sure if I'm misunderstanding the usage of Task.WhenAny
You might be. The pattern you seem to be looking for is interleaving. In the following example, notice the important changes that I have made:
use ToList() to materialize the LINQ query results,
move WhenAny() into the loop,
use Remove(task) as each task completes, and
run the while loop as long as tasks.Count() > 0.
Those are the important changes. The other changes are there to make your listing into a runnable demo of interleaving, the full listing of which is here: https://dotnetfiddle.net/nr1gQ7
public static async Task PopulateMxRecords(List<string> nsRecords)
{
var tasks = nsRecords.Select(ns => ResolveAsync(ns)).ToList();
while (tasks.Count() > 0)
{
var task = await Task.WhenAny(tasks);
tasks.Remove(task);
var mxRecords = await task;
Console.WriteLine(mxRecords);
}
}
This is the code that I'm running:
[TestMethod]
public async Task HelloTest()
{
List<int> hello = new List<int>();
//await Task.WhenAll(Enumerable.Range(0, 2).Select(async x => await Say(hello)));
await Say(hello);
await Say(hello);
}
private static async Task Say(List<int> hello)
{
await Task.Delay(100);
var rep = new Random().Next();
hello.Add(rep);
}
Why is it that running this code, as it is, works as intended and results in two random numbers, but that using the commented code instead always results in two of the exact same number?
So you have several issues here.
First, why are you seeing the same value twice? That's the easy one. When you create a Random instance it is seeded with the current time, but the precision of the time value it uses is rather low. If you create two new Random instances within, say, 16 milliseconds or so (which is a really long time for a computer) you'll see the same values come out of them. That's what's happening for you.
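A quick way to see this on .NET Framework, where the parameterless Random constructor is seeded from the system clock:
// Two Random instances created back to back usually get the same time-based
// seed and therefore produce the same sequence (on .NET Framework; newer
// .NET versions seed the parameterless constructor differently).
var r1 = new Random();
var r2 = new Random();
Console.WriteLine(r1.Next() == r2.Next()); // very likely prints True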
Normally the fix for that is just to share a single Random instance, but the problem there is that your random instances aren't being accessed from the same thread (potentially, assuming you don't have a SynchronizationContext specified), and Random isn't thread safe. You can use something like this to get your random numbers instead:
public static class MyRandom
{
private static object key = new object();
private static Random random = new Random();
public static int Next()
{
lock (key)
{
return random.Next();
}
}
//TODO add other methods for other `Random` methods as needed
}
Use that and it will resolve the immediate issue.
The other problem that you have, although it doesn't seem to be biting you currently, is that you're modifying your List from two different tasks, possibly being executed on different threads. You shouldn't do that. It's bad enough practice to have methods like this in a single-threaded environment (as you're relying on side effects to do your work), but in a multithreaded environment this is very problematic, for more than just conceptual reasons. Instead you should have each task return a value, and then pull all of those values into a collection on the caller's side, like so:
public async Task HelloTest()
{
var data = await Task.WhenAll(Say(), Say());
}
private static async Task<int> Say()
{
await Task.Delay(100);
return MyRandom.Next();
}
As to why the two Say calls are run in parallel, rather than sequentially, that has to do with the fact that in your second code snippet you aren't actually waiting for one task to complete before starting the next.
The method that you pass to Select is the method that spins up the task, and it doesn't wait for that task to finish before starting the next one. The code that you have here:
await Task.WhenAll(Enumerable.Range(0, 2).Select(async x => await Say(hello)));
Is no different than simply having:
await Task.WhenAll(Enumerable.Range(0, 2).Select(x => Say(hello)));
Having an async method that does nothing but await one method call is really no different than just having that one method call. What's happening here is that Select calls Say, which starts the task, then continues on and starts the next task, and then WhenAll waits (asynchronously) for both tasks to finish before continuing on.
Note: This is an answer to the original question (the question has since been changed).
They operate identically. I ran the below Console application and got the results 0 1 for both versions:
class Program
{
static int m_nextNumber = 0;
static void Main(string[] args)
{
var t1 = Version1();
m_nextNumber = 0;
var t2 = Version2();
Task.WaitAll(t1, t2);
Console.ReadKey();
}
static async Task Version1()
{
List<int> hello = new List<int>();
await Say(hello);
await Say(hello);
PrintHello(hello);
}
static async Task Version2()
{
List<int> hello = new List<int>();
await Task.WhenAll(Enumerable.Range(0, 2).Select(async x => await Say(hello)));
PrintHello(hello);
}
static void PrintHello(List<int> hello)
{
foreach (var i in hello)
Console.WriteLine(i);
}
static int GotANumber()
{
return m_nextNumber++;
}
static async Task Say(List<int> hello)
{
var rep = GotANumber();
hello.Add(rep);
}
}
I have an abstract class called VehicleInfoFetcher which returns information asynchronously from a WebClient via this method:
public override async Task<DTOrealtimeinfo> getVehicleInfo(string stopID);
I'd like to combine the results of two separate instances of this class, running each in parallel before combining the results. This is done within a third class, CombinedVehicleInfoFetcher (also itself a subclass of VehicleInfoFetcher)
Here's my code - but I'm not quite convinced that it's running the tasks in parallel; am I doing it right? Could it be optimized?
public class CombinedVehicleInfoFetcher : VehicleInfoFetcher
{
public HashSet<VehicleInfoFetcher> VehicleInfoFetchers { get; set; }
public override async Task<DTOrealtimeinfo> getVehicleInfo(string stopID)
{
// Create a list of parallel tasks to run
var resultTasks = new List<Task<DTOrealtimeinfo>>();
foreach (VehicleInfoFetcher fetcher in VehicleInfoFetchers)
resultTasks.Add(fetcher.getVehicleInfo(stopID));
// run each task
foreach (var task in resultTasks)
await task;
// Wait for all the results to come in
await Task.WhenAll(resultTasks.ToArray());
// combine the results
var allRealtimeResults = new List<DTOrealtimeinfo>( resultTasks.Select(t => t.Result) );
return combineTaskResults(allRealtimeResults);
}
DTOrealtimeinfo combineTaskResults(List<DTOrealtimeinfo> realtimeResults)
{
// ...
return rtInfoOutput;
}
}
Edit
Some very helpful answers, here is a re-written example to aid discussion with usr below:
public override async Task<object> combineResults()
{
// Create a list of parallel tasks to run
var resultTasks = new List<Task<object>>();
foreach (AnotherClass cls in this.OtherClasses)
resultTasks.Add(cls.getResults() );
// Point A - have the cls.getResults() methods been called yet?
// Wait for all the results to come in
await Task.WhenAll(resultTasks.ToArray());
// combine the results
return new List<object>( resultTasks.Select(t => t.Result) );
}
Almost all tasks start out already started. Probably, whatever fetcher.getVehicleInfo returns is already started. So you can remove:
// run each task
foreach (var task in resultTasks)
await task;
Task.WhenAll is faster and has better error behavior (you want all exceptions to be propagated, not just the first you happen to stumble upon).
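For example, awaiting the WhenAll task rethrows only the first exception, but the task's Exception property is an AggregateException that holds all of them; a sketch, reusing resultTasks from the question:
Task<DTOrealtimeinfo[]> whenAll = Task.WhenAll(resultTasks);
try
{
    await whenAll;
}
catch
{
    // whenAll.Exception contains every fetcher's failure, not just the first one
    foreach (var ex in whenAll.Exception.InnerExceptions)
        Console.WriteLine(ex.Message);
}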
Also, await does not start a task. It waits for completion. You have to arrange for the tasks to be started separately, but as I said, almost all tasks are already started when you get them. This is best-practice as well.
To help our discussion in the comments:
Task Test1() { return new Task(() => {}); }
Task Test2() { return Task.Factory.StartNew(() => {}); }
Task Test3() { return new FileStream("").ReadAsync(...); }
Task Test4() { return new TaskCompletionSource<object>().Task; }
Does not "run" when returned from the method. Must be started. Bad practice.
Runs when returned. Does not matter what you do with it, it is already running. Not necessary to add it to a list or store it somewhere.
Already runs like (2).
The notion of running does not make sense here. This task will never complete although it cannot be explicitly started.
I have the following situation (or a basic misunderstanding with the async await mechanism).
Assume you have a set of 1-20 web request calls that each take a long time: findItemsByProduct().
You want to wrap them in an async request that abstracts all these calls into one async call, but I can't seem to do it without using more threads.
If I'm doing:
int total = result.paginationOutput.totalPages;
for (int i = 2; i < total + 1; i++)
{
await Task.Factory.StartNew(() =>
{
result = client.findItemsByProduct(i);
});
newList.AddRange(result.searchResult.item);
}
return newList;
The problem here is that the calls don't run together; rather, they are awaited one by one.
I would like all the calls to run together and then harvest the results.
As pseudocode, I would like the code to run like this:
forEach item {
result = item.makeWebRequest();
}
foreach item {
List.addRange(item.harvestResults);
}
I have no idea how to make the code do that, though.
Ideally, you should add a findItemsByProductAsync that returns a Task<Item[]>. That way, you don't have to create unnecessary tasks using StartNew or Task.Run.
Then your code can look like this:
int total = result.paginationOutput.totalPages;
// Start all downloads; each download is represented by a task.
Task<Item[]>[] tasks = Enumerable.Range(2, total - 1)
.Select(i => client.findItemsByProductAsync(i)).ToArray();
// Wait for all downloads to complete.
Item[][] results = await Task.WhenAll(tasks);
// Flatten the results into a single collection.
return results.SelectMany(x => x).ToArray();
Given your requirements which I see as:
Process n number of non-blocking tasks
Process results after all queries have returned
I would use the CountdownEvent for this e.g.
var results = new ConcurrentBag<ItemType>();
using (var e = new CountdownEvent(result.pagination.totalPages))
{
for (int i = 2; i <= result.pagination.totalPages + 1; i++)
{
int page = i; // copy the loop variable so each task captures its own page number
Task.Factory.StartNew(() => client.findItemsByProduct(page))
.ContinueWith(t => {
foreach (var item in t.Result.searchResult.item)
results.Add(item);
e.Signal(); // signal this request is done
});
}
// Wait for all requests to complete
e.Wait();
}
// Process results
foreach (var item in results)
{
...
}
This particular problem is solved easily enough without even using await. Simply create each of the tasks, put all of the tasks into a list, and then use WhenAll on that list to get a task that represents the completion of all of those tasks:
public static Task<Item[]> Foo()
{
int total = result.paginationOutput.totalPages;
var tasks = new List<Task<Item>>();
for (int i = 2; i < total + 1; i++)
{
int page = i; // copy the loop variable so each task captures its own page number
tasks.Add(Task.Factory.StartNew(() => client.findItemsByProduct(page)));
}
return Task.WhenAll(tasks);
}
Also note you have a major problem in how you use result in your code. You're having each of the different tasks all using the same variable, so there are race conditions as to whether or not it works properly. You could end up adding the same call twice and having one skipped entirely. Instead you should have the call to findItemsByProduct be the result of the task, and use that task's Result.
If you want to use async-await properly you have to declare your functions async, and the functions that call you also have to be async. This continues until you have one synchronous function that starts the async process.
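A minimal sketch of that chain (names are illustrative):
static async Task<string> GetDataAsync()       // lowest-level async call
{
    await Task.Delay(100);
    return "data";
}

static async Task<int> ProcessAsync()          // async because it awaits GetDataAsync
{
    string data = await GetDataAsync();
    return data.Length;
}

static void Main()                             // the one synchronous entry point
{
    int length = ProcessAsync().GetAwaiter().GetResult();
    Console.WriteLine(length);
}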
Your function would look like this:
By the way, you didn't describe what's in the list. I assume the items are objects of type T; in that case result.searchResult.item returns IEnumerable<T>.
private async Task<List<T>> FindItems(...)
{
int total = result.paginationOutput.totalPages;
var newList = new List<T>();
for (int i = 2; i < total + 1; i++)
{
var response = await Task.Factory.StartNew(() =>
{
return client.findItemsByProduct(i);
});
newList.AddRange(response.searchResult.item);
}
return newList;
}
If you do it this way, your function will be asynchronous, but the findItemsByProduct calls will be executed one after another. If you want to execute them simultaneously you should not await each result, but start the next task before the previous one is finished. Once all tasks are started, wait until all are finished. Like this:
private async Task<List<T>> FindItems(...)
{
int total = result.paginationOutput.totalPages;
var tasks = new List<Task<IEnumerable<T>>>();
// start all tasks. don't wait for the result yet
for (int i = 2; i < total + 1; i++)
{
int page = i; // copy the loop variable so each task captures its own page number
Task<IEnumerable<T>> task = Task.Factory.StartNew<IEnumerable<T>>(() =>
{
return client.findItemsByProduct(page).searchResult.item;
});
tasks.Add(task);
}
// now that all tasks are started, wait until all are finished
await Task.WhenAll(tasks);
// the result of each task is now in task.Result, and its type is IEnumerable<T>
// put all results into one big list using some linq:
return tasks.SelectMany(task => task.Result).ToList();
// or, if you're not familiar with linq yet, use a foreach instead of the line above:
// var newList = new List<T>();
// foreach (var task in tasks)
// {
//     newList.AddRange(task.Result);
// }
// return newList;
}