Is there a proper pattern for multiple ContinueWith methods - c#

In the docs for TPL I found this line:
Invoke multiple continuations from the same antecedent
But this isn't explained any further. I naively assumed you could chain ContinueWith calls in a pattern-matching-like manner until the right TaskContinuationOptions is hit.
TaskThatReturnsString()
    .ContinueWith((s) => Console.Out.WriteLine(s.Result), TaskContinuationOptions.OnlyOnRanToCompletion)
    .ContinueWith((f) => Console.Out.WriteLine(f.Exception.Message), TaskContinuationOptions.OnlyOnFaulted)
    .ContinueWith((f) => Console.Out.WriteLine("Cancelled"), TaskContinuationOptions.OnlyOnCanceled)
    .Wait();
But this doesn't work the way I hoped, for at least two reasons.
The continuations are chained, so the second ContinueWith gets the result from the first one, which is itself a new Task (the ContinueWith task), not the antecedent. I realize the string could be passed onwards, but wouldn't that be a new task with the other information lost?
Since the first continuation's condition is not met, that continuation is simply cancelled, which means the conditions on the later continuations are never met and the exception is lost.
So what do they mean in the docs when they say multiple continuations from the same antecedent?
Is there a proper pattern for this, or do we just have to wrap the calls in try/catch blocks?
EDIT
So I guess this is what I was hoping I could do; note this is a simplified example.
public void ProccessAllTheThings()
{
    var theThings = util.GetAllTheThings();
    var tasks = new List<Task>();
    foreach (var thing in theThings)
    {
        var task = util.Process(thing)
            .ContinueWith((t) => Console.Out.WriteLine($"Finished processing {thing.ThingId} with result {t.Result}"), TaskContinuationOptions.OnlyOnRanToCompletion)
            .ContinueWith((t) => Console.Out.WriteLine($"Error on processing {thing.ThingId} with error {t.Exception.Message}"), TaskContinuationOptions.OnlyOnFaulted);
        tasks.Add(task);
    }
    Task.WaitAll(tasks.ToArray());
}
Since this wasn't possible, I was thinking I would have to wrap each task call in a try/catch inside the loop so that one failure wouldn't stop the whole process, without waiting on the task right there. I wasn't sure what the correct way was.
Sometimes a solution is just staring you in the face; this would work, wouldn't it?
public void ProccessAllTheThings()
{
    var theThings = util.GetAllTheThings();
    var tasks = new List<Task>();
    foreach (var thing in theThings)
    {
        var task = util.Process(thing)
            .ContinueWith((t) =>
            {
                if (t.Status == TaskStatus.RanToCompletion)
                {
                    Console.Out.WriteLine($"Finished processing {thing.ThingId} with result {t.Result}");
                }
                else
                {
                    Console.Out.WriteLine($"Error on processing {thing.ThingId} - {t.Exception.Message}");
                }
            });
        tasks.Add(task);
    }
    Task.WaitAll(tasks.ToArray());
}

What you did was create a sequential chain of multiple tasks.
What you need to do instead is attach all of your continuation tasks to the first one:
var firstTask = TaskThatReturnsString();
var t1 = firstTask.ContinueWith (…);
var t2 = firstTask.ContinueWith (…);
var t3 = firstTask.ContinueWith (…);
Then you need to wait for all the continuation tasks:
Task.WaitAll (t1, t2, t3);
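Applied to the edited example in the question, that idea might look something like this sketch (ProccessAllTheThings, util.Process and thing.ThingId come from the question; the try/catch around the wait is only there because waiting on conditional continuations that end up cancelled throws):
public void ProccessAllTheThings()
{
    var theThings = util.GetAllTheThings();
    var tasks = new List<Task>();
    foreach (var thing in theThings)
    {
        // Keep a reference to the antecedent so both continuations attach to it,
        // not to each other.
        var processTask = util.Process(thing);
        var onSuccess = processTask.ContinueWith(
            t => Console.Out.WriteLine($"Finished processing {thing.ThingId} with result {t.Result}"),
            TaskContinuationOptions.OnlyOnRanToCompletion);
        var onFailure = processTask.ContinueWith(
            t => Console.Out.WriteLine($"Error on processing {thing.ThingId} with error {t.Exception.Message}"),
            TaskContinuationOptions.OnlyOnFaulted);
        tasks.Add(onSuccess);
        tasks.Add(onFailure);
    }
    // Exactly one continuation of each pair runs; the other is cancelled,
    // and Task.WaitAll surfaces those cancellations as an AggregateException.
    try
    {
        Task.WaitAll(tasks.ToArray());
    }
    catch (AggregateException)
    {
        // Cancelled continuations show up here as TaskCanceledException; ignore them.
    }
}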

Related

Stuck at Task.WaitAll(tasks.ToArray()) while using Task.Start to trigger the tasks

We had something like below
List<string> uncheckItems = new List<string>();
for (int i = 0; i < 100; i++)
{
    uncheckItems.Add($"item {i + 1}");
}

var tasks = uncheckItems.Select(item =>
    new Task(async () => await ProcessItem(item))
);

// Do some preparations
foreach (var task in tasks)
{
    task.Start();
}

Task.WaitAll(tasks.ToArray());
Console.WriteLine("=====================================================All finished");
It seems to make sense, but the program is never able to reach the "All finished" line.
And if I adjust the workflow to run tasks immediately like remove the task.Start() loop and change to
var tasks = uncheckItems.Select(async item =>
await ProcessItem(item)
);
Then it works.
However, I wonder
Why does it get stuck?
Is there any way we can keep the workflow (create the tasks without triggering them directly and start them later on) and still be able to utilize WaitAll()?
The reason is lazy enumeration evaluation: the Select query is re-evaluated every time it is enumerated, so the tasks you start in the foreach loop are different instances from the ones you pass to Task.WaitAll. This can be fixed, for example, like this:
var tasks = uncheckItems.Select(item =>
        new Task(async () => await ProcessItem(item))
    )
    .ToArray();
Though it will not achieve your goal (as I understand it) of waiting for every ProcessItem to finish, because the Task constructor only sees the synchronous part of the async delegate. You could do something like new Task(() => ProcessItem(item).GetAwaiter().GetResult()), but I think it would be better to change your approach: for example, make ProcessItem return a "cold" task, or use your second snippet and move the task creation to the point where the tasks need to be started.
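If keeping the create-now/start-later workflow really matters, one possible sketch (not from the original answer, and assuming ProcessItem returns a Task) is to wrap the call in a cold Task<Task> and wait on the unwrapped inner tasks:
// Cold outer tasks; nothing runs until Start() is called.
var outerTasks = uncheckItems
    .Select(item => new Task<Task>(() => ProcessItem(item)))
    .ToArray();    // materialize so Start and WaitAll see the same instances

// Do some preparations ...
foreach (var outer in outerTasks)
{
    outer.Start();
}

// Unwrap() yields a proxy task that completes only when the inner ProcessItem task completes.
Task.WaitAll(outerTasks.Select(outer => outer.Unwrap()).ToArray());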
You would need to be close to a world expert in Task to be using the constructor. The documentation warns against it:
This constructor should only be used in advanced scenarios where it is required that the creation and starting of the task is separated.
Rather than calling this constructor, the most common way to instantiate a Task object and launch a task is by calling the static Task.Run(Action) or TaskFactory.StartNew(Action) method.
If a task with no action is needed just for the consumer of an API to have something to await, a TaskCompletionSource should be used.
The Task constructor produces a non-started Task that will only start when Task.Start() is invoked, as you discovered.
The Task constructor also receives an Action (or Action<object>), so the return value of the delegate is ignored. That means that, once started, the task will complete as soon as async () => await ProcessItem(item) hits its first await and yields, not when ProcessItem actually finishes.
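A minimal sketch of that pitfall, with DoWorkAsync standing in for ProcessItem:
static async Task DoWorkAsync(string name)
{
    await Task.Delay(1000);    // simulated asynchronous work
    Console.WriteLine($"{name} done");
}

static void Main()
{
    // The async lambda is converted to an Action, i.e. "async void":
    // the outer task completes at the first await, not when DoWorkAsync finishes.
    var outer = new Task(async () => await DoWorkAsync("item 1"));
    outer.Start();
    outer.Wait();    // returns almost immediately
    Console.WriteLine("Outer task finished; the inner work is probably still running");
    Console.ReadLine();    // keep the process alive long enough to see "item 1 done" arrive later
}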
What you need is:
await Task.WhenAll(Enumerable.Range(0, 100).Select(i => ProcessItem($"item {i + 1}")));
Or, if you really have to block:
Task
    .WhenAll(Enumerable.Range(0, 100).Select(i => ProcessItem($"item {i + 1}")))
    .GetAwaiter().GetResult();
Get the select out of there.
List<string> uncheckItems = new List<string>();
for (int i = 0; i < 100; i++)
{
uncheckItems.Add($"item {i + 1}");
}
var tasks = new List<Task>();
foreach (var item in uncheckItems)
{
    tasks.Add(Task.Run(() => ProcessItem(item)));
}
Task.WaitAll(tasks.ToArray());
Console.WriteLine("========All finished");
https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.waitall?view=net-6.0

Multiple Async Calls with Pause Between Calls

I have an IEnumerable<Task>, where each Task will call the same endpoint. However, the endpoint can only handle so many calls per second. How can I put, say, a half second delay between each call?
I have tried adding Task.Delay(), but of course awaiting them simply means that the app waits a half second before sending all the calls at once.
Here is a code snippet:
var resultTasks = orders
.Select(async task =>
{
var result = new VendorTaskResult();
try
{
result.Response = await result.CallVendorAsync();
}
catch(Exception ex)
{
result.Exception = ex;
}
return result;
} );
var results = Task.WhenAll(resultTasks);
I feel like I should do something like
Task.WhenAll(resultTasks.EmitOverTime(500));
... but how exactly do I do that?
What you describe in your question is, in other words, rate limiting. You'd like to apply a rate limiting policy to your client, because the API you use enforces such a policy on the server to protect itself from abuse.
While you could implement rate limiting yourself, I'd recommend you go with some well-established solution. Rate Limiter from David Desmaisons was the one that I picked at random and I instantly liked it. It had solid documentation, good test coverage and was easy to use. It is also available as a NuGet package.
Check out the simple snippet below that demonstrates running semi-overlapping tasks in sequence, deferring each task's start until half a second after the immediately preceding task started. Each task lasts at least 750 ms.
using ComposableAsync;
using RateLimiter;
using System;
using System.Threading.Tasks;

namespace RateLimiterTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Log("Starting tasks ...");
            var constraint = TimeLimiter.GetFromMaxCountByInterval(1, TimeSpan.FromSeconds(0.5));
            var tasks = new[]
            {
                DoWorkAsync("Task1", constraint),
                DoWorkAsync("Task2", constraint),
                DoWorkAsync("Task3", constraint),
                DoWorkAsync("Task4", constraint)
            };
            Task.WaitAll(tasks);
            Log("All tasks finished.");
            Console.ReadLine();
        }

        static void Log(string message)
        {
            Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff ") + message);
        }

        static async Task DoWorkAsync(string name, IDispatcher constraint)
        {
            await constraint;
            Log(name + " started");
            await Task.Delay(750);
            Log(name + " finished");
        }
    }
}
Sample output:
10:03:27.121 Starting tasks ...
10:03:27.154 Task1 started
10:03:27.658 Task2 started
10:03:27.911 Task1 finished
10:03:28.160 Task3 started
10:03:28.410 Task2 finished
10:03:28.680 Task4 started
10:03:28.913 Task3 finished
10:03:29.443 Task4 finished
10:03:29.443 All tasks finished.
If you change the constraint to allow maximum two tasks per second (var constraint = TimeLimiter.GetFromMaxCountByInterval(2, TimeSpan.FromSeconds(1));), which is not the same as one per half a second, then the output could be like:
10:06:03.237 Starting tasks ...
10:06:03.264 Task1 started
10:06:03.268 Task2 started
10:06:04.026 Task2 finished
10:06:04.031 Task1 finished
10:06:04.275 Task3 started
10:06:04.276 Task4 started
10:06:05.032 Task4 finished
10:06:05.032 Task3 finished
10:06:05.033 All tasks finished.
Note that the current version of Rate Limiter targets .NET Framework 4.7.2+ or .NET Standard 2.0+.
This is just a thought, but another approach could be to create a queue and add another thread that polls the queue for calls that need to go out to your endpoint.
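A rough sketch of that idea, assuming a BlockingCollection<Func<Task>> work queue and a hypothetical CallVendorAsync(order) helper; it is only meant to illustrate the shape, not a production design:
// requires System.Collections.Concurrent and System.Threading
var pending = new BlockingCollection<Func<Task>>();

// Producer: enqueue one delegate per order.
foreach (var order in orders)
{
    pending.Add(() => CallVendorAsync(order));    // hypothetical call
}
pending.CompleteAdding();

// Consumer thread: drain the queue, pausing between calls.
var consumer = new Thread(() =>
{
    foreach (var call in pending.GetConsumingEnumerable())
    {
        call();               // fire the request (the returned task could also be tracked/awaited)
        Thread.Sleep(500);    // half-second gap between calls
    }
});
consumer.Start();
consumer.Join();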
Have you considered just turning that into a foreach loop with a Task.Delay call? You seem to want to explicitly call them sequentially; it won't hurt if that is obvious from your code.
var results = new List<VendorTaskResult>();
foreach (var order in orders)
{
    var result = new VendorTaskResult();
    try
    {
        result.Response = await result.CallVendorAsync();
    }
    catch (Exception ex)
    {
        result.Exception = ex;
    }
    results.Add(result);
    await Task.Delay(500); // the pause between calls
}
Instead of selecting from orders you could loop over them, start each call inside the loop, put the resulting task into a list, and then call Task.WhenAll.
It would look something like:
var resultTasks = new List<Task<VendorTaskResult>>(orders.Count);
foreach (var order in orders)
{
    resultTasks.Add(Task.Run(async () =>
    {
        var result = new VendorTaskResult();
        try
        {
            result.Response = await result.CallVendorAsync();
        }
        catch (Exception ex)
        {
            result.Exception = ex;
        }
        return result;
    }));
    Thread.Sleep(x); // or await Task.Delay(x) to avoid blocking the caller
}
var results = await Task.WhenAll(resultTasks);
If you want to control the number of requests executed simultaneously, you have to use a semaphore.
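For example, a possible sketch with SemaphoreSlim limiting the vendor calls to two concurrent requests (the names follow the question's snippet):
var throttler = new SemaphoreSlim(initialCount: 2); // at most 2 concurrent calls

var resultTasks = orders.Select(async order =>
{
    await throttler.WaitAsync();
    try
    {
        var result = new VendorTaskResult();
        try
        {
            result.Response = await result.CallVendorAsync();
        }
        catch (Exception ex)
        {
            result.Exception = ex;
        }
        return result;
    }
    finally
    {
        throttler.Release();
    }
});

var results = await Task.WhenAll(resultTasks);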
I have something very similar, and it works fine for me. Please note that I call ToArray() after the LINQ query; that is what triggers the tasks:
using (HttpClient client = new HttpClient()) {
    IEnumerable<Task<string>> _downloads = _group
        .Select(async job => {
            await Task.Delay(300);
            return await client.GetStringAsync(<url with variable job>);
        });
    Task<string>[] _downloadTasks = _downloads.ToArray();
    _pages = await Task.WhenAll(_downloadTasks);
}
Now please note that this will create n tasks, all in parallel, and the Task.Delay does not space the calls out at all: every task just waits 300 ms and then they all fire together. If you want to call the pages sequentially (as it sounds, given that you want a delay between the calls), then this code may be better:
using (HttpClient client = new HttpClient()) {
foreach (string job in _group) {
await Task.Delay(300);
_pages.Add(await client.GetStringAsync(<url with variable job>));
}
}
The download of the pages is still asynchronous (other work can run while a page is downloading), but the calls are made sequentially, ensuring that one download finishes before the next one starts.
The code can easily be changed to request the pages asynchronously in chunks, for example 20 pages at a time with a pause in between, as in this sample:
IEnumerable<string[]> toParse = myData
.Select((v, i) => new { v.code, group = i / 20 })
.GroupBy(x => x.group)
.Select(g => g.Select(x => x.code).ToArray());
using (HttpClient client = new HttpClient()) {
foreach (string[] _group in toParse) {
string[] _pages = null;
IEnumerable<Task<string>> _downloads = _group
.Select(job => {
return client.GetStringAsync(<url with job>);
});
Task<string>[] _downloadTasks = _downloads.ToArray();
_pages = await Task.WhenAll(_downloadTasks);
await Task.Delay(5000);
}
}
All this does is group your pages in chunks of 20, iterate through the chunks, download all pages of the chunk asynchronously, wait 5 seconds, move on to the next chunk.
I hope that is what you were waiting for :)
The proposed method EmitOverTime is doable, but only by blocking the current thread:
public static IEnumerable<Task<TResult>> EmitOverTime<TResult>(
this IEnumerable<Task<TResult>> tasks, int delay)
{
foreach (var item in tasks)
{
Thread.Sleep(delay); // Delay by blocking
yield return item;
}
}
Usage:
var results = await Task.WhenAll(resultTasks.EmitOverTime(500));
A probably better option is to create a variant of Task.WhenAll that accepts a delay argument and delays asynchronously:
public static async Task<TResult[]> WhenAllWithDelay<TResult>(
IEnumerable<Task<TResult>> tasks, int delay)
{
var tasksList = new List<Task<TResult>>();
foreach (var task in tasks)
{
await Task.Delay(delay).ConfigureAwait(false);
tasksList.Add(task);
}
return await Task.WhenAll(tasksList).ConfigureAwait(false);
}
Usage:
var results = await WhenAllWithDelay(resultTasks, 500);
This design implies that the enumerable of tasks should be enumerated only once. It is easy to forget this during development, and start enumerating it again, spawning a new set of tasks. For this reason I propose to make it an OnlyOnce enumerable, as it is shown in this question.
Update: I should mention why the above methods work, and under what premise. The premise is that the supplied IEnumerable<Task<TResult>> is deferred, in other words non-materialized. At the method's start there are no tasks created yet. The tasks are created one after the other during the enumeration of the enumerable, and the trick is that the enumeration is slow and controlled. The delay inside the loop ensures that the tasks are not created all at once. They are created hot (in other words already started), so at the time the last task has been created some of the first tasks may have already been completed. The materialized list of half-running/half-completed tasks is then passed to Task.WhenAll, that waits for all to complete asynchronously.

Task WhenAll usage

I have a bunch of tasks defined as:
Task t1 = new Task( () => { /* Do Something */ } );
Task t2 = new Task( () => { /* Do Something */ } );
Task t3 = new Task( () => { /* Do Something */ } );
Task t4 = new Task( () => { /* Do Something */ } );
List<Task> allTasks = new List<Task>();
allTasks.Add(t1);
allTasks.Add(t2); etc.
And then finally:
Task.WhenAll(allTasks).ContinueWith((t) =>
{
MyBlockingCollection.CompleteAdding();
});
foreach (Task t in allTasks)
{
t.Start();
}
My questions regarding the above code:
Is this the correct way to make use of Tasks?
Does Task.WhenAll() start the tasks by itself, or do we have to explicitly start them? If so, do we FIRST start them and then call Task.WhenAll()?
And I need to do exception handling for these tasks as well. Could you please suggest the proper way to handle exceptions within tasks? Ideally I want the task to write out some diagnostic information to a text document if an exception happens.
I'm kind of new to the Tasks world, thanks for your help!
Does Task.WhenAll() start the tasks by itself or do we have to
explicitly start them? If so, do we FIRST start and then do
Task.WhenAll()?
You need to start each task individually first, and then wait for them.
And I need to do exception handling for these tasks as well...
Every task is an independent execution unit, so exception handling happens inside its scope. What you can do is return the exception from the task as a result; the main thread then reads the result and behaves appropriately.
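One possible sketch of that idea (DoSomething is a placeholder for the real work, the diagnostics file name is made up, and System.IO/System.Linq are assumed):
// Each task catches its own exception and reports it as part of its result.
Task<Exception> t1 = Task.Run<Exception>(() =>
{
    try
    {
        DoSomething();    // hypothetical work
        return null;      // success
    }
    catch (Exception ex)
    {
        File.AppendAllText("diagnostics.txt", ex.ToString() + Environment.NewLine);
        return ex;        // hand the failure back to the caller
    }
});

// The main thread inspects the outcomes without WhenAll/WaitAll throwing.
var failures = Task.WhenAll(t1 /*, t2, ... */).Result.Where(ex => ex != null).ToList();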
These Task instances will not run if you simply call the constructor. As the Tasks have not been started, WhenAll will never return and you will deadlock.
Use System.Threading.Tasks.Task.Run instead.
Task t1 = Task.Run(() => { /* Do Something */ });
Task t2 = Task.Run(() => { /* Do Something */ });
...
Remove the loop in which you start the tasks.
Task.Run Method (Action) .NET Framework 4.5
Queues the specified work to run on the ThreadPool and returns a task
handle for that work.
From MSDN
Further reading here: Task Parallelism (Task Parallel Library)
Regarding exception handling, you can use the Task parameter to access all results, even exceptions, from within the continuation:
Task.WhenAll(allTasks).ContinueWith((t) =>
{
    if (t.Status == TaskStatus.RanToCompletion)
    {
        MyBlockingCollection.CompleteAdding();
    }
    else
    {
        Console.WriteLine(t.Exception);
    }
});
More on that here: Exception Handling (Task Parallel Library)

Running multiple async tasks and waiting for them all to complete

I need to run multiple async tasks in a console application, and wait for them all to complete before further processing.
There are many articles out there, but I seem to get more confused the more I read. I've read up on and understand the basic principles of the Task library, but I'm clearly missing a link somewhere.
I understand that it's possible to chain tasks so that they start after another completes (which is pretty much the scenario for all the articles I've read), but I want all my Tasks running at the same time, and I want to know once they're all completed.
What's the simplest implementation for a scenario like this?
Neither of the other answers mentioned the awaitable Task.WhenAll:
var task1 = DoWorkAsync();
var task2 = DoMoreWorkAsync();
await Task.WhenAll(task1, task2);
The main difference between Task.WaitAll and Task.WhenAll is that the former will block (similar to using Wait on a single task) while the latter will not and can be awaited, yielding control back to the caller until all tasks finish.
Moreover, the exception handling differs:
Task.WaitAll:
At least one of the Task instances was canceled -or- an exception was thrown during the execution of at least one of the Task instances. If a task was canceled, the AggregateException contains an OperationCanceledException in its InnerExceptions collection.
Task.WhenAll:
If any of the supplied tasks completes in a faulted state, the returned task will also complete in a Faulted state, where its exceptions will contain the aggregation of the set of unwrapped exceptions from each of the supplied tasks.
If none of the supplied tasks faulted but at least one of them was canceled, the returned task will end in the Canceled state.
If none of the tasks faulted and none of the tasks were canceled, the resulting task will end in the RanToCompletion state.
If the supplied array/enumerable contains no tasks, the returned task will immediately transition to a RanToCompletion state before it's returned to the caller.
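A small sketch of that difference in exception behaviour, using two deliberately failing placeholder tasks:
static async Task FailAsync(string message)
{
    await Task.Delay(100);    // simulate some work
    throw new InvalidOperationException(message);
}

Task bad1 = FailAsync("first");
Task bad2 = FailAsync("second");

try
{
    Task.WaitAll(bad1, bad2);    // blocks, then throws an AggregateException
}
catch (AggregateException ae)
{
    Console.WriteLine(ae.InnerExceptions.Count); // 2 - both failures are wrapped
}

try
{
    await Task.WhenAll(bad1, bad2);    // awaitable; await unwraps the exception
}
catch (InvalidOperationException ex)
{
    // Only the first exception is rethrown; the rest stay on the WhenAll task's Exception property.
    Console.WriteLine(ex.Message);
}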
You could create many tasks like:
List<Task> TaskList = new List<Task>();
foreach(...)
{
var LastTask = new Task(SomeFunction);
LastTask.Start();
TaskList.Add(LastTask);
}
Task.WaitAll(TaskList.ToArray());
You can use WhenAll, which returns an awaitable Task, or WaitAll, which has no return type and blocks further code execution (similar to Thread.Sleep) until all tasks are completed, canceled or faulted.
A comparison of WhenAll and WaitAll:

Any of the supplied tasks completes in a faulted state:
WhenAll: a task in the Faulted state is returned. Its exceptions will contain the aggregation of the set of unwrapped exceptions from each of the supplied tasks.
WaitAll: an AggregateException is thrown.

None of the supplied tasks faulted but at least one of them was canceled:
WhenAll: the returned task will end in the TaskStatus.Canceled state.
WaitAll: an AggregateException is thrown which contains an OperationCanceledException in its InnerExceptions collection.

An empty list was given:
WhenAll: the returned task will immediately transition to the TaskStatus.RanToCompletion state before it's returned to the caller.
WaitAll: an ArgumentException is thrown.

Blocking:
WhenAll: doesn't block the current thread.
WaitAll: blocks the current thread.
Example
var tasks = new Task[] {
TaskOperationOne(),
TaskOperationTwo()
};
Task.WaitAll(tasks);
// or
await Task.WhenAll(tasks);
If you want to run the tasks in a particular/specific order you can get inspiration from this answer.
The best option I've seen is the following extension method:
public static Task ForEachAsync<T>(this IEnumerable<T> sequence, Func<T, Task> action) {
return Task.WhenAll(sequence.Select(action));
}
Call it like this:
await sequence.ForEachAsync(item => item.SomethingAsync(blah));
Or with an async lambda:
await sequence.ForEachAsync(async item => {
var more = await GetMoreAsync(item);
await more.FrobbleAsync();
});
Yet another answer... but I usually find myself in a case where I need to load data simultaneously and put it into variables, like:
var cats = new List<Cat>();
var dog = new Dog();
var loadDataTasks = new Task[]
{
Task.Run(async () => cats = await LoadCatsAsync()),
Task.Run(async () => dog = await LoadDogAsync())
};
try
{
await Task.WhenAll(loadDataTasks);
}
catch (Exception ex)
{
// handle exception
}
Do you want to chain the Tasks, or can they be invoked in a parallel manner?
For chaining
Just do something like
Task.Run(...).ContinueWith(...).ContinueWith(...).ContinueWith(...);
Task.Factory.StartNew(...).ContinueWith(...).ContinueWith(...).ContinueWith(...);
and don't forget to check the previous Task instance in each ContinueWith as it might be faulted.
For the parallel manner
The simplest method I came across: Parallel.Invoke
Otherwise there's Task.WaitAll or you can even use WaitHandles for doing a countdown to zero actions left (wait, there's a new class: CountdownEvent), or ...
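For completeness, a minimal Parallel.Invoke sketch (DoWorkA/B/C are placeholders for synchronous work):
// Runs the delegates in parallel and blocks until all of them have finished.
// Note: suited to CPU-bound/synchronous work, not async methods.
Parallel.Invoke(
    () => DoWorkA(),
    () => DoWorkB(),
    () => DoWorkC());
Console.WriteLine("All three finished");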
This is how I do it with an array of Func<Task>:
var tasks = new Func<Task>[]
{
() => myAsyncWork1(),
() => myAsyncWork2(),
() => myAsyncWork3()
};
await Task.WhenAll(tasks.Select(task => task()).ToArray()); //Async
Task.WaitAll(tasks.Select(task => task()).ToArray()); //Or use WaitAll for Sync
I prepared a piece of code to show you how to use tasks for some of these scenarios.
// method to run tasks in parallel
public async Task RunMultipleTaskParallel(Task[] tasks) {
    await Task.WhenAll(tasks);
}

// method to run tasks one by one
public async Task RunMultipleTaskOneByOne(Task[] tasks)
{
    for (int i = 0; i < tasks.Length; i++)
        await tasks[i];
}

// method to run at most i tasks in parallel (in batches of i)
public async Task RunMultipleTaskParallel(Task[] tasks, int i)
{
    var countTask = tasks.Length;
    var remainTasks = 0;
    do
    {
        int toTake = (countTask < i) ? countTask : i;
        var limitedTasks = tasks.Skip(remainTasks)
                                .Take(toTake);
        remainTasks += toTake;
        await RunMultipleTaskParallel(limitedTasks.ToArray());
    } while (remainTasks < countTask);
}
There should be a more succinct solution than the accepted answer. It shouldn't take three steps to run multiple tasks simultaneously and get their results.
Create tasks
await Task.WhenAll(tasks)
Get task results (e.g., task1.Result)
Here's a method that cuts this down to two steps:
public async Task<Tuple<T1, T2>> WhenAllGeneric<T1, T2>(Task<T1> task1, Task<T2> task2)
{
await Task.WhenAll(task1, task2);
return Tuple.Create(task1.Result, task2.Result);
}
You can use it like this:
var taskResults = await WhenAllGeneric(DoWorkAsync(), DoMoreWorkAsync());
var DoWorkResult = taskResults.Item1;
var DoMoreWorkResult = taskResults.Item2;
This removes the need for the temporary task variables. The problem with this approach is that while it works for two tasks, you'd need to update it for three tasks, or any other number of tasks. Also, it doesn't work well if one of the tasks doesn't return anything. Really, the .NET library should provide something that can do this.
If you're using the async/await pattern, you can run several tasks in parallel like this:
public async Task DoSeveralThings()
{
    // Start all the tasks
    Task first = DoFirstThingAsync();
    Task second = DoSecondThingAsync();
    // Then wait for them to complete
    await first;
    await second;
}

Automatic repeat of one task until another is finished (TAP)

I have two operations: a long-running OperationA and a much quicker OperationB. I was running them in parallel using TAP and returning results when they both finish:
var taskA = Task.Factory.StartNew(()=>OperationA());
var taskB = Task.Factory.StartNew(()=>OperationB());
var tasks = new Task[] { taskA, taskB };
Task.WaitAll(tasks);
// processing taskA.Result, taskB.Result
No magic here. Now what I want to do is repeat OperationB indefinitely each time it finishes, as long as OperationA is still running. So the whole procedure finishes when OperationA is finished and the last pass of OperationB is finished. I'm looking for some sort of effective pattern for doing that which does not involve polling OperationA's Status in a while loop, if that's possible. Looking toward improving the WaitAllOneByOne pattern proposed in this Pluralsight course, or something similar.
Try this:
// Get cancellation support.
CancellationTokenSource source = new CancellationTokenSource();
CancellationToken token = source.Token;

// Start off A and set a continuation to cancel B when A has finished.
var taskA = Task.Factory.StartNew(() => OperationA());
Task contA = taskA.ContinueWith(ant => source.Cancel());

// Set off B and perform your loop inside the method. Cancellation will be thrown when
// A has completed.
var taskB = Task.Factory.StartNew(() => OperationB(token), token);
Task contB = taskB.ContinueWith(ant =>
{
    switch (ant.Status)
    {
        // Handle any exceptions to prevent UnobservedTaskException.
        case TaskStatus.RanToCompletion:
            // Do stuff.
            break;
        case TaskStatus.Canceled:
            // You know TaskA is finished.
            break;
        case TaskStatus.Faulted:
            // Something bad.
            break;
    }
});
Then, in the OperationB method, you can perform your loop and include a cancellation check that fires upon TaskA's completion:
private void OperationB(CancellationToken token)
{
    foreach (var v in collection) // whatever OperationB iterates over
    {
        ...
        token.ThrowIfCancellationRequested(); // This must be handled; it surfaces as an AggregateException.
    }
}
Note, instead of complicating things with a cancellation, you can just set a bool from within the continuation of TaskA and check for this in TaskB's loop; this will avoid any faffing about with cancellations.
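A rough sketch of that simpler flag-based variant (the field and the RunBoth wrapper are made up for illustration; the flag is volatile so OperationB's loop sees the write from the continuation):
private volatile bool operationAFinished;   // volatile so the loop observes the update

private void RunBoth()
{
    var taskA = Task.Factory.StartNew(() => OperationA())
                            .ContinueWith(ant => { operationAFinished = true; });
    var taskB = Task.Factory.StartNew(() => OperationB());
    Task.WaitAll(taskA, taskB);
}

private void OperationB()
{
    while (!operationAFinished)
    {
        // one more pass of the repeating work
    }
}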
I hope this helps
Took your approach as a basis and adapted it a bit:
var source = new CancellationTokenSource();
var token = source.Token;

var taskA = Task.Factory.StartNew(
    () => OperationA()
);
var taskAFinished = taskA.ContinueWith(antecedent =>
{
    source.Cancel();
    return antecedent.Result;
});

var taskB = Task.Factory.StartNew(
    () => OperationB(token), token
);
var taskBFinished = taskB.ContinueWith(antecedent =>
{
    switch (antecedent.Status)
    {
        case TaskStatus.RanToCompletion:
        case TaskStatus.Canceled:
            try
            {
                return antecedent.Result;
            }
            catch (AggregateException)
            {
                // Operation was canceled before it started if OperationA is short.
                return null;
            }
        case TaskStatus.Faulted:
            return null;
    }
    return null;
});
Made two continuations that return results for the respective operations so I could wait for them both to finish (I tried to do that with only the second one and it didn't work).
var tasks = new Task[] { taskAFinished, taskBFinished };
Task.WaitAll(tasks);
The first one just passes the antecedent task's Result on; the second takes the aggregate results available at that point in OperationB (both the RanToCompletion and Canceled statuses are considered a correct end of the process). OperationB now looks like this:
public static List<Result> OperationB(CancellationToken token)
{
    var resultsList = new List<Result>();
    while (true)
    {
        foreach (var op in operations)
        {
            resultsList.Add(op.GetResult());
        }
        if (token.IsCancellationRequested)
        {
            return resultsList;
        }
    }
}
This changes the logic a bit: all loops inside OperationB are now considered a single task, but this is easier than keeping them atomic and writing some sort of coordination primitive that gathers results from each run. Since I don't really care which loop produced which results, this seems to be a decent solution. I may improve it to a more flexible implementation later if needed (what I was actually looking for is to chain multiple operations recursively: OperationB itself may have smaller repeating OperationC's inside with the same behavior, OperationC multiple OperationD's that run while C is active, and so on).
edit
Added exception handling in taskBFinished for the case when OperationA is quick and cancellation is issued before OperationB has even started.
