How could I use DataflowBlockOptions.CancellationToken?
If I create an instance of BufferBlock like this:
var queue = new BufferBlock<int>(new DataflowBlockOptions { BoundedCapacity = 5, CancellationToken = _cts.Token });
then, given consumer/producer methods that use the queue, how can I use its CancellationToken to handle cancellation?
E.g. in the producer method, how can I check the cancellation token? I haven't found any property to access the token.
EDIT:
Sample of produce/consume methods:
private static async Task Produce(BufferBlock<int> queue, IEnumerable<int> values)
{
    foreach (var value in values)
    {
        await queue.SendAsync(value);
    }
    queue.Complete();
}
private static async Task<IEnumerable<int>> Consume(BufferBlock<int> queue)
{
    var ret = new List<int>();
    while (await queue.OutputAvailableAsync())
    {
        ret.Add(await queue.ReceiveAsync());
    }
    return ret;
}
Code to call it:
var queue = new BufferBlock<int>(new DataflowBlockOptions { BoundedCapacity = 5, CancellationToken = _cts.Token });

// Start the producer and consumer.
var values = Enumerable.Range(0, 10);
var producer = Produce(queue, values);
var consumer = Consume(queue);

// Wait for everything to complete.
await Task.WhenAll(producer, consumer, queue.Completion);
EDIT2:
If I call _cts.Cancel(), the Produce method does not cancel and finishes without interruption.
If you want to cancel the produce process, you should pass the token into it, like this:
private static async Task Produce(
    BufferBlock<int> queue,
    IEnumerable<int> values,
    CancellationToken token)
{
    foreach (var value in values)
    {
        await queue.SendAsync(value, token);
        Console.WriteLine(value);
    }
    queue.Complete();
}
private static async Task<IEnumerable<int>> Consume(BufferBlock<int> queue)
{
    var ret = new List<int>();
    while (await queue.OutputAvailableAsync())
    {
        ret.Add(await queue.ReceiveAsync());
    }
    return ret;
}
static void Main(string[] args)
{
    var cts = new CancellationTokenSource();
    var queue = new BufferBlock<int>(new DataflowBlockOptions { BoundedCapacity = 5, CancellationToken = cts.Token });

    // Start the producer and consumer.
    var values = Enumerable.Range(0, 100);
    var producer = Produce(queue, values, cts.Token);
    var consumer = Consume(queue);

    cts.Cancel();

    try
    {
        // Include the producer task, so its cancellation is observed here as well.
        Task.WaitAll(producer, consumer, queue.Completion);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.ToString());
    }

    foreach (var i in consumer.Result)
    {
        Console.WriteLine(i);
    }
    Console.ReadKey();
}
Normally you use the CancellationToken option in order to control the cancellation of a dataflow block, using an external CancellationTokenSource. Canceling the block (assuming that it's a TransformBlock) has the following immediate effects (a small sketch follows the list):
1. The block stops accepting incoming messages. Invoking its Post returns false, meaning that the offered message is rejected.
2. The messages that are currently stored in the block's internal input buffer are immediately discarded. These messages are lost. They will not be processed or propagated.
If the block is not currently processing any messages, the following effects will also follow immediately. Otherwise they will follow when the processing of all currently processed messages is completed:
3. All the processed messages that are currently stored in the block's output buffer are discarded. The last processed messages (the messages that were in the middle of processing when the cancellation occurred) will not be propagated to linked blocks downstream.
4. Any pending asynchronous SendAsync operations targeting the block, that were in flight when the cancellation occurred, will complete with a result of false (meaning "not accepted").
5. The Task that represents the Completion of the block transitions to the Canceled state. In other words, this task's IsCanceled property becomes true.
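A minimal sketch of these effects (assumed to run inside an async method, with System.Threading.Tasks.Dataflow referenced; the delay is a placeholder for real work):
var cts = new CancellationTokenSource();
var block = new TransformBlock<int, int>(async x =>
{
    await Task.Delay(1000); // placeholder for real work
    return x;
}, new ExecutionDataflowBlockOptions { CancellationToken = cts.Token });

block.Post(1); // accepted, typically starts processing
block.Post(2); // accepted, waits in the input buffer
cts.Cancel();

Console.WriteLine(block.Post(3)); // False: new messages are rejected (effect 1)
try { await block.Completion; } catch (OperationCanceledException) { }
Console.WriteLine(block.Completion.IsCanceled); // True (effect 5); message 2 was discarded (effect 2)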
You can achieve all but the last effect directly, without using the CancellationToken option, by invoking the block's Fault method. This method is accessible through the IDataflowBlock interface that all blocks implement. You can use it like this:
((IDataflowBlock)block).Fault(new OperationCanceledException());
The difference is that the Completion task will now become Faulted instead of Canceled. This difference may or may not be important, depending on the situation. If you just await the Completion, which is how this property is normally used, in both cases an OperationCanceledException will be thrown. So if you don't need to do anything fancy with the Completion property, and you also want to avoid configuring the CancellationToken for some reason, you could consider this trick as an option.
Update: Behavior when the cancellation occurs after the Complete method has been invoked, in other words when the block is already in its completion phase, but has not completed yet:
If the block is a processing block, like a TransformBlock, all of the above will happen just the same. The block will transition soon to the Canceled state.
If the block is a non-processing block, like a BufferBlock<T>, the effect (3) from the list above will not happen. The output buffer of a BufferBlock<T> is not emptied when the cancellation happens after the invocation of the Complete method. See this GitHub issue for a demonstration of this behavior. Take into consideration that the Complete method may be invoked not only manually, but also automatically, if the block has been linked as the target of a source block with the PropagateCompletion configuration enabled. You may want to check out this question to understand fully the implications of this behavior. Long story short, canceling all the blocks of a dataflow pipeline that contains a BufferBlock<T> does not guarantee that the pipeline will terminate.
Side note: When both the Complete and Fault methods are invoked, whichever was invoked first prevails regarding the final status of the block. If Complete was invoked first, the block will complete with status RanToCompletion. If Fault was invoked first, the block will complete with status Faulted. Faulting an already completed block still has an effect though: it empties its internal input buffer.
Related
Since we expect to be reading frequently, and for us to often be reading when data is already available to be consumed, should SendLoopAsync return ValueTask rather than Task, so that we can make it allocation-free?
// Caller
_ = Task.Factory.StartNew(_ => SendLoopAsync(cancellationToken), TaskCreationOptions.LongRunning, cancellationToken);

// Method
private async ValueTask SendLoopAsync(CancellationToken cancellationToken)
{
    while (await _outputChannel.Reader.WaitToReadAsync(cancellationToken).ConfigureAwait(false))
    {
        while (_outputChannel.Reader.TryRead(out var message))
        {
            using (await _mutex.LockAsync(cancellationToken).ConfigureAwait(false))
            {
                await _clientWebSocket.SendAsync(message.Data.AsMemory(), message.MessageType, true, cancellationToken).ConfigureAwait(false);
            }
        }
    }
}
No, there is no value in having SendLoopAsync return a ValueTask instead of a Task. This method is invoked only once in your code. The impact of avoiding a single allocation of a tiny object is practically zero. You should consider using ValueTasks for asynchronous methods that are invoked repeatedly in loops, especially in hot paths. That's not the case in the example presented in the question.
As a side note, invoking asynchronous methods with the Task.Factory.StartNew + TaskCreationOptions.LongRunning combination is pointless. The new thread that is going to be created will have a very short life: it is going to be terminated as soon as the code reaches the first await of an incomplete awaitable inside the async method. Also, you are getting back a nested Task<Task>, which is tricky to handle. Using Task.Run is preferable. You can read here the reasons why.
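For illustration, a minimal alternative with Task.Run, assuming the same SendLoopAsync as in the question (the variable name is made up):
// Task.Run unwraps the inner task, so sendLoopTask represents the whole loop.
// Since the method is already asynchronous, Task.Run only offloads its synchronous start to the thread pool.
Task sendLoopTask = Task.Run(() => SendLoopAsync(cancellationToken).AsTask(), cancellationToken);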
Also be aware that the Nito.AsyncEx.AsyncLock class is not optimized for memory-efficiency. Lots of allocations are happening every time the lock is acquired. If you want a low-allocation synchronization primitive that can be acquired asynchronously, your best bet currently is probably to use a Channel<object> instance, initialized with a single null value: retrieve the value to enter, store it back to release.
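For illustration, a rough sketch of that Channel-based lock idea (the ChannelLock type and its member names are made up, not an existing API; requires System.Threading.Channels):
public sealed class ChannelLock
{
    private readonly Channel<object> _channel = Channel.CreateBounded<object>(1);

    public ChannelLock() => _channel.Writer.TryWrite(null); // one stored value means "unlocked"

    // Taking the stored value enters the lock.
    public ValueTask<object> EnterAsync(CancellationToken token = default)
        => _channel.Reader.ReadAsync(token);

    // Storing the value back releases the lock.
    public void Exit() => _channel.Writer.TryWrite(null);
}
Usage would then be: await _lock.EnterAsync(token); try { ... } finally { _lock.Exit(); }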
The idiomatic way to use channels doesn't need locks, semaphores or Task.Factory.StartNew. The typical way to use a channel is to have a method that accepts just a ChannelReader as input. If the method wants to use a Channel as output, it should create it itself and only return a ChannelReader that can be passed to other methods. By owning the channel the method knows when it can be closed.
In the question's case though, the code is simple enough. A simple await foreach should be enough:
private async ValueTask SendLoopAsync(ChannelReader<Message> reader,
    CancellationToken cancellationToken)
{
    await foreach (var msg in reader.ReadAllAsync(cancellationToken))
    {
        await _clientWebSocket.SendAsync(msg.Data.AsMemory(),
            msg.MessageType,
            true, cancellationToken);
    }
}
This method doesn't need an external Task.Run or Task.Factory.StartNew to work. To run it, just call it and store its task somewhere; don't discard it:
public MyWorker(ChannelReader<Message> reader, CancellationToken token)
{
    .....
    // AsTask() lets the ValueTask be stored in a Task field and awaited later.
    _loopTask = SendLoopAsync(reader, token).AsTask();
}
This way, once the input channel completes, the code can await _loopTask to finish processing any pending messages.
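For example, a hypothetical shutdown path for the component that owns both the channel and the worker task could look like this (StopAsync, _channel and _loopTask are illustrative names):
public async Task StopAsync()
{
    _channel.Writer.TryComplete(); // no more messages will be accepted
    await _loopTask;               // drain whatever is still buffered, then finish
}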
Any blocking code should run inside it with Task.Run(), e.g.:
private async ValueTask SendLoopAsync(ChannelReader<Message> reader,
    CancellationToken cancellationToken)
{
    await foreach (var msg in reader.ReadAllAsync(cancellationToken))
    {
        var newMsg = await Task.Run(() => SomeHeavyWork(msg), cancellationToken);
        await _clientWebSocket.SendAsync(newMsg.Data.AsMemory(),
            newMsg.MessageType,
            true, cancellationToken);
    }
}
Concurrent Workers
This method could be used to start multiple concurrent workers too:
var tasks = Enumerable.Range(0, dop)
                      .Select(_ => SendLoopAsync(reader, token).AsTask());
_loopTask = Task.WhenAll(tasks);
...
await _loopTask;
In .NET 6, Parallel.ForEachAsync can be used to process multiple messages with less code:
private async ValueTask SendLoopAsync(ChannelReader<Message> reader,
    CancellationToken cancellationToken)
{
    var options = new ParallelOptions
    {
        CancellationToken = cancellationToken,
        MaxDegreeOfParallelism = 4
    };
    var input = reader.ReadAllAsync(cancellationToken);
    await Parallel.ForEachAsync(input, options, async (msg, token) =>
    {
        var newMsg = await Task.Run(() => SomeHeavyWork(msg), token);
        await _clientWebSocket.SendAsync(newMsg.Data.AsMemory(),
            newMsg.MessageType,
            true, token);
    });
}
Idiomatic Channel Producers
Instead of using a class-level channel stored in a field, create the channel inside the producer method and only return its reader. This way the producer method has control of the channel's lifecycle and can close it when it's done. That's one of the reasons a Channel can only be accessed through its Reader and Writer classes.
A method can consume a ChannelReader and return another. This allows creating methods that can be chained together into a pipeline.
A simple producer can look like this:
ChannelReader<Message> Producer(CancellationToken token)
{
    var channel = Channel.CreateUnbounded<Message>();
    var writer = channel.Writer;
    _ = Task.Run(async () =>
        {
            while (!token.IsCancellationRequested)
            {
                var msg = SomeHeavyJob();
                await writer.WriteAsync(msg, token);
            }
        }, token)
        .ContinueWith(t => writer.TryComplete(t.Exception));
    return channel.Reader;
}
When cancellation is signaled, the worker exits or an exception is thrown, the main task ends, and ContinueWith calls TryComplete on the writer with any exception that may have been thrown. That's a simple non-blocking operation, so it doesn't matter what thread it runs on.
A transforming method would look like this:
ChannelReader<Msg2> Transform(ChannelReader<Msg1> input, CancellationToken token)
{
    var channel = Channel.CreateUnbounded<Msg2>();
    var writer = channel.Writer;
    _ = Task.Run(async () =>
        {
            await foreach (var msg1 in input.ReadAllAsync(token))
            {
                var msg2 = SomeHeavyJob(msg1);
                await writer.WriteAsync(msg2, token);
            }
        }, token)
        .ContinueWith(t => writer.TryComplete(t.Exception));
    return channel.Reader;
}
Turning those methods into static extension methods would allow chaining them one after the other:
var task = Producer(token)
    .Transformer(token)
    .Consumer(token);
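For example, the extension form assumed by that chain could be a thin wrapper over the methods above (the names, the placeholder ConsumeAsync, and the static accessibility of Transform are assumptions for illustration):
public static class PipelineExtensions
{
    public static ChannelReader<Msg2> Transformer(
        this ChannelReader<Msg1> input, CancellationToken token = default)
        => Transform(input, token);   // the Transform method shown above, made static

    public static Task Consumer(
        this ChannelReader<Msg2> input, CancellationToken token = default)
        => ConsumeAsync(input, token); // any terminal consumer, e.g. a send loop
}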
I don't set SingleWriter because it doesn't seem to do anything yet. Searching for it in the .NET runtime repo on GitHub doesn't show any results beyond test code.
According to the documentation:
A dataflow block is considered completed when it is not currently processing a message and when it has guaranteed that it will not process any more messages.
This behavior is not ideal in my case. I want to be able to cancel the job at any time, but the processing of each individual action takes a long time. So when I cancel the token, the effect is not immediate. I must wait for the currently processed item to complete. I have no way to cancel the actions directly, because the API I use is not cancelable. Can I do anything to make the block ignore the currently running action, and complete instantly?
Here is an example that demonstrates my problem. The token is canceled after 500 msec, and the duration of each action is 1000 msec:
static async Task Main()
{
    var cts = new CancellationTokenSource(500);
    var block = new ActionBlock<int>(async x =>
    {
        await Task.Delay(1000);
    }, new ExecutionDataflowBlockOptions() { CancellationToken = cts.Token });

    block.Post(1); // I must wait for this one to complete
    block.Post(2); // This one is ignored
    block.Complete();

    var stopwatch = Stopwatch.StartNew();
    try
    {
        await block.Completion;
    }
    catch (OperationCanceledException)
    {
        Console.WriteLine($"Canceled after {stopwatch.ElapsedMilliseconds} msec");
    }
}
Output:
Canceled after 1035 msec
The desired output would be a cancellation after ~500 msec.
Based on this excerpt from your comment...:
What I want to happen in case of a cancellation request is to ignore the currently running workitem. I don't care about it any more, so why I have to wait for it?
...and assuming you are truly OK with leaving the Task running, you can simply wrap the job you wish to call inside another Task that constantly polls for cancellation or completion, and cancel that Task instead. Take a look at the following proof-of-concept code that wraps a "long-running" task inside another Task "tasked" with constantly polling the wrapped task for completion and a CancellationToken for cancellation (it is completely spur-of-the-moment code, so you will want to adapt it a bit, of course):
public class LongRunningTaskSource
{
    public Task LongRunning(int milliseconds)
    {
        return Task.Run(() =>
        {
            Console.WriteLine("Starting long running task");
            Thread.Sleep(milliseconds);
            Console.WriteLine("Finished long running task");
        });
    }

    public Task LongRunningTaskWrapper(int milliseconds, CancellationToken token)
    {
        Task task = LongRunning(milliseconds);

        Task wrapperTask = Task.Run(() =>
        {
            while (true)
            {
                // Check for completion (you could, of course, do different things
                // depending on whether it is faulted or completed).
                if (task.IsCompleted)
                    break;

                // Check for cancellation.
                if (token.IsCancellationRequested)
                {
                    Console.WriteLine("Aborting Task.");
                    token.ThrowIfCancellationRequested();
                }
            }
        }, token);

        return wrapperTask;
    }
}
Using the following code:
static void Main()
{
    LongRunningTaskSource longRunning = new LongRunningTaskSource();
    CancellationTokenSource cts = new CancellationTokenSource(1500);
    Task task = longRunning.LongRunningTaskWrapper(3000, cts.Token);

    // Sleep long enough to let things roll on their own.
    Thread.Sleep(5000);
    Console.WriteLine("Ended Main");
}
...produces the following output:
Starting long running task
Aborting Task.
Exception thrown: 'System.OperationCanceledException' in mscorlib.dll
Finished long running task
Ended Main
The wrapped Task obviously completes in its own good time. If you don't have a problem with that (which, admittedly, is often not the case), then hopefully this should fit your needs.
As a supplementary example, running the following code (letting the wrapped Task finish before time-out):
static void Main()
{
    LongRunningTaskSource longRunning = new LongRunningTaskSource();
    CancellationTokenSource cts = new CancellationTokenSource(3000);
    Task task = longRunning.LongRunningTaskWrapper(1500, cts.Token);

    // Sleep long enough to let things roll on their own.
    Thread.Sleep(5000);
    Console.WriteLine("Ended Main");
}
...produces the following output:
Starting long running task
Finished long running task
Ended Main
So the task started and finished before the timeout and nothing had to be cancelled. Of course, nothing is blocked while waiting. As you probably already know, if you know what is being used behind the scenes in the long-running code, it would be good to clean up if necessary.
Hopefully, you can adapt this example to pass something like this to your ActionBlock.
Disclaimer & Notes
I am not familiar with the TPL Dataflow library, so this is just a workaround, of course. Also, if all you have is, for example, a synchronous method call that you do not have any influence on at all, then you will obviously need two tasks. One wrapper task to wrap the synchronous call and another one to wrap the wrapper task to include continuous status polling and cancellation checks.
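As an illustration of that note, one possible shape for it is sketched below; unlike the polling loop above, this sketch uses Task.WhenAny plus a token registration, and NonCancelableApiCall is a made-up stand-in for the synchronous call:
public Task CallAndAbandonOnCancelAsync(CancellationToken token)
{
    // First task: wraps the synchronous, non-cancelable call.
    Task inner = Task.Run(() => NonCancelableApiCall());

    // Second task: completes (or cancels) as soon as the inner task finishes
    // or the token is signaled, whichever comes first. The inner task keeps
    // running to completion in the background either way.
    return Task.Run(async () =>
    {
        var cancelTcs = new TaskCompletionSource<object>();
        using (token.Register(() => cancelTcs.TrySetCanceled(token)))
        {
            var completedFirst = await Task.WhenAny(inner, cancelTcs.Task);
            await completedFirst; // propagate result, fault, or cancellation
        }
    });
}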
With TPL we have CancellationTokenSource, which provides tokens that are useful for the cooperative cancellation of a current task (or of its start).
Question:
How long does it take to propagate a cancellation request to all hooked running tasks?
Is there any place where code could look to check that "from now on" every interested Task will find that cancellation has been requested?
Why is there a need for it?
I would like to have a stable unit test, to show that cancellation works in our code.
Problem details:
We have an "Executor" which produces tasks; these tasks wrap some long-running actions. The main job of the executor is to limit how many concurrent actions are started. All of these tasks can be cancelled individually, and these actions will also respect the CancellationToken internally.
I would like to provide a unit test which shows that when cancellation occurs while a task is waiting for a slot to start a given action, that task will cancel itself (eventually) and will not start executing the given action.
So the idea was to prepare a LimitingExecutor with a single slot, then start a blocking action that requests cancellation when unblocked, and then "enqueue" a test action that should fail if executed. With that setup, the test would unblock the first action and then assert that the test action's task throws TaskCanceledException when awaited.
[Test]
public void RequestPropagationTest()
{
    using (var setupEvent = new ManualResetEvent(initialState: false))
    using (var cancellation = new CancellationTokenSource())
    using (var executor = new LimitingExecutor())
    {
        // System-state setup action:
        var cancellingTask = executor.Do(() =>
        {
            setupEvent.WaitOne();
            cancellation.Cancel();
        }, CancellationToken.None);

        // Main work action:
        var actionTask = executor.Do(() =>
        {
            throw new InvalidOperationException(
                "This action should be cancelled!");
        }, cancellation.Token);

        // Let's wait until this `Task` starts, so it will get the opportunity
        // to cancel itself, and the expected later exception will not come
        // from just starting that action by `Task.Run` with a token:
        while (actionTask.Status < TaskStatus.Running)
            Thread.Sleep(millisecondsTimeout: 1);

        // Let's unblock the slot in the Executor for the 'main work action'
        // by finalizing the 'system-state setup action', which will
        // finally request "global" cancellation:
        setupEvent.Set();

        Assert.DoesNotThrowAsync(
            async () => await cancellingTask);
        Assert.ThrowsAsync<TaskCanceledException>(
            async () => await actionTask);
    }
}
public class LimitingExecutor : IDisposable
{
    private const int UpperLimit = 1;
    private readonly Semaphore _semaphore
        = new Semaphore(UpperLimit, UpperLimit);

    public Task Do(Action work, CancellationToken token)
        => Task.Run(() =>
        {
            _semaphore.WaitOne();
            try
            {
                token.ThrowIfCancellationRequested();
                work();
            }
            finally
            {
                _semaphore.Release();
            }
        }, token);

    public void Dispose()
        => _semaphore.Dispose();
}
An executable demo (via NUnit) of this problem can be found on GitHub.
However, that test implementation sometimes fails (no expected TaskCanceledException), on my machine maybe 1 in 10 runs. A kind of "solution" to this problem is to insert Thread.Sleep right after the request for cancellation. Even with a sleep of 3 seconds this test sometimes fails (found after 20-ish runs), and when it passes, that long wait is usually unnecessary (I guess). For reference, please see the diff.
The "other problem" was to ensure that the cancellation comes from the "waiting time" and not from Task.Run, because the ThreadPool could be busy (with other executing tests) and could postpone the start of the second task until after the request for cancellation - that would render this test falsely green. The "easy fix by hack" was to actively wait until the second task starts - until its Status becomes TaskStatus.Running. Please check the version under this branch and see that without this hack the test will sometimes be "green" - so the exemplified bug could pass through it.
Your test method assumes that cancellingTask always takes the slot (enters the semaphore) in LimitingExecutor before the actionTask. Unfortunately, this assumption is wrong; LimitingExecutor does not guarantee this, and it's just a matter of luck which of the two tasks takes the slot (actually on my computer the failure only happens in something like 5% of runs).
To resolve this problem, you need another ManualResetEvent that will allow the main thread to wait until cancellingTask actually occupies the slot:
using (var slotTaken = new ManualResetEvent(initialState: false))
using (var setupEvent = new ManualResetEvent(initialState: false))
using (var cancellation = new CancellationTokenSource())
using (var executor = new LimitingExecutor())
{
    // System-state setup action:
    var cancellingTask = executor.Do(() =>
    {
        // This is called from inside the semaphore, so it's
        // certain that this task occupies the only available slot.
        slotTaken.Set();
        setupEvent.WaitOne();
        cancellation.Cancel();
    }, CancellationToken.None);

    // Wait until cancellingTask takes the slot.
    slotTaken.WaitOne();

    // Now it's guaranteed that cancellingTask takes the slot, not the actionTask.
    // ...
}
.NET Framework doesn't provide an API to detect a task's transition to the Running state, so if you don't like polling the Status property + Thread.Sleep() in a loop, you'll need to modify LimitingExecutor.Do() to provide this information, probably using another ManualResetEvent, e.g.:
public Task Do(Action work, CancellationToken token, ManualResetEvent taskRunEvent = null)
    => Task.Run(() =>
    {
        // Optional notification to the caller that the task is now running.
        taskRunEvent?.Set();
        // ...
    }, token);
In a scenario where await may be called on an 'empty' list of tasks, how do I await a list of Task<T>, and then add new tasks to the awaited list until one fails or completes?
I am sure there must be an Awaiter or CancellationTokenSource solution for this problem.
public class LinkerThingBob
{
    private List<Task> ofmyactions = new List<Task>();

    public void LinkTo<T>(BufferBlock<T> messages) where T : class
    {
        var action = new ActionBlock<IMsg>(_ => this.Tx(messages, _));

        // this would not actually work, because the WhenAny
        // will not include subsequent actions.
        ofmyactions.Add(action.Completion);

        // link the new action block.
        this._inboundMessageBuffer.LinkTo(action);
    }

    // used to catch exceptions since these blocks typically don't end.
    public async Task CompletionAsync()
    {
        // How do I make the awaiting thread add a new action
        // to the list of waiting tasks without interrupting it,
        // or graciously interrupt it to let it know there's one more?
        // More importantly, this CompletionAsync might actually be called
        // before the first action is added to the list, so I actually need
        // WhenAny(INFINITE + ofmyactions)
        await Task.WhenAny(ofmyactions);
    }
}
My problem is that I need a mechanism where I can add each of the action instances created above to a Task<T> that will complete when there is an exception.
I am not sure how best to explain this but:
The task must not complete until at least one call to LinkTo<T> has been made, so I need to start with an infinite task
each time LinkTo<T> is called, the new action must be added to the list of tasks, which may already be awaited on in another thread.
There isn't anything built-in for this, but it's not too hard to build one using TaskCompletionSource<T>. TCS is the type to use when you want to await something and there isn't already a construct for it. (Custom awaiters are for more advanced scenarios).
In this case, something like this should suffice:
public class LinkerThingBob
{
    private readonly TaskCompletionSource<object> _tcs = new TaskCompletionSource<object>();

    private async Task ObserveAsync(Task task)
    {
        try
        {
            await task;
            _tcs.TrySetResult(null);
        }
        catch (Exception ex)
        {
            _tcs.TrySetException(ex);
        }
    }

    public void LinkTo<T>(BufferBlock<T> messages) where T : class
    {
        var action = new ActionBlock<IMsg>(_ => this.Tx(messages, _));
        var _ = ObserveAsync(action.Completion);
        this._inboundMessageBuffer.LinkTo(action);
    }

    public Task Completion { get { return _tcs.Task; } }
}
Completion starts in a non-completed state. Any number of blocks can be linked to it using ObserveAsync. As soon as one of the blocks completes, Completion also completes. I wrote ObserveAsync here in a way so that if the first completed block completes without error, then so will Completion; and if the first completed block completes with an exception, then Completion will complete with that same exception. Feel free to tweak for your specific needs. :)
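A hypothetical usage could look like this (the buffer block names are placeholders):
var linker = new LinkerThingBob();
linker.LinkTo(messagesBufferA);
linker.LinkTo(messagesBufferB);

// Completes (or faults) as soon as the first linked ActionBlock completes.
await linker.Completion;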
This is a solution that uses exclusively tools of the TPL Dataflow library itself. You can create a TransformBlock that will "process" the ActionBlocks you want to observe. Processing a block means simply awaiting for its completion. So the TransformBlock takes incomplete blocks, and outputs the same blocks as completed. The TransformBlock must be configured with unlimited parallelism and capacity, and with ordering disabled, so that all blocks are observed concurrently, and each one that completes is returned instantly.
var allBlocks = new TransformBlock<ActionBlock<IMsg>, ActionBlock<IMsg>>(async block =>
{
    try { await block.Completion; }
    catch { }
    return block;
}, new ExecutionDataflowBlockOptions()
{
    MaxDegreeOfParallelism = DataflowBlockOptions.Unbounded,
    EnsureOrdered = false
});
Then inside the LinkerThingBob.LinkTo method, send the created ActionBlocks to the TransformBlock.
var actionBlock = new ActionBlock<IMsg>(_ => this.Tx(messages, _));
allBlocks.Post(actionBlock);
Now you need a target to receive the first faulted block. A WriteOnceBlock is quite suitable for this role, since it ensures that it will receive at most one faulted block.
var firstFaulted = new WriteOnceBlock<ActionBlock<IMsg>>(x => x);
allBlocks.LinkTo(firstFaulted, block => block.Completion.IsFaulted);
Finally you can await at any place for the completion of the WriteOnceBlock. It will complete immediately after receiving a faulted block, or it may never complete if it never receives a faulted block.
await firstFaulted.Completion;
After the awaiting you can also get the faulted block if you want.
ActionBlock<IMsg> faultedBlock = firstFaulted.Receive();
The WriteOnceBlock is special in how it behaves when it forwards messages. Unlike most other blocks, you can call its Receive method multiple times, and you'll always get the same single item it contains (it is not removed from its buffer after the first Receive).
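A tiny illustration of that behavior:
var once = new WriteOnceBlock<int>(x => x);
once.Post(42);
Console.WriteLine(once.Receive()); // 42
Console.WriteLine(once.Receive()); // 42 again; the stored item is not removed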
What is the difference between the Cancellation operations versus the loopState operation (Break/Stop)?
private static CancellationTokenSource cts;

public static int loopingMethod()
{
    cts = new CancellationTokenSource();
    try
    {
        ParallelOptions pOptions = new ParallelOptions();
        pOptions.MaxDegreeOfParallelism = 4;
        pOptions.CancellationToken = cts.Token;

        Parallel.ForEach(dictObj, pOptions, (KVP, loopState) =>
        {
            pOptions.CancellationToken.ThrowIfCancellationRequested();
            parallelDoWork(KVP.Key, KVP.Value, loopState);
        }); // End of Parallel.ForEach loop
    }
    catch (OperationCanceledException e)
    {
        // Catastrophic failure
        return -99;
    }
    return 0;
}

public static void parallelDoWork(string Id, string Value, ParallelLoopState loopState)
{
    try
    {
        throw new Exception("kill loop");
    }
    catch (Exception ex)
    {
        if (ex.Message == "kill loop")
        {
            cts.Cancel();
            // Or do I use loopState here?
        }
    }
}
Why would I want to use the ParallelOptions Cancellation operation versus the loopState.Break(); or loopState.Stop(); or vice versa?
See this article
"Setting a cancellation token allows you to abort Invoke (remember that when a delegate throws an exception, the exception is swallowed and only re-thrown by Invoke after all other delegates have been executed)."
Scenario 1. Imagine you have a user about to send messages to all ex-[girl|boy]friends. They click send and then they come to their senses and want to cancel it. By using the cancellation token they are able to stop further messages from going out. So if you have a long running process that is allowed to be cancelled, use the cancellation token.
Scenario 2. On the other hand, if you don't want a process to be interrupted, then use normal loop state exceptions so that the exceptions will be swallowed until all threads finish.
Scenario 3. If you have a process that is I/O intensive, then you probably want to be using async/await and not Parallel.ForEach. Check out Microsoft's task-based asynchronous pattern.
ParallelLoopState.Break/Stop have well defined semantics specific to the execution of the loop. I.e. by using these you can be very specific about how you want the loop to terminate. A CancellationToken on the other hand is the generic stop mechanism in TPL, so it does nothing special for parallel loops. The advantage of using the token is that it can be shared among other TPL features, so you could have a task and a loop that are controlled by the same token.
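A small sketch contrasting the two mechanisms (DoWork and the input range are placeholders):
var cts = new CancellationTokenSource();
var options = new ParallelOptions { CancellationToken = cts.Token };

ParallelLoopResult result = Parallel.ForEach(Enumerable.Range(0, 1000), options, (i, loopState) =>
{
    // Loop-specific termination: the loop stops scheduling new iterations
    // and Parallel.ForEach returns normally (result.IsCompleted == false).
    if (i == 42) { loopState.Stop(); return; }

    // Generic TPL cancellation: makes Parallel.ForEach throw an
    // OperationCanceledException; the same cts.Token could also cancel other tasks.
    options.CancellationToken.ThrowIfCancellationRequested();

    DoWork(i);
});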