With the TPL we have CancellationTokenSource, which provides tokens that are useful for cooperative cancellation of a running task (or for preventing it from starting).
Question:
How long does it take to propagate a cancellation request to all hooked running tasks?
Is there any place where code could check that, "from now on", every interested Task will find that cancellation has been requested?
Why is this needed?
I would like to have a stable unit test that shows that cancellation works in our code.
Problem details:
We have "Executor" which produces tasks, these task wrap some long running actions. Main job of executor is to limit how many concurrent actions were started. All of these tasks can be cancelled individually, and also these actions will respect CancellationToken internally.
I would like to provide unit test, which shows that when cancellation occurred while task is waiting for slot to start given action, that task will cancel itself (eventually) and does not start execution of given action.
So, idea was to prepare LimitingExecutor with single slot. Then start blocking action, which would request cancellation when unblocked. Then "enqueue" test action, which should fail when executed. With that setup, tests would call unblock and then assert that task of test action will throw TaskCanceledException when awaited.
[Test]
public void RequestPropagationTest()
{
    using (var setupEvent = new ManualResetEvent(initialState: false))
    using (var cancellation = new CancellationTokenSource())
    using (var executor = new LimitingExecutor())
    {
        // System-state setup action:
        var cancellingTask = executor.Do(() =>
        {
            setupEvent.WaitOne();
            cancellation.Cancel();
        }, CancellationToken.None);

        // Main work action:
        var actionTask = executor.Do(() =>
        {
            throw new InvalidOperationException(
                "This action should be cancelled!");
        }, cancellation.Token);

        // Let's wait until this `Task` starts, so it will get the opportunity
        // to cancel itself, and the exception expected later will not come
        // merely from `Task.Run` rejecting the action because of the token:
        while (actionTask.Status < TaskStatus.Running)
            Thread.Sleep(millisecondsTimeout: 1);

        // Let's unblock the slot in the Executor for the 'main work action'
        // by finishing the 'system-state setup action', which will
        // finally request the "global" cancellation:
        setupEvent.Set();

        Assert.DoesNotThrowAsync(
            async () => await cancellingTask);
        Assert.ThrowsAsync<TaskCanceledException>(
            async () => await actionTask);
    }
}
public class LimitingExecutor : IDisposable
{
    private const int UpperLimit = 1;
    private readonly Semaphore _semaphore
        = new Semaphore(UpperLimit, UpperLimit);

    public Task Do(Action work, CancellationToken token)
        => Task.Run(() =>
        {
            _semaphore.WaitOne();
            try
            {
                token.ThrowIfCancellationRequested();
                work();
            }
            finally
            {
                _semaphore.Release();
            }
        }, token);

    public void Dispose()
        => _semaphore.Dispose();
}
An executable demo (via NUnit) of this problem can be found on GitHub.
However, the implementation of that test sometimes fails (no expected TaskCanceledException), on my machine maybe 1 in 10 runs. A kind of "solution" to this problem is to insert a Thread.Sleep right after the cancellation request. Even with a sleep of 3 seconds this test sometimes fails (found after 20-ish runs), and when it does pass, that long wait is usually unnecessary (I guess). For reference, please see the diff.
The "other problem" was to ensure that the cancellation comes from the "waiting time" and not from Task.Run, because the ThreadPool could be busy (other tests executing) and could postpone the start of the second task until after the cancellation request - that would render this test "falsely green". The "easy fix by hack" was to actively wait until the second task starts, i.e. until its Status becomes TaskStatus.Running. Please check the version under this branch and see that, without this hack, the test will sometimes be "green" - so the exemplified bug could slip through it.
Your test method assumes that cancellingTask always takes the slot (enters the semaphore) in LimitingExecutor before the actionTask. Unfortunately, this assumption is wrong; LimitingExecutor does not guarantee this, and it's just a matter of luck which of the two tasks takes the slot (actually, on my computer it only happens in something like 5% of runs).
To resolve this problem, you need another ManualResetEvent that will allow the main thread to wait until cancellingTask actually occupies the slot:
using (var slotTaken = new ManualResetEvent(initialState: false))
using (var setupEvent = new ManualResetEvent(initialState: false))
using (var cancellation = new CancellationTokenSource())
using (var executor = new LimitingExecutor())
{
    // System-state setup action:
    var cancellingTask = executor.Do(() =>
    {
        // This is called from inside the semaphore, so it's
        // certain that this task occupies the only available slot.
        slotTaken.Set();
        setupEvent.WaitOne();
        cancellation.Cancel();
    }, CancellationToken.None);

    // Wait until cancellingTask takes the slot
    slotTaken.WaitOne();

    // Now it's guaranteed that cancellingTask takes the slot, not the actionTask
    // ...
}
The .NET Framework doesn't provide an API to detect a task's transition to the Running state, so if you don't like polling the Status property + Thread.Sleep() in a loop, you'll need to modify LimitingExecutor.Do() to provide this information, probably using another ManualResetEvent, e.g.:
public Task Do(Action work, CancellationToken token, ManualResetEvent taskRunEvent = null)
    => Task.Run(() =>
    {
        // Optional notification to the caller that the task is now running
        taskRunEvent?.Set();
        // ...
    }, token);
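With such an overload in place, the test could then wait on the event instead of polling Status. A minimal sketch, assuming the optional taskRunEvent parameter shown above (the workRunning name is illustrative):

using (var workRunning = new ManualResetEvent(initialState: false))
{
    // Main work action, now reporting when its Task.Run delegate starts:
    var actionTask = executor.Do(() =>
    {
        throw new InvalidOperationException("This action should be cancelled!");
    }, cancellation.Token, workRunning);

    // Replaces the `while (actionTask.Status < TaskStatus.Running) Thread.Sleep(1);` loop:
    workRunning.WaitOne();
    // ...
}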
Related
I would like to ask expert developers in C#. I have three recurrent tasks that my program needs to do. Task 2 depends on task 1 and task 3 depends on task 2, but task 1 doesn't need to wait for the other two tasks to finish in order to start again (the program runs continuously). Since each task takes some time, I would like to run each one on its own thread or as a C# Task. Once task 1 finishes, task 2 starts and task 1 starts again, etc.
I'm not sure what is the best way to implement this. I hope someone can guide me on this.
One way to achieve this is to use TPL Dataflow (the System.Threading.Tasks.Dataflow library, built on top of the Task Parallel Library). It provides a set of classes that allow you to arrange your tasks into "blocks". You create a method that does A, B and C sequentially, and the dataflow block will take care of running multiple invocations of that method simultaneously. Here's a small example:
async Task Main()
{
    var actionBlock = new ActionBlock<int>(DoTasksAsync, new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 2 // This is the number of simultaneous executions of DoTasksAsync that will be run
    });

    await actionBlock.SendAsync(1);
    await actionBlock.SendAsync(2);

    actionBlock.Complete();
    await actionBlock.Completion;
}

async Task DoTasksAsync(int input)
{
    await DoTaskAAsync();
    await DoTaskBAsync();
    await DoTaskCAsync();
}
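The DoTaskAAsync, DoTaskBAsync and DoTaskCAsync methods above stand in for your three recurrent tasks and are not defined in this answer; purely to make the sample compile, minimal placeholders might look like this:

async Task DoTaskAAsync() => await Task.Delay(1000); // simulates task 1's work
async Task DoTaskBAsync() => await Task.Delay(1000); // simulates task 2's work
async Task DoTaskCAsync() => await Task.Delay(1000); // simulates task 3's work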
I would probably use some kind of queue pattern.
I am not sure what the requirements are regarding whether task 1 is thread-safe, so I will keep it simple:
Task 1 is always executing. As soon as it finishes, it posts a message on some queue and starts over.
Task 2 is listening to the queue. Whenever a message is available, it starts working on it.
Whenever task 2 finishes working, it calls task 3 so that it can do its work.
As one of the comments mentioned, you should probably be able to use async/await successfully in your code, especially between tasks 2 and 3. Note that task 1 can run in parallel to tasks 2 and 3, since it does not depend on either of them. A minimal sketch of this pattern follows.
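Just to make the idea concrete, here is a rough sketch of that pattern, assuming a BlockingCollection<int> (from System.Collections.Concurrent) as the queue; the DoTask1/DoTask2Async/DoTask3Async names and the int payload are purely illustrative:

var queue = new BlockingCollection<int>();

// Task 1: runs continuously, posting each result onto the queue and starting over.
var producer = Task.Run(() =>
{
    while (true)
    {
        int result1 = DoTask1();   // hypothetical long-running work of task 1
        queue.Add(result1);        // hand the result over to task 2
    }
});

// Tasks 2 and 3: consume results one at a time; task 3 runs right after task 2.
var consumer = Task.Run(async () =>
{
    foreach (int result1 in queue.GetConsumingEnumerable())
    {
        int result2 = await DoTask2Async(result1); // depends on task 1's output
        await DoTask3Async(result2);               // depends on task 2's output
    }
});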
You could use the ParallelLoop method below. This method starts an asynchronous workflow, where the three tasks are invoked in parallel to each other, but sequentially to themselves. So you don't need to add synchronization inside each task, unless some task produces global side-effects that are visible from some other task.
The tasks are invoked on the ThreadPool, with the Task.Run method.
/// <summary>
/// Invokes three actions repeatedly in parallel on the ThreadPool, with the
/// action2 depending on the action1, and the action3 depending on the action2.
/// Each action is invoked sequentially to itself.
/// </summary>
public static async Task ParallelLoop<TResult1, TResult2>(
    Func<TResult1> action1,
    Func<TResult1, TResult2> action2,
    Action<TResult2> action3,
    CancellationToken cancellationToken = default)
{
    // Arguments validation omitted
    var task1 = Task.FromResult<TResult1>(default);
    var task2 = Task.FromResult<TResult2>(default);
    var task3 = Task.CompletedTask;
    try
    {
        int counter = 0;
        while (true)
        {
            counter++;
            var result1 = await task1.ConfigureAwait(false);
            cancellationToken.ThrowIfCancellationRequested();
            task1 = Task.Run(action1); // Restart the task1
            if (counter <= 1) continue; // In the first loop result1 is undefined
            var result2 = await task2.ConfigureAwait(false);
            cancellationToken.ThrowIfCancellationRequested();
            task2 = Task.Run(() => action2(result1)); // Restart the task2
            if (counter <= 2) continue; // In the second loop result2 is undefined
            await task3.ConfigureAwait(false);
            cancellationToken.ThrowIfCancellationRequested();
            task3 = Task.Run(() => action3(result2)); // Restart the task3
        }
    }
    finally
    {
        // Prevent fire-and-forget
        Task allTasks = Task.WhenAll(task1, task2, task3);
        // Propagate all errors in an AggregateException
        try { await allTasks.ConfigureAwait(false); } catch { allTasks.Wait(); }
    }
}
There is an obvious pattern in the implementation that makes it trivial to add overloads with more than three actions. Each added action requires its own generic type parameter (TResult3, TResult4, etc.).
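For instance, a four-action overload could be declared like this (a sketch of the signature only; the body would repeat the same pattern with one extra stage):

public static async Task ParallelLoop<TResult1, TResult2, TResult3>(
    Func<TResult1> action1,
    Func<TResult1, TResult2> action2,
    Func<TResult2, TResult3> action3,
    Action<TResult3> action4,
    CancellationToken cancellationToken = default)
{
    // Same structure as above, with an extra task4/result3 stage.
}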
Usage example (of the three-action version):
var cts = new CancellationTokenSource();
Task loopTask = ParallelLoop(() =>
{
    // First task
    Thread.Sleep(1000); // Simulates synchronous work
    return "OK"; // The result that is passed to the second task
}, result =>
{
    // Second task
    Thread.Sleep(1000); // Simulates synchronous work
    return result + "!"; // The result that is passed to the third task
}, result =>
{
    // Third task
    Thread.Sleep(1000); // Simulates synchronous work
}, cts.Token);
In case any of the tasks fails, the whole loop will stop (with the loopTask.Exception containing the error). Since the tasks depend on each other, recovering from a single failed task is not possible¹. What you could do is to execute the whole loop through a Polly Retry policy, to make sure that the loop will be reincarnated in case of failure. If you are unfamiliar with the Polly library, you could use the simple and featureless RetryUntilCanceled method below:
public static async Task RetryUntilCanceled(Func<Task> action,
    CancellationToken cancellationToken)
{
    while (true)
    {
        cancellationToken.ThrowIfCancellationRequested();
        try { await action().ConfigureAwait(false); }
        catch { if (cancellationToken.IsCancellationRequested) throw; }
    }
}
Usage:
Task loopTask = RetryUntilCanceled(() => ParallelLoop(() =>
{
//...
}, cts.Token), cts.Token);
Before exiting the process you are advised to Cancel() the CancellationTokenSource and Wait() (or await) the loopTask, in order for the loop to terminate gracefully. Otherwise some tasks may be aborted in the middle of their work.
¹ It is actually possible, and probably preferable, to execute each individual task through a Polly Retry policy. The parallel loop will be suspended until the failed task is retried successfully.
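If you go that route, a rough, untested sketch using Polly's async retry API could look like the following (the retry settings are arbitrary, and the policy object is assumed to be reusable across iterations):

using Polly;

// Retry a failed action a few times, with an increasing delay between attempts.
var retryPolicy = Policy
    .Handle<Exception>()
    .WaitAndRetryAsync(retryCount: 3, attempt => TimeSpan.FromSeconds(attempt));

// Then, inside ParallelLoop, instead of `task1 = Task.Run(action1);`:
task1 = retryPolicy.ExecuteAsync(() => Task.Run(action1));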
According to the documentation:
A dataflow block is considered completed when it is not currently processing a message and when it has guaranteed that it will not process any more messages.
This behavior is not ideal in my case. I want to be able to cancel the job at any time, but the processing of each individual action takes a long time. So when I cancel the token, the effect is not immediate. I must wait for the currently processed item to complete. I have no way to cancel the actions directly, because the API I use is not cancelable. Can I do anything to make the block ignore the currently running action, and complete instantly?
Here is an example that demonstrates my problem. The token is canceled after 500 msec, and the duration of each action is 1000 msec:
static async Task Main()
{
    var cts = new CancellationTokenSource(500);
    var block = new ActionBlock<int>(async x =>
    {
        await Task.Delay(1000);
    }, new ExecutionDataflowBlockOptions() { CancellationToken = cts.Token });

    block.Post(1); // I must wait for this one to complete
    block.Post(2); // This one is ignored
    block.Complete();

    var stopwatch = Stopwatch.StartNew();
    try
    {
        await block.Completion;
    }
    catch (OperationCanceledException)
    {
        Console.WriteLine($"Canceled after {stopwatch.ElapsedMilliseconds} msec");
    }
}
Output:
Canceled after 1035 msec
The desired output would be a cancellation after ~500 msec.
Based on this excerpt from your comment...:
What I want to happen in case of a cancellation request is to ignore the currently running workitem. I don't care about it any more, so why I have to wait for it?
...and assuming you are truly OK with leaving the Task running, you can simply wrap the job you wish to call inside another Task that constantly polls for cancellation or completion, and cancel that Task instead. Take a look at the following proof-of-concept code, which wraps a "long-running" task inside another Task "tasked" with continuously polling the wrapped task for completion and a CancellationToken for cancellation (it is completely spur-of-the-moment, so you will of course want to re-adapt it a bit):
public class LongRunningTaskSource
{
    public Task LongRunning(int milliseconds)
    {
        return Task.Run(() =>
        {
            Console.WriteLine("Starting long running task");
            Thread.Sleep(milliseconds);
            Console.WriteLine("Finished long running task");
        });
    }

    public Task LongRunningTaskWrapper(int milliseconds, CancellationToken token)
    {
        Task task = LongRunning(milliseconds);

        Task wrapperTask = Task.Run(() =>
        {
            while (true)
            {
                //Check for completion (you could, of course, do different things
                //depending on whether it is faulted or completed).
                if (task.IsCompleted)
                    break;

                //Check for cancellation.
                if (token.IsCancellationRequested)
                {
                    Console.WriteLine("Aborting Task.");
                    token.ThrowIfCancellationRequested();
                }
            }
        }, token);

        return wrapperTask;
    }
}
Using the following code:
static void Main()
{
    LongRunningTaskSource longRunning = new LongRunningTaskSource();
    CancellationTokenSource cts = new CancellationTokenSource(1500);
    Task task = longRunning.LongRunningTaskWrapper(3000, cts.Token);

    //Sleep long enough to let things roll on their own.
    Thread.Sleep(5000);
    Console.WriteLine("Ended Main");
}
...produces the following output:
Starting long running task
Aborting Task.
Exception thrown: 'System.OperationCanceledException' in mscorlib.dll
Finished long running task
Ended Main
The wrapped Task obviously completes in its own good time. If you don't have a problem with that - which, admittedly, is often not the case - this should hopefully fit your needs.
As a supplementary example, running the following code (letting the wrapped Task finish before time-out):
static void Main()
{
    LongRunningTaskSource longRunning = new LongRunningTaskSource();
    CancellationTokenSource cts = new CancellationTokenSource(3000);
    Task task = longRunning.LongRunningTaskWrapper(1500, cts.Token);

    //Sleep long enough to let things roll on their own.
    Thread.Sleep(5000);
    Console.WriteLine("Ended Main");
}
...produces the following output:
Starting long running task
Finished long running task
Ended Main
So the task started and finished before the timeout and nothing had to be cancelled. Of course, nothing is blocked while waiting. As you probably already know, if you know what is being used behind the scenes in the long-running code, it would be good to clean up there if necessary.
Hopefully, you can adapt this example to pass something like this to your ActionBlock.
Disclaimer & Notes
I am not familiar with the TPL Dataflow library, so this is just a workaround, of course. Also, if all you have is, for example, a synchronous method call that you have no influence on at all, then you will obviously need two tasks: one wrapper task to wrap the synchronous call, and another one to wrap the wrapper task with the continuous status polling and cancellation checks.
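For example, a minimal sketch of that two-task arrangement (SomeBlockingApiCall is a hypothetical, non-cancelable synchronous call; the method name is illustrative):

public Task WrapBlockingCall(CancellationToken token)
{
    // First task: wraps the synchronous, non-cancelable call.
    Task blockingTask = Task.Run(() => SomeBlockingApiCall());

    // Second task: polls the first task and the token, and completes (or is
    // cancelled) as soon as either the call finishes or cancellation is
    // requested, leaving the blocking call running in the background.
    return Task.Run(() =>
    {
        while (!blockingTask.IsCompleted)
        {
            token.ThrowIfCancellationRequested();
            Thread.Sleep(25); // polling interval, to avoid a tight spin
        }
    }, token);
}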
I am trying to implement long-running operation handling on the server, without push notifications.
My project's methods are all async. All methods outside the web project are awaited with ConfigureAwait(false).
I have a library referenced in my web project that manages long-running operations.
On the initial request I fire-and-forget whatever I think may run longer:
// my fire-and-forget - the task is not awaited
longRunningOperation.RunAsync();

// add my delay
var result = await Task.WhenAny(longRunningOperation.Task,
    Task.Delay(LongRunningConfiguration.Instance.InitialRequestReleaseTime)).ConfigureAwait(false);

// if the task finishes in time, return the result; otherwise create a long-running handler
if (result == longRunningOperation.Task)
{
    // it is OK
}
else
{
    Task task = Task.Run(async () =>
    {
        await longRunningOperation.Task;
    });

    monitorTask = new ActiveMonitorTask(longRunningOperation, task)
    {
        Id = Guid.NewGuid()
    };

    _monitorStateSession.Add(monitorTask);
}
At the moment only one of my operations is implemented to support long-running handling.
Most of the time, tasks for that operation go to the Done status, but from time to time they hang in WaitForActivation.
Any suggestions on how to track down the problem or check what can cause it?
Regards,
Boris
All, here is a question about design/best practices for a complex case of cancelling Tasks in C#. How do you implement cancellation of a shared task?
As a minimal example, let's assume the following: we have a long-running, cooperatively cancellable operation 'Work'. It accepts a cancellation token as an argument and throws if it has been cancelled. It operates on some application state and returns a value. Its result is independently required by two UI components.
While the application state is unchanged, the value of the Work function should be cached, and if one computation is ongoing, a new request should not start a second computation but rather wait for the result.
Either of the UI components should be able to cancel its Task without affecting the other UI component's task.
Are you with me so far?
The above can be accomplished by introducing a Task cache that wraps the real Work task in TaskCompletionSources, whose Tasks are then returned to the UI components. If a UI component cancels its Task, it only abandons the TaskCompletionSource's Task and not the underlying task. This is all good. Each UI component creates the CancellationTokenSource, and the request to cancel is a normal top-down design, with the cooperating TaskCompletionSource Task at the bottom.
Now, to the real problem. What to do when the application state changes? Let's assume that having the 'Work' function operate on a copy of the state is not feasible.
One solution would be to listen to the state change in the task cache (or thereabouts). If the cache has a CancellationToken used by the underlying task, the one running the Work function, it could cancel it. This could then trigger a cancellation of all attached TaskCompletionSources' Tasks, and thus both UI components would get cancelled tasks. This is some kind of bottom-up cancellation.
Is there a preferred way to do this? Is there a design pattern that describes it somewhere?
The bottom-up cancellation can be implemented, but it feels a bit weird. The UI task is created with a CancellationToken, but it is cancelled due to another (inner) CancellationToken. Also, since the tokens are not the same, the OperationCanceledException can't just be ignored in the UI - that would (eventually) lead to an exception being thrown in the outer Task's finalizer.
Here's my attempt:
// the Task for the current application state
Task<Result> _task;

// a CancellationTokenSource for the current application state
CancellationTokenSource _cts;

// called when the application state changes
void OnStateChange()
{
    // cancel the Task for the old application state
    if (_cts != null)
    {
        _cts.Cancel();
    }

    // new CancellationTokenSource for the new application state
    _cts = new CancellationTokenSource();

    // start the Task for the new application state
    _task = Task.Factory.StartNew<Result>(() => { ... }, _cts.Token);
}

// called by UI component
Task<Result> ComputeResultAsync(CancellationToken cancellationToken)
{
    var task = _task;
    if (cancellationToken.CanBeCanceled && !task.IsCompleted)
    {
        task = WrapTaskForCancellation(cancellationToken, task);
    }
    return task;
}
with
static Task<T> WrapTaskForCancellation<T>(
    CancellationToken cancellationToken, Task<T> task)
{
    var tcs = new TaskCompletionSource<T>();
    if (cancellationToken.IsCancellationRequested)
    {
        tcs.TrySetCanceled();
    }
    else
    {
        cancellationToken.Register(() =>
        {
            tcs.TrySetCanceled();
        });
        task.ContinueWith(antecedent =>
        {
            if (antecedent.IsFaulted)
            {
                tcs.TrySetException(antecedent.Exception.GetBaseException());
            }
            else if (antecedent.IsCanceled)
            {
                tcs.TrySetCanceled();
            }
            else
            {
                tcs.TrySetResult(antecedent.Result);
            }
        }, TaskContinuationOptions.ExecuteSynchronously);
    }
    return tcs.Task;
}
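As an aside, a UI component would presumably consume ComputeResultAsync roughly like this (an illustrative sketch from inside some async UI handler, not part of the original code):

var uiCts = new CancellationTokenSource();
try
{
    Result result = await ComputeResultAsync(uiCts.Token);
    // render the result...
}
catch (OperationCanceledException)
{
    // Reached both when this component cancelled its own wrapper task and
    // when the underlying Work task was cancelled bottom-up by a state change.
}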
It sounds like you want a greedy set of task operations - you have a task result provider, and then construct a task set to return the first completed operation, ex:
// Task Provider - basically, construct your first call as appropriate, and then
// invoke this on state change
public void OnStateChanged()
{
    if (_cts != null)
        _cts.Cancel();
    _cts = new CancellationTokenSource();
    _task = Task.Factory.StartNew(() =>
    {
        // Do the computation, checking _cts.IsCancellationRequested, etc.
        return result;
    });
}

// Consumer 1
var cts = new CancellationTokenSource();
var task = Task.Factory.StartNew(() =>
{
    var waitForResultTask = Task.Factory.StartNew(() =>
    {
        // Internally, this is invoking the task and waiting for its value
        return MyApplicationState.GetComputedValue();
    });

    // Note this task cares about being cancelled, not the one above
    var cancelWaitTask = Task.Factory.StartNew(() =>
    {
        while (!cts.IsCancellationRequested)
            Thread.Sleep(25);
        return someDummyValue;
    });

    Task.WaitAny(waitForResultTask, cancelWaitTask);

    if (cancelWaitTask.IsCompleted)
        return "Blah"; // I cancelled waiting on the original task, even though it is still waiting for its response
    else
        return waitForResultTask.Result;
});
Now, I haven't fully tested this, but it should allow you to "cancel" waiting on a task by cancelling the token (and thus forcing the "wait" task to complete first and hit the WaitAny), and it also allows you to "cancel" the computation task.
The other thing to figure out is a clean way of making the "cancel" task wait without horrible blocking; one option is sketched below. It's a good start, I think.
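One way to avoid the polling loop, assuming you are fine with blocking one ThreadPool thread on a wait handle, is to have the "cancel" task wait directly on the token's WaitHandle:

// Replaces the polling cancelWaitTask above: the task blocks until cancellation
// is requested, instead of spinning with Thread.Sleep(25).
var cancelWaitTask = Task.Factory.StartNew(() =>
{
    cts.Token.WaitHandle.WaitOne(); // wakes up as soon as cts.Cancel() is called
    return someDummyValue;
});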
Our application uses the TPL to serialize (potentially) long running units of work. The creation of work (tasks) is user-driven and may be cancelled at any time. In order to have a responsive user interface, if the current piece of work is no longer required we would like to abandon what we were doing, and immediately start a different task.
Tasks are queued up something like this:
private Task workQueue;

private void DoWorkAsync(Action<WorkCompletedEventArgs> callback, CancellationToken token)
{
    if (workQueue == null)
    {
        workQueue = Task.Factory.StartNew(() => DoWork(callback, token), token);
    }
    else
    {
        workQueue.ContinueWith(t => DoWork(callback, token), token);
    }
}
The DoWork method contains a long-running call, so it is not as simple as constantly checking token.IsCancellationRequested and bailing out if/when a cancel is detected. The long-running work will block the Task continuations until it finishes, even if the task is cancelled.
I have come up with two sample methods to work around this issue, but am not convinced that either is proper. I created simple console applications to demonstrate how they work.
The important point to note is that the continuation fires before the original task completes.
Attempt #1: An inner task
static void Main(string[] args)
{
    CancellationTokenSource cts = new CancellationTokenSource();
    var token = cts.Token;
    token.Register(() => Console.WriteLine("Token cancelled"));

    // Initial work
    var t = Task.Factory.StartNew(() =>
    {
        Console.WriteLine("Doing work");

        // Wrap the long running work in a task, and then wait for it to complete
        // or the token to be cancelled.
        var innerT = Task.Factory.StartNew(() => Thread.Sleep(3000), token);
        innerT.Wait(token);
        token.ThrowIfCancellationRequested();

        Console.WriteLine("Completed.");
    }, token);

    // Second chunk of work which, in the real world, would be identical to the
    // first chunk of work.
    t.ContinueWith((lastTask) =>
    {
        Console.WriteLine("Continuation started");
    });

    // Give the user 3s to cancel the first batch of work
    Console.ReadKey();
    if (t.Status == TaskStatus.Running)
    {
        Console.WriteLine("Cancel requested");
        cts.Cancel();
        Console.ReadKey();
    }
}
This works, but the "innerT" Task feels extremely kludgy to me. It also has the drawback of forcing me to refactor all parts of my code that queue up work in this manner, since it necessitates wrapping every long-running call in a new Task.
Attempt #2: TaskCompletionSource tinkering
static void Main(string[] args)
{
    var tcs = new TaskCompletionSource<object>();

    // Wire up the token's cancellation to trigger the TaskCompletionSource's cancellation
    CancellationTokenSource cts = new CancellationTokenSource();
    var token = cts.Token;
    token.Register(() =>
    {
        Console.WriteLine("Token cancelled");
        tcs.SetCanceled();
    });

    var innerT = Task.Factory.StartNew(() =>
    {
        Console.WriteLine("Doing work");
        Thread.Sleep(3000);
        Console.WriteLine("Completed.");

        // When the work has completed, set the TaskCompletionSource so that the
        // continuation will fire.
        tcs.SetResult(null);
    });

    // Second chunk of work which, in the real world, would be identical to the
    // first chunk of work.
    // Note that we continue when the TaskCompletionSource's task finishes,
    // not the above innerT task.
    tcs.Task.ContinueWith((lastTask) =>
    {
        Console.WriteLine("Continuation started");
    });

    // Give the user 3s to cancel the first batch of work
    Console.ReadKey();
    if (innerT.Status == TaskStatus.Running)
    {
        Console.WriteLine("Cancel requested");
        cts.Cancel();
        Console.ReadKey();
    }
}
Again this works, but now I have two problems:
a) It feels like I'm abusing TaskCompletionSource by never using its result, and just setting null when I've finished my work.
b) In order to properly wire up continuations I need to keep a handle on the previous unit of work's unique TaskCompletionSource, and not the task that was created for it. This is technically possible, but again feels clunky and strange.
Where to go from here?
To reiterate, my question is: is either of these methods the "correct" way to tackle this problem, or is there a more correct/elegant solution that will allow me to prematurely abort a long-running task and immediately start a continuation? My preference is for a low-impact solution, but I'd be willing to undertake some huge refactoring if it's the right thing to do.
Alternatively, is the TPL even the correct tool for the job, or am I missing a better task-queuing mechanism? My target framework is .NET 4.0.
The real issue here is that the long-running call in DoWork is not cancellation-aware. If I understand correctly, what you're doing here is not really cancelling the long-running work, but merely allowing the continuation to execute and, when the work completes on the cancelled task, ignoring the result. For example, if you used the inner task pattern to call CrunchNumbers(), which takes several minutes, cancelling the outer task will allow continuation to occur, but CrunchNumbers() will continue to execute in the background until completion.
I don't think there's any real way around this other than making your long-running calls support cancellation. Often this isn't possible (they may be blocking API calls, with no API support for cancellation.) When this is the case, it's really a flaw in the API; you may check to see if there are alternate API calls that could be used to perform the operation in a way that can be cancelled. One hack approach to this is to capture a reference to the underlying Thread being used by the Task when the Task is started and then call Thread.Interrupt. This will wake up the thread from various sleep states and allow it to terminate, but in a potentially nasty way. Worst case, you can even call Thread.Abort, but that's even more problematic and not recommended.
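A rough, illustrative sketch of that Thread.Interrupt hack (SomeBlockingCall is hypothetical, and real code would need to synchronize access to the captured thread reference):

Thread workerThread = null;
var blockingTask = Task.Factory.StartNew(() =>
{
    workerThread = Thread.CurrentThread; // capture the thread running the blocking work
    try
    {
        SomeBlockingCall(); // long-running, non-cancelable blocking call
    }
    catch (ThreadInterruptedException)
    {
        // The blocking wait was interrupted; clean up and bail out.
    }
});

// Later, when the work is abandoned:
if (workerThread != null)
    workerThread.Interrupt(); // only wakes the thread if it is in a wait/sleep/join state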
Here is a stab at a delegate-based wrapper. It's untested, but I think it will do the trick; feel free to edit the answer if you make it work and have fixes/improvements.
public sealed class AbandonableTask
{
    private readonly CancellationToken _token;
    private readonly Action _beginWork;
    private readonly Action _blockingWork;
    private readonly Action<Task> _afterComplete;

    private AbandonableTask(CancellationToken token,
        Action beginWork,
        Action blockingWork,
        Action<Task> afterComplete)
    {
        if (blockingWork == null) throw new ArgumentNullException("blockingWork");

        _token = token;
        _beginWork = beginWork;
        _blockingWork = blockingWork;
        _afterComplete = afterComplete;
    }

    private void RunTask()
    {
        if (_beginWork != null)
            _beginWork();

        var innerTask = new Task(_blockingWork,
            _token,
            TaskCreationOptions.LongRunning);
        innerTask.Start();

        innerTask.Wait(_token);
        if (innerTask.IsCompleted && _afterComplete != null)
        {
            _afterComplete(innerTask);
        }
    }

    public static Task Start(CancellationToken token,
        Action blockingWork,
        Action beginWork = null,
        Action<Task> afterComplete = null)
    {
        if (blockingWork == null) throw new ArgumentNullException("blockingWork");

        var worker = new AbandonableTask(token, beginWork, blockingWork, afterComplete);
        var outerTask = new Task(worker.RunTask, token);
        outerTask.Start();
        return outerTask;
    }
}
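A hypothetical usage sketch of this wrapper (not part of the original answer), showing how the outer task can be abandoned while the blocking work keeps running:

var cts = new CancellationTokenSource();

Task outer = AbandonableTask.Start(
    cts.Token,
    blockingWork: () => Thread.Sleep(10000),                // stand-in for the long blocking call
    beginWork: () => Console.WriteLine("Work started"),
    afterComplete: t => Console.WriteLine("Blocking work finished"));

// Abandon the work: the outer task observes the token and becomes cancelled,
// while the inner blocking work continues to completion in the background.
cts.Cancel();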