Polly with cancellation token is not working for synchronous thread - c#

I'm new to Polly, so there may be a completely different approach from the one I'm trying, and that would be perfectly OK.
My goals are these:
I am using the timeout strategy TimeoutStrategy.Optimistic.
I want to time out the call after a given clientTimeOut.
I am using a cancellation token to time out the thread.
I am using a cancellation token to time out the call, but somehow the code below is not working. I am not creating a new thread when adding the cancellation token; the same thread should honour the timeout.
Does the cancellation token always need to be added in an asynchronous call, i.e. ExecuteAsync(), or used on another thread?
Can someone help me with what is wrong with the code below and why the timeout is not working?
Note: TimeoutStrategy.Pessimistic would work for me, but I can't use it because of another constraint: I need to read the HTTP headers from the request context, and this information is lost when using an async call. Hence I need to use the sync call.
public TResult ExecuteWrapper<TResult>(Func<CancellationToken, TResult> func)
{
    int attempts = 0;
    int retryCounter = 3;
    int waitTime = 1000;
    int clientTimeOut = 10000;

    var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(clientTimeOut));
    var cancellationToken = cts.Token;

    RetryPolicy retryPolicy = Policy.Handle<Exception>().WaitAndRetry(
        retryCounter,
        sleepDurationProvider: attempt => TimeSpan.FromMilliseconds(waitTime),
        onRetry: (exception, timeSpan) =>
        {
            logRetryException(timeSpan, attempts++, exception);
        });

    PolicyWrap retryPolicyWrap = retryPolicy.Wrap(
        Policy.Timeout(TimeSpan.FromMilliseconds(clientTimeOut), TimeoutStrategy.Optimistic));

    return retryPolicyWrap.Execute(func, cancellationToken);
}

public void BaseFunction()
{
    var result = ExecuteWrapper(ct => DummyFunctionToSleepTheThread());
}

public string DummyFunctionToSleepTheThread()
{
    // Reading an HTTP header from the request context,
    // which would be lost if we used an async call.
    // Hence we can't use Task.Delay(milliseconds) here.
    int milliseconds = 15000;
    System.Threading.Thread.Sleep(milliseconds);
    return "SUCCESS";
}
I tried adding the cancellation token, but it did not honour the timeout.

You can't use the optimistic timeout strategy to cancel a Thread. This strategy relies on the co-operative behaviour of the CancellationToken, and threads do not natively support CancellationToken the way Tasks do.
So, you have two options: fall back to the pessimistic timeout strategy, or use a Task instead of a Thread.
As far as I can tell from your code, you haven't designed your solution to take advantage of Tasks and async-await. So, try the pessimistic strategy.
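That said, if you can make the blocking work itself observe the token, the optimistic strategy can also work with a purely synchronous Execute. A minimal sketch, assuming the sync Execute overload that passes a CancellationToken into the delegate (here the Thread.Sleep is replaced with a wait on the token's handle, so the timeout can interrupt it):
var timeoutPolicy = Policy.Timeout(TimeSpan.FromMilliseconds(10000), TimeoutStrategy.Optimistic);

string result = timeoutPolicy.Execute(ct =>
{
    // Wait on the token's handle instead of Thread.Sleep, so the timeout can wake us up.
    ct.WaitHandle.WaitOne(15000);
    // Turn the co-operative cancellation into an exception that the policy understands.
    ct.ThrowIfCancellationRequested();
    return "SUCCESS";
}, CancellationToken.None);
The policy should then surface the timeout as a TimeoutRejectedException instead of silently letting the sleep run to completion.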
UPDATE #1
Here is a simple app which demonstrates the difference.
(Please note that the below overload of Timeout takes seconds, not milliseconds.)
public static void Main()
{
    var optimisticTimeout = Policy.Timeout(1, TimeoutStrategy.Optimistic);
    optimisticTimeout.Execute(ToBeDecoratedMethod);

    var pessimisticTimeout = Policy.Timeout(1, TimeoutStrategy.Pessimistic);
    try
    {
        pessimisticTimeout.Execute(() => ToBeDecoratedMethod());
    }
    catch (TimeoutRejectedException)
    {
        Console.WriteLine("Timed out");
    }
}

public static void ToBeDecoratedMethod()
{
    Console.WriteLine("Before Sleep");
    Thread.Sleep(1_500);
    Console.WriteLine("After Sleep");
}
The output will be:
Before Sleep
After Sleep
Before Sleep
Timed out
The working code can be found on this dotnet fiddle link.

Related

How to limit number of async IO tasks to database?

I have a list of ids and I want to get the data for each of those ids from the database in parallel. My ExecuteAsync method below is called at a very high throughput, and for each request we have around 500 ids for which I need to extract data.
So I have the below code, where I loop over the list of ids and make async calls for each of them in parallel, and it works fine.
private async Task<List<T>> ExecuteAsync<T>(IList<int> ids, IPollyPolicy policy,
    Func<CancellationToken, int, Task<T>> mapper) where T : class
{
    var tasks = new List<Task<T>>(ids.Count);
    // invoking multiple ids in parallel to get data for each id from the database
    for (int i = 0; i < ids.Count; i++)
    {
        tasks.Add(Execute(policy, ct => mapper(ct, ids[i])));
    }

    // wait for all id responses to come back
    var responses = await Task.WhenAll(tasks);

    var excludeNull = new List<T>(ids.Count);
    for (int i = 0; i < responses.Length; i++)
    {
        var response = responses[i];
        if (response != null)
        {
            excludeNull.Add(response);
        }
    }
    return excludeNull;
}

private async Task<T> Execute<T>(IPollyPolicy policy,
    Func<CancellationToken, Task<T>> requestExecuter) where T : class
{
    var response = await policy.Policy.ExecuteAndCaptureAsync(
        ct => requestExecuter(ct), CancellationToken.None);

    if (response.Outcome == OutcomeType.Failure)
    {
        if (response.FinalException != null)
        {
            // log error
            throw response.FinalException;
        }
    }
    return response?.Result;
}
Question:
Now, as you can see, I am looping over all ids and making a bunch of async calls to the database in parallel for each id, which can put a lot of load on the database (depending on how many requests are coming in). So I want to limit the number of async calls we make to the database. I modified ExecuteAsync to use a semaphore as shown below, but it doesn't look like it does what I want it to do:
private async Task<List<T>> ExecuteAsync<T>(IList<int> ids, IPollyPolicy policy,
    Func<CancellationToken, int, Task<T>> mapper) where T : class
{
    var throttler = new SemaphoreSlim(250);
    var tasks = new List<Task<T>>(ids.Count);
    // invoking multiple ids in parallel to get data for each id from the database
    for (int i = 0; i < ids.Count; i++)
    {
        await throttler.WaitAsync().ConfigureAwait(false);
        try
        {
            tasks.Add(Execute(policy, ct => mapper(ct, ids[i])));
        }
        finally
        {
            throttler.Release();
        }
    }

    // wait for all id responses to come back
    var responses = await Task.WhenAll(tasks);

    // same excludeNull code check here
    return excludeNull;
}
Does a semaphore work on threads or tasks? Reading about it here, it looks like Semaphore is for threads and SemaphoreSlim is for tasks.
Is this correct? If so, what's the best way to fix this and limit the number of async IO tasks we make to the database here?
Task is an abstraction over threads and doesn't necessarily create a new thread. The semaphore limits the number of threads that can access that for loop. Execute returns a Task, which isn't a thread. If there's only 1 request, there will be only 1 thread inside that for loop, even if it is asking for 500 ids. That 1 thread sends off all the async IO tasks itself.
Sort of. I would not say that tasks are related to threads at all. There are actually two kinds of tasks: a delegate task (which is kind of an abstraction of a thread), and a promise task (which has nothing to do with threads).
Regarding the SemaphoreSlim, it does limit the concurrency of a block of code (not threads).
I recently started playing with C#, so it looks like my understanding of threads and tasks is not right.
I recommend reading my async intro and best practices. Follow up with There Is No Thread if you're interested more about how threads aren't really involved.
I modified ExecuteAsync to use Semaphore as shown below but it doesn't look like it does what I want it to do
The current code is only throttling the adding of the tasks to the list, which is only done one at a time anyway. What you want to do is throttle the execution itself:
private async Task<List<T>> ExecuteAsync<T>(IList<int> ids, IPollyPolicy policy, Func<CancellationToken, int, Task<T>> mapper) where T : class
{
    var throttler = new SemaphoreSlim(250);
    var tasks = new List<Task<T>>(ids.Count);

    // invoking multiple ids in parallel to get data for each id from the database
    for (int i = 0; i < ids.Count; i++)
        tasks.Add(ThrottledExecute(ids[i]));

    // wait for all id responses to come back
    var responses = await Task.WhenAll(tasks);

    // same excludeNull code check here
    return excludeNull;

    async Task<T> ThrottledExecute(int id)
    {
        await throttler.WaitAsync().ConfigureAwait(false);
        try {
            return await Execute(policy, ct => mapper(ct, id)).ConfigureAwait(false);
        } finally {
            throttler.Release();
        }
    }
}
You probably have in mind the Semaphore class, which is indeed a thread-centric throttler with no asynchronous capabilities.
Limits the number of threads that can access a resource or pool of resources concurrently.
The SemaphoreSlim class is a lightweight alternative to Semaphore, which includes the asynchronous method WaitAsync, that makes all the difference in the world. The WaitAsync doesn't block a thread, it blocks an asynchronous workflow. Asynchronous workflows are cheap (usually less than 1000 bytes each). You can have millions of them "running" concurrently at any given moment. This is not the case with threads, because of the 1 MB of memory that each thread reserves for its stack.
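To make the contrast concrete, here is a minimal, hypothetical sketch of SemaphoreSlim gating an asynchronous block of code (the names are invented for the example):
var throttler = new SemaphoreSlim(3); // at most 3 concurrent workflows

async Task DoThrottledWorkAsync(int id)
{
    await throttler.WaitAsync();   // suspends the asynchronous workflow, does not block a thread
    try
    {
        await Task.Delay(1000);    // stand-in for the real async IO call
        Console.WriteLine($"Finished {id}");
    }
    finally
    {
        throttler.Release();       // always release, even if the work throws
    }
}
A thousand concurrent calls to DoThrottledWorkAsync would still only run three at a time, without holding a thousand threads hostage.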
As for the ExecuteAsync method, here is how you could refactor it by using the LINQ methods Select, Where, ToArray and ToList:
Update: The Polly library supports capturing and continuing on the current synchronization context, so I added a bool executeOnCurrentContext argument to the API. I also renamed the asynchronous Execute method to ExecuteAsync, to be in line with the guidelines.
private async Task<List<T>> ExecuteAsync<T>(IList<int> ids, IPollyPolicy policy,
    Func<CancellationToken, int, Task<T>> mapper,
    int concurrencyLevel = 1, bool executeOnCurrentContext = false) where T : class
{
    var throttler = new SemaphoreSlim(concurrencyLevel);

    Task<T>[] tasks = ids.Select(async id =>
    {
        await throttler.WaitAsync().ConfigureAwait(executeOnCurrentContext);
        try
        {
            return await ExecuteAsync(policy, ct => mapper(ct, id),
                executeOnCurrentContext).ConfigureAwait(false);
        }
        finally
        {
            throttler.Release();
        }
    }).ToArray();

    T[] results = await Task.WhenAll(tasks).ConfigureAwait(false);

    return results.Where(r => r != null).ToList();
}

private async Task<T> ExecuteAsync<T>(IPollyPolicy policy,
    Func<CancellationToken, Task<T>> function,
    bool executeOnCurrentContext = false) where T : class
{
    var response = await policy.Policy.ExecuteAndCaptureAsync(
        ct => executeOnCurrentContext ? function(ct) : Task.Run(() => function(ct)),
        CancellationToken.None, continueOnCapturedContext: executeOnCurrentContext)
        .ConfigureAwait(executeOnCurrentContext);

    if (response.Outcome == OutcomeType.Failure)
    {
        if (response.FinalException != null)
        {
            ExceptionDispatchInfo.Throw(response.FinalException);
        }
    }
    return response?.Result;
}
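A hypothetical call site (GetFromDbAsync and MyEntity are placeholders for whatever your data-access layer provides) might look like this:
// Fetch up to 10 ids concurrently; nulls are filtered out of the returned list.
List<MyEntity> entities = await ExecuteAsync(
    ids,
    policy,
    (ct, id) => GetFromDbAsync(id, ct),
    concurrencyLevel: 10);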
You are throttling the rate at which you add tasks to the list. You are not throttling the rate at which tasks are executed. To do that, you'd probably have to implement your semaphore calls inside the Execute method itself.
If you can't modify Execute, another way to do it is to poll for completed tasks, sort of like this:
for (int i = 0; i < ids.Count; i++)
{
    // Re-evaluate the pending count on every pass, otherwise the loop never exits.
    while (tasks.Count(t => !t.IsCompleted) >= 500)
        await Task.Yield();

    tasks.Add(Execute(policy, ct => mapper(ct, ids[i])));
}

await Task.WhenAll(tasks);
Actually, the TPL is capable of controlling task execution and limiting concurrency. You can test how many parallel tasks are suitable for your use case. There is no need to think about threads; the TPL will manage everything for you.
To use limited concurrency, see this answer, credits to @panagiotis-kanavos:
.Net TPL: Limited Concurrency Level Task scheduler with task priority?
The example code is (it even uses different priorities, which you can strip out):
QueuedTaskScheduler qts = new QueuedTaskScheduler(TaskScheduler.Default, 4);
TaskScheduler pri0 = qts.ActivateNewQueue(priority: 0);
TaskScheduler pri1 = qts.ActivateNewQueue(priority: 1);

Task.Factory.StartNew(() => { },
    CancellationToken.None,
    TaskCreationOptions.None,
    pri0);
Just throw all your tasks at the queue, and with Task.WhenAll you can wait until everything is done.
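A minimal sketch of that idea, assuming the QueuedTaskScheduler from the ParallelExtensionsExtras samples is referenced and LoadFromDb is a placeholder for your (synchronous) data-access call:
var scheduler = new QueuedTaskScheduler(TaskScheduler.Default, 4); // at most 4 tasks run at once
var factory = new TaskFactory(scheduler.ActivateNewQueue(priority: 0));

// queue everything, then wait for the lot
var tasks = ids.Select(id => factory.StartNew(() => LoadFromDb(id))).ToArray();
await Task.WhenAll(tasks);
Note that a scheduler throttles the delegates it runs; for truly asynchronous IO calls (ones that return a Task and release the thread while waiting), the SemaphoreSlim approach above is usually the better fit.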

Propagation time of cancellation request to all tasks (TPL)

With the TPL we have CancellationTokenSource, which provides tokens useful for co-operatively cancelling a current task (or preventing its start).
Question:
How long does it take to propagate a cancellation request to all hooked-up running tasks?
Is there any place where code could check that "from now on" every interested Task will find that cancellation has been requested?
Why is this needed?
I would like to have a stable unit test to show that cancellation works in our code.
Problem details:
We have an "Executor" which produces tasks; these tasks wrap some long-running actions. The main job of the executor is to limit how many concurrent actions are started. All of these tasks can be cancelled individually, and the actions also respect the CancellationToken internally.
I would like to provide a unit test which shows that when cancellation occurs while a task is waiting for a slot to start its action, that task will cancel itself (eventually) and will not start executing the given action.
So the idea was to prepare a LimitingExecutor with a single slot, then start a blocking action which requests cancellation when unblocked, then "enqueue" a test action which should fail when executed. With that setup, the test would unblock the first action and then assert that the task of the test action throws TaskCanceledException when awaited.
[Test]
public void RequestPropagationTest()
{
    using (var setupEvent = new ManualResetEvent(initialState: false))
    using (var cancellation = new CancellationTokenSource())
    using (var executor = new LimitingExecutor())
    {
        // System-state setup action:
        var cancellingTask = executor.Do(() =>
        {
            setupEvent.WaitOne();
            cancellation.Cancel();
        }, CancellationToken.None);

        // Main work action:
        var actionTask = executor.Do(() =>
        {
            throw new InvalidOperationException(
                "This action should be cancelled!");
        }, cancellation.Token);

        // Let's wait until this `Task` starts, so it will get the opportunity
        // to cancel itself, and the expected later exception will not come
        // from just starting that action by `Task.Run` with a token:
        while (actionTask.Status < TaskStatus.Running)
            Thread.Sleep(millisecondsTimeout: 1);

        // Let's unblock the slot in the executor for the 'main work action'
        // by finalizing the 'system-state setup action', which will
        // finally request "global" cancellation:
        setupEvent.Set();

        Assert.DoesNotThrowAsync(
            async () => await cancellingTask);
        Assert.ThrowsAsync<TaskCanceledException>(
            async () => await actionTask);
    }
}

public class LimitingExecutor : IDisposable
{
    private const int UpperLimit = 1;
    private readonly Semaphore _semaphore
        = new Semaphore(UpperLimit, UpperLimit);

    public Task Do(Action work, CancellationToken token)
        => Task.Run(() =>
        {
            _semaphore.WaitOne();
            try
            {
                token.ThrowIfCancellationRequested();
                work();
            }
            finally
            {
                _semaphore.Release();
            }
        }, token);

    public void Dispose()
        => _semaphore.Dispose();
}
An executable demo (via NUnit) of this problem can be found on GitHub.
However, this test sometimes fails (no expected TaskCanceledException), maybe 1 in 10 runs on my machine. A kind of "solution" to this problem is to insert a Thread.Sleep right after the cancellation request. Even with a sleep of 3 seconds this test sometimes fails (found after 20-ish runs), and when it passes, that long wait is usually unnecessary (I guess). For reference, please see the diff.
The "other problem" was to ensure that cancellation comes from the "waiting time" and not from Task.Run, because the ThreadPool could be busy (other executing tests) and could postpone the start of the second task until after the cancellation request, which would render this test "falsely green". The "easy fix by hack" was to actively wait until the second task starts, i.e. its Status becomes TaskStatus.Running. Please check the version under this branch and see that without this hack the test will sometimes be "green", so the exemplified bug could pass through it.
Your test method assumes that cancellingTask always takes the slot (enters the semaphore) in LimitingExecutor before actionTask does. Unfortunately, this assumption is wrong; LimitingExecutor does not guarantee this, and it's just a matter of luck which of the two tasks takes the slot (on my computer the wrong order happens in something like 5% of runs).
To resolve this problem, you need another ManualResetEvent, that will allow main thread to wait until cancellingTask actually occupies the slot:
using (var slotTaken = new ManualResetEvent(initialState: false))
using (var setupEvent = new ManualResetEvent(initialState: false))
using (var cancellation = new CancellationTokenSource())
using (var executor = new LimitingExecutor())
{
    // System-state setup action:
    var cancellingTask = executor.Do(() =>
    {
        // This is called from inside the semaphore, so it's
        // certain that this task occupies the only available slot.
        slotTaken.Set();
        setupEvent.WaitOne();
        cancellation.Cancel();
    }, CancellationToken.None);

    // Wait until cancellingTask takes the slot
    slotTaken.WaitOne();

    // Now it's guaranteed that cancellingTask takes the slot, not the actionTask
    // ...
}
The .NET Framework doesn't provide an API to detect a task's transition to the Running state, so if you don't like polling the Status property + Thread.Sleep() in a loop, you'll need to modify LimitingExecutor.Do() to provide this information, probably using another ManualResetEvent, e.g.:
public Task Do(Action work, CancellationToken token, ManualResetEvent taskRunEvent = null)
    => Task.Run(() =>
    {
        // Optional notification to the caller that the task is now running
        taskRunEvent?.Set();
        // ...
    }, token);
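The test could then block on that event instead of spinning on Status. A sketch based on the test above (safe here because cancellation is only requested later, after setupEvent.Set(), so the delegate is guaranteed to start):
using (var runEvent = new ManualResetEvent(initialState: false))
{
    var actionTask = executor.Do(() =>
    {
        throw new InvalidOperationException("This action should be cancelled!");
    }, cancellation.Token, runEvent);

    // Replaces the `while (actionTask.Status < TaskStatus.Running) Thread.Sleep(1);` loop.
    runEvent.WaitOne();
    // ...
}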

How to cancel a RestSharp synchronous execute() call?

I have this line, which does a blocking (synchronous) web service call, and works fine:
var response = client.Execute<DataRequestResponse>(request);
(The var represents IRestResponse<DataRequestResponse>)
But, now I want to be able to cancel it (from another thread). (I found this similar question, but my code must stay sync - the code changes have to stay localized in this function.)
I found CancellationTokenSource, and an ExecuteTaskAsync() which takes a CancellationToken. (See https://stackoverflow.com/a/21779724/841830) That sounds like it could do the job. I got as far as this code:
var cancellationTokenSource = new CancellationTokenSource();
var task = client.ExecuteTaskAsync(request, cancellationTokenSource.Token);
task.Wait();
var response = task.Result;
The last line refuses to compile, telling me it cannot do an implicit cast. So I tried an explicit cast:
IRestResponse<DataRequestResponse> response = task.Result as IRestResponse<DataRequestResponse>;
That compiles and runs, but then crashes, throwing a NullReferenceException ("Object reference not set to an instance of an object").
(By the way, once I have this working, then the cancellationTokenSource.Token will of course be passed in from the master thread, and I will also be adding some code to detect when an abort happened: I will throw an exception.)
My back-up plan is just to abort the whole thread this is running in. Crude, but that is actually good enough at the moment. But I'd rather be doing it "properly" if I can.
MORE INFORMATION:
The sync Execute call is here: https://github.com/restsharp/RestSharp/blob/master/RestSharp/RestClient.Sync.cs#L55 where it ends up calling Http.AsGet() or Http.AsPost(), which then ends up here: https://github.com/restsharp/RestSharp/blob/master/RestSharp/Http.Sync.cs#L194
In other words, under the covers RestSharp is using HttpWebRequest.GetResponse(). That has an Abort() function, but since GetResponse() is a sync function (i.e. it does not return until the request is done), that is not much use to me!
The async counterpart of your call
var response = client.Execute<DataRequestResponse>(request);
is the RestSharp async method public virtual Task<IRestResponse<T>> ExecuteTaskAsync<T>(IRestRequest request, CancellationToken token).
It takes a cancellation token and returns a task with the correct return type. To use it and wait for its (cancellable) completion, you could change your sample code to:
var cancellationTokenSource = new CancellationTokenSource();
// ...
var task = client.ExecuteTaskAsync<DataRequestResponse>(
    request,
    cancellationTokenSource.Token);

// this will wait for completion and throw on cancellation.
var response = task.Result;
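To actually cancel from another thread, you would call Cancel() on the shared CancellationTokenSource; because .Result is used here, the cancellation should surface as an AggregateException wrapping the OperationCanceledException (a sketch, kept synchronous from the caller's point of view):
var cancellationTokenSource = new CancellationTokenSource();
// elsewhere, on another thread: cancellationTokenSource.Cancel();

var task = client.ExecuteTaskAsync<DataRequestResponse>(request, cancellationTokenSource.Token);
try
{
    var response = task.Result;   // blocks this thread until completion or cancellation
    // use response ...
}
catch (AggregateException ex) when (ex.InnerException is OperationCanceledException)
{
    // the request was cancelled; rethrow or handle as appropriate
}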
Given this sync call:
var response = client.Execute<MyData>(request);
process(response);
Change it to:
IRestResponse<MyData> response = null;
EventWaitHandle handle = new AutoResetEvent(false);

client.ExecuteAsync<MyData>(request, r =>
{
    response = r;
    handle.Set();
});

handle.WaitOne();
process(response);
That is equivalent to the sync version, no benefit is gained. Here is one way to then add in the abort functionality:
volatile bool cancelAllRequests = false;

...

IRestResponse<MyData> response = null;
EventWaitHandle handle = new AutoResetEvent(false);

RestRequestAsyncHandle asyncRequest = client.ExecuteAsync<MyData>(request, r =>
{
    response = r;
    handle.Set();
});

while (true)
{
    if (handle.WaitOne(250))
        break; // returns true if the async operation finished
    if (cancelAllRequests)
    {
        asyncRequest.WebRequest.Abort();
        return;
    }
}

process(response);
(Sorry, couldn't wait for the bounty to bear fruit, so had to work this out for myself, by trial and error this afternoon...)

cancelling a function

I have a function that makes a few HTTP requests and runs in a task.
That function can be interrupted in the middle; since I can't abort the task, I added some boolean conditions to the function.
Example:
public int? foo(ref bool cancel)
{
    if (cancel)
    {
        return null;
    }

    // do some work...
    if (cancel)
    {
        return null;
    }

    // http web request
    if (cancel)
    {
        return null;
    }

    return 0; // the computed result
}
This worked pretty well, although the code is quite ugly.
Another problem is that once I have already executed the web request and it takes time to get the response, cancelling the function takes a long time (until I get a response).
Is there a better way for me to check this? Or maybe I should use threads instead of tasks?
Edit
I added a cancellation token: I declared a CancellationTokenSource and passed its token to the task:
CancellationTokenSource cancelToken = new CancellationTokenSource();
Task t = new Task(() => { foo(); }, cancelToken.Token);
When I call cancelToken.Cancel(),
I still wait for the response, and the task isn't cancelling.
Tasks support cancellation - see here.
Here's a quick snippet.
class Program
{
    static void Main(string[] args)
    {
        var cts = new CancellationTokenSource();

        var t = Task.Factory.StartNew(() =>
        {
            while (true)
                Console.WriteLine("{0}: Processing", DateTime.Now);
        }, cts.Token);

        cts.CancelAfter(1000);
        t.Wait(cts.Token);
    }
}
Remember to wait for the task using the provided cancellation token. You ought to receive an OperationCanceledException.
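Note that in the snippet above the loop body never observes the token, so the underlying work keeps spinning even after the Wait is cancelled. A sketch of the fully co-operative version, where the work itself stops and the caller handles the cancellation (the exception arrives wrapped in an AggregateException when using Wait()):
var cts = new CancellationTokenSource();

var t = Task.Factory.StartNew(() =>
{
    while (!cts.Token.IsCancellationRequested)
        Console.WriteLine("{0}: Processing", DateTime.Now);

    // End the task in the Canceled state rather than RanToCompletion.
    cts.Token.ThrowIfCancellationRequested();
}, cts.Token);

cts.CancelAfter(1000);
try
{
    t.Wait();
}
catch (AggregateException ex) when (ex.InnerException is OperationCanceledException)
{
    Console.WriteLine("Task was cancelled");
}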

Aborting a long running task in TPL

Our application uses the TPL to serialize (potentially) long-running units of work. The creation of work (tasks) is user-driven and may be cancelled at any time. In order to have a responsive user interface, if the current piece of work is no longer required, we would like to abandon what we were doing and immediately start a different task.
Tasks are queued up something like this:
private Task workQueue;

private void DoWorkAsync(
    Action<WorkCompletedEventArgs> callback, CancellationToken token)
{
    if (workQueue == null)
    {
        workQueue = Task.Factory.StartNew(
            () => DoWork(callback, token), token);
    }
    else
    {
        workQueue = workQueue.ContinueWith(
            t => DoWork(callback, token), token);
    }
}
The DoWork method contains a long-running call, so it is not as simple as constantly checking the status of token.IsCancellationRequested and bailing if/when a cancel is detected. The long-running work will block the task continuations until it finishes, even if the task is cancelled.
I have come up with two sample methods to work around this issue, but am not convinced that either is proper. I created simple console applications to demonstrate how they work.
The important point to note is that the continuation fires before the original task completes.
Attempt #1: An inner task
static void Main(string[] args)
{
    CancellationTokenSource cts = new CancellationTokenSource();
    var token = cts.Token;
    token.Register(() => Console.WriteLine("Token cancelled"));

    // Initial work
    var t = Task.Factory.StartNew(() =>
    {
        Console.WriteLine("Doing work");

        // Wrap the long running work in a task, and then wait for it to complete
        // or the token to be cancelled.
        var innerT = Task.Factory.StartNew(() => Thread.Sleep(3000), token);
        innerT.Wait(token);
        token.ThrowIfCancellationRequested();

        Console.WriteLine("Completed.");
    }
    , token);

    // Second chunk of work which, in the real world, would be identical to the
    // first chunk of work.
    t.ContinueWith((lastTask) =>
    {
        Console.WriteLine("Continuation started");
    });

    // Give the user 3s to cancel the first batch of work
    Console.ReadKey();
    if (t.Status == TaskStatus.Running)
    {
        Console.WriteLine("Cancel requested");
        cts.Cancel();
        Console.ReadKey();
    }
}
This works, but the "innerT" Task feels extremely kludgey to me. It also has the drawback of forcing me to refactor all parts of my code that queue up work in this manner, by necessitating the wrapping up of all long running calls in a new Task.
Attempt #2: TaskCompletionSource tinkering
static void Main(string[] args)
{
    var tcs = new TaskCompletionSource<object>();

    // Wire up the token's cancellation to trigger the TaskCompletionSource's cancellation
    CancellationTokenSource cts = new CancellationTokenSource();
    var token = cts.Token;
    token.Register(() =>
    {
        Console.WriteLine("Token cancelled");
        tcs.SetCanceled();
    });

    var innerT = Task.Factory.StartNew(() =>
    {
        Console.WriteLine("Doing work");
        Thread.Sleep(3000);
        Console.WriteLine("Completed.");

        // When the work has completed, set the TaskCompletionSource so that the
        // continuation will fire.
        tcs.SetResult(null);
    });

    // Second chunk of work which, in the real world, would be identical to the
    // first chunk of work.
    // Note that we continue when the TaskCompletionSource's task finishes,
    // not the above innerT task.
    tcs.Task.ContinueWith((lastTask) =>
    {
        Console.WriteLine("Continuation started");
    });

    // Give the user 3s to cancel the first batch of work
    Console.ReadKey();
    if (innerT.Status == TaskStatus.Running)
    {
        Console.WriteLine("Cancel requested");
        cts.Cancel();
        Console.ReadKey();
    }
}
Again this works, but now I have two problems:
a) It feels like I'm abusing TaskCompletionSource by never using its result, and just setting null when I've finished my work.
b) In order to properly wire up continuations, I need to keep a handle on the previous unit of work's unique TaskCompletionSource, and not the task that was created for it. This is technically possible, but again feels clunky and strange.
Where to go from here?
To reiterate, my question is: is either of these methods the "correct" way to tackle this problem, or is there a more correct/elegant solution that will allow me to prematurely abort a long-running task and immediately start a continuation? My preference is for a low-impact solution, but I'd be willing to undertake some huge refactoring if it's the right thing to do.
Alternatively, is the TPL even the correct tool for the job, or am I missing a better task-queuing mechanism? My target framework is .NET 4.0.
The real issue here is that the long-running call in DoWork is not cancellation-aware. If I understand correctly, what you're doing here is not really cancelling the long-running work, but merely allowing the continuation to execute and, when the work completes on the cancelled task, ignoring the result. For example, if you used the inner task pattern to call CrunchNumbers(), which takes several minutes, cancelling the outer task will allow continuation to occur, but CrunchNumbers() will continue to execute in the background until completion.
I don't think there's any real way around this other than making your long-running calls support cancellation. Often this isn't possible (they may be blocking API calls, with no API support for cancellation.) When this is the case, it's really a flaw in the API; you may check to see if there are alternate API calls that could be used to perform the operation in a way that can be cancelled. One hack approach to this is to capture a reference to the underlying Thread being used by the Task when the Task is started and then call Thread.Interrupt. This will wake up the thread from various sleep states and allow it to terminate, but in a potentially nasty way. Worst case, you can even call Thread.Abort, but that's even more problematic and not recommended.
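For completeness, a rough sketch of that hack: capture the thread inside the task body, then interrupt it from the cancelling side (LongBlockingCall is a placeholder for the non-cancellable API call; this is fragile and shown only to illustrate the idea):
Thread workerThread = null;

var t = Task.Factory.StartNew(() =>
{
    workerThread = Thread.CurrentThread;   // capture the thread running the long call
    try
    {
        LongBlockingCall();                // hypothetical blocking call with no cancellation support
    }
    catch (ThreadInterruptedException)
    {
        // the blocking wait/sleep was interrupted; clean up and bail out
    }
});

// Later, from the cancelling side (only works while the thread is in a wait/sleep/join state):
if (workerThread != null)
    workerThread.Interrupt();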
Here is a stab at a delegate-based wrapper. It's untested, but I think it will do the trick; feel free to edit the answer if you make it work and have fixes/improvements.
public sealed class AbandonableTask
{
    private readonly CancellationToken _token;
    private readonly Action _beginWork;
    private readonly Action _blockingWork;
    private readonly Action<Task> _afterComplete;

    private AbandonableTask(CancellationToken token,
        Action beginWork,
        Action blockingWork,
        Action<Task> afterComplete)
    {
        if (blockingWork == null) throw new ArgumentNullException("blockingWork");

        _token = token;
        _beginWork = beginWork;
        _blockingWork = blockingWork;
        _afterComplete = afterComplete;
    }

    private void RunTask()
    {
        if (_beginWork != null)
            _beginWork();

        var innerTask = new Task(_blockingWork,
            _token,
            TaskCreationOptions.LongRunning);
        innerTask.Start();

        innerTask.Wait(_token);
        if (innerTask.IsCompleted && _afterComplete != null)
        {
            _afterComplete(innerTask);
        }
    }

    public static Task Start(CancellationToken token,
        Action blockingWork,
        Action beginWork = null,
        Action<Task> afterComplete = null)
    {
        if (blockingWork == null) throw new ArgumentNullException("blockingWork");

        var worker = new AbandonableTask(token, beginWork, blockingWork, afterComplete);
        var outerTask = new Task(worker.RunTask, token);
        outerTask.Start();
        return outerTask;
    }
}
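A hypothetical call site for the wrapper might look like this (untested, mirroring the caveat above; the blocking work keeps running in the background after cancellation, but the outer task is abandoned immediately):
var cts = new CancellationTokenSource();

Task work = AbandonableTask.Start(
    cts.Token,
    blockingWork: () => Thread.Sleep(5000),               // stand-in for the real long-running call
    beginWork: () => Console.WriteLine("Starting"),
    afterComplete: inner => Console.WriteLine("Finished"));

// Abandon it: the outer task observes the token and transitions to Canceled,
// so any continuation queued on it can start right away.
cts.Cancel();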
