How can I make the completion of TaskCompletionSource.Task happen on a specific TaskScheduler when I call TaskCompletionSource.SetResult?
Currently, I'm using the idea I borrowed from this post:
static public Task<TResult> ContinueOnTaskScheduler<TResult>(
    this Task<TResult> @this, TaskScheduler scheduler)
{
    return @this.ContinueWith(
        antecedent => antecedent,
        CancellationToken.None,
        TaskContinuationOptions.ExecuteSynchronously,
        scheduler).Unwrap();
}
So whenever I would return TaskCompletionSource.Task to the caller, I now return TaskCompletionSource.Task.ContinueOnTaskScheduler(scheduler) instead.
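For context, the call site looks roughly like this (the names below are purely illustrative, not my actual code):

class Example
{
    readonly TaskCompletionSource<int> _tcs = new TaskCompletionSource<int>();

    // assumes this object is constructed on a thread with a SynchronizationContext
    // (e.g. a UI thread)
    readonly TaskScheduler _uiScheduler =
        TaskScheduler.FromCurrentSynchronizationContext();

    // callers get a task whose completion is observed on _uiScheduler
    public Task<int> ResultAsync()
    {
        return _tcs.Task.ContinueOnTaskScheduler(_uiScheduler);
    }

    // some producer completes the TCS later, possibly on another thread
    public void Produce(int value)
    {
        _tcs.SetResult(value);
    }
}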
Is it possible to somehow avoid this extra level of indirection of ContinueWith?
It would be interesting to know your goals behind this. Anyway, if you'd like to avoid the overhead of ContinueWith (which I think is quite low), you'd probably have to come up with your own version of a pattern similar to TaskCompletionSource.
It's not that complex. E.g., something like Promise below can be used in the same way you use TaskCompletionSource, but allows you to provide a custom TaskScheduler for completion (disclaimer: almost untested):
public class Promise
{
    readonly Task _task;
    readonly CancellationTokenSource _cts;
    readonly object _lock = new object();
    Action _completionAction = null;

    // public API

    public Promise()
    {
        _cts = new CancellationTokenSource();
        _task = new Task(InvokeCompletionAction, _cts.Token);
    }

    public Task Task { get { return _task; } }

    public void SetCompleted(TaskScheduler scheduler = null)
    {
        lock (_lock)
            Complete(scheduler);
    }

    public void SetException(Exception ex, TaskScheduler scheduler = null)
    {
        lock (_lock)
        {
            _completionAction = () => { throw ex; };
            Complete(scheduler);
        }
    }

    public void SetException(System.Runtime.ExceptionServices.ExceptionDispatchInfo edi, TaskScheduler scheduler = null)
    {
        lock (_lock)
        {
            _completionAction = () => { edi.Throw(); };
            Complete(scheduler);
        }
    }

    public void SetCancelled(TaskScheduler scheduler = null)
    {
        lock (_lock)
        {
            // don't call _cts.Cancel() outside _completionAction,
            // otherwise the cancellation won't be done on the scheduler
            _completionAction = () =>
            {
                _cts.Cancel();
                _cts.Token.ThrowIfCancellationRequested();
            };
            Complete(scheduler);
        }
    }

    // implementation

    void InvokeCompletionAction()
    {
        if (_completionAction != null)
            _completionAction();
    }

    void Complete(TaskScheduler scheduler)
    {
        if (Task.Status != TaskStatus.Created)
            throw new InvalidOperationException("Invalid task state.");
        _task.RunSynchronously(scheduler ?? TaskScheduler.Current);
    }
}
On a side note, this version has an overload SetException(ExceptionDispatchInfo edi), so you can propagate the active exception's state from inside a catch block:
catch(Exception ex)
{
var edi = ExceptionDispatchInfo.Capture(ex);
promise.SetException(edi);
}
It's easy to create a generic version of this, too.
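A rough sketch of what a generic version might look like (again, untested; SetCancelled and the ExceptionDispatchInfo overload are omitted for brevity):

public class Promise<TResult>
{
    readonly Task<TResult> _task;
    readonly CancellationTokenSource _cts = new CancellationTokenSource();
    readonly object _lock = new object();
    Func<TResult> _completionFunc;

    public Promise()
    {
        _task = new Task<TResult>(InvokeCompletionFunc, _cts.Token);
    }

    public Task<TResult> Task { get { return _task; } }

    public void SetResult(TResult result, TaskScheduler scheduler = null)
    {
        lock (_lock)
        {
            _completionFunc = () => result;
            Complete(scheduler);
        }
    }

    public void SetException(Exception ex, TaskScheduler scheduler = null)
    {
        lock (_lock)
        {
            _completionFunc = () => { throw ex; };
            Complete(scheduler);
        }
    }

    TResult InvokeCompletionFunc()
    {
        return _completionFunc != null ? _completionFunc() : default(TResult);
    }

    void Complete(TaskScheduler scheduler)
    {
        if (_task.Status != TaskStatus.Created)
            throw new InvalidOperationException("Invalid task state.");
        _task.RunSynchronously(scheduler ?? TaskScheduler.Current);
    }
}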
There's a downside to this approach, though. A 3rd party could call promise.Task.Start or promise.Task.RunSynchronously, because the Task is exposed in the TaskStatus.Created state.
You could add a check for that into InvokeCompletionAction, or you could probably hide it using nested tasks / Task.Unwrap (although the latter would bring some overhead back).
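For illustration, such a check might look roughly like this (untested; _completing is a hypothetical flag added to the class):

// inside Promise
bool _completing;

void Complete(TaskScheduler scheduler)
{
    if (Task.Status != TaskStatus.Created)
        throw new InvalidOperationException("Invalid task state.");
    _completing = true; // we are the ones starting the task
    _task.RunSynchronously(scheduler ?? TaskScheduler.Current);
}

void InvokeCompletionAction()
{
    // if a 3rd party started the exposed task directly, fail loudly
    if (!_completing)
        throw new InvalidOperationException(
            "The task must be completed through the Promise API.");
    if (_completionAction != null)
        _completionAction();
}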
I need to execute a kind of LongRunning task after a delay.
Each task can be cancelled. I prefer TPL with a CancellationToken.
Since my task is long-running and has to be placed in a dictionary before it is started, I have to use new Task(). But I've run into different behavior: when the task is created using new Task(), it throws a TaskCanceledException after Cancel(), whereas a task created with Task.Run doesn't throw an exception.
Basically I need to understand the difference and avoid getting the TaskCanceledException.
Here's my code:
internal sealed class Worker : IDisposable
{
private readonly IDictionary<Guid, (Task task, CancellationTokenSource cts)> _tasks =
new Dictionary<Guid, (Task task, CancellationTokenSource cts)>();
public void ExecuteAfter(Action action, TimeSpan waitBeforeExecute, out Guid cancellationId)
{
var cts = new CancellationTokenSource();
var task = new Task(async () =>
{
await Task.Delay(waitBeforeExecute, cts.Token);
action();
}, cts.Token, TaskCreationOptions.LongRunning);
cancellationId = Guid.NewGuid();
_tasks.Add(cancellationId, (task, cts));
task.Start(TaskScheduler.Default);
}
public void ExecuteAfter2(Action action, TimeSpan waitBeforeExecute, out Guid cancellationId)
{
var cts = new CancellationTokenSource();
cancellationId = Guid.NewGuid();
_tasks.Add(cancellationId, (Task.Run(async () =>
{
await Task.Delay(waitBeforeExecute, cts.Token);
action();
}, cts.Token), cts));
}
public void Abort(Guid cancellationId)
{
if (_tasks.TryGetValue(cancellationId, out var value))
{
value.cts.Cancel();
//value.task.Wait();
_tasks.Remove(cancellationId);
Dispose(value.cts);
Dispose(value.task);
}
}
public void Dispose()
{
if (_tasks.Count > 0)
{
foreach (var t in _tasks)
{
Dispose(t.Value.cts);
Dispose(t.Value.task);
}
_tasks.Clear();
}
}
private static void Dispose(IDisposable obj)
{
if (obj == null)
{
return;
}
try
{
obj.Dispose();
}
catch (Exception ex)
{
//Log.Exception(ex);
}
}
}
internal class Program
{
private static void Main(string[] args)
{
Action act = () => Console.WriteLine("......");
Console.WriteLine("Started");
using (var w = new Worker())
{
w.ExecuteAfter(act, TimeSpan.FromMilliseconds(10000), out var id);
//w.ExecuteAfter2(act, TimeSpan.FromMilliseconds(10000), out var id);
Thread.Sleep(3000);
w.Abort(id);
}
Console.WriteLine("Enter to exit");
Console.ReadKey();
}
}
UPD:
This approach also works without throwing an exception:
public void ExecuteAfter3(Action action, TimeSpan waitBeforeExecute, out Guid cancellationId)
{
var cts = new CancellationTokenSource();
cancellationId = Guid.NewGuid();
_tasks.Add(cancellationId, (Task.Factory.StartNew(async () =>
{
await Task.Delay(waitBeforeExecute, cts.Token);
action();
}, cts.Token, TaskCreationOptions.LongRunning, TaskScheduler.Default), cts));
}
The reason for the inconsistent behavior is fundamentally incorrect usage of an async delegate in the first case. The Task constructors don't accept Func<Task>, so when used with a constructor your asynchronous delegate is always interpreted as async void, not async Task. If an exception is raised in an async Task method, it's caught and placed into the Task object; that isn't true for an async void method, where the exception just bubbles up out of the method to the synchronization context and falls into the category of unhandled exceptions (you can read the details in this Stephen Cleary article). So here is what happens when the constructor is used: a task which is supposed to initiate the asynchronous flow is created and started. Once it reaches the point where Task.Delay(...) returns a promise, that task completes, and it has no further relationship to anything that happens in the Task.Delay continuation (you can easily check this in the debugger by setting a breakpoint at value.cts.Cancel(): the task object in the _tasks dictionary has status RanToCompletion while the task's delegate is essentially still running). When cancellation is requested, the exception is raised inside Task.Delay and, with no promise object to observe it, is promoted to the app domain.
In the case of Task.Run the situation is different, because that method has overloads which accept Func<Task> or Func<Task<T>> and unwrap the tasks internally in order to return the underlying promise instead of the wrapper task. That ensures a proper task object inside the _tasks dictionary and proper error handling.
The third scenario, despite not throwing an exception, is only partially correct. Unlike Task.Run, Task.Factory.StartNew doesn't unwrap the underlying task to return the promise, so the task stored in _tasks is just the wrapper task, as in the constructor case (again, you can check its state with the debugger). It does, however, understand Func<Task> parameters, so the asynchronous delegate has an async Task signature, which at least allows exceptions to be handled and stored in the underlying task. To get that underlying task with Task.Factory.StartNew, you need to unwrap it yourself with the Unwrap() extension method.
Task.Factory.StartNew isn't considered a best practice for creating tasks because of certain dangers related to its use (see there). It can, however, be used with some caveats if you need to apply specific options like LongRunning, which cannot be applied directly with Task.Run.
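For illustration, an unwrapped variant might look like this (untested; the method name ExecuteAfter4 is just for the example), so that the task stored in _tasks is the underlying promise rather than the wrapper:

public void ExecuteAfter4(Action action, TimeSpan waitBeforeExecute, out Guid cancellationId)
{
    var cts = new CancellationTokenSource();
    cancellationId = Guid.NewGuid();
    // StartNew returns a Task<Task> here; Unwrap() gives us the inner promise
    Task task = Task.Factory.StartNew(async () =>
    {
        await Task.Delay(waitBeforeExecute, cts.Token);
        action();
    }, cts.Token, TaskCreationOptions.LongRunning, TaskScheduler.Default).Unwrap();
    _tasks.Add(cancellationId, (task, cts));
}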
I don't know why I got down votes here but it's inspired me to update my answer.
UPDATED
My full approach:
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
namespace ConsoleApp4
{
internal class Program
{
private static void Main(string[] args)
{
using (var delayedWorker = new DelayedWorker())
{
delayedWorker.ProcessWithDelay(() => { Console.WriteLine("100"); }, TimeSpan.FromSeconds(5), out var cancellationId_1);
delayedWorker.ProcessWithDelay(() => { Console.WriteLine("200"); }, TimeSpan.FromSeconds(10), out var cancellationId_2);
delayedWorker.ProcessWithDelay(() => { Console.WriteLine("300"); }, TimeSpan.FromSeconds(15), out var cancellationId_3);
Cancel_3(delayedWorker, cancellationId_3);
Console.ReadKey();
}
}
private static void Cancel_3(DelayedWorker delayedWorker, Guid cancellationId_3)
{
Task.Run(() => { delayedWorker.Abort(cancellationId_3); }).Wait();
}
internal sealed class DelayedWorker : IDisposable
{
private readonly object _locker = new object();
private readonly object _disposeLocker = new object();
private readonly IDictionary<Guid, (Task task, CancellationTokenSource cts)> _tasks = new Dictionary<Guid, (Task task, CancellationTokenSource cts)>();
private bool _disposing;
public void ProcessWithDelay(Action action, TimeSpan waitBeforeExecute, out Guid cancellationId)
{
Console.WriteLine("Creating delayed action...");
CancellationTokenSource tempCts = null;
CancellationTokenSource cts = null;
try
{
var id = cancellationId = Guid.NewGuid();
tempCts = new CancellationTokenSource();
cts = tempCts;
var task = new Task(() => { Process(action, waitBeforeExecute, cts); }, TaskCreationOptions.LongRunning);
_tasks.Add(cancellationId, (task, cts));
tempCts = null;
task.ContinueWith(t =>
{
lock (_disposeLocker)
{
if (!_disposing)
{
TryRemove(id);
}
}
}, TaskContinuationOptions.ExecuteSynchronously);
Console.WriteLine($"Created(cancellationId: {cancellationId})");
task.Start(TaskScheduler.Default);
}
finally
{
if (tempCts != null)
{
tempCts.Dispose();
}
}
}
private void Process(Action action, TimeSpan waitBeforeExecute, CancellationTokenSource cts)
{
Console.WriteLine("Starting delayed action...");
cts.Token.WaitHandle.WaitOne(waitBeforeExecute);
if (cts.Token.IsCancellationRequested)
{
return;
}
lock (_locker)
{
Console.WriteLine("Performing action...");
action();
}
}
public bool Abort(Guid cancellationId)
{
Console.WriteLine($"Aborting(cancellationId: {cancellationId})...");
lock (_locker)
{
if (_tasks.TryGetValue(cancellationId, out var value))
{
if (value.task.IsCompleted)
{
Console.WriteLine("too late");
return false;
}
value.cts.Cancel();
value.task.Wait();
Console.WriteLine("Aborted");
return true;
}
Console.WriteLine("Either too late or wrong cancellation id");
return true;
}
}
private void TryRemove(Guid id)
{
if (_tasks.TryGetValue(id, out var value))
{
Remove(id, value.task, value.cts);
}
}
private void Remove(Guid id, Task task, CancellationTokenSource cts)
{
_tasks.Remove(id);
Dispose(cts);
Dispose(task);
}
public void Dispose()
{
lock (_disposeLocker)
{
_disposing = true;
}
if (_tasks.Count > 0)
{
foreach (var t in _tasks)
{
t.Value.cts.Cancel();
t.Value.task.Wait();
Dispose(t.Value.cts);
Dispose(t.Value.task);
}
_tasks.Clear();
}
}
private static void Dispose(IDisposable obj)
{
if (obj == null)
{
return;
}
try
{
obj.Dispose();
}
catch (Exception ex)
{
//log ex
}
}
}
}
}
I have two async functions, which I will call ChangeState() and DoStuff(). Each of them awaits downstream async methods. They are called from event handlers, so they will not block any other code while they execute. If ChangeState() is called, it's imperative that DoStuff() does not do its thing until any previous ChangeState() has completed. ChangeState() could be called again while it's still executing. Any executions started before DoStuff() should complete before DoStuff() can continue.
The reverse is also true; ChangeState() should wait until any previously running DoStuff() is complete.
How can I implement this without the danger of deadlocks?
I know awaits are not allowed inside lock statements, and that's for good reasons, which is why I'm not trying to recreate that functionality.
async void ChangeState(bool state)
{
//Wait here until any pending DoStuff() is complete.
await OutsideApi.ChangeState(state);
}
async void DoStuff()
{
//Wait here until any pending ChangeState() is complete.
await OutsideApi.DoStuff();
}
Judging by your requirements, it seems something like a ReaderWriterLock could help you. Also, since you have async methods, you should use an async lock. Unfortunately, there is no await-ready reader/writer lock provided by the .NET Framework itself. Luckily, you can take a look at the AsyncEx library or this article. Here is an example using AsyncEx:
var readerWriterLock = new AsyncReaderWriterLock();
async void ChangeState(bool state)
{
using(await readerWriterLock.ReaderLockAsync())
{
await OutsideApi.ChangeState(state);
}
}
async void DoStuff()
{
using(await readerWriterLock.WriterLockAsync())
{
await OutsideApi.DoStuff();
}
}
N.B. This solution still has the limitation that DoStuff calls cannot run concurrently with each other (the writer lock is exclusive), but the order of the calls and the requirement to finish all DoStuff calls before ChangeState (and vice versa) will still be fulfilled. (Tip from @Scott Chamberlain: use both a reader and a writer lock.)
You can use ManualResetEvent or AutoResetEvent to signal that a thread has finished so that another thread can continue the work.
Some samples can be found here and here:
EDIT: The first solution didn't meet the requirements.
Create a custom lock class.
This class keeps track of how many instances of each kind of task (ChangeState and DoStuff) are running and provides a way to check whether a task can run.
public class CustomLock
{
private readonly int[] Running;
private readonly object _lock;
public CustomLock(int Count)
{
Running = new int[Count];
_lock = new object();
}
public void LockOne(int Task)
{
lock (_lock)
{
Running[Task]++;
}
}
public void UnlockOne(int Task)
{
lock (_lock)
{
Running[Task]--;
}
}
public bool Locked(int Task)
{
lock (_lock)
{
for (int i = 0; i < Running.Length; i++)
{
if (i != Task && Running[i] != 0)
return true;
}
return false;
}
}
}
Change the already existing code.
ChangeState will be task 0, and DoStuff will be task 1.
private CustomLock Lock = new CustomLock(2); //Create a new instance of the class for 2 tasks
async Task ChangeState(bool state)
{
while (Lock.Locked(0)) //Wait for the task to get unlocked
await Task.Delay(10);
Lock.LockOne(0); //Lock this task
await OutsideApi.ChangeState(state);
Lock.UnlockOne(0); //Task finished, unlock one
}
async Task DoStuff()
{
while (Lock.Locked(1))
await Task.Delay(10);
Lock.LockOne(1);
await OutsideApi.DoStuff();
Lock.UnlockOne(1);
}
While any ChangeState is running, a new one can be started without waiting, but when DoStuff is called it will wait until all ChangeStates finish, and this works the other way around too.
For practice, I made a synchronization primitive named KeyedLock that allows concurrent asynchronous operations for only one key at a time. All other keys are queued, and unblocked later in batches (by key). The class is intended to be used like this:
KeyedLock _keyedLock;
async Task ChangeState(bool state)
{
using (await this._keyedLock.LockAsync("ChangeState"))
{
await OutsideApi.ChangeState(state);
}
}
async Task DoStuff()
{
using (await this._keyedLock.LockAsync("DoStuff"))
{
await OutsideApi.DoStuff();
}
}
For example, the calls below:
await ChangeState(true);
await DoStuff();
await DoStuff();
await ChangeState(false);
await DoStuff();
await ChangeState(true);
...will be executed in this order:
ChangeState(true);
ChangeState(false); // concurrently with the above
ChangeState(true); // concurrently with the above
DoStuff(); // after completion of the above
DoStuff(); // concurrently with the above
DoStuff(); // concurrently with the above
The KeyedLock class:
class KeyedLock
{
private object _currentKey;
private int _currentCount = 0;
private WaitingQueue _waitingQueue = new WaitingQueue();
private readonly object _locker = new object();
public Task WaitAsync(object key, CancellationToken cancellationToken)
{
if (key == null) throw new ArgumentNullException(nameof(key));
lock (_locker)
{
if (_currentKey != null && key != _currentKey)
{
var waiter = new TaskCompletionSource<bool>();
_waitingQueue.Enqueue(new KeyValuePair<object,
TaskCompletionSource<bool>>(key, waiter));
if (cancellationToken.CanBeCanceled) // CancellationToken is a struct, never null
{
cancellationToken.Register(() => waiter.TrySetCanceled());
}
return waiter.Task;
}
else
{
_currentKey = key;
_currentCount++;
return cancellationToken.IsCancellationRequested ?
Task.FromCanceled(cancellationToken) : Task.FromResult(true);
}
}
}
public Task WaitAsync(object key) => WaitAsync(key, CancellationToken.None);
public void Release()
{
List<TaskCompletionSource<bool>> tasksToRelease;
lock (_locker)
{
if (_currentCount <= 0) throw new InvalidOperationException();
_currentCount--;
if (_currentCount > 0) return;
_currentKey = null;
if (_waitingQueue.Count == 0) return;
var newWaitingQueue = new WaitingQueue();
tasksToRelease = new List<TaskCompletionSource<bool>>();
foreach (var entry in _waitingQueue)
{
if (_currentKey == null || entry.Key == _currentKey)
{
_currentKey = entry.Key;
_currentCount++;
tasksToRelease.Add(entry.Value);
}
else
{
newWaitingQueue.Enqueue(entry);
}
}
_waitingQueue = newWaitingQueue;
}
foreach (var item in tasksToRelease)
{
item.TrySetResult(true);
}
}
private class WaitingQueue :
Queue<KeyValuePair<object, TaskCompletionSource<bool>>>
{ }
public Task<Releaser> LockAsync(object key,
CancellationToken cancellationToken)
{
var waitTask = this.WaitAsync(key, cancellationToken);
return waitTask.ContinueWith(
(_, state) => new Releaser((KeyedLock)state),
this, cancellationToken,
TaskContinuationOptions.ExecuteSynchronously,
TaskScheduler.Default
);
}
public Task<Releaser> LockAsync(object key)
=> LockAsync(key, CancellationToken.None);
public struct Releaser : IDisposable
{
private readonly KeyedLock _parent;
internal Releaser(KeyedLock parent) { _parent = parent; }
public void Dispose() { _parent?.Release(); }
}
}
This seems like a good fit for a pair of ReaderWriterLockSlims.
private readonly ReaderWriterLockSlim changeStateLock = new ReaderWriterLockSlim();
private readonly ReaderWriterLockSlim doStuffLock = new ReaderWriterLockSlim();
One controls the access to ChangeState and the other controls the access to DoStuff.
The reader lock is used to signal that a method is being executed and the writer lock is used to signal that the other method is being executed. ReaderWriterLockSlim allows multiple reads but writes are exclusive.
Task.Yield is there just to yield control back to the caller, because ReaderWriterLockSlim's calls are blocking.
async Task ChangeState(bool state)
{
await Task.Yield();
doStuffLock.EnterWriteLock();
try
{
changeStateLock.EnterReadLock();
try
{
await OutsideApi.ChangeState(state);
}
finally
{
changeStateLock.ExitReadLock();
}
}
finally
{
doStuffLock.ExitWriteLock();
}
}
async Task DoStuff()
{
await Task.Yield();
changeStateLock.EnterWriteLock();
try
{
doStuffLock.EnterReadLock();
try
{
await OutsideApi.DoStuff();
}
finally
{
doStuffLock.ExitReadLock();
}
}
finally
{
changeStateLock.ExitWriteLock();
}
}
My app needs to load plugins into separate app domains and then execute some code inside of them asynchronously. I've written some code to wrap Task in marshallable types:
static class RemoteTask
{
public static async Task<T> ClientComplete<T>(RemoteTask<T> remoteTask,
CancellationToken cancellationToken)
{
T result;
using (cancellationToken.Register(remoteTask.Cancel))
{
RemoteTaskCompletionSource<T> tcs = new RemoteTaskCompletionSource<T>();
remoteTask.Complete(tcs);
result = await tcs.Task;
}
await Task.Yield(); // HACK!!
return result;
}
public static RemoteTask<T> ServerStart<T>(Func<CancellationToken, Task<T>> func)
{
return new RemoteTask<T>(func);
}
}
class RemoteTask<T> : MarshalByRefObject
{
readonly CancellationTokenSource cts = new CancellationTokenSource();
readonly Task<T> task;
internal RemoteTask(Func<CancellationToken, Task<T>> starter)
{
this.task = starter(cts.Token);
}
internal void Complete(RemoteTaskCompletionSource<T> tcs)
{
task.ContinueWith(t =>
{
if (t.IsFaulted)
{
tcs.TrySetException(t.Exception);
}
else if (t.IsCanceled)
{
tcs.TrySetCancelled();
}
else
{
tcs.TrySetResult(t.Result);
}
}, TaskContinuationOptions.ExecuteSynchronously);
}
internal void Cancel()
{
cts.Cancel();
}
}
class RemoteTaskCompletionSource<T> : MarshalByRefObject
{
readonly TaskCompletionSource<T> tcs = new TaskCompletionSource<T>();
public bool TrySetResult(T result) { return tcs.TrySetResult(result); }
public bool TrySetCancelled() { return tcs.TrySetCanceled(); }
public bool TrySetException(Exception ex) { return tcs.TrySetException(ex); }
public Task<T> Task
{
get
{
return tcs.Task;
}
}
}
It's used like:
sealed class ControllerAppDomain
{
PluginAppDomain plugin;
public Task<int> SomethingAsync()
{
return RemoteTask.ClientComplete(plugin.SomethingAsync(), CancellationToken.None);
}
}
sealed class PluginAppDomain : MarshalByRefObject
{
public RemoteTask<int> SomethingAsync()
{
return RemoteTask.ServerStart(async cts =>
{
cts.ThrowIfCancellationRequested();
return 1;
});
}
}
But I've run into a snag. If you look in ClientComplete, there's a Task.Yield() I've inserted. If I comment out this line, ClientComplete will never return. Any ideas?
My best guess is that you are facing this issue because the async method contains an await; the continuation is managed via the ThreadPool, which can resume it on some recycled thread.
Reference: Best practice to call ConfigureAwait for all server-side code
Actually, just doing an await can do that (put you on a different thread). Once your async method hits an await, the method is blocked but the thread returns to the thread pool. When the method is ready to continue, any thread is snatched from the thread pool and used to resume the method.
Try to streamline the code first; worry about generating threads for baseline cases and about performance last.
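If the suggestion is to apply ConfigureAwait in ClientComplete, the awaited line would become something like the sketch below; whether that actually removes the need for the Task.Yield() hack would have to be verified:

using (cancellationToken.Register(remoteTask.Cancel))
{
    RemoteTaskCompletionSource<T> tcs = new RemoteTaskCompletionSource<T>();
    remoteTask.Complete(tcs);
    // don't capture the current context when resuming
    result = await tcs.Task.ConfigureAwait(false);
}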
I'm currently working on a project and I need to queue some jobs for processing; here are the requirements:
Jobs must be processed one at a time
A queued item must be able to be waited on
So I want something akin to:
Task<result> QueueJob(params here)
{
/// Queue the job and somehow return a waitable task that will wait until the queued job has been executed and return the result.
}
I've tried having a background task that just pulls items off a queue and processes the jobs, but the difficulty is getting the result from the background task back to the calling method.
If need be I could go the route of just requesting a completion callback in the QueueJob method, but it'd be great if I could get a transparent Task back that allows you to wait on the job to be processed (even if there are jobs before it in the queue).
You might find TaskCompletionSource<T> useful; it can be used to create a Task that completes exactly when you want it to. If you combine it with BlockingCollection<T>, you get your queue:
class JobProcessor<TInput, TOutput> : IDisposable
{
private readonly Func<TInput, TOutput> m_transform;
// or a custom type instead of Tuple
private readonly
BlockingCollection<Tuple<TInput, TaskCompletionSource<TOutput>>>
m_queue =
new BlockingCollection<Tuple<TInput, TaskCompletionSource<TOutput>>>();
public JobProcessor(Func<TInput, TOutput> transform)
{
m_transform = transform;
Task.Factory.StartNew(ProcessQueue, TaskCreationOptions.LongRunning);
}
private void ProcessQueue()
{
Tuple<TInput, TaskCompletionSource<TOutput>> tuple;
while (m_queue.TryTake(out tuple, Timeout.Infinite))
{
var input = tuple.Item1;
var tcs = tuple.Item2;
try
{
tcs.SetResult(m_transform(input));
}
catch (Exception ex)
{
tcs.SetException(ex);
}
}
}
public Task<TOutput> QueueJob(TInput input)
{
var tcs = new TaskCompletionSource<TOutput>();
m_queue.Add(Tuple.Create(input, tcs));
return tcs.Task;
}
public void Dispose()
{
m_queue.CompleteAdding();
}
}
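For example, it could be used like this (the squaring transform is just for illustration):

using (var processor = new JobProcessor<int, int>(x => x * x))
{
    Task<int> job = processor.QueueJob(5);
    // blocks until the background loop has processed the job
    Console.WriteLine(job.Result); // 25
}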
I would go for something like this:
class TaskProcessor<TResult>
{
// TODO: Error handling!
readonly BlockingCollection<Task<TResult>> blockingCollection = new BlockingCollection<Task<TResult>>(new ConcurrentQueue<Task<TResult>>());
public Task<TResult> AddTask(Func<TResult> work)
{
var task = new Task<TResult>(work);
blockingCollection.Add(task);
return task; // give the task back to the caller so they can wait on it
}
public void CompleteAddingTasks()
{
blockingCollection.CompleteAdding();
}
public TaskProcessor()
{
    // run the processing loop on its own long-running task so the
    // constructor doesn't block and queued work actually gets processed
    Task.Factory.StartNew(ProcessQueue, TaskCreationOptions.LongRunning);
}
void ProcessQueue()
{
    Task<TResult> task;
    // block until an item is available (or CompleteAddingTasks is called)
    while (blockingCollection.TryTake(out task, Timeout.Infinite))
    {
        task.Start();
        task.Wait(); // ensure this task finishes before we start a new one...
    }
}
}
Depending on the type of app that is using it, you could switch out the BlockingCollection/ConcurrentQueue for something simpler (eg just a plain queue). You can also adjust the signature of the "AddTask" method depending on what sort of methods/parameters you will be queueing up...
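For example (a minimal usage sketch, assuming the processing loop is started in the constructor as above):

var processor = new TaskProcessor<int>();
Task<int> result = processor.AddTask(() => 40 + 2);
Console.WriteLine(result.Result); // 42, once the queue has processed it
processor.CompleteAddingTasks();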
Func<T> takes no parameters and returns a value of type T. The jobs are run one by one and you can wait on the returned task to get the result.
public class TaskQueue
{
    // note: a plain Queue is not thread-safe; locking (or a ConcurrentQueue)
    // would be needed if jobs are queued from multiple threads
    private readonly Queue<Task> InnerTaskQueue = new Queue<Task>();
    private bool IsJobRunning;
    public void Start()
    {
        Task.Factory.StartNew(() =>
        {
            while (true)
            {
                if (InnerTaskQueue.Count > 0 && !IsJobRunning)
                {
                    var task = InnerTaskQueue.Dequeue();
                    task.Start();
                    IsJobRunning = true;
                    task.ContinueWith(t => IsJobRunning = false);
                }
                else
                {
                    Thread.Sleep(1000);
                }
            }
        });
    }
    public Task<T> QueueJob<T>(Func<T> job)
    {
        var task = new Task<T>(() => job());
        InnerTaskQueue.Enqueue(task);
        return task;
    }
}
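A possible usage sketch (remember to call Start() so the background loop is running):

var queue = new TaskQueue();
queue.Start();
Task<int> job = queue.QueueJob(() => 42);
Console.WriteLine(job.Result); // waits for the queued job to run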
I'm trying to implement a custom awaitable to execute something like await Thread.SleepAsync() without creating any additional threads.
Here's what I've got:
class AwaitableThread : INotifyCompletion
{
public AwaitableThread(long milliseconds)
{
var timer = new Timer(obj => { IsCompleted = true; }, null, milliseconds, Timeout.Infinite);
}
private bool isCompleted = false;
public bool IsCompleted
{
get { return isCompleted; }
set { isCompleted = value; }
}
public void GetResult()
{}
public AwaitableThread GetAwaiter() { return this; }
public void OnCompleted(Action continuation)
{
if (continuation != null)
{
continuation();
}
}
}
And here's how the sleep would work:
static async Task Sleep(int milliseconds)
{
await new AwaitableThread(milliseconds);
}
The problem is that this function returns immediately, even though IsCompleted is still false when OnCompleted is called.
What am I doing wrong?
Fully implementing the awaitable pattern for production use is a tricky business - you need to capture the execution context, amongst other things. Stephen Toub's blog post on this has a lot more detail. In many cases, it's easier to piggy-back onto Task<T> or Task, potentially using TaskCompletionSource. For example, in your case, you could write the equivalent of Task.Delay like this:
public Task MyDelay(int milliseconds)
{
// There's only a generic TaskCompletionSource, but we don't really
// care about the result. Just use int as a reasonably cheap version.
var tcs = new TaskCompletionSource<int>();
Timer timer = new Timer(_ => tcs.SetResult(0), null, milliseconds,
Timeout.Infinite);
// Capture the timer variable so that the timer can't be garbage collected
// unless the task is (in which case it doesn't matter).
tcs.Task.ContinueWith(task => timer = null);
return tcs.Task;
}
You can now await that task, just like you can await the result of Task.Delay.
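For example, a caller in the same class might look like this:

async Task DemoAsync()
{
    Console.WriteLine("Before");
    await MyDelay(500); // resumes on a thread pool thread roughly 500 ms later
    Console.WriteLine("After");
}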