Synchronizing Async Tasks Across Threads - C#

I'm trying to figure out the best way to architect a solution to this problem.
In the HTTP request pipeline, the authentication middleware runs the OnTokenValidated callback configured on JwtBearerOptions in parallel with my request code. I could run it synchronously, but I would prefer not to.
Some of my requests require a claim that this callback sets; how can I be sure the claim has been set before the request reads it?
In effect I'm running two pieces of work in parallel, and one async function depends on the other to set a value.
I created an "await manager" to hand back the task to wait on. However, there is a chance the await manager has not registered a task for the key yet, because the callback hasn't executed.
My code:
public class TaskManager : ITaskManager
{
    // Note: Dictionary is not thread-safe, and this map is written from the
    // authentication callback and read from the request code concurrently.
    private readonly Dictionary<string, Task> _taskAwaiterMap = new Dictionary<string, Task>();

    public Task GetTaskForKey(string key)
    {
        // Returns null when no task has been queued for this key yet.
        this._taskAwaiterMap.TryGetValue(key, out var awaiter);
        return awaiter;
    }

    public void QueueTask(string key, Task task)
    {
        this._taskAwaiterMap[key] = task;
    }
}
Edit
public async Task GetManditoryTaskForKey(string key, int timeout)
{
    var cancellationTokenSource = new CancellationTokenSource(timeout);
    // Await the outer task to obtain the queued task, then await the queued task itself.
    var queuedTask = await GetManditoryResolverWaitTask(key, cancellationTokenSource.Token);
    await queuedTask;
}

protected async Task<Task> GetManditoryResolverWaitTask(string key, CancellationToken cancellationToken)
{
    // Spin (yielding to the scheduler) until the key shows up or the timeout cancels the wait.
    while (false == this._taskAwaiterMap.TryGetValue(key, out var task))
    {
        if (cancellationToken.IsCancellationRequested)
        {
            return Task.FromCanceled(cancellationToken);
        }
        await Task.Yield();
    }
    return this.GetTaskForKey(key);
}
I've figured out a way to wait for my key to be set, but is there an efficient way to synchronize async tasks?
EDIT
In my HTTP pipeline I queue a task like so:
OnTokenValidated = async tvc => { await AuthenticationRule.ValidateToken(tvc); }

// ... inside ValidateToken
// Some potentially long-running task
var awaitManager = serviceProvider.GetRequiredService<ITaskManager>();
var userClaimsTask = SetUserClaims(claimsIdentity, context, userSubjectId);
awaitManager.QueueTask(USER_CLAIM_AWAITER_KEY, userClaimsTask);
How can I efficiently synchronize my async Tasks?

I was trying to answer your previous question. :)
Anyway, I am not certain that this is the best solution for your problem because I am not familiar with the Async HTTP pipeline. But the following can be a solution for the approach you are taking:
public class TaskManager : ITaskManager
{
    private readonly ConcurrentDictionary<string, TaskCompletionSource<Task>> _taskAwaiterMap =
        new ConcurrentDictionary<string, TaskCompletionSource<Task>>();

    public async Task GetTaskForKey(string key)
    {
        // GetOrAdd makes it safe to ask for a key before QueueTask has run: the first await
        // yields the queued task once it has been set, and the second await awaits that task.
        await await this._taskAwaiterMap.GetOrAdd(key, _ => new TaskCompletionSource<Task>()).Task;
    }

    public void QueueTask(string key, Task task)
    {
        this._taskAwaiterMap.GetOrAdd(key, _ => new TaskCompletionSource<Task>()).SetResult(task);
    }
}
I assume TaskManager will only live within the session. Otherwise, you will have to think of how to clean up _taskAwaiterMap.
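For the consuming side, here is a minimal usage sketch (it assumes the TaskCompletionSource-based TaskManager above and the USER_CLAIM_AWAITER_KEY constant from the question; the five-second timeout is an arbitrary choice). The request code awaits the task returned by GetTaskForKey and races it against a delay so a claim that never gets queued cannot hang the request:
var taskManager = serviceProvider.GetRequiredService<ITaskManager>();

// Completes once OnTokenValidated has queued the claims task and that task has finished.
var claimsReady = taskManager.GetTaskForKey(USER_CLAIM_AWAITER_KEY);

var winner = await Task.WhenAny(claimsReady, Task.Delay(TimeSpan.FromSeconds(5)));
if (winner != claimsReady)
{
    throw new TimeoutException("User claims were not set in time.");
}
await claimsReady; // observe any exception thrown while the claims were being set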

Related

Pattern that offers mutual exclusion, re-entrancy, distributed locking towards one (of many) APIs with HttpClient

I'm looking for an approach to locking that, by default, makes sure that all calls to a single API run mutually exclusively using distributed locking. However, at the same time I need the option to instead lock larger blocks of code (critical procedures) containing several calls to that API. Those calls should still run mutually exclusively. In those cases the approach should be re-entrant, so that each call isn't blocked because the block of code it is in already holds the lock. It should also support re-entrancy when several nested methods lock sections of code.
Examples of use:
// Should have lock registered by default (f.ex. in HttpMessageHandler)
await _deviceClient.PerformAction();
async Task CriticalProcedure()
{
// Should only use one lock that is reused in nested code (re-entrant)
await using (await _reentrantLockProvider.AcquireLockAsync())
{
await _deviceClient.TriggerAction();
await SharedCriticalProcedure();
}
// Should only dispose lock at this point
}
async Task SharedCriticalProcedure()
{
await using (await _customLockProvider.AcquireLockAsync())
{
await _deviceClient.HardReset();
await _deviceClient.Refresh();
}
}
// Should be forced to run sequentially even though they are not awaited (mutex)
var task1 = _deviceClient.PerformAction1();
var task2 = _deviceClient.PerformAction2();
await Task.WhenAll(task1, task2);
Background:
My team is working on a WebAPI that is responsible for making calls to hardware devices. When an endpoint in our API is called, we get a header that identifies the hardware device in question (used at startup to configure the base URL of our HttpClients), and we make one or more calls to that device's API. We have the following limitations:
A device shouldn't be called when it is already busy with a request (mutual exclusion)
Some procedures against the device (blocks of code containing several calls) are critical and shouldn't be interrupted by other calls to the device (why I want re-entry)
A user may run multiple requests to our API simultaneously, so the locking should work across requests
Our WebAPI may have multiple deployments, so the locking should be distributed
We use Refit to describe the API of our hardware devices and create HttpClients
I have created the following solution which I believe works. However, it seems clumsy and overengineered, mostly because HttpMessageHandlers have unpredictable lifetimes, not scoped to request, so I needed to use TraceIdentifier and a dictionary to enable re-entry during the request lifecycle.
// In startup
services.AddSingleton<IReentrantLockProvider, ReentrantLockProvider>();
services
    .AddHttpClient(nameof(IDeviceClient))
    .AddTypedClient(client => RestService.For<IDeviceClient>(client, refitSettings))
    .ConfigureHttpClient((provider, client) => ConfigureHardwareBaseUrl())
    .AddHttpMessageHandler<HardwareMutexMessageHandler>();
public class HardwareMutexMessageHandler : DelegatingHandler
{
    private readonly IReentrantLockProvider _reentrantPanelLockProvider;
    private readonly IHttpContextAccessor _httpContextAccessor;
    private readonly ConcurrentDictionary<string, object> _locks = new();

    public HardwareMutexMessageHandler(IReentrantLockProvider reentrantPanelLockProvider, IHttpContextAccessor httpContextAccessor)
    {
        _reentrantPanelLockProvider = reentrantPanelLockProvider;
        _httpContextAccessor = httpContextAccessor;
    }

    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        await using (await _reentrantPanelLockProvider.AcquireLockAsync(cancellationToken))
        {
            var hardwareId = _httpContextAccessor.HttpContext.Request.Headers["HardwareId"];
            var mutex = _locks.GetOrAdd(hardwareId, _ => new());
            // This is only used to handle cases where a developer chooses to batch calls or forgets to await a call
            lock (mutex)
            {
                return base.SendAsync(request, cancellationToken).Result;
            }
        }
    }
}
public class ReentrantLockProvider : IReentrantLockProvider
{
    private readonly IDistributedLockProvider _distributedLockProvider;
    private readonly IHttpContextAccessor _httpContextAccessor;
    private readonly ConcurrentDictionary<string, ReferenceCountedDisposable> _lockDictionary;
    private readonly object _lockVar = new();

    public ReentrantLockProvider(IDistributedLockProvider distributedLockProvider, IHttpContextAccessor httpContextAccessor)
    {
        _distributedLockProvider = distributedLockProvider;
        _httpContextAccessor = httpContextAccessor;
        _lockDictionary = new ConcurrentDictionary<string, ReferenceCountedDisposable>();
    }

    public async Task<IAsyncDisposable> AcquireLockAsync(CancellationToken cancellationToken = default)
    {
        var hardwareId = _httpContextAccessor.HttpContext.Request.Headers["HardwareId"];
        var requestId = _httpContextAccessor.HttpContext.TraceIdentifier;
        lock (_lockVar)
        {
            // Re-entry: if this request already holds the lock, just bump the reference count.
            if (_lockDictionary.TryGetValue(requestId, out var referenceCountedLock))
            {
                referenceCountedLock.RegisterReference();
                return referenceCountedLock;
            }
            var acquireLockTask = _distributedLockProvider.AcquireLockAsync(hardwareId, timeout: null, cancellationToken);
            referenceCountedLock = new ReferenceCountedDisposable(async () =>
                await RemoveLock(acquireLockTask.Result, requestId)
            );
            _lockDictionary.TryAdd(requestId, referenceCountedLock);
            return referenceCountedLock;
        }
    }

    private async Task RemoveLock(IDistributedSynchronizationHandle acquiredLock, string requestId)
    {
        ValueTask disposeAsyncTask;
        lock (_lockVar)
        {
            disposeAsyncTask = acquiredLock.DisposeAsync();
            _ = _lockDictionary.TryRemove(requestId, out _);
        }
        await disposeAsyncTask;
    }
}
public class ReferenceCountedDisposable : IAsyncDisposable
{
private readonly Func<Task> _asyncDispose;
private int _refCount;
public ReferenceCountedDisposable(Func<Task> asyncDispose)
{
_asyncDispose = asyncDispose;
_refCount = 1;
}
public void RegisterReference()
{
Interlocked.Increment(ref _refCount);
}
public async ValueTask DisposeAsync()
{
var references = Interlocked.Decrement(ref _refCount);
if (references == 0)
{
await _asyncDispose();
}
else if (references < 0)
{
throw new InvalidOperationException("Can't dispose multiple times");
}
else
{
GC.SuppressFinalize(this);
}
}
}

Await long-running task on original context

I have some objects that start long-running background work on construction and would like to await their completion on IAsyncDisposable.
I would like to run this work on the thread pool. I am trying to figure out the safest way to do this while avoiding deadlocks. I cannot figure out how to use JoinableTaskFactory and/or JoinableTaskContext to do this.
using Microsoft.VisualStudio.Threading;
public class Worker : System.IAsyncDisposable
{
private CancellationTokenSource _cts;
private Task _work;
private JoinableTask _workSafe;
public Worker()
{
_cts = new();
// goes on the thread pool. But no way to join/await on that thread later?
_work = Task.Run(() => DoWorkAsync(_cts.Token));
// using the library I can await on same thread later. But it
// executes on the same thread as the constructor (undesirable)
// according to the documentation
var ctx = new JoinableTaskContext();
_workSafe = ctx.Factory.RunAsync(() => DoWorkAsync(_cts.Token), JoinableTaskCreationOptions.LongRunning);
}
private async Task DoWorkAsync(CancellationToken token)
{
while(!token.IsCancellationRequested)
{
await Task.Delay(50);
}
}
public async ValueTask DisposeAsync()
{
_cts.Cancel();
// VS says this is unsafe
await _work;
// But this is safe
await _workSafe;
_cts.Dispose();
}
}

Redirect to action after finishing background task queue

I'm working on a .NET Core solution that takes a backup of storage files from another microservice, and because this process takes a long time, we decided to build this routine as a background task. Following this link:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-2.1
I have implemented the background work by using queued background tasks like the following:
public interface IBackgroundTaskQueue
{
void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem);
Task<Func<CancellationToken, Task>> DequeueAsync(
CancellationToken cancellationToken);
}
public class BackgroundTaskQueue : IBackgroundTaskQueue
{
private ConcurrentQueue<Func<CancellationToken, Task>> _workItems =
new ConcurrentQueue<Func<CancellationToken, Task>>();
private SemaphoreSlim _signal = new SemaphoreSlim(0);
public void QueueBackgroundWorkItem(
Func<CancellationToken, Task> workItem)
{
if (workItem == null)
{
throw new ArgumentNullException(nameof(workItem));
}
_workItems.Enqueue(workItem);
_signal.Release();
}
public async Task<Func<CancellationToken, Task>> DequeueAsync(
CancellationToken cancellationToken)
{
await _signal.WaitAsync(cancellationToken);
_workItems.TryDequeue(out var workItem);
return workItem;
}
}
public class QueuedHostedService : BackgroundService
{
private readonly ILogger _logger;
public QueuedHostedService(IBackgroundTaskQueue taskQueue,
ILoggerFactory loggerFactory)
{
TaskQueue = taskQueue;
_logger = loggerFactory.CreateLogger<QueuedHostedService>();
}
public IBackgroundTaskQueue TaskQueue { get; }
protected async override Task ExecuteAsync(
CancellationToken cancellationToken)
{
_logger.LogInformation("Queued Hosted Service is starting.");
while (!cancellationToken.IsCancellationRequested)
{
var workItem = await TaskQueue.DequeueAsync(cancellationToken);
try
{
await workItem(cancellationToken);
}
catch (Exception ex)
{
_logger.LogError(ex,
$"Error occurred executing {nameof(workItem)}.");
}
}
_logger.LogInformation("Queued Hosted Service is stopping.");
}
}
and in the controller action method I did this:
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult TakeBackup()
{
// Process #1: update latest backup time in setting table.
var _setting = _settingService.FindByKey("BackupData");
var data = JsonConvert.DeserializeObject<BackUpData>(_setting.Value);
data.LatestBackupTime = DateTime.UtcNow;
_setting.Value = JsonConvert.SerializeObject(data);
_settingService.AddOrUpdate(_setting);
// Process #2: Begin a background task to execute the backup.
_queue.QueueBackgroundWorkItem(async token =>
{
// instead of this stuff I will call the API I want to consume.
var guid = Guid.NewGuid().ToString();
for (int delayLoop = 0; delayLoop < 3; delayLoop++)
{
_logger.LogInformation(
$"Queued Background Task {guid} is running. {delayLoop}/3");
await Task.Delay(TimeSpan.FromSeconds(5), token);
}
_logger.LogInformation(
$"Queued Background Task {guid} is complete. 3/3");
// Here I need to redirect to the index view after the task is finished (my issue) ..
RedirectToAction("Index",new {progress="Done"});
});
return RedirectToAction("Index");
}
The logger information displays successfully.
All I need is a way to reload the Index view after the background task finishes successfully, but for a reason I don't understand, the redirect never happens.
The Index action method looks like this:
public async Task<IActionResult> Index()
{
var links = new List<LinkObject>();
var files = await _storageProvider.GetAllFiles(null, "backup");
foreach (var f in files)
{
var file = f;
if (f.Contains("/devstoreaccount1/"))
{
file = file.Replace("/devstoreaccount1/", "");
}
file = file.TrimStart('/');
links.Add(new LinkObject()
{
Method = "GET",
Href = await _storageProvider.GetSasUrl(file),
Rel = f
});
}
return View(links);
}
Thanks !
If you want the current page to interact with a long running task, you don't necessarily need the overhead of BackgroundService. That feature is for cases where there is no page to interact with.
First, the server cannot call a client to tell it to reload. At least not without the use of WebSockets, which would definitely be overkill for this. Instead, you will use Javascript (AJAX) to make background calls to poll for the status of your task. This is a common pattern used by any complex web application.
On the server, you'll create a normal async action method that takes all the time it needs to complete the task.
The web page (after it has loaded) will call this action method using AJAX and will ignore the response. That call will eventually time out, but that's not a concern: you don't need the response, and the server will continue processing the action even though the socket connection has terminated.
The web page will subsequently begin polling (using AJAX) a different action method which will tell you whether the task has completed or not. You'll need some shared state on the server, perhaps a database table that gets updated by your background task, etc. This method should always return very quickly - all it needs to do is read the present state of the task and return that status.
The web page will continue polling that method until the response changes (e.g. from RUNNING to COMPLETED.) Once the status changes, then you can reload the page using Javascript or whatever you need to do in response to the task completing.
Note: There are some nuances here, like the cost of holding client connections that you expect to time out. If you care you can optimize these away but in most cases it won't be an issue and it adds complexity.
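A minimal sketch of that polling pattern might look like the following (BackupStatusStore, StartBackup, BackupStatus, and DoBackupAsync are illustrative names, not part of the original code; the store would be registered as a singleton so both actions see the same instance):
public class BackupStatusStore
{
    // "IDLE" -> "RUNNING" -> "COMPLETED"; use a ConcurrentDictionary keyed by job id if
    // several backups can run at the same time.
    public volatile string Status = "IDLE";
}

// Long-running action: the page fires this via AJAX after loading and ignores the response.
[HttpPost]
public async Task<IActionResult> StartBackup([FromServices] BackupStatusStore store)
{
    store.Status = "RUNNING";
    await DoBackupAsync();          // placeholder for the real backup work
    store.Status = "COMPLETED";
    return Ok();
}

// Cheap status action: the page polls this until the status changes, then reloads itself.
[HttpGet]
public IActionResult BackupStatus([FromServices] BackupStatusStore store)
{
    return Json(new { status = store.Status });
}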

Queuing tasks in asp.net core

Example of the functionality: there are 20 users and they all click the send button at almost the same time, so the method calls should stack up in a queue; the first user's message is sent and its response received, then the second, the third, and so on. Users won't chat with other people but with a device whose response is pretty fast.
So I am trying to queue the task which sends the message.
I found code samples that use task queuing, as shown in Example 1 and Example 2.
Example 1
public class SerialQueue
{
readonly object _locker = new object();
WeakReference<Task> _lastTask;
public Task Enqueue(Action action)
{
return Enqueue<object>(() => {
action();
return null;
});
}
public Task<T> Enqueue<T>(Func<T> function)
{
lock (_locker)
{
Task lastTask = null;
Task<T> resultTask = null;
if (_lastTask != null && _lastTask.TryGetTarget(out lastTask))
{
resultTask = lastTask.ContinueWith(_ => function());
}
else
{
resultTask = Task.Run(function);
}
_lastTask = new WeakReference<Task>(resultTask);
return resultTask;
}
}
}
Example 2
public class TaskQueue
{
private readonly SemaphoreSlim _semaphoreSlim;
public TaskQueue()
{
_semaphoreSlim = new SemaphoreSlim(1);
}
public async Task<T> Enqueue<T>(Func<Task<T>> taskGenerator)
{
await _semaphoreSlim.WaitAsync();
try
{
return await taskGenerator();
}
finally
{
_semaphoreSlim.Release();
}
}
public async Task Enqueue(Func<Task> taskGenerator)
{
await _semaphoreSlim.WaitAsync();
try
{
await taskGenerator();
}
finally
{
_semaphoreSlim.Release();
}
}
}
The problem is that each time I press the button and pass the task I want to queue (Example 3), the tasks are still executed at the same time and interrupt each other.
Example 3
[HttpPost(Name = "add-message")]
public async Task<IActionResult> PostMessage([FromBody] MessengerViewModel messengerViewModel)
{
TaskQueue taskQueue = new TaskQueue();
SerialQueue serialQueue = new SerialQueue();
await taskQueue.Enqueue(() => SendMessage(messengerViewModel.PhoneNr, messengerViewModel.MessageBody,
messengerViewModel.ContactId, messengerViewModel.State));
// I'm not running both queues at the same time; I use one or the other
await serialQueue.Enqueue(() => SendMessage(messengerViewModel.PhoneNr, messengerViewModel.MessageBody,
messengerViewModel.ContactId, messengerViewModel.State));
return Ok();
}
How could I solve this problem and queue the tasks so each click is handled in order?
Your problem is that you create a new TaskQueue and SerialQueue every time. Thus each time a user clicks/invokes PostMessage, a new queue is created, and the task is the first task in that queue, so it is executed immediately.
You should use a static/singleton queue so each click/invoke works on the same queue object.
But that would cause problems when you scale your web app across multiple servers. To that end you should use something like (for example) Azure Queue Storage in combination with Azure Functions.
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
services.AddSingleton<TaskQueue>();
services.AddSingleton<SerialQueue>();
// the rest
}
SomeController.cs
[HttpPost(Name = "add-message")]
public async Task<IActionResult> PostMessage(
[FromBody] MessengerViewModel messengerViewModel,
[FromServices] TaskQueue taskQueue,
[FromServices] SerialQueue serialQueue)
{
await taskQueue.Enqueue(
() => SendMessage(
messengerViewModel.PhoneNr,
messengerViewModel.MessageBody,
messengerViewModel.ContactId,
messengerViewModel.State));
// I'm not running both queues at the same time; I use one or the other
await serialQueue.Enqueue(
() => SendMessage(
messengerViewModel.PhoneNr,
messengerViewModel.MessageBody,
messengerViewModel.ContactId,
messengerViewModel.State));
return Ok();
}

How to make a nonblocking wait handle?

Essentially, what I'm doing is creating a web server to handle an API call and then, when it's done, continuing the method execution:
new WebServer(myAutoResetEvent);
myAutoResetEvent.WaitOne();
However, this blocks the thread until then. Is there any way to make this async? Is it fine just to wrap it in an await Task.Run() call, i.e. await Task.Run(() => myAutoResetEvent.WaitOne())?
Thanks!
Normally, the WebServer ctor should not do anything interesting. There should be a Task WebServer.RunAsync function that runs the server. You can then use the resulting task to synchronize and coordinate.
If you don't want that you can use a TaskCompletionSource<object> as a one-shot async-ready event.
I believe the ThreadPool class has a way to efficiently wait for a WaitHandle to be set but that's a worse solution.
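A minimal sketch of that second suggestion, assuming the WebServer is changed to take this object and call Set() when the API call has been handled (AsyncOneShotEvent is an illustrative name, not an existing type):
public sealed class AsyncOneShotEvent
{
    // RunContinuationsAsynchronously keeps awaiting code off the thread that calls Set().
    private readonly TaskCompletionSource<object> _tcs =
        new TaskCompletionSource<object>(TaskCreationOptions.RunContinuationsAsynchronously);

    public Task WaitAsync() => _tcs.Task;

    public void Set() => _tcs.TrySetResult(null);
}

// Usage: no thread is blocked while waiting.
var done = new AsyncOneShotEvent();
new WebServer(done);        // the server calls done.Set() when the API call has been handled
await done.WaitAsync();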
You should not block ThreadPool threads; that is a quick way to cause ThreadPool starvation. Instead, there is a provided method to asynchronously wait for WaitHandle instances: ThreadPool.RegisterWaitForSingleObject.
With ThreadPool.RegisterWaitForSingleObject, a callback is registered to be invoked when the WaitHandle is signaled. Unfortunately, this is not async/await compatible out of the box; a full implementation that makes it async/await compatible is as follows:
public static class WaitHandleExtensions
{
public static Task WaitOneAsync(this WaitHandle waitHandle, CancellationToken cancellationToken)
{
return WaitOneAsync(waitHandle, Timeout.Infinite, cancellationToken);
}
public static async Task<bool> WaitOneAsync(this WaitHandle waitHandle, int timeout, CancellationToken cancellationToken)
{
// A Mutex can't use RegisterWaitForSingleObject as a Mutex requires the wait and release to be on the same thread
// but RegisterWaitForSingleObject acquires the Mutex on a ThreadPool thread.
if (waitHandle is Mutex)
throw new ArgumentException(StringResources.MutexMayNotBeUsedWithWaitOneAsyncAsThreadIdentityIsEnforced, nameof(waitHandle));
cancellationToken.ThrowIfCancellationRequested();
var tcs = new TaskCompletionSource<bool>();
var rwh = ThreadPool.RegisterWaitForSingleObject(waitHandle, OnWaitOrTimerCallback, tcs, timeout, true);
var cancellationCallback = BuildCancellationCallback(rwh, tcs);
using (cancellationToken.Register(cancellationCallback))
{
try
{
return await tcs.Task.ConfigureAwait(false);
}
finally
{
rwh.Unregister(null);
}
}
}
private static Action BuildCancellationCallback(RegisteredWaitHandle rwh, TaskCompletionSource<bool> tcs)
{
return () =>
{
if (rwh.Unregister(null))
{
tcs.SetCanceled();
}
};
}
private static void OnWaitOrTimerCallback(object state, bool timedOut)
{
var taskCompletionSource = (TaskCompletionSource<bool>)state;
taskCompletionSource.SetResult(!timedOut);
}
}
The only limitation is that this cannot be used with a Mutex.
This can be used like so:
await myAutoResetEvent.WaitOneAsync(cancellationToken).ConfigureAwait(false);
Another approach to consider would be to use HttpSelfHostServer (System.Web.Http.SelfHost.dll) and leave all of the threading details to its implementation.
var config = new HttpSelfHostConfiguration("http://localhost:9999");
var tcs = new TaskCompletionSource<Uri>();
using (var server = new HttpSelfHostServer(config, new MessageHandler(tcs)))
{
await server.OpenAsync();
await tcs.Task;
await server.CloseAsync();
}
return tcs.Task.Result;
class MessageHandler : HttpMessageHandler
{
private readonly TaskCompletionSource<Uri> _task;
public MessageHandler(TaskCompletionSource<Uri> task)
{
_task = task;
}
protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
_task.SetResult(request.RequestUri);
return Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK));
}
}
