I'm using Blazor server-side (with pre-render disabled) with .NET 6. My page looks like:
<MyComponentA />
<MyComponentB />
and the components both look like:
@inject MyDbService db
...
@code {
protected override async Task OnParametersSetAsync()
{
var v = await db.SomeSelect();
// ... use v
}
}
MyDbService is a Scoped service that uses EFCore and has a DbContext member, so there is only one active database connection. It's an external requirement for this task to only have a single database connection -- I don't want to use a Transient service instead.
This code causes a runtime error because the DbContext throws an exception if two components both try to use it concurrently. (i.e. DbContext is not re-entrant).
(Aside: if I understand correctly, the flow is that db.SomeSelect in MyComponentA starts, the await lets execution of the page continue until it reaches db.SomeSelect in MyComponentB, and that makes a second request on the same thread while the first call is still in progress.)
My question is: Is there a tidy way to make MyComponentB not make its database call until MyComponentA has finished initializing?
One option would be to make MyComponentB do nothing until a specific function is called, and pass that function as a parameter to MyComponentA to call when it's finished loading. But that feels pretty clunky and spaghetti-like, so I wonder if there is a better way.
Here's a lightweight scheduler that uses TaskCompletionSource objects to manually control Tasks passed back to the caller.
This is a simple demo: it gets the time, with a configurable delay, from a single source that throws an exception if you try to run it in parallel.
You should be able to apply this pattern to schedule sequential requests into your data pipeline.
The Scoped Queue Service:
public class GetTheTime : IAsyncDisposable
{
private bool _processing;
private readonly Queue<TimeRequest> _timeQueue = new();
private Task _queueTask = Task.CompletedTask;
private bool _disposing;
// Dedicated lock object; locking on _queueTask itself would be unsafe
// because it gets reassigned while the lock is held
private readonly object _lockObj = new();
// only way to get the time
public Task<string> GetTime(int delay)
{
var value = new TimeRequest(delay);
// Queue the request and check, inside the lock for thread safety,
// whether the queue service is running; if not, start it
lock (_lockObj)
{
_timeQueue.Enqueue(value);
if (_queueTask.IsCompleted)
_queueTask = this.QueueService();
}
// returns the manually controlled Task to the caller, who can await it
return value.CompletionSource.Task;
}
private async Task QueueService()
{
// loop through the queue and run the enqueued requests till it's empty
while (_timeQueue.Count > 0)
{
if (_disposing)
break;
var value = _timeQueue.Dequeue();
// do the work and wait for it to complete
var result = await _getTime(value.Delay);
value.CompletionSource.TrySetResult(result);
}
}
private async Task<string> _getTime(int delay)
{
// If more than one of me is running go BANG
if (_processing)
throw new Exception("Bang!");
_processing = true;
// Emulate an async database call
await Task.Delay(delay);
_processing = false;
return DateTime.Now.ToLongTimeString();
}
public async ValueTask DisposeAsync()
{
_disposing = true;
await _queueTask;
}
private readonly struct TimeRequest
{
public int Delay { get; }
public TaskCompletionSource<string> CompletionSource { get; } = new TaskCompletionSource<string>();
public TimeRequest(int delay)
=> Delay = delay;
}
}
A simple Component:
@inject GetTheTime service
<div class="alert alert-info">
@this.message
</div>
@code {
[Parameter] public int Delay { get; set; } = 1000;
private string message = "Not Started";
protected async override Task OnInitializedAsync()
{
message = "Processing";
message = await service.GetTime(this.Delay);
}
}
And a demo page:
@page "/"
<PageTitle>Index</PageTitle>
<h1>Hello, world!</h1>
Welcome to your new app.
<Component Delay="3000" />
<Component Delay="2000" />
<Component Delay="1000" />
Related
I'm looking for an approach to locking that, by default, makes sure that all calls to a single API are run mutually exclusive using distributed locking. However, at the same time I need the option to instead lock larger blocks of code (critical procedures) containing several calls to that API. Those calls should still be run mutually exclusive. In those cases the approach should be re-entrant, so that each call isn't blocked because the block of code it is in already holds the lock. It should also support re-entrancy if there are several methods nested that lock sections of code.
Examples of use:
// Should have lock registered by default (f.ex. in HttpMessageHandler)
await _deviceClient.PerformAction();
async Task CriticalProcedure()
{
// Should only use one lock that is reused in nested code (re-entrant)
await using (await _reentrantLockProvider.AcquireLockAsync())
{
await _deviceClient.TriggerAction();
await SharedCriticalProcedure();
}
// Should only dispose lock at this point
}
async Task SharedCriticalProcedure()
{
await using (await _customLockProvider.AcquireLockAsync())
{
await _deviceClient.HardReset();
await _deviceClient.Refresh();
}
}
// Should be forced to run sequentially even though they are not awaited (mutex)
var task1 = _deviceClient.PerformAction1();
var task2 = _deviceClient.PerformAction2();
await Task.WhenAll(task1, task2);
Background:
My team is working on a WebAPI that is responsible for making calls to hardware devices. When an endpoint in our API is called, we get a header that identifies the hardware device in question (used at startup to configure the base URL of our HttpClients), and we make one or more calls to that device's API. We have the following limitations:
A device shouldn't be called when it is already busy with a request (mutual exclusion)
Some procedures against the device (blocks of code containing several calls) are critical and shouldn't be interrupted by other calls to the device (why I want re-entry)
A user may run multiple requests to our API simultaneously, so the locking should work across requests
Our WebAPI may have multiple deployments, so the locking should be distributed
We use Refit to describe the API of our hardware devices and create HttpClients
I have created the following solution, which I believe works. However, it seems clumsy and overengineered, mostly because HttpMessageHandlers have unpredictable lifetimes and are not scoped to the request, so I needed to use TraceIdentifier and a dictionary to enable re-entry during the request lifecycle.
// In startup
services.AddSingleton<IReentrantLockProvider, ReentrantLockProvider>();
services
.AddHttpClient(nameof(IDeviceClient))
.AddTypedClient(client => RestService.For<IDeviceClient>(client, refitSettings))
.ConfigureHttpClient((provider, client) => ConfigureHardwareBaseUrl())
.AddHttpMessageHandler<HardwareMutexMessageHandler>();
public class HardwareMutexMessageHandler : DelegatingHandler
{
private readonly IReentrantLockProvider _reentrantPanelLockProvider;
private readonly IHttpContextAccessor _httpContextAccessor;
private readonly ConcurrentDictionary<string, object> _locks = new();
public HardwareMutexMessageHandler(IReentrantLockProvider reentrantPanelLockProvider, IHttpContextAccessor httpContextAccessor)
{
_reentrantPanelLockProvider = reentrantPanelLockProvider;
_httpContextAccessor = httpContextAccessor;
}
protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
await using (await _reentrantPanelLockProvider.AcquireLockAsync(cancellationToken))
{
var hardwareId = _httpContextAccessor.HttpContext.Request.Headers["HardwareId"];
var mutex = _locks.GetOrAdd(hardwareId, _ => new());
// This is only used to handle cases where developer chooses to batch calls or forgets to await a call
lock (mutex)
{
return base.SendAsync(request, cancellationToken).Result;
}
}
}
}
public class ReentrantLockProvider : IReentrantLockProvider
{
private readonly IDistributedLockProvider _distributedLockProvider;
private readonly IHttpContextAccessor _httpContextAccessor;
private readonly ConcurrentDictionary<string, ReferenceCountedDisposable> _lockDictionary;
private readonly object _lockVar = new();
public ReentrantLockProvider(IDistributedLockProvider distributedLockProvider, IHttpContextAccessor httpContextAccessor)
{
_distributedLockProvider = distributedLockProvider;
_httpContextAccessor = httpContextAccessor;
_lockDictionary = new ConcurrentDictionary<string, ReferenceCountedDisposable>();
}
public async Task<IAsyncDisposable> AcquireLockAsync(CancellationToken cancellationToken = default)
{
var hardwareId = _httpContextAccessor.HttpContext.Request.Headers["HardwareId"];
var requestId = _httpContextAccessor.HttpContext.TraceIdentifier;
ReferenceCountedDisposable referenceCountedLock;
lock (_lockVar)
{
// Re-entry: if this request already holds the lock, just bump the ref count
if (_lockDictionary.TryGetValue(requestId, out referenceCountedLock))
{
referenceCountedLock.RegisterReference();
return referenceCountedLock;
}
var acquireLockTask = _distributedLockProvider.AcquireLockAsync(hardwareId, timeout: null, cancellationToken);
referenceCountedLock = new ReferenceCountedDisposable(async () =>
await RemoveLock(acquireLockTask.Result, requestId)
);
_lockDictionary.TryAdd(requestId, referenceCountedLock);
}
return referenceCountedLock;
}
private async Task RemoveLock(IDistributedSynchronizationHandle acquiredLock, string correlationId)
{
ValueTask disposeAsyncTask;
lock (_lockVar)
{
disposeAsyncTask = acquiredLock.DisposeAsync();
_ = _lockDictionary.TryRemove(correlationId, out _);
}
await disposeAsyncTask;
}
}
public class ReferenceCountedDisposable : IAsyncDisposable
{
private readonly Func<Task> _asyncDispose;
private int _refCount;
public ReferenceCountedDisposable(Func<Task> asyncDispose)
{
_asyncDispose = asyncDispose;
_refCount = 1;
}
public void RegisterReference()
{
Interlocked.Increment(ref _refCount);
}
public async ValueTask DisposeAsync()
{
var references = Interlocked.Decrement(ref _refCount);
if (references == 0)
{
await _asyncDispose();
}
else if (references < 0)
{
throw new InvalidOperationException("Can't dispose multiple times");
}
else
{
GC.SuppressFinalize(this);
}
}
}
I have a SignalR app on .NET Core 3.1, kind of a large chat app, and I am trying to add two BackgroundServices.
The BackgroundServices are setup to run for as long as the ASP.NET app runs.
The first BackgroundService has a very fast main loop (50 ms) and seems to work well.
The second BackgroundService has a much longer main loop (1000 ms) and seems to start randomly, stop executing randomly, and then restart executing again ... randomly. It is almost as if the second service goes to sleep for a long period of time (30 to 90 seconds) and then wakes up again with its object state preserved.
Both BackgroundServices have the same base code with different Delays.
Is it possible to have multiple, independent, non-ending, BackgroundServices? If so, then what am I doing wrong?
I have the services registered like this ...
_services.AddSimpleInjector(_simpleInjectorContainer, options =>
{
options.AddHostedService<SecondaryBackgroundService>();
options.AddHostedService<PrimaryBackgroundService>();
// AddAspNetCore() wraps web requests in a Simple Injector scope.
options.AddAspNetCore()
// Ensure activation of a specific framework type to be created by
// Simple Injector instead of the built-in configuration system.
.AddControllerActivation()
.AddViewComponentActivation()
.AddPageModelActivation()
.AddTagHelperActivation();
});
And I have two classes (PrimaryBackgroundService/SecondaryBackgroundService) that have this ...
public class SecondaryBackgroundService : BackgroundService
{
protected override async Task ExecuteAsync(CancellationToken cancellationToken)
{
await Task.Factory.StartNew(async () =>
{
// loop until a cancellation is requested
while (!cancellationToken.IsCancellationRequested)
{
//await Task.Delay(TimeSpan.FromMilliseconds(50), cancellationToken);
await Task.Delay(TimeSpan.FromMilliseconds(1000), cancellationToken);
try
{
await _doWorkDelegate();
}
catch (Exception ex)
{
}
}
}, cancellationToken);
}
}
Should I set up a single BackgroundService that spins off two different Tasks, each in its own thread? Should I be using IHostedService instead?
I need to make sure that the second BackgroundService runs every second. Also, I need to make sure that the second BackgroundService never impacts the faster running primary BackgroundService.
UPDATE:
I changed the code to use a Timer, as suggested, but now I am struggling with calling an async Task from a Timer event.
Here is the class I created with the different options that work and do not work.
// used this as the base: https://github.com/aspnet/Hosting/blob/master/src/Microsoft.Extensions.Hosting.Abstractions/BackgroundService.cs
public abstract class RecurringBackgroundService : IHostedService, IDisposable
{
private Timer _timer;
protected int TimerIntervalInMilliseconds { get; set; } = 250;
// OPTION 1. This causes strange behavior; random starts and stops
/*
protected abstract Task DoRecurringWork();
private async void OnTimerCallback(object notUsedTimerState) // use "async void" for event handlers
{
try
{
await DoRecurringWork();
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
*/
// OPTION 2. This causes strange behavior; random starts and stops
/*
protected abstract Task DoRecurringWork();
private void OnTimerCallback(object notUsedTimerState)
{
try
{
var tf = new TaskFactory(System.Threading.CancellationToken.None, TaskCreationOptions.None, TaskContinuationOptions.None, TaskScheduler.Default);
tf.StartNew(async () =>
{
await DoRecurringWork();
})
.Unwrap()
.GetAwaiter()
.GetResult();
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
*/
// OPTION 3. This works but requires the derived class to use "async void"
/*
protected abstract void DoRecurringWork();
private void OnTimerCallback(object notUsedTimerState)
{
try
{
DoRecurringWork(); // use "async void" in the derived class
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
*/
// OPTION 4. This works just like OPTION 3 and allows the derived class to use a Task
protected abstract Task DoRecurringWork();
protected async void DoRecurringWorkInternal() // use "async void"
{
await DoRecurringWork();
}
private void OnTimerCallback(object notUsedTimerState)
{
try
{
DoRecurringWorkInternal(); // call the "async void" wrapper
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
public virtual Task StartAsync(CancellationToken cancellationToken)
{
// https://stackoverflow.com/questions/684200/synchronizing-a-timer-to-prevent-overlap
// do a single call timer pulse
_timer = new Timer(OnTimerCallback, null, this.TimerIntervalInMilliseconds, Timeout.Infinite);
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
try { _timer.Change(Timeout.Infinite, 0); } catch {; }
return Task.CompletedTask;
}
public void Dispose()
{
try { _timer.Change(Timeout.Infinite, 0); } catch {; }
try { _timer.Dispose(); } catch {; }
}
}
Is OPTION 3 and/or OPTION 4 correct?
I have confirmed that OPTION 3 and OPTION 4 are overlapping. How can I stop them from overlapping? (UPDATE: use OPTION 1)
UPDATE
Looks like OPTION 1 was correct after all.
Stephen Cleary was correct. After digging and digging into the code I found a Task that was stalling execution inside the _doWorkDelegate() method. The random starts and stops were caused by an HTTP call that was failing. Once I fixed that (with a fire-and-forget), OPTION 1 started working as expected.
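For reference, the fire-and-forget mentioned above might look something like this sketch; CallFlakyEndpointAsync is a hypothetical stand-in for the failing HTTP call, and the point is that the work delegate no longer awaits it:

// Don't let a slow or failing HTTP call stall the main loop.
_ = Task.Run(async () =>
{
    try
    {
        await CallFlakyEndpointAsync(); // hypothetical name
    }
    catch (Exception ex)
    {
        _logger.LogWarning(ex, "Background HTTP call failed");
    }
});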
I would recommend writing two timed background tasks as shown in the documentation:
Timed background tasks documentation
That way they are independent and isolated.
public class PrimaryBackgroundService : IHostedService, IDisposable
{
private readonly ILogger<PrimaryBackgroundService> _logger;
private Timer _timer;
public PrimaryBackgroundService(ILogger<PrimaryBackgroundService> logger)
{
_logger = logger;
}
public Task StartAsync(CancellationToken stoppingToken)
{
_logger.LogInformation("PrimaryBackgroundService StartAsync");
TimeSpan waitTillStart = TimeSpan.Zero;
TimeSpan intervalBetweenWork = TimeSpan.FromMilliseconds(50);
_timer = new Timer(DoWork, null, waitTillStart, intervalBetweenWork);
return Task.CompletedTask;
}
private void DoWork(object state)
{
_logger.LogInformation("PrimaryBackgroundService DoWork");
// ... do work
}
public Task StopAsync(CancellationToken stoppingToken)
{
_logger.LogInformation("PrimaryBackgroundService is stopping.");
_timer?.Change(Timeout.Infinite, 0);
return Task.CompletedTask;
}
public void Dispose()
{
_timer?.Dispose();
}
}
Create the SecondaryBackgroundService using similar code and register both as you did before:
options.AddHostedService<SecondaryBackgroundService>();
options.AddHostedService<PrimaryBackgroundService>();
Note that if you want to use any dependency injection, you have to inject IServiceScopeFactory into the background service constructor and call scopeFactory.CreateScope().
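A minimal sketch of that, assuming a scoped service named IMyScopedService (a hypothetical placeholder):

public class PrimaryBackgroundService : IHostedService, IDisposable
{
    private readonly IServiceScopeFactory _scopeFactory;
    private Timer _timer;

    public PrimaryBackgroundService(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public Task StartAsync(CancellationToken stoppingToken)
    {
        _timer = new Timer(DoWork, null, TimeSpan.Zero, TimeSpan.FromMilliseconds(50));
        return Task.CompletedTask;
    }

    private void DoWork(object state)
    {
        // Create a scope per tick so scoped dependencies resolve correctly.
        using (var scope = _scopeFactory.CreateScope())
        {
            var service = scope.ServiceProvider.GetRequiredService<IMyScopedService>();
            // ... do work with the scoped service
        }
    }

    public Task StopAsync(CancellationToken stoppingToken)
    {
        _timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }

    public void Dispose() => _timer?.Dispose();
}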
I'm working on a .NET Core solution that takes a backup of storage files from another microservice. Because this process takes a long time, we decided to run this routine as a background task. Following this link:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-2.1
I have implemented the background work using queued background tasks, like the following:
public interface IBackgroundTaskQueue
{
void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem);
Task<Func<CancellationToken, Task>> DequeueAsync(
CancellationToken cancellationToken);
}
public class BackgroundTaskQueue : IBackgroundTaskQueue
{
private ConcurrentQueue<Func<CancellationToken, Task>> _workItems =
new ConcurrentQueue<Func<CancellationToken, Task>>();
private SemaphoreSlim _signal = new SemaphoreSlim(0);
public void QueueBackgroundWorkItem(
Func<CancellationToken, Task> workItem)
{
if (workItem == null)
{
throw new ArgumentNullException(nameof(workItem));
}
_workItems.Enqueue(workItem);
_signal.Release();
}
public async Task<Func<CancellationToken, Task>> DequeueAsync(
CancellationToken cancellationToken)
{
await _signal.WaitAsync(cancellationToken);
_workItems.TryDequeue(out var workItem);
return workItem;
}
}
public class QueuedHostedService : BackgroundService
{
private readonly ILogger _logger;
public QueuedHostedService(IBackgroundTaskQueue taskQueue,
ILoggerFactory loggerFactory)
{
TaskQueue = taskQueue;
_logger = loggerFactory.CreateLogger<QueuedHostedService>();
}
public IBackgroundTaskQueue TaskQueue { get; }
protected async override Task ExecuteAsync(
CancellationToken cancellationToken)
{
_logger.LogInformation("Queued Hosted Service is starting.");
while (!cancellationToken.IsCancellationRequested)
{
var workItem = await TaskQueue.DequeueAsync(cancellationToken);
try
{
await workItem(cancellationToken);
}
catch (Exception ex)
{
_logger.LogError(ex,
$"Error occurred executing {nameof(workItem)}.");
}
}
_logger.LogInformation("Queued Hosted Service is stopping.");
}
}
In the controller action method I did this:
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult TakeBackup()
{
// Process #1: update latest backup time in setting table.
var _setting = _settingService.FindByKey("BackupData");
var data = JsonConvert.DeserializeObject<BackUpData>(_setting.Value);
data.LatestBackupTime = DateTime.UtcNow;
_setting.Value = JsonConvert.SerializeObject(data);
_settingService.AddOrUpdate(_setting);
// Process #2: Begin a background task to execute the backup.
_queue.QueueBackgroundWorkItem(async token =>
{
// Instead of this stuff I will call the API I want to consume.
var guid = Guid.NewGuid().ToString();
for (int delayLoop = 0; delayLoop < 3; delayLoop++)
{
_logger.LogInformation(
$"Queued Background Task {guid} is running. {delayLoop}/3");
await Task.Delay(TimeSpan.FromSeconds(5), token);
}
_logger.LogInformation(
$"Queued Background Task {guid} is complete. 3/3");
// Here I need to redirect to the index view after the task is finished (my issue) ..
RedirectToAction("Index",new {progress="Done"});
});
return RedirectToAction("Index");
}
}
The logger information displays successfully.
All I need is a way to reload the Index view after the background task has finished successfully, but for some reason I don't know, it can't be redirected.
The Index action method is like that :
public async Task<IActionResult> Index()
{
var links = new List<LinkObject>();
var files = await _storageProvider.GetAllFiles(null, "backup");
foreach (var f in files)
{
var file = f;
if (f.Contains("/devstoreaccount1/"))
{
file = file.Replace("/devstoreaccount1/", "");
}
file = file.TrimStart('/');
links.Add(new LinkObject()
{
Method = "GET",
Href = await _storageProvider.GetSasUrl(file),
Rel = f
});
}
return View(links);
}
Thanks!
If you want the current page to interact with a long running task, you don't necessarily need the overhead of BackgroundService. That feature is for cases where there is no page to interact with.
First, the server cannot call a client to tell it to reload. At least not without the use of WebSockets, which would definitely be overkill for this. Instead, you will use Javascript (AJAX) to make background calls to poll for the status of your task. This is a common pattern used by any complex web application.
On the server, you'll create a normal async action method that takes all the time it needs to complete the task.
The web page (after it has loaded) will call this action method using AJAX and will ignore the response. That call will eventually time out, but it's not a concern, you don't need the response and the server will continue processing the action even though the socket connection has terminated.
The web page will subsequently begin polling (using AJAX) a different action method which will tell you whether the task has completed or not. You'll need some shared state on the server, perhaps a database table that gets updated by your background task, etc. This method should always return very quickly - all it needs to do is read the present state of the task and return that status.
The web page will continue polling that method until the response changes (e.g. from RUNNING to COMPLETED.) Once the status changes, then you can reload the page using Javascript or whatever you need to do in response to the task completing.
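A sketch of what the server side of that pattern could look like; ITaskStatusStore is a hypothetical shared-state abstraction (it could, for example, be backed by the setting table the question already updates), and DoLongBackupAsync stands in for the real work:

public class BackupController : Controller
{
    private readonly ITaskStatusStore _statusStore; // hypothetical shared state

    public BackupController(ITaskStatusStore statusStore) => _statusStore = statusStore;

    // The page calls this via AJAX after load and ignores the (eventual) response.
    [HttpPost]
    public async Task<IActionResult> RunBackup()
    {
        await _statusStore.SetStatusAsync("backup", "RUNNING");
        await DoLongBackupAsync(); // takes as long as it needs
        await _statusStore.SetStatusAsync("backup", "COMPLETED");
        return Ok();
    }

    // The page polls this until the status flips to COMPLETED, then reloads itself.
    [HttpGet]
    public async Task<IActionResult> BackupStatus()
    {
        return Json(new { status = await _statusStore.GetStatusAsync("backup") });
    }

    // Placeholder for the actual backup work.
    private Task DoLongBackupAsync() => Task.Delay(TimeSpan.FromMinutes(5));
}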
Note: There are some nuances here, like the cost of holding client connections that you expect to time out. If you care you can optimize these away but in most cases it won't be an issue and it adds complexity.
I have a complex situation, but I will try to keep it short and mention only the important details. I am trying to implement task-based job handling. Here is the class for that:
internal class TaskBasedJob : IJob
{
public WaitHandle WaitHandle { get; }
public JobStatus Status { get; private set; }
private readonly Timer _timer;
public TaskBasedJob(Func<Task<JobStatus>> action, TimeSpan interval, TimeSpan delay)
{
Status = JobStatus.NotExecuted;
var semaphore = new SemaphoreSlim(0, 1);
WaitHandle = semaphore.AvailableWaitHandle;
_timer = new Timer(async x =>
{
// return to prevent duplicate executions
// Semaphore starts locked so WaitHandle works properly
if (semaphore.CurrentCount == 0 && Status != JobStatus.NotExecuted)
{
Status = JobStatus.Failure;
return;
}
if(Status != JobStatus.NotExecuted)
await semaphore.WaitAsync();
try
{
await action();
}
finally
{
semaphore.Release();
}
}, null, delay, interval);
}
}
Below is the scheduler class :
internal class Scheduler : IScheduler
{
private readonly ILogger _logger;
private readonly ConcurrentDictionary<string, IJob> _timers = new ConcurrentDictionary<string, IJob>();
public Scheduler(ILogger logger)
{
_logger = logger;
}
public IJob ScheduleAsync(string jobName, Func<Task<JobStatus>> action, TimeSpan interval, TimeSpan delay = default(TimeSpan))
{
if (!_timers.ContainsKey(jobName))
{
lock (_timers)
{
if (!_timers.ContainsKey(jobName))
_timers.TryAdd(jobName, new TaskBasedJob(action, interval, delay));
}
}
return _timers[jobName];
}
public IReadOnlyDictionary<string, IJob> GetJobs()
{
return _timers;
}
}
Inside this library I have a service like the one below. The idea of this service is simply to fetch some data into the dictionary called _accessInfos via its async method. As you can see, the constructor already schedules the job that fetches the data.
internal class AccessInfoStore : IAccessInfoStore
{
private readonly ILogger _logger;
private readonly Func<HttpClient> _httpClientFunc;
private volatile Dictionary<string, IAccessInfo> _accessInfos;
private readonly IScheduler _scheduler;
private static string JobName = "AccessInfoProviderJob";
public AccessInfoStore(IScheduler scheduler, ILogger logger, Func<HttpClient> httpClientFunc)
{
_accessInfos = new Dictionary<string, IAccessInfo>();
_logger = logger;
_httpClientFunc = httpClientFunc;
_scheduler = scheduler;
scheduler.ScheduleAsync(JobName, FetchAccessInfos, TimeSpan.FromMinutes(1));
}
public IJob FetchJob => _scheduler.GetJobs()[JobName];
private async Task<JobStatus> FetchAccessInfos()
{
using (var client = _httpClientFunc())
{
accessIds = //calling a webservice
_accessInfos = accessIds;
return JobStatus.Success;
}
}
}
All of this code is inside another library that I have referenced in my ASP.NET Core 2.1 project. In the Startup class I have a call like this:
//adding services
...
services.AddScoped<IScheduler, Scheduler>();
services.AddScoped<IAccessInfoStore, AccessInfoStore>();
var accessInfoStore = services.BuildServiceProvider().GetService<IAccessInfoStore>();
accessInfoStore.FetchJob.WaitHandle.WaitOne();
The first time, the WaitOne() call does not work, so the data is not loaded (_accessInfos is empty), but if I refresh the page I can see the data loaded (_accessInfos is not empty). As far as I know, WaitOne() should block thread execution until my job has completed.
Does anybody know why WaitOne() does not work properly, or what I might be doing wrong?
EDIT 1:
The Scheduler only stores all IJobs in a concurrent dictionary so we can retrieve them later if needed, mainly to show them on a health page. Every time we insert a new TaskBasedJob into the dictionary, its constructor runs, and at the end we use a Timer to re-execute the job after some interval. To make this thread-safe I use the SemaphoreSlim class, and from there I expose the WaitHandle. This is only for those rare cases where I need to turn a method from async to sync; in general the job executes asynchronously.
What I expect: WaitOne() should block execution of the current thread and wait until my scheduled job has executed, then let the current thread continue. In my case the current thread is the one running the Configure method in the Startup class.
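For context, SemaphoreSlim.AvailableWaitHandle stays unsignaled while the semaphore's count is zero, so WaitOne() should indeed block until Release() is called. A standalone sketch of that behavior, assuming nothing beyond the BCL:

var semaphore = new SemaphoreSlim(0, 1);
WaitHandle handle = semaphore.AvailableWaitHandle;

// Release the semaphore from another thread after one second.
_ = Task.Run(async () =>
{
    await Task.Delay(1000);
    semaphore.Release();
});

handle.WaitOne(); // blocks for roughly one second, until Release() runs
// Note: waiting on AvailableWaitHandle does NOT decrement the count.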
A colleague of Rajmond here. I figured out our issue. The waiting itself works fine; our issue is simply that each call to IServiceCollection.BuildServiceProvider() creates a new service provider (and thus a different object is created, even for a Singleton registration). A simple way to try this out:
var serviceProvider1 = services.BuildServiceProvider();
var hashCode1 = serviceProvider1.GetService<IAccessInfoStore>().GetHashCode();
var hashCode2 = serviceProvider1.GetService<IAccessInfoStore>().GetHashCode();
var serviceProvider2 = services.BuildServiceProvider();
var hashCode3 = serviceProvider2.GetService<IAccessInfoStore>().GetHashCode();
var hashCode4 = serviceProvider2.GetService<IAccessInfoStore>().GetHashCode();
hashCode1 and hashCode2 are the same, as are hashCode3 and hashCode4 (because of the Singleton registration), but hashCode1/hashCode2 are not the same as hashCode3/hashCode4 (because they come from different service providers).
The real fix will probably be some check in that IAccessInfoStore that will block internally until the job has finished the first time.
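A minimal sketch of what such a check might look like, assuming a TaskCompletionSource that is completed by the first successful fetch (everything beyond the question's own names is hypothetical):

internal class AccessInfoStore : IAccessInfoStore
{
    // Completed once the first fetch has populated _accessInfos.
    private readonly TaskCompletionSource<bool> _firstLoad =
        new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);

    private async Task<JobStatus> FetchAccessInfos()
    {
        // ... existing fetch logic from the question ...
        _firstLoad.TrySetResult(true);
        return JobStatus.Success;
    }

    // Callers that need the data at startup can await this instead of
    // blocking on the semaphore's WaitHandle.
    public Task WaitForFirstLoadAsync() => _firstLoad.Task;
}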
Cheers!
Example of the functionality: there are 20 users and they all click the send button at almost the same time, so the method calls stack up in a queue; the first user's message is sent and its response received, then the second, the third, and so on. Users won't chat with other people but with a device whose response is pretty fast.
So I am trying to queue the Task which sends the message.
I found code samples that use Task queuing, as shown in Example 1 and Example 2.
Example 1
public class SerialQueue
{
readonly object _locker = new object();
WeakReference<Task> _lastTask;
public Task Enqueue(Action action)
{
return Enqueue<object>(() => {
action();
return null;
});
}
public Task<T> Enqueue<T>(Func<T> function)
{
lock (_locker)
{
Task lastTask = null;
Task<T> resultTask = null;
if (_lastTask != null && _lastTask.TryGetTarget(out lastTask))
{
resultTask = lastTask.ContinueWith(_ => function());
}
else
{
resultTask = Task.Run(function);
}
_lastTask = new WeakReference<Task>(resultTask);
return resultTask;
}
}
}
Example 2
public class TaskQueue
{
private readonly SemaphoreSlim _semaphoreSlim;
public TaskQueue()
{
_semaphoreSlim = new SemaphoreSlim(1);
}
public async Task<T> Enqueue<T>(Func<Task<T>> taskGenerator)
{
await _semaphoreSlim.WaitAsync();
try
{
return await taskGenerator();
}
finally
{
_semaphoreSlim.Release();
}
}
public async Task Enqueue(Func<Task> taskGenerator)
{
await _semaphoreSlim.WaitAsync();
try
{
await taskGenerator();
}
finally
{
_semaphoreSlim.Release();
}
}
}
The problem is that when I pass a task I want to queue (Example 3), each time I press the button the tasks are still executed at the same time and interrupt each other.
Example 3
[HttpPost(Name = "add-message")]
public async Task<IActionResult> PostMessage([FromBody] MessengerViewModel messengerViewModel)
{
TaskQueue taskQueue = new TaskQueue();
SerialQueue serialQueue = new SerialQueue();
await taskQueue.Enqueue(() => SendMessage(messengerViewModel.PhoneNr, messengerViewModel.MessageBody,
messengerViewModel.ContactId, messengerViewModel.State));
//I'm not running tasks at same time, using one or other at time
await serialQueue.Enqueue(() => SendMessage(messengerViewModel.PhoneNr, messengerViewModel.MessageBody,
messengerViewModel.ContactId, messengerViewModel.State));
return Ok();
}
How could I solve this problem and have each click queue its task?
Your problem is that you create a new TaskQueue and SerialQueue every time. Each time a user clicks/invokes PostMessage, a new queue is created, so the task is the first task in that queue and is executed directly.
You should use a static/singleton queue so each click/invoke works on the same queue object.
But that would cause problems when you scale your web app across multiple servers. To that end you could use something like Azure Queue Storage in combination with Azure Functions.
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
services.AddSingleton<TaskQueue>();
services.AddSingleton<SerialQueue>();
// the rest
}
SomeController.cs
[HttpPost(Name = "add-message")]
public async Task<IActionResult> PostMessage(
[FromBody] MessengerViewModel messengerViewModel,
[FromServices] TaskQueue taskQueue,
[FromServices] SerialQueue serialQueue)
{
await taskQueue.Enqueue(
() => SendMessage(
messengerViewModel.PhoneNr,
messengerViewModel.MessageBody,
messengerViewModel.ContactId,
messengerViewModel.State));
//I'm not running tasks at same time, using one or other at time
await serialQueue.Enqueue(
() => SendMessage(
messengerViewModel.PhoneNr,
messengerViewModel.MessageBody,
messengerViewModel.ContactId,
messengerViewModel.State));
return Ok();
}