I have an ASP.NET Core Web API which uses queued background tasks as described here.
I've used the code sample provided and added the IBackgroundTaskQueue, BackgroundTaskQueue and QueuedHostedService exactly as described in the article.
In my Startup.cs, I'm registering only one QueuedHostedService instance as follows: services.AddHostedService<QueuedHostedService>();
Tasks coming from the WebApi's controller are enqueued and then dequeued and executed one by one by the QueuedHostedService.
I would like to allow more than one background processing thread to dequeue and execute the incoming Tasks.
The most straightforward solution I can come up with is to register more than one instance of the QueuedHostedService in my Startup.cs, i.e. something like this:
int maxNumOfParallelOperations;
var isValid = int.TryParse(Configuration["App:MaxNumOfParallelOperations"], out maxNumOfParallelOperations);
maxNumOfParallelOperations = isValid && maxNumOfParallelOperations > 0 ? maxNumOfParallelOperations : 2;
for (int index = 0; index < maxNumOfParallelOperations; index++)
{
services.AddHostedService<QueuedHostedService>();
}
I've also noticed that, thanks to the signal SemaphoreSlim in BackgroundTaskQueue, the QueuedHostedService instances are not actually spinning all the time; they only wake up when a new Task is available in the queue.
This solution seems to work just fine in my tests.
But in this particular use case, is it really a valid, recommended solution for parallel processing?
You can use an IHostedService with a number of threads to consume the IBackgroundTaskQueue.
Here is a basic implementation. I assume you're using the same IBackgroundTaskQueue and BackgroundTaskQueue described here.
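For reference, that queue is essentially a ConcurrentQueue guarded by a SemaphoreSlim; a minimal sketch (it may differ in details from your copy of the docs sample):
public interface IBackgroundTaskQueue
{
    void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem);
    Task<Func<CancellationToken, Task>> DequeueAsync(CancellationToken cancellationToken);
}
public class BackgroundTaskQueue : IBackgroundTaskQueue
{
    // requires System, System.Collections.Concurrent and System.Threading
    private readonly ConcurrentQueue<Func<CancellationToken, Task>> _workItems = new ConcurrentQueue<Func<CancellationToken, Task>>();
    private readonly SemaphoreSlim _signal = new SemaphoreSlim(0);
    public void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem)
    {
        if (workItem == null) throw new ArgumentNullException(nameof(workItem));
        _workItems.Enqueue(workItem);
        _signal.Release(); // wakes up exactly one waiting consumer
    }
    public async Task<Func<CancellationToken, Task>> DequeueAsync(CancellationToken cancellationToken)
    {
        await _signal.WaitAsync(cancellationToken); // idle until an item is queued
        _workItems.TryDequeue(out var workItem);
        return workItem;
    }
}
The hosted service that consumes it: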
public class QueuedHostedService : IHostedService
{
private readonly ILogger _logger;
private readonly Task[] _executors;
private readonly int _executorsCount = 2; //--default value: 2
private CancellationTokenSource _tokenSource;
public IBackgroundTaskQueue TaskQueue { get; }
public QueuedHostedService(IBackgroundTaskQueue taskQueue,
ILoggerFactory loggerFactory,
IConfiguration configuration)
{
TaskQueue = taskQueue;
_logger = loggerFactory.CreateLogger<QueuedHostedService>();
if (ushort.TryParse(configuration["App:MaxNumOfParallelOperations"], out var ct))
{
_executorsCount = ct;
}
_executors = new Task[_executorsCount];
}
public Task StartAsync(CancellationToken cancellationToken)
{
_logger.LogInformation("Queued Hosted Service is starting.");
_tokenSource = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
for (var i = 0; i < _executorsCount; i++)
{
// Task.Run (rather than new Task) returns the async lambda's own Task,
// so StopAsync below can actually wait for the executors to finish.
var executorTask = Task.Run(
async () =>
{
// use the linked token so StopAsync can break the loop
while (!_tokenSource.Token.IsCancellationRequested)
{
#if DEBUG
_logger.LogInformation("Waiting background task...");
#endif
var workItem = await TaskQueue.DequeueAsync(_tokenSource.Token);
try
{
#if DEBUG
_logger.LogInformation("Got background task, executing...");
#endif
await workItem(_tokenSource.Token);
}
catch (Exception ex)
{
_logger.LogError(ex,
"Error occurred executing {WorkItem}.", nameof(workItem)
);
}
}
}, _tokenSource.Token);
_executors[i] = executorTask;
}
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
_logger.LogInformation("Queued Hosted Service is stopping.");
_tokenSource.Cancel(); // send the cancellation signal
if (_executors != null)
{
try
{
// wait for the executors to finish
Task.WaitAll(_executors, cancellationToken);
}
catch (OperationCanceledException) { }
catch (AggregateException) { } // the executors end as cancelled once DequeueAsync is cancelled
}
return Task.CompletedTask;
}
}
You need to register the services in the ConfigureServices method of your Startup class.
...
services.AddSingleton<IBackgroundTaskQueue, BackgroundTaskQueue>();
services.AddHostedService<QueuedHostedService>();
...
Additionally, you can set the number of threads in configuration (appsettings.json):
...
"App": {
"MaxNumOfParallelOperations": 4
}
...
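For completeness, enqueueing work from a controller then looks roughly like this (controller name and payload are placeholders; the queue method name follows the docs sample):
[ApiController]
[Route("api/[controller]")]
public class JobsController : ControllerBase
{
    private readonly IBackgroundTaskQueue _taskQueue;
    public JobsController(IBackgroundTaskQueue taskQueue) => _taskQueue = taskQueue;
    [HttpPost]
    public IActionResult Enqueue()
    {
        // the work item runs later on one of the executor tasks
        _taskQueue.QueueBackgroundWorkItem(async token =>
        {
            await Task.Delay(TimeSpan.FromSeconds(5), token); // placeholder for real work
        });
        return Accepted();
    }
}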
Related
I have a BackgroundService that I start from an API Controller.
There should never be more than one BackgroundService running.
How can I check if a job is already running? So I don't start a new one?
API to start a new job and related code
[HttpPost]
public async Task<IActionResult> RunJob(JobMessage msg)
{
if (_queue.Count > 0)
{
return StatusCode(429, "DocumentDistributor is running. Try again later");
}
await _queue.Queue(msg);
return Ok("DocumentDistributor will start in about one minute.");
}
public interface IBackgroundTaskQueue
{
Task Queue(JobMessage message);
Task<JobMessage> Dequeue();
public int Count { get; }
}
public sealed class QueuedHostedService : BackgroundService
{
private readonly IServiceProvider _serviceProvider;
public QueuedHostedService(IServiceProvider serviceProvider)
{
_serviceProvider = serviceProvider;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
try
{
using var scope = _serviceProvider.CreateScope();
var calculator = scope.ServiceProvider.GetRequiredService<QueueDocumentDistributor>();
await calculator.RunService();
}
catch (OperationCanceledException)
{
// Prevent throwing if the Delay is cancelled
}
catch (Exception e)
{
Log.Error(e, "Error in QueuedHostedService");
}
// check queue every 1 minute
await Task.Delay(1000 * 60, stoppingToken);
}
}
}
public class QueueDocumentDistributor
{
private readonly IBackgroundTaskQueue _queue;
private readonly ReportService _service;
public QueueDocumentDistributor(IBackgroundTaskQueue queue, ReportService service)
{
_queue = queue;
_service = service;
}
public async Task RunService()
{
var message = await _queue.Dequeue();
if (message == null) return;
await _service.CreateReports(message);
}
}
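The IBackgroundTaskQueue implementation isn't shown; a minimal in-memory version (assumed here, backed by a ConcurrentQueue) could look like this:
public sealed class BackgroundTaskQueue : IBackgroundTaskQueue
{
    private readonly ConcurrentQueue<JobMessage> _messages = new ConcurrentQueue<JobMessage>();
    public Task Queue(JobMessage message)
    {
        _messages.Enqueue(message);
        return Task.CompletedTask;
    }
    public Task<JobMessage> Dequeue()
    {
        _messages.TryDequeue(out var message);
        return Task.FromResult(message); // null when the queue is empty
    }
    public int Count => _messages.Count;
}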
AddHostedService adds a singleton instance of the IHostedService, so as long as the implementation itself doesn't process work in parallel, the framework guarantees single job execution.
There should never be more than one BackgroundService running.
Only a single instance of the background service will be running per registered type.
How can I check if a job is already running?
It depends on what you mean by "job". If you mean the BackgroundService, it is started by the framework. If you mean some custom payload in the queue, you will need to implement some monitoring manually.
So I don't start a new one?
You don't (usually) start a background service manually. If QueueDocumentDistributor.RunService guarantees single execution of your logic at a time, you are fine.
Based on the provided implementation, it looks like a single queue element is processed at a time.
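For the setup above to resolve, the registrations presumably look something like this (the lifetimes are an assumption; the queue must be a singleton so the controller and the hosted service share it):
services.AddSingleton<IBackgroundTaskQueue, BackgroundTaskQueue>();
services.AddScoped<QueueDocumentDistributor>();
services.AddScoped<ReportService>();
services.AddHostedService<QueuedHostedService>(); // one instance, started by the framework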
Read more:
Background tasks with hosted services in ASP.NET Core
I'm looking for an approach to locking that, by default, makes sure that all calls to a single API are run mutually exclusively using distributed locking. However, at the same time I need the option to instead lock larger blocks of code (critical procedures) containing several calls to that API. Those calls should still be run mutually exclusively. In those cases the approach should be re-entrant, so that each call isn't blocked because the block of code it is in already holds the lock. It should also support re-entrancy when several nested methods lock sections of code.
Examples of use:
// Should have lock registered by default (f.ex. in HttpMessageHandler)
await _deviceClient.PerformAction();
async Task CriticalProcedure()
{
// Should only use one lock that is reused in nested code (re-entrant)
await using (await _reentrantLockProvider.AcquireLockAsync())
{
await _deviceClient.TriggerAction();
await SharedCriticalProcedure();
}
// Should only dispose lock at this point
}
async Task SharedCriticalProcedure()
{
await using (await _customLockProvider.AcquireLockAsync())
{
await _deviceClient.HardReset();
await _deviceClient.Refresh();
}
}
// Should be forced to run sequentially even though they are not awaited (mutex)
var task1 = _deviceClient.PerformAction1();
var task2 = _deviceClient.PerformAction2();
await Task.WhenAll(task1, task2);
Background:
My team is working on a WebAPI that is responsible for making calls to hardware devices. When an endpoint in our API is called, we get a header that identifies the hardware device in question, used in startup to configure the baseUrl of our HttpClients, and we make one or more calls to that API. We have the following limitations:
A device shouldn't be called when it is already busy with a request (mutual exclusion)
Some procedures against the device (blocks of code containing several calls) are critical and shouldn't be interrupted by other calls to the device (why I want re-entry)
A user may run multiple requests to our API simultaneously, so the locking should work across requests
Our WebAPI may have multiple deployments, so the locking should be distributed
We use Refit to describe the API of our hardware devices and create HttpClients
I have created the following solution which I believe works. However, it seems clumsy and overengineered, mostly because HttpMessageHandlers have unpredictable lifetimes, not scoped to request, so I needed to use TraceIdentifier and a dictionary to enable re-entry during the request lifecycle.
// In startup
services.AddSingleton<IReentrantLockProvider, ReentrantLockProvider>();
services.AddHttpContextAccessor(); // both the handler and the lock provider use IHttpContextAccessor
services.AddTransient<HardwareMutexMessageHandler>(); // message handlers must be registered themselves
services
.AddHttpClient(nameof(IDeviceClient))
.AddTypedClient(client => RestService.For<IDeviceClient>(client, refitSettings))
.ConfigureHttpClient((provider, client) => ConfigureHardwareBaseUrl())
.AddHttpMessageHandler<HardwareMutexMessageHandler>();
public class HardwareMutexMessageHandler : DelegatingHandler
{
private readonly IReentrantLockProvider _reentrantPanelLockProvider;
private readonly IHttpContextAccessor _httpContextAccessor;
private readonly ConcurrentDictionary<string, object> _locks = new ConcurrentDictionary<string, object>();
public HardwareMutexMessageHandler(IReentrantLockProvider reentrantPanelLockProvider, IHttpContextAccessor httpContextAccessor)
{
_reentrantPanelLockProvider = reentrantPanelLockProvider;
_httpContextAccessor = httpContextAccessor;
}
protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
await using (await _reentrantPanelLockProvider.AcquireLockAsync(cancellationToken))
{
var hardwareId = _httpContextAccessor.HttpContext.Request.Headers["HardwareId"];
var mutex = _locks.GetOrAdd(hardwareId, _ => new());
// This is only used to handle cases where developer chooses to batch calls or forgets to await a call
lock (mutex)
{
return base.SendAsync(request, cancellationToken).Result;
}
}
}
}
public class ReentrantLockProvider : IReentrantLockProvider
{
private readonly IDistributedLockProvider _distributedLockProvider;
private readonly IHttpContextAccessor _httpContextAccessor;
private readonly ConcurrentDictionary<string, ReferenceCountedDisposable> _lockDictionary;
private readonly object _lockVar = new();
public ReentrantLockProvider(IDistributedLockProvider distributedLockProvider, IHttpContextAccessor httpContextAccessor)
{
_distributedLockProvider = distributedLockProvider;
_httpContextAccessor = httpContextAccessor;
_lockDictionary = new ConcurrentDictionary<string, ReferenceCountedDisposable>();
}
public async Task<IAsyncDisposable> AcquireLockAsync(CancellationToken cancellationToken = default)
{
var hardwareId = _httpContextAccessor.HttpContext.Request.Headers["HardwareId"];
var requestId = _httpContextAccessor.HttpContext.TraceIdentifier;
lock (_lockVar)
{
// re-entrant case: the current request already holds the lock
if (_lockDictionary.TryGetValue(requestId, out var referenceCountedLock))
{
referenceCountedLock.RegisterReference();
return referenceCountedLock;
}
var acquireLockTask = _distributedLockProvider.AcquireLockAsync(hardwareId, timeout: null, cancellationToken);
referenceCountedLock = new ReferenceCountedDisposable(async () =>
await RemoveLock(acquireLockTask.Result, requestId)
);
_lockDictionary.TryAdd(requestId, referenceCountedLock);
return referenceCountedLock;
}
}
private async Task RemoveLock(IDistributedSynchronizationHandle acquiredLock, string correlationId)
{
ValueTask disposeAsyncTask;
lock (_lockVar)
{
disposeAsyncTask = acquiredLock.DisposeAsync();
_ = _lockDictionary.TryRemove(correlationId, out _);
}
await disposeAsyncTask;
}
}
public class ReferenceCountedDisposable : IAsyncDisposable
{
private readonly Func<Task> _asyncDispose;
private int _refCount;
public ReferenceCountedDisposable(Func<Task> asyncDispose)
{
_asyncDispose = asyncDispose;
_refCount = 1;
}
public void RegisterReference()
{
Interlocked.Increment(ref _refCount);
}
public async ValueTask DisposeAsync()
{
var references = Interlocked.Decrement(ref _refCount);
if (references == 0)
{
await _asyncDispose();
}
else if (references < 0)
{
throw new InvalidOperationException("Can't dispose multiple times");
}
else
{
GC.SuppressFinalize(this);
}
}
}
I have a SignalR app in .NET Core 3.1, kind of a large chat app, and I am trying to add two BackgroundServices.
The BackgroundServices are setup to run for as long as the ASP.NET app runs.
The first BackgroundService has a very fast main loop (50 ms) and seems to work well.
The second BackgroundService has a much longer main loop (1000 ms) and seems to start randomly, stop executing randomly, and then re-start executing again ... randomly. It is almost like the second service goes to sleep for a long period of time (30 to 90 seconds) and then wakes up again with its object state preserved.
Both BackgroundServices have the same base code with different Delays.
Is it possible to have multiple, independent, non-ending BackgroundServices? If so, what am I doing wrong?
I have the services registered like this ...
_services.AddSimpleInjector(_simpleInjectorContainer, options =>
{
options.AddHostedService<SecondaryBackgroundService>();
options.AddHostedService<PrimaryBackgroundService>();
// AddAspNetCore() wraps web requests in a Simple Injector scope.
options.AddAspNetCore()
// Ensure activation of a specific framework type to be created by
// Simple Injector instead of the built-in configuration system.
.AddControllerActivation()
.AddViewComponentActivation()
.AddPageModelActivation()
.AddTagHelperActivation();
});
And I have two classes (PrimaryBackgroundService/SecondaryBackgroundService) that have this ...
public class SecondaryBackgroundService : BackgroundService
{
protected override async Task ExecuteAsync(CancellationToken cancellationToken)
{
await Task.Factory.StartNew(async () =>
{
// loop until a cancellation is requested
while (!cancellationToken.IsCancellationRequested)
{
//await Task.Delay(TimeSpan.FromMilliseconds(50), cancellationToken);
await Task.Delay(TimeSpan.FromMilliseconds(1000), cancellationToken);
try
{
await _doWorkDelegate();
}
catch (Exception ex)
{
}
}
}, cancellationToken);
}
}
Should I set up a single BackgroundService that spins off two different Tasks in their own threads? Should I be using IHostedService instead?
I need to make sure that the second BackgroundService runs every second. Also, I need to make sure that the second BackgroundService never impacts the faster-running primary BackgroundService.
UPDATE:
I changed the code to use a Timer, as suggested, but now I am struggling with calling an async Task from a Timer event.
Here is the class I created with the different options that work and do not work.
// used this as the base: https://github.com/aspnet/Hosting/blob/master/src/Microsoft.Extensions.Hosting.Abstractions/BackgroundService.cs
public abstract class RecurringBackgroundService : IHostedService, IDisposable
{
private Timer _timer;
protected int TimerIntervalInMilliseconds { get; set; } = 250;
// OPTION 1. This causes strange behavior; random starts and stops
/*
protected abstract Task DoRecurringWork();
private async void OnTimerCallback(object notUsedTimerState) // use "async void" for event handlers
{
try
{
await DoRecurringWork();
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
*/
// OPTION 2. This causes strange behavior; random starts and stops
/*
protected abstract Task DoRecurringWork();
private void OnTimerCallback(object notUsedTimerState)
{
try
{
var tf = new TaskFactory(System.Threading.CancellationToken.None, TaskCreationOptions.None, TaskContinuationOptions.None, TaskScheduler.Default);
tf.StartNew(async () =>
{
await DoRecurringWork();
})
.Unwrap()
.GetAwaiter()
.GetResult();
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
*/
// OPTION 3. This works but requires the derived class to use "async void"
/*
protected abstract void DoRecurringWork();
private void OnTimerCallback(object notUsedTimerState)
{
try
{
DoRecurringWork(); // use "async void" in the derived class
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
*/
// OPTION 4. This works just like OPTION 3 and allows the derived class to use a Task
protected abstract Task DoRecurringWork();
protected async void DoRecurringWorkInternal() // use "async void"
{
await DoRecurringWork();
}
private void OnTimerCallback(object notUsedTimerState)
{
try
{
DoRecurringWorkInternal(); // call the "async void" wrapper defined above
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
public virtual Task StartAsync(CancellationToken cancellationToken)
{
// https://stackoverflow.com/questions/684200/synchronizing-a-timer-to-prevent-overlap
// do a single call timer pulse
_timer = new Timer(OnTimerCallback, null, this.TimerIntervalInMilliseconds, Timeout.Infinite);
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
try { _timer.Change(Timeout.Infinite, 0); } catch {; }
return Task.CompletedTask;
}
public void Dispose()
{
try { _timer.Change(Timeout.Infinite, 0); } catch {; }
try { _timer.Dispose(); } catch {; }
}
}
Is OPTION 3 and/or OPTION 4 correct?
I have confirmed that OPTION 3 and OPTION 4 are overlapping. How can I stop them from overlapping? (UPDATE: use OPTION 1)
UPDATE
Looks like OPTION 1 was correct after all.
Stephen Cleary was correct. After digging and digging into the code I did find a Task that was stalling the execution under the _doWorkDelegate() method. The random starts and stops were caused by an HTTP call that was failing. Once I fixed that (with a fire-and-forget), OPTION 1 started working as expected.
I would recommend writing two timed background tasks as shown in the documentation:
Timed background tasks documentation
Then they are independent and isolated.
public class PrimaryBackgroundService : IHostedService, IDisposable
{
private readonly ILogger<PrimaryBackgroundService> _logger;
private Timer _timer;
public PrimaryBackgroundService(ILogger<PrimaryBackgroundService> logger)
{
_logger = logger;
}
public Task StartAsync(CancellationToken stoppingToken)
{
_logger.LogInformation("PrimaryBackgroundService StartAsync");
TimeSpan waitTillStart = TimeSpan.Zero;
TimeSpan intervalBetweenWork = TimeSpan.FromMilliseconds(50);
_timer = new Timer(DoWork, null, waitTillStart, intervalBetweenWork);
return Task.CompletedTask;
}
private void DoWork(object state)
{
_logger.LogInformation("PrimaryBackgroundService DoWork");
// ... do work
}
public Task StopAsync(CancellationToken stoppingToken)
{
_logger.LogInformation("PrimaryBackgroundService is stopping.");
_timer?.Change(Timeout.Infinite, 0);
return Task.CompletedTask;
}
public void Dispose()
{
_timer?.Dispose();
}
}
Create the SecondaryBackgroundService using similar code and register both as you did before:
options.AddHostedService<SecondaryBackgroundService>();
options.AddHostedService<PrimaryBackgroundService>();
Note that if you want to use any scoped dependencies, you have to inject IServiceScopeFactory into the background service constructor and call scopeFactory.CreateScope().
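A rough sketch of that scope-per-tick pattern (the repository type here is only a placeholder):
private void DoWork(object state)
{
    // Timer callbacks run outside any request, so resolve scoped services explicitly.
    using var scope = _scopeFactory.CreateScope(); // _scopeFactory injected as IServiceScopeFactory
    var repository = scope.ServiceProvider.GetRequiredService<IMyRepository>(); // placeholder scoped service
    // ... do work with the scoped service
}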
I have a Web API in .NET Core 3.1 which contains a service that has to run every X minutes to perform data migrations (I'm just testing it), but I have two problems.
For the service to run, I must first hit some URL of my APIs. The question is: how can I make this service start automatically, without the need to call any API?
When I stop using the APIs for a few minutes, the service stops working. The question is: how can I keep the service alive "forever"?
I must emphasize that my Web API is hosted on shared web hosting, where I do not have access to all of IIS's features.
This is my code, and in advance I appreciate your help.
MySuperService.cs
public class MySuperService : IHostedService, IDisposable
{
private bool _stopping;
private Task _backgroundTask;
private static readonly log4net.ILog log =log4net.LogManager.GetLogger(typeof(MySuperService));
public Task StartAsync(CancellationToken cancellationToken)
{
Console.WriteLine("MySuperService is starting.");
log.Info("MySuperService is starting.");
_backgroundTask = BackgroundTask();
return Task.CompletedTask;
}
private async Task BackgroundTask()
{
int contador = 1;
while (!_stopping)
{
Console.WriteLine("MySuperService is working--> " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
log.Info("MySuperService is working--> " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
await Task.Delay(TimeSpan.FromMinutes(3));
contador++;
}
Console.WriteLine("MySuperService background task is stopping.");
log.Info("MySuperService background task is stopping.");
}
public async Task StopAsync(CancellationToken cancellationToken)
{
Console.WriteLine("MySuperService is stopping.");
log.Info("MySuperService is stopping.");
_stopping = true;
if (_backgroundTask != null)
{
// TODO: cancellation
await _backgroundTask; // wait for the loop to observe _stopping and finish
}
}
public void Dispose()
{
Console.WriteLine("MySuperService is disposing.");
log.Info("MySuperService is disposing.");
}
}
Program.cs
public class Program
{
private static readonly log4net.ILog log = log4net.LogManager.GetLogger(typeof(Program));
public static void Main(string[] args)
{
...
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
}).ConfigureServices((hostContext, services) =>
{
services.AddHostedService<MySuperService>();
});
}
Inherit from BackgroundService instead of implementing IHostedService. That will take care of the machinery of starting, running and stopping your service for you.
However, the problem you are facing is that IIS doesn't start your C# process until the first request arrives, and the default application pool settings will shut it down again if there are no requests. I'd suggest setting up some kind of scheduled task that periodically requests a URL and monitors that the service is running. You'll want to be notified if it stops anyway, right?
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.0&tabs=visual-studio
If you inherit your infinite job service from the BackgroundService class and implement your logic inside a loop with the needed
await Task.Delay(TimeSpan.FromMinutes(x))
the job will run as soon as the application starts, without any API call, and it stops when the app stops.
public class MyService : BackgroundService
{
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
Console.WriteLine("test");
//await run job
await Task.Delay(TimeSpan.FromSeconds(1));
}
}
public override async Task StartAsync(CancellationToken cancellationToken)
{
Console.WriteLine("start");
// let the base class kick off ExecuteAsync in the background;
// awaiting ExecuteAsync here would block host startup until shutdown
await base.StartAsync(cancellationToken);
}
public override Task StopAsync(CancellationToken cancellationToken)
{
Console.WriteLine("stop");
return base.StopAsync(cancellationToken); // signals the stoppingToken used by ExecuteAsync
}
}
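It is registered the same way as the original service:
services.AddHostedService<MyService>();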
The approach I came up with goes as follows:
public class PingPongHostedService : IHostedService
{
public Task StartAsync(CancellationToken cancellationToken)
{
Console.WriteLine(">>>>> Hosted service starting at {0}", DateTimeOffset.Now.ToUnixTimeMilliseconds());
int count = 0;
try
{
Task.Run(async () => { // run in background and return completed task
while (!cancellationToken.IsCancellationRequested)
{
await Task.Delay(1_766, cancellationToken);
Console.WriteLine("loop no. {0}", ++count);
}
}, cancellationToken);
}
catch (OperationCanceledException) { } // prevent throwing if the Delay is cancelled
return Task.CompletedTask; // return ASAP
}
public Task StopAsync(CancellationToken cancellationToken)
{
Console.WriteLine(">>>>> Hosted service stopped at {0}", DateTimeOffset.Now.ToUnixTimeMilliseconds());
return Task.CompletedTask;
}
}
The important thing here is that StartAsync must return ASAP so that the generic host bootstrap can continue. That's why I'm using Task.Run to move the real work onto the thread pool, allowing the calling thread to continue.
I'm well aware that this is quite an old question and there are already a few answers, but I would like to provide my input on the matter. I have used a brilliant task-scheduling library called FluentScheduler. Before we start, add this in your Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
services.AddHostedService<AppLifetimeEventsService>();
}
This is how I solved the above issue:
public class AppLifetimeEventsService : IHostedService
{
private readonly ILogger _logger;
public AppLifetimeEventsService(IServiceProvider services)
{
_logger = services.GetRequiredService<ILogger<AppLifetimeEventsService>>();
}
public Task StartAsync(CancellationToken cancellationToken)
{
_logger.LogInformation("The Web API has been started...");
//FluentScheduler
var registry = new Registry();
//For example let's run our method every 1 hour or 10 seconds
registry.Schedule(async () => await SomeBackgroundTask()).ToRunNow().AndEvery(1).Hours();
//registry.Schedule(async () => await SomeBackgroundTask()).ToRunNow().AndEvery(10).Seconds();
//FluentScheduler
JobManager.Initialize(registry);
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
//Needed to remove all jobs from our Job manager when our Web API is shutting down
JobManager.RemoveAllJobs();
_logger.LogInformation("The Web API is stopping now...");
return Task.CompletedTask;
}
private async Task SomeBackgroundTask()
{
//Your long task goes here... In my case, I used a method with await here.
}
}
If you want to run a method every X seconds/minutes/hours/days/etc. you will have to implement IHostedService: info and docs. However, if you want to execute the method only once, you should implement BackgroundService.
We are working with a .NET Core Web API and looking for a lightweight solution to log requests of varying intensity into a database, but we don't want clients to wait for the saving process.
Unfortunately there's no HostingEnvironment.QueueBackgroundWorkItem(..) implemented in dnx, and Task.Run(..) is not safe.
Is there any elegant solution?
As @axelheer mentioned, IHostedService is the way to go in .NET Core 2.0 and above.
I needed a lightweight, like-for-like ASP.NET Core replacement for HostingEnvironment.QueueBackgroundWorkItem, so I wrote DalSoft.Hosting.BackgroundQueue, which uses .NET Core 2.0's IHostedService.
PM> Install-Package DalSoft.Hosting.BackgroundQueue
In your ASP.NET Core Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
services.AddBackgroundQueue(onException:exception =>
{
});
}
To queue a background Task, just add BackgroundQueue to your controller's constructor and call Enqueue.
private readonly BackgroundQueue _backgroundQueue;
public EmailController(BackgroundQueue backgroundQueue)
{
_backgroundQueue = backgroundQueue;
}
[HttpPost, Route("/")]
public IActionResult SendEmail([FromBody] EmailRequest emailRequest)
{
_backgroundQueue.Enqueue(async cancellationToken =>
{
await _smtp.SendMailAsync(emailRequest.From, emailRequest.To, emailRequest.Body);
});
return Ok();
}
QueueBackgroundWorkItem is gone, but we've got IApplicationLifetime instead of IRegisteredObject, which the former one was using. And it looks quite promising for such scenarios, I think.
The idea (and I'm still not quite sure if it's a pretty bad one; thus, beware!) is to register a singleton which spawns and observes new tasks. Within that singleton we can furthermore register a "stopped event" in order to properly await still-running tasks.
This "concept" could be used for short running stuff like logging, mail sending, and the like. Things, that should not take much time, but would produce unnecessary delays for the current request.
public class BackgroundPool
{
protected ILogger<BackgroundPool> Logger { get; }
public BackgroundPool(ILogger<BackgroundPool> logger, IApplicationLifetime lifetime)
{
if (logger == null)
throw new ArgumentNullException(nameof(logger));
if (lifetime == null)
throw new ArgumentNullException(nameof(lifetime));
lifetime.ApplicationStopped.Register(() =>
{
lock (currentTasksLock)
{
Task.WaitAll(currentTasks.ToArray());
}
logger.LogInformation(BackgroundEvents.Close, "Background pool closed.");
});
Logger = logger;
}
private readonly object currentTasksLock = new object();
private readonly List<Task> currentTasks = new List<Task>();
public void SendStuff(Stuff whatever)
{
var task = Task.Run(async () =>
{
Logger.LogInformation(BackgroundEvents.Send, "Sending stuff...");
try
{
// do THE stuff
Logger.LogInformation(BackgroundEvents.SendDone, "Send stuff returns.");
}
catch (Exception ex)
{
Logger.LogError(BackgroundEvents.SendFail, ex, "Send stuff failed.");
}
});
lock (currentTasksLock)
{
currentTasks.Add(task);
currentTasks.RemoveAll(t => t.IsCompleted);
}
}
}
Such a BackgroundPool should be registered as a singleton and can be used by any other component via DI. I'm currently using it for sending mails and it works fine (tested mail sending during app shutdown too).
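Registration and use then look roughly like this (the controller is just an example consumer):
// Startup.ConfigureServices
services.AddSingleton<BackgroundPool>();
// Any component can fire and forget through the pool:
public class StuffController : ControllerBase
{
    private readonly BackgroundPool _pool;
    public StuffController(BackgroundPool pool) => _pool = pool;
    [HttpPost]
    public IActionResult Send(Stuff whatever)
    {
        _pool.SendStuff(whatever); // returns immediately; the work continues in the background
        return Accepted();
    }
}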
Note: accessing stuff like the current HttpContext within the background task should not work. The old solution uses UnsafeQueueUserWorkItem to prohibit that anyway.
What do you think?
Update:
With ASP.NET Core 2.0 there's new stuff for background tasks, which gets better with ASP.NET Core 2.1: Implementing background tasks in .NET Core 2.x webapps or microservices with IHostedService and the BackgroundService class
You can use Hangfire (http://hangfire.io/) for background jobs in .NET Core.
For example :
var jobId = BackgroundJob.Enqueue(
() => Console.WriteLine("Fire-and-forget!"));
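For completeness, wiring Hangfire into an ASP.NET Core app looks roughly like this (assuming the Hangfire.AspNetCore and Hangfire.SqlServer packages; check the Hangfire docs for the current API):
// Startup.ConfigureServices
services.AddHangfire(config =>
    config.UseSqlServerStorage(Configuration.GetConnectionString("HangfireDb"))); // storage is required; connection string name is a placeholder
services.AddHangfireServer(); // runs the job processing inside the web app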
Here is a tweaked version of Axel's answer that lets you pass in delegates and does more aggressive cleanup of completed tasks.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Logging;
namespace Example
{
public class BackgroundPool
{
private readonly ILogger<BackgroundPool> _logger;
private readonly IApplicationLifetime _lifetime;
private readonly object _currentTasksLock = new object();
private readonly List<Task> _currentTasks = new List<Task>();
public BackgroundPool(ILogger<BackgroundPool> logger, IApplicationLifetime lifetime)
{
if (logger == null)
throw new ArgumentNullException(nameof(logger));
if (lifetime == null)
throw new ArgumentNullException(nameof(lifetime));
_logger = logger;
_lifetime = lifetime;
_lifetime.ApplicationStopped.Register(() =>
{
lock (_currentTasksLock)
{
Task.WaitAll(_currentTasks.ToArray());
}
_logger.LogInformation("Background pool closed.");
});
}
public void QueueBackgroundWork(Action action)
{
#pragma warning disable 1998
async Task Wrapper() => action();
#pragma warning restore 1998
QueueBackgroundWork(Wrapper);
}
public void QueueBackgroundWork(Func<Task> func)
{
var task = Task.Run(async () =>
{
_logger.LogTrace("Queuing background work.");
try
{
await func();
_logger.LogTrace("Background work returns.");
}
catch (Exception ex)
{
_logger.LogError(ex.HResult, ex, "Background work failed.");
}
}, _lifetime.ApplicationStopped);
lock (_currentTasksLock)
{
_currentTasks.Add(task);
}
task.ContinueWith(CleanupOnComplete, _lifetime.ApplicationStopping);
}
private void CleanupOnComplete(Task oldTask)
{
lock (_currentTasksLock)
{
_currentTasks.Remove(oldTask);
}
}
}
}
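Usage is then close to a one-liner again (the email service and message here are placeholders):
// BackgroundPool is registered as a singleton and injected where needed:
_backgroundPool.QueueBackgroundWork(async () =>
{
    await _emailService.SendAsync(message); // placeholder work
});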
I know this is a little late, but we just ran into this issue too. So after reading lots of ideas, here's the solution we came up with.
/// <summary>
/// Defines a simple interface for scheduling background tasks. Useful for unit testing ASP.NET code
/// </summary>
public interface ITaskScheduler
{
/// <summary>
/// Schedules a task which can run in the background, independent of any request.
/// </summary>
/// <param name="workItem">A unit of execution.</param>
[SecurityPermission(SecurityAction.LinkDemand, Unrestricted = true)]
void QueueBackgroundWorkItem(Action<CancellationToken> workItem);
/// <summary>
/// Schedules a task which can run in the background, independent of any request.
/// </summary>
/// <param name="workItem">A unit of execution.</param>
[SecurityPermission(SecurityAction.LinkDemand, Unrestricted = true)]
void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem);
}
public class BackgroundTaskScheduler : BackgroundService, ITaskScheduler
{
public BackgroundTaskScheduler(ILogger<BackgroundTaskScheduler> logger)
{
_logger = logger;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
_logger.LogTrace("BackgroundTaskScheduler Service started.");
_stoppingToken = stoppingToken;
_isRunning = true;
try
{
await Task.Delay(-1, stoppingToken);
}
catch (TaskCanceledException)
{
}
finally
{
_isRunning = false;
_logger.LogTrace("BackgroundTaskScheduler Service stopped.");
}
}
public void QueueBackgroundWorkItem(Action<CancellationToken> workItem)
{
if (workItem == null)
{
throw new ArgumentNullException(nameof(workItem));
}
if (!_isRunning)
throw new Exception("BackgroundTaskScheduler is not running.");
_ = Task.Run(() => workItem(_stoppingToken), _stoppingToken);
}
public void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem)
{
if (workItem == null)
{
throw new ArgumentNullException(nameof(workItem));
}
if (!_isRunning)
throw new Exception("BackgroundTaskScheduler is not running.");
_ = Task.Run(async () =>
{
try
{
await workItem(_stoppingToken);
}
catch (Exception e)
{
_logger.LogError(e, "When executing background task.");
throw;
}
}, _stoppingToken);
}
private readonly ILogger _logger;
private volatile bool _isRunning;
private CancellationToken _stoppingToken;
}
The ITaskScheduler (which we had already defined in our old ASP.NET client code for unit-testing purposes) allows a client to add a background task. The main purpose of the BackgroundTaskScheduler is to capture the stop cancellation token (which is owned by the Host) and to pass it into all the background Tasks, which by definition run on the System.Threading.ThreadPool, so there is no need to create our own.
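One way to register it so that the ITaskScheduler injected into controllers and the hosted service are the same instance (this particular registration is an assumption; the linked post below covers the details):
services.AddSingleton<BackgroundTaskScheduler>();
services.AddSingleton<ITaskScheduler>(sp => sp.GetRequiredService<BackgroundTaskScheduler>());
services.AddHostedService(sp => sp.GetRequiredService<BackgroundTaskScheduler>());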
To configure Hosted Services properly see this post.
Enjoy!
I have used Quartz.NET (does not require SQL Server) with the following extension method to easily set up and run a job:
public static class QuartzUtils
{
public static async Task<JobKey> CreateSingleJob<JOB>(this IScheduler scheduler,
string jobName, object data) where JOB : IJob
{
var jm = new JobDataMap { { "data", data } };
var jobKey = new JobKey(jobName);
await scheduler.ScheduleJob(
JobBuilder.Create<JOB>()
.WithIdentity(jobKey)
.Build(),
TriggerBuilder.Create()
.WithIdentity(jobName)
.UsingJobData(jm)
.StartNow()
.Build());
return jobKey;
}
}
Data is passed as an object that must be serializable. Create an IJob that processes the job like this:
public class MyJobAsync :IJob
{
public async Task Execute(IJobExecutionContext context)
{
var data = (MyDataType)context.MergedJobDataMap["data"];
....
Execute like this:
await SchedulerInstance.CreateSingleJob<MyJobAsync>("JobTitle 123", myData);
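SchedulerInstance above is assumed to be an IScheduler that was created and started once, e.g. at application startup (Quartz 3.x API):
// create and start the scheduler once, then reuse it for CreateSingleJob calls
var factory = new StdSchedulerFactory();
IScheduler schedulerInstance = await factory.GetScheduler();
await schedulerInstance.Start();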
The original HostingEnvironment.QueueBackgroundWorkItem was a one-liner and very convenient to use.
The "new" way of doing this in ASP Core 2.x requires reading pages of cryptic documentation and writing considerable amount of code.
To avoid this you can use the following alternative method
public static ConcurrentBag<Boolean> bs = new ConcurrentBag<Boolean>();
[HttpPost("/save")]
public async Task<IActionResult> SaveAsync(dynamic postData)
{
var id = (String)postData.id;
Task.Run(() =>
{
bs.Add(Create(id));
});
return new OkResult();
}
private Boolean Create(String id)
{
/// do work
return true;
}
The static ConcurrentBag<Boolean> bs will hold a reference to the object, which prevents the garbage collector from collecting the task after the controller returns.