I was able to schedule 3 chained jobs using Quartz.NET. This strategy is working fine:
var j1 = new TestJob1();
var j2 = new TestJob2();
var j3 = new TestJob3();
var jd1 = j1.Build();
var jd2 = j2.Build();
var jd3 = j3.Build();
var chain = new JobChainingJobListener("jobchain");
chain.AddJobChainLink(jd1.Key, jd2.Key);
chain.AddJobChainLink(jd2.Key, jd3.Key);
Scheduler.ListenerManager.AddJobListener(chain, GroupMatcher<JobKey>.AnyGroup());
Scheduler.ScheduleJob(jd1, j1.JobTrigger());
Scheduler.AddJob(jd2, true);
Scheduler.AddJob(jd3, true);
Scheduler.Start();
The code for each job is as follows:
public class TestJob1 : BaseJob, IJob
{
public override ITrigger JobTrigger()
{
return TriggerBuilder.Create()
.WithSimpleSchedule(
ssb =>
ssb.WithInterval(new TimeSpan(0, 0, 0, 10)).RepeatForever().WithMisfireHandlingInstructionFireNow())
.Build();
}
public void Execute(IJobExecutionContext context)
{
Debug.WriteLine($"Running Job 1 at {DateTime.Now.ToString("O")}");
}
}
public class TestJob2 : BaseJob, IJob
{
public override ITrigger JobTrigger()
{
throw new System.NotImplementedException();
}
public void Execute(IJobExecutionContext context)
{
Debug.WriteLine($"Running Job 2 at {DateTime.Now.ToString("O")}");
throw new Exception("forced error");
}
}
public class TestJob3 : BaseJob, IJob
{
public override ITrigger JobTrigger()
{
throw new System.NotImplementedException();
}
public void Execute(IJobExecutionContext context)
{
Debug.WriteLine($"Running Job 3 at {DateTime.Now.ToString("O")}");
}
}
As you can see, TestJob2 throws an exception when it runs. Even in this situation, TestJob3 is fired. My business requirement is that TestJob3 shouldn't be fired if TestJob2 fails. Notice that I don't actually need to implement the trigger for TestJob2 and TestJob3, because I add those jobs to the scheduler without a trigger.
How would this be done?
Thanks in advance,
Mário
Subclass JobChainingJobListener, and use JobChainingJobListenerFailOnError in place of JobChainingJobListener:
/// <summary>
/// JobChainingJobListener that doesn't run subsequent jobs when one fails.
/// </summary>
public class JobChainingJobListenerFailOnError : JobChainingJobListener
{
public JobChainingJobListenerFailOnError(String name) : base(name) { }
public override void JobWasExecuted(IJobExecutionContext context, JobExecutionException jobException)
{
//Only call the base method if jobException is null. Otherwise, an
//error has occurred & we don't want to continue chaining
if (jobException == null)
base.JobWasExecuted(context, jobException);
}
}
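With the setup code from the question, the only change needed is to construct the new listener in place of the stock one; a minimal sketch:

    // Everything else stays the same; only the listener type changes.
    var chain = new JobChainingJobListenerFailOnError("jobchain");
    chain.AddJobChainLink(jd1.Key, jd2.Key);
    chain.AddJobChainLink(jd2.Key, jd3.Key);
    Scheduler.ListenerManager.AddJobListener(chain, GroupMatcher<JobKey>.AnyGroup());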
While JobChainingJobListener is new to me, I would still use the base CLR APIs and take advantage of the Task Parallel Library (TPL) by chaining tasks (https://msdn.microsoft.com/en-us/library/ee372288(v=vs.110).aspx).
This particular code chains four different tasks together, and each one only fires after the previous one has finished. All I want Quartz to do is schedule and call my job. I apologize if I veered away from the Quartz API, but I just wanted to show a way I handled multiple tasks.
In my job, Execute is entered and Execute calls Process():
private async Task<Boolean> Process()
{
// Each continuation receives the previous task as its parameter, so the
// chain runs strictly in order: t1 -> t2 -> t3 -> t4.
Task<bool> t1 = Task<bool>.Factory.StartNew(() =>
{
return processThis();
});
Task<bool> t2 = t1.ContinueWith(previous =>
{
return processMoreStuff();
});
Task<bool> t3 = t2.ContinueWith(previous =>
{
return processEvenMoreStuff();
});
Task<bool> t4 = t3.ContinueWith(previous =>
{
return processStillMoreStuff();
});
var result = await t4;
try
{
// Observe any exceptions thrown by the individual tasks.
Task.WaitAll(t1, t2, t3, t4);
}
catch (Exception ex)
{
System.Diagnostics.Trace.WriteLine(ex.Message);
}
return result;
}
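Note that, by default, a ContinueWith continuation runs even when the previous task faults. If, as in the question, a later step must not run after a failure, the chain can be restricted with TaskContinuationOptions.OnlyOnRanToCompletion; a minimal sketch using the same process* methods:

    // Each continuation is scheduled only if its predecessor ran to completion,
    // so an exception in an earlier step leaves the later tasks canceled.
    Task<bool> first = Task.Run(() => processThis());
    Task<bool> second = first.ContinueWith(
        prev => processMoreStuff(),
        TaskContinuationOptions.OnlyOnRanToCompletion);
    Task<bool> third = second.ContinueWith(
        prev => processEvenMoreStuff(),
        TaskContinuationOptions.OnlyOnRanToCompletion);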
I've had to chain jobs in the past but I did not use JobChainingJobListener.
What I do is add and schedule the (n+1)th job when the (n)th job is done. This is helpful for example when the next job to execute depends on the result of the current job, as it does in your case.
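A minimal sketch of that approach, using the synchronous Quartz 2.x API from the question (NextJob and DoCurrentJobWork are placeholders for illustration):

    public void Execute(IJobExecutionContext context)
    {
        DoCurrentJobWork(); // placeholder; if this throws, the next job is never scheduled

        // The current job succeeded, so schedule the follow-up job now.
        var nextJob = JobBuilder.Create<NextJob>()
            .WithIdentity("nextJob")
            .Build();
        var trigger = TriggerBuilder.Create()
            .StartNow()
            .Build();
        context.Scheduler.ScheduleJob(nextJob, trigger);
    }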
To continue using JobChainingJobListener, I think you could get TestJob3 and set a flag in its data map when TestJob2 ends successfully. TestJob3 will still be executed when there is an exception in TestJob2, but it just has to check the flag to see whether it should carry on with its execution.
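A rough sketch of that flag idea (the "TestJob3" key and the "job2Succeeded" entry are assumptions for illustration):

    // At the end of TestJob2.Execute, after its work has succeeded:
    var jd3 = context.Scheduler.GetJobDetail(new JobKey("TestJob3"));
    jd3.JobDataMap.Put("job2Succeeded", true);
    context.Scheduler.AddJob(jd3, true); // re-add with replace=true so the flag is stored

    // At the top of TestJob3.Execute:
    if (!context.JobDetail.JobDataMap.ContainsKey("job2Succeeded"))
        return; // TestJob2 did not complete successfully, so skip this run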
I have a SignalR app in .NET Core 3.1, kind of a large chat app, and I am trying to add two BackgroundServices.
The BackgroundServices are setup to run for as long as the ASP.NET app runs.
The first BackgroundService has a very fast main loop (50 ms) and seems to work well.
The second BackgroundService has a much longer main loop (1000 ms) and seems to start randomly, stop executing randomly, and then restart executing again, randomly. It is almost as if the second service goes to sleep for a long period of time (30 to 90 seconds) and then wakes up again with its object state preserved.
Both BackgroundServices have the same base code with different Delays.
Is it possible to have multiple, independent, non-ending, BackgroundServices? If so, then what am I doing wrong?
I have the services registered like this ...
_services.AddSimpleInjector(_simpleInjectorContainer, options =>
{
options.AddHostedService<SecondaryBackgroundService>();
options.AddHostedService<PrimaryBackgroundService>();
// AddAspNetCore() wraps web requests in a Simple Injector scope.
options.AddAspNetCore()
// Ensure activation of a specific framework type to be created by
// Simple Injector instead of the built-in configuration system.
.AddControllerActivation()
.AddViewComponentActivation()
.AddPageModelActivation()
.AddTagHelperActivation();
});
And I have two classes (PrimaryBackgroundService/SecondaryBackgroundService) that have this ...
public class SecondaryBackgroundService : BackgroundService
{
protected override async Task ExecuteAsync(CancellationToken cancellationToken)
{
await Task.Factory.StartNew(async () =>
{
// loop until cancellation is requested
while (!cancellationToken.IsCancellationRequested)
{
//await Task.Delay(TimeSpan.FromMilliseconds(50), cancellationToken);
await Task.Delay(TimeSpan.FromMilliseconds(1000), cancellationToken);
try
{
await _doWorkDelegate();
}
catch (Exception)
{
// swallow the exception so the loop keeps running
}
}
}, cancellationToken);
}
}
Should I set up a single BackgroundService that spins off two different Tasks in their own threads? Should I be using IHostedService instead?
I need to make sure that the second BackgroundService runs every second. Also, I need to make sure that the second BackgroundService never impacts the faster running primary BackgroundService.
UPDATE:
I changed the code to use a Timer, as suggested, but now I am struggling with calling an async Task from a Timer event.
Here is the class I created with the different options that work and do not work.
// used this as the base: https://github.com/aspnet/Hosting/blob/master/src/Microsoft.Extensions.Hosting.Abstractions/BackgroundService.cs
public abstract class RecurringBackgroundService : IHostedService, IDisposable
{
private Timer _timer;
protected int TimerIntervalInMilliseconds { get; set; } = 250;
// OPTION 1. This causes strange behavior; random starts and stops
/*
protected abstract Task DoRecurringWork();
private async void OnTimerCallback(object notUsedTimerState) // use "async void" for event handlers
{
try
{
await DoRecurringWork();
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
*/
// OPTION 2. This causes strange behavior; random starts and stops
/*
protected abstract Task DoRecurringWork();
private void OnTimerCallback(object notUsedTimerState)
{
try
{
var tf = new TaskFactory(System.Threading.CancellationToken.None, TaskCreationOptions.None, TaskContinuationOptions.None, TaskScheduler.Default);
tf.StartNew(async () =>
{
await DoRecurringWork();
})
.Unwrap()
.GetAwaiter()
.GetResult();
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
*/
// OPTION 3. This works but requires the derived class to use "async void"
/*
protected abstract void DoRecurringWork();
private void OnTimerCallback(object notUsedTimerState)
{
try
{
DoRecurringWork(); // use "async void" in the derived class
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
*/
// OPTION 4. This works just like OPTION 3 and allows the derived class to use a Task
protected abstract Task DoRecurringWork();
protected async void DoRecurringWorkInternal() // use "async void"
{
await DoRecurringWork();
}
private void OnTimerCallback(object notUsedTimerState)
{
try
{
DoRecurringWorkInternal(); // fire-and-forget via the "async void" wrapper above
}
finally
{
// do a single call timer pulse
_timer.Change(this.TimerIntervalInMilliseconds, Timeout.Infinite);
}
}
public virtual Task StartAsync(CancellationToken cancellationToken)
{
// https://stackoverflow.com/questions/684200/synchronizing-a-timer-to-prevent-overlap
// do a single call timer pulse
_timer = new Timer(OnTimerCallback, null, this.TimerIntervalInMilliseconds, Timeout.Infinite);
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
try { _timer.Change(Timeout.Infinite, 0); } catch {; }
return Task.CompletedTask;
}
public void Dispose()
{
try { _timer.Change(Timeout.Infinite, 0); } catch {; }
try { _timer.Dispose(); } catch {; }
}
}
Is OPTION 3 and/or OPTION 4 correct?
I have confirmed that OPTION 3 and OPTION 4 are overlapping. How can I stop them from overlapping? (UPDATE: use OPTION 1)
UPDATE
Looks like OPTION 1 was correct after all.
Stephen Cleary was correct. After digging and digging into the code, I found a Task that was stalling execution inside the _doWorkDelegate() method. The random starts and stops were caused by an HTTP call that was failing. Once I fixed that (with a fire-and-forget), OPTION 1 started working as expected.
I would recommend writing two timed background tasks as shown in the documentation:
Timed background tasks documentation
Then they are independent and isolated.
public class PrimaryBackgroundService : IHostedService, IDisposable
{
private readonly ILogger<PrimaryBackgroundService> _logger;
private Timer _timer;
public PrimaryBackgroundService(ILogger<PrimaryBackgroundService> logger)
{
_logger = logger;
}
public Task StartAsync(CancellationToken stoppingToken)
{
_logger.LogInformation("PrimaryBackgroundService StartAsync");
TimeSpan waitTillStart = TimeSpan.Zero;
TimeSpan intervalBetweenWork = TimeSpan.FromMilliseconds(50);
_timer = new Timer(DoWork, null, waitTillStart, intervalBetweenWork);
return Task.CompletedTask;
}
private void DoWork(object state)
{
_logger.LogInformation("PrimaryBackgroundService DoWork");
// ... do work
}
public Task StopAsync(CancellationToken stoppingToken)
{
_logger.LogInformation("PrimaryBackgroundService is stopping.");
_timer?.Change(Timeout.Infinite, 0);
return Task.CompletedTask;
}
public void Dispose()
{
_timer?.Dispose();
}
}
Create the SecondaryBackgroundService using similar code and register both services as you did before:
options.AddHostedService<SecondaryBackgroundService>();
options.AddHostedService<PrimaryBackgroundService>();
Note that if you want to use any dependency injection, you have to inject IServiceScopeFactory into the background service's constructor and call scopeFactory.CreateScope() to resolve scoped services.
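A minimal sketch of that pattern, where IMyScopedService stands in for whatever scoped dependency (for example a DbContext) you actually need:

    public class SecondaryBackgroundService : IHostedService, IDisposable
    {
        private readonly IServiceScopeFactory _scopeFactory;
        private Timer _timer;

        public SecondaryBackgroundService(IServiceScopeFactory scopeFactory)
        {
            _scopeFactory = scopeFactory;
        }

        public Task StartAsync(CancellationToken stoppingToken)
        {
            _timer = new Timer(DoWork, null, TimeSpan.Zero, TimeSpan.FromMilliseconds(1000));
            return Task.CompletedTask;
        }

        private void DoWork(object state)
        {
            // Scoped services must be resolved from a scope created per invocation
            // rather than injected into this singleton service.
            // GetRequiredService comes from Microsoft.Extensions.DependencyInjection.
            using (var scope = _scopeFactory.CreateScope())
            {
                var service = scope.ServiceProvider.GetRequiredService<IMyScopedService>();
                // ... use the scoped service here
            }
        }

        public Task StopAsync(CancellationToken stoppingToken)
        {
            _timer?.Change(Timeout.Infinite, 0);
            return Task.CompletedTask;
        }

        public void Dispose() => _timer?.Dispose();
    }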
I have a console app which uses a class library to execute some long-running tasks. This is a .NET Core console app and uses the .NET Core Generic Host. I also use the ShellProgressBar library to display some progress bars.
My Hosted service looks like this
internal class MyHostedService : IHostedService, IDisposable
{
private readonly ILogger _logger;
private readonly IMyService _myService;
private readonly IProgress<MyCustomProgress> _progress;
private readonly IApplicationLifetime _appLifetime;
private readonly ProgressBar _progressBar;
private readonly IProgressBarFactory _progressBarFactory;
public MyHostedService(
ILogger<MyHostedService> logger,
IMyService myService,
IProgressBarFactory progressBarFactory,
IApplicationLifetime appLifetime)
{
_logger = logger;
_myService = myService;
_appLifetime = appLifetime;
_progressBarFactory = progressBarFactory;
_progressBar = _progressBarFactory.GetProgressBar(); // this just returns an instance of ShellProgressBar
_progress = new Progress<MyCustomProgress>(progress =>
{
_progressBar.Tick(progress.Current);
});
}
public void Dispose()
{
_progressBar.Dispose();
}
public Task StartAsync(CancellationToken cancellationToken)
{
_myService.RunJobs(_progress);
_appLifetime.StopApplication();
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
return Task.CompletedTask;
}
}
Where MyCustomProgress looks like this
public class MyCustomProgress
{
public int Current {get; set;}
public int Total {get; set;}
}
and MyService looks something like this (Job1, Job2, and Job3 implement IJob):
public class MyService : IMyService
{
private readonly List<IJob> _jobsToRun = new List<IJob>();
public MyService()
{
_jobsToRun.Add(new Job1());
_jobsToRun.Add(new Job2());
_jobsToRun.Add(new Job3());
}
public void RunJobs(IProgress<MyCustomProgress> progress)
{
_jobsToRun.ForEach(job =>
{
job.Execute();
progress.Report(new MyCustomProgress { Current = _jobsToRun.IndexOf(job) + 1, Total = _jobsToRun.Count() });
});
}
}
And IJob is
public interface IJob
{
void Execute();
}
This setup works well and I'm able to display the progress bar from my HostedService by creating a ShellProgressBar instance and using the one IProgress instance I have to update it.
However, I have another implementation of IMyService that I also need to run that looks something like this
public class MyService2 : IMyService
{
private readonly List<IJob> _sequentialJobsToRun = new List<IJob>();
private readonly List<IJob> _parallelJobsToRun = new List<IJob>();
public MyService2()
{
_sequentialJobsToRun.Add(new Job1());
_sequentialJobsToRun.Add(new Job2());
_sequentialJobsToRun.Add(new Job3());
_parallelJobsToRun.Add(new Job4());
_parallelJobsToRun.Add(new Job5());
_parallelJobsToRun.Add(new Job6());
}
public void RunJobs(IProgress<MyCustomProgress> progress)
{
_sequentialJobsToRun.ForEach(job =>
{
job.Execute();
progress.Report(new MyCustomProgress { Current = _sequentialJobsToRun.IndexOf(job) + 1, Total = _sequentialJobsToRun.Count() });
});
Parallel.ForEach(_parallelJobsToRun, job =>
{
job.Execute();
// Report progress here
});
}
}
This is the one I'm struggling with. When _parallelJobsToRun is executed, I need to be able to create new child ShellProgressBar instances (ShellProgressBar.Spawn) and display them as child progress bars of, let's say, 'Parallel Jobs'.
This is where I'm looking for some help as to how I can achieve this.
Note: I don't want to take a dependency on ShellProgressBar in my class library containing MyService
Any help much appreciated.
I am a little confused by your description, but let's see if I understand what you are up to. If you wrap all of this in a class, then taskList1 and taskList2 could be class variables. (By the way, taskList1/2 should be named better, say parallelTaskList and so on.) Then you could write a new method on the class, CheckTaskStatus(), and just iterate over the two class variables. Does that help, or have I completely missed your question?
Can you modify it like this?
public Task<ICollection<IProgress<int>>> StartAsync(CancellationToken cancellationToken)
{
var progressList = _myServiceFromLibrary.RunTasks();
return Task.FromResult(progressList);
}
public ICollection<IProgress<int>> RunTasks()
{
var taskList1 = new List<ITask> { Task1, Task2 };
var plist1 = taskList1.Select(t => t.Progress).ToList();
var taskList2 = new List<ITask> { Task3, Task4, Task5 };
var plist2 = taskList2.Select(t => t.Progress).ToList();
taskList1.ForEach(task => task.Run());
Parallel.ForEach(taskList2, task => task.Run());
return plist1.Concat(plist2).ToList();
}
Task.Progress there is presumably a progress getter; realistically, IProgress should probably be injected via the tasks' constructors. But the point is that your public interface doesn't accept a list of tasks, so it should just return a collection of progress reporters.
How to inject progress reporters into your tasks is a different story that depends on the task implementations, and it may or may not be supported out of the box.
However, what you probably should do is supply a progress callback or a progress factory so that progress reporters of your choice are created:
public Task StartAsync(CancellationToken cancellationToken, Action<Task,int> onprogress)
{
_myServiceFromLibrary.RunTasks(onprogress);
return Task.CompletedTask;
}
public class SimpleProgress : IProgress<int>
{
private readonly Task task;
private readonly Action<Task,int> action;
public SimpleProgress(Task task, Action<Task,int> action)
{
this.task = task;
this.action = action;
}
public void Report(int progress)
{
action(task, progress);
}
}
public void RunTasks(Action<Task,int> onprogress)
{
var taskList1 = new List<ITask> { Task1, Task2 };
taskList1.ForEach(t => t.Progress = new SimpleProgress(t, onprogress));
var taskList2 = new List<ITask> { Task3, Task4, Task5 };
taskList2.ForEach(t => t.Progress = new SimpleProgress(t, onprogress));
taskList1.ForEach(task => task.Run());
Parallel.ForEach(taskList2, task => task.Run());
}
As you may see here, it really is mostly a question of how your tasks are going to call the IProgress<T>.Report(T value) method.
Honestly I would just use an event in your task prototype.
It's not really clear exactly what you want because the code you posted doesn't match the names you then reference in your question text... It would be helpful to have all the code (the RunTasks function for example, your IProgress prototype, etc).
Nevertheless, an event exists specifically to signal calling code. Let's go back to basics. Let's say you have a library called MyLib, with a method DoThings().
Create a new class that inherits from EventArgs, and that will carry your task's progress reports...
public class ProgressEventArgs : EventArgs
{
private int _taskId;
private int _percent;
private string _message;
public int TaskId => _taskId;
public int Percent => _percent;
public string Message => _message;
public ProgressEventArgs(int taskId, int percent, string message)
{
_taskId = taskId;
_percent = percent;
_message = message;
}
}
Then on your library's class definition, add an event like so:
public event EventHandler<ProgressEventArgs> Progress;
And in your console application, create a handler for progress events:
void ProgressHandler(object sender, ProgressEventArgs e)
{
// Do whatever you want with your progress report here, all your
// info is in the e variable
}
And subscribe to your class library's event:
var lib = new MyLib();
lib.Progress += ProgressHandler;
lib.DoThings();
When you are done, unsubscribe from the event:
lib.Progress -= ProgressHandler;
In your class library, now you can send back progress reports by raising the event in your code. First create a stub method to invoke the event:
protected virtual void OnProgress(ProgressEventArgs e)
{
var handler = Progress;
if (handler != null)
{
handler(this, e);
}
}
And then add this to your task's code where you want it:
OnProgress(new ProgressEventArgs(2452343, 10, "Reindexing google..."));
The only thing to be careful about is to report progress sparingly, because each time your event fires it interrupts your console application, and you can really bog it down hard if you send 10 million events all at once. Be logical about it.
Alternative approach: if you own the IProgress<T> and Progress code, you could add a factory method (outline only):
interface IProgress<T>
{
IProgress<T> CreateNew();
void Report(T progress);
}
class Progress<T> : IProgress<T>
{
Progress(ShellProgressClass)
{
// initialize progressBar or spawn new
}
....
IProgress<T> CreateNew()
{
return new Progress();
}
}
You can later improvise to have one big progress bar (a collection of sequential or parallel ones) and so on.
Your MyService could have a dependency similar to:
public interface IJobContainer
{
void Add(IJob job);
void RunJobs(IProgress<MyProgress> progress, Action<IJob>? callback = null); // Using an action for extra work you may want to do
}
This way you don't have to worry about reporting progress in MyService (which doesn't feel like it should be MyService's job anyway). The implementation could look something like this for the parallel job container:
public class MyParallelJobContainer : IJobContainer
{
private readonly IList<IJob> parallelJobs = new List<IJob>();
public void Add(IJob job) { ... }
public void RunJobs(IProgress<MyProgress> progress, Action<IJob>? callback = null)
{
using (var progressBar = new ProgressBar(options...))
{
Parallel.ForEach(parallelJobs, job =>
{
callback?.Invoke(job);
job.Execute();
progressBar.Tick();
});
}
}
}
MyService would then look like this:
public class MyService : IMyService
{
private readonly IJobContainer sequentialJobs;
private readonly IJobContainer parallelJobs;
public MyService(
IJobContainer sequentialJobs,
IJobContainer parallelJobs)
{
this.sequentialJobs = sequentialJobs;
this.parallelJobs = parallelJobs;
this.sequentialJobs.Add(new DoSequentialJob1());
this.sequentialJobs.Add(new DoSequentialJob2());
this.sequentialJobs.Add(new DoSequentialJob3());
this.parallelJobs.Add(new DoParallelJobA());
this.parallelJobs.Add(new DoParallelJobB());
this.parallelJobs.Add(new DoParallelJobC());
}
public void RunJobs(IProgress<MyCustomProgress> progress)
{
sequentialJobs.RunJobs(progress, job =>
{
// do something with the job if necessary
});
parallelJobs.RunJobs(progress, job =>
{
// do something with the job if necessary
});
}
}
The advantage of this approach is that MyService has only one responsibility and doesn't have to worry about what you do once a job is completed.
From my understanding of your issue the question is how do you display progress across both completion of the synchronous jobs and parallelized jobs.
In theory the parallel jobs could start and finish at the same time, so you could treat the parallel jobs as a single job. Instead of using the count of sequential jobs as your total, increase that number by one. This might be satisfactory for a small number of parallel jobs.
If you want to add progress between the parallel jobs, you will need to handle multi-threading in your code because the parallel jobs will be running concurrently.
object pJobLock = new object();
int numProcessed = 0;
Parallel.ForEach(parallelJobs, parallelJob =>
{
parallelJob.DoWork();
lock (pJobLock)
{
numProcessed++;
progress.Report(new MyCustomProgress { Current = numProcessed, Total = parallelJobs.Count() });
}
});
We are working with a .NET Core Web API and looking for a lightweight solution to log requests of varying intensity into a database, but we don't want clients to wait for the saving process.
Unfortunately there's no HostingEnvironment.QueueBackgroundWorkItem(..) implemented in dnx, and Task.Run(..) is not safe.
Is there any elegant solution?
As #axelheer mentioned IHostedService is the way to go in .NET Core 2.0 and above.
I needed a lightweight, like-for-like ASP.NET Core replacement for HostingEnvironment.QueueBackgroundWorkItem, so I wrote DalSoft.Hosting.BackgroundQueue, which uses .NET Core 2.0's IHostedService.
PM> Install-Package DalSoft.Hosting.BackgroundQueue
In your ASP.NET Core Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
services.AddBackgroundQueue(onException:exception =>
{
});
}
To queue a background Task just add BackgroundQueue to your controller's constructor and call Enqueue.
public EmailController(BackgroundQueue backgroundQueue)
{
_backgroundQueue = backgroundQueue;
}
[HttpPost, Route("/")]
public IActionResult SendEmail([FromBody] EmailRequest emailRequest)
{
_backgroundQueue.Enqueue(async cancellationToken =>
{
await _smtp.SendMailAsync(emailRequest.From, emailRequest.To, emailRequest.Body);
});
return Ok();
}
QueueBackgroundWorkItem is gone, but we've got IApplicationLifetime instead of IRegisteredObject, which was used by the former. And it looks quite promising for such scenarios, I think.
The idea (and I'm still not quite sure whether it's a pretty bad one; thus, beware!) is to register a singleton, which spawns and observes new tasks. Within that singleton, we can furthermore register a "stopped" callback in order to properly await still-running tasks.
This "concept" could be used for short-running stuff like logging, mail sending, and the like: things that should not take much time, but would produce unnecessary delays for the current request.
public class BackgroundPool
{
protected ILogger<BackgroundPool> Logger { get; }
public BackgroundPool(ILogger<BackgroundPool> logger, IApplicationLifetime lifetime)
{
if (logger == null)
throw new ArgumentNullException(nameof(logger));
if (lifetime == null)
throw new ArgumentNullException(nameof(lifetime));
lifetime.ApplicationStopped.Register(() =>
{
lock (currentTasksLock)
{
Task.WaitAll(currentTasks.ToArray());
}
logger.LogInformation(BackgroundEvents.Close, "Background pool closed.");
});
Logger = logger;
}
private readonly object currentTasksLock = new object();
private readonly List<Task> currentTasks = new List<Task>();
public void SendStuff(Stuff whatever)
{
var task = Task.Run(async () =>
{
Logger.LogInformation(BackgroundEvents.Send, "Sending stuff...");
try
{
// do THE stuff
Logger.LogInformation(BackgroundEvents.SendDone, "Send stuff returns.");
}
catch (Exception ex)
{
Logger.LogError(BackgroundEvents.SendFail, ex, "Send stuff failed.");
}
});
lock (currentTasksLock)
{
currentTasks.Add(task);
currentTasks.RemoveAll(t => t.IsCompleted);
}
}
}
Such a BackgroundPool should be registered as a singleton and can be used by any other component via DI. I'm currently using it for sending mails and it works fine (tested mail sending during app shutdown too).
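For example, the registration in Startup.ConfigureServices is a single line (a sketch of how I wire it up):

    public void ConfigureServices(IServiceCollection services)
    {
        // One shared pool for the whole app; consumers take BackgroundPool as a
        // constructor dependency and call SendStuff(...) on it.
        services.AddSingleton<BackgroundPool>();
    }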
Note: accessing stuff like the current HttpContext within the background task should not work. The old solution uses UnsafeQueueUserWorkItem to prohibit that anyway.
What do you think?
Update:
With ASP.NET Core 2.0 there's new stuff for background tasks, which gets better with ASP.NET Core 2.1: Implementing background tasks in .NET Core 2.x webapps or microservices with IHostedService and the BackgroundService class
You can use Hangfire (http://hangfire.io/) for background jobs in .NET Core.
For example :
var jobId = BackgroundJob.Enqueue(
() => Console.WriteLine("Fire-and-forget!"));
Here is a tweaked version of Axel's answer that lets you pass in delegates and does more aggressive cleanup of completed tasks.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Logging;
namespace Example
{
public class BackgroundPool
{
private readonly ILogger<BackgroundPool> _logger;
private readonly IApplicationLifetime _lifetime;
private readonly object _currentTasksLock = new object();
private readonly List<Task> _currentTasks = new List<Task>();
public BackgroundPool(ILogger<BackgroundPool> logger, IApplicationLifetime lifetime)
{
if (logger == null)
throw new ArgumentNullException(nameof(logger));
if (lifetime == null)
throw new ArgumentNullException(nameof(lifetime));
_logger = logger;
_lifetime = lifetime;
_lifetime.ApplicationStopped.Register(() =>
{
lock (_currentTasksLock)
{
Task.WaitAll(_currentTasks.ToArray());
}
_logger.LogInformation("Background pool closed.");
});
}
public void QueueBackgroundWork(Action action)
{
#pragma warning disable 1998
async Task Wrapper() => action();
#pragma warning restore 1998
QueueBackgroundWork(Wrapper);
}
public void QueueBackgroundWork(Func<Task> func)
{
var task = Task.Run(async () =>
{
_logger.LogTrace("Queuing background work.");
try
{
await func();
_logger.LogTrace("Background work returns.");
}
catch (Exception ex)
{
_logger.LogError(ex.HResult, ex, "Background work failed.");
}
}, _lifetime.ApplicationStopped);
lock (_currentTasksLock)
{
_currentTasks.Add(task);
}
task.ContinueWith(CleanupOnComplete, _lifetime.ApplicationStopping);
}
private void CleanupOnComplete(Task oldTask)
{
lock (_currentTasksLock)
{
_currentTasks.Remove(oldTask);
}
}
}
}
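Registration and usage could then look like this (a sketch; _emailSender and its SendAsync call are placeholders for whatever work you queue):

    // Startup.ConfigureServices
    services.AddSingleton<BackgroundPool>();

    // In a controller or service that received BackgroundPool via DI:
    _backgroundPool.QueueBackgroundWork(async () =>
    {
        // placeholder for a short operation that should not delay the current request
        await _emailSender.SendAsync(message);
    });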
I know this is a little late, but we just ran into this issue too. So after reading lots of ideas, here's the solution we came up with.
/// <summary>
/// Defines a simple interface for scheduling background tasks. Useful for unit testing ASP.NET code.
/// </summary>
public interface ITaskScheduler
{
/// <summary>
/// Schedules a task which can run in the background, independent of any request.
/// </summary>
/// <param name="workItem">A unit of execution.</param>
[SecurityPermission(SecurityAction.LinkDemand, Unrestricted = true)]
void QueueBackgroundWorkItem(Action<CancellationToken> workItem);
/// <summary>
/// Schedules a task which can run in the background, independent of any request.
/// </summary>
/// <param name="workItem">A unit of execution.</param>
[SecurityPermission(SecurityAction.LinkDemand, Unrestricted = true)]
void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem);
}
public class BackgroundTaskScheduler : BackgroundService, ITaskScheduler
{
public BackgroundTaskScheduler(ILogger<BackgroundTaskScheduler> logger)
{
_logger = logger;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
_logger.LogTrace("BackgroundTaskScheduler Service started.");
_stoppingToken = stoppingToken;
_isRunning = true;
try
{
await Task.Delay(-1, stoppingToken);
}
catch (TaskCanceledException)
{
}
finally
{
_isRunning = false;
_logger.LogTrace("BackgroundTaskScheduler Service stopped.");
}
}
public void QueueBackgroundWorkItem(Action<CancellationToken> workItem)
{
if (workItem == null)
{
throw new ArgumentNullException(nameof(workItem));
}
if (!_isRunning)
throw new Exception("BackgroundTaskScheduler is not running.");
_ = Task.Run(() => workItem(_stoppingToken), _stoppingToken);
}
public void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem)
{
if (workItem == null)
{
throw new ArgumentNullException(nameof(workItem));
}
if (!_isRunning)
throw new Exception("BackgroundTaskScheduler is not running.");
_ = Task.Run(async () =>
{
try
{
await workItem(_stoppingToken);
}
catch (Exception e)
{
_logger.LogError(e, "When executing background task.");
throw;
}
}, _stoppingToken);
}
private readonly ILogger _logger;
private volatile bool _isRunning;
private CancellationToken _stoppingToken;
}
The ITaskScheduler interface (which we already defined in our old ASP.NET client code for unit-testing purposes) allows a client to add a background task. The main purpose of the BackgroundTaskScheduler is to capture the stop cancellation token (which is owned by the Host) and to pass it into all the background tasks, which by definition run on the System.Threading.ThreadPool, so there is no need to create our own.
To configure Hosted Services properly see this post.
Enjoy!
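For reference, one possible registration that exposes the same instance both as the hosted service and as ITaskScheduler (a sketch, not the only way to do it):

    public void ConfigureServices(IServiceCollection services)
    {
        // Register a single instance and expose it under both contracts, so the
        // scheduler the host starts is the same one that controllers inject.
        services.AddSingleton<BackgroundTaskScheduler>();
        services.AddSingleton<ITaskScheduler>(sp => sp.GetRequiredService<BackgroundTaskScheduler>());
        services.AddSingleton<IHostedService>(sp => sp.GetRequiredService<BackgroundTaskScheduler>());
    }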
I have used Quartz.NET (does not require SQL Server) with the following extension method to easily set up and run a job:
public static class QuartzUtils
{
public static async Task<JobKey> CreateSingleJob<JOB>(this IScheduler scheduler,
string jobName, object data) where JOB : IJob
{
var jm = new JobDataMap { { "data", data } };
var jobKey = new JobKey(jobName);
await scheduler.ScheduleJob(
JobBuilder.Create<JOB>()
.WithIdentity(jobKey)
.Build(),
TriggerBuilder.Create()
.WithIdentity(jobName)
.UsingJobData(jm)
.StartNow()
.Build());
return jobKey;
}
}
Data is passed as an object that must be serializable. Create an IJob that processes the job like this:
public class MyJobAsync :IJob
{
public async Task Execute(IJobExecutionContext context)
{
var data = (MyDataType)context.MergedJobDataMap["data"];
....
Execute like this:
await SchedulerInstance.CreateSingleJob<MyJobAsync>("JobTitle 123", myData);
The original HostingEnvironment.QueueBackgroundWorkItem was a one-liner and very convenient to use.
The "new" way of doing this in ASP Core 2.x requires reading pages of cryptic documentation and writing considerable amount of code.
To avoid this you can use the following alternative method
public static ConcurrentBag<Boolean> bs = new ConcurrentBag<Boolean>();
[HttpPost("/save")]
public async Task<IActionResult> SaveAsync(dynamic postData)
{
var id = (String)postData.id;
Task.Run(() =>
{
bs.Add(Create(id));
});
return new OkResult();
}
private Boolean Create(String id)
{
/// do work
return true;
}
The static ConcurrentBag<Boolean> bs holds a reference to the result; this is intended to prevent the garbage collector from collecting the task after the controller returns.
I have a project where I use TopShelf and TopShelf.Quartz
Following this example I am building my jobs with
s.ScheduleQuartzJob(q =>
q.WithJob(() => JobBuilder.Create<MyJob>().Build())
.AddTrigger(() => TriggerBuilder.Create()
.WithSimpleSchedule(builder => builder
.WithIntervalInSeconds(5)
.RepeatForever())
.Build())
);
which fires my job every five seconds even if the previous run is still executing. What I really want to achieve is to start a job and, after it completes, wait five seconds and start again. Is this possible, or do I have to implement my own logic (for example via a static variable)?
A job listener, as proposed by @NateKerkhofs, will work, like this:
public class RepeatAfterCompletionJobListener : IJobListener
{
private readonly TimeSpan interval;
public RepeatAfterCompletionJobListener(TimeSpan interval)
{
this.interval = interval;
}
public void JobExecutionVetoed(IJobExecutionContext context)
{
}
public void JobToBeExecuted(IJobExecutionContext context)
{
}
public void JobWasExecuted(IJobExecutionContext context, JobExecutionException jobException)
{
string triggerKey = context.JobDetail.Key.Name + ".trigger";
var trigger = TriggerBuilder.Create()
.WithIdentity(triggerKey)
.StartAt(new DateTimeOffset(DateTime.UtcNow.Add(interval)))
.Build();
context.Scheduler.RescheduleJob(new TriggerKey(triggerKey), trigger);
}
public string Name
{
get
{
return "RepeatAfterCompletionJobListener";
}
}
}
Then add the listener to the scheduler:
var jobKey = "myJobKey";
var schedule = new StdSchedulerFactory().GetScheduler();
var listener = new RepeatAfterCompletionJobListener(TimeSpan.FromSeconds(5));
schedule.ListenerManager.AddJobListener(listener, KeyMatcher<JobKey>.KeyEquals(new JobKey(jobKey)));
var job = JobBuilder.Create<MyJob>()
.WithIdentity(jobKey)
.Build();
// Schedule the job to start in 5 seconds to give the service time to initialise
var trigger = TriggerBuilder.Create()
.WithIdentity(CreateTriggerKey(jobKey))
.StartAt(DateTimeOffset.Now.AddSeconds(5))
.Build();
schedule.ScheduleJob(job, trigger);
Unfortunately I don't know how to do this (or whether it can be done) with the fluent syntax used by the TopShelf.Quartz library; I use this with TopShelf and regular Quartz.NET.
You can use a TriggerListener (http://www.quartz-scheduler.net/documentation/quartz-2.x/tutorial/trigger-and-job-listeners.html) to listen to when the trigger finishes, then reschedule in 5 seconds.
Another option is to schedule the next job as the final action in the Execute of the job itself.
http://www.quartz-scheduler.net/documentation/faq.html has a question somewhere 2/3rds of the way down that explains more about it.
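A rough sketch of that second option, using the synchronous Quartz 2.x API (DoWork is a placeholder for the job's real work):

    public class MyJob : IJob
    {
        public void Execute(IJobExecutionContext context)
        {
            DoWork(); // placeholder for the actual work

            // Only after the work has completed, schedule the same job to run
            // again five seconds from now.
            var nextRun = TriggerBuilder.Create()
                .ForJob(context.JobDetail)
                .StartAt(DateTimeOffset.UtcNow.AddSeconds(5))
                .Build();
            context.Scheduler.ScheduleJob(nextRun);
        }
    }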
The JobListener solution is a very powerful and flexible way to reschedule your job after completion. Thanks to Nate Kerkhofs and stuartd for the input.
In my case it was sufficient to decorate my job class with the DisallowConcurrentExecution attribute, since I don't have different instances of my job:
[DisallowConcurrentExecution]
public class MyJob : IJob
{
}
FYI: Using a JobListener with TopShelf.Quartz, the code could look like this:
var jobName = "MyJob";
var jobKey = new JobKey(jobName);
s.ScheduleQuartzJob(q =>
q.WithJob(() => JobBuilder.Create<MyJob>()
.WithIdentity(jobKey).Build())
.AddTrigger(() => TriggerBuilder.Create()
.WithSimpleSchedule(builder => builder
.WithIntervalInSeconds(5)
.RepeatForever())
.Build()));
var listener = new RepeatAfterCompletionJobListener(TimeSpan.FromSeconds(5));
var listenerManager = ScheduleJobServiceConfiguratorExtensions
.SchedulerFactory().ListenerManager;
listenerManager.AddJobListener(listener, KeyMatcher<JobKey>.KeyEquals(jobKey));
If you are using TopShelf.Quartz.Ninject (like I do), don't forget to call UseQuartzNinject() prior to calling ScheduleJobServiceConfiguratorExtensions.SchedulerFactory().
The best way I found is to add a simple job listener.
In my example it reschedules the job immediately after a failure.
Of course, you can add a delay in .StartAt(DateTime.UtcNow).
public class QuartzRetryJobListener : IJobListener
{
public string Name => GetType().Name;
public async Task JobExecutionVetoed(IJobExecutionContext context, CancellationToken cancellationToken = default) => await Task.CompletedTask;
public async Task JobToBeExecuted(IJobExecutionContext context, CancellationToken cancellationToken = default) => await Task.CompletedTask;
public async Task JobWasExecuted(
IJobExecutionContext context,
JobExecutionException jobException,
CancellationToken cancellationToken = default)
{
if (jobException == null) return;
// Create and schedule new trigger
ITrigger retryTrigger = TriggerBuilder.Create()
.StartAt(DateTime.UtcNow)
.Build();
await context.Scheduler.ScheduleJob(context.JobDetail, new[] { retryTrigger }, true);
}
}
Also, I think it's useful to add an extension class:
public static class QuartzExtensions
{
public static void RepeatJobAfterFall(this IScheduler scheduler, IJobDetail job)
{
scheduler.ListenerManager.AddJobListener(
new QuartzRetryJobListener(),
KeyMatcher<JobKey>.KeyEquals(job.Key));
}
}
Just to simplify usage:
_scheduler.ScheduleJob(job, trigger);
// In case of failure, repeat the job immediately
_scheduler.RepeatJobAfterFall(job);
I'm currently working on a project and I need to queue some jobs for processing; here are the requirements:
Jobs must be processed one at a time
A queued item must be able to be waited on
So I want something akin to:
Task<result> QueueJob(params here)
{
/// Queue the job and somehow return a waitable task that will wait until the queued job has been executed and return the result.
}
I've tried having a long-running background task that just pulls items off a queue and processes each job, but the difficulty is getting the result from the background task back to the calling method.
If need be I could go the route of just requesting a completion callback in the QueueJob method, but it'd be great if I could get a transparent Task back that allows you to wait on the job to be processed (even if there are jobs before it in the queue).
You might find TaskCompletionSource<T> useful, it can be used to create a Task that completes exactly when you want it to. If you combine it with BlockingCollection<T>, you will get your queue:
class JobProcessor<TInput, TOutput> : IDisposable
{
private readonly Func<TInput, TOutput> m_transform;
// or a custom type instead of Tuple
private readonly
BlockingCollection<Tuple<TInput, TaskCompletionSource<TOutput>>>
m_queue =
new BlockingCollection<Tuple<TInput, TaskCompletionSource<TOutput>>>();
public JobProcessor(Func<TInput, TOutput> transform)
{
m_transform = transform;
Task.Factory.StartNew(ProcessQueue, TaskCreationOptions.LongRunning);
}
private void ProcessQueue()
{
Tuple<TInput, TaskCompletionSource<TOutput>> tuple;
while (m_queue.TryTake(out tuple, Timeout.Infinite))
{
var input = tuple.Item1;
var tcs = tuple.Item2;
try
{
tcs.SetResult(m_transform(input));
}
catch (Exception ex)
{
tcs.SetException(ex);
}
}
}
public Task<TOutput> QueueJob(TInput input)
{
var tcs = new TaskCompletionSource<TOutput>();
m_queue.Add(Tuple.Create(input, tcs));
return tcs.Task;
}
public void Dispose()
{
m_queue.CompleteAdding();
}
}
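Usage would then look something like this, for example from an async method (the string-length transform is just an example):

    // Jobs are queued from anywhere; each caller awaits its own result while
    // the processor works through the queue one item at a time.
    using (var processor = new JobProcessor<string, int>(s => s.Length))
    {
        Task<int> first = processor.QueueJob("hello");
        Task<int> second = processor.QueueJob("world!");
        Console.WriteLine(await first);   // 5
        Console.WriteLine(await second);  // 6
    }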
I would go for something like this:
class TaskProcessor<TResult>
{
// TODO: Error handling!
readonly BlockingCollection<Task<TResult>> blockingCollection = new BlockingCollection<Task<TResult>>(new ConcurrentQueue<Task<TResult>>());
public Task<TResult> AddTask(Func<TResult> work)
{
var task = new Task<TResult>(work);
blockingCollection.Add(task);
return task; // give the task back to the caller so they can wait on it
}
public void CompleteAddingTasks()
{
blockingCollection.CompleteAdding();
}
public TaskProcessor()
{
// Run the queue processor on a dedicated background thread so the
// constructor returns immediately and AddTask can be called right away.
Task.Factory.StartNew(ProcessQueue, TaskCreationOptions.LongRunning);
}
void ProcessQueue()
{
Task<TResult> task;
while (blockingCollection.TryTake(out task, Timeout.Infinite)) // blocks until an item arrives or adding is completed
{
task.Start();
task.Wait(); // ensure this task finishes before we start a new one...
}
}
}
Depending on the type of app that is using it, you could switch out the BlockingCollection/ConcurrentQueue for something simpler (eg just a plain queue). You can also adjust the signature of the "AddTask" method depending on what sort of methods/parameters you will be queueing up...
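For example, from the calling side (queueing two pieces of work and waiting on the returned tasks):

    var processor = new TaskProcessor<int>();
    Task<int> a = processor.AddTask(() => 2 + 2);
    Task<int> b = processor.AddTask(() => 6 * 7);
    Console.WriteLine(a.Result); // 4
    Console.WriteLine(b.Result); // 42
    processor.CompleteAddingTasks(); // stop the processing loop once the queue drains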
Func<T> takes no parameters and returns a value of type T. The jobs are run one by one and you can wait on the returned task to get the result.
public class TaskQueue
{
// NOTE: Queue<T> is not thread-safe; for real use, guard it with a lock
// or use a ConcurrentQueue<Task> instead.
private readonly Queue<Task> InnerTaskQueue = new Queue<Task>();
private bool IsJobRunning;
public void Start()
{
Task.Factory.StartNew(() =>
{
while (true)
{
if (InnerTaskQueue.Count > 0 && !IsJobRunning)
{
var task = InnerTaskQueue.Dequeue();
task.Start();
IsJobRunning = true;
task.ContinueWith(t => IsJobRunning = false);
}
else
{
Thread.Sleep(1000);
}
}
});
}
public Task<T> QueueJob<T>(Func<T> job)
{
var task = new Task<T>(() => job());
InnerTaskQueue.Enqueue(task);
return task;
}
}
}