Quartz.net CancellationToken - c#

In my scheduler, implemented with Quartz.NET v3, I'm trying to test the behaviour of the cancellation token:
....
IScheduler scheduler = await factory.GetScheduler();
....
var tokenSource = new CancellationTokenSource();
CancellationToken ct = tokenSource.Token;
// Start scheduler
await scheduler.Start(ct);
// some sleep
await Task.Delay(TimeSpan.FromSeconds(60));
// communicate cancellation
tokenSource.Cancel();
I have a test Job that runs infinitely and in the Execute method checks the cancellation token:
public async Task Execute(IJobExecutionContext context)
{
    while (true)
    {
        if (context.CancellationToken.IsCancellationRequested)
        {
            context.CancellationToken.ThrowIfCancellationRequested();
        }
    }
}
I would expect that when tokenSource.Cancel() is fired, the job would enter the if block and throw the exception. But it doesn't work.

According to the documentation, you should use the Interrupt method to cancel Quartz jobs.
NameValueCollection props = new NameValueCollection
{
    { "quartz.serializer.type", "binary" }
};
StdSchedulerFactory factory = new StdSchedulerFactory(props);
var scheduler = await factory.GetScheduler();
await scheduler.Start();

IJobDetail job = JobBuilder.Create<HelloJob>()
    .WithIdentity("myJob", "group1")
    .Build();

ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("myTrigger", "group1")
    .StartNow()
    .WithSimpleSchedule(x => x
        .WithRepeatCount(1)
        .WithIntervalInSeconds(40))
    .Build();

await scheduler.ScheduleJob(job, trigger);

// Configure the cancellation of the scheduled job via its job key
await Task.Delay(TimeSpan.FromSeconds(1));
await scheduler.Interrupt(job.Key);
Scheduled job class:
public class HelloJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        while (true)
        {
            if (context.CancellationToken.IsCancellationRequested)
            {
                context.CancellationToken.ThrowIfCancellationRequested();
                // After the job is interrupted, the cancellation request is activated
            }
        }
    }
}
Call scheduler.Interrupt after the job has started executing, and Quartz will terminate the job.
EDIT
According to the source code (line 2151), the Interrupt method cancels the cancellation tokens of the job execution contexts, so it is better to use this facility of the library.

Here is a unit test from the GitHub repo: https://github.com/quartznet/quartznet/blob/master/src/Quartz.Tests.Unit/InterrubtableJobTest.cs
I tried to implement the cancellation the same way, but it didn't work for me either.
@Stormcloak I have to check the cancellation request because I want to do some aborting operations for the job, e.g. write status data to a database.
EDIT:
So, after multiple tests and implementations, I've got it running.
Some Pseudo code here:
this.scheduler = await StdSchedulerFactory.GetDefaultScheduler();
this.tokenSource = new CancellationTokenSource();
this.token = tokenSource.Token;

// Start scheduler.
await this.scheduler.Start(token);

// Add some jobs here.
// ...

// Cancel running jobs.
IReadOnlyCollection<IJobExecutionContext> jobs = await this.scheduler.GetCurrentlyExecutingJobs();
foreach (IJobExecutionContext context in jobs)
{
    bool result = await this.scheduler.Interrupt(context.JobDetail.Key, this.token);
}
await this.scheduler.Shutdown(true);
So now you can use the CancellationToken in your Execute method.
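To tie this together, here is a minimal sketch (not from the original posts) of an Execute method that cooperates with the interrupt: it passes context.CancellationToken to awaited calls and catches OperationCanceledException so the job can run its aborting operations, e.g. writing status data to a database. DoWorkChunkAsync and SaveStatusAsync are hypothetical placeholders.
public class CancellableJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        var token = context.CancellationToken;
        try
        {
            while (true)
            {
                // Do one unit of work, then pause; Task.Delay observes the token.
                await DoWorkChunkAsync(token);
                await Task.Delay(TimeSpan.FromSeconds(1), token);
            }
        }
        catch (OperationCanceledException)
        {
            // scheduler.Interrupt(jobKey) signalled the token: run the abort path.
            await SaveStatusAsync("Cancelled"); // hypothetical: write status data to the database
        }
    }

    // Hypothetical placeholders for the real work and the cleanup.
    private Task DoWorkChunkAsync(CancellationToken token) => Task.CompletedTask;
    private Task SaveStatusAsync(string status) => Task.CompletedTask;
}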

Related

Running a task periodically without blocking the main thread C#

I am trying to send a keep-alive HTTP request every 30 minutes. However, I have other methods to be called in the time in between, so what I tried to do is:
Task.Factory.StartNew(async () =>
{
    while (true)
    {
        await Task.Delay(new TimeSpan(0, 30, 0), CancellationToken.None);
        await FooTask();
    }
});
Am I using it properly?
Are you doing it properly? No. You say you want a loop, but don't write a loop. You're also using the wrong task creation function:
Task.Run(async () =>
{
    while (true)
    {
        await FooTask().ConfigureAwait(false);
        await Task.Delay(new TimeSpan(0, 30, 0)).ConfigureAwait(false);
    }
});
There's also PeriodicTimer, which follows a similar pattern: you do your action and await the next tick.
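A minimal sketch of that pattern, assuming .NET 6 or later where PeriodicTimer is available:
using var timer = new PeriodicTimer(TimeSpan.FromMinutes(30));
do
{
    await FooTask();
}
while (await timer.WaitForNextTickAsync()); // returns false once the timer is disposed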
I'd suggest using Microsoft's Reactive Framework.
Then you can just do this:
IDisposable subscription =
Observable
.Timer(TimeSpan.Zero, TimeSpan.FromMinutes(30.0))
.SelectMany(_ => Observable.FromAsync(() => FooTask()))
.Subscribe();
Calling subscription.Dispose() shuts it down. It's nice and simple.
You likely want to use Task.Run() instead of Task.Factory.StartNew(), per the remarks in the docs. Task.Run() is recommended unless you need the extra options that Task.Factory.StartNew() offers (e.g. TaskCreationOptions, a custom TaskScheduler, or parameter passing) which Task.Run() does not cover.
Task.Run() will queue the work to run on the thread pool, and it will also return a handle to the task in case you want to wait for completion or cancel it, as shown in the example below.
CancellationTokenSource src = new CancellationTokenSource();
CancellationToken ct = src.Token;

Task t = Task.Run(async () =>
{
    while (true)
    {
        ct.ThrowIfCancellationRequested();
        await Task.Delay(new TimeSpan(0, 30, 0), ct);
        await FooTask();
    }
}, ct);

// Do some other stuff while your task runs

// Cancel your task (via the token source), OR...
src.Cancel();

// ...wait for your task to finish
try
{
    t.Wait();
}
catch (AggregateException e)
{
    // Handle the exception (cancellation surfaces as a TaskCanceledException inside it)
}

How can I use Parallel tasks and make them NOT wait for each other?

I have this code:
var Options = new ParallelOptions
{
    MaxDegreeOfParallelism = Environment.ProcessorCount * 10,
    CancellationToken = CTS.Token
};

while (!CTS.IsCancellationRequested)
{
    var TasksZ = new[]
    {
        "Task A",
        "Task B",
        "Task C"
    };

    await Parallel.ForEachAsync(TasksZ, Options, async (Comando, Token) =>
    {
        await MyFunction(Comando);
        await Task.Delay(1000, Token);
    });
}
Now, Task A, B and C start together, and the cycle finishes when ALL tasks are completed. Let's suppose that Task A and B finish in 10 seconds, but Task C takes 2 minutes. In this case, A and B have to wait 2 minutes too before they start again. How can I make these independent? I mean, every task on its own thread, AND considering that TasksZ is loaded dynamically and can change during execution by adding or removing other tasks.
Also, for stopping/pausing each individual task, do I need a separate TaskCompletionSource for each one? MyFunction belongs to an interface shared between the main app and every DLL; do I need to declare each TCS separately in the DLL(s), or just one in the common interface?
Edit:
My idea is (using this code from Microsoft) to have an app that runs separate DLLs, using the same interface, but each has its own job to do and can't wait for the others. They mainly follow this sequence of work: read a file -> handle an online POST request -> save a file -> communicate the returned JSON to the main app via a custom class -> repeat.
There is no other code I can show you to help you understand, because right now 90% is the same as the link above; the other 10% is just the POST request with a JSON return in a custom class, plus loading/saving a file.
To be 101% clear, given the example before, the situation should be this:
AM 12:00:00 = start all
AM 12:00:10 = task_A end // 10s
AM 12:00:10 = task_B end // 10s
AM 12:00:20 = task_A end // 10s
AM 12:00:20 = task_B end // 10s
AM 12:00:30 = task_A end // 10s
AM 12:00:30 = task_B end // 10s
...
AM 12:01:50 = task_A end // 10s
AM 12:01:50 = task_B end // 10s
AM 12:02:00 = task_C end // 2 minutes
AM 12:02:10 = task_A end // 10s
AM 12:02:10 = task_B end // 10s
...
(This is because I don't need live data for task_C, so it can POST every 2 minutes or so, but for task_A and task_B I need it live.)
About the cores: the important thing is that the PC will not freeze or sit at 100% CPU. The server where I run this is a dual core, so MaxDegreeOfParallelism = Environment.ProcessorCount * 10 was just so as not to stress the server too much.
I don't think that Parallel.ForEachAsync is a suitable tool for solving your problem. My suggestion is to store the tasks in a dictionary that has the string commandos as keys and (Task, CancellationTokenSource) tuples as values. Each time you add a commando to the dictionary, you start a Task associated with a CancellationTokenSource, after awaiting any previous Task that was stored for the same commando, in order to prevent concurrent executions of the same commando. For limiting the concurrency of all commandos you can use a SemaphoreSlim. For limiting the parallelism (the number of threads actively running code at any given moment) you can use a limited-concurrency TaskScheduler. Here is a demo:
const int maximumConcurrency = 10;
const int maximumParallelism = 2;

Dictionary<string, (Task, CancellationTokenSource)> commandos = new();
SemaphoreSlim semaphore = new(maximumConcurrency, maximumConcurrency);
TaskScheduler scheduler = new ConcurrentExclusiveSchedulerPair(
    TaskScheduler.Default, maximumParallelism).ConcurrentScheduler;

StartCommando("Task A");
StartCommando("Task B");
StartCommando("Task C");

void StartCommando(string commando)
{
    Task existingTask = null;
    CancellationTokenSource existingCts = null;
    if (commandos.TryGetValue(commando, out var entry))
    {
        (existingTask, existingCts) = entry;
        existingCts.Cancel();
    }
    CancellationTokenSource cts = new();
    CancellationToken token = cts.Token;
    Task task = Task.Factory.StartNew(async () =>
    {
        if (existingTask is not null) try { await existingTask; } catch { }
        while (true)
        {
            await semaphore.WaitAsync(token);
            try
            {
                await MyFunction(commando, token);
            }
            finally { semaphore.Release(); }
        }
    }, token, TaskCreationOptions.DenyChildAttach, scheduler).Unwrap();
    commandos[commando] = (task, cts);
    existingCts?.Dispose();
}

void StopCommando(string commando)
{
    if (commandos.TryGetValue(commando, out var entry))
    {
        (_, CancellationTokenSource cts) = entry;
        cts.Cancel();
    }
}

Task DisposeAllCommandos()
{
    List<Task> tasks = new(commandos.Count);
    foreach (var (commando, entry) in commandos)
    {
        (Task task, CancellationTokenSource cts) = entry;
        cts.Cancel();
        commandos.Remove(commando);
        cts.Dispose();
        tasks.Add(task);
    }
    return Task.WhenAll(tasks);
}
Online demo.
It is important that none of the awaits are configured with ConfigureAwait(false). Enforcing the maximumParallelism policy depends on always staying in the realm of our preferred scheduler, so capturing TaskScheduler.Current at the await points and continuing on that same scheduler is the desirable behavior, which is also the default behavior of await.
The StartCommando, StopCommando and DisposeAllCommandos methods are intended to be called sequentially, not in parallel. In case you want to control the execution of the commandos from multiple threads in parallel, you'll have to synchronize these calls with a lock.
The DisposeAllCommandos method is intended to be used before terminating the application. For a clean termination, the returned Task should be awaited. No more commandos should be started after calling this method.
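For example, a shutdown sequence based on the demo above might look like this (a sketch reusing the names defined earlier):
// Stop a single commando earlier if needed, then tear everything down on exit.
StopCommando("Task C");
await DisposeAllCommandos(); // cancels every remaining commando and awaits their tasks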
As I mentioned in my comment above, you can create your own wrapper around a queue that manages background processors of your queue and re-queues the tasks as they complete.
In addition, you mentioned the need to dynamically add or remove tasks at will, which the below implementation will handle.
And finally, it takes an external CancellationToken so that you can either call stop on the processor itself, or cancel the parent CancellationTokenSource.
public class QueueProcessor
{
    // Could be replaced with a ref-count solution to ensure
    // all duplicated tasks are removed.
    private readonly HashSet<string> _tasksToRemove = new();
    private readonly ConcurrentQueue<string> _taskQueue;
    private Task[] _processors;
    private Func<string, CancellationToken, Task> _processorCallback;
    private CancellationTokenSource _cts;

    public QueueProcessor(
        string[] tasks,
        Func<string, CancellationToken, Task> processorCallback)
    {
        _taskQueue = new(tasks);
        _processorCallback = processorCallback;
    }

    public async Task StartAsync(int numberOfProcessorThreads,
        CancellationToken cancellationToken = default)
    {
        _cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
        _processors = new Task[numberOfProcessorThreads];
        for (int i = 0; i < _processors.Length; i++)
        {
            _processors[i] = Task.Run(async () => await ProcessQueueAsync());
        }
        await Task.WhenAll(_processors);
    }

    public void Stop()
    {
        _cts.Cancel();
        _cts.Dispose();
    }

    public void RemoveTask(string task)
    {
        lock (_tasksToRemove)
        {
            _tasksToRemove.Add(task);
        }
    }

    public void AddTask(string task) => _taskQueue.Enqueue(task);

    private async Task ProcessQueueAsync()
    {
        while (!_cts.IsCancellationRequested)
        {
            if (_taskQueue.TryDequeue(out var task))
            {
                if (ShouldTaskBeRemoved(task))
                {
                    continue;
                }

                await _processorCallback(task, _cts.Token);

                if (!ShouldTaskBeRemoved(task))
                {
                    _taskQueue.Enqueue(task);
                }
            }
            else
            {
                // Sleep for a bit before checking for more work.
                await Task.Delay(1000, _cts.Token);
            }
        }
    }

    private bool ShouldTaskBeRemoved(string task)
    {
        lock (_tasksToRemove)
        {
            if (_tasksToRemove.Contains(task))
            {
                Console.WriteLine($"Task {task} requested for removal");
                _tasksToRemove.Remove(task);
                return true;
            }
        }
        return false;
    }
}
You can test the above with the following:
public async Task MyFunction(string command, CancellationToken cancellationToken)
{
    await Task.Delay(50);
    if (!cancellationToken.IsCancellationRequested)
    {
        Console.WriteLine($"Execute command: {command}");
    }
    else
    {
        Console.WriteLine($"Terminating command: {command}");
    }
}
var cts = new CancellationTokenSource();
var processor = new QueueProcessor(
    new string[] { "Task1", "Task2", "Task3" },
    MyFunction);

var runningProcessorTask = processor.StartAsync(2, cts.Token);

await Task.Delay(100);
processor.RemoveTask("Task1");
await Task.Delay(500);

cts.Cancel();
await runningProcessorTask;
This results in the following output:
Execute command: Task2
Execute command: Task1
Execute command: Task3
Execute command: Task2
Task Task1 requested for removal
Execute command: Task3
Execute command: Task2
Execute command: Task2
Execute command: Task3
Execute command: Task3
Execute command: Task2
Execute command: Task2
Execute command: Task3
Execute command: Task3
Execute command: Task2
Execute command: Task2
Execute command: Task3
Execute command: Task2
Execute command: Task3
Terminating command: Task2
Terminating command: Task3
If you would prefer a Channel<T>-backed version that waits for additional work gracefully, without a manual Task.Delay, the following version exposes the same public API without the internal ConcurrentQueue<T>.
public class QueueProcessor
{
    // Could be replaced with a ref-count solution to ensure all duplicated tasks are removed.
    private readonly HashSet<string> _tasksToRemove = new();
    private readonly System.Threading.Channels.Channel<string> _taskQueue;
    private Task[] _processors;
    private Func<string, CancellationToken, Task> _processorCallback;
    private CancellationTokenSource _cts;

    public QueueProcessor(string[] tasks, Func<string, CancellationToken, Task> processorCallback)
    {
        _taskQueue = Channel.CreateUnbounded<string>();
        _processorCallback = processorCallback;
        for (int i = 0; i < tasks.Length; i++)
        {
            _taskQueue.Writer.WriteAsync(tasks[i]);
        }
    }

    public async Task StartAsync(int numberOfProcessorThreads, CancellationToken cancellationToken = default)
    {
        _cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
        _processors = new Task[numberOfProcessorThreads];
        for (int i = 0; i < _processors.Length; i++)
        {
            _processors[i] = Task.Run(async () => await ProcessQueueAsync());
        }
        await Task.WhenAll(_processors);
    }

    public void Stop()
    {
        _taskQueue.Writer.TryComplete();
        _cts.Cancel();
        _cts.Dispose();
    }

    public void RemoveTask(string task)
    {
        lock (_tasksToRemove)
        {
            _tasksToRemove.Add(task);
        }
    }

    public ValueTask AddTask(string task) => _taskQueue.Writer.WriteAsync(task);

    private async Task ProcessQueueAsync()
    {
        while (!_cts.IsCancellationRequested && await _taskQueue.Reader.WaitToReadAsync(_cts.Token))
        {
            if (_taskQueue.Reader.TryRead(out var task))
            {
                if (ShouldTaskBeRemoved(task))
                {
                    continue;
                }

                await _processorCallback(task, _cts.Token);

                if (!ShouldTaskBeRemoved(task))
                {
                    await _taskQueue.Writer.WriteAsync(task);
                }
            }
        }
    }

    private bool ShouldTaskBeRemoved(string task)
    {
        lock (_tasksToRemove)
        {
            if (_tasksToRemove.Contains(task))
            {
                Console.WriteLine($"Task {task} requested for removal");
                _tasksToRemove.Remove(task);
                return true;
            }
        }
        return false;
    }
}
Let me take a stab at identifying your actual root problem: you have I/O bound operations (network access, file I/O, database queries, etc) running at the same time as CPU bound operations (whatever processing you have on the former), and because of the way you wrote your code (that you don't show), you have I/O bound operations waiting for CPU bound ones to even start.
I'm guessing that because, by reductio ad absurdum, if everything were CPU bound then your CPU cores would be equally used no matter the order of operations, and for I/O bound operations the total time they take is equally independent of the order; they just have to get woken up when something finally finishes.
If I'm right, then the actual solution is to split your calls between two thread pools, one for CPU bound operations (that max at the number of available cores) and one for I/O bound operations (that max at some reasonable default, the maximum number of I/O connections that can be in flight at the same time). You can then schedule each operation to its own thread pool and await them as you normally would and they'd never step on each others' toes.
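A minimal sketch of that split, borrowing the limited-concurrency scheduler idea from the answer above for CPU-bound work and using a SemaphoreSlim to cap in-flight I/O; RunCpuBoundAsync, RunIoBoundAsync and the example operations are hypothetical names, not anything from the original code:
// Cap CPU-bound work at the number of cores via a limited-concurrency scheduler.
TaskScheduler cpuScheduler = new ConcurrentExclusiveSchedulerPair(
    TaskScheduler.Default, Environment.ProcessorCount).ConcurrentScheduler;

// Cap the number of I/O operations that may be in flight at the same time.
SemaphoreSlim ioLimiter = new(initialCount: 16, maxCount: 16);

Task RunCpuBoundAsync(Action cpuBoundWork) =>
    Task.Factory.StartNew(cpuBoundWork, CancellationToken.None,
        TaskCreationOptions.DenyChildAttach, cpuScheduler);

async Task RunIoBoundAsync(Func<Task> ioBoundWork)
{
    await ioLimiter.WaitAsync();
    try { await ioBoundWork(); }
    finally { ioLimiter.Release(); }
}

// The I/O-bound call never queues behind CPU-bound work, and vice versa.
var http = new HttpClient();
Task download = RunIoBoundAsync(() => http.GetStringAsync("https://example.com"));
Task crunch = RunCpuBoundAsync(() => { /* CPU-heavy processing here */ });
await Task.WhenAll(download, crunch);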
You can use the Parallel.Invoke() method to execute multiple operations at the same time.
var TasksZ = new[]
{
    () => MyFunction("Task A"),
    () => MyFunction("Task B"),
    () => MyFunction("Task C")
};

Parallel.Invoke(Options, TasksZ);

void MyFunction(string comando)
{
    Console.WriteLine(comando);
}

Quartz doesn't end job and/or trigger chained job

I have two Quartz jobs, one for downloading (whose source is only available during specific times) and one for compressing (to compress the downloaded data), which I have chained with JobChainingJobListener.
However, the first Download job never completes (I am spinning on the CancellationToken but it never gets set) - similar to this blog https://blexin.com/en/blog-en/stopping-asynchronous-jobs-of-quartz-3/
I am not sure if Quartz supports externally managing the run duration, or if I need to manage the runtime within the job (i.e. run for X number of hours and then return). To separate concerns, managing this externally to the job would be best.
In my demo below I have set it to start running 30 seconds after the app starts and to stop running 30 seconds after it started (but in my real version I run between 9am and 6pm, in the provided timezone).
I would greatly appreciate advice as to what I am doing wrong.
var scheduler = await StdSchedulerFactory.GetDefaultScheduler();
var schedule = SchedulerBuilder.Create();

var now = DateTime.Now.TimeOfDay;
var startAt = now.Add(TimeSpan.FromSeconds(30));
var endAt = startAt.Add(TimeSpan.FromSeconds(30));
Console.WriteLine($"Run from {now} to {endAt}");

var trigger = TriggerBuilder
    .Create()
    .ForJob(downloadJob)
    .WithDailyTimeIntervalSchedule(x => x.InTimeZone(timeZoneInfo)
        .StartingDailyAt(new TimeOfDay(startAt.Hours, startAt.Minutes, startAt.Seconds))
        .EndingDailyAt(new TimeOfDay(endAt.Hours, endAt.Minutes, endAt.Seconds)))
    .Build();

var chain = new JobChainingJobListener("DownloadAndCompress");
chain.AddJobChainLink(downloadJob.Key, compressJob.Key);
scheduler.ListenerManager.AddJobListener(chain, GroupMatcher<JobKey>.AnyGroup());

await this.scheduler.ScheduleJob(downloadJob, trigger, cancellationToken);
await this.scheduler.AddJob(compressJob, true, true, cancellationToken);
await this.scheduler.Start(cancellationToken);
And my template jobs are below:
class DownloadJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine("Running download job");
        var cancellationToken = context.CancellationToken;
        while (!cancellationToken.IsCancellationRequested)
        {
            await Task.Delay(1000, cancellationToken);
        }
        Console.WriteLine("End download job");
    }
}

class CompressJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine("Running compress job");
        var cancellationToken = context.CancellationToken;
        while (!cancellationToken.IsCancellationRequested)
        {
            await Task.Delay(1000, cancellationToken);
        }
        Console.WriteLine("End compress job");
    }
}

C# Schedule a Function using Quartz.net (or alternatives)

I have a C# service and I need to run a function once a week.
I have a working C# service which currently is running on a timer every 60 seconds.
Please see below a section of the service's OnStart function:
// Set up a timer to trigger.
System.Timers.Timer timer = new System.Timers.Timer
{
    Interval = 60000 // 60 seconds
};
timer.Elapsed += delegate
{
    // Runs the code every 60 seconds but only triggers it if the schedule matches
    Function1();
};
timer.Start();
The above code calls Function1() every 60 seconds; inside Function1 I check whether the current day of week and time match the schedule, and if they do, the rest of the function executes.
Although this does work, it is not the most elegant way IMO.
I have tried using Quartz.NET, as it looked promising, but when I used the examples available online (questions answered some 7 years ago, in 2012), the code shows an error in Visual Studio:
using System;
using Quartz;

public class SimpleJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        throw new NotImplementedException();
    }
}
This is erroring
(Error CS0738 'SimpleJob' does not implement interface member 'IJob.Execute(IJobExecutionContext)'. 'SimpleJob.Execute(IJobExecutionContext)' cannot implement 'IJob.Execute(IJobExecutionContext)' because it does not have the matching return type of 'Task'.)
but this does not:
public Task Execute(IJobExecutionContext context)
{
    throw new NotImplementedException();
}
Could someone give a current working example of a job scheduled through Quartz.net for a beginner?
Or using another elegant method than Quartz.net in a C# service?
First of all, we need to implement a job. For example:
internal class TestJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine("Job started");
        return Task.CompletedTask;
    }
}
Now we need to write a method which sets up and starts the Quartz scheduler:
static async Task TestScheduler()
{
    // construct a scheduler factory
    NameValueCollection props = new NameValueCollection
    {
        { "quartz.serializer.type", "binary" }
    };
    StdSchedulerFactory factory = new StdSchedulerFactory(props);

    // get a scheduler
    IScheduler sched = await factory.GetScheduler();
    await sched.Start();

    // define the job and tie it to our TestJob class
    IJobDetail job = JobBuilder.Create<TestJob>()
        .WithIdentity("myJob", "group1")
        .Build();

    // Trigger the job to run now, and then every minute
    ITrigger trigger = TriggerBuilder.Create()
        .WithIdentity("myTrigger", "group1")
        .StartNow()
        .WithSimpleSchedule(x => x
            .WithIntervalInMinutes(1)
            .RepeatForever())
        .Build();

    await sched.ScheduleJob(job, trigger);
}
In the Main method of the program we need to write the following code:
static async Task Main()
{
    Console.WriteLine("Test Scheduler started");
    await TestScheduler();
    Console.ReadKey();
}
Now the job will keep executing every minute.
Hope it helps.
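Since the original question asks for a weekly run rather than one every minute, a possible variation is a cron-based trigger (a sketch; the day and time here are placeholders):
// Fire every Monday at 08:00 instead of repeating on a simple interval.
ITrigger weeklyTrigger = TriggerBuilder.Create()
    .WithIdentity("weeklyTrigger", "group1")
    .WithSchedule(CronScheduleBuilder.WeeklyOnDayAndHourAndMinute(DayOfWeek.Monday, 8, 0))
    .Build();

await sched.ScheduleJob(job, weeklyTrigger);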

Configuring quartz.net scheduler in .net

I have an application in .NET Framework and I'm using the Quartz scheduler. I need to configure Quartz.
Right now I have one method which is fired every 15 minutes. This method does some work with a database. I want the waiting period to start only once the procedure has completed, and after that period the database method should start again.
The procedure will also have a maximum time which it cannot exceed, for example 60 minutes. Do you have any ideas how to configure the maximum length of the working procedure, how to stop it when the work is finished, and how to define the waiting time in between?
// configure Quartz
var stdSchedulerProperties = new NameValueCollection
{
    { "quartz.threadPool.threadCount", "10" },
    { "quartz.jobStore.misfireThreshold", "60000" }
};
var stdSchedulerFactory = new StdSchedulerFactory(stdSchedulerProperties);
var scheduler = stdSchedulerFactory.GetScheduler().Result;
scheduler.Start();

// create job and specify timeout
IJobDetail job = JobBuilder.Create<JobWithTimeout>()
    .WithIdentity("job1", "group1")
    .UsingJobData("timeoutInMinutes", 60)
    .Build();

// create trigger and specify repeat interval
ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("trigger1", "group1")
    .StartNow()
    .WithSimpleSchedule(x => x.WithIntervalInMinutes(15).RepeatForever())
    .Build();

// schedule job
scheduler.ScheduleJob(job, trigger).Wait();

/// <summary>
/// Implementation of IJob. Represents the wrapper job for a task with timeout.
/// </summary>
public class JobWithTimeout : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // Run the blocking wrapper on the thread pool so Execute returns a Task.
        // (Renamed from a second Execute overload: C# does not allow two methods
        // with the same signature that differ only by return type.)
        return Task.Run(() => ExecuteWithTimeout(context));
    }

    private void ExecuteWithTimeout(IJobExecutionContext context)
    {
        Thread workerThread = new Thread(DoWork);
        workerThread.Start();

        context.JobDetail.JobDataMap.TryGetValue("timeoutInMinutes", out object timeoutInMinutes);
        TimeSpan timeout = TimeSpan.FromMinutes((int)timeoutInMinutes);

        bool finished = workerThread.Join(timeout);
        if (!finished) workerThread.Abort();
    }

    public void DoWork()
    {
        // do stuff
    }
}
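As a side note (not part of the original answer): if the waiting period must never overlap with a run that is still in progress, Quartz's [DisallowConcurrentExecution] attribute prevents a new execution of the same job from starting while a previous one is still running:
// Applying the attribute to the job class above:
[DisallowConcurrentExecution]
public class JobWithTimeout : IJob
{
    // ... same members as shown above; Quartz will not start a new
    // execution of this job while a previous one is still running ...
}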
