I created a job that implements IStatefulJob, and according to the Quartz docs:
"if a job is stateful, and a trigger attempts to 'fire' the job while it is already executing, the trigger will block (wait) until the previous execution completes"
Is there any way to remove the block and kill the newly fired instance of the job?
The job I am running can have wildly different run times depending on the amount of data behind it, and I am concerned that if we have a number of jobs waiting to run it could have a negative effect...
Thanks
Unfortunately, no. As the job implementer you are responsible for making sure that the job keeps track of whether it has reached its time limit of 'good behavior'. Normally there's no need, as jobs take a somewhat predictable time to complete.
The same goes when you want to interrupt all jobs in the scheduler: you need to implement IInterruptableJob and set a flag that your main job loop watches.
You can always rethink the design. It shouldn't be a problem to queue the same job, as it has the same duty to do. With misfire instructions you can configure misfired (queued too long) instances to be discarded and simply wait for the next fire time.
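For illustration, a minimal sketch of the IInterruptableJob approach, assuming the Quartz.NET 2.x API (in 1.x the context parameter type is JobExecutionContext; in 3.x the interface was replaced by a cancellation token on the context). The job and helper names are made up:

using System.Collections.Generic;
using Quartz;

// Hypothetical long-running job that checks an interrupt flag between work items.
public class LongRunningJob : IInterruptableJob
{
    private volatile bool _interrupted;

    public void Execute(IJobExecutionContext context)
    {
        foreach (var item in LoadWorkItems())      // placeholder for your own work source
        {
            if (_interrupted)
            {
                return;                            // stop cooperatively when asked to
            }
            Process(item);
        }
    }

    // Invoked when something calls scheduler.Interrupt(jobKey) from outside the job.
    public void Interrupt()
    {
        _interrupted = true;
    }

    private IEnumerable<object> LoadWorkItems() { yield break; }   // placeholder
    private void Process(object item) { }                          // placeholder
}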
Related
I am using Quartz.net.
I have configured a job with the DisallowConcurrentExecution attribute, because I want only a single instance of that job to execute at a time.
I have configured a trigger that fires every 10 seconds, but in some situations my job takes minutes to complete. When that happens, the Last Execution Time and Next Execution Time are not correct; they still refer to the old times.
I am new to Quartz, but I understand that the thread pool may queue the job and start a new instance once the running one completes, because of the attribute configuration. Why is it not maintaining the execution times properly, though?
Please help.
Double-posted here: https://github.com/quartznet/quartznet/issues/173
This works as designed. Quartz considers your trigger misfired because it didn't run when it was supposed to (the job's concurrent-execution protection prevented it). You need to tweak your misfire handling configuration.
http://www.quartz-scheduler.net/documentation/quartz-2.x/tutorial/more-about-triggers.html
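For illustration only, a misfire policy on a 10-second simple schedule might look like this in the Quartz.NET 2.x fluent API; WithMisfireHandlingInstructionNextWithRemainingCount tells the scheduler to skip missed firings instead of replaying them (the identity names are placeholders):

using Quartz;

public static class TriggerConfig
{
    // Hypothetical trigger: fires every 10 seconds; firings that are missed (for
    // example because [DisallowConcurrentExecution] blocked them) are skipped
    // rather than replayed in a burst.
    public static ITrigger BuildEveryTenSeconds()
    {
        return TriggerBuilder.Create()
            .WithIdentity("myTrigger", "myGroup")
            .StartNow()
            .WithSimpleSchedule(x => x
                .WithIntervalInSeconds(10)
                .RepeatForever()
                .WithMisfireHandlingInstructionNextWithRemainingCount())
            .Build();
    }
}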
I have a queue of tasks or work items that need to be executed in sequence, in the background. These tasks are of the "fire and forget" type: once they are started, I do not really care whether they complete, and there is no need for cancellation or status updates. If they do not complete, the user will be able to retry or diagnose manually.
The goal is to be able to keep a reference to the queue and only have to do
myQueue.Add( () => DoMyStuff() );
in order to add something to the queue.
The System.Threading.Tasks.Task class only seems to be able to queue tasks one after the other, not by referencing a common queue. I do not want to manage the complexity of getting the latest task and attaching to it.
Thread pools do not guarantee sequencing and will execute work items in parallel (which is great, but not what I need).
Is there any built-in class I have not thought of that can handle this?
Edit:
We need to be able to add tasks to the queue at a later time. The scenario is that we want to send commands to a device (think switching a light bulb on or off) when the user clicks a button. The commands take 5 seconds to process, and we want the user to be able to click more than once and queue the requests. We do not know upfront how many tasks will be queued nor what the tasks will be.
Create a BlockingCollection; by default it will use a ConcurrentQueue as its internal data structure. Ensure that any task with prerequisites has its prerequisites added to the collection first. The BlockingCollection can hold Tasks, custom items representing the parameters for a method to be called, a Func<> or Action<> to execute, or whatever. Personally I'd go with either Task or Action.
Then you just need to have a single thread that goes through each item in the collection and executes them synchronously.
You can add new items to the queue while it's working and it won't have any problems.
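A minimal sketch of that idea, assuming a single long-running consumer and Action work items (all names are mine, not from the question):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class SequentialWorkQueue
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();

    public SequentialWorkQueue()
    {
        // One long-running consumer: items execute one at a time, in the order added.
        Task.Factory.StartNew(Consume, TaskCreationOptions.LongRunning);
    }

    public void Add(Action work)
    {
        _queue.Add(work);            // safe to call while the consumer is working
    }

    public void CompleteAdding()
    {
        _queue.CompleteAdding();     // lets the consumer loop finish, if you ever need it to
    }

    private void Consume()
    {
        foreach (var work in _queue.GetConsumingEnumerable())
        {
            try { work(); }
            catch { /* fire-and-forget: swallow or log, per the question */ }
        }
    }
}

// Usage, as in the question: myQueue.Add(() => DoMyStuff());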
You can create a queue object as a wrapper around System.Threading.Tasks.Task. If you limit the number of concurrently executing threads to just 1 in the underlying thread pool, I think your problem is solved.
Limiting the number of executing tasks: System.Threading.Tasks - Limit the number of concurrent Tasks
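Another hedged sketch, not a built-in class: keep a "tail" task and chain each new work item onto it, so at most one item runs at a time and they run in the order added. The type and member names are invented:

using System;
using System.Threading.Tasks;

public class TaskChainQueue
{
    private readonly object _gate = new object();
    private Task _tail = CreateCompletedTask();

    // Each added action runs only after the previously added one has finished.
    public Task Add(Action work)
    {
        lock (_gate)
        {
            // The lock only protects the hand-off of the tail reference;
            // the work itself runs on the thread pool.
            _tail = _tail.ContinueWith(_ =>
            {
                try { work(); }
                catch { /* fire-and-forget: ignore failures, per the question */ }
            });
            return _tail;
        }
    }

    private static Task CreateCompletedTask()
    {
        var tcs = new TaskCompletionSource<object>();
        tcs.SetResult(null);
        return tcs.Task;
    }
}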
How about starting all the threads at the same time and making them listen for a job-completion event?
Say your threads have IDs according to the sequence in which they should run. All the threads can start at the same time, but each will sleep until it receives the job-complete/timeout event of the previous job.
The job-complete/timeout event will also help your monitoring thread keep track of the worker threads.
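A rough sketch of that scheme, purely illustrative (the timeout figure and all names are invented):

using System;
using System.Threading;

public static class ChainedWorkers
{
    public static void Run(Action[] jobs, TimeSpan perJobTimeout)
    {
        // One "done" event per job; job i waits for job i-1 to signal (or time out).
        var done = new ManualResetEvent[jobs.Length];
        for (int i = 0; i < jobs.Length; i++) done[i] = new ManualResetEvent(false);

        for (int i = 0; i < jobs.Length; i++)
        {
            int id = i;   // capture the loop variable for the closure
            new Thread(() =>
            {
                if (id > 0)
                {
                    // Sleep until the predecessor finishes, or give up after the timeout.
                    done[id - 1].WaitOne(perJobTimeout);
                }
                try { jobs[id](); }
                finally { done[id].Set(); }   // let the next worker (and a monitor) proceed
            }).Start();
        }
    }
}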
Background is the following: A Windows Service which is supposed to perform an action once per day at a given time.
I have currently implemented this by creating a timer and adding the ElapsedEventHandler. The event fires every t minutes, and the handler then checks whether we are past the configured time. If so, the action is performed; if not, nothing happens.
A colleague asked me if it was not easier just to have a while(true) loop containing a sleep() and then of course the same logic for checking if we are past the time for action.
Question:
Can one say anything about the "robustness" of a timer event vs. a while(true) loop? I am thinking of the situation where the thread "dies" and the loop exits. Is this more likely to happen in one scenario than in the other?
I'd vote for neither.
If your service just sits idle for an entire day, periodically waking up (and paging code in) to see if "it's time to run", then this is a task better suited to the Windows Task Scheduler. You can programmatically install a task to run every day through the Task Scheduler. Then your code doesn't need to be running at all unless it's time to run. (Or, if your service does need to run in the background anyway, the scheduled task can signal your service to wake up instead of timer logic.)
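As an illustration only, one way (among others, such as the Task Scheduler COM API) to install such a task from code is to shell out to schtasks.exe; the task name, executable path and time below are invented:

using System.Diagnostics;

public static class DailyTaskInstaller
{
    // Hypothetical: register "MyDailyAction" to run MyJob.exe every day at 02:00.
    public static void Install()
    {
        var psi = new ProcessStartInfo("schtasks.exe",
            "/Create /TN \"MyDailyAction\" /TR \"C:\\Tools\\MyJob.exe\" /SC DAILY /ST 02:00")
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var p = Process.Start(psi))
        {
            p.WaitForExit();
        }
    }
}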
Both will be equally robust if you use proper error handling.
If you don't use proper error handling they will be equally brittle.
while (true)
{
    ...
    Thread.Sleep(1000);
}
will make your service slow to respond to standard service events like OnStop.
Besides, where do you put the while loop? In a separate thread? That means more manual thread management if you use a loop, too.
To summarize: use a timer.
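A minimal sketch of the timer-in-a-service approach with the error handling both answers insist on; the service name, the 02:00 run time and the logging choice are placeholders:

using System;
using System.Timers;

public class MyDailyService : System.ServiceProcess.ServiceBase
{
    private Timer _timer;
    private DateTime _lastRunDate = DateTime.MinValue;

    protected override void OnStart(string[] args)
    {
        _timer = new Timer(60 * 1000);          // check once a minute
        _timer.Elapsed += OnTimerElapsed;
        _timer.Start();
    }

    private void OnTimerElapsed(object sender, ElapsedEventArgs e)
    {
        try
        {
            var now = DateTime.Now;
            // Run once per day, after the configured time (02:00 here is a placeholder).
            if (_lastRunDate.Date < now.Date && now.TimeOfDay >= TimeSpan.FromHours(2))
            {
                PerformDailyAction();
                _lastRunDate = now;
            }
        }
        catch (Exception ex)
        {
            // "Proper error handling": log and keep the timer alive.
            System.Diagnostics.EventLog.WriteEntry("MyDailyService", ex.ToString());
        }
    }

    protected override void OnStop()
    {
        _timer.Stop();                          // responsive, unlike a sleeping loop
        _timer.Dispose();
    }

    private void PerformDailyAction() { /* your once-a-day work goes here */ }
}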
I have a C# program, which has an "Agent" class. The program creates several Agents, and each Agent has a "run()" method, which executes a Task (i.e.: Task.Factory.StartNew()...).
Each Agent performs some calculations and then needs to wait for all the other Agents to finish their calculations before proceeding to the next stage (its actions will be based on the calculations of the others).
In order to make an Agent wait, I created a CancellationTokenSource (named "tokenSource"), and in order to alert the program that this Agent is going to sleep, I raise an event. Thus, the two consecutive commands are:
(1) OnWaitingForAgents(new EventArgs());
(2) tokenSource.Token.WaitHandle.WaitOne();
(The event is caught by an "AgentManager" class, which runs on its own thread, and the second command puts the Agent's task thread to sleep until the cancellation token is signalled.)
Each time the above event is fired, the AgentManager class catches it, and adds +1 to a counter. If the counter equals the number of Agents used in the program, the AgentManager (which holds a reference to all Agents) wakes each one up as follows:
agent.TokenSource.Cancel();
Now we reach my problem: the first command is executed asynchronously by an Agent; then, due to a context switch between threads, the AgentManager seems to catch the event and goes on to wake up all the Agents. BUT the current Agent has not even reached the second command yet!
Thus, the Agent is receiving a "wake up" signal, and only then does he go to sleep, which means he gets stuck sleeping with no one to wake him up!
Is there a way to "atomize" the 2 consecutive methods together, so no context switch will happen, thus forcing the Agent to go to sleep before the AgentManager has the chance to wake him up?
The low-level technique that you are asking about is thread synchronisation. What you have there is a critical section (or part of one), and you need to protect access to it. I'm surprised that you've learned about multithreaded programming without having learned about thread synchronisation and critical sections yet! It's essential to know about these things for any kind of "low-level" multithreaded programming.
Maybe look into Parallel.Invoke or Parallel.For in .NET 4, which allow you to execute methods in parallel and wait until all of them have completed.
http://msdn.microsoft.com/en-us/library/dd992634.aspx
Seems like that would help you out a lot, and take care of all the queuing for you.
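For example (a self-contained sketch; the Agent class here is just a stand-in for the one in the question):

using System;
using System.Threading.Tasks;

class Agent                          // stand-in for the question's Agent class
{
    public void Calculate() { /* per-agent calculations */ }
}

class Program
{
    static void Main()
    {
        var agents = new[] { new Agent(), new Agent(), new Agent() };

        // Run all calculations in parallel; Invoke blocks until every one has finished.
        Parallel.Invoke(Array.ConvertAll(agents, a => (Action)(() => a.Calculate())));

        // Only now does each agent read the others' results (the "next stage").
    }
}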
Hmm... I don't think it's a good idea (or even possible) to develop software in .NET while worrying about context switches, since neither Windows nor .NET is real-time. You probably have a different kind of problem in that code.
As I understand it, you simply run all your agents in parallel and want to wait until all of them have finished before going to the next stage. You can use several techniques to accomplish that; the easiest would be Monitor.Wait(Object monitor) and Monitor.PulseAll(Object monitor).
The Task library offers several ways to do this as well. As #jishi has pointed out, you can use the Parallel flavours, or spawn a number of Tasks and then wait for them all with the Task.WaitAll(Task[] tasks) method.
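A hedged sketch of the Task.WaitAll route (again, the Agent and manager types are stand-ins for the ones in the question):

using System.Linq;
using System.Threading.Tasks;

class Agent                          // stand-in for the question's Agent class
{
    public void Calculate() { /* per-agent calculations */ }
    public void NextStage() { /* actions based on the others' results */ }
}

class AgentManager
{
    // Spawn one Task per agent, then block until every calculation is done.
    public void RunCalculationPhase(Agent[] agents)
    {
        Task[] calculations = agents
            .Select(a => Task.Factory.StartNew(() => a.Calculate()))
            .ToArray();

        Task.WaitAll(calculations);

        // All calculations are finished; it is now safe to start the next stage.
        foreach (var agent in agents)
        {
            agent.NextStage();
        }
    }
}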
Each time the above event is fired, the AgentManager class catches it, and adds +1 to a counter.
How are you adding 1 to that counter, and how are you reading it? You should use Interlocked.Increment to ensure an atomic operation, and read it with a volatile read (Thread.VolatileRead, for example), or simply put both operations inside a lock statement.
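For example, a small counter wrapper along those lines (the names are mine); Interlocked.Increment returns the new value, so the check can ride on the same atomic operation:

using System.Threading;

public class AgentCounter
{
    private readonly int _totalAgents;
    private int _waitingAgents;          // only ever touched through Interlocked

    public AgentCounter(int totalAgents)
    {
        _totalAgents = totalAgents;
    }

    // Returns true exactly once: for the agent whose increment reaches the total,
    // which is the point at which the manager should wake everyone up.
    public bool RegisterWaiting()
    {
        return Interlocked.Increment(ref _waitingAgents) == _totalAgents;
    }
}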
I have a queue of tasks for the ThreadPool, and each task has a tendency to freeze, locking up all the resources it is using. These can't be released unless the service is restarted.
Is there a way for the ThreadPool to know that one of its threads is frozen? I have the idea of using a timeout (though I still don't know how to write it), but I think it's not safe because the length of time needed for processing is not uniform.
I don't want to be too presumptuous here, but a good dose of actually finding out what the problem is and fixing it is the best course with deadlocks.
Run a debug version of your service and wait until it deadlocks. It will stay deadlocked as this is a wonderful property of deadlocks.
Attach the Visual Studio debugger to the service.
"Break All".
Bring up your threads windows, and start spelunking...
Unless you have a sound architecture/design/reason to choose victims in the first place, don't do it - period. It's pretty much a recipe for disaster to arbitrarily bash threads over the head when they're in the middle of something.
(This is perhaps a bit low-level, but at least it is a simple solution. As I don't know C#'s API, this is a general solution for any language that uses thread pools.)
Insert a watchdog task after each real task that updates a time value with the current time. If the time since the last update is larger than your maximum task run time (say 10 seconds), you know that something is stuck.
Instead of setting a time and polling it, you could continuously set and reset timers 10 seconds into the future. When one triggers, a task has hung.
The best way is probably to wrap each task in a "Watchdog" task class that does this automatically. That way, the timer is cleared upon completion, and you could also set a per-task timeout, which might be useful.
You obviously need one time/timer object for each thread in the thread pool, but that's solvable via thread-local variables.
Note that this solution does not require you to modify your tasks' code. It only modifies the code putting tasks into the pool.
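A rough C# sketch of that scheme (all names are invented, and per-worker slots are used here instead of thread-local variables):

using System;
using System.Threading;

public class WatchdogQueue
{
    private readonly TimeSpan _maxRunTime;
    private readonly long[] _lastStartTicks;   // last "heartbeat" per worker slot
    private readonly Timer _monitor;

    public WatchdogQueue(int workerCount, TimeSpan maxRunTime)
    {
        _maxRunTime = maxRunTime;
        _lastStartTicks = new long[workerCount];

        // Poll every few seconds and report any worker that has been busy too long.
        _monitor = new Timer(_ => CheckForHungWorkers(), null,
                             TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
    }

    // Wrap each real task so its start time is recorded before it runs
    // and cleared when it finishes.
    public WaitCallback Wrap(int workerId, WaitCallback realTask)
    {
        return state =>
        {
            Interlocked.Exchange(ref _lastStartTicks[workerId], DateTime.UtcNow.Ticks);
            try { realTask(state); }
            finally { Interlocked.Exchange(ref _lastStartTicks[workerId], 0); }
        };
    }

    private void CheckForHungWorkers()
    {
        long now = DateTime.UtcNow.Ticks;
        for (int i = 0; i < _lastStartTicks.Length; i++)
        {
            long started = Interlocked.Read(ref _lastStartTicks[i]);
            if (started != 0 && now - started > _maxRunTime.Ticks)
            {
                Console.WriteLine("Worker {0} appears to be hung.", i);
            }
        }
    }
}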
One way is to use a watchdog timer (a solution usually done in hardware but applicable to software as well).
Have each thread set a thread-specific value to 1 at least once every five seconds (for example).
Then your watchdog timer wakes every ten seconds (again, this is an example figure only) and checks that all the values are 1. If any of them is not 1, then a thread has locked up.
The watchdog timer then sets them all to 0 and goes back to sleep for the next cycle.
Provided your worker threads are written in such a way that they can set the values in a timely manner under non-frozen conditions, this scheme will work fine.
The first thread that locks up will not set its value to 1, and this will be detected by the watchdog timer on the next cycle.
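A hedged sketch of that flag scheme (the five and ten second figures follow the example above; everything else is invented):

using System;
using System.Threading;

public class HeartbeatWatchdog
{
    private readonly int[] _alive;             // one slot per worker thread
    private readonly Timer _watchdog;

    public HeartbeatWatchdog(int workerCount)
    {
        _alive = new int[workerCount];
        // Wake every ten seconds and check that every worker has checked in.
        _watchdog = new Timer(Check, null, TimeSpan.FromSeconds(10), TimeSpan.FromSeconds(10));
    }

    // Workers call this at least once every five seconds while healthy.
    public void Heartbeat(int workerId)
    {
        Interlocked.Exchange(ref _alive[workerId], 1);
    }

    private void Check(object state)
    {
        for (int i = 0; i < _alive.Length; i++)
        {
            // Read the flag and reset it to 0 for the next cycle in one atomic step.
            if (Interlocked.Exchange(ref _alive[i], 0) == 0)
            {
                Console.WriteLine("Thread {0} appears to be locked up.", i);
            }
        }
    }
}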
However, a better solution is to find out why the threads are freezing in the first place and fix that.