ManualResetEvent vs. Thread.Sleep - C#

I implemented the following background processing thread, where Jobs is a Queue<T>:
static void WorkThread()
{
    while (working)
    {
        object job = null;
        lock (Jobs)
        {
            if (Jobs.Count > 0)
                job = Jobs.Dequeue();
        }
        if (job == null)
        {
            Thread.Sleep(1);
        }
        else
        {
            // [snip]: Process job.
        }
    }
}
This produced a noticeable delay between when jobs were entered and when they actually started to run (batches of jobs are entered at once, and each job is only [relatively] small). The delay wasn't a huge deal, but I got to thinking about the problem and made the following change:
static ManualResetEvent _workerWait = new ManualResetEvent(false);
// ...
if (job == null)
{
    lock (_workerWait)
    {
        _workerWait.Reset();
    }
    _workerWait.WaitOne();
}
Where the thread adding jobs now locks _workerWait and calls _workerWait.Set() when it's done adding jobs. This solution (seemingly) starts processing jobs instantly, and the delay is gone altogether.
My question is partly "Why does this happen?", given that Thread.Sleep(int) can very well sleep for longer than you specify, and partly "How does the ManualResetEvent achieve this level of performance?".
EDIT: Since someone asked about the function that's queueing items, here it is, along with the full system as it stands at the moment.
public void RunTriggers(string data)
{
    lock (this.SyncRoot)
    {
        this.Triggers.Sort((a, b) => { return a.Priority - b.Priority; });
        foreach (Trigger trigger in this.Triggers)
        {
            lock (Jobs)
            {
                Jobs.Enqueue(new TriggerData(this, trigger, data));
                _workerWait.Set();
            }
        }
    }
}
private static ManualResetEvent _workerWait = new ManualResetEvent(false);

static void WorkThread()
{
    while (working)
    {
        TriggerData job = null;
        lock (Jobs)
        {
            if (Jobs.Count > 0)
                job = Jobs.Dequeue();
            if (job == null)
            {
                _workerWait.Reset();
            }
        }
        if (job == null)
            _workerWait.WaitOne();
        else
        {
            try
            {
                foreach (Match m in job.Trigger.Regex.Matches(job.Data))
                    job.Trigger.Value.Action(job.World, m);
            }
            catch (Exception ex)
            {
                job.World.SendLineToClient("\r\n\x1B[32m -- {0} in trigger ({1}): {2}\x1B[m",
                    ex.GetType().ToString(), job.Trigger.Name, ex.Message);
            }
        }
    }
}

Events are kernel primitives provided by the OS/kernel, designed for just this sort of thing. The kernel provides a boundary across which you can guarantee atomic operations, which is important for synchronization (some atomicity can be achieved in user space too, with hardware support).
In short, when a thread waits on an event, it is put on a waiting list for that event and marked as non-runnable.
When the event is signaled, the kernel wakes up the threads in the waiting list and marks them as runnable, and they can continue to run. It's naturally a huge benefit that a thread can wake up immediately when the event is signaled, versus sleeping for a long time and rechecking the condition every now and then.
Even one millisecond is a really, really long time: you could have processed thousands of events in that time. Also, the timer resolution is traditionally 10ms, so sleeping for less than 10ms usually just results in a 10ms sleep anyway. With an event, a thread can be woken up and scheduled immediately.
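To make the difference concrete, here is a minimal, self-contained sketch (my illustration, not from the answer above) that measures how quickly a thread blocked on a ManualResetEvent resumes after Set() is called; the exact figure depends on the OS scheduler:
using System;
using System.Diagnostics;
using System.Threading;

class WakeupDemo
{
    static readonly ManualResetEvent signal = new ManualResetEvent(false);

    static void Main()
    {
        var sw = new Stopwatch();
        var waiter = new Thread(() =>
        {
            signal.WaitOne(); // blocks in the kernel; no polling
            Console.WriteLine("Woke after {0:F3} ms", sw.Elapsed.TotalMilliseconds);
        });
        waiter.Start();
        Thread.Sleep(100);    // give the waiter time to reach WaitOne
        sw.Start();
        signal.Set();         // the waiter becomes runnable immediately
        waiter.Join();
    }
}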

First, locking on _workerWait is pointless. An event is a system (kernel) object designed for signaling between threads (and heavily used in the Win32 API for asynchronous operations), so it is quite safe for multiple threads to set or reset it without additional synchronization.
As to your main question, we'd need to see the logic that places things on the queue, as well as some information on how much work is done for each job (does the worker thread spend more time processing work or waiting for work?).
Likely the best solution would be to use an object instance to lock on, and use Monitor.Pulse and Monitor.Wait as a condition variable.
Edit: Having seen the enqueue code, it appears that answer #1116297 has it right: a 1ms delay is too long to wait, given that many of the work items will be extremely quick to process.
The approach of having a mechanism to wake up the worker thread is correct (there is no .NET concurrent queue with a blocking dequeue operation). However, rather than using an event, a condition variable is going to be a little more efficient (in non-contended cases it does not require a kernel transition):
private readonly object sync = new Object();
private readonly Queue<TriggerData> queue = new Queue<TriggerData>();

public void EnqueueTriggers(IEnumerable<TriggerData> triggers)
{
    lock (sync)
    {
        foreach (var t in triggers)
        {
            queue.Enqueue(t);
        }
        Monitor.Pulse(sync); // Use PulseAll if there are multiple worker threads
    }
}

void WorkerThread()
{
    while (!exit)
    {
        TriggerData job = DequeueTrigger();
        // Do work
    }
}

private TriggerData DequeueTrigger()
{
    lock (sync)
    {
        if (queue.Count > 0)
        {
            return queue.Dequeue();
        }
        while (queue.Count == 0)
        {
            Monitor.Wait(sync);
        }
        return queue.Dequeue();
    }
}
Monitor.Wait releases the lock on its argument, waits until Pulse() or PulseAll() is called on that object, then re-acquires the lock and returns. You need to re-check the wait condition, because another thread could have taken the item off the queue.

Better approach to concurrently "do or wait and skip"

I wonder whether there is a better solution for this task. I have a function that is called concurrently by some number of threads, but if a thread is already executing the code, the other threads should skip that part of the code and wait until that thread finishes execution. Here is what I have for now:
int _flag = 0;
readonly ManualResetEventSlim Mre = new ManualResetEventSlim();

void Foo()
{
    if (Interlocked.CompareExchange(ref _flag, 1, 0) == 0)
    {
        Mre.Reset();
        try
        {
            // do stuff
        }
        finally
        {
            Mre.Set();
            Interlocked.Exchange(ref _flag, 0);
        }
    }
    else
    {
        Mre.Wait();
    }
}
What I want to achieve is faster execution, lower overhead and prettier look.
You could use a combination of an AutoResetEvent and a Barrier to do this.
You can use the AutoResetEvent to ensure that only one thread enters a "work" method.
The Barrier is used to ensure that all the threads wait until the one that entered the "work" method has returned from it.
Here's some sample code:
using System;
using System.Threading;
using System.Threading.Tasks;

namespace Demo
{
    class Program
    {
        const int TASK_COUNT = 3;
        static readonly Barrier barrier = new Barrier(TASK_COUNT);
        static readonly AutoResetEvent gate = new AutoResetEvent(true);

        static void Main()
        {
            Parallel.Invoke(task, task, task);
        }

        static void task()
        {
            while (true)
            {
                Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is waiting at the gate.");
                // This bool is just for test purposes to prevent the same thread from doing the
                // work every time!
                bool didWork = false;
                if (gate.WaitOne(0))
                {
                    work();
                    didWork = true;
                    gate.Set();
                }
                Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is waiting at the barrier.");
                barrier.SignalAndWait();
                if (didWork)
                    Thread.Sleep(10); // Give a different thread a chance to get past the gate!
            }
        }

        static void work()
        {
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is entering work()");
            Thread.Sleep(3000);
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is leaving work()");
        }
    }
}
However, it may well be that the Task Parallel Library has a better, higher-level solution. It's worth reading up on it a bit.
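For example, here is one hedged sketch of how the TPL could express "one caller does the work, the others wait for that same run" (illustrative only; the field names and the Task.Run approach are my assumptions, not part of the answer above):
private static Task currentRun;
private static readonly object taskLock = new object();

static void Foo()
{
    Task run;
    lock (taskLock)
    {
        // Start a new run only if no run is currently in flight.
        if (currentRun == null || currentRun.IsCompleted)
            currentRun = Task.Run(() => DoStuff());
        run = currentRun;
    }
    run.Wait(); // every caller waits for the same in-flight execution
}

static void DoStuff()
{
    // do stuff
}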
First of all, the waiting threads don't do anything; they only wait, and after they get the signal from the event they simply move out of the method, so you should add a while loop. After that, you can use an AutoResetEvent instead of the manual one, as @MatthewWatson suggested. Also, you might consider SpinWait inside the loop, which is a lightweight solution.
Second, why use an int, when the flag field is boolean by nature?
Third, why not use simple locking, as @grrrrrrrrrrrrr suggested? That is exactly what you are doing here: forcing other threads to wait for one. If only one thread at a time should write something, but multiple threads can read, you can use the ReaderWriterLockSlim object for such synchronization.
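Here is a minimal sketch of that ReaderWriterLockSlim suggestion (the field and method names are illustrative, not from the answer above):
private static readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
private static string sharedValue = "";

static string Read()
{
    rwLock.EnterReadLock(); // many threads may hold the read lock at once
    try { return sharedValue; }
    finally { rwLock.ExitReadLock(); }
}

static void Write(string value)
{
    rwLock.EnterWriteLock(); // writers get exclusive access
    try { sharedValue = value; }
    finally { rwLock.ExitWriteLock(); }
}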
What I want to achieve is faster execution, lower overhead and prettier look.
faster execution
Unless your "Do Stuff" is extremely fast, this code shouldn't have any major overhead.
lower overhead
Again, Interlocked.Exchange/CompareExchange are very low overhead, as is ManualResetEvent. If your "Do Stuff" is really fast, e.g. moving a linked-list head, then you can spin (see the example below).
prettier look
Correct multi-threaded C# code rarely looks pretty compared to correct single-threaded C# code. The language idioms are just not there yet.
That said: if you have a really fast operation ("a few tens of cycles"), then you can spin (although without knowing exactly what your code is doing, I can't say whether this is correct):
if (Interlocked.CompareExchange(ref _flag, 1, 0) == 0)
{
    try
    {
        // do stuff that is very quick.
    }
    finally
    {
        Interlocked.Exchange(ref _flag, 0);
    }
}
else
{
    SpinWait.SpinUntil(() => _flag == 0);
}
The first thing that springs to mind is to change it to use a lock. This won't skip the code, but will cause each thread that reaches it to pause while the first thread executes its stuff. This way, the lock also gets released automatically in the case of an exception.
object syncer = new object();

void Foo()
{
    lock (syncer)
    {
        // Do stuff
    }
}

Producer/Consumer Thread Pool w/ Main Thread Support - Infrequent Deadlock?

I have a C# thread pool class that is based heavily on the producer/consumer code from https://stackoverflow.com/a/1656662/782181. NOTE: I'm doing this instead of using BlockingCollection because I'm stuck with .NET 2.0!
I added a function to the class that can be called from the main thread to allow the main thread to do some work. My thinking here was that, at some point, the main thread waits for work to be done, but instead of waiting, I could also have the main thread do some of the work to speed things up.
Here's a slimmed version of the class to demonstrate:
public static class SGThreadPool
{
    // Shared object to lock access to the queue between threads.
    private static object locker = new object();

    // The various threads that are doing our work.
    private static List<Thread> workers = null;

    // A queue of tasks to be completed by the workers.
    private static Queue<object> taskQueue = new Queue<object>();
    private static Queue<WaitCallback> taskCallbacks = new Queue<WaitCallback>();

    // OMITTED: Init function (starts threads)

    // Enqueues a task for a thread to do.
    public static void EnqueueTask(WaitCallback callback, object context)
    {
        lock (locker)
        {
            taskQueue.Enqueue(context);
            taskCallbacks.Enqueue(callback);
            Monitor.PulseAll(locker); // Q: should I just use 'Pulse' here?
        }
    }

    // Can be called from the main thread to have it "help out" with tasks.
    public static bool PerformTask()
    {
        WaitCallback taskCallback = null;
        object task = null;
        lock (locker)
        {
            if (taskQueue.Count > 0)
            {
                task = taskQueue.Dequeue();
            }
            if (taskCallbacks.Count > 0)
            {
                taskCallback = taskCallbacks.Dequeue();
            }
        }
        // No task means no work, return false.
        if (task == null || taskCallback == null) { return false; }
        // Do the work!
        taskCallback(task);
        return true;
    }

    private static void Consume()
    {
        while (true)
        {
            WaitCallback taskCallback = null;
            object task = null;
            lock (locker)
            {
                // While no tasks are in the queue, wait.
                while (taskQueue.Count == 0)
                {
                    Monitor.Wait(locker);
                }
                // Get a task.
                task = taskQueue.Dequeue();
                taskCallback = taskCallbacks.Dequeue();
            }
            // A null task signals an exit.
            if (task == null || taskCallback == null) { return; }
            // Call the consume callback with the task as context.
            taskCallback(task);
        }
    }
}
Basically, I can enqueue a number of tasks to be performed by background threads. But it is also possible for the main thread to take a task and perform it by calling PerformTask().
I'm running into an infrequent problem where the main thread tries to take the lock in PerformTask(), but the lock is already held. The main thread waits, but for some reason the lock never becomes available.
Nothing in the code jumps out at me as the cause of the deadlock; I'm hoping someone else might be able to spot the problem. I've been looking at this for a couple of hours, and I'm not sure why the main thread would get stuck at the lock() call in PerformTask(). It seems like no other thread should be holding the lock indefinitely. Is it a bad idea to allow the main thread to interact with the pool in this way?
Hmm, so, while I would still like to know why the code above could deadlock in certain scenarios, I think I've found a workaround that will do the trick.
If the main thread is going to be doing work here, I want to make sure the main thread doesn't get blocked for a long period of time. After all, a general dev rule: don't block the main thread!
So, the solution I'm trying is to use Monitor.TryEnter directly, rather than using lock() for the main thread. This allows me to specify a timeout for how long the main thread is willing to wait for the lock.
public static bool PerformTask()
{
    WaitCallback taskCallback = null;
    object task = null;
    // Use TryEnter rather than "lock" because
    // it allows us to specify a timeout as a failsafe.
    if (Monitor.TryEnter(locker, 500))
    {
        try
        {
            // Pull a task from the queue.
            if (taskQueue.Count > 0)
            {
                task = taskQueue.Dequeue();
            }
            if (taskCallbacks.Count > 0)
            {
                taskCallback = taskCallbacks.Dequeue();
            }
        }
        finally
        {
            Monitor.Exit(locker);
        }
    }
    // No task means no work, return false.
    if (task == null || taskCallback == null) { return false; }
    // Do the work!
    taskCallback(task);
    return true;
}
In this code, the thread will wait up to 500ms to acquire the lock. If it can't, for whatever reason, it fails to do any tasks, but at least it doesn't get stuck. It seems like a good idea not to put the main thread in a position where it could wait indefinitely.
I believe that when you use lock(), the compiler generates similar code anyway, so I don't think there is any performance issue with this solution.
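For reference, a lock (locker) { ... } block expands to roughly the following Monitor pattern (the C# 4+ shape; earlier compilers emitted a slightly simpler form):
bool lockTaken = false;
try
{
    Monitor.Enter(locker, ref lockTaken);
    // ... protected code ...
}
finally
{
    if (lockTaken) Monitor.Exit(locker);
}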

Monitor.Enter vs Monitor.Wait

I'm still unsure about the differences between these two calls. From MSDN:
Monitor.Enter(Object) Acquires an exclusive lock on the specified object.
Monitor.Wait(Object) Releases the lock on an object and blocks the current thread until it reacquires the lock.
From that I assume that Monitor.Wait is the same as Monitor.Enter except that it releases the lock on the object first before reacquiring.
Does the current thread have to have the lock in the first place? How could a different thread force a release on a lock of an object? Why would the same thread want to reacquire a lock?
According to MSDN: Monitor.Wait Method(Object)
SynchronizationLockException: The calling thread does not own the lock for the specified object.
In other words: You can only call Monitor.Wait(Object), when you already own the lock, whereas you call Monitor.Enter(Object) in order to acquire the lock.
As for why Monitor.Wait is needed: if your thread realizes that it lacks the information it needs to continue execution (e.g. it's waiting for a signal), you might want to let other threads enter the critical section, because not all threads have the same prerequisites.
For the waiting thread to continue execution, you will need to call Monitor.Pulse(Object) or Monitor.PulseAll(Object) before releasing the lock (otherwise you're going to get the same kind of exception as with Monitor.Wait(Object)).
Keep in mind that the next thread to acquire the lock after a pulse, once the lock is released, is not necessarily the thread that received the pulse.
Also keep in mind that receiving a pulse is not equivalent to having your condition met; you might still need to wait a little longer:
// make sure to synchronize this correctly ;)
while (ConditionNotMet)
{
    Monitor.Wait(mutex);
    if (ConditionNotMet) // We woke up, but our condition is still not met
        Monitor.Pulse(mutex); // Perhaps another waiting thread wants to wake up?
}
Consider this example:
public class EnterExitExample
{
    private object myLock = new object();
    private bool running;

    private void ThreadProc1()
    {
        while (running)
        {
            lock (myLock)
            {
                // Do stuff here...
            }
            Thread.Yield();
        }
    }

    private void ThreadProc2()
    {
        while (running)
        {
            lock (myLock)
            {
                // Do other stuff here...
            }
            Thread.Yield();
        }
    }
}
Now you have two threads, each waiting for the lock, doing its stuff, then releasing the lock. The lock (myLock) syntax is just sugar for Monitor.Enter(myLock) and Monitor.Exit(myLock) wrapped in a try/finally.
Let us now look at a more complicated example, where Wait and Pulse come into play.
public class PulseWaitExample
{
    private Queue<object> queue = new Queue<object>();
    private bool running;

    private void ProducerThreadProc()
    {
        while (running)
        {
            object produced = ...; // Do production stuff here.
            lock (queue)
            {
                queue.Enqueue(produced);
                Monitor.Pulse(queue);
            }
        }
    }

    private void ConsumerThreadProc()
    {
        while (running)
        {
            object toBeConsumed;
            lock (queue)
            {
                Monitor.Wait(queue);
                toBeConsumed = queue.Dequeue();
            }
            // Do consuming stuff with toBeConsumed here.
        }
    }
}
What do we have here?
The producer produces an object whenever it feels like it. As soon as it has one, it takes the lock on the queue, enqueues the object, then makes a Pulse call.
At the same time, the consumer does NOT hold the lock; it gave it up by calling Wait. As soon as it gets a Pulse on that object, it re-takes the lock and does its consuming work.
So what you have here is a direct thread-to-thread notification that there is something for the consumer to do. Without that, all you could do is have the consumer keep polling the collection to see whether there is anything to do yet. Using Wait, you can make sure there is.
As Cristi mentioned, naive wait/pulse code does not work, because it completely misses the crucial point here: the monitor is NOT a message queue. If you pulse and no one is waiting, the pulse is LOST.
The right philosophy is that you are waiting for a condition, and if the condition is not satisfied, there is a way to wait for it without eating CPU and without holding the lock. Here, the condition for the consumer is that there is something in the queue.
See https://ideone.com/tWqTS1, which works (a fork of Cristi's example).
public class PulseWaitExample
{
    private Queue<object> queue = new Queue<object>();
    private bool running;

    private void ProducerThreadProc()
    {
        while (running)
        {
            object produced = ...; // Do production stuff here.
            lock (queue)
            {
                queue.Enqueue(produced);
                Monitor.Pulse(queue);
            }
        }
    }

    private void ConsumerThreadProc()
    {
        while (running)
        {
            object toBeConsumed;
            lock (queue)
            {
                // Here is the fix: only wait if the queue is empty.
                // (With multiple consumers, use a while loop instead of if.)
                if (queue.Count == 0)
                {
                    Monitor.Wait(queue);
                }
                toBeConsumed = queue.Dequeue();
            }
            // Do consuming stuff with toBeConsumed here.
        }
    }
}

Scheduling a method to run rather than lock in C#

I have a method (let's call it "CheckAll") that is called from multiple areas of my program, and can therefore be called for a 2nd time before the 1st time has completed.
To get around this I have implemented a "lock" that (if I understand it correctly) halts the 2nd thread until the 1st thread has completed.
However what I really want is for this 2nd call to return to the calling method immediately (rather than halt the thread), and to schedule CheckAll to be run again once it has completed the 1st time.
I could set up a timer to do this, but that seems cumbersome and difficult. Is there a better way?
Easy/cheap implementation.
private Thread checkThread = null;
private int requests = 0;

void CheckAll()
{
    lock (SyncRoot)
    {
        if (checkThread != null && checkThread.ThreadState == ThreadState.Running)
        {
            requests++;
            return;
        }
        else
        {
            CheckAllImpl();
        }
    }
}

void CheckAllImpl()
{
    // Start a new thread and run the following code in it.
    checkThread = new Thread(new ThreadStart(() =>
    {
        while (true)
        {
            // 1. Do whatever CheckAll needs to do.
            // 2. Then see whether more requests arrived while we were working.
            lock (SyncRoot)
            {
                requests--;
                if (!(requests > 0))
                    break;
            }
        }
    }));
    checkThread.Start();
}
Just as a side note, this can have some race conditions. A better implementation would be to use ConcurrentQueue, introduced in .NET 4, which handles all the threading craziness for you.
UPDATE: Here's a more "cool" implementation using ConcurrentQueue (it turns out we don't need the TPL).
public class CheckAllService
{
    // Make sure you don't create multiple
    // instances of this class. Make it a singleton.

    // Holds all the pending requests.
    private ConcurrentQueue<object> requests = new ConcurrentQueue<object>();
    private object syncLock = new object();
    private Thread checkAllThread;

    /// <summary>
    /// Requests to Check All. This request is async,
    /// and will be serviced when all pending requests
    /// are serviced (if any).
    /// </summary>
    public void RequestCheckAll()
    {
        requests.Enqueue("Process this Scotty...");
        lock (syncLock)
        {
            // The lock is to make sure we don't create multiple threads.
            if (checkAllThread == null ||
                checkAllThread.ThreadState != ThreadState.Running)
            {
                checkAllThread = new Thread(new ThreadStart(ListenAndProcessRequests));
                checkAllThread.Start();
            }
        }
    }

    private void ListenAndProcessRequests()
    {
        while (requests.Count != 0)
        {
            object thisRequestData;
            if (!requests.TryDequeue(out thisRequestData))
                continue;
            try
            {
                CheckAllImpl();
            }
            catch (Exception ex)
            {
                // TODO: Log error?
                // Can't afford to fail.
                // Failing the thread would cause all
                // waiting requests to be delayed until another
                // request comes in.
            }
        }
    }

    protected void CheckAllImpl()
    {
        throw new NotImplementedException("Check all is not gonna write itself...");
        // TODO: Check All
    }
}
NOTE: I use a real Thread instead of a TPL Task because, as an optimization, a Task doesn't hold on to a real dedicated thread; it runs on background thread-pool threads. That means that when your application closes, any waiting CheckAll requests are ignored. (I got bitten hard by this when I thought I was smart enough to call my logging methods in a task once; it dropped a couple of dozen log records on shutdown. The CLR waits for foreground threads to finish when exiting gracefully, but not for background ones.)
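To illustrate the foreground/background distinction (my addition, not part of the original answer):
var t = new Thread(ListenAndProcessRequests);
// IsBackground is false by default, so the CLR will wait for this
// thread before the process exits; thread-pool (and therefore Task)
// threads are background threads and are abandoned at exit.
t.IsBackground = false;
t.Start();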
Happy Coding...
Use a separate thread to call CheckAll() in a loop that also waits on a semaphore. A 'PerformCheck()' method signals the semaphore.
Your system can then make as many calls to 'PerformCheck()' as it might wish, from any thread, and CheckAll() will be run exactly as many times as there are PerformCheck() calls, but with no blocking on PerformCheck().
No flags, no limits, no locking, no polling.
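A minimal sketch of that approach (the names PerformCheck/CheckLoop and the use of a counted Semaphore are assumptions on my part):
private static readonly Semaphore checkSignal = new Semaphore(0, int.MaxValue);

// Called from anywhere; never blocks.
public static void PerformCheck()
{
    checkSignal.Release();
}

// Run this on a dedicated thread.
private static void CheckLoop()
{
    while (true)
    {
        checkSignal.WaitOne(); // sleeps until PerformCheck() is called
        CheckAll();            // runs exactly once per PerformCheck() call
    }
}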
You can set up a flag for this.
When the CheckAll() method runs, assign a flag at the end of it for each of the calling methods. Say the method is called from a(), and immediately afterwards it is going to be called from b(): when it is called from a(), set a flaga variable (which may be global) to a particular value at the end of CheckAll(), and put a condition on that flaga value in b(). Something like this:
public void a()
{
    CheckAll();
}

public void b()
{
    .
    .
    // (put a condition here to check when flaga == 1 from the method CheckAll())
    CheckAll();
}

public void CheckAll()
{
    .
    .
    .
    flaga = 1;
}

Threading only block the first thread (Attempt Two)

I have asked this question before, but I have spent some time thinking about it and have implemented a working version.
Overview
1) Threads are being created to perform a certain task.
2) Only one thread can perform the task at a time.
3) Each thread performs the exact same task. (It does a bunch of checks and validations on a system.)
4) The threads are being created faster than the task can be performed. (I have no control over the thread creation.)
The result is that over time I get a backlog of threads waiting to perform the task.
What I have implemented goes as follows:
1) A thread checks to see how many active threads there are.
2) If there are 0 threads, it is marked to perform the task and it starts the task.
3) If there is 1 thread, it is marked to perform the task and it blocks.
4) If there is more than 1 thread, the thread is not marked to perform the task and just dies.
The idea is that if there is already a thread waiting to perform the task, I just kill the new thread.
Here is the code that I came up with
bool tvPerformTask = false;

ivNumberOfProcessesSemaphore.WaitOne();
if (ivNumberOfProcessesWaiting == 0 ||
    ivNumberOfProcessesWaiting == 1)
{
    ivNumberOfProcessesWaiting++;
    tvPerformTask = true;
}
ivNumberOfProcessesSemaphore.Release();

if (tvPerformTask)
{
    // Here we perform the work
    ivProcessSemaphore.WaitOne();
    // Thread safe
    ivProcessSemaphore.Release();

    ivNumberOfProcessesSemaphore.WaitOne();
    ivNumberOfProcessesWaiting--;
    ivNumberOfProcessesSemaphore.Release();
}
else
{
    // We just let the thread die
}
The problem I have is not that it doesn't work; it's just that I don't find the code elegant. Specifically, I'm not happy that I need two semaphores, an integer, and a local flag to control it all. Is there a way to implement this, or a pattern, that would make the code simpler?
How about this?
private readonly object _lock = new object();
private readonly Semaphore _semaphore = new Semaphore(2, 2);

private void DoWork()
{
    // Let at most two threads past this point: one doing the work,
    // and one waiting at the lock. Any further threads just die.
    if (_semaphore.WaitOne(0))
    {
        try
        {
            lock (_lock)
            {
                // ...
            }
        }
        finally
        {
            _semaphore.Release();
        }
    }
}
Consider using a ThreadPool instead of trying to managing the creation and destruction of individual threads on your own.
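A hedged sketch of what that might look like (illustrative only; note that QueueUserWorkItem alone doesn't reproduce the "skip if one is already waiting" behaviour, it just hands thread management to the runtime):
static void SubmitCheck()
{
    // Let the runtime manage worker threads instead of creating
    // and destroying your own.
    ThreadPool.QueueUserWorkItem(_ =>
    {
        // Do the checks and validations here.
    });
}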
