Monitor.Enter vs Monitor.Wait in C#

I'm still unsure about the difference between these two calls. From MSDN:
Monitor.Enter(Object): Acquires an exclusive lock on the specified object.
Monitor.Wait(Object): Releases the lock on an object and blocks the current thread until it reacquires the lock.
From that I assume that Monitor.Wait is the same as Monitor.Enter, except that it releases the lock on the object before reacquiring it.
Does the current thread have to hold the lock in the first place? How could a different thread force the release of a lock on an object? And why would the same thread want to reacquire a lock?

According to MSDN: Monitor.Wait Method (Object)
SynchronizationLockException: The calling thread does not own the lock for the specified object.
In other words: you can only call Monitor.Wait(Object) when you already own the lock, whereas you call Monitor.Enter(Object) in order to acquire the lock.
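A minimal illustration (using a hypothetical gate object, not code from the question):
var gate = new object();
Monitor.Wait(gate); // throws SynchronizationLockException: we don't own the lock

lock (gate)
{
    Monitor.Wait(gate); // OK: releases the lock and blocks until pulsed
}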
As for why Monitor.Wait is needed: if your thread realizes that it lacks the information to continue execution (e.g. it's waiting for a signal), you might want to let other threads enter the critical section, because not all threads have the same prerequisites.
For the waiting thread to continue execution, another thread must call Monitor.Pulse(Object) or Monitor.PulseAll(Object) before releasing the lock (otherwise you get the same kind of exception as with Monitor.Wait(Object)).
Keep in mind that the next thread to acquire the lock after a pulse, and after the lock was released, is not necessarily the thread that received the pulse.
Also keep in mind that receiving a pulse is not equivalent to having your condition met. You might still need to wait just a little longer:
// make sure to synchronize this correctly ;)
while (ConditionNotMet)
{
    Monitor.Wait(mutex);
    if (ConditionNotMet)      // We woke up, but our condition is still not met
        Monitor.Pulse(mutex); // Perhaps another waiting thread wants to wake up?
}

Consider this example:
public class EnterExitExample
{
    private readonly object myLock = new object(); // must be initialized; lock(null) throws
    private bool running = true;

    private void ThreadProc1()
    {
        while (running)
        {
            lock (myLock)
            {
                // Do stuff here...
            }
            Thread.Yield();
        }
    }

    private void ThreadProc2()
    {
        while (running)
        {
            lock (myLock)
            {
                // Do other stuff here...
            }
            Thread.Yield();
        }
    }
}
Now you have two threads, each waiting for the lock, then doing its stuff, then releasing the lock. The lock (myLock) syntax is just sugar for Monitor.Enter(myLock) and Monitor.Exit(myLock) in a try/finally.
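For reference, the compiler expands a lock block into roughly this pattern:
Monitor.Enter(myLock);
try
{
    // Do stuff here...
}
finally
{
    Monitor.Exit(myLock);
}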
Let us now look at a more complicated example, where Wait and Pulse come into play.
public class PulseWaitExample
{
    private readonly Queue<object> queue = new Queue<object>();
    private bool running = true;

    private void ProducerThreadProc()
    {
        while (running)
        {
            object produced = ...; // Do production stuff here.
            lock (queue)
            {
                queue.Enqueue(produced);
                Monitor.Pulse(queue);
            }
        }
    }

    private void ConsumerThreadProc()
    {
        while (running)
        {
            object toBeConsumed;
            lock (queue)
            {
                Monitor.Wait(queue);
                toBeConsumed = queue.Dequeue();
            }
            // Do consuming stuff with toBeConsumed here.
        }
    }
}
What do we have here?
The producer produces an object whenever he feels like it. As soon as he has one, he obtains the lock on the queue, enqueues the object, then makes a Pulse call.
At the same time, the consumer does NOT hold the lock; he gave it up by calling Wait. As soon as he gets a Pulse on that object, he re-acquires the lock and does his consuming stuff.
So what you have here is a direct thread-to-thread notification that there is something to do for the consumer. Without that, all you could do is have the consumer keep polling the collection to see if there is something to do yet. Using Wait, you can make sure that there is.

As Cristi mentioned, naive wait/pulse code does not work, because you are completely missing the crucial point here: the monitor is NOT a message queue. If you pulse and no one is waiting, the pulse is LOST.
The right philosophy is that you are waiting for a condition, and if the condition is not satisfied, there is a way to wait for it without eating CPU and without holding the lock. Here, the condition for the consumer is that there is something in the queue.
See https://ideone.com/tWqTS1, which works (a fork of Cristi's example).
public class PulseWaitExample
{
    private readonly Queue<object> queue = new Queue<object>();
    private bool running = true;

    private void ProducerThreadProc()
    {
        while (running)
        {
            object produced = ...; // Do production stuff here.
            lock (queue)
            {
                queue.Enqueue(produced);
                Monitor.Pulse(queue);
            }
        }
    }

    private void ConsumerThreadProc()
    {
        while (running)
        {
            object toBeConsumed;
            lock (queue)
            {
                // here is the fix: only wait while there is nothing to consume
                // (a while loop, rather than an if, also guards against another
                // consumer emptying the queue before we reacquire the lock)
                while (queue.Count == 0)
                {
                    Monitor.Wait(queue);
                }
                toBeConsumed = queue.Dequeue();
            }
            // Do consuming stuff with toBeConsumed here.
        }
    }
}

Related

Wait for a lock to be released

I have a thread. At a certain point, what I want to do is check if a certain lock is free. If it is free, I want the thread to continue on its merry way. If it is not free, I want to wait until it is free, but then not actually acquire the lock.
Here is my code so far:
private object LockObject = new Object();

async void SlowMethod() {
    lock (LockObject) {
        // lotsa stuff
    }
}

void AppStartup() {
    this.SlowMethod();
    UIThreadStartupStuff();
    // now, I want to wait on the lock. I don't know where/how the results
    // of SlowMethod might be needed. But I do know that they will be needed.
    // And I don't want to check the lock every time.
}
I think you have a classic XY problem. I guess what you want is to start a Task with your SlowMethod and then continue it with UIThreadStartupStuff on the UI thread.
Task.Factory.StartNew(() => SlowMethod())
    .ContinueWith(t => UIThreadStartupStuff(), TaskScheduler.FromCurrentSynchronizationContext());
or with async/await (make your SlowMethod return Task):
try
{
    await SlowMethod();
}
catch (...)
{
}
UIThreadStartupStuff();
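For example, SlowMethod could be reshaped to return a Task (a sketch; the Task.Run wrapper is one way to keep the original synchronous body off the UI thread):
async Task SlowMethod() {
    await Task.Run(() => {
        lock (LockObject) {
            // lotsa stuff
        }
    });
}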
You don't want to use a lock here. You need an event: either a ManualResetEvent or an AutoResetEvent.
Remember, locks are used for mutual exclusion; events are used for signaling.
You have your SlowMethod set the event when it's done. For example:
private ManualResetEvent DoneEvent = new ManualResetEvent(false);

async void SlowMethod() {
    // lotsa stuff
    // done with lotsa stuff. Signal the event.
    DoneEvent.Set();
}

void AppStartup() {
    this.SlowMethod();
    UIThreadStartupStuff();
    // Wait for the SlowMethod to set the event:
    DoneEvent.WaitOne();
}
I might not get what you want to achieve, but why not wait on the lock "properly"?
After all, it is a clear sign of the lock being free if you can take it.
Also, you can release it immediately if it is important.
void AppStartup() {
    this.SlowMethod();
    UIThreadStartupStuff();
    // now, I want to wait on the lock. I don't know where/how the results
    // of SlowMethod might be needed. But I do know that they will be needed.
    // And I don't want to check the lock every time.
    lock (LockObject) {
        // nothing, you now know the lock was free
    }
    // continue...
}

Scheduling a method to run rather than lock in C#

I have a method (let's call it "CheckAll") that is called from multiple areas of my program, and can therefore be called for a 2nd time before the 1st time has completed.
To get around this I have implemented a "lock" that (if I understand it correctly) halts the 2nd thread until the 1st thread has completed.
However what I really want is for this 2nd call to return to the calling method immediately (rather than halt the thread), and to schedule CheckAll to be run again once it has completed the 1st time.
I could setup a timer to do this but that seems cumbersome and difficult. Is there a better way?
An easy/cheap implementation:
private readonly object SyncRoot = new object();
private Thread checkThread = null;
private int requests = 0;

void CheckAll()
{
    lock (SyncRoot)
    {
        if (checkThread != null && checkThread.ThreadState == ThreadState.Running)
        {
            requests++;
            return;
        }
        else
        {
            CheckAllImpl();
        }
    }
}

void CheckAllImpl()
{
    // start a new thread and run the following code in it.
    checkThread = new Thread(new ThreadStart(() =>
    {
        while (true)
        {
            // 1. Do whatever CheckAll needs to do.
            // 2. Then see whether more requests arrived meanwhile.
            lock (SyncRoot)
            {
                requests--;
                if (!(requests > 0))
                    break;
            }
        }
    }));
    checkThread.Start();
}
Just on a side note, this can have some race conditions. A better implementation would be to use the ConcurrentQueue introduced in .NET 4, which handles all the threading craziness for you.
UPDATE: Here's a more 'cool' implementation using ConcurrentQueue (turns out we don't need the TPL).
public class CheckAllService
{
    // Make sure you don't create multiple
    // instances of this class. Make it a singleton.

    // Holds all the pending requests
    private ConcurrentQueue<object> requests = new ConcurrentQueue<object>();
    private object syncLock = new object();
    private Thread checkAllThread;

    /// <summary>
    /// Requests to Check All. This request is async,
    /// and will be serviced when all pending requests
    /// are serviced (if any).
    /// </summary>
    public void RequestCheckAll()
    {
        requests.Enqueue("Process this Scotty...");
        lock (syncLock)
        {   // Lock is to make sure we don't create multiple threads.
            if (checkAllThread == null ||
                checkAllThread.ThreadState != ThreadState.Running)
            {
                checkAllThread = new Thread(new ThreadStart(ListenAndProcessRequests));
                checkAllThread.Start();
            }
        }
    }

    private void ListenAndProcessRequests()
    {
        while (requests.Count != 0)
        {
            object thisRequestData;
            requests.TryDequeue(out thisRequestData);
            try
            {
                CheckAllImpl();
            }
            catch (Exception ex)
            {
                // TODO: Log error?
                // Can't afford to fail.
                // Failing the thread will cause all
                // waiting requests to delay until another
                // request comes in.
            }
        }
    }

    protected void CheckAllImpl()
    {
        throw new NotImplementedException("Check all is not gonna write it-self...");
        // TODO: Check All
    }
}
NOTE: I use a real Thread instead of a TPL Task because Tasks run on background (thread-pool) threads, which do not keep the process alive. That means that when your application closes, any pending CheckAll requests are simply abandoned. (I got bitten hard by this once when I thought I was smart enough to call my logging methods in a Task; a couple of dozen log records were dropped on shutdown. The CLR waits for running foreground threads when exiting gracefully.)
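For illustration, a sketch of that foreground/background distinction (names are illustrative):
var worker = new Thread(DoWork);
worker.IsBackground = false; // foreground thread: the CLR waits for it before the process exits
worker.Start();
// By contrast, a Task runs on a thread-pool thread, which is a background
// thread, so its pending work can be abandoned when the application exits.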
Happy Coding...
Use a separate thread to call CheckAll() in a loop that also waits on a semaphore. A 'PerformCheck()' method signals the semaphore.
Your system can then make as many calls to 'PerformCheck()' as it might wish, from any thread, and CheckAll() will be run exactly as many times as there are PerformCheck() calls, but with no blocking on PerformCheck().
No flags, no limits, no locking, no polling.
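A minimal sketch of that approach, assuming a SemaphoreSlim and a dedicated thread (the class and member names are illustrative, not from the question):
using System.Threading;

class CheckScheduler
{
    // Counts outstanding PerformCheck() calls.
    private readonly SemaphoreSlim signal = new SemaphoreSlim(0);

    public CheckScheduler()
    {
        new Thread(CheckLoop) { IsBackground = true }.Start();
    }

    // Never blocks: each call just increments the semaphore count.
    public void PerformCheck()
    {
        signal.Release();
    }

    private void CheckLoop()
    {
        while (true)
        {
            signal.Wait(); // sleeps until at least one PerformCheck() is pending
            CheckAll();    // runs exactly once per PerformCheck() call
        }
    }

    private void CheckAll()
    {
        // ... the actual checking work ...
    }
}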
You can set up a flag for this.
When CheckAll() runs, assign a particular value to a flag variable (flaga, which may be global) at the end of the method. Then callers can test that flag: if CheckAll() is called from a(), and immediately afterwards it is going to be called from b(), then b() checks the flaga value before making its own call. Something like this:
public void a()
{
    CheckAll();
}

public void b()
{
    // ...
    // (put a condition here: only proceed when flaga == 1,
    // i.e. the run started from a() has finished in CheckAll())
    CheckAll();
}

public void CheckAll()
{
    // ...
    flaga = 1;
}

How to obtain a lock in two places but release it in one place?

I'm a newbie in C#. I need to obtain a lock in two methods, but release it in one method. Will that work?
public void obtainLock() {
    Monitor.Enter(lockObj);
}

public void obtainReleaseLock() {
    lock (lockObj) {
        doStuff
    }
}
In particular, can I call obtainLock and then obtainReleaseLock? Is "double locking" like this allowed in C#? These two methods are always called from the same thread; however, lockObj is used in another thread for synchronization.
Update: after all the comments, what do you think about this code? Is it ideal?
public void obtainLock() {
    if (needCallMonitorExit == false) {
        Monitor.Enter(lockObj);
        needCallMonitorExit = true;
    }
    // doStuff
}

public void obtainReleaseLock() {
    try {
        lock (lockObj) {
            // doAnotherStuff
        }
    } finally {
        if (needCallMonitorExit == true) {
            needCallMonitorExit = false;
            Monitor.Exit(lockObj);
        }
    }
}
Yes, locks are "re-entrant", so a call can "double-lock" (your phrase) the lockObj. However, note that it needs to be released exactly as many times as it is taken; you will need to ensure that there is a corresponding "ReleaseLock" to match "ObtainLock".
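For illustration, the hold count behaves like this (a sketch, not code from the question):
Monitor.Enter(lockObj);
Monitor.Enter(lockObj); // fine: the same thread re-enters; hold count is now 2
// ...
Monitor.Exit(lockObj);  // hold count back to 1; the lock is still held
Monitor.Exit(lockObj);  // hold count 0; the lock is actually released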
I do, however, suggest it is easier to let the caller lock(...) on some property you expose:
public object SyncLock { get { return lockObj; } }
now the caller can (instead of obtainLock()):
lock (something.SyncLock) {
    //...
}
much easier to get right. Because this is the same underlying lockObj that is used internally, this synchronizes against either usage, even if obtainReleaseLock (etc) is used inside code that locked against SyncLock.
With the context clearer (comments), it seems that maybe Wait and Pulse are the way to do this:
void SomeMethodThatMightNeedToWait() {
    lock (lockObj) {
        if (needSomethingSpecialToHappen) {
            Monitor.Wait(lockObj);
            // ^^^ this ***releases*** the lock (however many times needed), and
            // enters the pending-queue; when *another* thread "pulses", it
            // enters the ready-queue; when the lock is *available*, it
            // reacquires the lock (back to as many times as it held it
            // previously) and resumes work
        }
        // do some work, happy that something special happened, and
        // we have the lock
    }
}

void SomeMethodThatMightSignalSomethingSpecial() {
    lock (lockObj) {
        // do stuff
        Monitor.PulseAll(lockObj);
        // ^^^ this moves **all** items from the pending-queue to the ready-queue
        // note there is also Pulse(...) which moves a *single* item
    }
}
Note that when using Wait you might want to use the overload that accepts a timeout, to avoid waiting forever; note also that it is quite common to have to loop and re-validate, for example:
lock (lockObj) {
    while (needSomethingSpecialToHappen) {
        Monitor.Wait(lockObj);
        // at this point, we know we were pulsed, but maybe another waiting
        // thread beat us to it! re-check the condition, and continue; this might
        // also be a good place to check for some "abort" condition (and
        // remember to do a PulseAll() when aborting)
    }
    // do some work, happy that something special happened, and we have the lock
}
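For completeness, the timeout overload mentioned above might be used like this (a sketch; the timeout value is arbitrary):
bool pulsed = Monitor.Wait(lockObj, TimeSpan.FromSeconds(30));
// returns false if the timeout elapsed before the wait completed,
// so the caller can abort or re-check instead of waiting forever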
You would have to use the Monitor for this functionality. Note that you open yourself up to deadlocks and race conditions if you aren't careful with your locks; having them taken and released in separate areas of code can be risky:
Monitor.Exit(lockObj);
Only one owner can hold the lock at a given time; it is exclusive. While the locking can be chained, the more important part is making sure you obtain and release the lock the proper number of times, avoiding difficult-to-diagnose threading issues.
When you wrap your code in lock { ... }, you are essentially calling Monitor.Enter and Monitor.Exit as the scope is entered and departed.
When you explicitly call Monitor.Enter you are obtaining the lock, and at that point you would need to call Monitor.Exit to release it.
This doesn't work.
The code
lock (lockObj)
{
    // do stuff
}
is translated to something like
Monitor.Enter(lockObj);
try
{
    // do stuff
}
finally
{
    Monitor.Exit(lockObj);
}
That means that your code enters the lock twice but releases it only once. According to the documentation, the lock is only really released by the thread if Exit is called as often as Enter, which is not the case in your code.
Summary: your code will not deadlock on the call to obtainReleaseLock, but the lock on lockObj will never be released by the thread. You would need an explicit call to Monitor.Exit(lockObj), so that the number of calls to Monitor.Enter matches the number of calls to Monitor.Exit.

Can you signal and wait atomically with C# thread synchronization?

I'm having some issues with thread synchronization in C#. I have a shared object which gets manipulated by two threads; I've made access to the object mutually exclusive using lock(), but I also want to block each thread depending on the state of the shared object. Specifically: block thread A when the object is empty, block thread B when the object is full, and have the other thread signal the blocked thread when the object state changes.
I tried doing this with a ManualResetEvent, but ran into a race condition: thread B detects that the object is full, and before it reaches its WaitOne, thread A comes in and empties the object (signalling the MRE on every access, and blocking itself once the object is empty). Thread B then waits for the object to not be full, even though it no longer is.
I figure that if I could call a function like 'SignalAndWaitOne' that atomically signals before waiting, it would prevent that race condition?
Thanks!
A typical way to do this is to use Monitor.Enter, Monitor.Wait and Monitor.Pulse to control access to the shared queue. A sketch:
shared object sync = new object()
shared Queue q = new Queue()

Producer()
    Enter(sync)
    // This blocks until the lock is acquired
    while (true)
        while (q.IsFull)
            Wait(sync)
            // this releases the lock and blocks the thread
            // until the lock is acquired again
        // We have the lock and the queue is not full.
        q.Enqueue(something)
        Pulse(sync)
        // This puts the waiting consumer thread at the head of the list of
        // threads to be woken up when this thread releases the lock

Consumer()
    Enter(sync)
    // This blocks until the lock is acquired
    while (true)
        while (q.IsEmpty)
            Wait(sync)
            // this releases the lock and blocks the thread
            // until the lock is acquired again
        // We have the lock and the queue is not empty.
        q.Dequeue()
        Pulse(sync)
        // This puts the waiting producer thread at the head of the list of
        // threads to be woken up when this thread releases the lock
A BlockingCollection is already provided by .NET 4.0.
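For example, on .NET 4.0 the built-in type can be used directly (a sketch; the bounded capacity is optional):
using System.Collections.Concurrent;

var collection = new BlockingCollection<int>(boundedCapacity: 10);
// Producer thread: blocks while the collection is full.
collection.Add(42);
// Consumer thread: blocks while the collection is empty.
int item = collection.Take();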
If you're on an earlier version, then you can use the Monitor class directly.
EDIT: The following code is totally untested, and does not handle maxCount values that are small (<= 2). It also doesn't have any provisions for timeouts or cancellation:
public sealed class BlockingList<T>
{
    private readonly List<T> data;
    private readonly int maxCount;

    public BlockingList(int maxCount)
    {
        this.data = new List<T>();
        this.maxCount = maxCount;
    }

    public void Add(T item)
    {
        lock (data)
        {
            // Wait until the collection is not full.
            while (data.Count == maxCount)
                Monitor.Wait(data);
            // Add our item.
            data.Add(item);
            // If the collection is no longer empty, signal waiting threads.
            if (data.Count == 1)
                Monitor.PulseAll(data);
        }
    }

    public T Remove()
    {
        lock (data)
        {
            // Wait until the collection is not empty.
            while (data.Count == 0)
                Monitor.Wait(data);
            // Remove our item. (List<T>.RemoveAt returns void, so read first.)
            T ret = data[data.Count - 1];
            data.RemoveAt(data.Count - 1);
            // If the collection is no longer full, signal waiting threads.
            if (data.Count == maxCount - 1)
                Monitor.PulseAll(data);
            return ret;
        }
    }
}
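Usage might look like this (a sketch):
var list = new BlockingList<int>(10);
// Producer thread: blocks while the list already holds 10 items.
list.Add(42);
// Consumer thread: blocks while the list is empty.
int item = list.Remove();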

ManualResetEvent vs. Thread.Sleep

I implemented the following background processing thread, where Jobs is a Queue<T>:
static void WorkThread()
{
    while (working)
    {
        object job = null;
        lock (Jobs)
        {
            if (Jobs.Count > 0)
                job = Jobs.Dequeue();
        }
        if (job == null)
        {
            Thread.Sleep(1);
        }
        else
        {
            // [snip]: Process job.
        }
    }
}
This produced a noticeable delay between when the jobs were entered and when they actually started to run (batches of jobs are entered at once, and each job is only [relatively] small). The delay wasn't a huge deal, but I got to thinking about the problem, and made the following change:
static ManualResetEvent _workerWait = new ManualResetEvent(false);
// ...
if (job == null)
{
    lock (_workerWait)
    {
        _workerWait.Reset();
    }
    _workerWait.WaitOne();
}
Where the thread adding jobs now locks _workerWait and calls _workerWait.Set() when it's done adding jobs. This solution (seemingly) instantly starts processing jobs, and the delay is gone altogether.
My question is partly "Why does this happen?", granted that Thread.Sleep(int) can very well sleep for longer than you specify, and partly "How does the ManualResetEvent achieve this level of performance?".
EDIT: Since someone asked about the function that's queueing items, here it is, along with the full system as it stands at the moment.
public void RunTriggers(string data)
{
    lock (this.SyncRoot)
    {
        this.Triggers.Sort((a, b) => { return a.Priority - b.Priority; });
        foreach (Trigger trigger in this.Triggers)
        {
            lock (Jobs)
            {
                Jobs.Enqueue(new TriggerData(this, trigger, data));
                _workerWait.Set();
            }
        }
    }
}
static private ManualResetEvent _workerWait = new ManualResetEvent(false);

static void WorkThread()
{
    while (working)
    {
        TriggerData job = null;
        lock (Jobs)
        {
            if (Jobs.Count > 0)
                job = Jobs.Dequeue();
            if (job == null)
            {
                _workerWait.Reset();
            }
        }
        if (job == null)
            _workerWait.WaitOne();
        else
        {
            try
            {
                foreach (Match m in job.Trigger.Regex.Matches(job.Data))
                    job.Trigger.Value.Action(job.World, m);
            }
            catch (Exception ex)
            {
                job.World.SendLineToClient("\r\n\x1B[32m -- {0} in trigger ({1}): {2}\x1B[m",
                    ex.GetType().ToString(), job.Trigger.Name, ex.Message);
            }
        }
    }
}
Events are kernel primitives provided by the OS/kernel, designed for just this sort of thing. The kernel provides a boundary at which you can guarantee atomic operations, which is important for synchronization (some atomicity can be achieved in user space too, with hardware support).
In short, when a thread waits on an event, it is put on a waiting list for that event and marked as non-runnable.
When the event is signaled, the kernel wakes up the threads in the waiting list, marks them as runnable, and they can continue to run. It is naturally a huge benefit that a thread can wake up immediately when the event is signalled, versus sleeping for a long time and rechecking the condition every now and then.
Even one millisecond is a really, really long time; you could have processed thousands of events in that time. Also, the timer resolution is traditionally 10 ms, so sleeping for less than 10 ms usually just results in a 10 ms sleep anyway. With an event, a thread can be woken up and scheduled immediately.
First, locking on _workerWait is pointless; an event is a system (kernel) object designed for signaling between threads (and heavily used in the Win32 API for asynchronous operations). It is therefore quite safe for multiple threads to set or reset it without additional synchronization.
As for your main question, we'd need to see the logic for placing things on the queue as well, and some information on how much work is done for each job (does the worker thread spend more time processing work or waiting for work?).
Likely the best solution would be to use an object instance to lock on and use Monitor.Pulse and Monitor.Wait as a condition variable.
Edit: With a view of the enqueueing code, it appears that answer #1116297 has it right: a 1 ms delay is too long to wait, given that many of the work items will be extremely quick to process.
The approach of having a mechanism to wake up the worker thread is correct (as there is no .NET concurrent queue with a blocking dequeue operation). However, rather than using an event, a condition variable is going to be a little more efficient (in non-contended cases it does not require a kernel transition):
object sync = new Object();
var queue = new Queue<TriggerData>();

public void EnqueueTriggers(IEnumerable<TriggerData> triggers) {
    lock (sync) {
        foreach (var t in triggers) {
            queue.Enqueue(t);
        }
        Monitor.Pulse(sync); // Use PulseAll if there are multiple worker threads
    }
}

void WorkerThread() {
    while (!exit) {
        TriggerData job = DequeueTrigger();
        // Do work
    }
}

private TriggerData DequeueTrigger() {
    lock (sync) {
        if (queue.Count > 0) {
            return queue.Dequeue();
        }
        while (queue.Count == 0) {
            Monitor.Wait(sync);
        }
        return queue.Dequeue();
    }
}
Monitor.Wait will release the lock on the parameter, wait until Pulse() or PulseAll() is called against the lock, then re-enter the lock and return. You need to recheck the wait condition because some other thread could have taken the item off the queue.
