lock (syncObject) // some object used for locking
{
    Dispatcher.BeginInvoke(DispatcherPriority.Send, (SendOrPostCallback)delegate(object o)
    {
        DoSomething();
    }, null);
}
Does the lock remain acquired until the Dispatcher completes its execution, or is it released soon after DoSomething() has been handed off to the Dispatcher for execution?
The lock remains acquired until the code inside the lock {} block completes its execution.
In your case that means: until Dispatcher.BeginInvoke itself completes.
And since Dispatcher.BeginInvoke executes asynchronously, the lock gets released almost "immediately" - DoSomething() might start at a moment when the lock has already been released.
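To make the timing concrete, here is a minimal, self-contained console sketch. It uses Task.Run as a stand-in for Dispatcher.BeginInvoke (an assumption purely for illustration, since a real Dispatcher needs a WPF message loop); the point is only that the lock block exits before the queued delegate runs:
using System;
using System.Threading;
using System.Threading.Tasks;

class LockVsAsyncDispatch
{
    static readonly object Sync = new object();

    static void Main()
    {
        lock (Sync)
        {
            // Queue the work asynchronously; Task.Run stands in for
            // Dispatcher.BeginInvoke here, purely to illustrate the timing.
            Task.Run(() =>
            {
                // By the time this runs, the lock block below has usually been
                // exited, so a short TryEnter typically succeeds.
                bool taken = Monitor.TryEnter(Sync, TimeSpan.FromSeconds(1));
                Console.WriteLine("Queued delegate acquired the lock: " + taken);
                if (taken) Monitor.Exit(Sync);
            });

            Console.WriteLine("BeginInvoke-style call has already returned; leaving the lock block.");
        } // the lock is released here, quite possibly before the queued delegate even starts

        Thread.Sleep(2000); // keep the process alive long enough to see the delegate's output
    }
}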
Just to put the questions upfront (please no comments about the bad architecture or how to revise - suppose this is what it is):
How does the "lock" statement apply when using Invoke/BeginInvoke
Could the following code result in a deadlock?
Suppose I have the following BindingList that I need to update on the GUI Thread:
var AllItems = new BindingList<Item>();
I want to make sure that all updates to it are synchronized.
Suppose I have the following subroutine to do some calculations and then insert a new entry into the BindingList:
private void MyFunc() {
    lock (locker) {
        // ... do some calculations with AllItems
        AddToArray(new Item(pos.ItemNo));
        // ... update some other structures with the contents of AllItems
    }
}
And AddToArray looks like:
private void AddToArray(Item pitem)
{
    DoInGuiThread(() =>
    {
        lock (locker)
        {
            AllItems.Add(pitem);
        }
    });
}
And DoInGuiThread looks like:
private void DoInGuiThread(Action action) {
    if (InvokeRequired) {
        BeginInvoke(action);
    } else {
        action.Invoke();
    }
}
The lock is held until you leave the lock block. Your current code does not cause a deadlock, but it does not work correctly either.
Here is the sequence of events:
On a background thread you call MyFunc.
A lock is taken for the background thread for the object locker
The background thread will "do some calculations with AllItems"
The background thread calls AddToArray from MyFunc passing in pitem
The background thread calls DoInGuiThread from AddToArray
The background thread calls BeginInvoke from DoInGuiThread; this call does not block. From here on I will use A to signify the background thread and B to signify the UI thread; both of these are happening at the same time.
A) BeginInvoke returns from its call because it is non-blocking.
B) The UI hits lock (locker) and blocks because the lock is held by the background thread.
A) DoInGuiThread returns.
B) The UI is still locked up, waiting for the background thread to release the lock.
A) AddToArray returns.
B) The UI is still locked up, waiting for the background thread to release the lock.
A) The background thread will "update some other structures with the contents of AllItems" (note: pitem has not yet been added to AllItems)
B) The UI is still locked up, waiting for the background thread to release the lock.
A) The background thread releases the lock for the object locker
B) The UI thread takes the lock for the object locker
A) MyFunc returns.
B) pitem is added to AllItems
A) Whoever called MyFunc continues to run code
B) The UI thread releases the lock for the object locker
A) Whoever called MyFunc continues to run code
B) The UI thread returns to the message pump to process new messages and no longer appears to be "locked up" by the user.
Do you see the issue? AddToArray returns, but the object is not added to the array until the end of MyFunc, so your code after AddToArray will not have the item in the array.
The "usual" way to solve this is you use Invoke instead of BeginInvoke however that causes a deadlock to happen. This is the sequence of events, Steps up to 6 are the same and will be skipped.
The background thread calls Invoke from DoInGuiThread
A) Invoke waits for B to return to the message pump.
B) The UI hits lock (locker) and blocks because the lock is held by the background thread.
A) Invoke waits for B to return to the message pump.
B) The UI is still locked up, waiting for the background thread to release the lock.
A) Invoke waits for B to return to the message pump.
B) The UI is still locked up, waiting for the background thread to release the lock.
A) Invoke waits for B to return to the message pump.
B) The UI is still locked up, waiting for the background thread to release the lock.
(This repeats forever.)
There are two different ways this is likely to go down.
You're doing all of this on the GUI thread
You're starting this call chain on some other thread
Let's deal with the first case first.
In this case there won't be a problem. You take the lock in MyFunc, you call AddToArray which calls DoInGuiThread passing in a delegate. DoInGuiThread will notice that invoking is not required and call the delegate. The delegate, executing on the same thread that now holds the lock, is allowed to enter the lock again, before calling AllItems.Add.
So no problem here.
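For illustration, a minimal sketch of that re-entrancy behavior (same thread, same lock object, no GUI involved):
using System;
using System.Threading;

class LockReentrancy
{
    static readonly object Locker = new object();

    static void Main()
    {
        lock (Locker)
        {
            // The Monitor behind the lock statement tracks ownership, so the
            // thread that already holds the lock may enter it again freely.
            lock (Locker)
            {
                Console.WriteLine("Re-entered the lock on thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            }
        }
    }
}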
Now, the second case, you start this call chain on some other thread.
MyFunc starts by taking the lock, then calls AddToArray, which calls DoInGuiThread, passing the delegate. Since DoInGuiThread now detects that it needs to invoke, it calls BeginInvoke passing in the delegate.
This delegate is queued on the GUI thread by way of a message. Here's where things again diverge. Let's say the GUI thread is currently busy, so it won't be able to process messages for a short while (which in this context means "long enough to let the rest of this explanation unfold").
DoInGuiThread, having done its job, returns. The message is not yet processed. DoInGuiThread returns back to AddToArray, which now returns back to MyFunc, which releases the lock.
When the message is finally processed, nobody owns the lock, so the delegate being called is allowed to enter the lock.
Now, if the message ended up being processed before the other thread managed to return all the way out of the lock, the delegate now executing on the GUI thread would simply have to wait.
In other words, the GUI thread would block inside the delegate, waiting for the lock to be released so it could be entered by the code in the delegate.
Jeffrey Richter points out in his book 'CLR via C#' an example of a possible deadlock that I don't understand (page 702, bordered paragraph).
The example is a thread that runs a Task and calls Wait() on that Task. If the Task has not started yet, it is possible that the Wait() call does not block; instead, it runs the not-yet-started Task itself. If a lock is entered before the Wait() call and the Task also tries to enter this lock, this can result in a deadlock.
But the locks are entered on the same thread; should this end up in a deadlock scenario?
The following code produces the expected output.
class Program
{
    static object lockObj = new object();

    static void Main(string[] args)
    {
        Task.Run(() =>
        {
            Console.WriteLine("Program starts running on thread {0}",
                Thread.CurrentThread.ManagedThreadId);

            var taskToRun = new Task(() =>
            {
                lock (lockObj)
                {
                    for (int i = 0; i < 10; i++)
                        Console.WriteLine("{0} from Thread {1}",
                            i, Thread.CurrentThread.ManagedThreadId);
                }
            });

            taskToRun.Start();
            lock (lockObj)
            {
                taskToRun.Wait();
            }
        }).Wait();
    }
}
/* Console output
Program starts running on thread 3
0 from Thread 3
1 from Thread 3
2 from Thread 3
3 from Thread 3
4 from Thread 3
5 from Thread 3
6 from Thread 3
7 from Thread 3
8 from Thread 3
9 from Thread 3
*/
No deadlock occurred.
J. Richter wrote in his book "CLR via C#" 4th Edition on page 702:
When a thread calls the Wait method, the system checks if the Task that the thread is waiting for has started executing. If it has, then the thread calling Wait will block until the Task has completed running. But if the Task has not started executing yet, then the system may (depending on the TaskScheduler) execute the Task by using the thread that called Wait. If this happens, then the thread calling Wait does not block; it executes the Task and returns immediately. This is good in that no thread has blocked, thereby reducing resource usage (by not creating a thread to replace the blocked thread) while improving performance (no time is spent to create a thread and there is no context switching). But it can also be bad if, for example, the thread has taken a thread synchronization lock before calling Wait and then the Task tries to take the same lock, resulting in a deadlocked thread!
If I understand the paragraph correctly, the code above has to end in a deadlock!?
You're taking my usage of the word "lock" too literally. The C# "lock" statement (which my book discourages the use of), internally leverages Monitor.Enter/Exit. The Monitor lock is a lock that supports thread ownership & recursion. Therefore, a single thread can acquire this kind of lock multiple times successfully. But, if you use a different kind of lock, like a Semaphore(Slim), an AutoResetEvent(Slim) or a ReaderWriterLockSlim (without recursion), then when a single thread tries to acquire any of these locks multiple times, deadlock occurs.
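To illustrate the non-recursive case described above, here is a small sketch using SemaphoreSlim (one of the primitives named); the class and field names are placeholders, and a timed Wait is used so the sample prints False instead of hanging forever:
using System;
using System.Threading;

class NonRecursiveLockDemo
{
    // SemaphoreSlim(1, 1) behaves like a lock, but without thread ownership.
    static readonly SemaphoreSlim Sem = new SemaphoreSlim(1, 1);

    static void Main()
    {
        Sem.Wait(); // first acquisition succeeds

        // A second Wait on the same thread just sees a count of zero; there is
        // no ownership check to let the thread back in. A timed Wait is used
        // here so the sample prints False instead of deadlocking.
        bool reacquired = Sem.Wait(TimeSpan.FromMilliseconds(100));
        Console.WriteLine("Re-acquired by the same thread: " + reacquired); // False

        Sem.Release();
    }
}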
In this example, you're dealing with task inlining, a not-so-rare behavior of the TPL's default task scheduler. It results in the task being executed on the same thread which is already waiting for it with Task.Wait(), rather than on a random pool thread. In which case, there is no deadlock.
Change your code like below and you'll have a deadlock:
taskToRun.Start();
lock (lockObj)
{
    //taskToRun.Wait();
    ((IAsyncResult)taskToRun).AsyncWaitHandle.WaitOne();
}
Task inlining is nondeterministic; it may or may not happen. You should make no assumptions. Check Task.Wait and "Inlining" by Stephen Toub for more details.
Update: the lock does not affect the task inlining here. Your code still runs without deadlock if you move taskToRun.Start() inside the lock:
lock (lockObj)
{
    taskToRun.Start();
    taskToRun.Wait();
}
What does cause the inlining here is the fact that the main thread calls taskToRun.Wait() right after taskToRun.Start(). Here's what happens behind the scenes:
taskToRun.Start() queues the task for execution by the task scheduler, but it hasn't been allocated a pool thread yet.
On the same thread, the TPL code inside taskToRun.Wait() checks whether the task has already been allocated a pool thread (it hasn't) and executes it inline on the main thread. In this case, it's OK to acquire the same lock twice without a deadlock.
There is also a TPL Task Scheduler thread. If this thread gets a chance to execute before taskToRun.Wait() is called on the main thread, inlining doesn't happen and you get a deadlock. Adding Thread.Sleep(100) before Task.Wait() would be modelling this scenario. Inlining also doesn't happen if you don't use Task.Wait() and rather use something like AsyncWaitHandle.WaitOne() above.
As to the quote you've added to your question, it depends on how you read it. One thing is for sure: the same lock from the main thread can be entered inside the task, when the task gets inlined, without a deadlock. You just cannot make any assumptions that it will get inlined.
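One way to observe on a given run whether inlining happened is to compare thread IDs inside and outside the task; a minimal sketch (illustrative only, and the outcome may differ from run to run):
using System;
using System.Threading;
using System.Threading.Tasks;

class InliningObservation
{
    static void Main()
    {
        int mainThreadId = Thread.CurrentThread.ManagedThreadId;
        int taskThreadId = 0;

        var task = new Task(() => taskThreadId = Thread.CurrentThread.ManagedThreadId);
        task.Start();
        task.Wait(); // may or may not inline the task onto this thread

        Console.WriteLine(taskThreadId == mainThreadId
            ? "The task was inlined on the waiting thread."
            : "The task ran on a separate pool thread.");
    }
}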
In your example, no deadlock occurs because the thread scheduling the task and the thread executing the task happen to be the same. If you were to modify the code such that your task ran on a different thread, you would see the deadlock occur, because two threads would then be contending for a lock on the same object.
Your example, modified to create a deadlock:
class Program {
    static object lockObj = new object();

    static void Main(string[] args) {
        Console.WriteLine("Program starts running on thread {0}",
            Thread.CurrentThread.ManagedThreadId);

        var taskToRun = new Task(() => {
            lock (lockObj) {
                for (int i = 0; i < 10; i++)
                    Console.WriteLine("{0} from Thread {1}",
                        i, Thread.CurrentThread.ManagedThreadId);
            }
        });

        lock (lockObj) {
            taskToRun.Start();
            taskToRun.Wait();
        }
    }
}
This example code has two standard threading problems. To understand it, you first have to understand thread races. When you start a thread, you can never assume it will start running right away. Nor can you assume that the code inside the thread arrives at a particular statement at a particular moment in time.
What matters a great deal here is whether or not the task arrives at the lock statement before the main thread does. In other words, whether it races ahead of the code in the main thread. Do model this as a horse race; the thread that acquires the lock first is the horse that wins.
If it is the task that wins (pretty common on modern machines with multiple processor cores, or in a simple program that doesn't have any other threads active, and probably the case when you test the code), then nothing goes wrong. It acquires the lock and prevents the main thread from doing the same when it, later, arrives at the lock statement. So you'll see the console output, the task finishes, the main thread now acquires the lock and the Wait() call quickly completes.
But if the thread pool is already busy with other threads, or the machine is busy executing threads in other programs, or you are unlucky and you get an email just as the task starts running, then the code in the task doesn't start running right away and it is the main thread that acquires the lock first. The task can now no longer enter the lock statement, so it cannot complete. And the main thread cannot complete; Wait() will never return. A deadly embrace called deadlock.
Deadlock is relatively easy to debug; you've got all the time in the world to attach a debugger and look at the active threads to see why they are blocked. Threading race bugs are incredibly difficult to debug; they happen too infrequently, and it can be very difficult to reason through the ordering problem that causes them. A common approach to diagnose thread races is to add tracing to the program so you can see the order. Which changes the timing and can make the bug disappear. Lots of programs were shipped with the tracing left on because they couldn't diagnose the problem :)
Thanks @jeffrey-richter for pointing it out. @embee, there are scenarios where we use locks other than Monitor, and when a single thread tries to acquire such a lock multiple times, a deadlock occurs. Check out the example below.
The following code produces the expected deadlock. It need not be a nested task; the deadlock can occur without nesting as well.
class Program
{
    static AutoResetEvent signalEvent = new AutoResetEvent(false);

    static void Main(string[] args)
    {
        Task.Run(() =>
        {
            Console.WriteLine("Program starts running on thread {0}",
                Thread.CurrentThread.ManagedThreadId);

            var taskToRun = new Task(() =>
            {
                signalEvent.WaitOne(); // the task tries to "acquire" the non-recursive lock
                for (int i = 0; i < 10; i++)
                    Console.WriteLine("{0} from Thread {1}",
                        i, Thread.CurrentThread.ManagedThreadId);
            });

            taskToRun.Start();
            taskToRun.Wait();   // inlined or not, the task blocks on WaitOne...
            signalEvent.Set();  // ...and this "release" is never reached: deadlock
        }).Wait();
    }
}
According to MSDN, Monitor.Wait():
Releases the lock on an object and blocks the current thread until it
reacquires the lock.
However, everything I have read about Wait() and Pulse() seems to indicate that simply releasing the lock on another thread is not enough. I need to call Pulse() first to wake up the waiting thread.
My question is why? Threads waiting for the lock on a Monitor.Enter() just get it when it's released. There is no need to "wake them up". It seems to defeat the usefulness of Wait().
e.g.
static object _lock = new Object();

static void Main()
{
    new Thread(Count).Start();
    Thread.Sleep(10);

    lock (_lock)
    {
        Console.WriteLine("Main thread grabbed lock");
        Monitor.Pulse(_lock); // Why is this required when we're about to release the lock anyway?
    }
}

static void Count()
{
    lock (_lock)
    {
        int count = 0;
        while (true)
        {
            Console.WriteLine("Count: " + count++);
            // give other threads a chance every 10th iteration
            if (count % 10 == 0)
                Monitor.Wait(_lock);
        }
    }
}
If I use Exit() and Enter() instead of Wait() I can do:
static object _lock = new Object();

static void Main()
{
    new Thread(Count).Start();
    Thread.Sleep(10);
    lock (_lock) Console.WriteLine("Main thread grabbed lock");
}

static void Count()
{
    lock (_lock)
    {
        int count = 0;
        while (true)
        {
            Console.WriteLine("Count: " + count++);
            // give other threads a chance every 10th iteration
            if (count % 10 == 0)
            {
                Monitor.Exit(_lock);
                Monitor.Enter(_lock);
            }
        }
    }
}
You use Enter / Exit to acquire exclusive access to a lock.
You use Wait / Pulse to allow co-operative notification: I want to wait for something to occur, so I enter the lock and call Wait; the notifying code will enter the lock and call Pulse.
The two schemes are related, but they're not trying to accomplish the same thing.
Consider how you'd implement a producer/consumer queue where the consumer can say "Wake me up when you've got an item for me to consume" without something like this.
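For example, a bare-bones sketch of such a producer/consumer queue built on Monitor.Wait and Monitor.Pulse (the class and member names are made up for illustration):
using System;
using System.Collections.Generic;
using System.Threading;

class BlockingQueue<T>
{
    private readonly Queue<T> _items = new Queue<T>();
    private readonly object _gate = new object();

    public void Enqueue(T item)
    {
        lock (_gate)
        {
            _items.Enqueue(item);
            Monitor.Pulse(_gate);      // wake one waiting consumer, if any
        }
    }

    public T Dequeue()
    {
        lock (_gate)
        {
            while (_items.Count == 0)  // re-check the condition after every wake-up
                Monitor.Wait(_gate);   // releases _gate, sleeps, re-acquires it after a pulse
            return _items.Dequeue();
        }
    }
}

class Demo
{
    static void Main()
    {
        var queue = new BlockingQueue<int>();

        var consumer = new Thread(() =>
        {
            for (int i = 0; i < 3; i++)
                Console.WriteLine("Consumed " + queue.Dequeue());
        });
        consumer.Start();

        for (int i = 0; i < 3; i++)
        {
            queue.Enqueue(i);          // each Enqueue pulses the waiting consumer
            Thread.Sleep(100);
        }
        consumer.Join();
    }
}
Each Enqueue pulses while holding the lock, so a waiting consumer is moved to the ready queue and picks up the item as soon as the producer releases the lock.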
I myself had this same doubt, and despite some interesting answers (some of them present here), I still kept searching for a more convincing answer.
I think an interesting and simple thought on this matter would be: I can call Monitor.Wait(lockObj) at a particular moment in which no other thread is waiting to acquire a lock on the lockObj object. I just want to wait for something to happen (some object's state to change, for instance), which is something I know that will happen eventually, on some other thread. As soon as this condition is achieved, I want to be able to reacquire the lock as soon as the other thread releases its lock.
By the definition of the Monitor.Wait method, it releases the lock and tries to acquire it again. If it didn't wait for the Monitor.Pulse method to be called before trying to acquire the lock again, it would simply release the lock and immediately acquire it again (depending on your code, possibly in a loop).
That is, I think it's interesting to understand the need for the Monitor.Pulse method by looking at its role in how the Monitor.Wait method works.
Think of it like this: "I don't want to release this lock and immediately try to acquire it again, because I DON'T WANT to be ME the next thread to acquire this lock. And I also don't want to sit in a loop calling Thread.Sleep and checking some flag in order to know when the condition I'm waiting for has been achieved, so that I can try to reacquire the lock. I just want to 'hibernate' and be awakened automatically, as soon as someone tells me the condition I'm waiting for has been achieved."
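A minimal sketch of that "hibernate until told" idea, using a boolean flag as the condition (the names are illustrative):
using System;
using System.Threading;

class WaitForCondition
{
    static readonly object Gate = new object();
    static bool _ready;   // the condition the waiter cares about

    static void Main()
    {
        var waiter = new Thread(() =>
        {
            lock (Gate)
            {
                // Sleep until another thread both changes the state AND pulses;
                // the while loop re-checks the condition after every wake-up.
                while (!_ready)
                    Monitor.Wait(Gate);
                Console.WriteLine("Condition met, waiter continues.");
            }
        });
        waiter.Start();

        Thread.Sleep(200);             // give the waiter time to start waiting
        lock (Gate)
        {
            _ready = true;             // change the state...
            Monitor.Pulse(Gate);       // ...and tell the waiting thread about it
        }
        waiter.Join();
    }
}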
Read the Remarks section of the linked MSDN page:
When a thread calls Wait, it releases the lock on the object and enters the object's waiting queue. The next thread in the object's ready queue (if there is one) acquires the lock and has exclusive use of the object. All threads that call Wait remain in the waiting queue until they receive a signal from Pulse or PulseAll, sent by the owner of the lock. If Pulse is sent, only the thread at the head of the waiting queue is affected. If PulseAll is sent, all threads that are waiting for the object are affected. When the signal is received, one or more threads leave the waiting queue and enter the ready queue. A thread in the ready queue is permitted to reacquire the lock.
This method returns when the calling thread reacquires the lock on the object. Note that this method blocks indefinitely if the holder of the lock does not call Pulse or PulseAll.
So, basically, when you call Monitor.Wait, your thread is in the waiting queue. For it to re-acquire the lock, it needs to be in the ready queue. Monitor.Pulse moves the first thread in the waiting queue to the ready queue and thus allows for it to re-acquire the lock.
I have been looking all over MSDN and can't find a reason why a thread cannot be interrupted when sleeping within a finally block. I have tried aborting with no success.
Is there any way to wake up a thread when it is sleeping within a finally block?
Thread t = new Thread(ProcessSomething) { IsBackground = false };
t.Start();
Thread.Sleep(500);
t.Interrupt();
t.Join();

private static void ProcessSomething()
{
    try { Console.WriteLine("processing"); }
    finally
    {
        try
        {
            Thread.Sleep(Timeout.Infinite);
        }
        catch (ThreadInterruptedException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
Surprisingly, MSDN claims a thread CAN be aborted in a finally block: http://msdn.microsoft.com/en-us/library/aa332364(v=vs.71).aspx
"There is a chance the thread could abort while a finally block is running, in which case the finally block is aborted."
Edit: I find Hans Passant's comment the best answer, since it explains why a thread sometimes can and sometimes cannot be interrupted/aborted in a finally block: namely, when the process is shutting down.
Thanks.
Aborting and interrupting threads should be avoided if possible, as this can corrupt the state of a running program. For example, imagine you aborted a thread that was holding locks on resources; those locks would never be released.
Instead, consider using a signalling mechanism so that threads can cooperate with each other and handle blocking and unblocking gracefully, e.g.:
private readonly AutoResetEvent ProcessEvent = new AutoResetEvent(false);
private readonly AutoResetEvent WakeEvent = new AutoResetEvent(false);

public void Do()
{
    Thread th1 = new Thread(ProcessSomething);
    th1.IsBackground = false;
    th1.Start();

    ProcessEvent.WaitOne();
    Console.WriteLine("Processing started...");

    Thread th2 = new Thread(() => WakeEvent.Set());
    th2.Start();

    th1.Join();
    Console.WriteLine("Joined");
}

private void ProcessSomething()
{
    try
    {
        Console.WriteLine("Processing...");
        ProcessEvent.Set();
    }
    finally
    {
        WakeEvent.WaitOne();
        Console.WriteLine("Woken up...");
    }
}
Update
Quite an interesting low-level issue. Although Abort() is documented, Interrupt() is much less so.
The short answer to your question is no, you cannot wake a thread in a finally block by calling Abort or Interrupt on it.
Not being able to abort or interrupt threads in finally blocks is by design, simply so that finally blocks have a chance to run as you would expect. If you could abort and interrupt threads in finally blocks this could have unintended consequences for cleanup routines, and so leave the application in a corrupted state - not good.
A slight nuance with thread interruption is that an interrupt may have been issued against the thread any time before it entered the finally block, but while it was not in a WaitSleepJoin state (i.e. not blocked). In that instance, if there were a blocking call in the finally block, it would immediately throw a ThreadInterruptedException and crash out of the finally block. Finally-block protection prevents this.
As well as protection in finally blocks, this extends to try blocks and also to CERs (Constrained Execution Regions), which can be configured in user code to prevent a range of exceptions from being thrown until after a region has executed - very useful for critical blocks of code which must complete, and for delaying aborts.
The exception (no pun intended) to this are what are called Rude Aborts. These are ThreadAbortExceptions raised by the CLR hosting environment itself. They can cause finally and catch blocks to be exited, but not CERs. For example, the CLR may raise Rude Aborts in response to threads which it judges to be taking too long to do their work or exit, e.g. when trying to unload an AppDomain or when executing code within the SQL Server CLR. In your particular example, when your application shuts down and the AppDomain unloads, the CLR would issue a Rude Abort on the sleeping thread as there would be an AppDomain unload timeout.
Aborting and interrupting in finally blocks is not going to happen in user code, but there is slightly different behavior between the two cases.
Abort
When calling Abort on a thread in a finally block, the calling thread is blocked. This is documented:
The thread that calls Abort might block if the thread that is being aborted is in a protected region of code, such as a catch block, finally block, or constrained execution region.
In the abort case if the sleep was not infinite:
The calling thread will issue an Abort but block here until the finally block is exited i.e. it stops here and does not immediately proceed to the Join statement.
The callee thread has its state set to AbortRequested.
The callee continues sleeping.
When the callee wakes up, as it has a state of AbortRequested it will continue executing the finally block code and then "evaporate" i.e. exit.
When the aborted thread leaves the finally block: no exception is raised, no code after the finally block is executed, and the thread's state is Aborted.
The calling thread is unblocked, continues to the Join statement and immediately passes as the called thread has exited.
So given your example with an infinite sleep, the calling thread will block forever at step 1.
Interrupt
In the interrupt case if the sleep was not infinite:
Not so well documented...
The calling thread will issue an Interrupt and continue executing.
The calling thread will block on the Join statement.
The callee thread has its state set to raise an exception on the next blocking call, but crucially as it is in a finally block it is not unblocked i.e. woken.
The callee continues sleeping.
When the callee wakes up, it will continue executing the finally block.
When the interrupted thread leaves the finally block it will throw a ThreadInterruptedException on its next blocking call (see code example below).
The calling thread "joins" and continues as the called thread has exited, however, the unhandled ThreadInterruptedException in step 6 has now flattened the process...
So again given your example with an infinite sleep, the calling thread will block forever, but at step 2.
Summary
So although Abort and Interrupt have slightly different behavior, they will both result in the called thread sleeping forever, and the calling thread blocking forever (in your example).
Only a Rude Abort can force a blocked thread to exit a finally block, and these can only be raised by the CLR itself (you cannot even use reflection to diddle ThreadAbortException.ExceptionState as it makes an internal CLR call to get the AbortReason - no opportunity to be easily evil there...).
The CLR prevents user code from causing finally blocks to be prematurely exited for our own good - it helps to prevent corrupted state.
For an example of the slightly different behavior with Interrupt:
internal class ThreadInterruptFinally
{
    public static void Do()
    {
        Thread t = new Thread(ProcessSomething) { IsBackground = false };
        t.Start();

        Thread.Sleep(500);

        t.Interrupt();
        t.Join();
    }

    private static void ProcessSomething()
    {
        try
        {
            Console.WriteLine("processing");
        }
        finally
        {
            Thread.Sleep(2 * 1000);
        }

        Console.WriteLine("Exited finally...");
        Thread.Sleep(0); //<-- ThreadInterruptedException
    }
}
The whole point of a finally block is to hold something that will not be affected by an interrupt or abort and will run to normal completion no matter what. Permitting a finally block to be aborted or interrupted would pretty much defeat the point. Sadly, as you noted, finally blocks can be aborted or interrupted due to various race conditions. This is why you will see many people advising you not to interrupt or abort threads.
Instead, use cooperative design. If a thread is supposed to be interrupted, then instead of calling Sleep, use a timed wait. Instead of calling Interrupt, signal the thing the thread waits for.
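As a sketch of that cooperative approach (using a ManualResetEventSlim and a timed wait in place of Sleep/Interrupt; the names and the timeout are arbitrary choices for illustration):
using System;
using System.Threading;

class CooperativeWake
{
    // The event the worker waits on instead of sleeping; set it to wake the worker.
    static readonly ManualResetEventSlim WakeSignal = new ManualResetEventSlim(false);

    static void Main()
    {
        var worker = new Thread(ProcessSomething) { IsBackground = false };
        worker.Start();

        Thread.Sleep(500);
        WakeSignal.Set();   // wake the worker instead of interrupting it
        worker.Join();
    }

    private static void ProcessSomething()
    {
        try
        {
            Console.WriteLine("processing");
        }
        finally
        {
            // Timed wait instead of Thread.Sleep: returns true when signalled,
            // false if the timeout elapses - either way the thread wakes up cleanly.
            bool signalled = WakeSignal.Wait(TimeSpan.FromSeconds(30));
            Console.WriteLine(signalled ? "woken by signal" : "timed out");
        }
    }
}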
Can anybody explain to me, with a simple example, how to use Monitor.PulseAll()? I have already gone through some examples on Stack Overflow, but as a beginner I feel those are above my head.
How about (to show the interaction):
static void Main()
{
    object obj = new object();
    Console.WriteLine("Main thread wants the lock");
    lock (obj)
    {
        Console.WriteLine("Main thread has the lock...");
        ThreadPool.QueueUserWorkItem(ThreadMethod, obj);
        Thread.Sleep(1000);
        Console.WriteLine("Main thread about to wait...");
        Monitor.Wait(obj); // this releases and re-acquires the lock
        Console.WriteLine("Main thread woke up");
    }
    Console.WriteLine("Main thread has released the lock");
}

static void ThreadMethod(object obj)
{
    Console.WriteLine("Pool thread wants the lock");
    lock (obj)
    {
        Console.WriteLine("Pool thread has the lock");
        Console.WriteLine("(press return)");
        Console.ReadLine();
        Monitor.PulseAll(obj); // this signals, but doesn't release the lock
        Console.WriteLine("Pool thread has pulsed");
    }
    Console.WriteLine("Pool thread has released the lock");
}
Re signalling: when dealing with Monitor (a.k.a. lock), there are two types of blocking. First there is the "ready queue", where threads are queued waiting to execute; on the line after Console.WriteLine("Pool thread wants the lock");, the pool thread enters the ready queue. When the lock is released, a thread from the ready queue can acquire the lock.
The second queue is for threads that need waking; the call to Wait places the thread in this second queue (and releases the lock temporarily). The call to PulseAll moves all threads from this second queue into the ready queue (Pulse moves only one thread), so that when the pool thread releases the lock the main thread is allowed to pick up the lock again.
It sounds complex (and perhaps it is) - but it isn't as bad as it sounds... honestly. However, threading code is always tricky, and needs to be approached with both caution and a clear head.
Monitor.Wait() will always wait for a pulse.
So, the principle is:
When in doubt, Monitor.Pulse()
When still in doubt, Monitor.PulseAll()
Other than that, I'm not sure what you are asking. Could you please elaborate?
Edit:
Your general layout should be:
// (lockObj here is whatever object you synchronize on; "lock" itself is a C#
// keyword and cannot be used as a variable name)
Monitor.Enter(lockObj);
try
{
    while (!done)
    {
        while (!ready)
        {
            Monitor.Wait(lockObj);
        }

        // do something, and...

        if (weChangedState)
        {
            Monitor.Pulse(lockObj);
        }
    }
}
finally
{
    Monitor.Exit(lockObj);
}