Spoiler note: the actual question is the last phrase.
In C#, the classical pattern for using a condition variable is like this:
lock (answersQueue)
{
    answersQueue.Enqueue(c);
    Monitor.Pulse(answersQueue); // condition variable "notify one".
}
and in some other thread:
lock (answersQueue)
{
    while (answersQueue.Count == 0)
    {
        // unlock the answers queue and sleep here until notified.
        Monitor.Wait(answersQueue);
    }
    ...
}
That's an example taken from my code.
If I place the Pulse outside of the lock scope, it doesn't work: Monitor.Pulse throws a SynchronizationLockException at run time, because the caller must own the lock.
However, signalling outside the lock is supposedly the correct way, cf.:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms686903(v=vs.85).aspx
and:
http://www.installsetupconfig.com/win32programming/threadprocesssynchronizationapis11_7.html
(search for "inside")
And indeed it is idiotic to signal the sleeping thread while you are still in your critical section, because the sleeping thread CAN'T wake up (not immediately), since it is INSIDE a critical section as well!
Therefore, I hope that the .NET/C# Pulse call actually just flags the lock object, so that the condition variable is really "pulsed" at the moment the lock goes out of scope. Otherwise, it would have an optimality issue.
So how come the design of the Monitor class was chosen to be that way?
Edit:
I found the answer in this paper:
http://research.microsoft.com/pubs/64242/implementingcvs.pdf
section "Optimising Signal and Broadcast" and the previous section about NT kernel and how to make Condition Variable on top of Semaphores, which is the reason for introducing the "darned queues".
NOW that makes me a better engineer.
And indeed it is idiotic to signal the sleeping thread while you are still in your critical section, because the sleeping thread CAN'T wake up
Pulse doesn't expect to get a thread running; it only expects to move a thread between the two queues (waiting and ready). The actual "wake up and run" is part of releasing the lock via Exit (or the end of a lock block). In reality it isn't an issue, because Monitor.Pulse typically happens right before a Wait or an Exit.
Therefore, I hope that the .NET/C# Pulse call actually just flags the lock object, so that the condition variable is really "pulsed" at the moment the lock goes out of scope. Otherwise, it would have an optimality issue.
Again, these are different issues: moving a thread between waiting and ready is one thing; exiting a lock already has all the code needed to actually activate the next ready thread.
You have not understood the basic problem of synchronization: what a "monitor" is, what it means for a thread to sleep, and what it means for a thread to be about to be woken up.
A monitor is a mid-level synchronization structure. It is not a petty low-level volatile boolean flag with a bus-halting XCHG operation, and it is not a high-level thread-pool handler that requires dozens of other special mechanisms.
On a monitor, MANY threads may sleep. There are logical queues in there that, for example, preserve the order in which threads were put to sleep/woken up, and mechanisms that guarantee proper scheduling and fairness. I will not get into details; all of it is out there on the web, even on Wikipedia.
Add to that the fact that the operation is a PULSE. A pulse is instantaneous; it does not "stick". Pulse will wake those threads that are sleeping right now. If another thread checks the monitor after the pulse, it will go to sleep.
Now imagine: you have a queue of 5 sleeping threads. One thread (the 6th) now wants to pulse them, and yet another (the 7th) wants to check the monitor.
The 6th and the 7th are running in parallel, truly simultaneously, since you have a quad-core CPU.
So, tell me: what would happen to the queue's implementation if the 6th started pulsing, waking, and removing woken threads from the queue, while at the same time the 7th started adding itself to it?
To solve that, the internal queues would have to be internally synchronized and locked, so that only one thread at a time modifies them.
Um, wait. We just stumbled upon a case where we wanted to SYNCHRONIZE something, and to do it properly we need to SYNCHRONIZE on another thing? Not good.
Therefore, the actual LOCK is taken EXTERNALLY before you talk to the monitor itself. This achieves SINGLE LOCKING, instead of introducing several layers of hierarchical locks.
That way it is simpler, faster, and more resource-friendly.
I have a thread reading from a specific PLC's memory, and it works perfectly. Now what I want is to start another thread to test the behavior of the system (simulating the first thread) in case of a connectivity issue, and, when everything is OK, to continue the first thread. But I think I'll have problems with that, because these two threads will need to use the same port.
My first idea was to abort the first thread, start the second one, and, when everything's OK again, abort this thread and "restart" the first one.
I've read some other forums, and people say that aborting or suspending a thread is the worst solution. I've also read about synchronization of threads, but I don't really know whether it is useful in this case, because I've never used it.
My question is: what is the correct way to solve this kind of situation?
You have a shared resource that you need to coordinate thread access to. There are a number of mechanisms in .NET available for that coordination.
There is a wonderful resource that provides both an introduction to threading concepts in .NET and a discussion of advanced concepts in an approachable manner:
http://www.albahari.com/threading/
In your case, have a look at the section on locking:
Exclusive locking is used to ensure that only one thread can enter particular sections of code at a time. The two main exclusive locking constructs are lock and Mutex. Of the two, the lock construct is faster and more convenient. Mutex, though, has a niche in that its lock can span applications in different processes on the computer.
http://www.albahari.com/threading/part2.aspx#_Locking
You can structure your two threads so that they must acquire a specific lock to work with the port. Have your first thread release that lock before you start the second thread, then have the first thread wait to acquire that lock again (which the second thread will hold until done).
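Below is a minimal sketch of that idea. Everything PLC-related is a placeholder (ReadFromPlc and SimulateConnectivityIssue are hypothetical names, not a real API); the point is simply that both threads funnel through the same lock object before touching the port.

using System;
using System.Threading;

class PortCoordination
{
    // Both threads must hold this lock while they talk to the port.
    static readonly object portLock = new object();

    static void Main()
    {
        var reader = new Thread(() =>
        {
            while (true)
            {
                lock (portLock)
                {
                    // ReadFromPlc();  // hypothetical: the normal read cycle
                }
                Thread.Sleep(100);    // give the test thread a chance to take the lock
            }
        }) { IsBackground = true };
        reader.Start();

        // The simulation thread blocks until the reader releases the lock,
        // then owns the port exclusively until it is done.
        var simulator = new Thread(() =>
        {
            lock (portLock)
            {
                // SimulateConnectivityIssue();  // hypothetical: the test logic
            }
        });
        simulator.Start();
        simulator.Join();
    }
}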
I was wondering about the Monitor class.
As far as I know, the waiting threads are not FIFO: the first one that acquires the lock is not always the first one in the waiting queue.
Is this correct?
Is there some way to ensure the FIFO condition?
Regards
If you are referring to a built-in way, then no. Repeatedly calling TryEnter in a loop is by definition not fair and unfortunately neither is the simple Monitor.Enter. Technically a thread could wait forever without getting the lock.
If you want absolute fairness you will need to implement it yourself using a queue to keep track of arrival order.
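For example, here is a minimal sketch of such a fair lock: each waiter parks on its own ticket object, and Release hands the lock over strictly in arrival order. This is an illustration of the idea, not a production-ready lock.

using System.Collections.Generic;
using System.Threading;

class FifoLock
{
    sealed class Ticket { public bool Granted; }

    readonly Queue<Ticket> waiters = new Queue<Ticket>();
    bool held;

    public void Acquire()
    {
        Ticket ticket;
        lock (waiters)
        {
            if (!held) { held = true; return; }  // lock was free: take it now
            ticket = new Ticket();
            waiters.Enqueue(ticket);             // remember our place in line
        }
        lock (ticket)
        {
            // Guard against a lost pulse: only sleep while not yet granted.
            while (!ticket.Granted)
                Monitor.Wait(ticket);
        }
    }

    public void Release()
    {
        Ticket next;
        lock (waiters)
        {
            if (waiters.Count == 0) { held = false; return; }  // nobody in line
            next = waiters.Dequeue();  // the oldest waiter inherits the lock
        }
        lock (next)
        {
            next.Granted = true;
            Monitor.Pulse(next);
        }
    }
}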
Is there some way to ensure the FIFO condition?
In a word: no!
I wrote a short article about this: Is the Ready Queue FIFO?
Look at this question; I think it will be very useful for you: Does lock() guarantee acquired in order requested?
especially this quote:
Because monitors use kernel objects internally, they exhibit the same roughly-FIFO behavior that the OS synchronization mechanisms also exhibit (described in the previous chapter). Monitors are unfair, so if another thread tries to acquire the lock before an awakened waiting thread tries to acquire the lock, the sneaky thread is permitted to acquire a lock.
I have a multi-threaded program in C#. What is the best way to prevent deadlock in practice?
Is it using timed locks?
Also, what is the best tool available to help detect and prevent the deadlock?
Thank you very much.
Deadlocks typically occur in a few scenarios:
You are using several locks and not acquiring them in a consistent order. Hence, you may create a situation where one thread holds lock A and needs lock B, while another thread holds lock B and needs lock A. Neither of them can proceed, because each thread locks in a different order (see the sketch after this list).
When using a reentrant lock and locking it more times than you are unlocking it. See this related question: why does the following code result in deadlock
When using Monitor.Wait/Monitor.Pulse as a signaling mechanism, the thread that must call Wait may not manage to reach the call by the time the other thread calls Pulse, so the signal is lost. You can use an AutoResetEvent for a persistent signal.
You have a worker thread polling a flag to know when to stop. The main thread sets the flag and attempts to join the worker thread, but you forgot to make the flag volatile.
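To illustrate the first scenario, here is a minimal sketch of the classic lock-ordering deadlock and its fix (lockA and lockB are just illustrative names):

class LockOrderDemo
{
    static readonly object lockA = new object();
    static readonly object lockB = new object();

    // Deadlock-prone: one thread takes A then B, the other takes B then A.
    // If each gets its first lock, both wait forever for the other's.
    static void Worker1Bad() { lock (lockA) { lock (lockB) { /* work */ } } }
    static void Worker2Bad() { lock (lockB) { lock (lockA) { /* work */ } } }

    // Fixed: every thread acquires the locks in the same global order,
    // so a hold-and-wait cycle can never form.
    static void Worker1Good() { lock (lockA) { lock (lockB) { /* work */ } } }
    static void Worker2Good() { lock (lockA) { lock (lockB) { /* work */ } } }
}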
It's not C#-specific. You should always acquire locks in some well-defined order.
There is much information on the internet; for example, you might take a look here:
http://www.javamex.com/tutorials/threads/deadlock.shtml
I have a C# program, which has an "Agent" class. The program creates several Agents, and each Agent has a "run()" method, which executes a Task (i.e.: Task.Factory.StartNew()...).
Each Agent performs some calculations, and then needs to wait for all the other Agents to finish their calculations before proceeding to the next stage (its actions will be based on the calculations of the others).
In order to make an Agent wait, I have created a CancellationTokenSource (named "tokenSource"), and in order to alert the program that this Agent is going to sleep, I raise an event. Thus, the 2 consecutive commands are:
(1) OnWaitingForAgents(new EventArgs());
(2) tokenSource.Token.WaitHandle.WaitOne();
(The event is caught by an "AgentManager" class, which runs on a thread of its own, and the 2nd command makes the Agent's Task thread sleep until a signal is received on the cancellation token.)
Each time the above event is fired, the AgentManager class catches it, and adds +1 to a counter. If the number of the counter equals the number of Agents used in the program, the AgentManager (which holds a reference to all Agents) wakes each one up as follows:
agent.TokenSource.Cancel();
Now we reach my problem: the 1st command is executed asynchronously by an Agent; then, due to a context switch between threads, the AgentManager catches the event and goes on to wake up all the Agents. BUT the current Agent has not even reached the 2nd command yet!
Thus, the Agent receives a "wake up" signal, and only then does it go to sleep, which means it gets stuck sleeping with no one to wake it up!
Is there a way to "atomize" the 2 consecutive methods together, so no context switch will happen, thus forcing the Agent to go to sleep before the AgentManager has the chance to wake him up?
The low-level technique that you are asking about is thread synchronisation. What you have there is a critical section (or part of one), and you need to protect access to it. I'm surprised that you've learned about multithreaded programming without having learned about thread synchronisation and critical sections yet! It's essential to know about these things for any kind of "low-level" multithreaded programming.
Maybe look into Parallel.Invoke or Parallel.For in .NET 4, which allows you to execute methods in parallel and wait until all parallel methods have been invoked.
http://msdn.microsoft.com/en-us/library/dd992634.aspx
Seems like that would help you out a lot, and take care of all the queuing for you.
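For example, here is a minimal sketch using Parallel.Invoke; it does not return until every delegate has completed, which replaces the manual event/token handshake (the agent bodies are placeholders):

using System;
using System.Threading.Tasks;

class ParallelStages
{
    static void Main()
    {
        // Run all the agents' calculations in parallel; Parallel.Invoke
        // blocks until every delegate has finished.
        Parallel.Invoke(
            () => Console.WriteLine("agent 1 calculating"),
            () => Console.WriteLine("agent 2 calculating"),
            () => Console.WriteLine("agent 3 calculating"));

        // All agents are done here; it is safe to start the next stage.
        Console.WriteLine("next stage");
    }
}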
Hmm... I don't think it's a good idea (or even possible) to develop software in .NET while worrying about context switches, since neither Windows nor .NET is real-time. You probably have another kind of problem in that code.
As I understand it, you simply run all your agents in parallel, and you want to wait until all of them have finished before going to the next stage. You can use several techniques to accomplish that; the easiest one would be using Monitor.Wait(object monitor) and Monitor.PulseAll(object monitor).
The Task library offers several ways to do this as well. As @jishi has pointed out, you can use the Parallel flavours, or spawn a lot of Tasks and then wait for all of them with the Task.WaitAll(Task[] tasks) method.
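For instance, a minimal sketch of the Task.WaitAll variant (the per-agent work is a placeholder):

using System;
using System.Linq;
using System.Threading.Tasks;

class AgentStages
{
    static void Main()
    {
        // One task per agent's calculation phase.
        Task[] calculations = Enumerable.Range(1, 3)
            .Select(i => Task.Factory.StartNew(
                () => Console.WriteLine("agent {0} calculating", i)))
            .ToArray();

        // Block until every agent has finished its calculation.
        Task.WaitAll(calculations);

        Console.WriteLine("all agents done; starting the next stage");
    }
}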
Each time the above event is fired, the AgentManager class catches it, and adds +1 to a counter.
How are you adding 1 to that counter, and how are you reading it? You should use Interlocked.Increment to ensure an atomic operation, and read it with a volatile operation such as Thread.VolatileRead, or simply put both accesses inside a lock statement.
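A short sketch of the Interlocked approach, assuming a hypothetical counter on the AgentManager (WakeAllAgents is a placeholder for whatever wake-up logic you use):

using System.Threading;

class AgentManager
{
    int waitingAgents;             // shared counter, written by many threads
    readonly int totalAgents = 5;  // illustrative agent count

    // Called from each agent's "waiting" event handler.
    public void OnAgentWaiting()
    {
        // Interlocked.Increment is an atomic read-modify-write that returns
        // the new value, so the increment and the check cannot be torn apart.
        if (Interlocked.Increment(ref waitingAgents) == totalAgents)
        {
            // WakeAllAgents();  // hypothetical: signal all agents here
        }
    }
}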
I am writing a server application which processes request from multiple clients. For the processing of requests I am using the threadpool.
Some of these requests modify a database record, and I want to restrict the access to that specific record to one threadpool thread at a time. For this I am using named semaphores (other processes are also accessing these records).
For each new request that wants to modify a record, the thread should wait in line for its turn.
And this is where the question comes in:
As I don't want the thread pool to fill up with threads waiting for access to a record, I found the RegisterWaitForSingleObject method of the thread pool.
But when I read the documentation (MSDN) under the section Remarks:
New wait threads are created automatically when required. ...
Does this mean that the threadpool will fill up with wait-threads? And how does this affect the performance of the threadpool?
Any other suggestions to boost performance is more than welcome!
Thanks!
Your solution is a viable option. In the absence of more specific details I do not think I can offer other tangible options. However, let me try to illustrate why I think your current solution is, at the very least, based on sound theory.
Let's say you have 64 requests that come in simultaneously. It is reasonable to assume that the thread pool could dispatch each one of those requests to a thread immediately, so you might have 64 threads that immediately begin processing. Now let's assume that the mutex has already been acquired by another thread and is held for a really long time. That means those 64 threads will be blocked for a long time waiting for the thread that currently owns the mutex to release it. In other words, those 64 threads are wasted doing nothing.
On the other hand, if you choose to use RegisterWaitForSingleObject, as opposed to making a blocking call to wait for the mutex to be released, then you can immediately release those 64 waiting threads (work items) and allow them to be put back into the pool. If I were to implement my own version of RegisterWaitForSingleObject, I would use the WaitHandle.WaitAny method, which allows me to specify up to 64 handles (I did not choose 64 for the number of requests at random, after all) in a single blocking method call. I am not saying it would be easy, but I could replace my 64 waiting threads with only a single thread from the pool. I do not know how Microsoft implemented the RegisterWaitForSingleObject method, but I am guessing they did it in a manner that is at least as efficient as my strategy. To put this another way, you should be able to reduce the number of pending work items in the thread pool by at least a factor of 64 by using RegisterWaitForSingleObject.
So you see, your solution is based on sound theory. I am not saying that your solution is optimal, but I do believe your concern is unwarranted in regards to the specific question asked.
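For illustration, here is a minimal sketch of the RegisterWaitForSingleObject approach, assuming a hypothetical named semaphore per record (the semaphore name and ModifyRecord are placeholders, not part of any real API):

using System;
using System.Threading;

class RecordLockWaiter
{
    // Hypothetical named semaphore guarding one database record.
    static readonly Semaphore recordLock =
        new Semaphore(1, 1, "Global\\record-42");

    static void Main()
    {
        // Instead of blocking a pool thread in recordLock.WaitOne(),
        // hand the wait off to the pool's shared wait threads; the
        // callback runs on a worker thread once the wait is satisfied.
        ThreadPool.RegisterWaitForSingleObject(
            recordLock,
            OnRecordAvailable,
            null,    // state passed to the callback
            30000,   // give up waiting after 30 seconds
            true);   // fire the callback only once

        Console.ReadLine();  // keep the demo process alive
    }

    static void OnRecordAvailable(object state, bool timedOut)
    {
        if (timedOut) return;      // could log or retry here
        try
        {
            // ModifyRecord(state);  // hypothetical database update
        }
        finally
        {
            recordLock.Release();  // let the next waiter in
        }
    }
}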
IMHO you should let the database do its own synchronization. All you need to do is to ensure that you're sync'ed within your process.
The Interlocked class might be a premature optimization that is too complex to use correctly. I would recommend using higher-level sync objects, such as ReaderWriterLockSlim. Or, better yet, a Monitor.
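A minimal sketch of the ReaderWriterLockSlim suggestion (the cached field is just illustrative): many threads may read concurrently, but a writer gets exclusive access.

using System.Threading;

class RecordCache
{
    readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    string cachedValue;

    // Any number of threads may hold the read lock at the same time.
    public string Read()
    {
        rwLock.EnterReadLock();
        try { return cachedValue; }
        finally { rwLock.ExitReadLock(); }
    }

    // The write lock excludes both readers and other writers.
    public void Write(string value)
    {
        rwLock.EnterWriteLock();
        try { cachedValue = value; }
        finally { rwLock.ExitWriteLock(); }
    }
}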
An approach to this problem that I've used before is to have the first thread that gets one of these work items be responsible for any other ones that arrive while it's processing. This is done by queueing the work items and then dropping into a critical section to process the queue. Only the "first" thread will drop into the critical section. If a thread can't get the critical section, it'll leave and let the thread already operating in the critical section handle the queued object.
It's really not very complicated - the only thing that might not be obvious is that when leaving the critical section, the processing thread has to do it in a way that doesn't potentially leave a late-arriving workitem on the queue. Basically, the 'processing' critical section lock has to be released while holding the queue lock. If not for this one requirement, a synchronized queue would be sufficient, and the code would really be simple!
Pseudo code:
// `workitem` is an object that contains the database modification request
//
// `queue` is a Queue<T> that can hold these workitem requests
//
// `processing_lock` is an object used to provide a lock
// to indicate a thread is processing the queue
// any number of threads can call this function, but only one
// will end up processing all the workitems.
//
// The other threads will simply drop the workitem in the queue
// and leave
void threadpoolHandleDatabaseUpdateRequest(object workitem)
{
    // put the workitem on the queue
    Monitor.Enter(queue.SyncRoot);
    queue.Enqueue(workitem);
    Monitor.Exit(queue.SyncRoot);

    // try to become the one processing thread
    bool doProcessing = false;
    Monitor.TryEnter(processing_lock, ref doProcessing);
    if (!doProcessing) {
        // another thread has the processing lock; it'll
        // handle the workitem
        return;
    }

    for (;;) {
        Monitor.Enter(queue.SyncRoot);
        if (queue.Count == 0) {
            // done processing the queue;
            // release the locks in an order that ensures
            // a workitem won't get stranded on the queue
            Monitor.Exit(processing_lock);
            Monitor.Exit(queue.SyncRoot);
            break;
        }
        workitem = queue.Dequeue();
        Monitor.Exit(queue.SyncRoot);

        // this will get the database mutex, do the update, and release
        // the database mutex
        doDatabaseModification(workitem);
    }
}
The ThreadPool creates roughly one wait thread per 64 registered waitable objects.
Good comments are here: Thread.sleep vs Monitor.Wait vs RegisteredWaitHandle?