Find which thread currently owns a lock so I can kill it - C#

I need to find out which thread currently owns the lock.
I'm writing a multithreaded server using the ThreadPool that hosts independent application instances. When shutting down an application instance I call Monitor.TryEnter to either acquire the lock or time out. If a timeout occurs I need to find out which thread owns the lock so I can abort it.
If there are no bugs in the applications I would never need to do this, as each worker locks and unlocks the application instance on entry and exit. But if there IS a bug, and for whatever reason the worker doesn't exit and is either deadlocked or stuck in an endless loop, I want to be able to kill that thread and application instance while letting the rest of my server live on. The application instance at this point is a lost cause.
Seems like a pretty straightforward requirement, but I couldn't find anything built in to do it.
One workaround would be to add a Thread member in the same context as the lock and have each thread update it as it acquires the lock (roughly as in the sketch below). But that relies on everyone ALWAYS remembering to update it when a lock is acquired.
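For illustration, here is a minimal sketch of that workaround (the OwnedLock type and its members are hypothetical): a small wrapper that records the owning thread when the lock is acquired, which at least removes the "remembering" burden from callers as long as everyone goes through the wrapper:

using System;
using System.Threading;

public sealed class OwnedLock
{
    private readonly object _sync = new object();

    // The thread that currently holds the lock, or null if nobody does.
    public Thread Owner { get; private set; }

    public bool TryEnter(TimeSpan timeout)
    {
        if (!Monitor.TryEnter(_sync, timeout))
            return false;             // timed out: Owner tells us who is holding the lock
        Owner = Thread.CurrentThread;
        return true;
    }

    public void Exit()
    {
        Owner = null;
        Monitor.Exit(_sync);
    }
}

A shutdown path whose TryEnter times out can then read Owner to decide which thread to abort, with all the usual caveats about Thread.Abort.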

You are thinking of thread control as a top-down hierarchy, but that is not the right model for multithreaded applications. If a thread times out or something else goes wrong during its execution, the thread itself has to take care of releasing the lock and ending itself.

Related

Monitor.Pulse in C# appears suboptimal: must be in lock scope

spoiler note: the question is the last phrase.
In C#, the classical pattern to use a condition variable is like this:
lock (answersQueue)
{
    answersQueue.Enqueue(c);
    Monitor.Pulse(answersQueue); // condition variable "notify one".
}
and some other thread:
lock (answersQueue)
{
    while (answersQueue.Count == 0)
    {
        // unlocks the answer queue and sleeps here until notified.
        Monitor.Wait(answersQueue);
    }
    ...
}
That's an example taken from my code.
If I place the Pulse outside of the lock scope, it fails (Monitor.Pulse must be called while the lock is held).
However, signalling outside the critical section is described as the correct way, cf.:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms686903(v=vs.85).aspx
and:
http://www.installsetupconfig.com/win32programming/threadprocesssynchronizationapis11_7.html
(search for "inside")
And indeed it is idiotic to signal the sleeping thread while you are still in your critical section, because the sleeping thread CAN'T wake up (not immediately), BECAUSE it is INSIDE a critical section as well!
Therefore, I hope that the .NET/C# Pulse call actually just flags the lock object, so that when it goes out of scope it "pulses" the condition variable at that moment. Otherwise it would have an optimality issue.
So how come the design of the Monitor object was chosen to be that way?
Edit:
I found the answer in this paper:
http://research.microsoft.com/pubs/64242/implementingcvs.pdf
section "Optimising Signal and Broadcast" and the previous section about NT kernel and how to make Condition Variable on top of Semaphores, which is the reason for introducing the "darned queues".
NOW that makes me a better engineer.
And indeed it is idiotic to signal the sleeping thread while you are still in your critical section. Because the sleeping thread CAN'T wake up
Pulse doesn't expect to get a thread running; it only expects to move a thread between the two queues (waiting and ready). The "now go do something" part comes from releasing the lock via Exit (or the end of a lock block). In reality, it isn't an issue because Monitor.Pulse typically happens right before a Wait or an Exit.
Therefore, I hope that the .NET/C# Pulse call actually just flags the lock object, so that when it goes out of scope it "pulses" the condition variable at that moment. Otherwise it would have an optimality issue.
Again, these are different issues: moving between waiting and ready is one thing; exiting a lock already has all the code needed to actually activate the next ready thread.
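To make that concrete, here is a small illustrative sketch (not from the original post) showing that a waiter pulsed inside the lock only resumes once the pulsing thread has exited the lock, so pulsing inside the critical section costs nothing extra:

using System;
using System.Threading;

class PulseDemo
{
    static readonly object Gate = new object();

    static void Main()
    {
        var waiter = new Thread(() =>
        {
            lock (Gate)
            {
                Console.WriteLine("waiter: waiting");
                Monitor.Wait(Gate);                   // releases Gate and joins the waiting queue
                Console.WriteLine("waiter: resumed"); // only runs after the pulser exits the lock
            }
        });
        waiter.Start();

        Thread.Sleep(100);                            // crude way to let the waiter reach Wait
        lock (Gate)
        {
            Monitor.Pulse(Gate);                      // moves the waiter from waiting to ready
            Console.WriteLine("pulser: pulsed, still inside the lock");
            Thread.Sleep(500);                        // the waiter stays blocked during this sleep
        }                                             // lock released here; the waiter can re-acquire it
        waiter.Join();
    }
}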
You have not understood the basic problem of synchronization. What is a "monitor", what does it mean that a thread sleeps, and what does it mean that it is about to be woken up?
A monitor is a mid-level synchronization structure. It is not a low-level petty volatile boolean flag with a bus-halting XCHG operation, and it is not a high-level thread-pool handler that requires dozens of other special mechanisms.
On a monitor, MANY threads may sleep. There are logical queues in there that, for example, preserve the order in which threads were put to sleep/woken up, and mechanisms that guarantee proper scheduling and fairness. I will not get into the details; all of it is out there on the web, even on the wiki.
Add to that that the operation is PULSE. A pulse is instantaneous. It does not "stick". A pulse will wake those currently sleeping; if another thread checks the monitor after the pulse, it will go to sleep.
Now imagine: you have a queue of 5 sleeping threads. One thread (6th) wants now to pulse them, and yet another (7th) wants to check the monitor.
6th and 7th are running in parallel, truly simultaneously, since you have quad-core CPU.
So, tell me: what would happen to the queue's implementation if the 6th thread starts pulsing, waking and removing woken threads from the queue, while at the same time the 7th one starts adding itself there?
To solve that, the internal queues would have to be internally synchronized and locked, so only one thread at time modifies them.
Um wait. We just stumbled upon a case where we wanted to SYNCHRONIZE something, and to do it properly we need to SYNCHRONIZE on another thing? Not good.
Therefore, the actual LOCK is taken EXTERNALLY before you talk to the monitor itself. This achieves SINGLE LOCKING instead of introducing several layers of hierarchical locks.
That way it is simpler, faster, and more resource-friendly.

C# threading deadlock

I have a multi-threaded program in C#. What is the best way to prevent deadlock in practice?
Is it using timed locks?
Also, what is the best tool available to help detect and prevent deadlocks?
Thank you very much.
Deadlocks typically occur in a few scenarios:
You are using several locks and not acquiring/releasing them in a consistent order. Hence, you may create a situation where one thread holds lock A and needs lock B, while another thread holds lock B and needs lock A. Neither of them can proceed, because each thread is locking in a different order (see the sketch after this list).
When using a reentrant lock and locking it more times than you are unlocking it. See this related question: why does the following code result in deadlock
When using Monitor.Wait/Monitor.Pulse as a signaling mechanism, but the thread that must call Wait does not manage to reach the call by the time the other thread has called Pulse, so the signal is lost. You can use an AutoResetEvent for a persistent signal.
You have a worker thread polling a flag to know when to stop. The main thread sets the flag and attempts to join the worker thread, but you forgot to make the flag volatile.
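As an illustration of the first scenario, here is a minimal, self-contained sketch (hypothetical names) of two threads taking the same two locks in opposite orders:

using System;
using System.Threading;

class LockOrderDeadlock
{
    static readonly object LockA = new object();
    static readonly object LockB = new object();

    static void Main()
    {
        var t1 = new Thread(() =>
        {
            lock (LockA)
            {
                Thread.Sleep(100);   // widen the race window
                lock (LockB) { }     // waits for LockB, held by t2
            }
        });
        var t2 = new Thread(() =>
        {
            lock (LockB)
            {
                Thread.Sleep(100);
                lock (LockA) { }     // waits for LockA, held by t1 -> deadlock
            }
        });
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();        // will almost certainly hang here
    }
}

The fix is to pick one global order (say, always LockA before LockB) and stick to it everywhere.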
It's not C# specific. You should always acquire locks in some well-defined order.
There is a lot of information on the internet; for example, you might take a look here:
http://www.javamex.com/tutorials/threads/deadlock.shtml

How to close NHibernate session when using the ThreadPool?

Since threads are reused over and over by the ThreadPool I can't tell when to close NHibernate sessions for each thread to release used up resources.
Should I spawn my own threads (to ensure they are unique), or is there a better way to do this using the ThreadPool?
I fail to see the problem. You might have to elaborate or add some code to your question.
Each thread has its own method. Simply allocate the session at the beginning of the method and clean it up at the end. The same applies when you are using a thread pool thread.
Don't forget to wrap all thread code in a try/catch, or your application will crash if an exception is unhandled.
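As a sketch of that pattern, assuming an NHibernate ISessionFactory is available (the Worker class and _sessionFactory field below are hypothetical), each pooled work item opens its own session, disposes it at the end, and wraps everything in a try/catch:

using System;
using System.Threading;
using NHibernate;

public class Worker
{
    private readonly ISessionFactory _sessionFactory;

    public Worker(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void Queue(object workItem)
    {
        ThreadPool.QueueUserWorkItem(state =>
        {
            try
            {
                using (ISession session = _sessionFactory.OpenSession())
                {
                    // ... do the actual work with the session ...
                }   // session disposed here, before the thread goes back to the pool
            }
            catch (Exception ex)
            {
                // log and swallow; an unhandled exception on a pool thread kills the process
                Console.Error.WriteLine(ex);
            }
        }, workItem);
    }
}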
I'd have it set up so that there is one session per thread. This is probably the easiest way to make sure that you don't run into issues where you have a thread terminating a session that is in use by another thread.

C# Communication between threads

I am using .NET 3.5 and am trying to wrap my head around a problem (not being a supreme threading expert, bear with me).
I have a Windows service which has a very intensive process that is always running. I have put this process onto a separate thread so that the main thread of my service can handle operational tasks - i.e., service audit cycles, handling configuration changes, etc.
I'm starting the thread via the typical ThreadStart to a method which kicks the process off - call it the worker thread.
On this worker thread I am sending data to another server. As is expected, the server reboots every now and again, the connection is lost, and I need to re-establish it (I am notified of the loss of connection via an event). From there I do my reconnect logic and I am back up and running; however, what I soon noticed was that I was creating this worker thread over and over again each time (not what I want).
Now I could kill the workerthread when I lose the connection and start a new one but this seems like a waste of resources.
What I really want to do, is marshal the call (i.e., my thread start method) back to the thread that is still in memory although not doing anything.
Please post any examples or docs you have that would be of use.
Thanks.
You should avoid killing the worker thread. When you forcibly kill a Win32 thread, not all of its resources are fully recovered. I believe the reserved virtual address space (or is it the root page?) for the thread stack is not recovered when a Win32 thread is killed. It may not be much, but in a long-running server service process, it will add up over time and eventually bring down your service.
If the thread is allowed to exit its threadproc to terminate normally, all the resources are recovered.
If the background thread will be running continuously (not sleeping), you could just use a global boolean flag to communicate state between the main thread and the background thread, as long as the background thread checks this flag periodically. If the flag is set, the thread can shut itself down cleanly and exit. No need for locking semantics if the main thread is the only writer and the background thread only reads the flag value.
When the background thread loses the connection to the server that it's sending data to, why doesn't it perform the reconnect on its own? It's not clear to me why the main thread needs to tear down the background thread to start another.
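A minimal sketch of that flag-based shutdown (names are illustrative; the flag is marked volatile here so the worker is guaranteed to see the main thread's write):

using System;
using System.Threading;

class Service
{
    private volatile bool _stopRequested;
    private Thread _worker;

    public void Start()
    {
        _worker = new Thread(() =>
        {
            while (!_stopRequested)
            {
                // ... send data, reconnect on failure, etc. ...
                Thread.Sleep(50);   // placeholder for a unit of real work
            }
            // falling out of the threadproc lets all thread resources be reclaimed normally
        });
        _worker.IsBackground = true;
        _worker.Start();
    }

    public void Stop()
    {
        _stopRequested = true;      // signal the worker
        _worker.Join();             // wait for a clean exit
    }
}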
You can use the Singleton pattern. In your case, make the connection a static object. Both threads can access the object, that is, construct it and use it.
The main thread could construct it whenever required, and the worker thread access it whenever it is available.
Call the method using ThreadPool.QueueUserWorkItem instead. This method grabs a thread from the thread pool and kicks off a method. It appears to be ideal for the task of starting a method on another thread.
Also, when you say "typical ThreadStart" do you mean you're creating and starting a new Thread with a ThreadStart parameter, or you're creating a ThreadStart and calling Invoke on it?
Have you considered a BackgroundWorker?
From what I understand, you just have a single thread that's doing work, unless the need arises where you have to cancel its processing.
I would kill (but end gracefully if possible) the worker thread anyway. Everything gets garbage-collected, and you can start from scratch.
How often does this server reboot happen? If it happens often enough for resources to be a problem, it's probably happening too often.
The BackgroundWorker is a bit slower than using plain threads, but it has the option of supporting the CancelAsync method.
Basically, BackgroundWorker is a wrapper around a worker thread with some extra options and events.
The CancelAsync method only works when WorkerSupportsCancellation is set.
When CancelAsync is called, CancellationPending is set.
The worker thread should periodically check CancellationPending to see if it needs to quit prematurely.
--jeroen
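Here is a short sketch of that cancellation pattern (the Example class and the loop body are hypothetical placeholders):

using System.ComponentModel;
using System.Threading;

class Example
{
    public static BackgroundWorker CreateWorker()
    {
        var worker = new BackgroundWorker { WorkerSupportsCancellation = true };

        worker.DoWork += (sender, e) =>
        {
            var bw = (BackgroundWorker)sender;
            for (int i = 0; i < 1000; i++)
            {
                if (bw.CancellationPending)   // set when CancelAsync is called
                {
                    e.Cancel = true;          // report that we quit early
                    return;
                }
                Thread.Sleep(10);             // placeholder for a unit of work
            }
        };

        return worker;
    }
}

// Usage, e.g. from the main thread:
//   var worker = Example.CreateWorker();
//   worker.RunWorkerAsync();
//   ...
//   worker.CancelAsync();   // sets CancellationPending; the loop notices and exits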

ASP.NET multithreading & "thread has already entered the lock"

In an ASP.NET application, I have a resource that is protected by a ReaderWriterLockSlim.
Normal requests call EnterReadLock, access the resource, call ExitReadLock and return.
At one moment, I accidentally forgot to put a call to ExitReadLock somewhere in my code. On the next request, I got an exception stating that the thread had already entered the lock.
Fair enough: if at the end of request A the thread does not exit the lock, and that same thread is used to process request B, and tries to enter the lock, it will throw.
Now, my question: is the following scenario possible? reasons?
thread begins to process request A
thread enters lock
thread does, say, sleep, or do some IO, and so becomes available
same thread begins to process request B, while request A is "on hold"
thread enters lock !! throws !!
If yes, what other solution do I have to protect said resource? Bearing in mind that I want to use a ReaderWriterLockSlim because I also have another thread that may write to the resource.
edit: add some details:
1) This happens in the ProcessRequest method of an HttpHandler which generates, caches and serves images. These images are expensive to generate. So, the first request will generate and cache the image, and we want to put the other requests on hold while generating.
2) We have not tried to "reproduce" -- at the moment we're trying to know if it is possible or not that the same thread begins processing a request while already waiting for the image to become ready.
3) I am aware of LockRecursionPolicy but I am not sure I fully understand its purpose, and whether it would be OK to set it to SupportsRecursion in our case.
edit: going further...
According to the document pointed to by Ryan below, a thread will block and not return to the pool as long as we don't engage in async operations. So once the thread is blocked waiting for EnterReadLock to complete, it won't return to the pool or process any other request.
So 1) we should be safe but 2) we might starve the thread pool. Assuming we don't want to immediately return a dummy "please wait" image, what solutions do we have?
Yes, that is possible in ASP.NET. There are some places (async etc) where ASP.NET does thread-switching and uses different threads for a single request (at different points), which means that it is entirely possible that half way through a request that thread goes on to process another request.
At what points in the pipeline do you take/release the lock? Can you reduce this? Personally I would limit the thread duration to some synchronous method that is separated from the UI/presentation code. It won't be changing threads in the middle of a synchronous method.
A read-locked thread that's reused by the pool could definitely reenter the lock if it's requesting a read. Can a read-locked thread be reused by the pool? That I'm not sure of, but the read operation should be fast enough to avoid the race condition.
The thread pool should only reclaim a thread if it's idle (i.e., sleeping). So, you would expect that if your protected code doesn't cause the thread to go idle, then the thread shouldn't be reclaimed while it's in the lock - just make sure you exit the lock (always exit in a finally block)
If you want to ensure that you don't reenter the lock, then you'd need to block the thread with a Monitor (or a write lock); blocking threads are not released back to the pool.
EDIT: If you're generating the images, then you should be acquiring a write lock which will block all read locks. Neither the blocking, nor blocked, threads will be reused, so you shouldn't have an issue.
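As an illustrative sketch of that suggestion (ImageCache and GenerateImage are hypothetical; a real handler would key the cache by image), readers always release the lock in a finally block and the expensive generation runs under the write lock:

using System;
using System.Threading;

public class ImageCache
{
    private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
    private static byte[] _cachedImage;

    public byte[] GetImage()
    {
        Lock.EnterReadLock();
        try
        {
            if (_cachedImage != null)
                return _cachedImage;
        }
        finally
        {
            Lock.ExitReadLock();        // always released, even if the body throws
        }

        Lock.EnterWriteLock();          // blocks all readers while we generate
        try
        {
            if (_cachedImage == null)   // another writer may have generated it already
                _cachedImage = GenerateImage();
            return _cachedImage;
        }
        finally
        {
            Lock.ExitWriteLock();
        }
    }

    private static byte[] GenerateImage()
    {
        // ... expensive image generation ...
        return new byte[0];
    }
}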
