Lock statement - does it always release the lock? - C#

I have recently read this post from Eric Lippert regarding the lock implementation in C#, and some questions still remain.
In 4.0 implementation if a thread abort or any cross thread exception occurs just before the Monitor.Exit(temp) in the finally block is executed - would that keep the lock on the object?
Is there any possibility for an exception to occur at this level, leaving the object still in a locked state?

In 4.0 implementation if a thread abort or any cross thread exception occurs just before the Monitor.Exit(temp) in the finally block is executed - would that keep the lock on the object?
Let's take a look at that code, so that it is clear to other readers:
bool lockWasTaken = false;
var temp = obj;
try
{
    Monitor.Enter(temp, ref lockWasTaken);
    {
        body
    }
}
finally
{
    if (lockWasTaken)
    {
        // What if a thread abort happens right here?
        Monitor.Exit(temp);
    }
}
Your question is not answerable because it is based on a false assumption, namely, that thread aborts can happen in the middle of a finally block.
Thread aborts cannot happen in the middle of a finally block. That is just one reason amongst many reasons why you should never attempt to abort a thread. The entire thread could be running in a finally block, and therefore be not-abortable.
Is there any possibility for an exception to occur at this level, leaving the object still in a locked state?
No. A thread abort will be delayed until control leaves the finally. Unlocking a valid lock does not allocate memory or throw another exception.

Read up about ThreadAbortException:
When this exception is raised, the runtime executes all the finally blocks before ending the thread
(This includes any finally block that is currently executing when Thread.Abort is called)
So yes, the lock will be released still. Whether this is desirable or not is a very different matter though - you don't know that the thread is just about to release the lock - it could be anywhere, and may be in the middle of mutating the state that the lock was protecting - so as always, the advice is to avoid Thread.Abort.
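As a small, hedged demonstration of this behaviour (it assumes the .NET Framework, since Thread.Abort is unsupported on modern .NET; the program and messages are illustrative), the abort requested below is only delivered once the worker leaves its finally block, so both finally messages always print:
using System;
using System.Threading;

class FinallyDelaysAbort
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            try { }
            finally
            {
                // Abort requests are deferred while this finally block runs.
                Console.WriteLine("finally: started");
                Thread.Sleep(2000);
                Console.WriteLine("finally: finished");
            }
            // The pending ThreadAbortException is delivered once control leaves
            // the finally, so this line is normally never reached.
            Console.WriteLine("after finally");
        });

        worker.Start();
        Thread.Sleep(200);   // give the worker time to enter its finally block
        worker.Abort();      // delivered only after the finally completes
        worker.Join();
        Console.WriteLine("worker ended");
    }
}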

Related

Is using Thread.Abort() and handling ThreadAbortException in .NET safe practice?

I need to develop a multithreaded Azure worker role in C# - create multiple threads, feed requests to them, each request might require some very long time to process (not my code - I'll call a COM object to do actual work).
Upon role shutdown I need to gracefully stop processing. How do I do that? Looks like if I just call Thread.Abort() the ThreadAbortException is thrown in the thread and the thread can even use try-catch-finally (or using) to clean up resources. This looks quite reliable.
What bothers me is that my experience is mostly C++, and it's impossible to gracefully abort a thread in an unmanaged application - it will just stop without any further processing, and this might leave data in an inconsistent state. So I'm kind of paranoid about whether anything like that happens if I call Thread.Abort() for a busy thread.
Is it safe practice to use Thread.Abort() together with handling ThreadAbortException? What should I be aware of if I do that?
Is using Thread.Abort() and handling ThreadAbortException in .NET safe practice?
TL;DR version: No, it isn't.
Generally you're safe when all type invariants (whether explicitly stated or not) are actually true. However, many methods will break these invariants while running, only to restore them by the time they finish. If the thread is idle in a state where the invariants hold you'll be OK, but in that case it's better to use something like an event to signal the thread to exit gracefully (i.e. you don't need to abort).
An out-of-band exception¹ thrown in a thread while it is in such an invariants-not-true, i.e. invalid, state is where the problems start. These problems include, but are certainly not limited to, mutually inconsistent field and property values (data structures in an invalid state), locks not exited, and events representing "changes happened" not fired.
In many cases it is possible to deal with these in clean-up code (e.g. a finally block), but then consider what happens when the out-of-band exception occurs in that clean-up code? This leads to clean-up code for the clean-up code. But then that code is itself vulnerable, so you need clean-up code for the clean-up code of the clean-up code… it never ends!
There are solutions, but they are not easy to design (and they tend to impact your whole design), and even harder to test—how do you re-create all the cases (think combinatorial explosion)? Two possible routes are:
1. Work on copies of the state, update the copies, and then atomically swap the current state for the new state. If there is an out-of-band exception then the original state remains (and finalisers can clean up the discarded temporary state). A sketch of this approach appears after the footnotes below.
This is rather like the function of database transactions (albeit RDBMSs work with locks and transaction log files).
It is also similar to the approaches for achieving the "strong exception guarantee" developed in the C++ community in response to a paper questioning whether exceptions could ever be safe (C++ of course has no GC/finaliser queue to clean up discarded objects). See Herb Sutter's "Guru of the Week #8: CHALLENGE EDITION: Exception Safety" for the solution.
In practice this is hard to achieve unless your state can be encapsulated in a single reference.
2. Look at "Constrained Execution Regions", but note the limitations on what you can do in these cases. (MSDN Magazine had an introductory article (an introduction to the subject, not introductory-level) from the .NET 2 beta period².)
In practice, if you have to do this, using approach #2 to manage the state change under #1 is probably the best approach, but getting it right, and then validating that it is correct (and that the correctness is maintained), is hard.
Summary: It's a bit like optimisation: rule 1: don't do it; rule 2 (experts only): don't do it unless you have no other option.
¹ A ThreadAbortException is not the only such exception.
² So details have possibly changed.
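As a concrete illustration of route #1, here is a minimal sketch of the copy-then-atomic-swap idea, assuming the whole state can be held in one immutable object behind a single reference (the Counters type and member names are purely illustrative, not from the original answer):
using System.Threading;

sealed class Counters
{
    public readonly int Processed;
    public readonly int Failed;
    public Counters(int processed, int failed) { Processed = processed; Failed = failed; }
}

static class StateHolder
{
    private static Counters _current = new Counters(0, 0);

    public static void RecordSuccess()
    {
        Counters before = _current;   // snapshot of the current (immutable) state
        while (true)
        {
            // All the work happens on a copy; _current is untouched so far, so an
            // out-of-band exception here leaves the published state consistent.
            Counters after = new Counters(before.Processed + 1, before.Failed);

            // Publish atomically; if another thread swapped first, retry against
            // the value we actually observed.
            Counters seen = Interlocked.CompareExchange(ref _current, after, before);
            if (seen == before)
                return;
            before = seen;
        }
    }
}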
One example where it's problematic to abort a thread:
using (var disposable = new ClassThatShouldBeDisposed())
{
    ...
}
Now suppose the thread abort happens after the constructor of the class has finished but before the assignment to the local variable. Then the object won't be disposed. Eventually the finalizer will run, but that can be much later.
Deterministic disposing and thread abortion don't work well together. The only way I know to get safe disposing when using thread abortion is to put all the critical code inside a finally clause:
try
{
    // Empty try block
}
finally
{
    // Put all your code in the finally clause to fool thread abortion
    using (var disposable = new ClassThatShouldBeDisposed())
    {
        ...
    }
}
This works because thread abortion allows finally code to execute. Of course this implies that the thread abortion will simply not work until the code leaves the finally block.
One way to get your code to work correctly with thread abortion is using the following instead of the using statement. Unfortunately it's very ugly.
ClassThatShouldBeDisposed disposable = null;
try
{
    try { } finally { disposable = new ClassThatShouldBeDisposed(); }
    // Do your work here
}
finally
{
    if (disposable != null)
        disposable.Dispose();
}
Personally I just assume threads never get aborted (except when unloading the AppDomain) and thus write normal using-based code.
It's very difficult to handle a ThreadAbortException correctly, because it can be thrown in the middle of whatever code the thread is executing.
Most code is written with the assumption that some actions, for example int i = 0;, never cause an exception, so critical exception handling is only applied to code that can actually cause an exception by itself. When you abort a thread, the exception can arrive in code that is not prepared to handle it.
The best practice is to tell the thread to end by itself. Create a class for the method that is running the thread, and put a boolean variable in it. Both the code that started the thread and the method running the thread can access the variable, so you can just flip it to tell the thread to end. The code in the thread of course has to check the value periodically.
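A hedged sketch of that boolean-flag pattern is below; the class and member names are illustrative, and the field is marked volatile so the worker thread is guaranteed to see the write from the controlling thread:
using System.Threading;

class Worker
{
    // Written by the controlling thread, read by the worker thread.
    private volatile bool _stopRequested;

    public void RequestStop()
    {
        _stopRequested = true;
    }

    public void Run() // thread entry point
    {
        while (!_stopRequested)
        {
            DoWork(); // one unit of work, then re-check the flag
        }
        // Falling out of the loop ends the thread cleanly, with invariants intact.
    }

    private void DoWork()
    {
        Thread.Sleep(100); // placeholder for the real work
    }
}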
Thread.Abort is an unsafe way of killing the thread.
- It raises an asynchronous ThreadAbortException, a special exception that can be caught but is automatically raised again at the end of the catch block.
- It can leave the state corrupted, and your application becomes unstable.
- The ThreadAbortException is raised in the other thread.
The best practice is to use wrappers that support work cancellation, such as the Task class, or to use a volatile bool. Instead of Thread.Abort, consider using Thread.Join, which will block the calling thread until the working thread has finished.
Some links:
- How To Stop a Thread in .NET (and Why Thread.Abort is Evil)
- Managed code and asynchronous exception hardening
- The dangers of Thread.Abort
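A hedged sketch of the Task-based cancellation approach mentioned above (assuming .NET 4.5+ for Task.Run; the names are illustrative):
using System.Threading;
using System.Threading.Tasks;

class CancellationExample
{
    static void Main()
    {
        var cts = new CancellationTokenSource();

        Task worker = Task.Run(() =>
        {
            while (!cts.Token.IsCancellationRequested)
            {
                // One unit of work per iteration, then re-check the token.
                Thread.Sleep(100);
            }
        });

        Thread.Sleep(1000);
        cts.Cancel();   // cooperative: the loop notices the request and exits on its own
        worker.Wait();  // analogous to Thread.Join: wait for a clean finish
        cts.Dispose();
    }
}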
As others have mentioned, aborting a thread is probably not a good idea. However, signalling a thread to stop with a bool may not work either, because we have no guarantee that the value of a bool will be synchronized across threads.
It may be better to use an event:
class ThreadManager
{
    private Thread thread;
    private AutoResetEvent endThread = new AutoResetEvent(false);

    public ThreadManager()
    {
        // Created here rather than in a field initializer, which cannot
        // reference the instance method CallCOMMethod.
        thread = new Thread(new ThreadStart(CallCOMMethod));
        thread.Start();
    }

    public void StopThread()
    {
        endThread.Set();
    }

    private void CallCOMMethod()
    {
        // WaitOne(0) polls the event without blocking, so work continues
        // until StopThread signals it.
        while (!endThread.WaitOne(0))
        {
            // Call COM method
        }
    }
}
Since the COM method is long running you may just need to "bite the bullet" and wait for it to complete its current iteration. Is the information computed during the current iteration of value to the user?
If not, one option may be:
Have the ThreadManager itself run on a separate thread from the UI which checks for the stop notification from the user relatively often.
When the user requests that the long running operation be stopped, the UI thread can immediately return to the user.
The ThreadManager waits for the long running COM operation to complete its current iteration, then it throws away the results.
It's considered best practice to just let the thread's method return:
void Run() // thread entry function
{
    while (true)
    {
        if (somecondition) // check for a terminating condition - usually "have I been disposed?"
            break;
        if (workExists)
            doWork();
        Thread.Sleep(interval);
    }
}
A simple idea for your requirement: check the thread's IsAlive property, then abort the thread.
ThreadStart th = new ThreadStart(CheckValue);
System.Threading.Thread th1 = new Thread(th);
th1.Start(); // the thread must be started, otherwise IsAlive is never true

if (taskStatusCompleted)
{
    if (th1.IsAlive)
    {
        th1.Abort();
    }
}

private void CheckValue()
{
    // my method....
}

At what point is the Thread.CurrentThread evaluated?

In the following code:
ThreadStart ts = new ThreadStart((MethodInvoker)delegate
{
    executingThreads.Add(Thread.CurrentThread);
    // work done here.
    executingThreads.Remove(Thread.CurrentThread);
});
Thread t = new Thread(ts);
t.Start();
Perhaps you can see that I'd like to keep track of the threads that I start, so I can abort them when necessary.
But I worry that the Thread.CurrentThread is evaluated from the thread that creates the Thread t, and thus aborting it would not abort the spawned thread.
Aborting threads is never a good idea. If you are 100% positive that whatever task you are performing in the thread you want to abort will not corrupt any state information anywhere else, then you can probably get away with it, but it's best to avoid doing so even in those cases. There are better solutions, like flagging the thread to stop and giving it a chance to clean up whatever mess it may leave behind.
Anyhow, answering your question: Thread.CurrentThread is evaluated inside the method invoked when the new thread starts executing, therefore it will return the new thread, not the thread where the new thread was created (if that makes sense).
In the code you have given, Thread.CurrentThread is called in the context of the thread t and not its creator.
Also, aborting threads is morally equivalent to killing puppies.
To answer your question, and without comment on the wisdom of aborting a thread (I agree with previous commenters, by the way): Thread.CurrentThread, as you have written it, will do what you are expecting it to do - it will represent the thread that is currently invoking your delegate, not the thread that created and started it.
First of all, I think it's a bad idea to abort threads; check the other answers/comments for the reasons. But I'll leave that aside for now.
Since your calls are inside a delegate, they'll only be evaluated once the thread executes the content of that delegate. So the code works as you expect it to, and you get the thread on which the delegate executes, not the thread which created the delegate.
Of course your code isn't exception-safe; you should probably put the Remove call into a finally clause.
Also, executingThreads must be a thread-safe collection, or you need to use locking.
Another way to keep track of the threads is to add the created thread to a collection from the creating thread, and then use the properties of that thread to check if the thread has already terminated. That way you don't have to rely on the thread keeping track itself. But this still doesn't fix the abortion problem.
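A minimal sketch of that alternative, recording the Thread objects from the creating thread so nothing needs to run inside the delegate (the ThreadTracker class and the ConcurrentBag choice are illustrative assumptions, not from the original post):
using System.Collections.Concurrent;
using System.Threading;

class ThreadTracker
{
    private readonly ConcurrentBag<Thread> _threads = new ConcurrentBag<Thread>();

    public Thread StartTracked(ThreadStart work)
    {
        var t = new Thread(work);
        _threads.Add(t);   // recorded by the creating thread, before Start
        t.Start();
        return t;
    }

    public int CountAlive()
    {
        int alive = 0;
        foreach (Thread t in _threads)
        {
            if (t.IsAlive)   // IsAlive tells us whether the thread has finished yet
                alive++;
        }
        return alive;
    }
}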

how can a lock statement exit without releasing the lock?

I have a strange deadlock problem: if I pause the program using Visual Studio and inspect the threads, I can only see two threads waiting on the lock. No thread appears to be inside the lock scope! Is Visual Studio just lying, or how can a lock statement exit without releasing the lock?
Thanks
This can happen under the following circumstances. Suppose you have
Enter();
try
{
    Foo();
}
finally
{
    Exit();
}
and a thread abort exception is thrown after the Enter but before the try. Now the monitor has been entered but the finally will never run because the exception was thrown before the try.
We've fixed this flaw in C# 4. In C# 4 the lock statement is now generated as
bool mustExit = false;
try
{
    Enter(ref mustExit);
    Foo();
}
finally
{
    if (mustExit) Exit();
}
Things can still go horribly wrong of course; aborting a thread is no guarantee that the thread ever aborts, that finally blocks ever run, and so on. You could end up in the unhandled exception event handler with the lock still taken. But this is at least a little better.
This can happen if you manually call Monitor.Enter(something) without calling Monitor.Exit.
Do you have any explicit calls to Monitor.Enter / Monitor.TryEnter in your code? Can you see the stack traces for those waiting threads? If so, look at where they're waiting - that should make it obvious.
Are you by any chance calling yield return from within a lock statement from a thread pool thread?
If that is the case, you may want to look at Yielding surprises
This blog post describes a bug (I got locked out) I encountered when I combined those three things the wrong way. Luckily I was able to resolve the issue with a small change to the code.

Does a locked object stay locked if an exception occurs inside it?

In a C# threading app, if I lock an object, let us say a queue, and an exception occurs, will the object stay locked? Here is the pseudo-code:
int ii;
lock (MyQueue)
{
    MyClass LclClass = (MyClass)MyQueue.Dequeue();
    try
    {
        ii = int.Parse(LclClass.SomeString);
    }
    catch
    {
        MessageBox.Show("Error parsing string");
    }
}
As I understand it, code after the catch doesn't execute - but I have been wondering if the lock will be freed.
I note that no one has mentioned in their answers to this old question that releasing a lock upon an exception is an incredibly dangerous thing to do. Yes, lock statements in C# have "finally" semantics; when control exits the lock normally or abnormally, the lock is released. You're all talking about this like it is a good thing, but it is a bad thing! The right thing to do if you have a locked region that throws an unhandled exception is to terminate the diseased process immediately before it destroys more user data, not free the lock and keep on going.
Look at it this way: suppose you have a bathroom with a lock on the door and a line of people waiting outside. A bomb in the bathroom goes off, killing the person in there. Your question is "in that situation will the lock be automatically unlocked so the next person can get into the bathroom?" Yes, it will. That is not a good thing. A bomb just went off in there and killed someone! The plumbing is probably destroyed, the house is no longer structurally sound, and there might be another bomb in there. The right thing to do is get everyone out as quickly as possible and demolish the entire house.
I mean, think it through: if you locked a region of code in order to read from a data structure without it being mutated on another thread, and something in that data structure threw an exception, odds are good that it is because the data structure is corrupt. User data is now messed up; you don't want to try to save user data at this point because you are then saving corrupt data. Just terminate the process.
If you locked a region of code in order to perform a mutation without another thread reading the state at the same time, and the mutation throws, then if the data was not corrupt before, it sure is now. Which is exactly the scenario that the lock is supposed to protect against. Now code that is waiting to read that state will immediately be given access to corrupt state, and probably itself crash. Again, the right thing to do is to terminate the process.
No matter how you slice it, an exception inside a lock is bad news. The right question to ask is not "will my lock be cleaned up in the event of an exception?" The right question to ask is "how do I ensure that there is never an exception inside a lock? And if there is, then how do I structure my program so that mutations are rolled back to previous good states?"
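As a hedged sketch of that "terminate rather than carry on" advice, one way to fail fast when an invariant is violated inside a locked region might look like the following (the class, method, and message are illustrative, not from the original answer):
using System;

class SharedStateGuard
{
    private readonly object stateLock = new object();

    public void UpdateState()
    {
        lock (stateLock)
        {
            try
            {
                MutateSharedState();
            }
            catch (Exception ex)
            {
                // The shared state may now be corrupt; releasing the lock and
                // carrying on would hand that corruption to the next waiter.
                // Tear the process down before more data is damaged.
                Environment.FailFast("Invariant violated while mutating shared state.", ex);
            }
        }
    }

    private void MutateSharedState()
    {
        // ... a mutation that may throw halfway through ...
    }
}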
First; have you considered TryParse?
int li;
if (int.TryParse(LclClass.SomeString, out li)) {
    // li is now assigned
} else {
    // input string is dodgy
}
The lock will be released for 2 reasons; first, lock is essentially:
Monitor.Enter(lockObj);
try {
    // ...
} finally {
    Monitor.Exit(lockObj);
}
Second; you catch and don't re-throw the inner exception, so the lock never actually sees an exception. Of course, you are holding the lock for the duration of a MessageBox, which might be a problem.
So it will be released in all but the most fatal catastrophic unrecoverable exceptions.
Yes, that will release properly; lock acts as try/finally, with the Monitor.Exit(myLock) in the finally, so no matter how you exit, it will be released. As a side note, catch (... e) { throw e; } is best avoided, as that damages the stack trace on e; it is better not to catch it at all, or alternatively use throw; rather than throw e;, which does a re-throw.
If you really want to know, a lock in C#4 / .NET 4 is:
{
    bool haveLock = false;
    try {
        Monitor.Enter(myLock, ref haveLock);
        // ... body of the lock ...
    } finally {
        if (haveLock) Monitor.Exit(myLock);
    }
}
"A lock statement is compiled to a call to Monitor.Enter, and then a try…finally block. In the finally block, Monitor.Exit is called.
The JIT code generation for both x86 and x64 ensures that a thread abort cannot occur between a Monitor.Enter call and a try block that immediately follows it."
Taken from:
This site
Just to add a little to Marc's excellent answer.
Situations like this are the very reason for the existence of the lock keyword. It helps developers make sure the lock is released in the finally block.
If you're forced to use Monitor.Enter/Exit e.g. to support a timeout, you must make sure to place the call to Monitor.Exit in the finally block to ensure proper release of the lock in case of an exception.
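A hedged sketch of that Monitor-with-timeout pattern (the names and the five-second timeout are illustrative assumptions):
using System;
using System.Threading;

static class TimedLockExample
{
    private static readonly object Gate = new object();

    public static void DoGuardedWork()
    {
        bool lockTaken = false;
        try
        {
            Monitor.TryEnter(Gate, TimeSpan.FromSeconds(5), ref lockTaken);
            if (!lockTaken)
                throw new TimeoutException("Could not acquire the lock within 5 seconds.");

            // ... work with the protected state here ...
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(Gate);   // released on both normal and exceptional exit
        }
    }
}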
Your lock will be released properly. A lock acts like this:
try {
    Monitor.Enter(myLock);
    // ...
} finally {
    Monitor.Exit(myLock);
}
And finally blocks are guaranteed to execute, no matter how you leave the try block.

One reader, many writers

Related: How to catch exceptions from a ThreadPool.QueueUserWorkItem?
I am catching exceptions in background threads started by ThreadPool.QueueUserWorkItem(), and propagating them to the main thread via a shared instance variable.
The background threads do this:
try
{
    ... stuff happens here...
}
catch (Exception ex1)
{
    lock (eLock)
    {
        // record only the first exception
        if (_pendingException == null)
            _pendingException = ex1;
    }
}
There are multiple potential writers to _pendingException - multiple background threads - so I protect it with a lock.
In the main thread, must I take the lock before reading _pendingException? Or can I simply do this:
if (_pendingException != null)
    ThrowOrHandle();
EDIT:
P.S. I would prefer NOT to take the lock on the reader thread, because it is on the hot path and I'd be taking and releasing the lock very, very often.
You will not be able to get away with it this easily. You will lose exceptions if another thread throws one before the reader has dealt with the existing one. What you need here is a synchronized queue:
try
{
    ... stuff happens here...
}
catch (Exception ex1)
{
    lock (queue)
    {
        queue.Enqueue(ex1);
        Monitor.PulseAll(queue);
    }
}
And to process it:
while (!stopped)
{
    lock (queue)
    {
        while (queue.Count > 0)
            processException(queue.Dequeue());
        Monitor.Wait(queue);
    }
}
Reads and writes of references are atomic (see the C# spec), and I'm nearly certain that lock does create a memory barrier, so yes, what you are doing is probably safe.
But really, just use the lock around your read. It's guaranteed to work; if you ever see the field accessed outside a lock you know something is wrong; if the lock is causing you performance issues then you're checking the flag way too often; and it's just the "right thing to do."
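A minimal sketch of that "just lock the read" advice, using the field and lock object from the question (the grab-and-clear method itself is an illustrative assumption):
using System;

class Coordinator
{
    private readonly object eLock = new object();
    private Exception _pendingException;

    // Called on the main thread: hold the lock only long enough to
    // grab and clear the field, then throw outside the lock.
    public void ThrowPendingExceptionIfAny()
    {
        Exception pending;
        lock (eLock)
        {
            pending = _pendingException;
            _pendingException = null;   // avoid rethrowing the same exception twice
        }
        if (pending != null)
            throw pending;
    }
}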
Even though you may only care about the first exception, you may still want to use lock for at least two reasons:
1. On multi-core CPUs, without making a variable volatile (or performing a memory-barrier operation), threads running on different cores may momentarily see different values. (I was not sure whether calling lock(queue) in a worker thread causes any memory-barrier operation.) Update: calling lock(queue) in a worker thread does cause a memory-barrier operation, as pointed out by Eric in the comment below.
2. Please keep in mind that references are not addresses (by Eric Lippert), if you are assuming references are 32-bit addresses in the 32-bit CLR that can be read atomically. The implementation of references could be changed to some opaque structure that cannot be read atomically in a future release of the CLR (even though I think that is unlikely to happen in the foreseeable future :)), and your code would break.
