During a code review it was suggested that I do
bool acquiredLock = false;
try
{
    Monitor.TryEnter(lockObject, 500, ref acquiredLock);
    if (acquiredLock)
    {
        // do something
    }
    else
    {
        // fallback strategy
    }
}
finally
{
    if (acquiredLock)
    {
        Monitor.Exit(lockObject);
    }
}
instead of the simpler
if (Monitor.TryEnter(lockObject, 500))
{
    try
    {
        // do something...
    }
    finally
    {
        Monitor.Exit(lockObject);
    }
}
else
{
    // fallback strategy
}
What difference does it make? In what situation would the second snippet exhibit a bug that the first one avoids?
Assuming in your second snippet you'd actually call Monitor.Exit, the difference is explained in the documentation:
This overload always sets the value of the variable that is passed to the ref parameter lockTaken, even if the method throws an exception, so the value of the variable is a reliable way to test whether the lock has to be released.
In other words, with your second snippet, it might be feasible for an asynchronous exception to be thrown (e.g. the thread being aborted) after the lock has been acquired, but before the method has returned. Even with a finally block, you couldn't then easily tell whether or not you'd need to release the lock. Using the ref parameter, the "monitor acquired" and "ref parameter set to true" actions are atomic - it's impossible for the variable to have the wrong value when the method exits, however it exits.
As of C# 4, when targeting a platform supporting this overload, this is the code that the C# compiler generates too.
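For reference, the generated code is essentially the following sketch (based on the documented pattern, not on decompiler output):

bool lockTaken = false;
try
{
    // Enter sets lockTaken to true atomically with acquiring the monitor.
    Monitor.Enter(lockObject, ref lockTaken);
    // ... body of the lock statement ...
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(lockObject);
    }
}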
Your first snippet is exiting the monitor when the second is not. You want to release the monitor when you're done with the critical block.
Besides the entirely valid point about exiting that other posters have made: the documentation for the TryEnter method states that it can throw one of three exceptions, so technically, yes, it could bomb your application under specific circumstances.
Even though the method is named TryEnter, it can still throw exceptions, and you need to handle them properly. If it throws an exception and you don't handle it and release the monitor, you can end up with a deadlock situation.
This may be a bit overcautious, because if you look at the exceptions it actually throws, it will most probably throw them before trying to acquire the lock. But that is an implementation detail and you can't be sure about it.
You can probably check how this method is implemented at http://referencesource.microsoft.com/ but, as I said, this is an implementation detail.
I recently implemented a function which logs object contents using JsonConvert.SerializeObject.
To cut a long story short, the idea was that this function would be used in our newly implemented logging mechanism, to track objects when needed, based on system parameterization.
One absolute requirement for the entire logging mechanism was that it should never throw exceptions, because it would be used extensively. Any developer should be able to use it, and under no circumstances should this function cause any interruption in code flow. In case of failure, any call should simply be skipped.
After implementing it and having inspected and handled every exception that I could think of, I decided to wrap the entire functionality in an outer try-catch block, just in case.
Like so:
public static void TrackObject(object obj)
{
    try { Console.WriteLine(JsonConvert.SerializeObject(obj)); }
    catch { Console.WriteLine("Failed to track object."); }
}
After passing all tests with flying colors, I fired up the main application to do some actual environment testing.
To my surprise, due to a NuGet misconfiguration, my function caused an exception (System.IO.FileLoadException) after it was called but before it entered the try-catch block, and so the exception propagated back to the main application, causing havoc in the code flow.
This got me thinking.
There are ways for exceptions to be thrown while calling the function but before a handler kicks in. There are also cases where an exception is simply unacceptable.
My current solution was to create a wrapper function, which simply calls the actual function inside a try-catch. But this looks ugly and wrong, and I am not sure it is a bulletproof solution.
public static void TrackObject(object obj)
{
    try { PrivateTrackObject(obj); }
    catch { Console.WriteLine("Failed to track object."); }
}

private static void PrivateTrackObject(object obj)
{
    Console.WriteLine(JsonConvert.SerializeObject(obj));
}
Is there a way to create a bullet-proof, no-way in hell, exception free method?
Or at least is there a definitive list of exceptions that can occur on a method call?
PS. The compiler warned me about the version mismatch, but I didn’t see it the first time.
PS2. I have created a sample project for anyone who wishes to see this issue in action.
https://drive.google.com/open?id=15BDrLNn87gsMHc9pQ-TgyDMSLQxDBq18
Is there a way to create a bullet-proof, no-way in hell, exception free method?
No. Even if the method is literally empty you can always have a thread abort exception thrown, or a stack overflow exception if there isn't enough space on the stack to call that method, or it could result in an out of memory exception.
is there a definitive list of exceptions that can occur on a method call?
If it's arbitrary code (i.e. from a delegate) then no. It could always be a custom exception of some type that didn't even exist when you wrote your code.
Also note that in your situation you need to be concerned about any exceptions that could be thrown in your catch block, if you just want to handle "normal" exceptions (unlike the ones mentioned above) that happen in your try block. Just logging the exception could fail; in your example of using the console, there could be problems with standard output that result in an exception. If you're really going for "this code never throws", you'd need to try to log the exceptions but have backup logging options for when the primary one isn't working (and if you really can't throw, which, as others have mentioned, is almost certainly a bad idea, then you need to be willing to go on without logging when logging the exception itself fails).
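As a rough sketch of that backup-logging idea (this is not code from the question or the answer, just an illustration), the catch block itself can be guarded so that a failing log call is swallowed rather than allowed to escape:

public static void TrackObject(object obj)
{
    try
    {
        Console.WriteLine(JsonConvert.SerializeObject(obj));
    }
    catch (Exception ex)
    {
        try
        {
            // Primary fallback: report the failure instead of the object.
            Console.WriteLine("Failed to track object: " + ex.Message);
        }
        catch
        {
            // Last resort: swallow the failure rather than let it escape.
            // A secondary sink (file, event log, ...) could be attempted here instead.
        }
    }
}

Even this only mitigates ordinary exceptions; the asynchronous ones mentioned above (thread abort, stack overflow, out of memory) can still surface.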
I was wondering which of the following was the suggested pattern when using Mutex (or Semaphores or ReadWriteLockSlims etc.).
Should the initial lock happen inside or outside of the try statement? Is it unimportant?
_mutex.WaitOne();
try
{
    // critical code
}
finally
{
    _mutex.ReleaseMutex();
}
or
try
{
    _mutex.WaitOne();
    // critical code
}
finally
{
    _mutex.ReleaseMutex();
}
The only way these could be different is if an exception occurred after WaitOne but before the try start in example 1 or after the try start but before WaitOne in example 2. In the first case, the mutex won't be released and in the second case a release might be attempted even though there is no pending wait. The exception would have to be something severe like ThreadAbortException for it to occur in either place. However, if the mutex is contained in a using block, neither would be a problem.
EDIT: after reading Eric's post on this topic that Oliver linked to, I think even with a using block, the situation is not perfect and that simply going with your second version as Oliver suggests is your best option.
Maybe there is a difference. Take a look at these posts from Eric:
Subtleties of C# IL codegen
Locks and exceptions do not mix
In short:
Just imagine an exception happens between the _mutex.WaitOne() and the try statement. You'll leave this chunk of code without calling _mutex.ReleaseMutex().
So take your second piece of code to be sure everything works as expected.
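For completeness, a defensive variant in the spirit of the Monitor.TryEnter pattern from the first question is to record whether the wait actually succeeded, so the finally block never releases a mutex the thread does not own. This is only a sketch; the timeout value is arbitrary:

bool acquired = false;
try
{
    // Record whether the wait actually succeeded so the finally block
    // never releases a mutex this thread does not own.
    acquired = _mutex.WaitOne(TimeSpan.FromSeconds(5));
    if (acquired)
    {
        // critical code
    }
    else
    {
        // timed out: fall back or report
    }
}
finally
{
    if (acquired)
    {
        _mutex.ReleaseMutex();
    }
}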
If you're not using the mutex for cross-process synchronization (see the answer to this question: C# - Locking issues with Mutex), then this will be safer:
private static object _syncLock = new object();

public void RunCriticalCode()
{
    lock (_syncLock)
    {
        // critical code
    }
}
In writing some threaded code, I've been using the ReaderWriterLockSlim class to handle synchronized access to variables. Doing this, I noticed I was always writing try-finally blocks, the same for each method and property.
Seeing an opportunity to avoid repeating myself and to encapsulate this behaviour, I built a class, ReaderWriterLockSection, intended to be used as a thin wrapper around the lock which can be used with the C# using block syntax.
The class is mostly as follows:
public enum ReaderWriterLockType
{
    Read,
    UpgradeableRead,
    Write
}

public class ReaderWriterLockSection : IDisposable
{
    public ReaderWriterLockSection(
        ReaderWriterLockSlim rwLock,
        ReaderWriterLockType lockType)
    {
        // Enter lock.
    }

    public void UpgradeToWriteLock()
    {
        // Check lock can be upgraded.
        // Enter write lock.
    }

    public void Dispose()
    {
        // Exit lock.
    }
}
I use the section as follows:
private ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

public void Foo()
{
    using (new ReaderWriterLockSection(_lock, ReaderWriterLockType.Read))
    {
        // Do some reads.
    }
}
To me, this seems like a good idea, one that makes my code easier to read and seemingly more robust, since I won't ever forget to release a lock.
Can anybody see an issue with this approach? Is there any reason this is a bad idea?
Well, it seems okay to me. Eric Lippert has previously written about the dangers of using Dispose for "non-resource" scenarios, but I think this would count as a resource.
It may make life tricky in upgrade scenarios, but you could always fall back to a more manual bit of code at that point.
Another alternative is to write a single lock acquire/use/release method and provide the action to take while holding the lock as a delegate.
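A minimal sketch of that delegate-based alternative (the helper name here is made up, not something from the answer):

public static class RwLockHelper
{
    // Runs the supplied action while holding the read lock, so the
    // try/finally lives in exactly one place.
    public static void WithReadLock(ReaderWriterLockSlim rwLock, Action action)
    {
        rwLock.EnterReadLock();
        try
        {
            action();
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }
}

public void Foo()
{
    RwLockHelper.WithReadLock(_lock, () =>
    {
        // Do some reads.
    });
}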
I usually indulge in this kind of code-sugary confection!
Here's a variant that's a bit easier to read for the users, built on top of your API:
public static class ReaderWriterLockExt
{
    public static IDisposable ForRead(this ReaderWriterLockSlim rwLock)
    {
        return new ReaderWriterLockSection(rwLock, ReaderWriterLockType.Read);
    }

    public static IDisposable ForWrite(this ReaderWriterLockSlim rwLock)
    {
        return new ReaderWriterLockSection(rwLock, ReaderWriterLockType.Write);
    }

    public static IDisposable ForUpgradeableRead(this ReaderWriterLockSlim rwLock)
    {
        return new ReaderWriterLockSection(rwLock, ReaderWriterLockType.UpgradeableRead);
    }
}
public static class Foo
{
    private static readonly ReaderWriterLockSlim l = new ReaderWriterLockSlim(); // our lock

    public static object Demo()
    {
        using (l.ForUpgradeableRead()) // we might need to write..
        {
            if (CacheExpires()) // checks the scenario where we need to write
            {
                using (l.ForWrite()) // will request the write permission
                {
                    RefreshCache();
                } // relinquish the upgraded write
            }
            // back into read mode
            return CachedValue();
        } // release the read
    }
}
I also recommend using a variant that takes an Action delegate that's invoked when the lock cannot be obtained for 10 seconds, which I'll leave as an exercise to the reader.
You might also want to check for a null RWL in the static extension methods, and make sure the lock exists when you dispose it.
Cheers,
Florian
There is another consideration here: you are possibly solving a problem you should not solve. I can't see the rest of your code, but I can guess from the fact that you see value in this pattern.
Your approach solves a problem only if the code that reads or writes the shared resource throws an exception. Implicit is that you don't handle the exception in the method that does the reading/writing. If you did, you could simply release the lock in the exception handling code. If you don't handle the exception at all, the thread will die from the unhandled exception and your program will terminate. No point in releasing a lock then.
So there's a catch clause somewhere lower in the call stack that catches the exception and handles it. This code has to restore the state of the program so that it can meaningfully continue without generating bad data or otherwise dying from exceptions caused by the altered state. That code has a difficult job to do: it needs to somehow guess how much data was read or written without having any context. Getting it wrong, or only partly right, is potentially very destabilizing to the entire program. After all, it is a shared resource; other threads are reading from and writing to it.
If you know how to do this, then by all means use this pattern. You'd better test the heck out of it, though. If you're not sure, then avoid wasting system resources on a problem you can't reliably fix.
One thing I'd suggest when wrapping a lock to facilitate the "using" pattern is to include a "danger-state" field in the lock; before allowing any code to enter the lock, the code should check the danger state. If the danger state is set, and the code which is trying to enter the lock hasn't passed a special parameter saying it's expecting that it might be, the attempt to acquire the lock should throw an exception. Code which is going to temporarily put the guarded resource into a bad state should set the danger state flag, do what needs to be done, and then reset the danger state flag once the operation is complete and the object is in a safe state.
If an exception occurs while the danger state flag is set, the lock should be released but the danger state flag should remain set. This will ensure that code which wants to access the resource will find out that the resource is corrupted, rather than waiting forever for the lock to be released (which would be the outcome if there were no "using" or "try-finally" block).
If the lock being wrapped is a ReaderWriterLock, it may be convenient to have the acquisition of a "writer" lock automatically set the danger state; unfortunately, there's no way for an IDisposable used by a using block to determine whether the block is being exited cleanly or via exception. Consequently, I don't know any way to use something syntactically like a 'using' block to guard the "danger state" flag.
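To make that more concrete, here is a bare-bones sketch of the danger-state idea; every name in it is illustrative, and clearing the flag is deliberately left to the code that knows the resource is consistent again:

public class GuardedLock
{
    private readonly object _sync = new object();
    private volatile bool _danger;

    public IDisposable Acquire(bool expectDanger = false)
    {
        Monitor.Enter(_sync);
        if (_danger && !expectDanger)
        {
            // Resource was left mid-mutation by a previous failure; fail fast.
            Monitor.Exit(_sync);
            throw new InvalidOperationException("Guarded resource may be corrupted.");
        }
        return new Releaser(this);
    }

    // Set before a mutation that could leave the resource inconsistent,
    // and clear only once the resource is known to be in a safe state again.
    public void EnterDangerState() { _danger = true; }
    public void ExitDangerState() { _danger = false; }

    private sealed class Releaser : IDisposable
    {
        private readonly GuardedLock _owner;
        public Releaser(GuardedLock owner) { _owner = owner; }

        // Deliberately does not clear _danger: if an exception unwound the
        // using block mid-mutation, later callers see the flag and fail fast.
        public void Dispose() { Monitor.Exit(_owner._sync); }
    }
}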
I suspect this is a very dumb question: what is the correct syntax for an interruptible lock statement in C#? E.g. get lock; if lock is interrupted before locked code finishes, return false out of the enclosing method. Probably totally the wrong terminology... Thanks.
You can have a timeout while acquiring a lock using Monitor.TryEnter; and likewise, within a lock you can do things like Monitor.Wait/Monitor.Pulse to temporarily yield the lock, but you can't be interrupted as such.
The main time interrupt applies might be in Thread.Sleep, which can be interrupted with Thread.Interrupt - but again, this won't yank control out of an executing method block.
What exactly is it that you are trying to achieve? With more context we can probably help more...
What you mean by "interrupted" is unclear.
Interruption by Exception
private bool SomeLockingMethod(object foo)
{
    // Verify foo is valid
    try
    {
        lock (foo)
        {
            while (something)
            {
                // Do stuff
                Thread.Sleep(1); // Possibly yield to another
                                 // thread calling Thread.Interrupt
            }
        }
        return true;
    }
    catch (ThreadInterruptedException ex)
    {
        // Handle exception
    }
    return false;
}
If the return true isn't reached, then something happened while the lock on foo was held, and the code returns false. The lock is automatically released, either way.
Another thread can interrupt this one by calling Thread.Interrupt.
"Interruption" from code
If you're the one "interrupting" the code, it could be as simple as
private bool SomeLockingMethod(object foo)
{
    // Verify foo is valid
    lock (foo)
    {
        // Do stuff
        if (shouldInterrupt)
        {
            return false;
        }
        // Do more stuff
    }
    return true;
}
Again, the lock is automatically released, whether or not there is an "interruption".
Interruption because someone else is trying to acquire the lock
Possibly this is what you're looking for; in this case you may want to use something else, like a Semaphore or ManualResetEvent.
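One cooperative sketch of that idea, where a would-be acquirer signals the current holder through an event and the holder checks the signal between chunks of work (all names here are illustrative):

private static readonly object _gate = new object();
private static readonly ManualResetEventSlim _someoneWaiting = new ManualResetEventSlim(false);

private static bool TryDoLongWork()
{
    lock (_gate)
    {
        for (int i = 0; i < 100; i++)
        {
            if (_someoneWaiting.IsSet)
            {
                return false; // bail out early; the lock is released on return
            }
            // do one small chunk of work here
        }
    }
    return true;
}

private static void DoUrgentWork()
{
    _someoneWaiting.Set(); // ask the current holder to wrap up
    lock (_gate)
    {
        _someoneWaiting.Reset();
        // do the urgent work
    }
}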
I'm not sure what you're trying to get at here. The purpose of the lock statement is that you should not get interrupted so you can ensure consistent behavior.
What are you trying to accomplish here?
You might also have a look at TransactionScope, added in .NET 2.0, which may be what you're looking for (unknown, due to the ambiguity in your question). It allows you to attempt some actions, then roll back if those actions were not completed properly.
See here for more details.
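For what it's worth, a minimal TransactionScope sketch looks like this (TransactionScope lives in the System.Transactions assembly); work enlisted inside the scope only commits if Complete() is reached, otherwise it rolls back when the scope is disposed:

using (var scope = new TransactionScope())
{
    // transactional work (e.g. database commands enlisted in the ambient transaction)

    scope.Complete(); // skipped if an exception is thrown above, so the work rolls back
}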
A common pattern in C++ is to create a class that wraps a lock - the lock is either implicitly taken when object is created, or taken explicitly afterwards. When object goes out of scope, dtor automatically releases the lock.
Is it possible to do this in C#? As far as I understand there are no guarantees on when dtor in C# will run after object goes out of scope.
Clarification:
Any lock in general, spinlock, ReaderWriterLock, whatever.
Calling Dispose myself defeats the purpose of the pattern - to have the lock released as soon as we exit the scope - no matter whether we returned in the middle, threw an exception, or whatnot.
Also, as far as I understand using will still only queue object for GC, not destroy it immediately...
To amplify Timothy's answer, the lock statement does create a scoped lock using a monitor. Essentially, this translates into something like this:
lock (_lockKey)
{
    // Code under lock
}

// is equivalent to this
Monitor.Enter(_lockKey);
try
{
    // Code under lock
}
finally
{
    Monitor.Exit(_lockKey);
}
In C# you rarely use the dtor for this kind of pattern (see the using statement/IDisposable). One thing you may notice in the code is that if an async exception happens between the Monitor.Enter and the try, it looks like the monitor will not be released. The JIT actually makes a special guarantee that if a Monitor.Enter immediately precedes a try block, the async exception will not happen until the try block is entered, thus ensuring the release.
Your understanding regarding using is incorrect; it is a way to have scoped actions happen in a deterministic fashion (no queuing to the GC takes place).
C# supplies the lock keyword, which provides an exclusive lock; if you want to have different types (e.g. Read/Write) you'll have to use the using statement.
P.S. This thread may interest you.
It's true that you don't know exactly when the dtor is going to run... but, if you implement the IDisposable interface, and then use either a 'using' block or call 'Dispose()' yourself, you will have a place to put your code.
Question: When you say "lock", do you mean a thread lock so that only one thread at a time can use the object? As in:
lock (_myLockKey) { ... }
Please clarify.
For completeness there is another way to achieve a similar RAII effect without using using and IDisposable. In C# using is usually clearer (see also here for some more thoughts), but in other languages (e.g. Java), or even in C# if using is not appropriate for some reason, it's useful to know.
It's an idiom called "Execute Around", and the idea is that you call a method that does the pre and post stuff (e.g. locking/unlocking your threads, or setting up and committing/closing your DB connection etc.), and you pass into that method a delegate that will implement the operations you want to occur in between.
e.g.:
funkyObj.InOut(delegate { System.Console.WriteLine("middle bit"); });
Depending on what the InOut method does, the output might be something like:
first bit
middle bit
last bit
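For illustration only, one possible shape of that InOut method (the answer doesn't show its implementation, so this is purely hypothetical):

public class FunkyObj
{
    public void InOut(Action middle)
    {
        Console.WriteLine("first bit");    // e.g. acquire a lock or open a connection
        try
        {
            middle();                      // the caller-supplied work
        }
        finally
        {
            Console.WriteLine("last bit"); // e.g. release the lock or close the connection
        }
    }
}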
As I say, this answer is for completeness only; the previous suggestions of using with IDisposable, as well as the lock keyword, are going to be better 99% of the time.
It's a shame that, while .NET has gone further than many other modern OO languages in this regard (I'm looking at you, Java), it still places the responsibility for RAII to work on the client code (i.e. the code that uses using), whereas in C++ the destructor will always run at the end of the scope.
Why would you want a scoped lock in the first place? Suppose you have the following code:
lock (obj)
{
    // ... some logic goes here
}
If an exception happens inside the try that is inserted in place of the lock, it often means that you now have corrupted state, and other threads will continue to work with that corrupted state. It is better to let the program hang to signal the problem.
Another problem is that try incurs some performance penalty, but this is usually a much smaller problem, if it matters at all.
Jeffrey Richter specifically advises not to use the lock statement.
I've been really bothered by the fact that using is up to the developer to remember to do - at best you get a warning, which most people never bother to promote to an error. So, I've been toying with an idea like this - it forces the client to at least TRY to do things correctly. Fortunately and unfortunately, it's a closure, so the client could still keep a copy of the resource, and try to use it again later - but this code at least tries to push the client in the right direction...
public class MyLockedResource : IDisposable
{
    private MyLockedResource()
    {
        Console.WriteLine("initialize");
    }

    public void Dispose()
    {
        Console.WriteLine("dispose");
    }

    public delegate void RAII(MyLockedResource resource);

    public static void Use(RAII raii)
    {
        using (MyLockedResource resource = new MyLockedResource())
        {
            raii(resource);
        }
    }

    public void test()
    {
        Console.WriteLine("test");
    }
}
Good usage:
MyLockedResource.Use(delegate(MyLockedResource resource)
{
    resource.test();
});
Bad usage! (Unfortunately, this can't be prevented...)
MyLockedResource res = null;
MyLockedResource.Use(delegate(MyLockedResource resource)
{
    resource.test();
    res = resource;
    res.test();
});
res.test();