A common pattern in C++ is to create a class that wraps a lock - the lock is either implicitly taken when the object is created, or taken explicitly afterwards. When the object goes out of scope, the dtor automatically releases the lock.
Is it possible to do this in C#? As far as I understand, there are no guarantees on when the dtor in C# will run after the object goes out of scope.
Clarification:
Any lock in general, spinlock, ReaderWriterLock, whatever.
Calling Dispose myself defeats the purpose of the pattern - to have the lock released as soon as we exit the scope, no matter whether we returned in the middle, threw an exception, or whatnot.
Also, as far as I understand, using will still only queue the object for GC, not destroy it immediately...
To amplify Timothy's answer, the lock statement does create a scoped lock using a monitor. Essentially, it translates into this:
lock(_lockKey)
{
// Code under lock
}
// is equivalent to this
Monitor.Enter(_lockKey);
try
{
// Code under lock
}
finally
{
Monitor.Exit(_lockKey);
}
In C# you rarely use the dtor for this kind of pattern (see the using statement/IDisposable). One thing you may notice in the code is that if an asynchronous exception happened between the Monitor.Enter and the try, it looks like the monitor would not be released. The JIT actually makes a special guarantee: if a Monitor.Enter call immediately precedes a try block, the asynchronous exception will not happen until the try block is entered, thus ensuring the release.
Your understanding regarding using is incorrect: it is a way to have scoped actions happen in a deterministic fashion (no queuing to the GC takes place).
C# supplies the lock keyword, which provides an exclusive lock; if you want to have different lock types (e.g. read/write), you'll have to use the using statement, along the lines of the sketch below.
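For instance, here is a minimal sketch of a disposable read-lock wrapper over ReaderWriterLockSlim (the wrapper type and its names are my own illustration, not a framework class):

using System;
using System.Threading;

public sealed class ReadLockScope : IDisposable
{
    private readonly ReaderWriterLockSlim _rwLock;

    public ReadLockScope(ReaderWriterLockSlim rwLock)
    {
        _rwLock = rwLock;
        _rwLock.EnterReadLock(); // taken on construction
    }

    public void Dispose()
    {
        _rwLock.ExitReadLock(); // released when the using block exits
    }
}

// usage: using (new ReadLockScope(_rwLock)) { /* reads */ }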
P.S. This thread may interest you.
It's true that you don't know exactly when the dtor is going to run... but if you implement the IDisposable interface and then use either a using block or call Dispose() yourself, you will have a place to put your code.
Question: When you say "lock", do you mean a thread lock so that only one thread at a time can use the object? As in:
lock (_myLockKey) { ... }
Please clarify.
For completeness, there is another way to achieve a similar RAII effect without using using and IDisposable. In C# using is usually clearer (see also here for some more thoughts), but in other languages (e.g. Java), or even in C# if using is not appropriate for some reason, it's useful to know.
It's an idiom called "Execute Around". The idea is that you call a method that does the pre and post work (e.g. locking/unlocking your threads, or setting up and committing/closing your DB connection, etc.), and you pass into that method a delegate that implements the operations you want to occur in between.
e.g.:
funkyObj.InOut( delegate{ System.Console.WriteLine( "middle bit" ); } );
Depending on what the InOut method does, the output might be something like:
first bit
middle bit
last bit
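For illustration, a minimal sketch of what such an InOut method might look like (FunkyObj and the "bit" strings are just the hypothetical names from this example):

using System;

public class FunkyObj
{
    public void InOut(Action middle)
    {
        Console.WriteLine("first bit"); // setup, e.g. acquire a lock or open a connection
        try
        {
            middle(); // the caller-supplied operation
        }
        finally
        {
            Console.WriteLine("last bit"); // teardown runs even if the middle bit throws
        }
    }
}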
As I say, this answer is for completeness only - the previous suggestions of using with IDisposable, as well as the lock keyword, are going to be better 99% of the time.
It's a shame that, while .NET has gone further than many other modern OO languages in this regard (I'm looking at you, Java), it still places the responsibility for RAII on the client code (i.e. the code that uses using), whereas in C++ the destructor will always run at the end of the scope.
Why would you want a scoped lock in the first place? Suppose you have the following code:
lock(obj) {
... some logic goes here
}
If an exception happens inside the try block that lock expands to, it often means that you now have corrupted state, and other threads will continue to work with that corrupted state. It is better to let the program hang to signal the problem.
Another problem is that the try incurs some performance penalty, but this is usually a much smaller problem, if it is one at all.
Jeffrey Richter specifically advises not to use the lock statement.
I've been really bothered by the fact that using is up to the developer to remember - at best you get a warning, which most people never bother to promote to an error. So I've been toying with an idea like this: it forces the client to at least TRY to do things correctly. Fortunately and unfortunately, it's a closure, so the client could still keep a copy of the resource and try to use it again later - but this code at least tries to push the client in the right direction...
public class MyLockedResource : IDisposable
{
// private ctor: instances can only be obtained through Use()
private MyLockedResource()
{
Console.WriteLine("initialize");
}
public void Dispose()
{
Console.WriteLine("dispose");
}
public delegate void RAII(MyLockedResource resource);
static public void Use(RAII raii)
{
using (MyLockedResource resource = new MyLockedResource())
{
raii(resource);
}
}
public void test()
{
Console.WriteLine("test");
}
}
Good usage:
MyLockedResource.Use(delegate(MyLockedResource resource)
{
resource.test();
});
Bad usage! (Unfortunately, this can't be prevented...)
MyLockedResource res = null;
MyLockedResource.Use(delegate(MyLockedResource resource)
{
resource.test();
res = resource;
res.test();
});
res.test();
For convenience and safety reasons I'd like to use the using statement for allocation and release of objects from/to a pool:
public class Resource : IDisposable
{
public void Dispose()
{
ResourcePool.ReleaseResource(this);
}
}
public class ResourcePool
{
static Stack<Resource> pool = new Stack<Resource>();
public static Resource GetResource()
{
return pool.Pop();
}
public static void ReleaseResource(Resource r)
{
pool.Push(r);
}
}
and then access the pool like this:
using (Resource r = ResourcePool.GetResource())
{
r.DoSomething();
}
I found some topics on abusing using and Dispose() for scope handling, but all of them involve using (Blah b = new Blah()).
Here the objects are not freed after leaving the using scope but are kept in the pool.
If the using statement simply expands to a plain try/finally with Dispose(), this should work fine - but is there something more happening behind the scenes, or a chance this won't work in future .NET versions?
This is not an abuse at all - that is a common scope-handling idiom of C#. For example, ADO.NET objects (connections, statements, query results) are commonly enclosed in using blocks, even though some of these objects get released back to their pools inside their Dispose methods:
using (var conn = new SqlConnection(dbConnectionString)) {
// conn is visible inside this scope
...
} // conn gets released back to its connection pool
It is a valid way to use IDisposable.
In fact, this is how connection pooling is also done in .NET - wrapping a DbConnection object in a using statement to ensure the connection closes and is returned to the connection pool.
TransactionScope is another example for a class that uses the Dispose pattern to rollback un-completed transactions:
A call to the Dispose method marks the end of the transaction scope.
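The usual shape is a sketch like this (Complete() marks success; if an exception skips it, Dispose rolls the transaction back):

using System.Transactions;

using (var scope = new TransactionScope())
{
    // do transactional work here

    scope.Complete(); // commit; skipping this line causes a rollback on Dispose
}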
If the using statement simply expands to a plain try/finally with Dispose(), this should work fine - but is there something more happening behind the scenes, or a chance this won't work in future .NET versions?
It does. Your code should work fine, and is guaranteed by the spec to continue working the same way. In fact, this is fairly common (look at connection pooling in SQL for a good example).
The main problem with your code, as written, is that ReleaseResource is part of the public API, so callers could also invoke it explicitly inside a using block, causing the same resource to be pushed onto the pool more than once.
This looks like an abuse of IDisposable and a poor design decision to me. First, it forces the objects that are stored in the pool to know about the pool. That's akin to creating a List type that forces the objects in it to implement a particular interface or derive from some particular class - like a LinkedList class that forces data items to include Next and Previous pointers for the list's own use.
In addition, you have the pool allocate the resource for you, but then the resource has a call to put itself back into the pool. That seems ... odd.
I think a better alternative would be:
var resource = ResourcePool.GetResource();
try
{
// use the resource here
}
finally
{
ResourcePool.FreeResource(resource);
}
It's a little bit more code (try/finally rather than using), but a cleaner design. It relieves the contained objects from having to know about the container, and it more clearly shows that the pool is managing the object.
Your understanding of the using statement is correct (try, finally, Dispose). I don't foresee this changing anytime soon. So many things would break if it did.
There's not necessarily anything wrong with what you're planning. I've seen this sort of thing before, where Dispose doesn't actually shut down the object, but rather puts it into some sort of state that isn't "fully operational."
If you are at all concerned about this, you can implement this in such a way that it adheres to the usual Dispose implementation pattern. Simply have a wrapper class that implements IDisposable and exposes all the methods of the underlying class. When the wrapper is disposed, put the underlying object into the pool, not the wrapper. Then you can consider the wrapper shut down, although the thing that it wrapped is not.
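A minimal sketch of that wrapper idea, reusing the Resource and ResourcePool types from the question (the wrapper class itself is my own illustration; it assumes the pool exposes ReleaseResource as shown above):

using System;

public sealed class PooledResource : IDisposable
{
    private Resource _inner; // the pooled object being wrapped

    public PooledResource(Resource inner) { _inner = inner; }

    public void DoSomething()
    {
        if (_inner == null) throw new ObjectDisposedException("PooledResource");
        _inner.DoSomething(); // forward calls to the underlying object
    }

    public void Dispose()
    {
        if (_inner != null)
        {
            ResourcePool.ReleaseResource(_inner); // return the inner object to the pool
            _inner = null; // the wrapper is now "shut down"; the inner object is not
        }
    }
}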
What you are doing is like C++ and RAII. In C#, it is about as close to the C++/RAII idiom as you can get.
Eric Lippert, who knows a thing or two about C#, is adamantly against using IDisposable and the using statement as a C# RAII idiom. See his in-depth response here: Is it abusive to use IDisposable and "using" as a means for getting "scoped behavior" for exception safety?.
Part of the problem with IDisposable used in this RAII fashion is that IDisposable has very stringent requirements to be used correctly. Almost all the C# code I've seen that uses IDisposable fails to implement the pattern correctly. Joe Duffy has made a blog post that goes into meticulous detail about the proper way to implement the IDisposable pattern http://joeduffyblog.com/2005/04/08/dg-update-dispose-finalization-and-resource-management/. Joe's information is much more detailed and extensive than what is mentioned on MSDN. Joe also knows a-thing-or-two about C#, and there were a lot of very smart contributors that helped flesh out that document.
Simple things can be done to implement the bare-bones, minimal IDisposable pattern (for uses such as RAII): seal the class, and, since there are no unmanaged resources, don't provide a finalizer. MSDN https://msdn.microsoft.com/en-us/library/system.objectdisposedexception%28v=vs.110%29.aspx gives a nice overview, but Joe's information has all the gory details.
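A sketch of that bare-bones shape (the type name is illustrative): sealed, no finalizer, idempotent Dispose:

using System;

public sealed class ScopedGuard : IDisposable
{
    private bool _disposed;

    public ScopedGuard()
    {
        // acquire whatever the scope guards (e.g. enter a lock)
    }

    public void Dispose()
    {
        if (_disposed) return; // make Dispose safe to call twice
        _disposed = true;
        // release whatever the scope guards
        // no finalizer and no GC.SuppressFinalize: nothing unmanaged is held
    }
}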
But the one thing that you cannot get away from with IDisposable is its "viral" nature. Classes that hold onto a member which is IDisposable themselves ought to become IDisposable... not a problem in the using(RAII raii = Pool.GetRAII()) scenario, but something to be very mindful about.
All that being said, despite Eric's position (of which I tend to agree with him on most everything else), and Joe's 50 page essay of how to properly implement the IDisposable pattern... I do use it as the C#/RAII idiom myself.
Now if only C# had 1) non-nullable references (like C++ or D or Spec#), 2) deeply immutable data types (like D, or even F# [you can do the F# kind of immutability in C#, but it is a LOT of boilerplate and just too hard to get right... it makes the easy hard and the hard impossible]), and 3) design-by-contract as part of the language proper (like D or Eiffel or Spec#, not like the C# Code Contracts abomination). Sigh. Maybe C# 7.
Recently I've seen some code written as follows:
public void Dispose()
{
using(_myDisposableField) { }
}
This seems pretty strange to me; I'd prefer to see _myDisposableField.Dispose();
What reasons are there for using "using" to dispose of your objects over explicitly making the call?
No, none at all. It will just compile into an empty try/finally and end up calling Dispose.
Remove it. You'll make the code faster, more readable, and (perhaps most importantly, as you continue reading below) more expressive in its intent.
Update: they were being slightly clever - equivalent code needs a null check and, as per Jon Skeet's advice, should also take a local copy if multi-threading is involved (in the same manner as the standard event-invocation pattern, to avoid a race between the null check and the method call):
IDisposable tmp = _myDisposableField;
if (tmp != null)
tmp.Dispose();
From what I can see in the IL of a sample app I've written, it looks like you also need to treat _myDisposableField as IDisposable directly. This will be important if any type implements the IDisposable interface explicitly and also provides a public void Dispose() method at the same time.
This code also doesn't attempt to replicate the try/finally that you get when using using, but it is sort of assumed that this is deemed unnecessary. As Michael Graczyk points out in the comments, however, the use of the finally offers protection against exceptions, in particular the ThreadAbortException (which could occur at any point). That said, the window for this actually to happen is very small.
Although I'd stake a fiver that they did this without truly understanding what subtle "benefits" it gave them.
There is a very subtle but evil bug in the example you posted.
While it "compiles" down to:
try {}
finally
{
if (_myDisposableField != null)
((IDisposable)_myDisposableField).Dispose();
}
objects should be instantiated within the using clause and not outside:
You can instantiate the resource object and then pass the variable to the using statement, but this isn't a best practice. In this case, after control leaves the using block, the object remains in scope but probably has no access to its unmanaged resources. In other words, it's not fully initialized anymore. If you try to use the object outside the using block, you risk causing an exception to be thrown. For this reason, it's better to instantiate the object in the using statement and limit its scope to the using block.
—using statement (C# Reference)
In other words, it's dirty and hacky.
The clean version is extremely clearly spelled out on MSDN:
If you can limit the use of an instance to a method, then use a using block with the constructor call on its border. Do not call Dispose directly.
If you need (but really need) to keep an instance alive until the parent is disposed, then dispose explicitly using the Disposable pattern and nothing else. There are different ways of implementing a dispose cascade; however, they all need to be done similarly to avoid very subtle and hard-to-catch bugs. There's a very good resource on MSDN in the Framework Design Guidelines.
Finally, please note: you should only use the IDisposable pattern if you use unmanaged resources. Make sure it's really needed :-)
As already discussed in this answer, this is a cheeky way of avoiding a null test, but there can be more to it than that. In modern C#, in many cases you could achieve something similar with a null-conditional operator:
public void Dispose()
=> _myDisposableField?.Dispose();
However, it is not required that the type (of _myDisposableField) has Dispose() on the public API; it could be:
public class Foo : IDisposable {
void IDisposable.Dispose() {...}
}
or even worse:
public class Bar : IDisposable {
void IDisposable.Dispose() {...}
public void Dispose() {...} // some completely different meaning! DO NOT DO THIS!
}
In the first case, a plain _myDisposableField.Dispose() call will fail to find the method, and in the second case it will invoke the wrong one. In either of these cases, the using trick will work, as will a cast (although the cast will do slightly different things again if it is a value type):
public void Dispose()
=> ((IDisposable)_myDisposableField)?.Dispose();
If you aren't sure whether the type is disposable (which happens in some polymorphism scenarios), you could also use either:
public void Dispose()
=> (_myDisposableField as IDisposable)?.Dispose();
or:
public void Dispose()
{
using (_myDisposableField as IDisposable) {}
}
The using statement defines the span of code at the end of which the referenced object should be disposed.
Yes, you could just call .Dispose() once you were done with it, but it would be less clear (IMHO) what the scope of the object was. YMMV.
It seems that in most cases the C# compiler could call Dispose() automatically. Most uses of the using pattern look like this:
public void SomeMethod()
{
...
using (var foo = new Foo())
{
...
}
// foo isn't used after here (obviously).
...
}
Since foo isn't used afterwards (that's a very simple detection) and since it's not provided as an argument to another method (that's a supposition that applies to many use cases and could be extended), the compiler could automatically and immediately call Dispose() without the developer needing to do it.
This means that in most cases the using is pretty useless if the compiler does some smart work. IDisposable seems low-level enough to me to be taken into account by a compiler.
Now why isn't this done? Wouldn't that improve performance (if the developers are... dirty)?
A couple of points:
Calling Dispose does not increase performance. IDisposable is designed for scenarios where you are using limited and/or unmanaged resources that cannot be accounted for by the runtime.
There is no clear and obvious mechanism by which the compiler could treat IDisposable objects in the code. What makes an instance a candidate for being disposed of automatically, and what doesn't? If the instance is (or could be) exposed outside of the method? There's nothing to say that just because I pass an object to another function or class, I want it to be usable beyond the scope of the method.
Consider, for example, a factory pattern that takes a Stream and deserializes an instance of a class.
public class Foo
{
public static Foo FromStream(System.IO.Stream stream) { ... }
}
And I call it:
Stream stream = new FileStream(path, FileMode.Open);
Foo foo = Foo.FromStream(stream);
Now, I may or may not want that Stream to be disposed of when the method exits. If Foo's factory reads all of the necessary data from the Stream and no longer needs it, then I would want it to be disposed of. If the Foo object has to hold on to the stream and use it over its lifetime, then I wouldn't want it to be disposed of.
Likewise, what about instances that are retrieved from something other than a constructor, like Control.CreateGraphics(). These instances could exist outside of the code, so the compiler wouldn't dispose of them automatically.
Giving the user control (and providing an idiom like the using block) makes the user's intention clear and makes it much easier to spot places where IDisposable instances are not being properly disposed of. If the compiler were to automatically dispose of some instances, then debugging would be that much more difficult as the developer had to decipher how the automatic disposal rules applied to each and every block of code that used an IDisposable object.
In the end, there are two reasons (by convention) for implementing IDisposable on a type.
You are using an unmanaged resource (meaning you're making a P/Invoke call that returns something like a handle that must be released by a different P/Invoke call)
Your type has instances of IDisposable that should be disposed of when this object's lifetime is over.
In the first case, all such types are supposed to implement a finalizer that calls Dispose and releases all unmanaged resources if the developer fails to do so (this is to prevent memory and handle leaks).
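For reference, a sketch of the canonical shape for that first case (the handle field and its release call are placeholders for whatever P/Invoke pair is actually in play):

using System;

public class NativeThing : IDisposable
{
    private IntPtr _handle; // obtained from some P/Invoke call

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // the safety net below is no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_handle != IntPtr.Zero)
        {
            // release _handle via the matching P/Invoke call here
            _handle = IntPtr.Zero;
        }
        // if (disposing) { also dispose any managed IDisposable members here }
    }

    ~NativeThing() // finalizer: runs only if the developer forgot to Dispose
    {
        Dispose(false);
    }
}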
Garbage collection (which, while not directly related to IDisposable, is what cleans up unused objects) isn't that simple.
Let me re-word that a little: automatically calling Dispose() isn't that simple. It also won't directly increase performance. More on that a little later.
If you had the following code:
public void DoSomeWork(SqlCommand command)
{
SqlConnection conn = new SqlConnection(connString);
conn.Open();
command.Connection = conn;
// Rest of the work here
}
How would the compiler know when you were done using the conn object? Or if you passed a reference to some other method that was holding on to it?
Explicitly calling Dispose() or using a using block clearly states your intent and forces things to get cleaned up properly.
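For contrast, here is the same method with the intent made explicit (a sketch, assuming the connection is only needed within this method):

public void DoSomeWork(SqlCommand command)
{
    using (SqlConnection conn = new SqlConnection(connString))
    {
        conn.Open();
        command.Connection = conn;
        // rest of the work here
    } // conn is disposed here, deterministically, even if an exception is thrown
}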
Now, back to performance. Simply calling Dispose() on an object doesn't guarantee any performance increase. The Dispose() method is used for cleaning up resources when you're done with an object.
The performance increase can come when using unmanaged resources. If a managed object doesn't properly dispose of its unmanaged resources, you have a memory leak. Ugly stuff.
Leaving the determination of when to call Dispose() up to the compiler would take away that level of clarity and make debugging leaks caused by unmanaged resources that much more difficult.
You're asking the compiler to perform a semantic analysis of your code. The fact that something isn't explicitly referenced after a certain point in the source does not mean that it isn't being used. If I create a chain of references and pass one out to a method, which may or may not store that reference in a property or some other persistent container, should I really expect the compiler to trace through all of that and figure out what I really meant?
Volatile entities may also be a concern.
Besides, using() {....} is more readable and intuitive, which is worth a lot in terms of maintainability.
As engineers or programmers, we strive to be efficient, but that is rarely the same thing as lazy.
Look at the MSDN article for the C# using statement. The using statement is just a shortcut to keep from writing a try/finally in a lot of places. Calling Dispose is not low-level functionality like garbage collection.
As you can see, the article's Font example translates into:
{
Font font1 = new Font("Arial", 10.0f);
try
{
byte charset = font1.GdiCharSet;
}
finally
{
if (font1 != null)
((IDisposable)font1).Dispose();
}
}
How would the compiler know where to put the finally block? Would it call Dispose on garbage collection?
Garbage collection doesn't happen as soon as you leave a method - read this article on garbage collection to understand it better. It happens only once there are no references to the object, so a resource could be tied up for much longer than needed.
The thought that keeps popping into my head is that the compiler should not protect developers who do not clean up their resources. Just because a language is managed doesn't mean it is going to protect you from yourself.
C++/CLI supports this; it's called "stack semantics for reference types". I support adding this to C#, but it would require different syntax (changing the semantics based on whether or not a local variable is passed to another method isn't a good idea).
I think that you are thinking about finalizers. Finalizers use the destructor syntax in C#, and they are called automatically by the garbage collector. Finalizers are only appropriate when you are cleaning up unmanaged resources.
Dispose is intended to allow for early cleanup of unmanaged resources (and it can be used to clean managed resources as well).
Detection is actually trickier than it looks. What if you have code like this:
var mydisposable = new...
AMethod(mydisposable);
// (not used again)
It's possible that some code in AMethod holds on to a reference to mydisposable:
Maybe it gets assigned to an instance variable inside of that method
Maybe mydisposable subscribes to an event inside of AMethod (then the event publisher holds a reference to mydisposable)
Maybe another thread is spawned by AMethod
Maybe mydisposable becomes "enclosed" by an anonymous method or lambda expression inside of AMethod
All of those things make it difficult to know for absolute certain that your object is no longer in use, so Dispose is there to let a developer say "OK, I know that it's safe to run my cleanup code now".
Bear in mind also that Dispose doesn't deallocate your object - only the GC can do that. (The GC does have the magic to understand all of the scenarios I described; it knows when to clean up your object, and if you really need code to run when the GC detects no references, you can use a finalizer.) Be careful with finalizers, though - they are only for unmanaged allocations that your class owns.
You can read more about this stuff here:
http://msdn.microsoft.com/en-us/magazine/bb985010.aspx
and here: http://www.bluebytesoftware.com/blog/2005/04/08/DGUpdateDisposeFinalizationAndResourceManagement.aspx
If you need unmanaged handle cleanup, read about SafeHandles as well.
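For completeness, a sketch of the SafeHandle approach (the derived type name and the native release call are placeholders):

using Microsoft.Win32.SafeHandles;

public sealed class MyNativeHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    public MyNativeHandle() : base(true) { } // true: this wrapper owns the handle

    protected override bool ReleaseHandle()
    {
        // call the matching native close/free function on this.handle here
        return true; // return false only if the native release failed
    }
}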
It's not the responsibility of the compiler to interpret the scopes in your application and do things like figure out when you no longer need memory. In fact, I'm pretty sure that's an impossible problem to solve, because there's no way for the compiler to know what your program will look like at runtime, no matter how smart it is.
This is why we have garbage collection. The problem with garbage collection is that it runs at an indeterminate interval, and typically if an object implements IDisposable, the reason is that you want the ability to dispose of it immediately - like, right now immediately. Constructs such as database connections aren't disposable just because they have some special work to do when they get trashed; it's also because they are scarce.
It seems difficult for the GC to know that you won't be using this variable later in the same method. Obviously, if you leave the method and don't keep a further reference to your variable, the GC will eventually collect it. But writing using in your sample states explicitly that you are sure you won't be using this variable after the block.
The using statement has nothing to do with performance (unless you consider avoiding resource/memory leaks as performance).
All it does for you is guarantee that the IDisposable.Dispose method is called on the object in question when it goes out of scope, even if an exception has occurred inside the using block.
The Dispose() method is then responsible for releasing any resources used by the object. These are most often unmanaged resources such as files, fonts, images etc, but could also be simple "clean-up" activities on managed objects (not garbage collection however).
Of course if the Dispose() method is implemented badly, the using statement provides zero benefit.
I think the OP is saying "why bother with 'using' when the compiler should be able to work it out magically pretty easily".
I think the OP is saying that
public void SomeMethod()
{
...
var foo = new Foo();
... do stuff with Foo ...
// foo isn't used after here (obviously).
...
}
should be equivalent to
public void SomeMethod()
{
...
using (var foo = new Foo())
{
... do stuff with Foo ...
}
// foo isn't used after here (obviously).
...
}
because Foo isn't used again.
The answer, of course, is that the compiler cannot work it out easily. Garbage collection (the machinery that eventually reclaims objects in .NET; note that it calls finalizers, not Dispose()) is a very complicated field. Just because the symbol isn't used further down doesn't mean that the object isn't still in use.
Take this example:
public void SomeMethod()
{
...
var foo = new Foo();
foo.DoStuffWith(someRandomObject);
someOtherClass.Method(foo);
// foo isn't used after here (obviously).
// Or is it??
...
}
In this example, someRandomObject and someOtherClass might both have kept references to the object foo points at, so if we called foo.Dispose() it would break them. You say you're imagining the simple case, but the only 'simple case' where what you're proposing works is one where you make no method calls on foo and do not pass foo or any of its members to anything else - effectively, where you don't use foo at all, in which case you probably have no need to declare it. Even then, you can never be sure that some kind of reflection or event hackery didn't get a reference to foo just by its very creation, or that foo didn't hook itself up with something else during its constructor.
In addition to the fine reasons listed above, since the problem can't be solved reliably for all cases, those "easy" cases are something that code analysis tools can and do detect. Let the compiler do stuff deterministically, and let your automatic code analysis tools tell you when you're doing something silly like forgetting to call Dispose.
I was wondering which of the following is the suggested pattern when using a Mutex (or Semaphores, ReaderWriterLockSlims, etc.).
Should the initial acquisition happen inside or outside of the try statement? Or is it unimportant?
_mutex.WaitOne();
try
{
// critical code
}
finally
{
_mutex.ReleaseMutex();
}
or
try
{
_mutex.WaitOne();
// critical code
}
finally
{
_mutex.ReleaseMutex();
}
The only way these could differ is if an exception occurred after WaitOne but before the try in example 1, or after the try but before WaitOne in example 2. In the first case, the mutex won't be released, and in the second case a release might be attempted even though there is no pending wait. The exception would have to be something severe, like a ThreadAbortException, for it to occur in either place. However, if the mutex were contained in a using block, neither would be a problem.
EDIT: after reading Eric's posts on this topic that Oliver linked to, I think that even with a using block the situation is not perfect, and that simply going with your second version, as Oliver suggests, is your best option.
Maybe it does make a difference. Take a look at these posts from Eric:
Subtleties of C# IL codegen
Locks and exceptions do not mix
In short:
Just imagine an exception happens between the _mutex.WaitOne() and the try statement: you'll leave this chunk of code without ever calling _mutex.ReleaseMutex().
So take your second piece of code to be sure everything works as expected.
If you're not using the mutex for cross-process synchronization (see the answer to this question: C# - Locking issues with Mutex), then this will be safer:
private static object _syncLock = new object();
public void RunCriticalCode()
{
lock (_syncLock)
{
// critical code
}
}
In writing some threaded code, I've been using the ReaderWriterLockSlim class to handle synchronized access to variables. Doing this, I noticed I was always writing the same try-finally blocks for each method and property.
Seeing an opportunity to avoid repeating myself and to encapsulate this behaviour, I built a class, ReaderWriterLockSection, intended to be used as a thin wrapper around the lock which can be used with the C# using block syntax.
The class is mostly as follows:
public enum ReaderWriterLockType
{
Read,
UpgradeableRead,
Write
}
public class ReaderWriterLockSection : IDisposable
{
public ReaderWriterLockSection(
ReaderWriterLockSlim rwLock, // "lock" is a reserved word, so it can't be a parameter name
ReaderWriterLockType lockType)
{
// Enter lock.
}
public void UpgradeToWriteLock()
{
// Check lock can be upgraded.
// Enter write lock.
}
public void Dispose()
{
// Exit lock.
}
}
I use the section as follows:
private ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
public void Foo()
{
using(new ReaderWriterLockSection(_lock, ReaderWriterLockType.Read))
{
// Do some reads.
}
}
To me, this seems like a good idea - one that makes my code easier to read and seemingly more robust, since I won't ever forget to release a lock.
Can anybody see an issue with this approach? Is there any reason this is a bad idea?
Well, it seems okay to me. Eric Lippert has previously written about the dangers of using Dispose for "non-resource" scenarios, but I think this would count as a resource.
It may make life tricky in upgrade scenarios, but you could always fall back to a more manual bit of code at that point.
Another alternative is to write a single lock acquire/use/release method and provide the action to take while holding the lock as a delegate, as sketched below.
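That alternative might look something like this (a sketch; the names are mine):

using System;
using System.Threading;

private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

public void WithReadLock(Action action)
{
    _lock.EnterReadLock();
    try
    {
        action(); // the caller's work runs while the read lock is held
    }
    finally
    {
        _lock.ExitReadLock(); // released even if the action throws
    }
}

// usage: WithReadLock(() => { /* do some reads */ });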
I usually indulge in this kind of code-sugary confection!
Here's a variant that's a bit easier to read for the users, on top of your API:
public static class ReaderWriterLockExt{
public static IDisposable ForRead(ReaderWriterLockSlim rwLock){
return new ReaderWriterLockSection(rwLock,ReaderWriterLockType.Read);
}
public static IDisposable ForWrite(ReaderWriterLockSlim rwLock){
return new ReaderWriterLockSection(rwLock,ReaderWriterLockType.Write);
}
public static IDisposable ForUpgradeableRead(ReaderWriterLockSlim rwLock){
return new ReaderWriterLockSection(rwLock,ReaderWriterLockType.UpgradeableRead);
}
}
public static class Foo{
private static readonly ReaderWriterLockSlim l=new ReaderWriterLockSlim(); // our lock
public static object Demo(){ // returns the cached value, so not void
using(l.ForUpgradeableRead()){ // we might need to write..
if(CacheExpires()){ // checks the scenario where we need to write
using(l.ForWrite()){ // will request the write permission
RefreshCache();
} // relinquish the upgraded write
}
// back into read mode
return CachedValue();
} // release the read
}
}
I also recommend using a variant that takes an Action delegate that's invoked when the lock cannot be obtained within 10 seconds, which I'll leave as an exercise for the reader.
You might also want to check for a null RWL in the static extension methods, and make sure the lock exists when you dispose it.
Cheers,
Florian
There is another consideration here: you are possibly solving a problem you should not solve. I can't see the rest of your code, but I can guess from your seeing value in this pattern.
Your approach solves a problem only if the code that reads or writes the shared resource throws an exception. Implicit is that you don't handle the exception in the method that does the reading/writing. If you did, you could simply release the lock in the exception-handling code. If you don't handle the exception at all, the thread will die from the unhandled exception and your program will terminate - no point in releasing a lock then.
So there's a catch clause somewhere lower in the call stack that catches the exception and handles it. That code has to restore the state of the program so that it can meaningfully continue without generating bad data or otherwise dying from exceptions caused by altered state. That code has a difficult job to do: it needs to somehow guess how much data was read or written without having any context. Getting it wrong, or only partly right, is potentially very destabilizing to the entire program. After all, it was a shared resource; other threads are reading from and writing to it.
If you know how to do this, then by all means use this pattern. You'd better test the heck out of it, though. If you're not sure, then avoid wasting system resources on a problem you can't reliably fix.
One thing I'd suggest when wrapping a lock to facilitate the using pattern is to include a "danger state" field in the lock; before allowing any code to enter the lock, the code should check the danger state. If the danger state is set, and the code trying to enter the lock hasn't passed a special parameter saying it's expecting that it might be, the attempt to acquire the lock should throw an exception. Code which is going to temporarily put the guarded resource into a bad state should set the danger-state flag, do what needs to be done, and then reset the flag once the operation is complete and the object is in a safe state.
If an exception occurs while the danger-state flag is set, the lock should be released but the danger-state flag should remain set. This ensures that code which wants to access the resource will find out that the resource is corrupted, rather than waiting forever for the lock to be released (which would be the outcome if there were no using or try-finally block).
If the lock being wrapped is a ReaderWriterLock, it may be convenient to have the acquisition of a "writer" lock automatically set the danger state; unfortunately, there's no way for an IDisposable used by a using block to determine whether the block is being exited cleanly or via an exception. Consequently, I don't know of any way to use something syntactically like a using block to guard the danger-state flag.
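For illustration, a minimal sketch of the danger-state idea (all type and member names are mine; as noted above, the flag must be set and cleared explicitly, since a using block cannot distinguish a clean exit from an exceptional one):

using System;
using System.Threading;

public sealed class GuardedLock
{
    private readonly object _gate = new object();
    private volatile bool _danger; // set while the guarded resource may be in a bad state

    // Acquire the lock; refuse entry if the resource may be corrupted,
    // unless the caller explicitly says it expects that possibility.
    public IDisposable Enter(bool expectDanger = false)
    {
        Monitor.Enter(_gate);
        if (_danger && !expectDanger)
        {
            Monitor.Exit(_gate);
            throw new InvalidOperationException("Guarded resource may be corrupted.");
        }
        return new Releaser(this);
    }

    // Callers set this before a risky mutation and clear it when done;
    // if an exception intervenes, it stays set and later entrants find out.
    public bool Danger
    {
        get { return _danger; }
        set { _danger = value; }
    }

    private sealed class Releaser : IDisposable
    {
        private readonly GuardedLock _owner;
        public Releaser(GuardedLock owner) { _owner = owner; }
        public void Dispose() { Monitor.Exit(_owner._gate); } // flag deliberately left as-is
    }
}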