Using 'Using' for things other than resource disposal [duplicate] - c#

This question already has answers here:
Is it abusive to use IDisposable and "using" as a means for getting "scoped behavior" for exception safety?
(12 answers)
Closed 9 years ago.
We all know that the using statement is really good for resources you want to cleanup in a timely manner, such as an open file or database connection.
I was wondering if it would be considered a good thing to use the statement in cases where resource cleanup isn't the goal of the Dispose() method, but rather resetting to a previous state.
For example, a class that allows a using statement to wrap a procedure that takes a noticeable amount of time and changes the Cursor to a waiting state.
class CursorHelper : IDisposable
{
readonly Cursor _previousState;
public CursorHelper(Cursor newState)
{
_previousState = Cursor.Current;
Cursor.Current = newState;
}
public void Dispose()
{
Cursor.Current = _previousState;
}
}
Then the class can be used as follows, without worrying about reverting the Cursor when you are done.
public void TimeIntensiveMethod()
{
using (CursorHelper ch = new CursorHelper(Cursors.WaitCursor))
{
// something that takes a long time to complete
}
}
Is this an appropriate use of the using statement?

There are certainly precedents for (ab)using the using statement in this way, for example FormExtensions.BeginForm in the ASP.NET MVC framework. This renders a <form> closing tag when it's disposed, and its main purpose is to enable a more terse syntax in MVC views. The Dispose method attempts to render the closing tag even if an exception is thrown, which is slightly odd: if an exception is thrown while rendering a form, you probably don't want to attempt to render the end tag.
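For illustration, a "closing tag" disposable of that shape boils down to something like this minimal sketch (my own simplification, not the actual MVC source; the TextWriter-based output is an assumption):
using System;
using System.IO;
public sealed class FormScope : IDisposable
{
    private readonly TextWriter _output;
    public FormScope(TextWriter output)
    {
        _output = output;
        _output.Write("<form>");   // opening tag written up front
    }
    public void Dispose()
    {
        _output.Write("</form>");  // closing tag written when the scope ends
    }
}
A view then wraps its markup in using (new FormScope(writer)) { ... } and the end tag is emitted even if the body throws.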
Another example is the (now deprecated) NDC.Push method in the log4net framework, which returns an IDisposable whose purpose is to pop the context.
Some purists would say it's an abuse; I suggest you form your own judgement on a case-by-case basis.
Personally I don't see anything wrong with your example for rendering an hourglass cursor.
The discussion linked in a comment by #I4V has some interesting opinions - including arguments against this type of "abuse" from the ubiquitous Jon Skeet.

In reality using is simply syntactic sugar for try/finally, so why don't you just use a plain old try/finally like below...
try
{
// do some work
}
finally
{
// reset to some previous state
}
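Applied to the cursor example from the question, that would look something like this (assuming the WinForms Cursor and Cursors types):
Cursor previous = Cursor.Current;
Cursor.Current = Cursors.WaitCursor;
try
{
// something that takes a long time to complete
}
finally
{
Cursor.Current = previous;
}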
IMHO, implementing the Dispose method to reset to some previous state would be very misleading, especially if you have consumers for your code.

I am opposed to this and believe it to be an abuse. I also think that RAII in C++ is a terrible idea. I am aware that I am in a minority on both positions.
This question is a duplicate. For detailed reasons on why I think this is an unwarranted abuse of the using statement, see: https://stackoverflow.com/a/2103158/88656

No, it isn't appropriate use of using and/or Dispose. The pattern has a very clear use ("Defines a method to release allocated resources."), and this isn't it. Any future developers using this code would look upon it with the contempt warranted for such evil.
If you want this kind of behaviour, then implement events and expose them, so calling code can subscribe to them and manage the cursor if need be. Otherwise the cursor should be manageable through ordinary parameters, or perhaps through Begin and End methods (although such naming conventions are generally reserved for asynchronous implementations of methods, you get the picture). Hacking it this way doesn't actually buy you anything.

In my opinion it makes sense to use the using/IDisposable combination for things other than just disposing objects. Of course it depends on the context and usage; if it leads to more readable code, then it is fine.
I have used it in a unit of work & repository pattern implementation like this:
public class UnitOfWork: IDisposable {
// this is thread-safe in actual implementation
private static Stack<UnitOfWork> _uowStack = new Stack<UnitOfWork>();
public static UnitOfWork Current {get { return _uowStack.Peek(); }}
public UnitOfWork() {
_uowStack.Push(this);
}
public void Dispose() {
_uowStack.Pop();
}
public void SaveChanges() {
// do some db operations
}
}
public class Repository {
public void DoSomething(Entity entity) {
// Do Some db operations
UnitOfWork.Current.SaveChanges();
}
}
With this implementation it is guaranteed that nested operations will use their corresponding UnitOfWork without it being passed as a parameter. Usage looks like this:
using (new UnitOfWork())
{
var repo1 = new UserRepository();
// Do some user operation
using (new UnitOfWork())
{
var repo2 = new AccountingRepository();
// do some accounting
}
var repo3 = new CrmRepository();
// do some crm operations
}
In this sample repo1 and repo3 use the same UnitOfWork while repo2 uses a different one. What the reader reads is "using new unit of work", and it makes a lot of sense.

Related

Is a struct to perform cleanup at the end of scope a good C# pattern?

RAII is nice for ensuring you don't fail to call cleanup. Normally, I'd implement with a class. I'm currently using Unity and am conscious of generating garbage in Update (even in editor scripting). I was thinking, is creating a struct wrapper for Begin/End actions a good pattern for avoiding allocation? (Since value types don't allocate on the heap.)
Here's an editor scripting example:
public struct ActionOnDispose : IDisposable
{
Action m_OnDispose;
public ActionOnDispose(Action on_dispose)
{
m_OnDispose = on_dispose;
}
public void Dispose()
{
m_OnDispose();
}
}
EditorGUILayout is a Unity type that exposes several functions that need to be called before and after your code. I'd use ActionOnDispose like this:
public static ActionOnDispose ScrollViewScope(ref Vector2 scroll)
{
scroll = EditorGUILayout.BeginScrollView(scroll);
return new ActionOnDispose(EditorGUILayout.EndScrollView);
}
private Vector2 scroll;
public void OnGUI() // called every frame
{
using (EditorHelper.ScrollViewScope(ref scroll))
{
for (int i = 0; i < 1000; ++i)
{
GUILayout.Label("Item #"+ i);
}
}
}
The above simple example works as expected and Dispose is called once for each call to ScrollViewScope, so it appears correct. Unity even provides a scoped struct itself (EditorGUI.DisabledScope), but only for a few cases. So I wonder if there's a downside to this pattern with structs? (I'd assume that if the struct were somehow copied, the old copy would be disposed and my end action called twice? I don't see such a scenario, but I'm not very familiar with C# value types.)
Edit: I'm specifically asking whether it matters that I'm using a struct (a value type). "Is abusing IDisposable to benefit from 'using' statements considered harmful?" covers whether this is a good use of IDisposable.
Jeroen Mostert's comments as an answer:
Assigning does not generate the garbage; writing new ActionOnDispose(EditorGUILayout.EndScrollView) does. (Yes, this too allocates a new Action under the hood, it's just syntactic shorthand.) In general, it's very hard to avoid allocation if you're using delegates, but it usually doesn't pay off to do so either. Short-lived objects are collected in gen 0, which is very efficient. My uneducated guess would be that methods like EditorGUILayout.BeginScrollView are going to allocate stuff themselves, to the point where one more allocation on your end won't matter much.
To answer your other question, no, copying the struct would not cause the method to be called twice. The only way to do that is to actually have it wrapped in two different Dispose scopes. C# does not have RAII; things going out of scope does nothing. It's the using statement that does the actual magic.
My conclusion:
I could make a version of ActionOnDispose for each of my use cases to avoid the garbage-generating Action delegate.
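For example, a delegate-free variant for the scroll-view case might look roughly like this (my sketch, assuming the usual EditorGUILayout.BeginScrollView/EndScrollView pairing):
using System;
using UnityEditor;
using UnityEngine;
public struct ScrollViewScope : IDisposable
{
    public ScrollViewScope(ref Vector2 scroll)
    {
        // begin the scroll view and write the updated scroll position back
        scroll = EditorGUILayout.BeginScrollView(scroll);
    }
    public void Dispose()
    {
        // end the scroll view when the using block exits
        EditorGUILayout.EndScrollView();
    }
}
It is used just like the Action-based version: using (new ScrollViewScope(ref scroll)) { ... }.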

Empty "using" statement in Dispose

Recently I've seen some code written as follows:
public void Dispose()
{
using(_myDisposableField) { }
}
This seems pretty strange to me; I'd prefer to see _myDisposableField.Dispose();
What reasons are there for using "using" to dispose your objects over explicitly making the call?
No, none at all. It will just compile into an empty try/finally and end up calling Dispose.
Remove it. You'll make the code faster, more readable, and perhaps most importantly (as you continue reading below) more expressive in its intent.
Update: they were being slightly clever; equivalent code needs a null check and, as per Jon Skeet's advice, should also take a local copy if multi-threading is involved (in the same manner as the standard event invocation pattern, to avoid a race between the null check and the method call).
IDisposable tmp = _myDisposableField;
if (tmp != null)
tmp.Dispose();
From what I can see in the IL of a sample app I've written, it looks like you also need to treat _myDisposableField as IDisposable directly. This will be important if any type implements the IDisposable interface explicitly and also provides a public void Dispose() method at the same time.
This code also doesn't attempt to replicate the try-finally that exists when using using, but it is sort of assumed that this is deemed unnecessary. As Michael Graczyk points out in the comments, however, the use of the finally offers protection against exceptions, in particular the ThreadAbortException (which could occur at any point). That said, the window for this to actually happen in is very small.
Although, I'd stake a fiver on the fact they did this not truly understanding what subtle "benefits" it gave them.
There is a very subtle but evil bug in the example you posted.
While it "compiles" down to:
try {}
finally
{
if (_myDisposableField != null)
((IDisposable)_myDisposableField).Dispose();
}
objects should be instantiated within the using clause and not outside:
You can instantiate the resource object and then pass the variable to the using statement, but this isn't a best practice. In this case, after control leaves the using block, the object remains in scope but probably has no access to its unmanaged resources. In other words, it's not fully initialized anymore. If you try to use the object outside the using block, you risk causing an exception to be thrown. For this reason, it's better to instantiate the object in the using statement and limit its scope to the using block.
—using statement (C# Reference)
In other words, it's dirty and hacky.
The clean version is extremely clearly spelled out on MSDN:
if you can limit the use of an instance to a method, then use a using block with the constructor call on its border. Do not use Dispose directly.
if you need (but really need) to keep an instance alive until the parent is disposed, then dispose explicitly using the Disposable pattern and nothing else. There are different ways of implementing a dispose cascade, however they need to be all done similarly to avoid very subtle and hard to catch bugs. There's a very good resource on MSDN in the Framework Design Guidelines.
Finally, please note that you should only use the IDisposable pattern if you actually use unmanaged resources. Make sure it's really needed :-)
As already discussed in this answer, this is a cheeky way of avoiding a null test, but: there can be more to it than that. In modern C#, in many cases you could achieve similar with a null-conditional operator:
public void Dispose()
=> _myDisposableField?.Dispose();
However, it is not required that the type (of _myDisposableField) has Dispose() on the public API; it could be:
public class Foo : IDisposable {
void IDisposable.Dispose() {...}
}
or even worse:
public class Bar : IDisposable {
void IDisposable.Dispose() {...}
public void Dispose() {...} // some completely different meaning! DO NOT DO THIS!
}
In the first case, a direct call to Dispose() will fail to compile (the method isn't on the public API), and in the second case it will invoke the wrong method. In either of these cases, the using trick will work, as will a cast (although this will do slightly different things again if it is a value-type):
public void Dispose()
=> ((IDisposable)_myDisposableField)?.Dispose();
If you aren't sure whether the type is disposable (which happens in some polymorphism scenarios), you could also use either:
public void Dispose()
=> (_myDisposableField as IDisposable)?.Dispose();
or:
public void Dispose()
{
using (_myDisposableField as IDisposable) {}
}
The using statement defines the span of code after which the referenced object should be disposed.
Yes, you could just call .Dispose() once you were done with it, but it would be less clear (IMHO) what the scope of the object was. YMMV.

Is there any benefit to implementing IDisposable on classes which do not have resources?

In C#, if a class, such as a manager class, does not have resources, is there any benefit to having it : IDisposable?
Simple example:
public interface IBoxManager
{
int addBox(Box b);
}
public class BoxManager : IBoxManager
{
public int addBox(Box b)
{
using(dataContext db = new dataContext()){
db.Boxes.add(b);
db.SaveChanges();
}
return b.id;
}
}
Will there be any benefit in memory use when using BoxManager if it also implements IDisposable? public class BoxManager : IBoxManager , IDisposable
For example:
BoxManager bm = new BoxManager();
bm.add(myBox);
bm.dispose();//is there benefit to doing this?
There are only 2 reasons for implementing IDisposable on a type:
The type contains native resources which must be freed when the type is no longer used
The type contains fields of type IDisposable
If neither of these is true then don't implement IDisposable.
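For the second case, the implementation is typically just a pass-through to the field's own Dispose. A minimal sketch (the Widget type here is hypothetical):
using System;
using System.IO;
public sealed class Widget : IDisposable
{
    // owning an IDisposable field is reason enough to implement IDisposable
    private readonly MemoryStream _buffer = new MemoryStream();
    public void Dispose()
    {
        _buffer.Dispose();
    }
}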
EDIT
Several people have mentioned that IDisposable is a nice way to implement begin / end or bookended operations. While that's not the original intent of IDisposable it does provide for a very nice pattern.
class Operation {
class Helper : IDisposable {
internal Operation Operation;
public void Dispose() {
Operation.EndOperation();
}
}
public IDisposable BeginOperation() {
...
return new Helper() { Operation = this };
}
private void EndOperation() {
...
}
}
Note: Another interesting way to implement this pattern is with lambdas. Instead of giving an IDisposable back to the user and hoping they don't forget to call Dispose, have them give you a lambda in which they can execute the operation, and you close out the operation yourself.
public void BeginOperation(Action action) {
BeginOperationCore();
try {
action();
} finally {
EndOperation();
}
}
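A caller would then use it roughly like this (assuming BeginOperation(Action) lives on the same Operation class; the body is illustrative):
var operation = new Operation();
operation.BeginOperation(() =>
{
    // work that must happen between begin and end;
    // EndOperation runs in the finally block even if this throws
});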
There won't be a scrap of difference between the disposable and non-disposable version if you don't explicitly make use of the Dispose() method.
While your code wouldn't benefit from implementing IDisposable, I can't agree with other opinions here that state that IDisposable is only meant to (directly or indirectly) free native resources. IDisposable can be used whenever the object needs to perform cleanup tasks at the end of its lifetime. It's extremely useful together with using.
A very popular example: in ASP.NET MVC Html.BeginForm returns an IDisposable. On creation, the object opens the tag, when Dispose is called it closes it. No native resources involved, yet still a good use of IDisposable.
No, there will be no benefit if you don't do something useful like releasing unmanaged resources that your class might hold in the Dispose method.
One major point of confusion, which may not be applicable in your case but arises often, is what exactly constitutes a "resource". From the perspective of an object, an unmanaged resource is something which an outside entity (*) is "doing" (**) on its behalf, which that outside entity will keep doing--to the detriment of other entities--until told to stop. For example, if an object opens a file, the machine which hosts the file may grant that object exclusive access, denying everyone else in the universe a chance to use it unless or until it gets notified that the exclusive access isn't needed anymore.
(*) which could be anything, anywhere; possibly not even on the same computer.
(**) or some way in which the behavior or state of an outside entity is altered
If an outside entity is doing something on behalf of an object which is abandoned and disappears without first letting the entity know its services are no longer required, the outside entity will have no way of knowing that it should stop acting on behalf of the object which no longer exists. IDisposable provides one way of avoiding this problem by providing a standard means of notifying objects when their services are not required. An object whose services are no longer required will generally not need to ask any further favors from any other entities, and will thus be able to request that any entities that had been acting on its behalf should stop doing so.
To allow for the case where an object gets abandoned without IDisposable.Dispose() having been called first, the system allows objects to register a "failsafe" cleanup method called Finalize(). Because for whatever reason, the creators of C# don't like the term Finalize(), the language requires the use of a construct called a "destructor" which does much the same thing. Note that in general, Finalize() will mask rather than solve problems, and can create problems of its own, so it should be used with extreme caution if at all.
A "managed resource" is typically a name given to an object which implements IDisposable and usually, though not always, implements a finalizer.
No, if there are no (managed or unmanaged) resources there is no need for IDisposable either.
Small caveat: some people use IDisposable to clean up event handlers or large memory buffers, but you don't seem to use those, and it's a questionable pattern anyway.
From my personal experience (confirmed by discussion and other posts here) I would say that there can be situations where your object uses a massive number of events, or not a massive number but frequent subscriptions to and unsubscriptions from an event, which sometimes leads to the object never being garbage collected. In that case I unsubscribe in Dispose from all the events my object subscribed to earlier.
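A minimal sketch of that kind of Dispose (the PriceFeed publisher here is hypothetical):
using System;
public sealed class PriceFeed
{
    public event EventHandler PriceChanged;
    public void Publish()
    {
        var handler = PriceChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}
public sealed class PriceViewModel : IDisposable
{
    private readonly PriceFeed _feed;
    public PriceViewModel(PriceFeed feed)
    {
        _feed = feed;
        _feed.PriceChanged += OnPriceChanged;   // the long-lived feed now references this object
    }
    private void OnPriceChanged(object sender, EventArgs e)
    {
        // react to the event
    }
    public void Dispose()
    {
        // unsubscribing removes the reference, so this object can be collected
        _feed.PriceChanged -= OnPriceChanged;
    }
}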
Hope this helps.
IDisposable is also great if you want to benefit from the using () {} syntax.
In a WPF project with ViewModels, I wanted to be able to temporarily disable NotifyPropertyChange events from raising. To be sure other developers will re-enable notifications, I wrote a bit of code to be able to write something like:
using (this.PreventNotifyPropertyChanges()) {
// in this block NotifyPropertyChanged won't be called when changing a property value
}
The syntax looks okay and is easily readable. For it to work, there's a bit of code to write. You will need a simple Disposable object and counter.
public class MyViewModel {
private volatile int notifyPropertylocks = 0; // number of locks
protected void NotifyPropertyChanged(string propertyName) {
if (this.notifyPropertylocks == 0) { // check the counter
this.NotifyPropertyChanged(...);
}
}
protected IDisposable PreventNotifyPropertyChanges() {
return new PropertyChangeLock(this);
}
public class PropertyChangeLock : IDisposable {
MyViewModel vm;
// creating this object will increment the lock counter
public PropertyChangeLock(MyViewModel vm) {
this.vm = vm;
this.vm.notifyPropertylocks += 1;
}
// disposing this object will decrement the lock counter
public void Dispose() {
if (this.vm != null) {
this.vm.notifyPropertylocks -= 1;
this.vm = null;
}
}
}
}
There are no resources to dispose here. I wanted a clean code with a kind of try/finally syntax. The using keyword looks better.
is there any benefit to having it : IDisposable?
It doesn't look so in your specific example, however: there is one good reason to implement IDisposable even if you don’t have any IDisposable fields: your descendants might.
This is one of the big architectural problems of IDisposable highlighted in IDisposable: What your mother never told you about resource deallocation. Basically, unless your class is sealed you need to decide whether your descendants are likely to have IDisposable members. And this isn't something you can realistically predict.
So, if your descendants are likely to have IDisposable members, that might be a good reason to implement IDisposable in the base class.
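A minimal sketch of that inheritance-friendly shape (the names are illustrative):
using System;
using System.IO;
public class ManagerBase : IDisposable
{
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }
    // descendants that acquire IDisposable members override this
    protected virtual void Dispose(bool disposing)
    {
    }
}
public class FileBackedManager : ManagerBase
{
    private readonly FileStream _log = File.Create("manager.log");
    protected override void Dispose(bool disposing)
    {
        if (disposing)
            _log.Dispose();
        base.Dispose(disposing);
    }
}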
The short answer would be no. However, you can make smart use of the fact that Dispose() executes at the end of the object's life cycle. Someone has already given a nice MVC example (Html.BeginForm).
I would like to point out one important aspect of IDisposable and the using() {} statement: at the end of the using statement the Dispose() method is automatically called on the using context object (which must implement the IDisposable interface, of course).
There is one more reason that no one has mentioned (though it's debatable whether it is really worth it):
The convention says that if someone uses a class that implements IDisposable, they must call its Dispose method (either explicitly or via the 'using' statement).
But what happens if V1 of a class (in your public API) didn't need IDisposable, but V2 does? Technically, adding an interface to a class doesn't break backward compatibility, but because of that convention, it does: old clients of your code won't call its Dispose method and may cause resources not to get freed.
The (almost) only way to avoid this is to implement IDisposable whenever you suspect you'll need it in the future, to make sure that your clients always call your Dispose method, which some day may really be needed.
The other (and probably better) way is to implement the lambda pattern mentioned by JaredPar above in V2 of the API.

Enforcing an "end" call whenever there is a corresponding "start" call

Let's say I want to enforce a rule:
Every time you call "StartJumping()" in your function, you must call "EndJumping()" before you return.
When a developer is writing their code, they may simply forget to call EndSomething - so I want to make it easy to remember.
I can think of only one way to do this: and it abuses the "using" keyword:
class Jumper : IDisposable {
public Jumper() { Jumper.StartJumping(); }
public void Dispose() { Jumper.EndJumping(); }
public static void StartJumping() {...}
public static void EndJumping() {...}
}
public bool SomeFunction() {
// do some stuff
// start jumping...
using(new Jumper()) {
// do more stuff
// while jumping
} // end jumping
}
Is there a better way to do this?
Essentially the problem is:
I have global state...
and I want to mutate that global state...
but I want to make sure that I mutate it back.
You have discovered that it hurts when you do that. My advice is rather than trying to find a way to make it hurt less, try to find a way to not do the painful thing in the first place.
I am well aware of how hard this is. When we added lambdas to C# in v3 we had a big problem. Consider the following:
void M(Func<int, int> f) { }
void M(Func<string, int> f) { }
...
M(x=>x.Length);
How on earth do we bind this successfully? Well, what we do is try both (x is int, or x is string) and see which, if any, gives us an error. The ones that don't give errors become candidates for overload resolution.
The error reporting engine in the compiler is global state. In C# 1 and 2 there had never been a situation where we had to say "bind this entire method body for the purposes of determining if it had any errors but don't report the errors". After all, in this program you do not want to get the error "int doesn't have a property called Length", you want it to discover that, make a note of it, and not report it.
So what I did was exactly what you did. Start suppressing error reporting, but don't forget to STOP suppressing error reporting.
It's terrible. What we really ought to do is redesign the compiler so that errors are output of the semantic analyzer, not global state of the compiler. However, it's hard to thread that through hundreds of thousands of lines of existing code that depends on that global state.
Anyway, something else to think about. Your "using" solution has the effect of stopping jumping when an exception is thrown. Is that the right thing to do? It might not be. After all, an unexpected, unhandled exception has been thrown. The entire system could be massively unstable. None of your internal state invariants might be actually invariant in this scenario.
Look at it this way: I mutated global state. I then got an unexpected, unhandled exception. I know, I think I'll mutate global state again! That'll help! Seems like a very, very bad idea.
Of course, it depends on what the mutation to global state is. If it is "start reporting errors to the user again" as it is in the compiler then the correct thing to do for an unhandled exception is to start reporting errors to the user again: after all, we're going to need to report the error that the compiler just had an unhandled exception!
If on the other hand the mutation to global state is "unlock the resource and allow it to be observed and used by untrustworthy code" then it is potentially a VERY BAD IDEA to automatically unlock it. That unexpected, unhandled exception might be evidence of an attack on your code, from an attacker who is dearly hoping that you are going to unlock access to global state now that it is in a vulnerable, inconsistent form.
I'm going to disagree with Eric: when to do this or not depends on the circumstances. At one point, I was reworking a large code base to include acquire/release semantics around all accesses to a custom image class. Images were originally allocated in unmoving blocks of memory, but we now had the ability to put the images into blocks that were allowed to be moved if they hadn't been acquired. In my code, it is a serious bug for a block of memory to have slipped past unlocked.
Therefore, it is vital to enforce this. I created this class:
public class ResourceReleaser<T> : IDisposable
{
private Action<T> _action;
private bool _disposed;
private T _val;
public ResourceReleaser(T val, Action<T> action)
{
if (action == null)
throw new ArgumentNullException("action");
_action = action;
_val = val;
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
~ResourceReleaser()
{
Dispose(false);
}
protected virtual void Dispose(bool disposing)
{
if (_disposed)
return;
if (disposing)
{
_disposed = true;
_action(_val);
}
}
}
which allows me to make this subclass:
public class PixelMemoryLocker : ResourceReleaser<PixelMemory>
{
public PixelMemoryLocker(PixelMemory mem)
: base(mem,
(pm =>
{
if (pm != null)
pm.Unlock();
}
))
{
if (mem != null)
mem.Lock();
}
public PixelMemoryLocker(AtalaImage image)
: this(image == null ? null : image.PixelMemory)
{
}
}
Which in turn lets me write this code:
using (var locker = new PixelMemoryLocker(image)) {
// .. pixel memory is now locked and ready to work with
}
This does the work I need and a quick search tells me I needed it in 186 places that I can guarantee won't ever fail to unlock. And I have to be able to make this guarantee - to do otherwise could freeze a huge chunk of memory in my client's heap. I can't do that.
However, in another case where I do work handling encryption of PDF documents, all strings and streams are encrypted in PDF dictionaries except when they're not. Really. There are a tiny number of edge cases wherein it is incorrect to encrypt or decrypt the dictionaries so while streaming out an object, I do this:
if (context.IsEncrypting)
{
crypt = context.Encryption;
if (!ShouldBeEncrypted(crypt))
{
context.SuspendEncryption();
suspendedEncryption = true;
}
}
// ... more code ...
if (suspendedEncryption)
{
context.ResumeEncryption();
}
so why did I choose this over the RAII approach? Well, any exception that happens in the ... more code ... means that you are dead in the water. There is no recovery. There can be no recovery. You have to start over from the very beginning and the context object needs to be reconstructed, so its state is hosed anyway. And by comparison, I only had to do this code 4 times - the possibility for error is way, way less than in the memory locking code, and if I forget one in the future, the generated document is going to be broken immediately (fail fast).
So pick RAII when you absolutely positively HAVE to have the bracketed call and can't fail.
Don't bother with RAII if it is trivial to do otherwise.
If you need to control a scoped operation, I would add a method which takes an Action<Jumper> to contain the required operations on the jumper instance:
public static void Jump(Action<Jumper> jumpAction)
{
    StartJumping();
    Jumper j = new Jumper();
    try
    {
        jumpAction(j);
    }
    finally
    {
        // EndJumping is guaranteed to run even if the action throws
        EndJumping();
    }
}
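Callers then write something like:
Jump(j =>
{
    // do more stuff while jumping
});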
An alternative approach that would work in some circumstances (i.e. when the actions can all happen at the end) would be to create a series of classes with a fluent interface and some final Execute() method.
var sequence = StartJumping().Then(some_other_method).Then(some_third_method);
// forgot to do EndJumping()
sequence.Execute();
Execute() can chain back down the line and enforce any rules (or you can build the closing sequence as you build the opening sequence).
The one advantage this technique has over others is that you aren't limited by scoping rules. e.g. if you want to build the sequence based on user inputs or other asynchronous events you can do that.
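A rough sketch of such a sequence type (my own illustration, not an existing library; the Console calls stand in for the real start/end work):
using System;
using System.Collections.Generic;
public sealed class JumpSequence
{
    private readonly List<Action> _steps = new List<Action>();
    public static JumpSequence StartJumping()
    {
        Console.WriteLine("start jumping");   // stand-in for the real StartJumping work
        return new JumpSequence();
    }
    public JumpSequence Then(Action step)
    {
        _steps.Add(step);
        return this;
    }
    public void Execute()
    {
        try
        {
            foreach (var step in _steps)
                step();
        }
        finally
        {
            Console.WriteLine("end jumping"); // the closing call is enforced here
        }
    }
}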
Jeff,
what you're trying to achieve is generally referred to as Aspect Oriented Programming (AOP). Programming using AOP paradigms in C# is not easy - or reliable... yet. There are some capabilities built directly into the CLR and .NET framework that make AOP possible in certain narrow cases. For example, when you derive a class from ContextBoundObject you can use ContextAttribute to inject logic before/after method calls on the CBO instance. You can see examples of how this is done here.
Deriving a CBO class is annoying and restrictive - and there is another alternative. You can use a tool like PostSharp to apply AOP to any C# class. PostSharp is far more flexible than CBOs because it essentially rewrites your IL code in a postcompilation step. While this may seem a bit scary, it's very powerful because it allows you to weave in code in almost any way you can imagine. Here's a PostSharp example built on your use scenario:
using PostSharp.Aspects;
[Serializable]
public sealed class JumperAttribute : OnMethodBoundaryAspect
{
public override void OnEntry(MethodExecutionArgs args)
{
Jumper.StartJumping();
}
public override void OnExit(MethodExecutionArgs args)
{
Jumper.EndJumping();
}
}
class SomeClass
{
[Jumper()]
public bool SomeFunction() // StartJumping *magically* called...
{
// do some code...
} // EndJumping *magically* called...
}
PostSharp achieves the magic by rewriting the compiled IL code to include instructions to run the code that you've defined in the JumperAttribute class' OnEntry and OnExit methods.
Whether in your case PostSharp/AOP is a better alternative than "repurposing" the using statement is unclear to me. I tend to agree with @Eric Lippert that the using keyword obfuscates important semantics of your code and attaches side-effects and semantic meaning to the } symbol at the end of a using block - which is unexpected. But is this any different from applying AOP attributes to your code? They also hide important semantics behind a declarative syntax ... but that's sort of the point of AOP.
One point where I whole-heartedly agree with Eric is that redesigning your code to avoid global state like this (when possible) is probably the best option. Not only does it avoid the problem of enforcing correct usage, but it can also help avoid multithreading challenges in the future - which global state is very susceptible to.
I don't actually see this as an abuse of using; I've been using this idiom in different contexts and never had problems... especially given that using is only syntactic sugar. One way I use it is to set a global flag in one of the third-party libraries I use, so that the change is reverted when the operation finishes:
class WithLocale : IDisposable {
Locale old;
public WithLocale(Locale x) { old = ThirdParty.Locale; ThirdParty.Locale = x; }
public void Dispose() { ThirdParty.Locale = old; }
}
Note that you don't need to assign a variable in the using clause. This is enough:
using(new WithLocale("jp")) {
...
}
I slightly miss C++'s RAII idiom here, where the destructor is always called. using is the closest you can get in C#, I guess.
We have done almost exactly what you propose as a way to add method trace logging in our applications. Beats having to make 2 logging calls, one for entering and one for exiting.
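A sketch of that kind of trace scope (the Console output here is just a stand-in for the real logging call):
using System;
public sealed class TraceScope : IDisposable
{
    private readonly string _method;
    public TraceScope(string method)
    {
        _method = method;
        Console.WriteLine("Entering " + _method);   // log on entry
    }
    public void Dispose()
    {
        Console.WriteLine("Leaving " + _method);    // log on exit, even on exceptions
    }
}
// usage inside the method being traced:
// using (new TraceScope("SomeFunction"))
// {
//     // method body
// }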
Would having an abstract base class be helpful? A method in the base class could call StartJumping(), the implementation of the abstract method that child classes would implement, and then call EndJumping().
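Something like this, reusing the question's static Jumper methods (a minimal sketch of that template-method idea):
public abstract class JumpOperation
{
    public void Run()
    {
        Jumper.StartJumping();
        try
        {
            DoWork();               // implemented by child classes
        }
        finally
        {
            Jumper.EndJumping();    // always paired with StartJumping
        }
    }
    protected abstract void DoWork();
}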
I like this style and frequently implement it when I want to guarantee some tear-down behaviour: often it is much cleaner to read than a try/finally. I don't think you should bother with declaring and naming the reference j, but I do think you should avoid calling the EndJumping method twice: check whether it has already been disposed. And with reference to your unmanaged code note: it's a finalizer that's typically implemented for that (although Dispose and SuppressFinalize are typically called to free up the resources sooner).
I've commented on some of the answers here a bit about what IDisposable is and isn't, but I will reiterate the point that IDisposable is to enable deterministic cleanup, but does not guarantee deterministic cleanup. i.e. It's not guaranteed to be called, and only somewhat guaranteed when paired with a using block.
// despite being IDisposable, Dispose() isn't guaranteed.
Jumper j = new Jumper();
Now, I'm not going to comment on your use of using because Eric Lippert did a far better job of it.
If you do have an IDisposable class without requiring a finalizer, a pattern I've seen for detecting when people forget to call Dispose() is to add a finalizer that's conditionally compiled in DEBUG builds so that you can log something whenever your finalizer is called.
A realistic example is a class that encapsulates writing to a file in some special way. Because MyWriter holds a reference to a FileStream, which is itself IDisposable, we should also implement IDisposable to be polite.
public sealed class MyWriter : IDisposable
{
private System.IO.FileStream _fileStream;
private bool _isDisposed;
public MyWriter(string path)
{
_fileStream = System.IO.File.Create(path);
}
#if DEBUG
~MyWriter() // Finalizer for DEBUG builds only
{
Dispose(false);
}
#endif
public void Close()
{
((IDisposable)this).Dispose();
}
private void Dispose(bool disposing)
{
    if (disposing)
    {
        // called from IDisposable.Dispose()
        if (!_isDisposed)
        {
            if (_fileStream != null)
                _fileStream.Dispose();
            _isDisposed = true;
        }
    }
    else
    {
        // called from the finalizer in a DEBUG build.
        // Log so a developer can fix the missing Dispose() call.
        Console.WriteLine("MyWriter failed to be disposed");
    }
}
void IDisposable.Dispose()
{
Dispose(true);
#if DEBUG
GC.SuppressFinalize(this);
#endif
}
}
Ouch. That's quite complicated, but this is what people expect when they see IDisposable.
The class doesn't even do anything yet but open a file, but that's what you get with IDisposable, and the logging is extremely simplified.
public void WriteFoo(string comment)
{
if (_isDisposed)
throw new ObjectDisposedException("MyWriter");
// logic omitted
}
Finalizers are expensive, and the MyWriter above doesn't require a finalizer, so there's no point adding one outside of DEBUG builds.
With the using pattern I can just use a grep (?<!using.*)new\s+Jumper to find all places where there might be a problem.
With StartJumping I need to manually look at each call to find out if there is a possibility that an exception, return, break, continue, goto etc can cause EndJumping to not be called.
I don't think you want to make those methods static.
You need to check in Dispose whether EndJumping has already been called.
If I call StartJumping again, what happens?
You could use a reference counter or a flag to keep track of the state of the 'jumping'. Some people would say IDisposable is only for unmanaged resources, but I think this is OK. Otherwise you should make StartJumping and EndJumping private and use a destructor to go with the constructor.
class Jumper
{
    public Jumper() { StartJumping(); }
    ~Jumper() { EndJumping(); }
    private void StartJumping() {...}
    private void EndJumping() {...}
}

Is it possible to implement scoped lock in C#?

A common pattern in C++ is to create a class that wraps a lock - the lock is either implicitly taken when object is created, or taken explicitly afterwards. When object goes out of scope, dtor automatically releases the lock.
Is it possible to do this in C#? As far as I understand there are no guarantees on when dtor in C# will run after object goes out of scope.
Clarification:
Any lock in general, spinlock, ReaderWriterLock, whatever.
Calling Dispose myself defeats the purpose of the pattern - to have the lock released as soon as we exit scope - no matter if we called return in the middle, threw exception or whatnot.
Also, as far as I understand using will still only queue object for GC, not destroy it immediately...
To amplify Timothy's answer, the lock statement does create a scoped lock using a monitor. Essentially, this translates into something like this:
lock(_lockKey)
{
// Code under lock
}
// is equivalent to this
Monitor.Enter(_lockKey);
try
{
// Code under lock
}
finally
{
Monitor.Exit(_lockKey);
}
In C# you rarely use the dtor for this kind of pattern (see the using statement/IDisposable). One thing you may notice in the code is that if an async exception happens between the Monitor.Enter and the try, it looks like the monitor will not be released. The JIT actually makes a special guarantee that if a Monitor.Enter immediately precedes a try block the async exception will not happen until the try block thus ensuring the release.
Your understanding regarding using is incorrect, this is a way to have scoped actions happen in a deterministic fashion (no queuing to the GC takes place).
C# supplies the lock keyword, which provides an exclusive lock; if you want different kinds of locks (e.g. read/write), you'll have to use the using statement.
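For example, a read-lock scope over ReaderWriterLockSlim might look like this (just a sketch):
using System;
using System.Threading;
public sealed class ReadLockScope : IDisposable
{
    private readonly ReaderWriterLockSlim _rwLock;
    public ReadLockScope(ReaderWriterLockSlim rwLock)
    {
        _rwLock = rwLock;
        _rwLock.EnterReadLock();    // taken on construction
    }
    public void Dispose()
    {
        _rwLock.ExitReadLock();     // released when the using block exits
    }
}
// usage:
// using (new ReadLockScope(sharedLock))
// {
//     // read shared state
// }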
P.S. This thread may interest you.
It's true that you don't know exactly when the dtor is going to run... but, if you implement the IDisposable interface, and then use either a 'using' block or call 'Dispose()' yourself, you will have a place to put your code.
Question: When you say "lock", do you mean a thread lock so that only one thread at a time can use the object? As in:
lock (_myLockKey) { ... }
Please clarify.
For completeness there is another way to achieve a similar RAII effect without using using and IDisposable. In C# using is usually clearer (see also here for some more thoughts), but in other languages (e.g. Java), or even in C# if using is not appropriate for some reason, it's useful to know.
It's an idiom called "Execute Around" and the idea is that you call a method that does the pre and post stuff (e.g. locking/unlocking your threads, or setting up and committing/closing your DB connection etc), and you pass into that method a delegate that will implement the operations you want to occur in between.
e.g.:
funkyObj.InOut( delegate{ System.Console.WriteLine( "middle bit" ); } );
Depending on what the InOut method does, the output might be something like:
first bit
middle bit
last bit
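A possible shape for InOut (this is just a sketch of the idiom):
using System;
public class FunkyObj
{
    public void InOut(Action middle)
    {
        Console.WriteLine("first bit");    // acquire / set up
        try
        {
            middle();                      // caller-supplied work
        }
        finally
        {
            Console.WriteLine("last bit"); // tear down, even if middle() throws
        }
    }
}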
As I say, this answer is for completeness only, the previous suggestions of using with IDisposable, as well as the lock keyword, are going to be better 99% of the time.
It's a shame that, while .NET has gone further than many other modern OO languages in this regard (I'm looking at you, Java), it still places the responsibility for RAII to work on the client code (i.e. the code that uses using), whereas in C++ the destructor will always run at the end of the scope.
Why would you want a scoped lock in the first place? Suppose you have the following code:
lock(obj) {
... some logic goes here
}
If an exception happens inside the try block that lock expands to, it often means that you now have corrupted state and other threads will continue to work with that corrupted state. It can be better to let the program hang, to signal the problem.
Another issue is that try incurs some performance penalty, but this is usually a much smaller problem, if it matters at all.
Jeffrey Richter specifically advises not to use lock statement.
I've been really bothered by the fact that using is up to the developer to remember to do - at best you get a warning, which most people never bother to promote to an error. So, I've been toying with an idea like this - it forces the client to at least TRY to do things correctly. Fortunately and unfortunately, it's a closure, so the client could still keep a copy of the resource, and try to use it again later - but this code at least tries to push the client in the right direction...
public class MyLockedResource : IDisposable
{
private MyLockedResource()
{
Console.WriteLine("initialize");
}
public void Dispose()
{
Console.WriteLine("dispose");
}
public delegate void RAII(MyLockedResource resource);
static public void Use(RAII raii)
{
using (MyLockedResource resource = new MyLockedResource())
{
raii(resource);
}
}
public void test()
{
Console.WriteLine("test");
}
}
Good usage:
MyLockedResource.Use(delegate(MyLockedResource resource)
{
resource.test();
});
Bad usage! (Unfortunately, this can't be prevented...)
MyLockedResource res = null;
MyLockedResource.Use(delegate(MyLockedResource resource)
{
resource.test();
res = resource;
res.test();
});
res.test();
