What happens if I don't call Dispose on the pen object? - c#

What happens if I don't call Dispose on the pen object in this code snippet?
private void panel_Paint(object sender, PaintEventArgs e)
{
    var pen = new Pen(Color.White, 1);
    // Do some drawing
}

A couple of corrections should be made here:
Regarding the answer from Phil Devaney:
"...Calling Dispose allows you to do deterministic cleanup and is highly recommended."
Actually, calling Dispose() does not deterministically cause a GC collection in .NET - i.e. it does NOT trigger a GC immediately just because you called Dispose(). It only indirectly signals to the GC that the object can be cleaned up during the next collection for the generation the object lives in. In other words, if the object lives in Gen 1, its memory won't be reclaimed until a Gen 1 collection takes place. One of the few ways (though not the only one) that you can programmatically and deterministically cause the GC to perform a collection is by calling GC.Collect(). However, doing so is not recommended, since the GC "tunes" itself at runtime by collecting metrics about your app's memory allocations. Calling GC.Collect() dumps those metrics and causes the GC to start its "tuning" all over again.
Regarding the answer:
IDisposable is for disposing unmanaged resources. This is the pattern in .NET.
This is incomplete. As the GC is non-deterministic, the Dispose pattern (How to properly implement the Dispose pattern) is available so that you can release the resources you are using - managed or unmanaged. It has nothing to do with what kind of resources you are releasing. The need for implementing a Finalizer, on the other hand, does depend on what kind of resources you are using - i.e. ONLY implement one if you hold native (unmanaged) resources. Maybe you are confusing the two. BTW, you should avoid implementing a Finalizer by using the SafeHandle class instead, which wraps native resources that are marshaled via P/Invoke or COM Interop. If you do end up implementing a Finalizer, you should always implement the Dispose pattern as well.
One critical note which I haven't seen anyone mention yet: if a disposable object is created and it has a Finalizer (and you never really know whether it does - you certainly shouldn't make any assumptions about that), then it will get put directly on the Finalization Queue and will survive for at least one extra GC collection.
If GC.SuppressFinalize() is not ultimately called, then the finalizer for the object will be called on the next GC. Note that a proper implementation of the Dispose pattern should call GC.SuppressFinalize(). Thus, if you call Dispose() on the object, and it has implemented the pattern properly, you will avoid execution of the Finalizer. If you don't call Dispose() on an object which has a finalizer, the object will have its Finalizer executed by the GC on the next collection. Why is this bad? The Finalizer thread in the CLR up to and including .NET 4.6 is single-threaded. Imagine what happens if you increase the burden on this thread - your app performance goes to you know where.
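To make the mechanics concrete, here is a minimal sketch of the pattern just described (the type and field names are illustrative, not from any particular class): when Dispose() runs, GC.SuppressFinalize(this) takes the object off the finalization path entirely, so the single-threaded Finalizer thread never sees it.

using System;

public class NativeResourceHolder : IDisposable
{
    private IntPtr _handle;   // stand-in for some native resource
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // skip the Finalization Queue entirely
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // release other managed IDisposables here
        }
        // release the native resource here
        _handle = IntPtr.Zero;
        _disposed = true;
    }

    ~NativeResourceHolder()   // runs only if Dispose() was never called
    {
        Dispose(false);
    }
}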
Calling Dispose on an object provides for the following:
reduce strain on the GC for the process;
reduce the app's memory pressure;
reduce the chance of an OutOfMemoryException (OOM) if the LOH (Large Object Heap) gets fragmented and the object is on the LOH;
keep the object out of the Finalizable and f-reachable Queues if it has a Finalizer;
make sure your resources (managed and unmanaged) are cleaned up.
Edit:
I just noticed that the "all knowing and always correct" MSDN documentation on IDisposable (extreme sarcasm here) actually does say
The primary use of this interface is to release unmanaged resources
As anyone should know, MSDN is far from infallible: it never mentions or shows 'best practices', sometimes provides examples that don't compile, etc. It is unfortunate that this is documented in those words. However, I know what they were trying to say: in a perfect world the GC will clean up all managed resources for you (how idealistic); it will not, however, clean up unmanaged resources. This is absolutely true. That being said, life is not perfect and neither is any application. The GC will only clean up resources that have no rooted references. This is mostly where the problem lies.
Among the 15-20 or so different ways that .NET can "leak" (or fail to free) memory, the one that would most likely bite you if you don't call Dispose() is the failure to unregister/unhook/unwire/detach event handlers/delegates. If you create an object that has delegates wired to it and you don't call Dispose() on it (and don't detach the delegates yourself), the GC will still see the object as having rooted references - i.e. the delegates. Thus, the GC will never collect it.
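A hedged illustration of that failure mode (the Publisher/Subscriber types below are invented for the example): as long as the publisher is alive and the handler is attached, the subscriber has a rooted reference and can never be collected.

using System;

class Publisher
{
    public event EventHandler SomethingHappened;
}

class Subscriber : IDisposable
{
    private readonly Publisher _publisher;

    public Subscriber(Publisher publisher)
    {
        _publisher = publisher;
        _publisher.SomethingHappened += OnSomethingHappened;   // publisher now references this instance
    }

    private void OnSomethingHappened(object sender, EventArgs e) { /* react */ }

    public void Dispose()
    {
        // without this line, a long-lived publisher keeps this subscriber rooted forever
        _publisher.SomethingHappened -= OnSomethingHappened;
    }
}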
@joren's comment/question below (my reply is too long for a comment):
I have a blog post about the Dispose pattern I recommend using - (How to properly implement the Dispose pattern). There are times when you should null out references, and it never hurts to do so. Actually, doing so does do something before the GC runs - it removes the rooted reference to that object. The GC later scans its collection of rooted references and collects those objects that no longer have one. Think of this example of when it is good to do so: you have an instance of type "ClassA" - let's call it 'X'. X contains an object of type "ClassB" - let's call this 'Y'. Y implements IDisposable; thus, X should do the same in order to dispose of Y. Let's assume that X is in Generation 2 or the LOH and Y is in Generation 0 or 1. When Dispose() is called on X and that implementation nulls out the reference to Y, the rooted reference to Y is immediately removed. If a GC happens for Gen 0 or Gen 1, the memory/resources for Y are cleaned up, but the memory/resources for X are not, since X lives in Gen 2 or the LOH.

The Pen will be collected by the GC at some indeterminate point in the future, whether or not you call Dispose.
However, any unmanaged resources held by the pen (e.g., a GDI+ handle) will not be cleaned up by the GC. The GC only cleans up managed resources. Calling Pen.Dispose allows you to ensure that these unmanaged resources are cleaned up in a timely manner and that you aren't leaking resources.
Now, if the Pen has a finalizer and that finalizer cleans up the unmanaged resources, then those unmanaged resources will be cleaned up when the Pen is garbage collected. But the point is that:
You should call Dispose explicitly so that you release your unmanaged resources, and
You shouldn't need to worry about the implementation detail of if there is a finalizer and it cleans up the unmanaged resources.
Pen implements IDisposable. IDisposable is for disposing unmanaged resources. This is the pattern in .NET.
For previous comments on this topic, please see this answer.

The underlying GDI+ pen handle will not be released until some indeterminate time in the future, i.e. when the Pen object is garbage collected and the object's finalizer is called. This might not be until the process terminates, or it might be earlier, but the point is that it's non-deterministic. Calling Dispose allows you to do deterministic cleanup and is highly recommended.
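In the snippet from the question, that deterministic cleanup is one using statement away; a sketch of the corrected handler:

private void panel_Paint(object sender, PaintEventArgs e)
{
    // Dispose is called automatically when the using block exits,
    // releasing the GDI+ pen handle immediately
    using (var pen = new Pen(Color.White, 1))
    {
        // Do some drawing
    }
}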

The total amount of .Net memory in use is the .Net part + all 'external' data in use. OS objects, open files, database and network connections all take some resources that are not purely .Net objects.
Graphics uses Pens and other objects which are actually OS objects that are 'quite' expensive to keep around. (For a more dramatic example, swap your Pen for a 1000x1000 bitmap.) These OS objects only get removed from OS memory once you call a specific cleanup function. The Pen and Bitmap Dispose methods do this for you immediately when you call them.
If you don't call Dispose the garbage collector will come to clean them up 'somewhere in the future*'.
(It will actually call the destructor/finalize code that probably calls Dispose())
* On a machine with infinite memory (or more than 1 GB), 'somewhere in the future' can be very far into the future. On a machine doing nothing, it can easily take longer than 30 minutes to clean up that huge bitmap or very small pen.

If you really want to know how bad it is when you don't call Dispose on graphics objects, you can use the CLR Profiler, available as a free download here. In the installation folder (defaults to C:\CLRProfiler) is CLRProfiler.doc, which has a nice example of what happens when you don't call Dispose on a Brush object. It is very enlightening. The short version is that graphics objects take up a larger chunk of memory than you might expect, and they can hang around for a long time unless you call Dispose on them. Once the objects are no longer in use the system will, eventually, clean them up, but that process takes up more CPU time than if you had just called Dispose when you were finished with the objects.
You may also want to read up on using IDisposable here and here.

It will hold on to its resources until the garbage collector cleans it up.

That depends on whether it implements a finalizer that calls Dispose from its Finalize method. If so, the handle will be released at the next GC;
if not, the handle will stay around until the process terminates.

With graphic stuff it can be very bad.
Open the Windows Task Manager, click "Choose columns" and select the column called "GDI Objects".
If you don't dispose of certain graphics objects, this number will keep rising and rising.
In older versions of Windows this could crash the whole application (the limit was 10,000 GDI objects as far as I remember). I'm not sure about Vista/7, but it's still a bad thing.

The garbage collector will collect it anyway, BUT it matters WHEN:
if you don't call Dispose on an object that you no longer use, it will live longer in memory and get promoted to higher generations, meaning that collecting it has a higher cost.

The first idea that surfaced in the back of my mind is that this object will be disposed as soon as the method finishes execution! I don't know where I got this info from. Is it right?

Related

Why doesn't CLR handle the cleanup code?

I have just started with the .NET framework. Today, I was taught about the IDisposable interface and the dispose() method. I was taught a few things regarding it:
dispose() should contain the cleanup code corresponding to an object (like closing any resources occupied by the object - files, database connections, etc.)
I was also told that in case we don't do it in the dispose() method, the same could be done in the destructor, but that doesn't ensure immediate execution, and we are left to the mercy of GC.
And if at all we don't provide any cleanup code at all, the GC will forcefully terminate all connections to resources that our objects were holding. Hence, we should handle the cleanup code ourselves.
But I was curious as to why the CLR doesn't handle this on its own? It takes care of memory management and garbage collection, so it should very well know which object holds onto which resource(s) and when that object dies off. Shouldn't it be capable of de-allocating those resources as well?
I asked a few people about it. The answer I was given was that it is because we need to close things gracefully, whereas the GC closes them forcefully. Is that actually the reason?
In .NET there's much more than managed code that the GC knows about. There's a huge volume of unmanaged code involved: all the file handles, database connections, network sockets, ... all of this is plain ol' unmanaged Win32 code. Believe it or not, almost every single BCL function you call from your pretty C# application ends up hitting tons of unmanaged functions written in C++ (and, may God forbid, VB6) and buried deep in the internals of the OS itself. All those functions allocate unmanaged memory, handles, ... The managed world doesn't know what happens there.
For example, every single time you open a file (FileStream) you are basically calling (behind the scenes, of course) the CreateFile unmanaged Win32 function. This function allocates an unmanaged file handle directly from the file system. .NET and the GC have strictly no way of tracking this unmanaged code and everything it does. That's why those classes implement the IDisposable interface: so that you can always wrap their instances in using statements and ensure that the Dispose method is always called, even in the event of an exception, and as soon as possible. The Dispose method will take care of calling another unmanaged function to clean up the mess it created.
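A short sketch of that pattern (the file name is arbitrary): the using block guarantees the native handle is released deterministically, even if the read throws.

using System.IO;

// the FileStream constructor ends up calling the Win32 CreateFile function;
// Dispose releases the native file handle, even if an exception is thrown
using (var stream = new FileStream("data.bin", FileMode.Open))
{
    int firstByte = stream.ReadByte();
}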
So basically, you can think of the IDisposable interface as a contract that says: "I am holding on to things the GC cannot see; tell me when you no longer need me, so that I can release them."
The day we have an operating system written in a fully managed language (something like Midori, for example, from Microsoft Research) we will probably no longer need IDisposable, since the GC will be able to completely replace it: it will have knowledge of everything that happens within the system.
The point of IDisposable and Dispose() is that you should clean up unmanaged memory. That's memory .NET didn't allocate, which came from outside sources and thus the GC cannot know about it. So it cannot clean it up for you automatically. Essentially that's precisely the difference between managed and unmanaged memory ;-)
Generally you should implement Dispose() to clean up whatever unmanaged resources your class uses and implement the finalizer to call Dispose() too. The finalizer is just a safeguard, though. It will make sure that those resources get cleaned up eventually, if the caller forgets to dispose of your class properly.
The IDisposable interface is there to provide you a way to clean up un-managed resources. The CLR only manages your managed resources for you.
In other words, the CLR only knows how to clean up the things that it manages. If you open connections to the rest of the system (like opening files, database connections, etc.), those are your responsibility and you need to tell the CLR how you want it to clean those up for you.
It can only take care of memory management for .NET objects. Any code that needs to use unmanaged resources (because it interacts with a C++ library, for example) falls outside the garbage collector's bailiwick. All that code needs to be told when to release its resources the old-fashioned way.
There's no way for the .NET framework (and the GC) to know how to release an unmanaged resource. All it can do is destroy the reference your managed code has to the resource. It is a lot better to actually call .Close() on a connection to your database server (thereby telling it that the connection should go back into the pool of available connections) than to just destroy the reference and let it time out on its own after a set number of seconds.
So whenever possible, use the IDisposable interface when referencing un-managed resources!
IDisposable is used when you don't want the GC to handle that particular artifact. The most common example are connections, or file handles. You don't want to wait for the GC to run before releasing a file, or to close a connection to the database, since you don't know when that will happen.
Most people associate IDisposable with unmanaged resources, which is mostly accurate, but they fail to remember that finalizers are the proper .NET way to handle those. IDisposable provides a way of deterministically disposing, if that is important to your program.
The IDisposable interface is simply a convention to allow you to deterministically dispose of managed and unmanaged resources. It alone doesn't replace garbage collection or do anything involving the garbage collector itself.
It is more apparent with unmanaged resources because, unless these are handled (either in a finalizer or through deterministic disposal), they will remain as a memory leak until the process ends. With managed memory, if you don't deterministically dispose of the items they will be non-deterministically collected (assuming eventual eligibility for collection) by the GC, because they are managed (this is also the reason why the dispose pattern doesn't include managed items in the finalizer route).
IDisposable itself doesn't do anything, it is just a recognised interface (and is supported in code with the using keyword) that people expect to find when handling items that use consumable resources, unmanaged memory, external items, etc.
The CLR cannot possibly know when an external item is finished with. That is entirely dependent on the flow of your application. If you happen to also not know when to dispose an object, the finalizer syntax is useful. If you implement a finalizer on a custom class, the garbage collection process will run this finalizer just prior to final collection. This is your last chance to tidy up after yourself.
We use Dispose in order to release unmanaged resources such as file handles or database connections, because the GC has no information about these unmanaged resources.
You can also use Finalize, but it is not performant: the object gets tracked in the finalization structure, and the GC has to walk that structure at the end of its collection cycle, which costs extra work.

Clean Up Vs Memory Reclaim in .Net

I was reading this MSDN reference:
Although the garbage collector is able to track the lifetime of an object that encapsulates an unmanaged resource, it does not have specific knowledge about how to clean up the resource. For these types of objects, the .NET Framework provides the Object.Finalize method, which allows an object to clean up its unmanaged resources properly when the garbage collector reclaims the memory used by the object. By default, the Finalize method does nothing. If you want the garbage collector to perform cleanup operations on your object before it reclaims the object's memory, you must override the Finalize method in your class.
I understand how the GC works, but this got me thinking: what actually is "clean up"? Is it just reclaiming memory, and if so, why does it have a different name?
Beware that this is not the full story either, as finalizing only occurs when the object is garbage collected. In actual fact you should release all unmanaged resources (file handles, mutexes, unmanaged memory) as soon as possible. You should have a look at the IDisposable interface, which defines the Dispose() function.
Wherever possible your disposer should run the same method to free resources as the finalizer would, but then call GC.SuppressFinalize() to stop it from running again (in the finalizer), as there is a minor performance hit when using objects that implement finalizers.
They used a generic phrase such as "clean up" because other things may need to be done besides just reclaiming memory. I can see how this may be a little confusing, since the quote mentions cleaning up resources and reclaiming memory in the same sentence. In that case, what they mean is that the garbage collector reclaims the memory used by the managed code that actually called into an unmanaged library (a wrapper class, for example), but leaves the unmanaged-specific reclamation process up to the developer (closing file handles, freeing buffers, etc).
As an example, I have a Graphviz wrapper library containing a Graph class. This class wraps the functions used to create graphs, add nodes to them, etc. Internally, this class maintains a pointer to an unmanaged graph structure allocated by Graphviz itself. To the .NET Framework, this is merely an IntPtr and it has no idea how to free it during garbage collection. So, when a managed Graph object is no longer being used, the garbage collector frees up the memory used by the pointer, but not the data it points to. To do this, I have to implement a finalizer that calls the unmanaged function agclose (the Graphviz function that releases the resources used by a graph).
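A minimal sketch of what such a wrapper might look like (the library name in the DllImport and the exact binding are assumptions based on the description above):

using System;
using System.Runtime.InteropServices;

public class Graph : IDisposable
{
    // assumed P/Invoke binding to the native Graphviz cleanup function
    [DllImport("cgraph")]
    private static extern int agclose(IntPtr graph);

    private IntPtr _graph;   // unmanaged graph structure; opaque to the GC

    public void Dispose()
    {
        ReleaseGraph();
        GC.SuppressFinalize(this);
    }

    ~Graph()
    {
        // last-chance cleanup if Dispose was never called
        ReleaseGraph();
    }

    private void ReleaseGraph()
    {
        if (_graph != IntPtr.Zero)
        {
            agclose(_graph);
            _graph = IntPtr.Zero;
        }
    }
}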
An example would be if you wrote a component that used some operating system resource like a named pipe or a memory-mapped file. You could use the finalize operation to release the resource back to the OS.
Cleaning up a non-managed resource could include closing network connections, files, database connections, etc. Of course, it could also include the deallocation of memory for that resource.
"Clean up" here means freeing up any bounded resource (hard disk, network bandwidth, sound card, memory, CPU, etc.). Since .NET holds no managed reference into unmanaged code, it lets you do the job yourself at the right moment, via the Finalize() method, before the GC reclaims the object. If you don't clean up, you end up with orphaned unmanaged state, in an unknown condition, that is still using resources. It's better to implement IDisposable and clean up by calling Dispose() on your object.

Release resources in .Net C#

I'm new to C# and .NET, and have been reading around about it.
I need to know why and when do I need to release resources? Doesn't the garbage collector take care of everything? When do I need to implement IDisposable, and how is it different from destructor in C++?
Also, if my program is rather small i.e. a screensaver, do I need to care about releasing resources?
Thanks.
The garbage collector is only aware of memory. That's fine for memory, because one bit of memory is pretty much as good as any other, so long as you've got enough of it. (This is all modulo cache coherency etc.)
Now compare that with file handles. The operating system could have plenty of room to allocate more file handles - but if you've left a handle open to a particular file, no-one else will be able to open that particular file for writing. You should tell the system when you're done with a handle - usually by closing the relevant stream - as soon as you're finished, and do so in a way that closes it even if an exception is thrown. This is usually done with a using statement, which is like a try/finally with a call to Dispose in the finally block.
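Concretely, a using statement is roughly shorthand for the following try/finally (a sketch of the compiler's expansion, with an arbitrary file name):

using System.IO;

var stream = File.OpenRead("data.bin");
try
{
    // use the stream here
}
finally
{
    if (stream != null)
        stream.Dispose();   // runs even if an exception was thrown above
}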
Destructors in C++ are very different from .NET finalizers, as C++ destructors are deterministic - they're automatically called when the relevant variable falls out of scope, for example. Finalizers are run by the garbage collector at some point after an object is no longer referenced by any "live" objects, but the timing is unpredictable. (In some rare cases, it may never happen.)
You should implement IDisposable yourself if you have any clean-up which should be done deterministically - typically that's the case if one of your instance variables also implements IDisposable. It's pretty rare these days to need to implement a finalizer yourself - you usually only need one if you have a direct hold on operating system handles, usually in the form of IntPtr; SafeHandle makes all of this a lot easier and frees you from having to write the finalizer yourself.
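A sketch of that SafeHandle approach (the native library and CloseThing import below are hypothetical): SafeHandle carries a critical finalizer of its own, so the wrapper needs none.

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

internal sealed class ThingHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    private ThingHandle() : base(ownsHandle: true) { }

    // hypothetical native function that releases the handle
    [DllImport("thinglib")]
    private static extern bool CloseThing(IntPtr handle);

    protected override bool ReleaseHandle()
    {
        // invoked by SafeHandle's own critical finalizer, or earlier via Dispose
        return CloseThing(handle);
    }
}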
Basically, you need to worry about releasing resources to unmanaged code - anything outside the .NET framework. For example, a database connection or a file on the OS.
The garbage collector deals with managed code - code in the .NET framework.
Even a small application may need to release unmanaged resources, for example it may write to a local text file. When you have finished with the resource you need to ensure the object's Dispose method is called. The using statement simplifies the syntax:
using (TextWriter w = File.CreateText("test.txt"))
{
    w.WriteLine("Test Line 1");
}
The TextWriter object implements the IDisposable interface so as soon as the using block is finished the Dispose method is called and the object can be garbage collected. The actual time of collection cannot be guaranteed.
If you create your own classes that need to be disposed of properly, you will need to implement the IDisposable interface and the Dispose pattern yourself. In a simple application you probably won't need to do this; if you do, this is a good resource.
Resources are of two kinds - managed and unmanaged. Managed resources will be cleaned up by the garbage collector if you let it - that is, if you release any reference to the object. However, the garbage collector does not know how to release unmanaged resources that a managed object holds - file handles and other OS resources, for example.
IDisposable is best practice when there's a managed resource you want released promptly (like a database connection), and vital when there are unmanaged resources which you need to have released. The typical pattern:
public void Dispose() { Dispose(true); GC.SuppressFinalize(this); }
protected virtual void Dispose(bool disposing) { /* release resources here */ }
This lets you ensure that unmanaged resources are released whether by the Dispose method or by object finalisation.
You don't need to release memory in managed objects like strings or arrays - that is handled by the garbage collector.
You should clean up operating system resources and some unmanaged objects when you have finished using them. If you open a file you should always remember to close that file when you have finished using it. If you open a file exclusively and forget to close, the next time you try to open that file it might still be locked. If something implements IDisposable, you should definitely consider whether you need to close it properly. The documentation will usually tell you what the Dispose method does and when it should be called.
If you do forget, the garbage collector will eventually run the finalizer which should clean up the object correctly and release the unmanaged resources, but this does not happen immediately after the object becomes eligible for garbage collection, and it in fact might not run at all.
Also it is useful to know about the using statement.
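A small illustration of the locking behaviour described above (the file name is arbitrary): while the first, undisposed stream holds an exclusive handle, a second exclusive open fails.

using System;
using System.IO;

class LockDemo
{
    static void Main()
    {
        // open the file exclusively; no sharing allowed
        var first = new FileStream("report.txt", FileMode.OpenOrCreate, FileAccess.Read, FileShare.None);

        try
        {
            // while 'first' is undisposed, this second exclusive open throws IOException
            using (var second = new FileStream("report.txt", FileMode.Open, FileAccess.Read, FileShare.None))
            {
            }
        }
        catch (IOException)
        {
            Console.WriteLine("File is still locked by the first, undisposed stream.");
        }

        first.Dispose();   // now the handle is released and the file can be reopened
    }
}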
The garbage collector releases MEMORY and cleans up - through finalization - the elements it removes. BUT it only does so when it is under memory pressure.
This is seriously idiotic for resources where I may want to release them explicitly. Save-to-file, for example, is supposed to open the file, write out the data and then close the file, so it can be copied away by the user if he wants, WITHOUT waiting for the GC to come around and release the file object, which may not happen for hours.
You only need to worry about precious resources. Most objects you create while programming do not fit into this category. As you say, the garbage collector will take care of these.
What you do need to be mindful of is objects that implement IDisposable, which is an indication that the resources it owns are precious and should not wait for the finalizer thread to be cleaned up. The only time you would need to implement IDisposable is on classes that own a) objects that implement IDisposable (such as a file stream), or b) unmanaged resources.

Why are there finalizers in java and c#?

I'm not quite understanding why there are finalizers in languages such as java and c#. AFAIK, they:
are not guaranteed to run (in java)
if they do run, they may run an arbitrary amount of time after the object in question becomes a candidate for finalization
and (at least in java), they incur an amazingly huge performance hit to even stick on a class.
So why were they added at all? I asked a friend, and he mumbled something about "you want to have every possible chance to clean up things like DB connections", but this strikes me as a bad practice. Why should you rely on something with the above described properties for anything, even as a last line of defense? Especially when, if something similar was designed into any API, said API would get laughed out of existence.
Well, they are incredibly useful, in certain situations.
In the .NET CLR, for example:
are not guaranteed to run
The finalizer will always, eventually, run, if the program isn't killed. It's just not deterministic as to when it will run.
if they do run, they may run an arbitrary amount of time after the object in question becomes a candidate for finalization
This is true, however, they still run.
In .NET, this is very, very useful. It's quite common in .NET to wrap native, non-.NET resources into a .NET class. By implementing a finalizer, you can guarantee that the native resources are cleaned up correctly. Without this, the user would be forced to call a method to perform the cleanup, which dramatically reduces the effectiveness of the garbage collector.
It's not always easy to know exactly when to release your (native) resources- by implementing a finalizer, you can guarantee that they will get cleaned up correctly, even if your class is used in a less-than-perfect manner.
and (at least in java), they incur an amazingly huge performance hit to even stick on a class
Again, the .NET CLR's GC has an advantage here. If you implement the proper interface (IDisposable), AND if the developer implements it correctly, you can prevent the expensive portion of finalization from occurring. The way this is done is that the user-defined cleanup method can call GC.SuppressFinalize, which bypasses the finalizer.
This gives you the best of both worlds - you can implement a finalizer, and IDisposable. If your user disposes of your object correctly, the finalizer has no impact. If they don't, the finalizer (eventually) runs and cleans up your unmanaged resources, but you run into a (small) performance loss as it runs.
Hmya, you are getting a picture painted here that's a bit too rosy. Finalizers are not guaranteed to run in .NET either. Typical mishaps are a finalizer that throws an exception or a time-out on the finalizer thread (2 seconds).
That was a problem when Microsoft decided to provide .NET hosting support in SQL Server. The kind of application where restarting the app to solve resource leaks isn't considered a viable workaround. .NET 2.0 acquired critical finalizers, enabled by deriving from the CriticalFinalizerObject class. The finalizer of such a class must adhere to the rulez of constrained execution regions (CERs), essentially a region of code where exceptions are suppressed. The kind of things you can do in a CER are very limited.
Back to your original question: finalizers are necessary to release operating system resources other than memory. The garbage collector manages memory very well but doesn't do anything to release pens, brushes, files, sockets, windows, pipes, etc. When an object uses such a resource, it must make sure to release the resource after it is done with it. Finalizers ensure that happens, even when the program forgot to do so. You almost never write a class with a finalizer yourself; operating system resources are wrapped by classes in the framework.
The .NET framework also has a programming pattern to ensure such a resource is released early so the resource doesn't linger around until the finalizer runs. All classes that have finalizers also implement the IDisposable.Dispose() method, allowing your code to release a resource explicitly. This is often forgotten by a .NET programmer but that doesn't typically cause problems because the finalizer ensures it will eventually be done. Many .NET programmers have lost hours of sleep worrying whether or not all Dispose() calls are taken care of and massive numbers of threads have been started about it on forums. Java folks must be a happier lot.
Following up on your comment: exceptions and timeouts in the finalizer thread are something that you don't have to worry about. Firstly, if you find yourself writing a finalizer, take a deep breath and ask yourself if you're on the Right Path. Finalizers are for framework classes; you should be using such a class to use an operating system resource, and you'll get the finalizer built into that class for free. All the way down to the SafeHandle classes, which have a critical finalizer.
Secondly, finalizer thread failures are gross program failures. Similar to getting an OutOfMemory exception or tripping over the power cord and unplugging the machine. There isn't anything you can do about them, other than fixing the bug in your code or re-route the cable. It was important for Microsoft to design critical finalizers, they can't rely on all programmers that write .NET code for SQL Server to get that code right. If you fumble a finalizer yourself then there is no such liability, it will be you that gets the call from the customer, not Microsoft.
In Java, finalizers exist to allow for the cleanup of external resources (things that exist outside of the JVM and can't be garbage collected when the 'parent' Java object is). This has always been rare. One example might be if you are interfacing with some custom hardware.
I think the reason that finalizers in java aren't guaranteed to run is that they might not have a chance to do so at program termination.
One thing you might do with a finalizer in 'pure' Java is use it to test termination conditions - for example, to check that all connections are closed and report an error if they are not. You aren't guaranteed that the error will always be caught, but it will likely be caught at least some of the time, which is enough to reveal a bug.
Most java code has no call for finalizers.
If you read the JavaDoc for finalize() it says it is "Called by the garbage collector on an object when garbage collection determines that there are no more references to the object. A subclass overrides the finalize method to dispose of system resources or to perform other cleanup."
http://java.sun.com/javase/6/docs/api/java/lang/Object.html#finalize
So that's the "why". I guess you can argue whether their implementation is effective.
The best use I've found for finalize() is to detect bugs with freeing pooled resources. Most leaked objects will get garbage collected eventually and you can generate debug information.
class MyResource {
    private Throwable allocatorStack;

    public MyResource() {
        allocatorStack = new RuntimeException("trace to allocator");
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            System.out.println("Bug!");
            allocatorStack.printStackTrace();
        } finally {
            super.finalize();
        }
    }
}
They're meant for freeing up native resources (e.g. sockets, open files, devices) that can't be released until all references to the object have been broken, which is something that a particular caller would (in general) have no way of knowing. The alternative would be subtle, impossible-to-trace resource leaks...
Of course, in many cases as the application author you'll know that there's only one reference to the DB connection (for example); in which case finalizers are no substitute for closing it properly when you know you're finished with it.
In .NET land, it is not guaranteed when they run. But they will run.
Are you referring to Object.Finalize?
According to MSDN, "In C# code, Object.Finalize cannot be called or overridden". In fact, they recommend using the Dispose method because it is more controllable.
There's an additional complication with finalizers in .NET. If the class has a finalizer and does not get Dispose()'d, or Dispose() does not suppress the finalizer, the garbage collector will defer collecting until after compacting generation 2 memory (the last generation), so the object is "sort of" but not quite a memory leak. (Yes, it will get cleaned up eventually, but quite possibly not until application termination.)
As others have mentioned, if an object holds non-managed resources, it should implement the IDisposable pattern. Developers should be aware that if an object implements IDisposable, then its Dispose() method should always be called. C# provides a way to automate this with the using statement:
using (myDataContext myDC = new myDataContext())
{
    // code using the data context here
}
The using block automatically calls Dispose() on block exit, even on exit by return or by an exception being thrown. The using statement only works with objects that implement IDisposable.
And beware another confusion point: Dispose() is an opportunity for an object to release resources, but it does not actually release the Dispose()'d object. .NET objects are eligible for garbage collection when there are no active references to them - technically, when they can't be reached by any chain of object references starting from the AppDomain.
The equivalent of a destructor in C++ is a finalizer in Java.
They are invoked when the life cycle of an object is about to end.

What is IDisposable for?

If .NET has garbage collection then why do you have to explicitly call IDisposable?
Garbage collection is for memory. You need to dispose of non-memory resources - file handles, sockets, GDI+ handles, database connections etc. That's typically what underlies an IDisposable type, although the actual handle can be quite a long way down a chain of references. For example, you might Dispose an XmlWriter which disposes a StreamWriter it has a reference to, which disposes the FileStream it has a reference to, which releases the file handle itself.
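The same chaining applies to your own types: if a class owns an IDisposable field, its Dispose should pass the call down the chain. A minimal sketch (the LogSession type is invented for the example):

using System;
using System.IO;

public class LogSession : IDisposable
{
    private readonly StreamWriter _writer = new StreamWriter("session.log");

    public void Log(string message) => _writer.WriteLine(message);

    public void Dispose()
    {
        // disposing the StreamWriter disposes the FileStream it owns,
        // which in turn releases the native file handle
        _writer.Dispose();
    }
}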
Expanding a bit on other comments:
The Dispose() method should be called on all objects that have references to un-managed resources. Examples of such would include file streams, database connections, etc. A basic rule that works most of the time is: "if the .NET object implements IDisposable then you should call Dispose() when you are done with the object."
However, some other things to keep in mind:
Calling dispose does not give you control over when the object is actually destroyed and memory released. GC handles that for us and does it better than we can.
Dispose cleans up all native resources, all the way down the stack of base classes as Jon indicated. Then it calls SuppressFinalize() to indicate that the object is ready to be reclaimed and no further work is needed. The next run of the GC will clean it up.
If Dispose is not called, then the GC finds the object as needing to be cleaned up, but Finalize must be called first to make sure resources are released; that request for Finalize is queued up and the GC moves on, so the lack of a call to Dispose forces one more GC to run before the object can be cleaned up. This causes the object to be promoted to the next "generation" of GC. This may not seem like a big deal, but in a memory-pressured application, promoting objects up to higher generations of GC can push a high-memory application over the wall to being an out-of-memory application.
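That generational promotion can be observed directly; a small demo (it shows promotion in general, not the finalization queue specifically), using GC.GetGeneration to report the generation an object currently lives in:

using System;

class Program
{
    static void Main()
    {
        var obj = new object();
        Console.WriteLine(GC.GetGeneration(obj));   // 0: freshly allocated

        GC.Collect();                               // force a collection; obj survives (still rooted)
        Console.WriteLine(GC.GetGeneration(obj));   // typically 1: promoted by surviving

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));   // typically 2: promoted again
    }
}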
Do not implement IDisposable in your own objects unless you absolutely need to. Poorly implemented or unnecessary implementations can actually make things worse instead of better. Some good guidance can be found here:
Implementing a Dispose Method
Or read that whole section of MSDN on Garbage Collection
Because objects sometimes hold resources besides memory. The GC releases the memory; IDisposable is so you can release anything else.
Because you want to control when the resources held by your object get cleaned up.
See, GC works, but it does so when it feels like it, and even then, the finalisers you add to your objects will get called only after 2 GC collections. Sometimes, you want to clean those objects up immediately.
This is when IDisposable is used. By calling Dispose() explicitly (or using the syntactic sugar of a using block) you can get your object to clean itself up in a standard way (i.e. you could have implemented your own cleanup() call and called that explicitly instead).
Example resources you would want to clean up immediately are: database handles, file handles, network handles.
In order to use the using keyword the object must implement IDisposable. http://msdn.microsoft.com/en-us/library/yh598w02(VS.71).aspx
The IDisposable interface is often described in terms of resources, but most such descriptions fail to really consider what "resource" really means.
Some objects need to ask outside entities to do something on their behalf, to the detriment of other entities, until further notice. For example, an object encompassing a file stream may need to ask a file system (which may be anywhere in the connected universe) to grant exclusive access to a file. In many cases, the object's need for the outside entity will be tied to outside code's need for the object. Once client code has done everything it's going to do with the aforementioned file stream object, for example, that object will no longer need to have exclusive access (or any access for that matter) to its associated file.
In general, an object X which asks an entity to do something until further notice incurs an obligation to deliver such notice, but can't deliver such notice as long as X's client might need X's services. The purpose of IDisposable is to provide a uniform way of letting objects know that their services will no longer be required, so that they can notify entities (if any) that were acting on their behalf that their services are no longer required. The code which calls IDisposable need neither know nor care about what (if any) services an object has requested from outside entities, since IDisposable merely invites an object to fulfill obligations (if any) to outside entities.
To put things in terms of "resources", an object acquires a resource when it asks an outside entity to do something on its behalf (typically, though not necessarily, granting exclusive use of something) until further notice, and releases a resource when it tells that outside entity its services are no longer required. Code that acquires a resource doesn't gain a "thing" so much as it incurs an obligation; releasing a resource doesn't give up a "thing", but instead fulfills an obligation.
