In which real-life scenario have you used the garbage collector? - C#

I know all the theory: I know what the GC is, when to call Dispose, and when Finalize gets called.
I would like to know in which scenarios you have actually used all of this in your live projects.
I mean: when the project manager or client insisted that you clean up memory? When you found errors in the program, and what kind of error messages or error logs did you see? When your program crashed because of runaway memory? Or some other scenario?

You should not care when and how the GC is called. It is intelligent enough to know when to run and which objects to free.
You should also Dispose, either manually or via "using", all objects that implement IDisposable. You will then prevent many errors with unmanaged resources, like files.
And if you are running out of memory, then there is something wrong with your algorithm or the code itself. Manually calling GC.Collect is heavily discouraged, especially in production code.

As a rule of thumb, you need to implement IDisposable if you aggregate a disposable object, or if you hold on to an unmanaged resource. Cleanup is done differently for these two scenarios. Otherwise, keep it simple and don't litter your code with dotnetisms.
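For the aggregation case, here is a minimal sketch, assuming a hypothetical LogWriter class that owns a FileStream; disposing the wrapper simply forwards the call to what it owns:

using System;
using System.IO;
using System.Text;

// Sketch of the aggregation scenario: we own a disposable (FileStream),
// so we implement IDisposable and forward Dispose to it. No finalizer
// is needed; the stream has its own safety net.
public sealed class LogWriter : IDisposable
{
    private readonly FileStream _stream;  // the aggregated disposable
    private bool _disposed;

    public LogWriter(string path)
    {
        _stream = new FileStream(path, FileMode.Append);
    }

    public void Write(string line)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(line + Environment.NewLine);
        _stream.Write(bytes, 0, bytes.Length);
    }

    public void Dispose()
    {
        if (_disposed) return;
        _stream.Dispose();                // forward disposal to what we own
        _disposed = true;
    }
}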
Similar question here
Memory leaks cause crashed servers. Cause and effect.

Resource management is something you just have to do. It is not 'my client insisted I free my memory'. It is simply good practice. Not every application can just crash and be restarted by the user - there is the odd server application out there.
If you start building your own programming libraries, resource management and concurrency should be your top priorities, otherwise you will never be up to speed implementing any solution.
hth
Mario

Related

Tracking Down a .NET Windows Service Memory Leak

Before installing my Windows service in production, I was looking for reliable tests I could perform to make sure my code doesn't contain memory leaks.
However, all I could find on the net was using Task Manager to look at used memory, or paid memory-profiler tools.
From my understanding, looking at Task Manager is not really helpful and cannot confirm memory leakage (in case there is one).
How can I confirm whether there is a memory leak or not?
Are there any free tools to find the source of memory leaks?
Note: I'm using .NET Framework 4.6 and Visual Studio 2015 Community.
Well, you can use Task Manager.
GC apps can leak memory, and it will show up there.
But...
Free tool - ".NET CLR Profiler"
There is a free tool, and it's from Microsoft, and it's awesome. This is a must-use for all programs that leak references. Search MS' site.
Leaking references means you forgot to set object references to null, or they never leave scope; this is almost as likely to occur in garbage-collected languages as elsewhere - lists that build up and never get cleared, event handlers still pointing to delegates, and so on.
It's the GC equivalent of a memory leak and has the same result. This program tells you which references are taking up tons of memory - and you will know whether it's supposed to be that way or not, and if not, you can go find them and fix the problem!
It even has a cool visualization of what objects allocate what memory (so you can track down mistakes). I believe there are youtubes of this if you need an explanation.
Wikipedia page with download links...
NOTE: You will likely have to run your app not as a service to use this. It starts first and then runs your app. You can do this with TopShelf, or by putting the guts in a DLL that runs from an EXE that implements the service integrations (the service-host pattern), as sketched below.
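Here is a minimal sketch of that service-host split, assuming hypothetical Worker and WorkerService names; the work lives in a class that can be driven either by the Windows-service plumbing or by a plain console entry point (which is what a profiler can launch):

using System;
using System.ServiceProcess;   // reference System.ServiceProcess.dll

// The real work, independent of service plumbing.
public class Worker
{
    public void Start() { /* begin the service's work */ }
    public void Stop()  { /* stop cleanly */ }
}

// Thin service wrapper used in production.
public class WorkerService : ServiceBase
{
    private readonly Worker _worker = new Worker();
    protected override void OnStart(string[] args) { _worker.Start(); }
    protected override void OnStop() { _worker.Stop(); }
}

public static class Program
{
    public static void Main(string[] args)
    {
        if (Environment.UserInteractive)
        {
            // Run as a normal process, e.g. under the CLR Profiler.
            var worker = new Worker();
            worker.Start();
            Console.WriteLine("Running; press Enter to stop.");
            Console.ReadLine();
            worker.Stop();
        }
        else
        {
            ServiceBase.Run(new WorkerService());
        }
    }
}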
Although managed code implies no direct memory management, you still have to manage your instances. Those instances 'claim' memory, and it is all about the usage of these instances - keeping them alive when you don't expect them to be.
Just one of many examples: incorrect usage of disposable classes can result in a lot of instances claiming memory. For a Windows service, a slow but steady increase in instances can eventually result in too much memory usage.
Yes, there is a tool to analyze memory leaks. It just isn't free. However, you might be able to identify your problem within the 7-day trial.
I would suggest taking a look at the .NET Memory Profiler.
It is great for analyzing memory leaks during development. It uses the concept of snapshots to compare new instances, disposed instances, etc. This is a great help in understanding how your service uses its memory. You can then dig deeper into why new instances get created or kept alive.
Yes, you can test to confirm whether memory leaks are introduced.
However, out of the box this will not be very useful, because no one can anticipate what will happen at runtime. The tool can analyze your app for common issues, but this is not guaranteed.
However, you can use this tool to integrate memory-consumption checks into your unit-test framework, such as NUnit or MSTest, as sketched below.
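A minimal sketch of such a check, assuming NUnit; the 10 MB threshold and the test names are arbitrary placeholders, and a check like this is inherently coarse:

using System;
using NUnit.Framework;

[TestFixture]
public class MemoryUsageTests
{
    [Test]
    public void ProcessingBatchDoesNotRetainMemory()
    {
        // Force a full collection before and after so we compare
        // actually-reachable memory, not transient garbage.
        long before = GC.GetTotalMemory(forceFullCollection: true);

        // ... exercise the code under test here ...

        long after = GC.GetTotalMemory(forceFullCollection: true);
        Assert.Less(after - before, 10 * 1024 * 1024,
            "more than 10 MB remained reachable after a full collection");
    }
}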
Of course a memory profiler is the first kind of tool to try, but it will only tell you whether your instances keep increasing. You still want to know whether it is normal that they are increasing. Also, once you have established that some instances keep increasing for no good reason (meaning you have a leak), you will want to know precisely which call trees lead to their allocation, so that you can troubleshoot the code that allocates them and fix it so that it does eventually release them.
Here is some of the knowledge I have collected over the years in dealing with such issues:
Test your service as a regular executable as much as possible. Trying to test the service as an actual service just makes things too complicated.
Get in the habit of explicitly undoing everything that you do at the end of the scope of that thing which you are doing. For example, if you register an observer to the event of some observee, there should always be some point in time (the disposal of the observer or the observee?) at which you de-register it. In theory, garbage collection should take care of that by collecting the entire graph of interconnected observers and observees, but in practice, if you don't kick the habit of forgetting to undo things that you do, you get memory leaks.
Use IDisposable as much as possible, and make your destructors report if someone forgot to invoke Dispose(). More about this method here: Mandatory disposal vs. the "Dispose-disposing" abomination Disclosure: I am the author of that article.
Have regular checkpoints in your program where you release everything that should be releasable (as if the program is performing an orderly shutdown in order to terminate) and then force a garbage collection to see whether you have any leaks.
If instances of some class appear to be leaking, use the following trick to discover the precise calling tree that caused their allocation: within the constructor of that class, allocate an exception object without throwing it, obtain the stack trace of the exception, and store it. If you discover later that this object has been leaked, you have the necessary stack trace. Just don't do this with too many objects, because allocating an exception and obtaining the stack trace from it is ridiculously slow, only Microsoft knows why.
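In C#, that constructor trick can be sketched like this; the Widget class name is a hypothetical stand-in for the leaking type, and Environment.StackTrace is used because a .NET exception only carries a stack trace once it has been thrown:

using System;
using System.Diagnostics;

public class Widget
{
    private readonly string _allocatorStack;

    public Widget()
    {
        // Capturing a stack trace is expensive; only do this while
        // actively hunting a leak in this type.
        _allocatorStack = Environment.StackTrace;
    }

    ~Widget()
    {
        // If instances should have been released explicitly, a
        // finalizer that still runs means one leaked, and the stored
        // trace tells you which call tree allocated it.
        Debug.WriteLine("Leaked Widget, allocated at:" + Environment.NewLine
            + _allocatorStack);
    }
}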
You could try the free Memoscope memory profiler
https://github.com/fremag/MemoScope.Net
I do not agree that you can trust Task Manager to check whether you have a memory leak or not. The problem with a garbage collector is that it can decide, based on heuristics, to keep memory after a memory spike and not return it to the OS. You might have a 2 GB commit size while 90% of it is free.
You should use VMMap to check during your tests what types of memory your process contains. You do not only have the managed heap; there are also the unmanaged heap, private bytes, stacks (thread leaks), shared files, and much more that need to be tracked.
VMMap also has a command-line interface, which makes it possible to create snapshots at regular intervals that you can examine later. If you see memory growth, you can find out which type of memory is leaking; depending on the leak type, different debugging tools and approaches are needed.
I would not say that the garbage collector is infallible. There are times when it fails, and the failures are not so straightforward. Memory streams are a common cause of memory leaks: you can open them in one context and they may never get closed, even though the usage is wrapped in a using statement (the very definition of a disposable object that should be cleaned up immediately after its usage falls out of scope). If you are experiencing crashes due to running out of memory, Windows creates dump files that you can sift through.
This is by no means fun or easy, and it is quite tedious, but it tends to be your best bet.
Common areas where it is easy to create memory leaks are anything using the System.Drawing DLL, memory streams, and serious multi-threading.
If you use Entity Framework and a DI pattern, perhaps using Castle Windsor, you can easily get memory leaks.
The main thing to do is use the using () { } statement wherever you can, to automatically mark objects as disposed.
Also, you want to turn off automatic change tracking in Entity Framework wherever you are only reading and not writing. It is best to isolate your writes: use a using () { } at that point, get a DbContext (with tracking on), and write your data, as sketched below.
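A minimal sketch of that read/write split, assuming EF 6 with hypothetical Order and MyDbContext types:

using System.Data.Entity;   // EF 6; brings AsNoTracking into scope
using System.Linq;

// Hypothetical model and context, just for illustration.
public class Order
{
    public int Id { get; set; }
    public bool IsOpen { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class OrderStore
{
    public static void ReadThenWrite()
    {
        using (var db = new MyDbContext())
        {
            // Reads: AsNoTracking keeps EF from holding every
            // materialized entity alive in the change tracker.
            var open = db.Orders.AsNoTracking()
                                .Where(o => o.IsOpen)
                                .ToList();
        }

        using (var db = new MyDbContext())
        {
            // Writes: a short-lived, tracked context, disposed as
            // soon as SaveChanges completes.
            db.Orders.Add(new Order { IsOpen = true });
            db.SaveChanges();
        }
    }
}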
If you want to investigate what is on the heap, the best tool I've used is Red Gate ANTS: http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/solving-memory-problems/getting-started. Not cheap, but it works.
However, by using the using () { } pattern wherever you can (don't make a static or singleton DbContext, and never keep one context through a massive loop of updates - dispose of them as often as you can!), you will find memory isn't often an issue.
Hope this helps.
Unless you're dealing with unmanaged code, I would be so bold as to say you don't have to worry about memory leaks. Any unreferenced object in managed code will be removed by the garbage collector, and if you actually found a memory leak inside the .NET Framework itself, you should consider yourself very lucky (well, unlucky). You don't have to worry about memory leaks in the classic sense.
However, you can still encounter ever-growing memory usage if references to objects are never released. For example, say you keep an internal log structure and just keep adding entries to a log list. Every entry then still has a reference from the log list and will therefore never be collected.
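A minimal sketch of that log-list situation; the InMemoryLog name and the cap are hypothetical:

using System;
using System.Collections.Generic;

// Nothing here is a leak in the C++ sense, but the static list keeps
// every entry reachable forever, so the GC can never collect them.
public static class InMemoryLog
{
    private static readonly List<string> Entries = new List<string>();

    public static void Write(string message)
    {
        Entries.Add(DateTime.UtcNow + " " + message);

        // One mitigation: cap the list so old entries become
        // unreachable and therefore collectible.
        const int MaxEntries = 10000;
        if (Entries.Count > MaxEntries)
            Entries.RemoveAt(0);
    }
}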
From my experience, you can definitely use Task Manager as an indicator of whether your system has a growth problem: if memory usage steadily keeps rising, you know you have an issue. If it grows to a point but eventually converges to a certain size, it has reached its operating threshold.
If you want a more detailed view of managed memory usage, you can download Process Explorer, developed by Microsoft. It is still quite blunt, but it gives a somewhat better statistical view than Task Manager.

C# - Automatically free memory at the end of a block

So I have a program whose memory usage increases by 5 MB per second. Since only 2 GB can be allocated per application, I hit my limit about 5 minutes after starting the program and get a System.OutOfMemoryException.
I have "done my homework" (hopefully well enough :D) and know that at the end of a block, the variables within that block are not disposed.
Ultimately, I need a way to automatically free the memory used by the variables in each block after it ends, to stop the OutOfMemoryException from occurring.
P.S.: I realize there is a lot of confusion on this topic, so if I made a mistake in my question, please feel free to correct me. But mostly, I would like to get this exception out of the way.
EDIT
Alright, something weird happened. I copied the entire project from my desktop (where it doesn't work) to my laptop, and apparently it works just fine on the laptop! Memory only increases by roughly 500 KB per second and is automatically freed one second later. I have no idea why; it's exactly the same project and no code was altered.
Does anyone know why this happens?
Use these variables in a using statement and call GC.Collect() when done.
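A sketch of that suggestion; Frame and LoadFrame are hypothetical stand-ins for whatever is allocating 5 MB per second, and note that other answers here discourage GC.Collect() in production code:

using System;

public static class FrameLoop
{
    public static void ProcessOnce()
    {
        using (Frame frame = LoadFrame())  // Frame implements IDisposable
        {
            // ... work with the frame ...
        }                                   // Dispose runs here

        // Forcing a collection is rarely appropriate, but it makes
        // the effect of the scoping above immediately visible.
        GC.Collect();
    }

    private static Frame LoadFrame()
    {
        return new Frame();
    }

    private sealed class Frame : IDisposable
    {
        private readonly byte[] _pixels = new byte[5 * 1024 * 1024];
        public void Dispose() { /* release whatever the frame holds */ }
    }
}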
In my experience, the most common way to leak memory in managed code is holding on to references for too long, or not realizing how references are handled. A live reference will prevent garbage collection, no matter how diligently your code disposes things.
Here is a decent article on how to figure out just what is leaking and where those references are, and you probably want to read up on the 'additional background' links as well.
Use ANTS Profiler or CLRProfiler to determine which objects are taking up space and which methods are creating and holding references to those objects.
Probably this discussion will help you understand what is actually happening with memory management in .NET: Finalize/Dispose pattern in C#.
And just in case, this is another post on how to dispose of objects: the correct technique for releasing a socket/event/unmanaged code with the dispose/finalize pattern.
Maybe your objects should implement IDisposable so you can clean up resources when you no longer need them. This MSDN article shows how to implement it:
Digging into IDisposable

Cleanup before termination?

This question has been bugging me for a while: I've read in MSDN's DirectX article the following:
The destructor (of the application) should release any (Direct2D) interfaces stored...
DemoApp::~DemoApp()
{
SafeRelease(&m_pDirect2dFactory);
SafeRelease(&m_pRenderTarget);
SafeRelease(&m_pLightSlateGrayBrush);
SafeRelease(&m_pCornflowerBlueBrush);
}
Now, if all of the application's data gets released/deallocated at termination (source), why would I go through the trouble of writing a function to release each of them individually? It makes no sense!
I keep seeing this more and more over time, and it's really bugging me.
The MSDN article above is the first place I encountered this, so it made sense to mention it ahead of the other cases.
Well, since so far I didn't actually ask my questions, here they are:
Do I need to release something before termination? (Please explain why.)
Why did the author of the MSDN article choose to do that?
Does the answer differ between native and managed code? I.e., do I need to make sure everything is disposed at the end of a C# program? (I don't know about Java, but if disposal exists there, I'm sure other readers would appreciate an answer for that too.)
Thank you!
You don't need to worry about managed content when your application is terminating. When the entire process's memory is torn down all of that goes with it.
What matters is unmanaged resources.
If you have a lock on a file and the managed wrapper for the file handler is taken down when the application closes without you ever releasing the lock, you've now thrown away the only key that would allow access to the file.
If you have an internal buffer (say for logging errors) you may want to flush it before the application terminates. Not doing so would potentially mean the fatal error that caused the application to end isn't logged. That could be...bad.
If you have network connections open you'll want to close them. If you don't then the OS likely won't do it for you (at least not for a while; eventually it might notice the inactivity) and that's rather rude to whoever's on the other end. They may be continuing to listen for a response, or continuing to send you information, not knowing that you're not there anymore.
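For the buffer case above, a minimal sketch of flushing before termination, assuming a hypothetical process-lifetime log writer; note that ProcessExit fires on orderly shutdown but not on a hard crash:

using System;
using System.IO;

public static class Program
{
    private static readonly StreamWriter Log = new StreamWriter("service.log");

    public static void Main()
    {
        // Flush and close the log on normal termination so the last
        // (possibly fatal) entries aren't lost in the buffer.
        AppDomain.CurrentDomain.ProcessExit += (s, e) => Log.Dispose();

        Log.WriteLine("starting up");
        // ... application work ...
    }
}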
Now, if all of the application's data is getting released/deallocated at the termination (source) why would I go through the trouble to make a function in-order-to/and release them individually?
A number of reasons. One immediate reason is because not all resources are memory. Only memory gets reclaimed at process termination. If some of your resources are things like shared mutexes or file handles, not releasing those resources could mess up other programs or subsequent runs of your program.
I think there's a more important, more fundamental reason, though. Not cleaning up after yourself is just lazy, sloppy programming. If you are lazy and sloppy in cleanup at termination, are you lazy and sloppy at other times? If your tendency is to be lazy and sloppy and only override that tendency in specific areas where you're cognizant of potential problems, then your tendency is to be lazy and sloppy. What if there are potential problems you're not cognizant of? How can you rely on your overall philosophy of lazy, sloppy programming to write correct, robust programs?
Don't be that guy. Clean up after yourself.

Why are there finalizers in java and c#?

I'm not quite understanding why there are finalizers in languages such as java and c#. AFAIK, they:
are not guaranteed to run (in java)
if they do run, they may run an arbitrary amount of time after the object in question becomes a candidate for finalization
and (at least in java), they incur an amazingly huge performance hit to even stick on a class.
So why were they added at all? I asked a friend, and he mumbled something about "you want to have every possible chance to clean up things like DB connections", but this strikes me as a bad practice. Why should you rely on something with the above described properties for anything, even as a last line of defense? Especially when, if something similar was designed into any API, said API would get laughed out of existence.
Well, they are incredibly useful, in certain situations.
In the .NET CLR, for example:
are not guaranteed to run
The finalizer will always, eventually, run, if the program isn't killed. It's just not deterministic as to when it will run.
if they do run, they may run an arbitrary amount of time after the object in question becomes a candidate for finalization
This is true, however, they still run.
In .NET, this is very, very useful. It's quite common in .NET to wrap native, non-.NET resources into a .NET class. By implementing a finalizer, you can guarantee that the native resources are cleaned up correctly. Without this, the user would be forced to call a method to perform the cleanup, which dramatically reduces the effectiveness of the garbage collector.
It's not always easy to know exactly when to release your (native) resources- by implementing a finalizer, you can guarantee that they will get cleaned up correctly, even if your class is used in a less-than-perfect manner.
and (at least in java), they incur an amazingly huge performance hit to even stick on a class
Again, the .NET CLR's GC has an advantage here. If you implement the proper interface (IDisposable), AND the developer uses it correctly, you can prevent the expensive portion of finalization from occurring. The way this is done is that the user-defined cleanup method calls GC.SuppressFinalize, which bypasses the finalizer.
This gives you the best of both worlds: you can implement a finalizer and IDisposable. If your user disposes of your object correctly, the finalizer has no impact. If they don't, the finalizer (eventually) runs and cleans up your unmanaged resources, but you incur a (small) performance loss as it runs.
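A minimal sketch of that combination; the NativeWrapper name and the IntPtr handle are hypothetical stand-ins for a real native resource:

using System;

public class NativeWrapper : IDisposable
{
    private IntPtr _handle;    // hypothetical native handle
    private bool _disposed;

    public NativeWrapper(IntPtr handle)
    {
        _handle = handle;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);  // skip the expensive finalizer
    }

    ~NativeWrapper()                // runs only if Dispose was missed
    {
        Dispose(false);
    }

    private void Dispose(bool disposing)
    {
        if (_disposed) return;

        // Release the native resource on both paths; touch other
        // managed objects only when 'disposing' is true.
        if (_handle != IntPtr.Zero)
        {
            // e.g. CloseHandle(_handle) via P/Invoke, omitted here
            _handle = IntPtr.Zero;
        }
        _disposed = true;
    }
}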
Hmya, you are getting a picture painted here that's a bit too rosy. Finalizers are not guaranteed to run in .NET either. Typical mishaps are a finalizer that throws an exception, or a time-out on the finalizer thread (2 seconds).
That was a problem when Microsoft decided to provide .NET hosting support in SQL Server, the kind of application where restarting the app to solve resource leaks isn't considered a viable workaround. .NET 2.0 acquired critical finalizers, enabled by deriving from the CriticalFinalizerObject class. The finalizer of such a class must adhere to the rules of constrained execution regions (CERs), essentially a region of code where exceptions are suppressed. The kinds of things you can do in a CER are very limited.
Back to your original question, finalizers are necessary to release operating system resources other than memory. The garbage collector manages memory very well but doesn't do anything to release pens, brushes, files, sockets, windows, pipes, etc. When an object uses such a resource, it must make sure to release the resource after it is done with it. Finalizers ensure that happens, even when the program forgot to do so. You almost never write a class with a finalizer yourself, operating resources are wrapped by classes in the framework.
The .NET framework also has a programming pattern to ensure such a resource is released early so the resource doesn't linger around until the finalizer runs. All classes that have finalizers also implement the IDisposable.Dispose() method, allowing your code to release a resource explicitly. This is often forgotten by a .NET programmer but that doesn't typically cause problems because the finalizer ensures it will eventually be done. Many .NET programmers have lost hours of sleep worrying whether or not all Dispose() calls are taken care of and massive numbers of threads have been started about it on forums. Java folks must be a happier lot.
Following up on your comment: exceptions and timeouts on the finalizer thread are something you don't have to worry about. Firstly, if you find yourself writing a finalizer, take a deep breath and ask yourself whether you're on the right path. Finalizers are for framework classes; you should be using such a class to use an operating-system resource, and you'll get the finalizer built into that class for free. All the way down to the SafeHandle classes, which have a critical finalizer.
Secondly, finalizer-thread failures are gross program failures, similar to getting an OutOfMemoryException or tripping over the power cord and unplugging the machine. There isn't anything you can do about them other than fixing the bug in your code or re-routing the cable. It was important for Microsoft to design critical finalizers; they can't rely on all programmers who write .NET code for SQL Server to get that code right. If you fumble a finalizer yourself, there is no such liability - it will be you who gets the call from the customer, not Microsoft.
In Java, finalizers exist to allow for the cleanup of external resources (things that exist outside of the JVM and can't be garbage collected when the 'parent' Java object is). This has always been rare. One example might be interfacing with some custom hardware.
I think the reason finalizers in Java aren't guaranteed to run is that they might not have a chance to do so at program termination.
One thing you might do with a finalizer in 'pure' Java is use it to test termination conditions - for example, to check that all connections are closed, and report an error if they are not. You aren't guaranteed that the error will always be caught, but it will likely be caught at least some of the time, which is enough to reveal a bug.
Most java code has no call for finalizers.
If you read the JavaDoc for finalize() it says it is "Called by the garbage collector on an object when garbage collection determines that there are no more references to the object. A subclass overrides the finalize method to dispose of system resources or to perform other cleanup."
http://java.sun.com/javase/6/docs/api/java/lang/Object.html#finalize
So that's the "why". I guess you can argue whether their implementation is effective.
The best use I've found for finalize() is to detect bugs with freeing pooled resources. Most leaked objects will get garbage collected eventually and you can generate debug information.
class MyResource {
    private Throwable allocatorStack;

    public MyResource() {
        allocatorStack = new RuntimeException("trace to allocator");
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            System.out.println("Bug!");
            allocatorStack.printStackTrace();
        } finally {
            super.finalize();
        }
    }
}
They're meant for freeing up native resources (e.g. sockets, open files, devices) that can't be released until all references to the object have been broken, which is something that a particular caller would (in general) have no way of knowing. The alternative would be subtle, impossible-to-trace resource leaks...
Of course, in many cases as the application author you'll know that there's only one reference to the DB connection (for example); in which case finalizers are no substitute for closing it properly when you know you're finished with it.
In .NET land, it is not guaranteed when they run. But they will run.
Are you referring to Object.Finalize?
According to MSDN, "In C# code, Object.Finalize cannot be called or overridden". In fact, they recommend using the Dispose method instead because it is more controllable.
There's an additional complication with finalizers in .NET. If the class has a finalizer and does not get Dispose()'d, or Dispose() does not suppress the finalizer, the garbage collector will defer collecting until after compacting generation 2 memory (the last generation), so the object is "sort of" but not quite a memory leak. (Yes, it will get cleaned up eventually, but quite possibly not until application termination.)
As others have mentioned, if an object holds unmanaged resources, it should implement the IDisposable pattern. Developers should be aware that if an object implements IDisposable, then its Dispose() method should always be called. C# provides a way to automate this with the using statement:
using (myDataContext myDC = new myDataContext())
{
// code using the data context here
}
The using block automatically calls Dispose() on block exit, even for exits via return or thrown exceptions. The using statement only works with objects that implement IDisposable.
And beware another point of confusion: Dispose() is an opportunity for an object to release resources, but it does not actually release the Dispose()'d object itself. .NET objects are eligible for garbage collection when there are no active references to them - technically, when they can't be reached by any chain of object references starting from the AppDomain.
The equivalent of a destructor in C++ is finalize() in Java.
They are invoked when the life cycle of an object is about to end.

What strategies and tools are useful for finding memory leaks in .NET?

I wrote C++ for 10 years. I encountered memory problems, but they could be fixed with a reasonable amount of effort.
For the last couple of years I've been writing C#. I find I still get lots of memory problems. They're difficult to diagnose and fix due to the non-determinism, and because the C# philosophy is that you shouldn't have to worry about such things when you very definitely do.
One particular problem I find is that I have to explicitly dispose and cleanup everything in code. If I don't, then the memory profilers don't really help because there is so much chaff floating about you can't find a leak within all the data they're trying to show you. I wonder if I've got the wrong idea, or if the tool I've got isn't the best.
What kind of strategies and tools are useful for tackling memory leaks in .NET?
I use Scitech's MemProfiler when I suspect a memory leak.
So far, I have found it to be very reliable and powerful. It has saved my bacon on at least one occasion.
The GC works very well in .NET IMO, but just like any other language or platform, if you write bad code, bad things happen.
Just for the forgetting-to-dispose problem, try the solution described in this blog post. Here's the essence:
public void Dispose ()
{
// Dispose logic here ...
// It's a bad error if someone forgets to call Dispose,
// so in Debug builds, we put a finalizer in to detect
// the error. If Dispose is called, we suppress the
// finalizer.
#if DEBUG
GC.SuppressFinalize(this);
#endif
}
#if DEBUG
~TimedLock()
{
// If this finalizer runs, someone somewhere failed to
// call Dispose, which means we've failed to leave
// a monitor!
System.Diagnostics.Debug.Fail("Undisposed lock");
}
#endif
We've used Ants Profiler Pro by Red Gate software in our project. It works really well for all .NET language-based applications.
We found that the .NET garbage collector is very "safe" in its cleaning up of in-memory objects (as it should be). It would keep objects around just because we might be using them sometime in the future. This meant we needed to be more careful about the number of objects we inflated in memory. In the end, we converted all of our data objects over to "inflate on demand" (just before a field is requested) in order to reduce memory overhead and increase performance.
EDIT: Here's a further explanation of what I mean by "inflate on demand." In the object model for our database, we use properties of a parent object to expose the child object(s). For example, if we had a record that referenced some other "detail" or "lookup" record on a one-to-one basis, we would structure it like this:
Class ParentObject
    Private mRelatedObject As New CRelatedObject

    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            mRelatedObject.getWithID(RelatedObjectID)
            Return mRelatedObject
        End Get
    End Property
End Class
We found that the above system created some real memory and performance problems when there were a lot of records in memory. So we switched over to a system where objects were inflated only when they were requested, and database calls were done only when necessary:
Class ParentObject
    Private mRelatedObject As CRelatedObject

    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            If mRelatedObject Is Nothing Then
                mRelatedObject = New CRelatedObject
            End If
            If mRelatedObject.isEmptyObject Then
                mRelatedObject.getWithID(RelatedObjectID)
            End If
            Return mRelatedObject
        End Get
    End Property
End Class
This turned out to be much more efficient because objects were kept out of memory until they were needed (the Get method was accessed). It provided a very large performance boost in limiting database hits and a huge gain on memory space.
You still need to worry about memory when you are writing managed code unless your application is trivial. I will suggest two things: first, read CLR via C# because it will help you understand memory management in .NET. Second, learn to use a tool like CLRProfiler (Microsoft). This can give you an idea of what is causing your memory leak (e.g. you can take a look at your large object heap fragmentation)
Are you using unmanaged code? If you are not using unmanaged code then, according to Microsoft, memory leaks in the traditional sense are not possible.
Memory used by an application may not be released, however, so an application's memory allocation may grow throughout the life of the application.
From How to identify memory leaks in the common language runtime at Microsoft.com:
A memory leak can occur in a .NET Framework application when you use unmanaged code as part of the application. This unmanaged code can leak memory, and the .NET Framework runtime cannot address that problem.
Additionally, a project may only appear to have a memory leak. This condition can occur if many large objects (such as DataTable objects) are declared and then added to a collection (such as a DataSet). The resources that these objects own may never be released, and the resources are left alive for the whole run of the program. This appears to be a leak, but actually it is just a symptom of the way that memory is being allocated in the program.
For dealing with this type of issue, you can implement IDisposable. If you want to see some strategies for dealing with memory management, I would suggest searching for IDisposable, XNA, memory management; game developers need more predictable garbage collection and so must force the GC to do its thing.
One common mistake is not removing event handlers that subscribe to an object. An event-handler subscription will prevent an object from being recycled. Also, take a look at the using statement, which allows you to create a limited scope for a resource's lifetime. A sketch of the event-handler case follows.
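Here is a minimal sketch of the event-handler mistake and its fix; Publisher and Subscriber are hypothetical names:

using System;

public class Publisher
{
    public event EventHandler SomethingHappened;

    public void Raise()
    {
        var handler = SomethingHappened;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

public class Subscriber : IDisposable
{
    private readonly Publisher _publisher;

    public Subscriber(Publisher publisher)
    {
        _publisher = publisher;
        _publisher.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e)
    {
        // ... react to the event ...
    }

    public void Dispose()
    {
        // Without this, a long-lived Publisher keeps every Subscriber
        // reachable through its delegate list, so none get collected.
        _publisher.SomethingHappened -= OnSomethingHappened;
    }
}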
This blog has some really wonderful walkthroughs using windbg and other tools to track down memory leaks of all types. Excellent reading to develop your skills.
I just had a memory leak in a Windows service, which I fixed.
First, I tried MemProfiler. I found it really hard to use and not at all user-friendly.
Then I used JustTrace, which is easier to use and gives you more details about the objects that are not disposed correctly.
It allowed me to solve the memory leak really easily.
If the leaks you are observing are due to a runaway cache implementation, this is a scenario where you might want to consider using WeakReference. This could help ensure that memory is released when necessary.
However, IMHO it would be better to consider a bespoke solution: only you really know how long you need to keep the objects around, so designing appropriate housekeeping code for your situation is usually the best approach. A sketch of the WeakReference approach follows.
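A minimal sketch of a WeakReference-based cache, assuming .NET 4.5+ for WeakReference<T>; the WeakCache name is hypothetical:

using System;
using System.Collections.Generic;

// Entries don't keep their values alive: once nothing else references
// a value, the GC may reclaim it and the cache simply reports a miss.
public class WeakCache<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference<TValue>> _map =
        new Dictionary<TKey, WeakReference<TValue>>();

    public void Put(TKey key, TValue value)
    {
        _map[key] = new WeakReference<TValue>(value);
    }

    public bool TryGet(TKey key, out TValue value)
    {
        WeakReference<TValue> weak;
        value = null;
        return _map.TryGetValue(key, out weak) && weak.TryGetTarget(out value);
    }
}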
I prefer dotMemory from JetBrains.
Big guns - Debugging Tools for Windows
This is an amazing collection of tools. You can analyze both managed and unmanaged heaps with it, and you can do it offline. This was very handy for debugging one of our ASP.NET applications that kept recycling due to memory overuse. I only had to create a full memory dump of the live process running on the production server; all analysis was done offline in WinDbg. (It turned out some developer was overusing in-memory Session storage.)
"If broken it is..." blog has very useful articles on the subject.
After one of my fixes for a managed application, I had the same question: how do I verify that my application will not have the same memory leak after my next change? So I wrote something like an object-release verification framework; please take a look at the NuGet package ObjectReleaseVerification. You can find a sample here https://github.com/outcoldman/OutcoldSolutions-ObjectReleaseVerification-Sample, and information about this sample at http://outcoldman.com/en/blog/show/322.
The best thing to keep in mind is to keep track of the references to your objects. It is very easy to end up with hanging references to objects that you don't care about anymore.
If you are not going to use something anymore, get rid of it.
Get used to using a cache provider with sliding expirations, so that if something isn't referenced for a desired time window it is dereferenced and cleaned up, but if it is being accessed a lot it will stay in memory.
One of the best approaches is to use the Debugging Tools for Windows: take a memory dump of the process using adplus, then use WinDbg and the SOS extension to analyze the process memory, threads, and call stacks.
You can use this method to identify problems on servers too: after installing the tools, share the directory, connect to the share from the server (net use), and take either a crash or hang dump of the process.
Then analyze offline.
From Visual Studio 2015 onwards, consider using the out-of-the-box Memory Usage diagnostic tool to collect and analyze memory-usage data.
The Memory Usage tool lets you take one or more snapshots of the managed and native memory heap to help understand the memory usage impact of object types.
One of the best tools I have used is dotMemory. You can use it as an extension in Visual Studio. After running your app, you can analyze every part of the memory your app uses (by object, namespace, etc.), take snapshots, and compare them with other snapshots.
dotMemory
