Tracking Down a .NET Windows Service Memory Leak - C#

Before installing my Windows service in production, I was looking for reliable tests I can perform to make sure my code doesn't contain memory leaks.
However, all I could find on the net was either watching used memory in Task Manager or using paid memory profiler tools.
From my understanding, looking at Task Manager is not really helpful and cannot confirm whether there actually is a memory leak.
How can I confirm whether or not there is a memory leak?
Are there any free tools to find the source of memory leaks?
Note: I'm using .NET Framework 4.6 and Visual Studio 2015 Community.

Well, you can use Task Manager.
GC apps can leak memory, and it will show there.
But...
Free tool - the .NET CLR Profiler
There is a free tool, and it's from Microsoft, and it's awesome. It's a must-use for all programs that leak references. Search Microsoft's site.
Leaking references means you forget to set object references to null, or they never leave scope, and this is almost as likely to happen in garbage-collected languages as anywhere else: lists that keep growing and are never cleared, event handlers still pointing at delegates, and so on.
It's the GC equivalent of a memory leak, and it has the same result. This program tells you which references are taking up tons of memory, so you will know whether it's supposed to be that way or not, and if not, you can go find them and fix the problem!
It even has a cool visualization of which objects allocate what memory (so you can track down mistakes). I believe there are YouTube videos about it if you need an explanation.
Wikipedia page with download links...
NOTE: You will likely have to run your app not as a service to use this, because the profiler starts first and then launches your app. You can do this with Topshelf, or by just putting the guts in a DLL that runs from an EXE that implements the service integration (the service host pattern).
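For illustration, here is a minimal sketch of that split (the class names are made up, and the service wrapper needs a reference to System.ServiceProcess); the real work lives in a library class that both a console host and the Windows service can drive:

// ServiceCore.cs - plain class library code, no dependency on ServiceBase.
public class ServiceCore
{
    public void Start() { /* start timers, listeners, worker threads, ... */ }
    public void Stop()  { /* tear everything down */ }
}

// Program.cs - console host, handy for profilers that must launch the process.
public static class Program
{
    public static void Main()
    {
        var core = new ServiceCore();
        core.Start();
        System.Console.WriteLine("Running as a console app; press Enter to stop.");
        System.Console.ReadLine();
        core.Stop();
    }
}

// WindowsServiceHost.cs - thin wrapper used for the real installation.
public class WindowsServiceHost : System.ServiceProcess.ServiceBase
{
    private readonly ServiceCore core = new ServiceCore();

    protected override void OnStart(string[] args) { core.Start(); }
    protected override void OnStop() { core.Stop(); }
}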

Although managed code implies no direct memory management, you still have to manage your instances. Those instances 'claim' memory, and it is all about how those instances are used and whether they are kept alive when you don't expect them to be.
Just one of many examples: wrong usage of disposable classes can result in a lot of instances claiming memory. For a Windows service, a slow but steady increase in instances can eventually result in too much memory usage.
Yes, there is a tool to analyze memory leaks. It just isn't free. However, you might be able to identify your problem within the 7-day trial.
I would suggest taking a look at the .NET Memory Profiler.
It is great for analyzing memory leaks during development. It uses the concept of snapshots to compare new instances, disposed instances, etc. This is a great help in understanding how your service uses its memory. You can then dig deeper into why new instances get created or are kept alive.
Yes, you can test to confirm whether memory leaks are introduced.
However, out of the box this will not be very useful, because no one can anticipate what will happen at runtime. The tool can analyze your app for common issues, but this is not guaranteed to find everything.
However, you can use this tool to integrate memory consumption checks into your unit test framework, such as NUnit or MSTest; a rough sketch of that idea follows.
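As a rough sketch only, assuming an NUnit project and a DoWork() placeholder for the code under suspicion, such a test can catch gross leaks by checking that the managed heap settles back down after repeated runs:

[NUnit.Framework.Test]
public void Repeated_work_does_not_grow_the_managed_heap()
{
    DoWork();   // warm-up, so one-time allocations (JIT, caches) don't count as a leak

    long before = System.GC.GetTotalMemory(true);   // true = force a full collection first

    for (int i = 0; i < 1000; i++)
        DoWork();                                   // the suspect code block

    long after = System.GC.GetTotalMemory(true);

    // Allow some slack: GetTotalMemory is only an approximation.
    NUnit.Framework.Assert.Less(after - before, 1024L * 1024,
        "Managed heap grew by more than ~1 MB over 1000 iterations");
}

The 1 MB threshold and the iteration count are arbitrary; tune them to the code under test.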

Of course a memory profiler is the first kind of tool to try, but it will only tell you whether your instances keep increasing. You still want to know whether it is normal that they are increasing. Also, once you have established that some instances keep increasing for no good reason (meaning you have a leak), you will want to know precisely which call trees lead to their allocation, so that you can troubleshoot the code that allocates them and fix it so that it does eventually release them.
Here is some of the knowledge I have collected over the years in dealing with such issues:
Test your service as a regular executable as much as possible. Trying to test the service as an actual service just makes things too complicated.
Get into the habit of explicitly undoing everything that you do at the end of the scope in which you do it. For example, if you register an observer with the event of some observee, there should always be some point in time (the disposal of the observer or the observee?) at which you de-register it. In theory, garbage collection should take care of that by collecting the entire graph of interconnected observers and observees, but in practice, if you don't kick the habit of forgetting to undo the things that you do, you get memory leaks.
Use IDisposable as much as possible, and make your destructors report if someone forgot to invoke Dispose(). More about this method here: Mandatory disposal vs. the "Dispose-disposing" abomination. Disclosure: I am the author of that article.
Have regular checkpoints in your program where you release everything that should be releasable (as if the program were performing an orderly shutdown in order to terminate) and then force a garbage collection to see whether you have any leaks.
If instances of some class appear to be leaking, use the following trick to discover the precise call tree that caused their allocation: within the constructor of that class, capture the current stack trace and store it in a field (note that in .NET an exception that has never been thrown has an empty StackTrace, so use new System.Diagnostics.StackTrace() or Environment.StackTrace rather than an unthrown exception). If you discover later that an instance has been leaked, you have the necessary stack trace. Just don't do this with too many objects, because capturing a stack trace is ridiculously slow.
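A minimal sketch of that trick (SuspectedLeaker is just an example class name); the capture should only be compiled into diagnostic builds because it is slow:

public class SuspectedLeaker
{
    // Call stack at construction time; log it when you find a leaked instance.
    public readonly string AllocationStackTrace;

    public SuspectedLeaker()
    {
#if LEAK_DIAGNOSTICS
        AllocationStackTrace = new System.Diagnostics.StackTrace(true).ToString();
#endif
    }
}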

You could try the free Memoscope memory profiler
https://github.com/fremag/MemoScope.Net
I do not agree that you can trust Task Manager to check whether you have a memory leak or not. The problem with a garbage collector is that it can decide, based on heuristics, to keep memory after a spike instead of returning it to the OS. You might have a 2 GB commit size while 90% of it is free.
You should use VMMap to check during your tests what types of memory your process contains. You don't only have the managed heap: there is also the unmanaged heap, private bytes, stacks (thread leaks), shared files, and much more that needs to be tracked.
VMMap also has a command-line interface that makes it possible to create snapshots at regular intervals, which you can examine later. If you see memory growth, you can find out which type of memory is leaking; depending on the leak type, different debugging tools and approaches are needed.

I would not say the garbage collector is infallible. There are times when it fails, and the failures are not always straightforward. Memory streams are a common source of memory leaks: you can open them in one context and they may never get closed, even when the usage is wrapped in a using statement (the standard pattern for a disposable object that should be cleaned up as soon as its usage falls out of scope). If you are crashing because you've run out of memory, Windows creates dump files that you can sift through.
This is by no means fun or easy, and it is quite tedious, but it tends to be your best bet.
Common places where it is easy to create memory leaks are anything that uses the System.Drawing DLL, memory streams, and serious multi-threading.

If you use Entity Framework and a DI pattern, perhaps using Castle Windsor, you can easily get memory leaks.
The main thing to do is use the using statement wherever you can, so that objects are disposed automatically.
Also, you want to turn off automatic change tracking in Entity Framework where you are only reading and not writing. It's best to isolate your writes: at that point use a using block, get a DbContext (with tracking on), and write your data. A rough sketch of this split follows.
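This is only an illustration, assuming an Entity Framework 6 context named MyDbContext with a Products set (the entity and context names are placeholders):

using System.Collections.Generic;
using System.Data.Entity;   // needed for AsNoTracking()
using System.Linq;

public class ProductRepository
{
    // Read path: short-lived context, change tracking switched off.
    public List<Product> LoadProducts()
    {
        using (var db = new MyDbContext())
        {
            return db.Products
                     .AsNoTracking()   // entities are not added to the change tracker
                     .ToList();
        }
    }

    // Write path: another short-lived context, tracking left on for SaveChanges.
    public void UpdatePrice(int productId, decimal newPrice)
    {
        using (var db = new MyDbContext())
        {
            var product = db.Products.Find(productId);
            product.Price = newPrice;
            db.SaveChanges();
        }   // Dispose releases the tracked object graph
    }
}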
If you want to investigate what is on the heap, the best tool I've used is Red Gate ANTS Memory Profiler http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/solving-memory-problems/getting-started - not cheap, but it works.
However, by using the using pattern wherever you can (don't make a static or singleton DbContext, and never keep one context alive through a massive loop of updates; dispose of them as often as you can!), you'll find that memory isn't often an issue.
Hope this helps.

Unless you're dealing with unmanaged code, I would be so bold as to say you don't have to worry about classic memory leaks. Any unreferenced object in managed code will be removed by the garbage collector, and you would have to be very unlucky to find a genuine leak inside the .NET Framework itself.
However, you can still encounter ever-growing memory usage if references to objects are never released. For example, say you keep an internal log structure and just keep adding entries to a log list: every entry remains referenced from the list and will therefore never be collected.
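A deliberately contrived sketch of that situation; the static list is a GC root, so everything added to it stays reachable for the life of the process:

using System;
using System.Collections.Generic;

public static class InMemoryLog
{
    // Static root: nothing added here is ever eligible for collection
    // unless it is explicitly removed.
    private static readonly List<string> entries = new List<string>();

    public static void Write(string message)
    {
        entries.Add(DateTime.UtcNow + " " + message);
        // A bounded alternative would trim old entries here,
        // e.g. keep only the most recent N items.
    }
}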
From my experience, you can definitely use Task Manager as an indicator of whether your system has growing issues; if the memory usage keeps rising steadily, you know you have a problem. If it grows to a point but eventually converges to a certain size, it has reached its operating threshold.
If you want a more detailed view of managed memory usage, you can download Process Explorer, developed by Microsoft. It is still quite blunt, but it gives a somewhat better statistical view than Task Manager.

Related

.NET Free memory usage (how to prevent overallocation / release memory to the OS)

I'm currently working on a website that makes heavy use of cached data to avoid round trips.
At startup we fetch a "large" object graph (hundreds of thousands of objects of different kinds).
Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization).
I'm using Red Gate's memory profiler to debug memory issues (memory usage didn't seem to match what we should need "after" we're done initializing), and I end up with this report:
What we can gather from this report is that:
1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialization, but now that it's free, I'd like it to be returned to the OS).
2) The memory is fragmented (which is bad, since every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).
3) I have no clue why the space is fragmented: when I look at the large object heap there are only 30 instances, 15 object[] arrays attached directly to the GC and totally unrelated to me, 1 char array also attached directly to the GC heap, and the remaining 15 are mine, but they are not the cause, since I get the same report if I comment them out in code.
So my question is: what can I do to go further with this? I'm not really sure what to look for in terms of debugging or tools, since it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET that I can't release.
Also, please make sure you understand the question before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to release memory that is already free inside .NET back to the system, and to defragment that memory.
Note that a slow solution is fine: if it's possible to manually defragment the large object heap, I'd be all for it, since I can call it at the end of RefreshCache and it's OK if it takes a second or two to run.
Thanks for your help!
A few notes I forgot:
1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and likewise if I run it in a .NET 4 pool, convert it to .NET 4 and recompile.
2) These are the results of a release build, so the debug build cannot be the issue.
3) And this is probably quite important: I do not get these issues at all in the WebDev server, only in IIS. In WebDev I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!).
Objects allocated on the large object heap (objects >= 85,000 bytes, normally arrays) are not compacted by the garbage collector. Microsoft decided that the cost of moving those objects around would be too high.
The recommendation is to reuse large objects if possible to avoid fragmentation on the managed heap and the VM space.
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
I'm assuming that your large objects are temporary byte arrays created by your deserialization library. If the library allows you to supply your own byte arrays, you could preallocate them at the start of the program and then reuse them.
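As a sketch of that reuse idea (the 4 MB size and the commented-out Deserialize call are assumptions; the point is simply that the same large array is handed to the serializer on every refresh instead of allocating a fresh one on the large object heap each time):

public class CacheLoader
{
    // Allocated once; at 4 MB it lives on the large object heap,
    // but it is never re-allocated, so the LOH is not churned per refresh.
    private readonly byte[] buffer = new byte[4 * 1024 * 1024];

    public void RefreshCache(System.IO.Stream wcfResponse)
    {
        int read;
        while ((read = wcfResponse.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Hand buffer[0..read) to the deserializer here rather than
            // letting it allocate its own temporary arrays, e.g. (hypothetical):
            // cache.AddRange(Deserialize(buffer, read));
        }
    }
}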
I know this isn't the answer you'd like to hear, but you can't forcefully release the memory back to the OS. However, for what reason do you want to do so? .NET will free its heap back to the OS once you're running low on physical memory. But if there's an ample amount of free physical memory, .NET will keep its heap to make future allocation of objects faster. If you really wanted to force .NET to release its heap back to the OS, I suppose you could write a C program which just mallocs until it runs out of memory. This should cause the OS to signal .NET to free its unused portion of the heap.
It's better that unused memory be reserved for .NET so that your application has better allocation performance: since the runtime knows what memory is free and what isn't, allocation can just use the free memory without having to make a syscall into the OS to get more.
The garbage collector is in charge of defragmenting the heap. Every so often (usually during collection runs), it will move objects around the heap if it determines this needs to be done. (This is why C++/CLI has the pin_ptr construct for "pinning" objects).
Fragmentation usually isn't a big issue with memory, though, since RAM provides fast random access.
As for your OutOfMemoryException, I don't have a good answer. Ordinarily I'd suspect that your old object graph isn't being collected (some object somewhere is holding a reference to it, i.e. a "memory leak"). But since you're using a profiler, I don't know.
As of .NET 4.5.1 you can set a one-time flag to compact LOH before issuing a call to GC collect, i.e.
Runtime.GCSettings.LargeObjectHeapCompactionMode = System.Runtime.GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // This will cause the LOH to be compacted (once).
Some testing and some C++ later, I've found the reason I get so much free memory: it's because IIS instantiates the CLR with VM hoarding enabled (providing a DLL that instantiates it without VM hoarding uses just as much memory initially, but does release most of it over time, which is the behavior I expect).
So this does fix my reported memory issue; however, I still get about 100 MB of free memory no matter what, and I still think this is due to fragmentation, with fragments only being released all at once, because the profiler still reports memory fragmentation. So I'm not marking my own answer as the answer, in the hope that someone can shed some light on this or point me to tools that can either fix it or help me debug the root cause.
It's intriguing that it works differently on the WebDev server compared to IIS...
Is it possible that IIS is using the server garbage-collector, and the WebDev server the workstation garbage collector? The method of garbage collection can affect fragmentation. It'll probably be set in your aspnet.config file. See: http://support.microsoft.com/kb/911716
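One quick way to confirm which collector each host is actually using is to log it from the application itself; a small sketch (GCSettings lives in System.Runtime):

// e.g. in Application_Start or any other startup code
bool isServerGC = System.Runtime.GCSettings.IsServerGC;
System.Diagnostics.Trace.WriteLine("Server GC in use: " + isServerGC);   // false = workstation GC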
If you haven't found your answer yet, I think the following clues can help you:
Back to basics: we sometimes forget that objects can be freed explicitly; call the Dispose method of the objects explicitly (since you didn't mention it, I suppose you do an "object = null" assignment instead).
Use the inherited method; you don't need to implement your own, unless your class doesn't have one, which I doubt.
The MSDN help states about this method:
... There is no performance benefit in implementing the Dispose method on types that use only managed resources (such as arrays) because they are automatically reclaimed by the garbage collector. Use the Dispose method primarily on managed objects that use native resources and on COM objects that are exposed to the .NET Framework. ...
Because it says that "they are automatically reclaimed by the garbage collector", we can infer that calling the method is what does the "releasing thing" (again, I'm only trying to give you clues).
Besides, I found this interesting article (I suppose... I didn't read it... completely): Garbage Collection: Automatic Memory Management in the Microsoft .NET Framework (http://msdn.microsoft.com/en-us/magazine/bb985010.aspx), which states the following in the "Forcing an Object to Clean Up" section:
..., it is also recommended that you add an additional method to the type that allows a user of the type to explicitly clean up the object when they want. By convention, this method should be called Close or Dispose ....
Maybe the answer lies in this article if you read it carefully, or just keep investigating in this direction.

How can I tell if the .Net 3.5 garbage collector has run?

I have an application that creates trees of nodes, then tosses them and makes new trees.
The application allocates about 20 MB upon startup. But I tried loading a large file of nodes many times, and the allocated memory went above 700 MB. I thought I would see memory being freed by the garbage collector on occasion.
The machine I'm on has 12 GB of RAM, so maybe it's just that such a "small" amount of memory being allocated doesn't matter to the GC.
I've found a lot of good info on how the GC works, and that it's best not to tell it what to do. But I wanted to verify that it's actually doing anything, and that I'm not somehow doing something wrong in the code that prevents my objects from being cleaned up.
The GC generally runs when one of the scenarios below occurs:
You call GC.Collect (which you shouldn't)
Gen0's budget is exhausted
There are some other scenarios as well, but I'll skip those for now.
You didn't tell us how you measured the memory usage, but if you're looking at the memory usage of the process itself (e.g. through Task Manager), then you may not see the numbers you expect. Remember that the .NET runtime essentially has its own memory manager that handles memory usage on behalf of your managed application. The runtime tries to be smart about it, so it doesn't allocate and free memory to the OS all the time (those are expensive operations). This question may be relevant as well.
If you're concerned about memory leaks, have a look at some of the answers here.
When does the .Net 3.5 garbage collector run?
I thought I would see memory being freed by the garbage collector on occasion.
Since the GC is non-deterministic, you won't necessarily be able to determine when it is going to issue a collection. Short answer: it will run when needed. Trying to analyze your code and predict or assume it should run at a certain time usually ends up down a rabbit hole.
Answer to "do I leak objects, or has the GC just not needed to run yet?":
Use a memory profiler to see which objects are allocated. As a basic step, force a garbage collection (GC.Collect) and check whether the amount of allocated memory (GC.GetTotalMemory) seems reasonable.
If you want to make sure that you're not leaving any unwanted objects behind, you can use the dotTrace memory profiler. It allows you to take two snapshots of the objects in memory (some time apart) and compare them. You can clearly see whether any old nodes are still hanging around and what is keeping a reference to them, preventing them from being collected.
You may have the managed equivalent of a memory leak. Are you maintaining stale references to these objects (i.e., do you have a List<T> or some other object which tracks these nodes)?
Are they subscribing to an event of an object that is not going out of scope? The event source maintains a reference to the subscriber, so if you don't detach the handler, it will keep your objects alive.
You may also be forgetting to Dispose of objects that implement IDisposable. Can't say without seeing your code.
The exact behavior of the GC is implementation defined. You should design your application such that it does not matter. If you need deterministic memory management then you are using the wrong language. Use a tool (RedGate's ANTS profiler will do) to see if you are leaking references somewhere.
This comic is the best way to explain this topic :)
You can monitor garbage collector activity in .NET 4.0 and later with GC.RegisterForFullGCNotification, as described in this link: http://www.abhisheksur.com/2010/08/garbage-collection-notifications-in-net.html
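A trimmed-down sketch of that notification API (it only works when concurrent GC is disabled in the config, and the thresholds here are arbitrary); for a much cruder check, comparing GC.CollectionCount(0) before and after a stretch of work also tells you whether any collections happened:

System.GC.RegisterForFullGCNotification(10, 10);   // thresholds between 1 and 99

var watcher = new System.Threading.Thread(() =>
{
    while (true)
    {
        if (System.GC.WaitForFullGCApproach() == System.GCNotificationStatus.Succeeded)
            System.Console.WriteLine("A full collection is about to start");

        if (System.GC.WaitForFullGCComplete() == System.GCNotificationStatus.Succeeded)
            System.Console.WriteLine("A full collection just finished");
    }
});
watcher.IsBackground = true;
watcher.Start();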

C# - Method of programmatically attempting to check for memory leak in block of code

I'm trying to see how feasible it is to programmatically determine, with reasonable accuracy, that there is a potential memory leak in a block of managed .NET code. The reason for doing this is to isolate a block of code that appears to be leaking memory, and then use a standard profiler to determine the actual cause of the leak. In my particular business case, I would be loading a 3rd-party class that extends one of mine in order to check it for leaks.
The approach that first comes to mind is something like this:
Wait for GC to run.
Get the current allocated memory from the GC.
[Run block of managed code.]
Wait for GC to run.
Get the current allocated memory from the GC and subtract from it the allocated memory recorded before running the block of code. Is it correct that the difference should theoretically be (near) zero if all objects allocated in the block of code were dereferenced appropriately and collected?
Certainly the immediate issue is that there will likely be waiting... and waiting... and waiting for the non-deterministic GC to run. If we skip that aspect, the calculation for determining whether the block of code leaked memory can still vary wildly and would not necessarily be accurate, since some items may not have been collected yet at the time of measurement.
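A bare-bones sketch of that measurement, with the usual double collection so finalizers get a chance to run first (doSuspectWork stands in for the block under test); even then, treat the delta as a hint rather than proof, for exactly the reasons above:

static long MeasureRetainedDelta(System.Action doSuspectWork)
{
    long before = StableHeapSize();
    doSuspectWork();
    long after = StableHeapSize();
    return after - before;                   // ideally close to zero
}

static long StableHeapSize()
{
    System.GC.Collect();                     // collect once...
    System.GC.WaitForPendingFinalizers();    // ...let pending finalizers run...
    System.GC.Collect();                     // ...then collect what they released
    return System.GC.GetTotalMemory(false);
}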
Does the above seem like my best option of attempting to determine somewhat accurately if a block of code is leaking memory? Or are there other working methods that are used in real-life? Thanks.
Personally, I would never dare to do memory profiling on my own. I fear that I don't have the full knowledge, and that it would take endless time.
Instead, I have successfully used memory profilers like Red Gate's ANTS Memory Profiler.
While using ANTS Profiler is great, it doesn't help if your problem is only seen in production.
Tess Ferrandez has a series of labs that demonstrate how to debug production problems, including memory leaks. They focus on ASP.NET, but the techniques can be used for other types of applications as well.
You really need a memory profiler like this one. With it, you can:
start your application, take a memory snapshot (manually or from your code)
[Run block of managed code]
take another memory snapshot
compare the two snapshots and see which new objects are now on the managed heap
I believe it does exactly what you want to do, only far less painful. It also has some helpful filters like "show objects that are kept alive by delegates". It can also analyze memory dumps from a production system.

Out of Memory Exception

I am working on a web app using C# and ASP.NET, and I have been receiving an out-of-memory exception. The app reads a bunch of records (products) from a data source (could be hundreds or thousands), processes those records through settings in a wizard, and then updates a different data source with the processed product information. Although there are multiple DB classes, right now all the logic is in one big class. The only reason for this is that all the information has to do with one thing: a product. Would it help memory usage if I divided my app into different classes? I don't think it would, because if I divided the business logic into two classes, both classes would stay alive the entire time, sending messages to each other, so I don't see how that would help. I guess my other option is to find out what's sucking up all the memory. Is there a good tool you could recommend?
Thanks
Are you using data readers to stream through your data (to avoid loading too much into memory)?
My gut tells me this is a trivial issue to fix: don't pump DataTables with a million records; work through tables one row at a time, or in small batches, and release and dispose objects when you are done with them. (Example: don't have static List<Customer> allCustomers = AllCustomers().)
Have a development rule that ensures no one reads tables into memory when more than X rows are involved; a sketch of the streaming style is shown below.
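This is only an illustration of the row-at-a-time style with a plain SqlDataReader (connectionString, the query, and ProcessProduct are placeholders):

using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
using (var command = new System.Data.SqlClient.SqlCommand(
           "SELECT ProductId, Name FROM Products", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Only one row is materialized at a time; process it and move on
            // instead of filling a DataTable or a giant List first.
            ProcessProduct(reader.GetInt32(0), reader.GetString(1));
        }
    }
}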
If you need a tool to debug this, look at the .NET Memory Profiler, or WinDbg with the SOS extension; both will let you sniff through your managed heaps.
Another note: if you care about maintainability and would like to reduce your defect count, get rid of the SuperDuperDoEverything class and model the information in a way that is better aligned with your domain. The SuperDuperDoEverything class is a bomb waiting to explode.
Also note that you may not actually be running out of memory. What happens is that .NET goes looking for contiguous blocks of memory, and if it doesn't find any, it throws an OOM, even if you have plenty of total memory to cover the request.
Someone referenced both Perfmon and WinDBG. You could also set up adplus to capture a memory dump on crash - I believe the syntax is adplus -crash -iis. Once you have the memory dump, you can do something like:
.symfix C:\symbols
.reload
.loadby sos mscorwks
!dumpheap -stat
And that will give you an idea for what your high-memory objects are.
And of course, check out Tess Ferrandez's excellent blog, for example this article on memory leaks with XML serializers and how to troubleshoot them.
If you are able to repro this in your dev environment, and you have VS Team Edition for Developers, there are memory profilers built right in. Just launch a new performance session, and run your app. It will spit out a nice report of what's hanging around.
Finally, make sure your objects don't define a destructor (finalizer). This isn't C++, and there's nothing deterministic about it; the only guarantee is that your object will survive a round of garbage collection, since it has to be placed in the finalizer queue and then cleaned up in a later round.
A very basic thing you might want to try: restart Visual Studio (assuming you are using it) and see if the same thing happens. And yes, releasing objects yourself instead of waiting for the garbage collector is always good practice.
To sum it up:
release objects
close connections
And you can always try this:
http://msdn.microsoft.com/en-us/magazine/cc337887.aspx
I found the problem. While doing my loop I had a collection that wasn't being cleared, so data just kept being added to it.
Start with Perfmon; there are a number of counters for GC-related info. More than likely you are leaking memory (otherwise the GC would be deleting objects), meaning you are still referencing data structures that are no longer needed.
You should split the logic into multiple classes anyway, just for the sake of a sane design.
Are you closing your DB connections? If you are reading from files, are you closing/releasing them once you are done reading/writing? The same goes for other objects.
You could cycle your class objects routinely just to release memory.

What strategies and tools are useful for finding memory leaks in .NET?

I wrote C++ for 10 years. I encountered memory problems, but they could be fixed with a reasonable amount of effort.
For the last couple of years I've been writing C#, and I find I still get lots of memory problems. They're difficult to diagnose and fix because of the non-determinism, and because the C# philosophy is that you shouldn't have to worry about such things when you very definitely do.
One particular problem I find is that I have to explicitly dispose and cleanup everything in code. If I don't, then the memory profilers don't really help because there is so much chaff floating about you can't find a leak within all the data they're trying to show you. I wonder if I've got the wrong idea, or if the tool I've got isn't the best.
What kind of strategies and tools are useful for tackling memory leaks in .NET?
I use Scitech's MemProfiler when I suspect a memory leak.
So far, I have found it to be very reliable and powerful. It has saved my bacon on at least one occasion.
The GC works very well in .NET IMO, but just like any other language or platform, if you write bad code, bad things happen.
Just for the forgetting-to-dispose problem, try the solution described in this blog post. Here's the essence:
public void Dispose()
{
    // Dispose logic here ...

    // It's a bad error if someone forgets to call Dispose,
    // so in Debug builds we put a finalizer in to detect
    // the error. If Dispose is called, we suppress the
    // finalizer.
#if DEBUG
    GC.SuppressFinalize(this);
#endif
}

#if DEBUG
~TimedLock()
{
    // If this finalizer runs, someone somewhere failed to
    // call Dispose, which means we've failed to leave
    // a monitor!
    System.Diagnostics.Debug.Fail("Undisposed lock");
}
#endif
We've used Ants Profiler Pro by Red Gate software in our project. It works really well for all .NET language-based applications.
We found that the .NET garbage collector is very "safe" in its cleaning up of in-memory objects (as it should be). It would keep objects around just because we might be using them sometime in the future. This meant we needed to be more careful about the number of objects that we inflated in memory. In the end, we converted all of our data objects over to "inflate on demand" (just before a field is requested) in order to reduce memory overhead and increase performance.
EDIT: Here's a further explanation of what I mean by "inflate on demand." In the object model for our database, we use properties of a parent object to expose the child object(s). For example, if we had a record that referenced some other "detail" or "lookup" record on a one-to-one basis, we would structure it like this:
Class ParentObject
    Private mRelatedObject As New CRelatedObject

    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            mRelatedObject.getWithID(RelatedObjectID)
            Return mRelatedObject
        End Get
    End Property
End Class
We found that the above system created some real memory and performance problems when there were a lot of records in memory. So we switched over to a system where objects were inflated only when they were requested, and database calls were done only when necessary:
Class ParentObject
    Private mRelatedObject As CRelatedObject

    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            If mRelatedObject Is Nothing Then
                mRelatedObject = New CRelatedObject
            End If
            If mRelatedObject.isEmptyObject Then
                mRelatedObject.getWithID(RelatedObjectID)
            End If
            Return mRelatedObject
        End Get
    End Property
End Class
This turned out to be much more efficient because objects were kept out of memory until they were needed (the Get method was accessed). It provided a very large performance boost in limiting database hits and a huge gain on memory space.
You still need to worry about memory when you are writing managed code unless your application is trivial. I will suggest two things: first, read CLR via C# because it will help you understand memory management in .NET. Second, learn to use a tool like CLRProfiler (Microsoft). This can give you an idea of what is causing your memory leak (e.g. you can take a look at your large object heap fragmentation)
Are you using unmanaged code? If you are not using unmanaged code, according to Microsoft, memory leaks in the traditional sense are not possible.
Memory used by an application may not be released however, so an application's memory allocation may grow throughout the life of the application.
From How to identify memory leaks in the common language runtime at Microsoft.com
A memory leak can occur in a .NET Framework application when you use unmanaged code as part of the application. This unmanaged code can leak memory, and the .NET Framework runtime cannot address that problem.
Additionally, a project may only appear to have a memory leak. This condition can occur if many large objects (such as DataTable objects) are declared and then added to a collection (such as a DataSet). The resources that these objects own may never be released, and the resources are left alive for the whole run of the program. This appears to be a leak, but actually it is just a symptom of the way that memory is being allocated in the program.
For dealing with this type of issue, you can implement IDisposable. If you want to see some of the strategies for dealing with memory management, I would suggest searching for IDisposable, XNA, memory management as game developers need to have more predictable garbage collection and so must force the GC to do its thing.
One common mistake is not removing event handlers that subscribe to an object: the event source holds a reference to the subscriber, so the subscription will prevent the subscriber from being collected. Also, take a look at the using statement, which lets you create a limited scope for a resource's lifetime.
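Both of those in sketch form (Publisher, Subscriber, and somePublisher are invented names): the publisher's event keeps a reference to the subscriber until the handler is detached, and the using statement guarantees the detaching Dispose actually runs:

public class Publisher
{
    public event System.EventHandler DataArrived;
}

public class Subscriber : System.IDisposable
{
    private readonly Publisher publisher;

    public Subscriber(Publisher publisher)
    {
        this.publisher = publisher;
        publisher.DataArrived += OnDataArrived;   // publisher now references this subscriber
    }

    private void OnDataArrived(object sender, System.EventArgs e) { /* handle the event */ }

    public void Dispose()
    {
        publisher.DataArrived -= OnDataArrived;   // without this, the subscriber lives as long as the publisher
    }
}

// The using statement scopes the subscription to a block:
using (var subscriber = new Subscriber(somePublisher))
{
    // work that needs the subscription
}   // Dispose runs here, so the publisher no longer keeps the subscriber alive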
This blog has some really wonderful walkthroughs using windbg and other tools to track down memory leaks of all types. Excellent reading to develop your skills.
I just had a memory leak in a Windows service, which I fixed.
First, I tried MemProfiler. I found it really hard to use and not at all user-friendly.
Then, I used JustTrace which is easier to use and gives you more details about the objects that are not disposed correctly.
It allowed me to solve the memory leak really easily.
If the leaks you are observing are due to a runaway cache implementation, this is a scenario where you might want to consider using WeakReference. It can help ensure that memory is released when necessary.
However, IMHO it would be better to consider a bespoke solution - only you really know how long you need to keep the objects around, so designing appropriate housekeeping code for your situation is usually the best approach.
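A small sketch of the WeakReference idea for a cache: entries never keep their values alive, so the GC can reclaim them under memory pressure and the cache rebuilds them on demand (createValue is the caller-supplied expensive load; pruning of dead entries is omitted for brevity):

using System;
using System.Collections.Generic;

public class WeakCache<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference> items =
        new Dictionary<TKey, WeakReference>();

    public TValue GetOrCreate(TKey key, Func<TValue> createValue)
    {
        WeakReference weak;
        if (items.TryGetValue(key, out weak))
        {
            var existing = weak.Target as TValue;   // null if the GC already collected it
            if (existing != null)
                return existing;
        }

        var fresh = createValue();
        items[key] = new WeakReference(fresh);
        return fresh;
    }
}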
I prefer dotMemory from JetBrains.
Big guns - Debugging Tools for Windows
This is an amazing collection of tools. You can analyze both managed and unmanaged heaps with it, and you can do it offline. This was very handy for debugging one of our ASP.NET applications that kept recycling due to memory overuse. I only had to create a full memory dump of the live process running on the production server; all the analysis was done offline in WinDbg. (It turned out some developer was overusing in-memory Session storage.)
"If broken it is..." blog has very useful articles on the subject.
After one of my fixes for a managed application, I had the same need: a way to verify that my application would not have the same memory leak after my next change. So I wrote something like an object release verification framework; please take a look at the NuGet package ObjectReleaseVerification. You can find a sample here https://github.com/outcoldman/OutcoldSolutions-ObjectReleaseVerification-Sample, and information about this sample at http://outcoldman.com/en/blog/show/322
The best thing to keep in mind is to keep track of the references to your objects. It is very easy to end up with hanging references to objects that you don't care about anymore.
If you are not going to use something anymore, get rid of it.
Get used to using a cache provider with sliding expirations, so that if something isn't referenced for the desired time window it is dereferenced and cleaned up, but if it is being accessed a lot it will stay in memory.
One of the best approaches is to use the Debugging Tools for Windows: take a memory dump of the process with adplus, then use WinDbg and the SOS plugin to analyze the process memory, threads, and call stacks.
You can use this method to identify problems on servers too: after installing the tools, share the directory, connect to the share from the server (net use), and take either a crash or a hang dump of the process.
Then analyze offline.
From Visual Studio 2015 onwards, consider using the built-in Memory Usage diagnostic tool to collect and analyze memory usage data.
The Memory Usage tool lets you take one or more snapshots of the managed and native memory heap to help understand the memory usage impact of object types.
One of the best tools I have used is dotMemory. You can use it as an extension in Visual Studio. After running your app you can analyze every part of the memory your app uses (by object, namespace, etc.), take snapshots of it, and compare them with other snapshots.
DotMemory
