Before installing my windows service in production, I was looking for reliable tests that I can perform to make sure my code doesn't contain memory leaks.
However, all I could find on the net was advice to watch used memory in Task Manager, or paid memory-profiler tools.
From my understanding, looking at Task Manager is not really helpful and cannot confirm a memory leak (in case there is one).
How to confirm whether there is a memory leak or not?
Are there any free tools to find the source of memory leaks?
Note: I'm using .Net Framework 4.6 and Visual Studio 2015 Community
Well, you can use Task Manager.
GC apps can leak memory, and it will show there.
But...
Free tool - ".NET CLR Profiler"
There is a free tool, and it's from Microsoft, and it's awesome. This is a must-use for all programs that leak references. Search Microsoft's site.
Leaking references means you forget to set object references to null, or they never leave scope; this is almost as likely to occur in garbage-collected languages as in unmanaged ones - lists that keep growing and never get cleared, event handlers still pointing at delegates, and so on.
It's the GC equivalent of memory leaks and has the same result. This program tells you what references are taking up tons of memory - and you will know if it's supposed to be that way or not, and if not, you can go find them and fix the problem!
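For illustration, here is a minimal sketch of the event-handler case (hypothetical classes, not from the profiler's documentation): the publisher's delegate list keeps every subscriber reachable, so a profiler would show Subscriber instances piling up even though the code has "lost" them.

    using System;

    class Publisher
    {
        public event EventHandler SomethingHappened;
        public void Raise() { var h = SomethingHappened; if (h != null) h(this, EventArgs.Empty); }
    }

    class Subscriber
    {
        private readonly byte[] payload = new byte[1024 * 1024]; // 1 MB each, to make the leak visible

        public void Attach(Publisher p)
        {
            // Never detached: the publisher's event now holds a strong
            // reference to this instance for as long as the publisher lives.
            p.SomethingHappened += OnSomethingHappened;
        }

        private void OnSomethingHappened(object sender, EventArgs e) { }
    }

    class Program
    {
        static void Main()
        {
            var publisher = new Publisher();
            for (int i = 0; i < 100; i++)
                new Subscriber().Attach(publisher); // all 100 subscribers stay reachable

            // Even after a full collection, ~100 MB is still held via the event.
            Console.WriteLine(GC.GetTotalMemory(forceFullCollection: true));
        }
    }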
It even has a cool visualization of which objects allocate what memory (so you can track down mistakes). I believe there are YouTube videos of this if you need an explanation.
Wikipedia page with download links...
NOTE: You will likely have to run your app not as a service to use this. It starts first and then runs your app. You can do this with TopShelf or by just putting the guts in a DLL that runs from an EXE that implements the service integrations (service host pattern).
Although managed code implies no direct memory management, you still have to manage your instances. Those instances 'claim' memory. And it is all about the usage of those instances, keeping them alive when you don't expect them to be.
Just one of many examples: wrong usage of disposable classes can result in a lot of instances claiming memory. For a Windows service, a slow but steady increase of instances can eventually result in too much memory usage.
Yes, there is a tool to analyze memory leaks. It just isn't free. However, you might be able to identify your problem within the 7-day trial.
I would suggest taking a look at the .NET Memory Profiler.
It is great to analyze memory leaks during development. It uses the concept of snapshots to compare new instances, disposed instances etc. This is a great help to understand how your service uses its memory. You can then dig deeper into why new instances get created or are kept alive.
Yes, you can test to confirm whether memory leaks are introduced.
However, just out of the box this will not be very useful. This is because no one can anticipate what will happen at runtime. The tool can analyze your app for common issues, but this is not guaranteed.
However, you can use this tool to integrate memory consumption into your unit test framework like NUnit or MSTest.
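As a rough sketch of that idea (my own, using only plain GC calls rather than any profiler API; DoWork is a hypothetical placeholder): run the code under test repeatedly and assert that the managed heap does not keep growing.

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class MemoryGrowthTests
    {
        [Test]
        public void RepeatedWork_DoesNotGrowManagedHeap()
        {
            DoWork(); // warm-up, so one-time allocations (JIT, caches) are excluded
            long before = GC.GetTotalMemory(forceFullCollection: true);

            for (int i = 0; i < 100; i++)
                DoWork();

            long after = GC.GetTotalMemory(forceFullCollection: true);

            // Allow some slack; a real leak grows with the iteration count.
            Assert.Less(after - before, 1024 * 1024, "managed heap grew by more than 1 MB");
        }

        private static void DoWork()
        {
            // Exercise the code under test here.
        }
    }

This is only a coarse smoke test, but it catches the steady-growth leaks that matter most for a long-running service.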
Of course a memory profiler is the first kind of tool to try, but it will only tell you whether your instances keep increasing. You still want to know whether it is normal that they are increasing. Also, once you have established that some instances keep increasing for no good reason (meaning you have a leak), you will want to know precisely which call trees lead to their allocation, so that you can troubleshoot the code that allocates them and fix it so that it does eventually release them.
Here is some of the knowledge I have collected over the years in dealing with such issues:
Test your service as a regular executable as much as possible. Trying to test the service as an actual service just makes things too complicated.
Get in the habit of explicitly undoing everything that you do at the end of the scope of that thing which you are doing. For example, if you register an observer to the event of some observee, there should always be some point in time (the disposal of the observer or the observee?) at which you de-register it. In theory, garbage collection should take care of that by collecting the entire graph of interconnected observers and observees, but in practice, if you don't kick the habit of forgetting to undo things that you do, you get memory leaks.
Use IDisposable as much as possible, and make your destructors report if someone forgot to invoke Dispose(). More about this method here: Mandatory disposal vs. the "Dispose-disposing" abomination. (Disclosure: I am the author of that article.)
Have regular checkpoints in your program where you release everything that should be releasable (as if the program is performing an orderly shutdown in order to terminate) and then force a garbage collection to see whether you have any leaks.
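A minimal sketch of such a checkpoint, using only standard GC calls (the label parameter is just for the report):

    using System;

    static class LeakCheckpoint
    {
        public static void ReportManagedMemory(string label)
        {
            // Force a full collection and wait for finalizers so that anything
            // truly unreachable is actually gone before we measure.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
            Console.WriteLine("{0}: {1:N0} bytes still reachable", label, GC.GetTotalMemory(true));
        }
    }

If the number reported keeps growing from one orderly-shutdown checkpoint to the next, something that should have been released is still rooted.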
If instances of some class appear to be leaking, use the following trick to discover the precise calling tree that caused their allocation: within the constructor of that class, allocate an exception object without throwing it, obtain the stack trace of the exception, and store it. If you discover later that this object has been leaked, you have the necessary stack trace. Just don't do this with too many objects, because allocating an exception and obtaining the stack trace from it is ridiculously slow, only Microsoft knows why.
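Note that in .NET an exception object that was never thrown has no stack trace yet, so the direct equivalent of this trick is to capture a System.Diagnostics.StackTrace in the constructor. A hedged sketch (the class name is hypothetical):

    using System.Diagnostics;

    class SuspectedLeaker
    {
    #if DEBUG
        // Captured at construction time; inspect this field in the debugger or
        // a memory dump to see which call tree allocated a leaked instance.
        // Capturing a stack trace is slow, so only enable it while leak-hunting.
        public readonly string AllocationStackTrace = new StackTrace(true).ToString();
    #endif
    }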
You could try the free Memoscope memory profiler
https://github.com/fremag/MemoScope.Net
I do not agree that you can trust Task Manager to check whether you have a memory leak or not. The problem with a garbage collector is that it can decide, based on heuristics, to keep memory after a memory spike and not return it to the OS. You might have a 2 GB commit size, but 90% of it can be free.
You should use VMMap to check during the tests what type of memory your process contains. You do not only have the managed heap, but also the unmanaged heap, private bytes, stacks (thread leaks), shared files, and much more that needs to be tracked.
VMMap also has a command-line interface, which makes it possible to create snapshots at regular intervals that you can examine later. If you see memory growth, you can find out which type of memory is leaking; depending on the leak type, different debugging tools and approaches are needed.
I would not say that the garbage collector is infallible. There are times when it fails unknowingly, and those cases are not so straightforward. Memory streams are a common cause of memory leaks: you can open one in one context and it may never get closed, even though the usage is wrapped in a using statement (the definition of a disposable object being one that should be cleaned up immediately after its usage falls out of scope). If you are experiencing crashes due to running out of memory, Windows does create dump files that you can sift through.
This is by no means fun or easy and is quite tedious but it tends to be your best bet.
Common areas where it is easy to create memory leaks are anything that uses the System.Drawing DLL, memory streams, and serious multi-threading.
If you use Entity Framework and a DI pattern, perhaps using Castle Windsor, you can easily get memory leaks.
The main thing to do is use the using() { } statement wherever you can to automatically mark objects as disposed.
Also, you want to turn off automatic change tracking in Entity Framework wherever you are only reading and not writing. Best to isolate your writes: use a using() { } at that point, get a DbContext (with tracking on), and write your data.
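A hedged sketch of that split, using EF6-style APIs (ShopContext and Customer are hypothetical stand-ins for your own context and entities):

    using System.Data.Entity;
    using System.Linq;

    static class CustomerStore
    {
        // Reads: AsNoTracking() stops EF from keeping change-tracking
        // snapshots of every loaded entity.
        public static Customer[] LoadCustomers()
        {
            using (var db = new ShopContext())
            {
                return db.Customers.AsNoTracking().ToArray();
            }
        }

        // Writes: a short-lived, tracked context that is disposed immediately.
        public static void RenameCustomer(int id, string newName)
        {
            using (var db = new ShopContext())
            {
                var customer = db.Customers.Find(id);
                customer.Name = newName;
                db.SaveChanges();
            }
        }
    }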
If you want to investigate what is on the heap, the best tool I've used is Red Gate ANTS http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/solving-memory-problems/getting-started - not cheap, but it works.
However, by using the using() { } pattern wherever you can (don't make a static or singleton DbContext, and never keep one context alive through a massive loop of updates; dispose of them as often as you can!), you'll find memory isn't often an issue.
Hope this helps.
Unless you're dealing with unmanaged code, I would be so bold as to say you don't have to worry about memory leaks. Any unreferenced object in managed code will be removed by the garbage collector, and if you were to find a memory leak within the .NET Framework itself, I'd say you should consider yourself very lucky (well, unlucky).
However, you can still encounter ever-growing memory usage if references to objects are never released. For example, say you keep an internal log structure and just keep adding entries to a log list. Then every entry is still referenced from the log list and will therefore never be collected.
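A minimal illustration (hypothetical names): the static list below roots every entry for the lifetime of the process, so the "leak" is simply that nothing ever removes them.

    using System.Collections.Generic;

    static class Log
    {
        private static readonly List<string> entries = new List<string>();

        public static void Write(string message)
        {
            // Without a size cap, trimming, or a ring buffer, this list grows
            // for as long as the service runs.
            entries.Add(message);
        }
    }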
From my experience, you can definitely use Task Manager as an indicator of whether your system has growing issues; if memory usage steadily keeps rising, you know you have an issue. If it grows to a point but eventually converges to a certain size, it has reached its operating threshold.
If you want a more detailed view of managed memory usage, you can download Process Explorer, developed by Microsoft. It is still quite blunt, but it gives a somewhat better statistical view than Task Manager.
I have a program that processes high volumes of data, and can cache much of it for reuse with subsequent records in memory. The more I cache, the faster it works. But if I cache too much, boom, start over, and that takes a lot longer!
I haven't been too successful trying to do anything after the exception occurs - I can't get enough memory to do anything.
Also I've tried allocating a huge object, then de-allocating it right away, with inconsistent results. Maybe I'm doing something wrong?
Anyway, what I'm stuck with is just setting a hard-coded limit on the number of cached objects that, from experience, seems to be low enough. Any better ideas? Thanks.
Edit (after answer):
The following code seems to be doing exactly what I want:
Do
    Dim memFailPoint As MemoryFailPoint = Nothing
    Try
        ' size in MB of the several objects I'm about to add to the cache
        memFailPoint = New MemoryFailPoint(mysize)
        memFailPoint.Dispose()
    Catch ex As InsufficientMemoryException
        ' dump the oldest items here
    End Try
    ' do work
Loop
I need to test whether it is slowing things down in this arrangement or not, but I can see the yellow line in Task Manager tracing a very healthy sawtooth pattern with a consistent top - yay!!
You can use MemoryFailPoint to check for available memory before allocating.
You may need to think about your release strategy for the cached objects. There is no possible way you can hold all of them forever, so you need to come up with an expiration timeframe and have older cached objects removed from memory. It should be possible to find out how much memory is left and use that as part of your strategy, but one thing is certain: old objects must go.
If you implement your cache with WeakReferences (http://msdn.microsoft.com/en-us/library/system.weakreference.aspx), the cached objects remain eligible for garbage collection in situations where you might otherwise throw an OutOfMemory exception.
This is an alternative to a fixed-size cache, but it potentially has the problem of being overly aggressive in clearing out the cache when a GC does occur.
You might consider taking a hybrid approach, where there is a (tunable) fixed number of non-weak references in the cache but you let it grow additionally with weak references. Or this may be overkill.
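A rough sketch of that hybrid idea (entirely hypothetical and untuned): the most recently added items are held strongly, everything else only weakly.

    using System;
    using System.Collections.Generic;

    class HybridCache<TKey, TValue> where TValue : class
    {
        private readonly int strongCapacity;
        private readonly LinkedList<TValue> strong = new LinkedList<TValue>();
        private readonly Dictionary<TKey, WeakReference> items = new Dictionary<TKey, WeakReference>();

        public HybridCache(int strongCapacity)
        {
            this.strongCapacity = strongCapacity;
        }

        public void Put(TKey key, TValue value)
        {
            items[key] = new WeakReference(value);
            strong.AddFirst(value);
            if (strong.Count > strongCapacity)
                strong.RemoveLast(); // the oldest item becomes weakly held only
        }

        public TValue Get(TKey key)
        {
            WeakReference weak;
            if (!items.TryGetValue(key, out weak))
                return null;
            // Target is null if the GC has already collected the weakly held object.
            return (TValue)weak.Target;
        }

        // A real implementation would also prune dictionary entries whose
        // WeakReference targets are dead, and refresh the strong list on Get.
    }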
There are a number of metrics you can use to keep track of how much memory your process is using:
GC.GetTotalMemory
Environment.WorkingSet (This one isn't useful, my bad)
The native GlobalMemoryStatusEx function
There are also various properties on the Process class
The trouble is that there isn't really a reliable way of telling from these values alone whether or not a given memory allocation will fail: although there may be sufficient space in the address space for a given allocation, memory fragmentation means that the space may not be contiguous, so the allocation may still fail.
You can however use these values as an indication of how much memory the process is using and therefore whether or not you should think about removing objects from your cache.
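For reference, here is how a few of these can be read from managed code (all standard APIs):

    using System;
    using System.Diagnostics;

    class MemoryMetrics
    {
        static void Main()
        {
            // Managed heap only; pass true to force a collection first.
            Console.WriteLine("GC heap:        {0:N0} bytes", GC.GetTotalMemory(false));

            using (var self = Process.GetCurrentProcess())
            {
                // Whole process, managed and unmanaged combined.
                Console.WriteLine("Working set:    {0:N0} bytes", self.WorkingSet64);
                Console.WriteLine("Private bytes:  {0:N0} bytes", self.PrivateMemorySize64);
                Console.WriteLine("Virtual size:   {0:N0} bytes", self.VirtualMemorySize64);
            }
        }
    }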
Update: It's also important to make sure that you understand the distinction between virtual memory and physical memory - unless your page file is disabled (very unlikely), the OutOfMemoryException will be caused by a lack or fragmentation of the virtual address space.
If you're only using managed resources you can use the GC.GetTotalMemory method and compare the results with the maximum allowed memory for a process on your architecture.
A more advanced solution (I think this is how SQL Server manages to actually adapt to the available memory) is to use the CLR Hosting APIs:
the interface allows the CLR to inform the host of the consequences of
failing a particular allocation
which will mean actually removing some objects from the cache and trying again.
Anyway, I think this is probably overkill for almost all applications unless you really need amazing performance.
The simple answer... by knowing what your memory limit is.
The closer you are to reaching that limit, the more likely you ARE to get an OutOfMemoryException.
The more elaborate answer... Unless you write a mechanism to do that kind of thing yourself, programming languages/systems do not work that way; as far as I know, they cannot inform you ahead of time that you are about to exceed a limit. BUT they gladly inform you once the problem has occurred, and that usually happens through exceptions which you are supposed to write code to handle.
Memory is a resource that you can use; it has limits and it also has some conventions and rules for you to follow to make good use of that resource.
I believe what you are doing, setting a good limit, hard-coded or configurable, seems to be your best bet.
I need to dispose of an object so it can release everything it owns, but it doesn't implement IDisposable, so I can't use it in a using block. How can I make the garbage collector collect it?
You can force a collection with GC.Collect(). Be very careful using this, since a full collection can take some time. The best practice is to just let the GC determine when the best time to collect is.
Does the object contain unmanaged resources but does not implement IDisposable? If so, it's a bug.
If it doesn't, it shouldn't matter if it gets released right away, the garbage collector should do the right thing.
If it "owns" anything other than memory, you need to fix the object to use IDisposable. If it's not an object you control this is something worth picking a different vendor over, because it speaks to the core of how well your vendor really understands .Net.
If it does just own memory, even a lot of it, all you have to do is make sure the object goes out of scope. Don't call GC.Collect() — it's one of those things that if you have to ask, you shouldn't do it.
You can't perform garbage collection on a single object. You could request a garbage collection by calling GC.Collect(), but this will affect all objects subject to cleanup. It is also highly discouraged, as it can have a negative effect on the performance of later collections.
Also, calling Dispose on an object does not clean up its memory. It only allows the object to remove references to unmanaged resources. For example, calling Dispose on a StreamWriter closes the stream and releases the Windows file handle. The memory for the object on the managed heap does not get reclaimed until a subsequent garbage collection.
Chris Sells also discussed this on .NET Rocks. I think it was during his first appearance but the subject might have been revisited in later interviews.
http://www.dotnetrocks.com/default.aspx?showNum=10
This article by Francesco Balena is also a good reference:
When and How to Use Dispose and Finalize in C#
http://www.devx.com/dotnet/Article/33167/0/page/1
Garbage collection in .NET is non deterministic, meaning you can't really control when it happens. You can suggest, but that doesn't mean it will listen.
Tell us a little bit more about the object and why you want to do this. We can make some suggestions based off of that. Code always helps. And depending on the object, there might be a Close method or something similar, and maybe the intended usage is to call that. If there is no Close or Dispose type of method, you probably don't want to rely on that object, as you will probably get memory leaks if it does in fact contain resources which need to be released.
If the object goes out of scope and it has no external references, it will be collected rather quickly (likely on the next collection).
BEWARE of fragmentation: in many cases GC.Collect() or some IDisposal is not very helpful, especially for large objects (the LOH is for objects of roughly 80 KB and up; it performs no compaction and is subject to high levels of fragmentation for many common use cases), which will then lead to out-of-memory (OOM) issues even with potentially hundreds of MB free. As time marches on, things get bigger, though perhaps not this size threshold for LOH-relegated objects; high degrees of parallelism exacerbate this issue simply because more objects (likely varying in size) are instantiated/released in less time.
Arrays are the usual suspects for this problem (it's also often hard to identify due to non-specific exceptions and assertions from the runtime; something like "high % of large object heap fragmentation" would be swell). The prognosis for code suffering from this problem is to implement an aggressive re-use strategy.
A class, System.Collections.Concurrent.ObjectPool, from the Parallel Extensions Beta 1 samples helps (unfortunately there is not a simple ubiquitous pattern that I have seen, like maybe some attached properties/extension methods?). It is simple enough to drop in or re-implement for most projects: you assign a generator Func<> and use Get/Put helper methods to re-use your previous objects and forgo the usual garbage collection. It is usually sufficient to focus on arrays and not the individual array elements.
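The Get/Put shape described above is easy to re-implement; here is a minimal sketch of my own (not the Parallel Extensions sample itself):

    using System;
    using System.Collections.Concurrent;

    class ObjectPool<T>
    {
        private readonly ConcurrentBag<T> items = new ConcurrentBag<T>();
        private readonly Func<T> generator;

        public ObjectPool(Func<T> generator)
        {
            this.generator = generator;
        }

        public T Get()
        {
            T item;
            // Re-use a previously returned instance when available.
            return items.TryTake(out item) ? item : generator();
        }

        public void Put(T item)
        {
            // Hand the instance back instead of letting it die on the LOH.
            items.Add(item);
        }
    }

    // Usage: pool the large arrays so the LOH sees a handful of long-lived
    // buffers instead of constant churn of short-lived ones.
    // var pool = new ObjectPool<byte[]>(() => new byte[128 * 1024]);
    // var buffer = pool.Get(); /* ... */ pool.Put(buffer);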
It would be nice if .NET 4 had updated all of the .ToArray() methods everywhere to include a .ToArray(T[] target) overload.
Getting the hang of using SOS/WinDbg (.loadby sos clr for CLR v4) to analyze this class of issue can help. Thinking about it, the current garbage collection system is more like garbage recycling (using the same physical memory again); ObjectPool is analogous to garbage re-using. If anybody remembers the 3 R's, reducing your memory use is a good idea too, for performance's sake ;)
I really love WeakReferences. But I wish there was a way to tell the CLR how weak (say, on a scale of 1 to 5) you consider the reference to be. That would be brilliant.
Java has SoftReference, WeakReference and, I believe, also a third type called PhantomReference. That's three levels right there for which the GC has a different behaviour algorithm when deciding whether that object gets the chop.
I am thinking of subclassing .NET's WeakReference (luckily, and slightly bizarrely, it isn't sealed) to make a pseudo-SoftReference that is based on an expiration timer or something.
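A rough sketch of that idea (my own untested design; names are hypothetical): hold a strong reference alongside the weak one and drop it once the entry has gone untouched past its lifetime, after which the object survives only as long as the GC allows.

    using System;

    class SoftishReference : WeakReference // WeakReference is indeed not sealed
    {
        private object strongRef;           // keeps the target alive while "fresh"
        private DateTime lastTouched;
        private readonly TimeSpan lifetime;

        public SoftishReference(object target, TimeSpan lifetime) : base(target)
        {
            this.strongRef = target;
            this.lifetime = lifetime;
            this.lastTouched = DateTime.UtcNow;
        }

        public new object Target
        {
            get
            {
                if (strongRef != null && DateTime.UtcNow - lastTouched > lifetime)
                    strongRef = null; // expired: from now on, only weakly referenced

                object target = base.Target; // null once the GC has collected it
                if (target != null)
                {
                    lastTouched = DateTime.UtcNow;
                    strongRef = target; // touching it makes it "fresh" again
                }
                return target;
            }
        }
    }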
I believe the fundamental reason that .NET does not have soft references is that it can rely on an operating system with virtual memory. A Java process must specify its maximum OS memory (e.g. with -Xmx128M), and it never takes more OS memory than that. Whereas a .NET process keeps taking the OS memory that it needs, which the OS supplies with disk-backed virtual memory when RAM runs out. If .NET allowed soft references, then the .NET runtime would not know when to release them unless it either peeked deep into the OS to see whether its memory is actually paged out to disk (a nasty OS/CLR dependency), or it required the process to specify a maximum memory footprint (e.g. an equivalent of -Xmx). I guess Microsoft does not want to add -Xmx to .NET because they think the OS should decide how much RAM each process gets (by choosing which virtual memory pages to hold in RAM or on disk), and not the process itself.
Java SoftReferences are used in the creation of memory sensitive caches (they serve no other purpose).
As of .NET 4, .NET has a class System.Runtime.Caching.MemoryCache which will probably meet any such needs.
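For example (standard System.Runtime.Caching API; the key and the loader are hypothetical):

    using System;
    using System.Runtime.Caching; // reference the System.Runtime.Caching assembly

    class CacheDemo
    {
        static void Main()
        {
            ObjectCache cache = MemoryCache.Default;

            var policy = new CacheItemPolicy
            {
                // Evict the entry if unused for 5 minutes; MemoryCache also
                // evicts under memory pressure, which is the SoftReference-like part.
                SlidingExpiration = TimeSpan.FromMinutes(5)
            };

            cache.Set("report:today", LoadReport(), policy);

            var report = (string)cache.Get("report:today"); // null after eviction
            Console.WriteLine(report ?? "evicted, reload");
        }

        static string LoadReport() { return "expensive result"; }
    }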
Having a WeakReference with varying levels of weakness (priority) sounds nice, but it might also make the GC's job harder, not easier. (I've no idea about the GC internals, but) I would assume there are some sort of additional access statistics kept for WeakReference objects so that the GC can clean them up efficiently (e.g. it might get rid of the least-used items first).
More than likely the added complexity wouldn't make anything more efficient, because the most efficient way is to get rid of infrequently used WeakReferences first. If you could assign a priority, how would you do it? This smells like a premature optimization: the programmer doesn't really know most of the time and is guessing; the result is a slower GC collection cycle that is probably reclaiming the wrong objects.
It begs the question, though: if you care about the WeakReference.Target object being reclaimed, is it really a good use of WeakReference?
It's like a cache. You shove stuff into the cache and ask the cache to make it stale after x minutes, but most caches never guarantee to keep it around at all. It just guarantees that if it does, it will expire it according to the policy requested.
My guess as to why this isn't there already would be simplicity. Most people, I think, would call it a virtue that there is only one type of reference, not four.
Maybe the ASP.NET Cache class (System.Web.Caching.Cache) might help achieve what you want? It automatically removes objects if memory gets low:
ASP.NET Caching Overview
Here's an article that shows how to use the Cache class in a Windows Forms application.
quoted from: Equivalent to SoftReference in .net?
Don't forget that you also have your standard references (the ones that you use on a daily basis). This gives you one more level.
WeakReferences should be used when you don't really care if the object goes away, while SoftReferences really only should be used when you would use a normal reference, but you would rather have your object cleared than run out of memory. I'm not sure on the specifics, but I suspect that the GC normally traces through SoftReferences but not WeakReferences when determining which objects are live, while when running low on memory it will also skip the SoftReferences.
My guess is that the .NET designers felt that the difference was confusing to most people and/or that SoftReferences add more complexity than they really wanted, and so decided to leave them out.
As a side note, AFAIK PhantomReferences are mostly designed for internal use by the virtual machine and are not intended for actual client use.
Maybe there should be a property where you can specify which generation the object must reach (>=) before it is collected. So if you specify 1, it is the weakest possible reference. But if you specify 3, it would need to survive at least 3 prior collections before it can be considered for collection itself.
I thought the track-resurrection flag was no good for this because by that time the object has already been finalized? May be wrong though...
(PS: I am the OP, just signed up. PITA that it doesn't inherit your history from "unregistered" accounts.)
Looking for the 'trackResurrection' option passed to the constructor perhaps?
The GC class also offers some assistance.
I don't know why .NET does not have SoftReferences.
BUT in Java, SoftReferences are, IMHO, overused. The reason is that, at least in an application server, you would want to be able to influence per application how long your SoftReferences live. That's currently not possible in Java.
I wrote C++ for 10 years. I encountered memory problems, but they could be fixed with a reasonable amount of effort.
For the last couple of years I've been writing C#. I find I still get lots of memory problems. They're difficult to diagnose and fix due to the non-determinism, and because the C# philosophy is that you shouldn't have to worry about such things when you very definitely do.
One particular problem I find is that I have to explicitly dispose and clean up everything in code. If I don't, then the memory profilers don't really help, because there is so much chaff floating about that you can't find a leak within all the data they're trying to show you. I wonder if I've got the wrong idea, or if the tool I've got isn't the best.
What kind of strategies and tools are useful for tackling memory leaks in .NET?
I use Scitech's MemProfiler when I suspect a memory leak.
So far, I have found it to be very reliable and powerful. It has saved my bacon on at least one occasion.
The GC works very well in .NET IMO, but just like any other language or platform, if you write bad code, bad things happen.
Just for the forgetting-to-dispose problem, try the solution described in this blog post. Here's the essence:
public void Dispose()
{
    // Dispose logic here ...

    // It's a bad error if someone forgets to call Dispose,
    // so in Debug builds, we put a finalizer in to detect
    // the error. If Dispose is called, we suppress the
    // finalizer.
#if DEBUG
    GC.SuppressFinalize(this);
#endif
}

#if DEBUG
~TimedLock()
{
    // If this finalizer runs, someone somewhere failed to
    // call Dispose, which means we've failed to leave
    // a monitor!
    System.Diagnostics.Debug.Fail("Undisposed lock");
}
#endif
We've used Ants Profiler Pro by Red Gate software in our project. It works really well for all .NET language-based applications.
We found that the .NET garbage collector is very "safe" in its cleaning up of in-memory objects (as it should be). It would keep objects around just because we might be using them sometime in the future. This meant we needed to be more careful about the number of objects that we inflated in memory. In the end, we converted all of our data objects over to "inflate on demand" (just before a field is requested) in order to reduce memory overhead and increase performance.
EDIT: Here's a further explanation of what I mean by "inflate on demand." In our object model of our database we use Properties of a parent object to expose the child object(s). For example if we had some record that referenced some other "detail" or "lookup" record on a one-to-one basis we would structure it like this:
Class ParentObject
    Private mRelatedObject As New CRelatedObject
    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            mRelatedObject.getWithID(RelatedObjectID)
            Return mRelatedObject
        End Get
    End Property
End Class
We found that the above system created some real memory and performance problems when there were a lot of records in memory. So we switched over to a system where objects were inflated only when they were requested, and database calls were done only when necessary:
Class ParentObject
    Private mRelatedObject As CRelatedObject
    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            If mRelatedObject Is Nothing Then
                mRelatedObject = New CRelatedObject
            End If
            If mRelatedObject.isEmptyObject Then
                mRelatedObject.getWithID(RelatedObjectID)
            End If
            Return mRelatedObject
        End Get
    End Property
End Class
This turned out to be much more efficient, because objects were kept out of memory until they were needed (the Get method was accessed). It provided a very large performance boost by limiting database hits and a huge gain in memory space.
You still need to worry about memory when you are writing managed code, unless your application is trivial. I will suggest two things: first, read CLR via C#, because it will help you understand memory management in .NET. Second, learn to use a tool like CLR Profiler (from Microsoft). This can give you an idea of what is causing your memory leak (e.g. you can take a look at your large object heap fragmentation).
Are you using unmanaged code? If you are not using unmanaged code, according to Microsoft, memory leaks in the traditional sense are not possible.
Memory used by an application may not be released however, so an application's memory allocation may grow throughout the life of the application.
From How to identify memory leaks in the common language runtime at Microsoft.com
A memory leak can occur in a .NET Framework application when you use unmanaged code as part of the application. This unmanaged code can leak memory, and the .NET Framework runtime cannot address that problem.
Additionally, a project may only appear to have a memory leak. This condition can occur if many large objects (such as DataTable objects) are declared and then added to a collection (such as a DataSet). The resources that these objects own may never be released, and the resources are left alive for the whole run of the program. This appears to be a leak, but actually it is just a symptom of the way that memory is being allocated in the program.
For dealing with this type of issue, you can implement IDisposable. If you want to see some of the strategies for dealing with memory management, I would suggest searching for IDisposable, XNA, memory management, as game developers need more predictable garbage collection and so must force the GC to do its thing.
One common mistake is not removing event handlers that subscribe to an object. An event handler subscription will prevent an object from being recycled. Also, take a look at the using statement, which allows you to create a limited scope for a resource's lifetime.
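A small sketch of both points together (hypothetical names): unsubscribe when the consumer is disposed, and let a using block guarantee the unsubscribe actually runs.

    using System;

    class Feed // hypothetical event source
    {
        public event EventHandler DataArrived;
    }

    class Consumer : IDisposable
    {
        private readonly Feed feed;

        public Consumer(Feed feed)
        {
            this.feed = feed;
            feed.DataArrived += OnDataArrived; // this subscription roots the Consumer
        }

        private void OnDataArrived(object sender, EventArgs e) { /* handle */ }

        public void Dispose()
        {
            feed.DataArrived -= OnDataArrived; // without this, the feed keeps us alive
        }
    }

    // using (var consumer = new Consumer(feed)) { /* ... */ }  // Dispose runs here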
This blog has some really wonderful walkthroughs using windbg and other tools to track down memory leaks of all types. Excellent reading to develop your skills.
I just had a memory leak in a Windows service, which I fixed.
First, I tried MemProfiler. I found it really hard to use and not at all user-friendly.
Then, I used JustTrace which is easier to use and gives you more details about the objects that are not disposed correctly.
It allowed me to solve the memory leak really easily.
If the leaks you are observing are due to a runaway cache implementation, this is a scenario where you might want to consider the use of WeakReference. This could help to ensure that memory is released when necessary.
However, IMHO it would be better to consider a bespoke solution - only you really know how long you need to keep the objects around, so designing appropriate housekeeping code for your situation is usually the best approach.
I prefer dotMemory from JetBrains.
Big guns - Debugging Tools for Windows
This is an amazing collection of tools. You can analyze both managed and unmanaged heaps with it, and you can do it offline. This was very handy for debugging one of our ASP.NET applications that kept recycling due to memory overuse. I only had to create a full memory dump of the living process running on the production server; all analysis was done offline in WinDbg. (It turned out some developer was overusing in-memory Session storage.)
"If broken it is..." blog has very useful articles on the subject.
After one of my fixes for a managed application, I wanted to verify that the application would not have the same memory leak after my next change, so I wrote something like an object-release verification framework; please take a look at the NuGet package ObjectReleaseVerification. You can find a sample here https://github.com/outcoldman/OutcoldSolutions-ObjectReleaseVerification-Sample, and information about this sample at http://outcoldman.com/en/blog/show/322.
The best thing to keep in mind is to keep track of the references to your objects. It is very easy to end up with hanging references to objects that you don't care about anymore.
If you are not going to use something anymore, get rid of it.
Get used to using a cache provider with sliding expirations, so that if something isn't referenced for a desired time window it is dereferenced and cleaned up. But if it is being accessed a lot, it will stay in memory.
One of the best tools is the Debugging Tools for Windows: take a memory dump of the process using ADPlus, then use WinDbg and the SOS extension to analyze the process memory, threads, and call stacks.
You can use this method to identify problems on servers too: after installing the tools, share the directory, connect to the share from the server (net use), and take either a crash or hang dump of the process.
Then analyze offline.
From Visual Studio 2015 on, consider using the out-of-the-box Memory Usage diagnostic tool to collect and analyze memory usage data.
The Memory Usage tool lets you take one or more snapshots of the managed and native memory heap to help understand the memory usage impact of object types.
One of the best tools I have used is dotMemory. You can use this tool as an extension in Visual Studio. After running your app, you can analyze every part of the memory (by object, namespace, etc.) that your app uses, take snapshots of it, and compare them with other snapshots.
DotMemory