How to detect if concurrent GC is running in .Net - c#

Is there a straightforward function call to determine if Concurrent GC has been enabled in the runtime I'm running in? We have a heterogeneous environment and we need to log which mode is being used so we can identify which systems need to be modified.
I realise that I can investigate the exe.config and check it manually; I was just wondering if there is a property sitting somewhere that exposes this info without having to resort to a hack.

If System.Runtime.GCSettings.LatencyMode has the value LatencyMode.Batch then concurrent GC has been disabled.
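A minimal sketch of that check, suitable for dropping into your startup logging (the class and method names are just illustrative):

using System;
using System.Runtime;

static class GcModeLogger
{
    static void Main()
    {
        // GCLatencyMode.Batch means concurrent (background) GC has been disabled;
        // any other value (Interactive, SustainedLowLatency, ...) means it is on.
        GCLatencyMode mode = GCSettings.LatencyMode;
        bool concurrentDisabled = (mode == GCLatencyMode.Batch);
        Console.WriteLine("LatencyMode = {0}, concurrent GC disabled = {1}", mode, concurrentDisabled);
    }
}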

Sadly it doesn't look like there is a method baked in; if there were, I would expect to see it here: System.Runtime.GCSettings.
Unless you have a good reason, it's best to leave concurrent garbage collection on (as it's the default).
Generally the only things that care about this are unmanaged applications that are hosting the CLR themselves (such as SQL Server or IIS).
Is your application a good candidate for programmatically changing the gcConcurrent in the exe.config? If not, it would seem as you said, reading the value from the config is the less hackish way (IMHO).
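If you do end up reading it from the config, a rough sketch of that approach might look like the following. It assumes the setting lives in the usual <runtime><gcConcurrent enabled="..."/> element and treats a missing element as the default (enabled):

using System;
using System.Linq;
using System.Xml.Linq;

static class GcConcurrentConfig
{
    // Returns true if gcConcurrent is enabled in the application's .exe.config
    // (which is also the default when the element is missing).
    public static bool IsEnabled()
    {
        string configPath = AppDomain.CurrentDomain.SetupInformation.ConfigurationFile;
        XElement element = XDocument.Load(configPath)
                                    .Descendants("gcConcurrent")
                                    .FirstOrDefault();
        if (element == null)
            return true;
        string enabled = (string)element.Attribute("enabled");
        return !string.Equals(enabled, "false", StringComparison.OrdinalIgnoreCase);
    }
}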
Nevertheless, I did a bit of research, and starting with .NET 4 you can use ETW to detect garbage collection events; a sketch of listening for them follows the quote below:
Event tracing for Windows (ETW) is a tracing system that supplements the profiling and debugging support provided by the .NET Framework. Starting with the .NET Framework 4, garbage collection ETW events capture useful information for analyzing the managed heap from a statistical point of view. For example, the GCStart_V1 event, which is raised when a garbage collection is about to occur, provides the following information:
Which generation of objects is being collected.
What triggered the garbage collection.
Type of garbage collection (concurrent or not concurrent).
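A sketch of listening for those events from managed code, assuming the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package (not part of the Framework itself) and an elevated process:

using System;
using Microsoft.Diagnostics.Tracing;
using Microsoft.Diagnostics.Tracing.Parsers;
using Microsoft.Diagnostics.Tracing.Parsers.Clr;
using Microsoft.Diagnostics.Tracing.Session;

class GcEtwListener
{
    static void Main()
    {
        using (var session = new TraceEventSession("GcTypeSession"))
        {
            // Subscribe only to the CLR provider's GC keyword.
            session.EnableProvider(ClrTraceEventParser.ProviderGuid,
                                   TraceEventLevel.Informational,
                                   (ulong)ClrTraceEventParser.Keywords.GC);

            // GCStart carries the generation, the trigger reason and the
            // GC type (non-concurrent, background or foreground).
            session.Source.Clr.GCStart += (GCStartTraceData data) =>
                Console.WriteLine("GC #{0}: gen {1}, reason {2}, type {3}",
                                  data.Count, data.Depth, data.Reason, data.Type);

            session.Source.Process(); // blocks until the session is disposed
        }
    }
}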


Tracking Down a .NET Windows Service Memory Leak

Before installing my windows service in production, I was looking for reliable tests that I can perform to make sure my code doesn't contain memory leaks.
However, all I could find on the net was advice to watch used memory in Task Manager, or paid memory profiler tools.
From my understanding, looking at Task Manager is not really helpful and cannot confirm a memory leak (in case there is one).
How to confirm whether there is a memory leak or not?
Are there any free tools to find the source of memory leaks?
Note: I'm using .Net Framework 4.6 and Visual Studio 2015 Community
Well you can use task manager.
GC apps can leak memory, and it will show there.
But...
Free tool - ".Net CLR profiler"
There is a free tool, and it's from Microsoft, and it's awesome. This is a must-use for all programs that leak references. Search MS' site.
Leaking references means you forget to set object references to null, or they never leave scope; this is almost as likely to occur in garbage-collected languages as not - lists building up and never being cleared, event handlers still pointing to delegates, and so on.
It's the GC equivalent of memory leaks and has the same result. This program tells you what references are taking up tons of memory - and you will know if it's supposed to be that way or not, and if not, you can go find them and fix the problem!
It even has a cool visualization of what objects allocate what memory (so you can track down mistakes). I believe there are YouTube videos of this if you need an explanation.
Wikipedia page with download links...
NOTE: You will likely have to run your app not as a service to use this. The profiler starts first and then runs your app. You can do this with TopShelf, or by putting the guts in a DLL that runs from an EXE that implements the service integrations (the service-host pattern).
Although managed code implies no direct memory management, you still have to manage your instances. Those instances 'claim' memory. And it is all about the usage of these instances, keeping them alive when you don't expect them to be.
Just one of many examples: wrong usage of disposable classes can result in a lot of instances claiming memory. For a Windows service, a slow but steady increase of instances can eventually result in too much memory usage.
Yes, there is a tool to analyze memory leaks. It just isn't free. However, you might be able to identify your problem within the 7-day trial.
I would suggest taking a look at the .NET Memory Profiler.
It is great to analyze memory leaks during development. It uses the concept of snapshots to compare new instances, disposed instances etc. This is a great help to understand how your service uses its memory. You can then dig deeper into why new instances get created or are kept alive.
Yes, you can test to confirm whether memory leaks are introduced.
However, just out of the box this will not be very useful. This is because no one can anticipate what will happen at runtime. The tool can analyze your app for common issues, but this is not guaranteed.
However, you can use this tool to integrate memory consumption into your unit test framework like NUnit or MSTest.
Of course a memory profiler is the first kind of tool to try, but it will only tell you whether your instances keep increasing. You still want to know whether it is normal that they are increasing. Also, once you have established that some instances keep increasing for no good reason, (meaning, you have a leak,) you will want to know precisely which call trees lead to their allocation, so that you can troubleshoot the code that allocates them and fix it so that it does eventually release them.
Here is some of the knowledge I have collected over the years in dealing with such issues:
Test your service as a regular executable as much as possible. Trying to test the service as an actual service just makes things too complicated.
Get in the habit of explicitly undoing everything that you do at the end of the scope of that thing which you are doing. For example, if you register an observer to the event of some observee, there should always be some point in time (the disposal of the observer or the observee?) that you de-register it. In theory, garbage collection should take care of that by collecting the entire graph of interconnected observers and observees, but in practice, if you don't kick the habit of forgetting to undo things that you do, you get memory leaks.
Use IDisposable as much as possible, and make your destructors report if someone forgot to invoke Dispose(). More about this method here: Mandatory disposal vs. the "Dispose-disposing" abomination Disclosure: I am the author of that article.
Have regular checkpoints in your program where you release everything that should be releasable (as if the program is performing an orderly shutdown in order to terminate) and then force a garbage collection to see whether you have any leaks.
If instances of some class appear to be leaking, use the following trick to discover the precise calling tree that caused their allocation: within the constructor of that class, allocate an exception object without throwing it, obtain the stack trace of the exception, and store it. If you discover later that this object has been leaked, you have the necessary stack trace. Just don't do this with too many objects, because allocating an exception and obtaining the stack trace from it is ridiculously slow, only Microsoft knows why.
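A sketch of that last trick in C#. Note that in .NET an exception that has never been thrown has a null StackTrace, so System.Diagnostics.StackTrace (or Environment.StackTrace) is used here instead to capture the allocation site; the class name is made up:

using System.Diagnostics;

// Hypothetical class suspected of leaking.
class SuspectedLeaker
{
    // Captured at construction time; if instances later turn out to be
    // leaked, this tells you exactly which call tree allocated them.
    public readonly string AllocationStackTrace;

    public SuspectedLeaker()
    {
        // Expensive - only enable this while actively hunting a leak.
        AllocationStackTrace = new StackTrace(fNeedFileInfo: true).ToString();
    }
}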
You could try the free Memoscope memory profiler
https://github.com/fremag/MemoScope.Net
I do not agree that you can trust Task Manager to check if you have a memory leak or not. The problem with a garbage collector is that it can decide, based on heuristics, to keep memory after a memory spike and not return it to the OS. You might have a 2 GB commit size, but 90% of it can be free.
You should use VMMap to check during the tests what type of memory your process contains. You have not only the managed heap, but also the unmanaged heap, private bytes, stacks (thread leaks), shared files and much more that need to be tracked.
VMMap also has a command line interface which makes it possible to create snapshots at regular intervals which you can examine later. If you have memory growth, you can find out which type of memory is leaking, which, depending on the leak type, calls for different debugging tools and approaches.
I would not say that the garbage collector is infallible. There are times when it fails unknowingly, and they are not so straightforward. Memory streams are a common cause of memory leaks. You can open them in one context and they may never even get closed, even though the usage is wrapped in a using statement (the definition of a disposable object that should be cleaned up immediately after its usage falls out of scope). If you are experiencing crashes due to running out of memory, Windows does create dump files that you can sift through.
This is by no means fun or easy and is quite tedious but it tends to be your best bet.
Common areas where it's easy to create memory leaks are anything using the System.Drawing DLL, memory streams, and serious multi-threading.
If you use Entity Framework and a DI pattern, perhaps using Castle Windsor, you can easily get memory leaks.
The main thing to do is use the using( ){ } statement wherever you can, so objects are disposed automatically.
Also, you want to turn off automatic change tracking in Entity Framework where you are only reading and not writing. Best to isolate your writes: use a using() {} at that point, get a DbContext (with tracking on), and write your data, as sketched below.
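A sketch of both points together; BlogContext, Blog and Blogs are made-up names, and AsNoTracking is the standard Entity Framework call for read-only queries:

using System.Data.Entity;   // EF 6
using System.Linq;

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

class Example
{
    static void ReadAndWrite()
    {
        // Read path: no change tracking, context disposed as soon as possible.
        using (var context = new BlogContext())
        {
            var titles = context.Blogs
                                .AsNoTracking()   // don't keep entities in the change tracker
                                .Select(b => b.Title)
                                .ToList();
        }

        // Write path: tracking on, short-lived context, disposed when done.
        using (var context = new BlogContext())
        {
            context.Blogs.Add(new Blog { Title = "New post" });
            context.SaveChanges();
        }
    }
}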
If you want to investigate what is on the heap, the best tool I've used is Red Gate ANTS http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/solving-memory-problems/getting-started - not cheap, but it works.
However, by using the using() {} pattern wherever you can (don't make a static or singleton DbContext, and never have one context in a massive loop of updates; dispose of them as often as you can!), you will find memory isn't often an issue.
Hope this helps.
Unless you're dealing with unmanaged code, I would be so bold as to say you don't have to worry about memory leaks in the classic sense. Any unreferenced object in managed code will be removed by the garbage collector, and if you find a genuine leak inside the .NET Framework itself, I'd say you should consider yourself very lucky (well, unlucky).
However, you can still encounter ever-growing memory usage, if references to objects are never released. For example, say you keep an internal log structure, and you just keep adding entries to a log list. Then every entry still have references from the log list and therefore will never be collected.
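For example, a sketch of exactly that kind of "leak" (the class is illustrative):

using System.Collections.Generic;

// The static list keeps a live reference to every entry ever added,
// so the GC can never reclaim them - memory grows for the life of the process.
static class InMemoryLog
{
    private static readonly List<string> _entries = new List<string>();

    public static void Write(string message)
    {
        _entries.Add(message); // grows forever
        // A bounded alternative: trim old entries once a limit is reached, e.g.
        // if (_entries.Count > 10000) _entries.RemoveRange(0, _entries.Count - 10000);
    }
}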
From my experience, you can definitely use Task Manager as an indicator of whether your system has growing issues; if the memory usage steadily keeps rising, you know you have an issue. If it grows to a point but eventually converges to a certain size, it indicates it has reached its operating threshold.
If you want a more detailed view of managed memory usage, you can download Process Explorer, developed by Microsoft (Sysinternals). It is still quite blunt, but it gives a somewhat better statistical view than Task Manager.

.Net 4 MemoryCache Leaks with Concurrent Garbage Collection

I'm using the new MemoryCache in .NET 4, with a max cache size limit in MB (I've tested it set between 10 and 200 MB, on systems with between 1.75 and 8 GB of memory). I don't set any time-based expiration on the objects, as I'm using the cache simply as a high-performance drive, and as long as there is space, I want it used. To my surprise, the cache refused to evict any objects, to the point that I would get OutOfMemoryExceptions.
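For reference, I construct the size-limited cache roughly like this (the limit shown is one of the values I tested; the other settings are illustrative defaults):

using System.Collections.Specialized;
using System.Runtime.Caching;

class CacheFactory
{
    public static MemoryCache CreateSizeLimitedCache()
    {
        var config = new NameValueCollection
        {
            { "CacheMemoryLimitMegabytes", "100" },   // hard cap in MB
            { "PhysicalMemoryLimitPercentage", "0" }, // 0 = use the default
            { "PollingInterval", "00:02:00" }         // how often the limits are checked
        };
        return new MemoryCache("TestCache", config);
    }
}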
I fired up perfmon, wired up my application to .Net CLR Memory\#Bytes In All Heaps, .Net Memory Cache 4.0, and Process\Private Bytes -- indeed, the memory consumption was out of control, and no cache trims were being registered.
Did some googling and stackoverflowing, downloaded and attached the CLRProfiler, and wham: evictions everywhere! The memory stayed within reasonable bounds based upon the memory size limit I had set. Ran it in debug mode again, no evictions. CLRProfiler again, evictions.
I finally noticed that the profiler forced the application to run without concurrent garbage collection (also see useful SO Concurrent Garbage Collection Question). I turned it off in my app.config, and, sure enough, evictions!
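That is, roughly this in the .config (the gcConcurrent element under runtime):

<configuration>
  <runtime>
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>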
This seems like at best an outrageous lack of documentation to not say: this only works with non-concurrent garbage collection -- though I imagine, since it's ported from ASP.NET, they may not have had to worry about concurrent garbage collection.
So has anyone else seen this? I'd love to get some other experiences out there, and maybe some more educated insights.
Update 1
I've reproduced the issue within a single method: it seems that the cache must be written to in parallel for the cache evictions not to fire (in concurrent garbage collection mode). If there is some interest, I'll upload the test code to a public repo. I'm definitely getting toward the deep end of the CLR/GC/MemoryCache pool, and I think I forgot my floaties...
Update 2
I published test code on CodePlex to reproduce the issue. Also, possibly of interest, the original production code runs in Azure, as a Worker Role. Interestingly, changing the GC concurrency setting in the role's app.config has no effect. Possibly Azure overrides GC settings much like ASP.NET? Further, running the test code under WPF vs a Console application will produce slightly different eviction results.
You can "force" a garbage collection right after the problematic method and see if the problem reproduces by executing:
System.Threading.Thread.Sleep(200);
GC.Collect();
GC.WaitForPendingFinalizers();
right at the end of the method (make sure that you free any handles to reference objects and null them out). If this prevents the memory leakage, then yes, there may be a runtime bug.
Stop-the-world garbage collection is based on determining whether a strong live reference to an object exists at the moment the world is stopped. Concurrent garbage collection usually determines whether a strong live reference to an object has existed since some particular time in the past. My conjecture would be that many strong references to objects held in WeakReferences are being individually created and discarded. If a stop-the-world garbage collector fires between the time a particular object is created and the time it's discarded, that particular object will be kept alive, but previously-discarded objects will not. By contrast, a concurrent garbage collector may not detect that all strong references to an object have been discarded until a certain amount of time goes by without any strong references to that object being created.
I've sometimes wished that .NET would offer something between a strong reference and a weak one, which would prevent an object from being wiped from memory, but would not protect it from being finalized or having WeakReferences to it invalidated. Such references would slightly complicate the GC process, requiring every object to have separate flags indicating whether strong and quasi-weak references exist to it, and whether it has been scanned for both strong and quasi-weak references, but such a feature could be helpful in many "weak event" scenarios.
I found this entry while searching for a similar topic, and I'm focusing on your Out of Memory exception.
If you put an object in the cache then it still may be referencing other objects and therefore these objects would not be garbage collected -- hence the out of memory exception and probably a CPU being pegged out due to Gen 2 garbage collection.
Are you putting "used" objects on the cache, or clones of "used" objects on the cache? If you put a clone on the cache then the "used" object that possibly references other objects could be garbage collected.
If you shut off your caching mechanism does your program still run out of memory? If it doesn't run out of memory then that would prove that the objects you would otherwise be putting on the cache are still holding references to other objects hindering garbage collection.
Forcing garbage collection is not a best practice and shouldn't have to be done. In this scenario forcing a garbage collection wouldn't dispose of referenced objects anyway.
MemoryCache definitely has some issues. It ate 160 MB of memory on my ASP.NET server; I just changed to a simple list and added some extra logic to get the same functionality.

Obtaining global roots from .NET programs

I recently started using the ANTS profiling tools for production work. Aside from being amazed by their awesomeness, I couldn't help but wonder how they work. For example, one of the most useful features lets you visualize the global roots of a running program complete with the number of references to values of different types.
How does this tool get hold of that information?
(Full disclosure: I'm on the Visual Studio Profiler team, but the below information is public)
You can do this by writing a CLR profiler that runs inside the process you're targeting. CLR profilers are C++ COM objects that get instantiated by the runtime when the COR_PROFILER and COR_ENABLE_PROFILING environment variables are set (see here). There are two main CLR profiling interfaces, specifically, ICorProfilerCallback and ICorProfilerInfo. ICorProfilerCallback is what the CLR uses to notify you about specific events that you subscribe to (module loads, function JIT compilation, thread creation, GC events), while ICorProfilerInfo can be used by your profiler to obtain additional information about threads, modules, types, methods, and metadata for the loaded assemblies. This interface is what you could use to obtain symbol information about the types allocated.
With your profiler in-process, you can force a GC through ICorProfilerInfo::ForceGC. After the GC completes, your profiler will get notified via ICorProfilerCallback2::GarbageCollectionFinished, and you will get the root references via ICorProfilerCallback2::RootReferences2. When you combine the root reference information with ICorProfilerCallback::ObjectReferences, you can get the complete object reference graph for your .NET application.
You can get more realtime information by using the ICorProfilerCallback::ObjectAllocated callback to determine when individual CLR objects get created. This can be expensive, though, since you're incurring at least an additional function call for each allocated object. You can track individual objects by mapping the CLR-assigned ObjectID to your own internal ID. An ObjectID for a given object is an ephemeral pointer since it can change as garbage collections happen, which can cause objects to move during compaction. This process is illustrated here. You can use the information from ICorProfilerCallback::MovedReferences to track moving objects.
In order to activate the callbacks mentioned above, you need to tell the CLR profiling API that you're interested in them. You can do this by specifying COR_PRF_MONITOR_GC and COR_PRF_MONITOR_OBJECT_ALLOCATED as part of your event flags when calling ICorProfilerInfo::SetEventMask.
David Broman is the developer on the CLR profiler, and his blog has tons of great information on profiling in general, including all the crazy pitfalls and issues you might run into.
Profilers like ANTS use a "profiling API" exposed by the CLR itself, which quite simply can tell you what goes on inside the CLR. For instance, there is an API callback method that occurs when an object is allocated, aptly named ObjectAllocated(). Likewise there are events for when methods are entered, when threads are created, and so on.
The original profiling API is called ICorProfilerCallback. Later versions are called ICorProfilerCallback2 and ICorProfilerCallback3. If you google those names you'll find exactly the answers you're looking for. On CodeProject you can see a practical example: Creating a Custom .NET Profiler
A final note: The API cannot be used from managed code like C# and VB.NET. It's only available from unmanaged code like e.g. C or C++. So a C# app cannot use this API to examine its own behavior and objects, for instance.

Why doesn't .NET have a SoftReference as well as a WeakReference, like Java?

I really love WeakReferences. But I wish there were a way to tell the CLR how weak (say, on a scale of 1 to 5) you consider the reference to be. That would be brilliant.
Java has SoftReference, WeakReference and I believe also a third type called a "phantom reference". That's three levels right there, each with a different GC behaviour when deciding whether that object gets the chop.
I am thinking of subclassing .NET's WeakReference (luckily and slightly bizarrely it isn't sealed) to make a pseudo-SoftReference that is based on an expiration timer or something.
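A rough sketch of what I have in mind; all the names are made up, and the timer that would call Expire() periodically is left out:

using System;

// Keeps a strong reference for a grace period after each access,
// then falls back to a plain WeakReference.
public class SoftishReference<T> where T : class
{
    private readonly WeakReference _weak;
    private readonly TimeSpan _gracePeriod;
    private T _strong;                 // keeps the target alive while "fresh"
    private DateTime _lastAccessUtc;

    public SoftishReference(T target, TimeSpan gracePeriod)
    {
        _weak = new WeakReference(target);
        _strong = target;
        _gracePeriod = gracePeriod;
        _lastAccessUtc = DateTime.UtcNow;
    }

    public T Target
    {
        get
        {
            T target = _strong ?? (T)_weak.Target;
            if (target != null)
            {
                _strong = target;               // refresh the strong reference
                _lastAccessUtc = DateTime.UtcNow;
            }
            return target;
        }
    }

    // Call periodically (e.g. from a timer) to drop the strong reference
    // once the grace period has expired, letting the GC reclaim the target.
    public void Expire()
    {
        if (DateTime.UtcNow - _lastAccessUtc > _gracePeriod)
            _strong = null;
    }
}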
I believe the fundamental reason that .NET does not have soft references is that it can rely on an operating system with virtual memory. A Java process must specify its maximum OS memory (e.g. with -Xmx128M), and it never takes more OS memory than that. Whereas a .NET process keeps taking the OS memory that it needs, which the OS supplies with disk-backed virtual memory when RAM runs out. If .NET allowed soft references, then the .NET runtime would not know when to release them unless it either peeked deep into the OS to see if its memory is actually paged to disk (a nasty OS/CLR dependency), or it required a maximum process memory footprint to be specified (e.g. an equivalent of -Xmx). I guess that Microsoft does not want to add -Xmx to .NET because they think the OS should decide how much RAM each process gets (by choosing which virtual memory pages to hold in RAM or on disk), and not the process itself.
Java SoftReferences are used in the creation of memory sensitive caches (they serve no other purpose).
As of .NET 4, .NET has a class System.Runtime.Caching.MemoryCache which will probably meet any such needs.
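A minimal example of using it as a memory-sensitive cache (the key, value and sliding expiration are arbitrary):

using System;
using System.Runtime.Caching;

class MemoryCacheDemo
{
    static void Main()
    {
        // MemoryCache.Default trims entries under memory pressure,
        // which covers the main use case SoftReference serves in Java.
        ObjectCache cache = MemoryCache.Default;

        cache.Set("answer", 42, new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromMinutes(10) // optional extra policy
        });

        object value = cache.Get("answer"); // null if it has been evicted
        Console.WriteLine(value ?? "(evicted)");
    }
}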
Having a WeakReference with varying levels of weakness (priority) sounds nice, but also might make the GC's job harder, not easier. (I've no idea on the GC internals, but) I would assume there is some sort of additional access statistics kept for WeakReference objects so that the GC can clean them up efficiently (e.g. it might get rid of the least-used items first).
More than likely the added complexity wouldn't make anything any more efficient because the most efficient way is to get rid of infrequently used WeakReferences first. If you could assign a priority, how would you do it? This smells like a premature optimization: the programmer doesn't really know most of the time and is guessing; the result is a slower GC collection cycle that is probably reclaiming the wrong objects.
It does raise the question, though: if you care about the WeakReference.Target object being reclaimed, is it really a good use of WeakReference?
It's like a cache. You shove stuff into the cache and ask the cache to make it stale after x minutes, but most caches never guarantee to keep it around at all. It just guarantees that if it does, it will expire it according to the policy requested.
My guess as to why this isn't there already would be simplicity. Most people, I think, would call it a virtue that there is only one type of reference, not four.
Maybe the ASP.NET Cache class (System.Web.Caching.Cache) might help achieve what you want? It automatically removes objects if memory gets low:
ASP.NET Caching Overview
Here's an article that shows how to use the Cache class in a windows forms application.
quoted from: Equivalent to SoftReference in .net?
Don't forget that you also have your standard references (the ones that you use on a daily basis). This gives you one more level.
WeakReferences should be used when you don't really care if the object goes away, while SoftReferences really should only be used when you would use a normal reference, but you would rather have your object cleared than run out of memory. I'm not sure on the specifics, but I suspect that the GC normally traces through SoftReferences but not WeakReferences when determining which objects are live, but when running low on memory will also skip the SoftReferences.
My guess is that the .NET designers felt that the difference was confusing to most people and/or that SoftReferences add more complexity than they really wanted, and so decided to leave them out.
As a side note, AFAIK PhantomReferences are mostly designed for internal use by the virtual machine and are not intended for actual client use.
Maybe there should be a property where you can specify the minimum generation an object must reach before it becomes eligible for collection. If you specify 1, it is the weakest possible reference; if you specify 3, the object would need to survive at least 3 prior collections before it can be considered for collection itself.
I thought the trackResurrection flag was no good for this because by that time the object has already been finalized? May be wrong though...
(PS: I am the OP, just signed up. PITA that it doesn't inherit your history from "unregistered" accounts.)
Looking for the 'trackResurrection' option passed to the constructor perhaps?
The GC class also offers some assistance.
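For completeness, a small sketch of the difference that flag makes (the difference only matters for types with finalizers):

using System;

class WeakReferenceKinds
{
    static void Main()
    {
        var target = new object();

        // "Short" weak reference: Target becomes null as soon as the
        // object is collected.
        var shortRef = new WeakReference(target, trackResurrection: false);

        // "Long" weak reference: Target stays valid while the object is
        // waiting for (or has been resurrected by) its finalizer.
        var longRef = new WeakReference(target, trackResurrection: true);

        Console.WriteLine("{0} {1}", shortRef.IsAlive, longRef.IsAlive);
    }
}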
Don't know why .NET does not have SoftReferences.
BUT in Java, SoftReferences are IMHO overused. The reason is that, at least in an application server, you would want to be able to influence per application how long your SoftReferences live. That's currently not possible in Java.

What strategies and tools are useful for finding memory leaks in .NET?

I wrote C++ for 10 years. I encountered memory problems, but they could be fixed with a reasonable amount of effort.
For the last couple of years I've been writing C#. I find I still get lots of memory problems. They're difficult to diagnose and fix due to the non-determinism, and because the C# philosophy is that you shouldn't have to worry about such things when you very definitely do.
One particular problem I find is that I have to explicitly dispose and cleanup everything in code. If I don't, then the memory profilers don't really help because there is so much chaff floating about you can't find a leak within all the data they're trying to show you. I wonder if I've got the wrong idea, or if the tool I've got isn't the best.
What kind of strategies and tools are useful for tackling memory leaks in .NET?
I use Scitech's MemProfiler when I suspect a memory leak.
So far, I have found it to be very reliable and powerful. It has saved my bacon on at least one occasion.
The GC works very well in .NET IMO, but just like any other language or platform, if you write bad code, bad things happen.
Just for the forgetting-to-dispose problem, try the solution described in this blog post. Here's the essence:
public void Dispose()
{
    // Dispose logic here ...

    // It's a bad error if someone forgets to call Dispose,
    // so in Debug builds we put a finalizer in to detect
    // the error. If Dispose is called, we suppress the
    // finalizer.
#if DEBUG
    GC.SuppressFinalize(this);
#endif
}

#if DEBUG
~TimedLock()
{
    // If this finalizer runs, someone somewhere failed to
    // call Dispose, which means we've failed to leave
    // a monitor!
    System.Diagnostics.Debug.Fail("Undisposed lock");
}
#endif
We've used Ants Profiler Pro by Red Gate software in our project. It works really well for all .NET language-based applications.
We found that the .NET Garbage Collector is very "safe" in its cleaning up of in-memory objects (as it should be). It would keep objects around just because we might be using them sometime in the future. This meant we needed to be more careful about the number of objects that we inflated in memory. In the end, we converted all of our data objects over to "inflate on demand" (just before a field is requested) in order to reduce memory overhead and increase performance.
EDIT: Here's a further explanation of what I mean by "inflate on demand." In our object model of our database we use Properties of a parent object to expose the child object(s). For example if we had some record that referenced some other "detail" or "lookup" record on a one-to-one basis we would structure it like this:
Class ParentObject
    Private mRelatedObject As New CRelatedObject
    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            mRelatedObject.getWithID(RelatedObjectID)
            Return mRelatedObject
        End Get
    End Property
End Class
We found that the above system created some real memory and performance problems when there were a lot of records in memory. So we switched over to a system where objects were inflated only when they were requested, and database calls were done only when necessary:
Class ParentObject
    Private mRelatedObject As CRelatedObject
    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            If mRelatedObject Is Nothing Then
                mRelatedObject = New CRelatedObject
            End If
            If mRelatedObject.isEmptyObject Then
                mRelatedObject.getWithID(RelatedObjectID)
            End If
            Return mRelatedObject
        End Get
    End Property
End Class
This turned out to be much more efficient because objects were kept out of memory until they were needed (the Get method was accessed). It provided a very large performance boost in limiting database hits and a huge gain on memory space.
You still need to worry about memory when you are writing managed code unless your application is trivial. I will suggest two things: first, read CLR via C# because it will help you understand memory management in .NET. Second, learn to use a tool like CLRProfiler (Microsoft). This can give you an idea of what is causing your memory leak (e.g. you can take a look at your large object heap fragmentation)
Are you using unmanaged code? If you are not using unmanaged code, according to Microsoft, memory leaks in the traditional sense are not possible.
Memory used by an application may not be released however, so an application's memory allocation may grow throughout the life of the application.
From How to identify memory leaks in the common language runtime at Microsoft.com
A memory leak can occur in a .NET Framework application when you use unmanaged code as part of the application. This unmanaged code can leak memory, and the .NET Framework runtime cannot address that problem.
Additionally, a project may only appear to have a memory leak. This condition can occur if many large objects (such as DataTable objects) are declared and then added to a collection (such as a DataSet). The resources that these objects own may never be released, and the resources are left alive for the whole run of the program. This appears to be a leak, but actually it is just a symptom of the way that memory is being allocated in the program.
For dealing with this type of issue, you can implement IDisposable. If you want to see some of the strategies for dealing with memory management, I would suggest searching for IDisposable, XNA, memory management as game developers need to have more predictable garbage collection and so must force the GC to do its thing.
One common mistake is to not remove event handlers that subscribe to an object. An event handler subscription will prevent an object from being recycled. Also, take a look at the using statement which allows you to create a limited scope for a resource's lifetime.
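A small example of the event-handler pitfall and its fix (Publisher and Subscriber are made-up names):

using System;

class Publisher
{
    public event EventHandler SomethingHappened;
}

class Subscriber : IDisposable
{
    private readonly Publisher _publisher;

    public Subscriber(Publisher publisher)
    {
        _publisher = publisher;
        _publisher.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { /* ... */ }

    public void Dispose()
    {
        // Without this, the publisher's event keeps the subscriber alive
        // for as long as the publisher itself is reachable.
        _publisher.SomethingHappened -= OnSomethingHappened;
    }
}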
This blog has some really wonderful walkthroughs using windbg and other tools to track down memory leaks of all types. Excellent reading to develop your skills.
I just had a memory leak in a windows service, that I fixed.
First, I tried MemProfiler. I found it really hard to use and not at all user friendly.
Then, I used JustTrace which is easier to use and gives you more details about the objects that are not disposed correctly.
It allowed me to solve the memory leak really easily.
If the leaks you are observing are due to a runaway cache implementation, this is a scenario where you might want to consider the use of WeakReference. This could help to ensure that memory is released when necessary.
However, IMHO it would be better to consider a bespoke solution - only you really know how long you need to keep the objects around, so designing appropriate housekeeping code for your situation is usually the best approach.
I prefer dotMemory from JetBrains.
Big guns - Debugging Tools for Windows
This is an amazing collection of tools. You can analyze both managed and unmanaged heaps with it, and you can do it offline. This was very handy for debugging one of our ASP.NET applications that kept recycling due to memory overuse. I only had to create a full memory dump of the live process running on the production server; all analysis was done offline in WinDbg. (It turned out some developer was overusing in-memory Session storage.)
"If broken it is..." blog has very useful articles on the subject.
After one of my fixes for a managed application, I faced the same question: how do I verify that my application will not have the same memory leak after my next change? So I wrote something like an Object Release Verification framework; please take a look at the NuGet package ObjectReleaseVerification. You can find a sample here https://github.com/outcoldman/OutcoldSolutions-ObjectReleaseVerification-Sample, and information about this sample at http://outcoldman.com/en/blog/show/322
The best thing to keep in mind is to keep track of the references to your objects. It is very easy to end up with hanging references to objects that you don't care about anymore.
If you are not going to use something anymore, get rid of it.
Get used to using a cache provider with sliding expirations, so that if something isn't referenced for a desired time window it is dereferenced and cleaned up. But if it is being accessed a lot it will stay in memory.
One of the best tools is using the Debugging Tools for Windows, and taking a memory dump of the process using adplus, then use windbg and the sos plugin to analyze the process memory, threads, and call stacks.
You can use this method for identifying problems on servers too, after installing the tools, share the directory, then connect to the share from the server using (net use) and either take a crash or hang dump of the process.
Then analyze offline.
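A typical offline session looks roughly like this (standard Debugging Tools for Windows commands; use "sos mscorwks" instead of "sos clr" on .NET 2.0/3.5):

adplus -hang -p <pid> -o c:\dumps      take a hang dump of the running process

Then, inside WinDbg with the dump open:

.loadby sos clr                        load the SOS extension
!dumpheap -stat                        which types dominate the managed heap
!dumpheap -mt <methodtable>            list instances of one suspicious type
!gcroot <address>                      why a particular instance is still alive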
From Visual Studio 2015 onward, consider using the out-of-the-box Memory Usage diagnostic tool to collect and analyze memory usage data.
The Memory Usage tool lets you take one or more snapshots of the managed and native memory heap to help understand the memory usage impact of object types.
One of the best tools I have used is dotMemory. You can use it as an extension in Visual Studio. After running your app, you can analyze every part of the memory it uses (by object, namespace, etc.), take snapshots of it, and compare them with other snapshots.
DotMemory
