Visual Studio 2017 - Diagnostic tool - Heap profiling affects program memory consumption - C#

I am trying to debug a strange memory leak in a C# application (which uses C++/CLI and C++) using the Diagnostic tools and memory usage snapshots, but I have discovered one strange problem.
When I run a debug session in VS2017 with heap profiling turned on, memory consumption is constant and the program runs as expected. When heap profiling is turned off, the program leaks memory, and the leak grows linearly. The work completed is the same: progress is printed to the console, and I am sure both runs did the same work, yet one uses constant memory and the other uses linearly increasing memory (about 2x the memory for the same amount of work). Visually, it looks as if a GC triggered with heap profiling enabled releases some memory, while no memory is released when heap profiling is off.
Does anyone have an idea how heap profiling could affect this? The leaked memory is native.
[EDIT1] Data from Performance Profiler -> Memory usage
Object Type                          Reference                Count      Module
shared_ptr_cli<GeoAtomAttributes>                                        TestBackEnd64.dll
shared_ptr_cli<GeoAtomAttributes>    [Finalization Handle]    856,275    TestBackEnd64.dll
shared_ptr_cli<GeoAtomAttributes>    [Local Variable]         1          TestBackEnd64.dll
GeoAtomAttributesCli                 [Local Variable]         1          TestBackEnd64.dll

Memory that can be released by the GC should not be considered leaked memory; it should be considered memory that is eligible for garbage collection, because the next time a GC runs this memory will be collected and become available for new allocations (a quick way to verify this is sketched below).
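A managed-heap sanity check along these lines (just a sketch using GC.GetTotalMemory, not a full diagnosis) is to force a full collection and see how much of the growth disappears:

using System;

long before = GC.GetTotalMemory(forceFullCollection: false);
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
long after = GC.GetTotalMemory(forceFullCollection: true);
Console.WriteLine($"Managed heap: {before:N0} -> {after:N0} bytes");

If the number drops sharply, the growth was memory waiting to be collected rather than a true leak; if it doesn't drop, look elsewhere (for example the native heap, as in this question).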
Other thoughts:
The GC runs on the managed heap; native libraries allocate memory on the native heap, so the GC cannot affect the memory management of native libraries. But you should be aware of the following cases (this might not be your case, though).
If you pass pinned data structures to native code and free those handles in your Object.Finalize method (of the wrapper class), the pinned memory can only be collected once the wrapper class has been queued for finalization. Calling cleanup functions(*) of native code in the Finalize method of a managed class can cause similar issues. I think these are bad practices and should be avoided; instead, such cleanup should be done as soon as possible (see the sketch below).
(*) This case might cause your total process memory consumption to bloat even when there is no need for a GC on the managed heap.
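For illustration, here is a minimal sketch of deterministic cleanup for such a wrapper. The NativeLib entry points are hypothetical (they are not part of the code in the question); the point is only that Dispose releases the pin and the native resource immediately, and the finalizer remains just a safety net:

using System;
using System.Runtime.InteropServices;

// Hypothetical P/Invoke surface - these entry points are illustrative only.
static class NativeLib
{
    [DllImport("TestBackEnd64.dll")] public static extern IntPtr Create(IntPtr data, int length);
    [DllImport("TestBackEnd64.dll")] public static extern void Free(IntPtr handle);
}

sealed class NativeBufferWrapper : IDisposable
{
    private GCHandle _pin;          // pins the managed buffer handed to native code
    private IntPtr _nativeHandle;   // hypothetical native resource

    public NativeBufferWrapper(byte[] buffer)
    {
        _pin = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        _nativeHandle = NativeLib.Create(_pin.AddrOfPinnedObject(), buffer.Length);
    }

    public void Dispose()
    {
        ReleaseResources();
        GC.SuppressFinalize(this);  // deterministic cleanup done, no finalization needed
    }

    ~NativeBufferWrapper() => ReleaseResources();  // safety net only, not the primary path

    private void ReleaseResources()
    {
        if (_nativeHandle != IntPtr.Zero)
        {
            NativeLib.Free(_nativeHandle);
            _nativeHandle = IntPtr.Zero;
        }
        if (_pin.IsAllocated)
            _pin.Free();            // unpin as soon as possible, not only in the finalizer
    }
}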

Related

"Unmanaged memory" at profiler diagram. Is this a memory leak indication?

I came across this diagram while profiling the memory usage of my application:
As you can see, before the "snapshot 1" line, unmanaged memory holds approximately half of the total used memory. Then, 2 min 55 s after "snapshot 1" (see the timeline below), I forced a garbage collection.
As I expected, generation 2 was mostly collected, but the unmanaged memory was not released, and it now holds approx. 2/3 of the total used memory.
I have no idea what "unmanaged memory" means in this context.
This is a WPF application with some WinForms/GDI+ interop. I'm sure that everything that should be disposed is disposed. Also, there's no explicit platform interop code. The rest of the managed memory is OK.
Is this a memory leak indication?
If so, what is the way to detect memory leak here?
If it matters, the profiler I'm using is JetBrains dotMemory.
"Total used" memory on dotMemory chart it's the private working set of process. It's memory that the process executable has asked for - not necessarily the amount it is actually using. It includes all your DLLs and heaps but not includes memory-mapped files (shared DLLs). Moreover there is no way to tell whether it belongs to executable itself, or to a linked library. It's not exclusively physical memory; they can be paged to disk or in the standby page list (i.e. no longer in use, but not paged yet either).
So, unmanaged memory is everything in private working set except managed CLR heaps. Usually you have no easy ways to change amount of unmanaged memory for pure .net process. And it's approximately constant during execution of program.
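If you want a rough split for your own process, something like the following sketch can help; it compares the managed heap with the process's private bytes (not exactly the private working set dotMemory charts, but close enough for a sanity check):

using System;
using System.Diagnostics;

long managedHeap = GC.GetTotalMemory(forceFullCollection: false);
long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;
Console.WriteLine($"Managed heap : {managedHeap:N0} bytes");
Console.WriteLine($"Private bytes: {privateBytes:N0} bytes");
Console.WriteLine($"~Unmanaged   : {privateBytes - managedHeap:N0} bytes");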

Managed heap OutOfMemory

EDIT: I reformulated it to be a question and moved the answer to the answers part...
In a relatively complex multithreaded .NET application I experienced OutOfMemoryException even in cases where I could see no reason for it.
The situation:
The application is 32bit.
The application creates lots of (thousands of) short-lived objects that are considered small (less than approx. 85 kB).
Additionally, it creates some (hundreds of) short-lived objects that are considered large (greater than approx. 85 kB). This implies these objects are allocated on the LOH (large object heap).
Both classes for these objects define a finalizer (~MyFinalizer() {...}).
The symptoms:
OutOfMemoryException
Looking at the app via a memory profiler, there are thousands of the small objects eligible for collection, but not collected, and thus blocking a large amount of memory.
The questions:
Why does the app exhaust the entire heap?
Why are there lots of "dead" objects still present in memory?
After some deep investigation I found the reason. As it took some time, I would like to make it easy for others suffering from the same problem.
The reasons:
The app has only approx. 2 GB of virtual address space.
The LOH is by design not compacted and thus can get fragmented very quickly, but for the mentioned count of large objects that should not be a problem on its own.
Due to the design of the garbage collector, if an object defines a finalizer (even an empty one), it is placed on the GC's finalization queue. When it becomes unreachable it is not collected immediately: it survives the collection (and gets promoted) until its finalizer (MyFinalizer) has actually run, and until then it just blocks memory. In the case of the mentioned app, the GC thread running the finalizers didn't get the chance to do its work as quickly as needed, and thus the heap was exhausted.
The Solution:
Do not use finalizers for such "dynamic" objects (high volume, short lifetime); work around the finalization in some other way (see the sketch below)...
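As a sketch of the workaround (the type and payload below are made up for illustration): implement IDisposable, clean up eagerly, and call GC.SuppressFinalize so the high-volume, short-lived instances never wait on the finalizer thread:

using System;

sealed class ShortLivedItem : IDisposable
{
    private byte[] _payload = new byte[100_000];   // large enough to land on the LOH

    public void Dispose()
    {
        _payload = null;
        GC.SuppressFinalize(this);   // tell the GC the finalizer no longer needs to run
    }

    // Safety net only; it should never run on the hot path.
    ~ShortLivedItem() { }
}

static class Demo
{
    static void Main()
    {
        // 'using' guarantees Dispose runs as soon as the object is no longer needed,
        // so the instance is never promoted just to wait for the finalizer thread.
        using (var item = new ShortLivedItem())
        {
            // ... work with the item ...
        }
    }
}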
Very useful sources:
The Dangers of the Large Object Heap
Garbage Collection
Will the Garbage Collector call Dispose for me?
Try using a profiler, such as:
ANTS Memory Profiler
ANTS Performance Profiler
Jetbrains Performance Profiler
For the LOH, force a GC with:
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect(); // collect the objects whose finalizers have just run

Garbage collection runs too late - causes OutOfMemory exceptions

Was wondering if anyone could shed some light on this.
I have an application which has a large memory footprint (& memory churn). There aren't any memory leaks and GCs tend to do a good job of freeing up resources.
Occasionally, however, a GC does not happen 'on time', causing an out of memory exception. I was wondering if anyone could shed any light on this?
I've used the REDGate profiler, which is very good - the application has a typical 'sawtooth' pattern - the OOMs happen at the top of the sawtooth. Unfortunately the profiler can't be used (AFAIK) to identify sources of memory churn.
Is it possible to set a memory 'soft limit', at which a GC should be forced? At the moment, a GC is only performed when the memory is at its absolute limit, resulting in OOMs.
It shouldn't really be possible for a garbage collection 'not to happen in time'. Collections happen when a new memory allocation would push Gen 0 past a certain limit, so they always happen before an allocation would push memory past its limit. This happens so many times a day throughout the world that any bugs here would be well known.
Have you considered that you might actually be allocating more memory than is available? The OS only lets you access 2GB on most 32-bit machines.
There are some other possibilities:
Is your application using unmanaged memory?
Is your application pinning any memory? If so, that could cause a fragmentation issue, especially if you aren't releasing the pins (a short sketch of scoping a pin tightly follows below).
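If pins are involved, a pattern like the following (the buffer and the native call are hypothetical) keeps the pin as short-lived as possible:

using System;
using System.Runtime.InteropServices;

byte[] buffer = new byte[4096];
GCHandle pin = GCHandle.Alloc(buffer, GCHandleType.Pinned);
try
{
    IntPtr address = pin.AddrOfPinnedObject();
    // ... hand 'address' to native code here (hypothetical call) ...
}
finally
{
    pin.Free();   // unpin immediately so the GC is free to move the buffer again
}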
If you use a lot of memory and you garbage collect a lot, I guess you should consider the "Flyweight" design pattern.
As an example, if you garbage collect a lot of strings, see String.Intern(string s).
MSDN reference
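A tiny illustration of the idea (assuming repeated string values parsed from input): interning collapses logically identical strings into one shared instance:

using System;

string a = new string("category".ToCharArray());   // a fresh, non-interned instance
string b = new string("category".ToCharArray());   // another fresh instance
Console.WriteLine(object.ReferenceEquals(a, b));                                // False - two separate objects
Console.WriteLine(object.ReferenceEquals(string.Intern(a), string.Intern(b)));  // True  - one shared instance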
You can use GC.Collect() to force the garbage collector to do its work, but it is not recommended.
Use memory profilers (like MemProfiler) to detect the leaks. Almost all code leaks at some point.

What causes memory fragmentation in .NET

I am using Red Gate's ANTS memory profiler to debug a memory leak. It keeps warning me that:
Memory fragmentation may be causing .NET to reserve too much free memory.
or
Memory Fragmentation is affecting the size of the largest object that can be allocated
Because I have OCD, this problem must be resolved.
What are some standard coding practices that help avoid memory fragmentation?
Can you defragment it through some .NET methods? Would it even help?
You know, I somewhat doubt the memory profiler here. The memory management system in .NET actually tries to defragment the heap for you by moving memory around (that's why you need to pin memory for it to be shared with an external DLL).
Large memory allocations held over longer periods of time are prone to more fragmentation, while small, ephemeral (short-lived) memory requests are unlikely to cause fragmentation in .NET.
Here's also something worth thinking about. With the current .NET GC, memory allocated close together in time is typically placed close together in space, which is the opposite of fragmentation; i.e. you should allocate memory the way you intend to access it.
Is it managed code only, or does it contain stuff like P/Invoke, unmanaged memory (Marshal.AllocHGlobal) or something like GCHandle.Alloc(obj, GCHandleType.Pinned)?
The GC heap treats large object allocations differently. It doesn't compact them, but instead just combines adjacent free blocks (like a traditional unmanaged memory store).
More info here: http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
So the best strategy with very large objects is to allocate them once and then hold on to them and reuse them (see the sketch below).
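One way to follow that advice, sketched below with System.Buffers.ArrayPool (available in .NET Core and via NuGet for .NET Framework), is to rent and return large buffers instead of allocating fresh LOH arrays each time:

using System;
using System.Buffers;

// Rent a large buffer from the shared pool instead of allocating a new LOH array each time;
// returning it lets the next caller reuse the same memory, which keeps the LOH from fragmenting.
byte[] buffer = ArrayPool<byte>.Shared.Rent(1_000_000);
try
{
    // ... fill and process the buffer; note it may be larger than the requested size ...
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer);
}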
The .NET Framework 4.5.1 has the ability to explicitly compact the large object heap (LOH) during garbage collection.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
See more info in GCSettings.LargeObjectHeapCompactionMode

Why do I appear to have a memory leak when using "Server" garbage collection?

Here's my situation. I want an explanation of why this is happening. I am reading about GC here but I still don't get it.
Workstation case: When I run with workstation garbage collection, my app ramps up to use about 180MB of private bytes and about 70MB in ".NET CLR Memory #bytes in all heaps". Memory continues to be stable for several hours. Life is good.
Server case: When I run with server garbage collection my app ramps up to use about 500MB of private bytes but still only about 70MB in ".NET CLR Memory #bytes in all heaps". An analysis of the !DumpHeap -stat output and !GCRoot shows a lot of objects without roots. Also, my private bytes increase significantly over several hours but the .NET bytes remain constant. My app DOES use a lot of unmanaged code so I'm thinking this is related given the difference in private and .NET bytes. But why is my life so bad in the server case?
Any GC wisdom or guidance on further investigation?
Thanks!
"Server garbage collection" is designed for high-throughput applications, primarily on clustered servers.
A server GC is expensive, and suspends running threads while it happens. As such, it takes a lot more memory pressure before it actually triggers - if you've still got spare memory, don't be surprised if the garbage collector doesn't feel the need to go through and clean up yet.
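If you are unsure which mode a process actually ended up in (server GC is normally enabled via <gcServer enabled="true"/> in the config file), a quick runtime check is:

using System;
using System.Runtime;

Console.WriteLine($"Server GC    : {GCSettings.IsServerGC}");
Console.WriteLine($"Latency mode : {GCSettings.LatencyMode}");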
