How to analyse unmanaged memory of a .NET process from a dump - C#

I have a problem at a customer site: our .NET Core application grows to over 11 GB of memory (it will keep growing and eventually crash with an OutOfMemoryException).
We created a dump with procdump before we restarted the process.
I used MemoryAnalyser to find that most of the memory used by our process is unmanaged memory.
How can I now analyse the unmanaged memory, to find what is allocated and taking so much space?
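For reference, a minimal sketch (assuming you can add diagnostic logging to the running service; this is not the full dump analysis) that compares the managed heap size with the process's private bytes - the gap approximates the unmanaged share:

    using System;
    using System.Diagnostics;

    class MemoryBreakdown
    {
        static void Main()
        {
            // Size of the managed (GC) heaps, without forcing a collection.
            long managedBytes = GC.GetTotalMemory(forceFullCollection: false);

            // Private memory committed by the whole process (managed + unmanaged).
            using (Process self = Process.GetCurrentProcess())
            {
                long privateBytes = self.PrivateMemorySize64;

                Console.WriteLine($"Managed heap:        {managedBytes / (1024 * 1024)} MB");
                Console.WriteLine($"Private bytes:       {privateBytes / (1024 * 1024)} MB");
                Console.WriteLine($"Unmanaged (approx.): {(privateBytes - managedBytes) / (1024 * 1024)} MB");
            }
        }
    }

In the dump itself, WinDbg's !address -summary (committed memory by type) and SOS's !eeheap -gc (managed heap sizes) should give a similar breakdown.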

Related

Does VS's Diagnostic Tools measure the total Process Memory or the current Process Memory

Do the Visual Studio Diagnostic Tools measure the total memory used by an application's threads, or the memory currently in use?
I've got an application that reads data in from a 34 megapixel camera at 3 frames per second. The math works out to it processing 288 MB per second, or about 17 GB per minute. With this in mind, the application obviously consumes a lot of data once it starts collecting camera frames. I have had my eyes glued to the Diagnostic Tools for a bit, for reasons you can see below:
I've let the application run with the performance profiler for about 3 minutes, and it ends up reporting a total process memory of about 31 GB, as you can see below:
My laptop only has 16 GB of RAM, so at first glance I think the picture above basically answers my question. However, just past the 2:30min mark you can see a sharp decline in the Memory, which doesn't make sense (I don't believe anything changed in how the program was running). Also, when I opened up the task manager I could see my application was using about 9 GB of memory, prior to shooting down to about 3 GB of memory around the 2:30min mark.
With all of that in mind, what is the Process Memory really measuring?
Because the solution was buried in the comments of the accepted answer, I'll summarize here the solution to my overall problem of why my program was using so much memory. It came down to a bug where I was not disposing of my Bitmaps. Bitmaps hold unmanaged memory; if you fail to dispose of them before they go out of scope, they keep living in memory (outside of your garbage collector) until a threshold in the number of unmanaged objects is reached. Only when that threshold is hit do the unmanaged objects get deleted. That is what was happening in my program when it dipped from ~31 GB of memory to about 5 GB of memory.
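A minimal sketch of the fix described above (BuildBitmap and Analyze are hypothetical stand-ins for the actual camera-frame code):

    using System.Drawing;

    class FrameProcessor
    {
        // Wrapping each frame's Bitmap in a using block releases the unmanaged
        // GDI+ memory deterministically instead of waiting for a finalizer.
        public void ProcessFrame(byte[] frameData)
        {
            using (Bitmap frame = BuildBitmap(frameData))   // hypothetical helper that decodes the frame
            {
                Analyze(frame);                             // hypothetical per-frame processing
            }   // frame.Dispose() runs here, even if Analyze throws
        }

        private Bitmap BuildBitmap(byte[] frameData)
        {
            // Placeholder: a real implementation would decode frameData.
            return new Bitmap(640, 480);
        }

        private void Analyze(Bitmap frame)
        {
            // Placeholder: inspect pixels here.
        }
    }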
However, just past the 2:30min mark you can see a sharp decline in the Memory, which doesn't make sense (I don't believe anything changed in how the program was running).
Garbage collection is a complicated process that can affect your application's performance, so the GC has been optimized to trigger only when memory pressure goes over a threshold.
When there is memory pressure, the garbage collection process kicks off and clears unnecessary memory allocations. This is normal garbage collection behavior.
Does VS's Diagnostic Tools measure the total Process Memory or the current Process Memory?
It measures your application's current memory usage. Here is a tutorial to help understand the VS memory tool.
The fact that it releases most of the memory after the garbage collection cycle means that there are no big memory leaks.
Redgate ANTS Memory Profiler can show more details (retention graphs etc.). Here is a video that explains memory leaks a bit more clearly: https://documentation.red-gate.com/amp10/worked-example/video-tutorials
Is it possible to limit the amount of disc space available to a C# program? I'm wondering if I could force C# to hold on to memory for a shorter period of time.
You can call GC.Collect to force an immediate garbage collection after a memory-expensive operation. However, this is not recommended at all unless there is a really good reason for it; the garbage collector uses heuristics to optimize its behavior, and you usually don't have to worry about it. One thing you must do is MAKE SURE YOU DISPOSE ALL DISPOSABLE INSTANCES before they go out of scope. That helps release memory with fewer garbage collection cycles.
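If you do decide a forced collection is justified after such a phase, a minimal sketch of the usual pattern (the second pass is needed because finalizers only release their resources after the first collection queues them):

    // Last resort only, after a known memory-heavy phase has finished:
    GC.Collect();                      // collect unreachable objects, queue finalizable ones
    GC.WaitForPendingFinalizers();     // let finalizers release their unmanaged resources
    GC.Collect();                      // reclaim the objects that were finalized above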

Visual Studio 2017 - Diagnostic tool - Heap profiling affects program memory consumption

I am trying to debug a strange memory leak in a C# application (which uses C++/CLI and C++) using the Diagnostic tool and memory usage snapshots, but I have discovered one strange problem.
When I run a debug session in VS2017 with heap profiling turned on, memory consumption is constant and the program runs as expected. When heap profiling is turned off, the program leaks memory with a linear increase. The work completed is the same - I print the progress to the console and I am sure both runs do the same work - but one uses constant memory and the other's memory grows linearly (with the same work done, it uses 2x the memory). Visually it looks like some memory gets released when the GC fires with heap profiling on, and no memory is released when heap profiling is off.
Does anyone have an idea how heap profiling could affect this? It is native memory that is leaked.
[EDIT1] Data from Performance Profiler -> Memory usage
Object Type                          Reference               Count     Module
shared_ptr_cli<GeoAtomAttributes>                                      TestBackEnd64.dll
shared_ptr_cli<GeoAtomAttributes>    [Finalization Handle]   856,275   TestBackEnd64.dll
shared_ptr_cli<GeoAtomAttributes>    [Local Variable]        1         TestBackEnd64.dll
GeoAtomAttributesCli                 [Local Variable]        1         TestBackEnd64.dll
Memory that can be released by the GC should not be considered leaked memory; it should be considered memory that is eligible for garbage collection, because the next time a GC is performed this memory will be collected and become available for new object allocations.
Other thoughts:
The GC runs on the managed heap; native libraries allocate memory on the native heap, so the GC cannot affect the memory management of native libraries. But you should be aware of the following cases (this might not be your case though).
If you pass pinned data structures to native code and free those handles in your wrapper class's Object.Finalize method, the pinned memory can only be collected once the wrapper class is queued for finalization. Calling native cleanup functions(*) in the finalizer of a managed class can cause similar problems. I think these are bad practices and should not be used; instead, these cleanups should be done as soon as possible, as sketched below.
(*) This can cause your total process memory consumption to bloat even when there is no memory pressure on the managed heap.
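A minimal sketch of that idea, using a hypothetical wrapper around a native allocation: Dispose() frees the native memory as soon as the caller is done, and the finalizer remains only a safety net:

    using System;
    using System.Runtime.InteropServices;

    // Hypothetical wrapper around a block of native memory.
    sealed class NativeBuffer : IDisposable
    {
        private IntPtr _buffer;

        public NativeBuffer(int bytes)
        {
            _buffer = Marshal.AllocHGlobal(bytes);   // native heap allocation, invisible to the GC
        }

        public void Dispose()
        {
            Free();
            GC.SuppressFinalize(this);               // already cleaned up, skip the finalization queue
        }

        ~NativeBuffer()                              // safety net only; Dispose should normally run first
        {
            Free();
        }

        private void Free()
        {
            if (_buffer != IntPtr.Zero)
            {
                Marshal.FreeHGlobal(_buffer);
                _buffer = IntPtr.Zero;
            }
        }
    }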

Besides LARGE_ADDRESS_AWARE, what other factors limit a C# process's memory consumption?

I have taken over a C# project which loads 3D models into memory, so I need a large amount of memory. My platform is 64-bit Windows 10, the C# program is 32-bit, and I develop with Visual Studio 2013. My laptop has 8 GB of memory.
Before I used editbin /largeaddressaware $(TargetPath) to add the LARGE_ADDRESS_AWARE flag to the C# program, it could only consume approximately 1 GB of memory before throwing an OutOfMemoryException; after adding the LARGE_ADDRESS_AWARE flag, it can consume approximately 1.5 GB.
I know that with LARGE_ADDRESS_AWARE, a 32-bit process running on a 64-bit platform has a memory limit of 4 GB. I have also read articles saying that because of .NET's background work and memory fragmentation, the process is not able to really allocate memory all the way up to 4 GB.
But I think 1.5 GB is still very far from 4 GB, so I want to ask: is there any other factor that limits memory usage? Thank you for your answer.
If you're trying to debug your application, it will not run with LARGEADDRESSAWARE (because vshost.exe is not flagged appropriately).
How to: Disable the Hosting Process
Also, be mindful of the GC; it won't aggressively clean up memory in these sorts of situations. So this might be one of the few situations where it would be beneficial to call
GC.Collect()
GC.WaitForPendingFinalizers()
Additional Resources
GC.Collect Method ()
GC.WaitForPendingFinalizers Method ()
Also take a look at this question if you haven't:
Can I set LARGEADDRESSAWARE from within Visual Studio?
I finally found the problem.
My C# project has some code that detects its own memory usage; when it occupies more than 1 GB, it throws an OutOfMemoryException itself. After I commented out that code, the program can reach about 3 GB of memory usage.
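For illustration, a hypothetical reconstruction of the kind of self-imposed guard described above (the actual project code was not shown):

    using System;
    using System.Diagnostics;

    static class MemoryGuard
    {
        // Hypothetical guard: the application polls its own memory usage and
        // throws well before the real 32-bit + LARGE_ADDRESS_AWARE limit.
        public static void Check()
        {
            const long OneGigabyte = 1L * 1024 * 1024 * 1024;
            if (Process.GetCurrentProcess().WorkingSet64 > OneGigabyte)
            {
                throw new OutOfMemoryException("Self-imposed 1 GB limit reached.");
            }
        }
    }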

"Unmanaged memory" at profiler diagram. Is this a memory leak indication?

I came across this diagram when profiling the memory usage of my application:
As you can see, before the "snapshot 1" line, unmanaged memory holds approximately half of the total used memory. Then, after "snapshot 1" and 2 min 55 s (see the timeline below), I forced a garbage collection.
As I expected, generation 2 was mostly collected, but the unmanaged memory was not released, and now it holds approximately 2/3 of the total used memory.
I have no idea what "unmanaged memory" means in this context.
This is a WPF application with some WinForms/GDI+ interop. I'm sure that everything that should be disposed is disposed. Also, there's no explicit platform interop code. The rest of the managed memory is OK.
Is this an indication of a memory leak?
If so, how can I track down the leak here?
In case it matters, the profiler I'm using is JetBrains dotMemory.
"Total used" memory on dotMemory chart it's the private working set of process. It's memory that the process executable has asked for - not necessarily the amount it is actually using. It includes all your DLLs and heaps but not includes memory-mapped files (shared DLLs). Moreover there is no way to tell whether it belongs to executable itself, or to a linked library. It's not exclusively physical memory; they can be paged to disk or in the standby page list (i.e. no longer in use, but not paged yet either).
So, unmanaged memory is everything in private working set except managed CLR heaps. Usually you have no easy ways to change amount of unmanaged memory for pure .net process. And it's approximately constant during execution of program.

CLR / High memory consumption after switching from 32-bit process to 64-bit process

I have a backend application (a Windows service) built on top of .NET Framework 4.5 (C#). The application runs on a Windows Server 2008 R2 server with 64 GB of memory.
Due to dependencies I had, I used to compile and run this application as a 32-bit process (compiled as x86) and use the /LARGEADDRESSAWARE flag to let the application use more than 2 GB of memory in user space. With this configuration, the average memory consumption (according to the "Memory (private working set)" column in Task Manager) was about 300-400 MB.
The reason I needed the LARGEADDRESSAWARE flag, and the reason I changed it to 64-bit, is that although 300-400 MB is the average, once in a while this app does work that involves loading a lot of data into memory (and it's much easier to develop and manage this kind of work when you're not very limited memory-wise).
Recently (after removing those x86 native dependencies), I changed the application's compilation to "Any CPU", so now, on the production server, it runs as a 64-bit process. Since I made this change, the average memory consumption (according to Task Manager) has reached new levels: 3-4 GB, with no other change that could explain this difference in behavior.
Here are some additional facts about the current state:
According to the "#Bytes in all heaps" counter, the total amount of memory is about 600MB.
When debugging the process with WinDbg+SOS, !dumpheap -stat showed that there are about 250-300MB free, but all the other object was much less than the total amount of memory the process used.
According to the GC performance counters, there are Gen0 collections on regular basis. In fact, the "% Time in GC" counter indicates that 10-20% in average of the time spent on GC (which makes sense given the nature of the application - a lot of allocations of information and data structures that are in use for short time).
I'm using Server GC in this app.
There is no memory problem on the server. It uses about 50-60% of the available memory (64GB).
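As referenced above, a minimal sketch of reading the same counters programmatically, assuming the standard ".NET CLR Memory" performance counter category on .NET Framework (the instance name may need a "#1"-style suffix if several instances of the process are running):

    using System;
    using System.Diagnostics;

    class GcCounterReport
    {
        static void Main()
        {
            // Counter instance is usually just the process name.
            string instance = Process.GetCurrentProcess().ProcessName;

            using (var heapBytes = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance, readOnly: true))
            using (var timeInGc = new PerformanceCounter(".NET CLR Memory", "% Time in GC", instance, readOnly: true))
            {
                // Rate/percentage counters may need a second NextValue() call after a short delay.
                Console.WriteLine($"# Bytes in all Heaps: {heapBytes.NextValue() / (1024 * 1024):F0} MB");
                Console.WriteLine($"% Time in GC:         {timeInGc.NextValue():F1} %");
            }
        }
    }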
My questions:
Why is there such a great difference between the memory allocated to the process (according to Task Manager) and the actual size of the CLR heap (there is no unmanaged code in the process that could explain this)?
Why does the 64-bit process take more memory than the same process running as a 32-bit process? Even considering that pointers take twice the space, there's a big difference.
Can I do something to lower the memory consumption, or at least get a better understanding of the issue?
Thanks!
There are a few things to consider:
1) You mentioned you're using Server GC mode. In Server GC mode, the CLR creates one heap for every CPU core on the machine, which is more efficient for multi-threaded processing in server processes, e.g. ASP.NET processes. Each heap has two segments: one for small objects and one for large objects. Each segment starts with 4 GB of reserved memory. Basically, Server GC mode tries to use more memory on the system in exchange for overall system performance.
2) Pointers are bigger on 64-bit, of course.
3) Foreground Gen2 GCs become very expensive in Server GC mode because the heaps are much larger, so the CLR tries very hard to reduce the number of foreground Gen2 GCs, sometimes using background Gen2 GCs instead.
4) Depending on usage, fragmentation can become a real issue. I've seen heaps with 98% fragmentation (98% of the heap is free blocks).
To really solve your problem, you need to get an ETW trace + a memory dump, and then use tools like PerfView for detailed analysis.
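Since Server GC changes the heap layout so dramatically, it can also be worth confirming at runtime which mode the process actually uses; a minimal check (GCSettings lives in System.Runtime):

    using System;
    using System.Runtime;

    class GcModeCheck
    {
        static void Main()
        {
            // On .NET Framework, Server GC is enabled with <gcServer enabled="true"/>
            // under <runtime> in the app.config.
            Console.WriteLine($"Server GC:    {GCSettings.IsServerGC}");
            Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");
        }
    }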
A 64-bit process will naturally use 64-bit pointers, effectively doubling the memory usage of every reference. Certain platform-dependent variables such as IntPtr will also take up double the space.
The first and best thing you can do is to run a memory profiler to see where exactly the extra memory footprint is coming from. Anything else is speculative!
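A trivial check that makes the pointer-size difference visible (just a sketch):

    using System;

    class PointerSizeCheck
    {
        static void Main()
        {
            // 8 bytes in a 64-bit process, 4 bytes in a 32-bit one.
            Console.WriteLine($"IntPtr.Size:    {IntPtr.Size} bytes");
            Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
        }
    }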
