What causes memory fragmentation in .NET (C#)?

I am using Red Gate's ANTS Memory Profiler to debug a memory leak. It keeps warning me that:
Memory Fragmentation may be causing .NET to reserve too much free memory.
or
Memory Fragmentation is affecting the size of the largest object that can be allocated
Because I have OCD, this problem must be resolved.
What are some standard coding practices that help avoid memory fragmentation?
Can you defragment it through some .NET methods? Would it even help?

You know, I somewhat doubt the memory profiler here. The memory management system in .NET actually tries to defragment the heap for you by moving memory around (that's why you need to pin memory for it to be shared with an external DLL).
Large memory allocations held over longer periods of time are prone to more fragmentation, while small, ephemeral (short-lived) memory requests are unlikely to cause fragmentation in .NET.
Here's also something worth thinking about: with the current .NET GC, memory allocated close together in time is typically placed close together in space, which is the opposite of fragmentation. In other words, you should allocate memory the way you intend to access it.
Is it managed code only, or does it contain things like P/Invoke, unmanaged memory (Marshal.AllocHGlobal), or GCHandle.Alloc(obj, GCHandleType.Pinned)?

The GC heap treats large object allocations differently. It doesn't compact them, but instead just combines adjacent free blocks (like a traditional unmanaged memory store).
More info here: http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
So the best strategy with very large objects is to allocate them once and then hold on to them and reuse them.
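For illustration, here is a minimal sketch of that strategy; the class, buffer size, and method are hypothetical, not from the answer above:

using System.IO;

// Sketch: one large buffer, allocated once and reused for every call.
// Not thread-safe; a real implementation would rent a buffer per caller.
static class LargeBufferHolder
{
    // A single 1 MB buffer; it lands on the LOH once and stays there,
    // instead of punching a new LOH hole on every operation.
    private static readonly byte[] Buffer = new byte[1024 * 1024];

    public static void CopyAll(Stream source, Stream destination)
    {
        int read;
        while ((read = source.Read(Buffer, 0, Buffer.Length)) > 0)
            destination.Write(Buffer, 0, read);
    }
}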

The .NET Framework 4.5.1 has the ability to explicitly compact the large object heap (LOH) during garbage collection:
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce; // requires using System.Runtime;
GC.Collect(); // the compaction happens during this blocking collection
See more info in GCSettings.LargeObjectHeapCompactionMode.

Related

Visual Studio 2017 - Diagnostic tool - Heap profiling affects program memory consumption

I am trying to debug a strange memory leak in a C# application (which uses C++/CLI and C++) using the Diagnostic tools and memory usage snapshots. But I have discovered one strange problem.
When I run debug in VS2017 with heap profiling turned on, memory consumption is constant and the program runs as expected. When heap profiling is turned off, the program leaks memory with a linear increase. The work completed is the same; I have the progress of the work printed in the console and I am sure both runs have done the same work, but one uses constant memory and the other has linearly increasing memory (2x memory used for the same work done). Visually, it looks like some memory gets released when the GC fires with heap profiling on, and no memory is released when heap profiling is off.
Does anyone have an idea how heap profiling could affect this? Native memory is leaked.
[EDIT1] Data from Performance Profiler -> Memory usage:
Object Type                                               Reference Count   Module
shared_ptr_cli<GeoAtomAttributes>                                           TestBackEnd64.dll
shared_ptr_cli<GeoAtomAttributes> [Finalization Handle]   856,275           TestBackEnd64.dll
shared_ptr_cli<GeoAtomAttributes> [Local Variable]        1                 TestBackEnd64.dll
GeoAtomAttributesCli [Local Variable]                     1                 TestBackEnd64.dll
Memory that can be released by the GC should not be considered leaked memory; it should be considered memory that is eligible for garbage collection, because the next time a GC is performed that memory will be collected and made available for new object allocations.
Other thoughts:
The GC runs on the managed heap, while native libraries allocate memory on the native heap, so it cannot affect the memory management of native libraries. But you should be aware of the following cases (this might not be your case, though).
If you pass pinned data structures to native code and free those handles in the Object.Finalize method of your wrapper class, the pinned memory can only be collected once the wrapper class is queued for finalization. Calling cleanup functions(*) of native code in the Finalize method of a managed class can cause similar situations. I think these are bad practices and should not be used; instead, these cleanups should be done deterministically, as soon as possible (see the sketch below).
(*) This case might cause your total process memory consumption to bloat even when there is no need for a GC on the managed heap.
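As a hedged illustration of that advice, here is a sketch of a wrapper that frees its pinned handle deterministically through IDisposable instead of a finalizer; the PinnedBuffer name and its members are hypothetical:

using System;
using System.Runtime.InteropServices;

sealed class PinnedBuffer : IDisposable
{
    private GCHandle _handle;

    public PinnedBuffer(byte[] data)
    {
        // Pin the array so native code can safely hold a pointer into it.
        _handle = GCHandle.Alloc(data, GCHandleType.Pinned);
    }

    public IntPtr Address
    {
        get { return _handle.AddrOfPinnedObject(); }
    }

    public void Dispose()
    {
        // Unpin as soon as the caller is done, rather than in Object.Finalize;
        // a finalizer would keep the memory pinned until finalization runs.
        if (_handle.IsAllocated)
            _handle.Free();
    }
}

Typical usage would be 'using (var buf = new PinnedBuffer(bytes)) { CallNative(buf.Address); }', where CallNative stands in for whatever P/Invoke call consumes the pointer.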

Defragmentation of the heap in C#?

I want to fully understand how the heap works in C#. I understand how the stack and the heap work, but I couldn't find any explanation of heap defragmentation (if such a thing is even possible).
I have read a lot about fragmentation problems that arise when the GC allocates and deallocates memory blocks on the heap.
So can someone explain this to me, or point me to a good article about heap (memory) defragmentation?
If you know how the heap works, I assume you also know that there are several different kinds of heaps. See my answer here - Stack vs. Heap in .NET.
The two you are asking about, out of those I mention in that answer, are the Large Object Heap (LOH) and the GC heap (also called the ephemeral heap).
Generally, you don't need to worry about heap fragmentation in .NET. The GC works in three steps: mark, sweep, compact.
Mark - scans for all rooted references and makes a list of the objects that are rooted; these are not eligible for garbage collection and will not be touched.
Sweep - clears the memory of the objects not on that list and clears the "marked" bit of the objects that were marked.
Compact - moves the memory of the remaining rooted objects so it forms a contiguous block.
One caveat to the compact phase is that the LOH is NOT compacted, at least as of the latest version, .NET 4.6.2. This was a design decision the CLR GC team made for performance reasons, given the time it would take to move all of that memory into a contiguous block. There have been many, many performance improvements since .NET 1.0, so the GC isn't the beast it used to be.
In any case, the heaps for Gen 0, 1, and 2 are compacted, so there is no need to worry about fragmentation there. For the most part, the LOH survives without fragmentation problems thanks to the algorithm it implements, but there are cases where you can get fragmentation on the LOH. It can be caused by several things - some of which are bad allocation patterns, frequent full GC collections, etc. - and can be combated by improving allocation patterns, allocating large chunks of memory as close together (programmatically) as possible, and object pooling (a sketch follows below).
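One way to apply that pooling advice is ArrayPool<T> from System.Buffers (built into .NET Core and later; available as the System.Buffers NuGet package for .NET Framework). A minimal sketch, with the buffer size and method name chosen purely for illustration:

using System;
using System.Buffers;

static class PooledWork
{
    public static void Process()
    {
        // Rent may return an array larger than requested; use only the first
        // 100,000 bytes. Renting avoids a fresh large allocation per call.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(100000);
        try
        {
            // ... fill and consume buffer[0..100000) ...
        }
        finally
        {
            // Returning the array lets the next caller reuse it, so the LOH
            // is not churned by repeated large allocations.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}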
As of .NET 4.5.1, there is a way to compact the LOH manually, though I strongly recommend against it, because it is a huge performance hit for your app, for two reasons:
it is time consuming
it clears the allocation-pattern data the GC has gathered over the lifetime of your app. While your app is running, the GC actually tunes itself by learning how your app allocates memory, and so it becomes more efficient (to a certain point) the longer your app runs. When you execute GC.Collect() (or any overload of it), it clears all of the data the GC has learned, so it must start over. You can read more about how to manually compact the LOH here: https://blogs.msdn.microsoft.com/mariohewardt/2013/06/26/no-more-memory-fragmentation-on-the-net-large-object-heap/ (again, I recommend against it)
Info about GC mark, sweep, compact - https://blogs.msdn.microsoft.com/abhinaba/2009/01/30/back-to-basics-mark-and-sweep-garbage-collection/
Info about LOH allocation algorithm:
https://www.red-gate.com/simple-talk/dotnet/net-framework/the-dangers-of-the-large-object-heap/

Weird out of memory exceptions [duplicate]

If your application has to do a lot of allocation and deallocation of large objects (> 85,000 bytes), it will eventually cause memory fragmentation, and your application will throw an out of memory exception.
Is there any solution to this problem or is it a limitation of CLR memory management?
Unfortunately, all the info I've ever seen only suggests managing the risk factors yourself: reuse large objects, allocate them at the beginning, make sure they're of sizes that are multiples of each other, use alternative data structures (lists, trees) instead of arrays. That just gave me the idea of creating a non-fragmenting list that splits itself into smaller arrays instead of using one large array. Arrays / lists seem to be the most frequent culprits IME.
Here's an MSDN Magazine article about it:
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx, but there isn't that much useful in it.
The thing about large objects in the CLR's garbage collector is that they are managed in a different heap.
The garbage collector uses a mechanism called compacting, which is basically defragmentation and re-linking of objects in the regular heap.
The thing is, since compacting large objects (copying and re-linking them) is an expensive procedure, the GC provides a different heap for them, which is never compacted.
Note also that memory allocation is contiguous, meaning that if you allocate Object #1 and then Object #2, Object #2 will always be placed after Object #1.
This is probably what's causing your OutOfMemoryExceptions.
I would suggest having a look at design patterns like Flyweight, Lazy Initialization and Object Pool.
You could also force a GC collection if you suspect that some of those large objects are already dead but have not been collected due to flaws in your flow of control, causing them to reach higher generations just before being ready for collection.
A program always bombs on OOM because it is asking for a chunk of memory that's too large, never because it has completely exhausted all of the virtual memory address space. You could argue that's a problem of the LOH getting fragmented; it is just as easy to argue that the program is simply using too much virtual memory.
Once a program goes beyond allocating half the addressable virtual memory (a gigabyte), it is really time to either make its code smarter so it doesn't gobble so much memory, or to make a 64-bit operating system a prerequisite. The latter is always cheaper. It doesn't come out of your pocket either.
Is there any solution to this problem or is it a limitation of CLR memory management?
There is no solution besides reconsidering your design. And it is not a problem of the CLR; note that the problem is the same for unmanaged applications. It comes from the fact that too much memory is used by the application at the same time, in segments laid out 'disadvantageously' in memory. If some external culprit has to be pointed at nevertheless, I would rather point at the OS memory manager, which (of course) does not compact its VM address space.
The CLR manages free regions of the LOH in a free list. In most cases this is the best that can be done against fragmentation. But for really large objects, the number of objects per LOH segment decreases, and eventually we end up having only one object per segment. Where those objects are positioned in the VM space is completely up to the memory manager of the OS. This means the fragmentation mostly happens at the OS level, not in the CLR. This is an often-overlooked aspect of heap fragmentation, and .NET is not to blame for it. (But it is also true that fragmentation can occur on the managed side, as nicely demonstrated in that article.)
Common solutions have been named already: reuse your large objects. So far I have not been confronted with any situation where this could not be achieved with proper design. However, it can be tricky sometimes and therefore may be expensive.
We were processing images in multiple threads. With images being large enough, this also caused OutOfMemory exceptions due to memory fragmentation. We tried to solve the problem by using unsafe memory and pre-allocating a heap for every thread. Unfortunately, this didn't help completely, since we relied on several libraries: we were able to solve the problem in our own code, but not in third-party code.
Eventually we replaced the threads with processes and let the operating system do the hard work. Operating systems built a solution for memory fragmentation long ago, so it's unwise to ignore it.
I have seen in a different answer that the LOH can shrink in size:
Large Arrays, and LOH Fragmentation. What is the accepted convention?
"
...
Now, having said that, the LOH can shrink in size if the area at its end is completely free of live objects, so the only problem is if you leave objects in there for a long time (e.g. the duration of the application).
...
"
Other than that, you can make your program run with extended memory: up to 3 GB on a 32-bit system and up to 4 GB (for a 32-bit process) on a 64-bit system.
Just add the /LARGEADDRESSAWARE flag in your linker, or use this post-build event:
call "$(DevEnvDir)..\tools\vsvars32.bat"
editbin /LARGEADDRESSAWARE "$(TargetPath)"
In the end, if you are planning to run the program for a long time with lots of large objects, you will have to optimize memory usage, and you might even have to reuse allocated objects to avoid the garbage collector - which is similar in concept to working with real-time systems.

Garbage collection runs too late - causes OutOfMemory exceptions

I was wondering if anyone could shed some light on this.
I have an application which has a large memory footprint (and a lot of memory churn). There aren't any memory leaks, and GCs tend to do a good job of freeing up resources.
Occasionally, however, a GC does not happen 'on time', causing an out of memory exception.
I've used the Red Gate profiler, which is very good: the application has a typical 'sawtooth' pattern, and the OOMs happen at the top of the sawtooth. Unfortunately the profiler can't be used (AFAIK) to identify sources of memory churn.
Is it possible to set a memory 'soft limit' at which a GC should be forced? At the moment, a GC is only performed when memory is at its absolute limit, resulting in OOMs.
It shouldn't really be possible for a garbage collection 'not to happen in time'. Collections happen when a new memory allocation would push Gen 0 past a certain limit, so they always happen before an allocation would push memory past its limit. This happens so many times a day throughout the world that I would be surprised if any bugs in this area weren't well known.
Have you considered that you might actually be allocating more memory than is available? The OS only lets you access 2 GB on most 32-bit machines.
There are some other possibilities:
Is your application using unmanaged memory?
Is your application pinning any memory? If so, that could cause a fragmentation issue, especially if you aren't releasing the pins.
If you use a lot of memory and you garbage collect a lot, I suggest you consider the Flyweight design pattern.
As an example, if you garbage collect a lot of strings, see String.Intern(string s).
MSDN reference
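For what it's worth, here is a small sketch of what interning buys you; BuildTag is a hypothetical helper standing in for whatever produces the repeated strings:

using System;

class InternDemo
{
    // Built at runtime so the compiler doesn't intern the literal for us.
    static string BuildTag()
    {
        return new string("sensor-42".ToCharArray());
    }

    static void Main()
    {
        string a = String.Intern(BuildTag()); // first occurrence enters the pool
        string b = String.Intern(BuildTag()); // later equal values reuse it
        Console.WriteLine(ReferenceEquals(a, b)); // True: one shared instance
    }
}

Keep in mind that interned strings stay alive for the lifetime of the process, so interning only pays off for values that recur many times.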
You can use GC.Collect() to force the garbage collector to do its work, but it is not preferable.
Use memory profilers (like memprofiler) to detect leaks. Almost every codebase leaks at some point.

Understanding Memory Performance Counters

[Update - Sep 30, 2010]
Since I have studied a lot on this and related topics, I'll write down whatever tips I have gathered from my experiences and from the suggestions provided in the answers here:
1) Use a memory profiler (try CLR Profiler, to start with) and find the routines which consume the most memory, then fine-tune them: for example, reuse big arrays and try to keep references to objects to a minimum.
2) If possible, allocate small objects (less than 85 KB for .NET 2.0) and use memory pools if you can, to avoid high CPU usage by the garbage collector.
3) If you increase the number of references to objects, you're responsible for de-referencing them the same number of times. You'll have peace of mind, and the code will probably work better.
4) If nothing works and you are still clueless, use the elimination method (comment out/skip code) to find out what is consuming the most memory.
Using memory performance counters inside your code might also help you.
Hope these help!
[Original question]
Hi!
I'm working in C#, and my issue is out of memory exceptions.
I read an excellent article on LOH here ->
http://www.simple-talk.com/dotnet/.net-framework/the-dangers-of-the-large-object-heap/
Awesome read!
And,
http://dotnetdebug.net/2005/06/30/perfmon-your-debugging-buddy/
My issue:
I am facing an out of memory issue in an enterprise-level desktop application. I have tried to read and understand the material about memory profiling and performance counters (I also tried WinDbg, a little bit!) but am still clueless about the basics.
I tried CLR profiler to analyze the memory usage. It was helpful in:
Showing me who allocated huge chunks of memory
What data type used the most memory
But both CLR Profiler and the performance counters (since they share the same data) failed to explain:
The numbers collected after each run of the app - how do I tell whether there is any improvement?!
How do I compare the performance data after each run - is a lower/higher number for a particular counter good or bad?
What I need:
I am looking for tips on:
How to free (yes, right) managed data type objects (like arrays and big strings) - but not by making GC.Collect calls, if possible. I have to handle byte arrays of around 500 KB (an unavoidable size :-( ) every now and then.
If fragmentation occurs, how to compact memory - as it seems the .NET GC is not really doing that effectively, causing OOM.
Also, what exactly is the 85 KB limit for the LOH? Is it the size of the object, or the overall size of the array? This is not very clear to me.
Which memory counters can tell whether code changes are actually reducing the chances of OOM?
Tips I already know:
1. Set managed objects to null - mark them as garbage - so that the garbage collector can collect them. This is strange - after setting a string[] object to null, the # Bytes in all Heaps shot up!
2. Avoid creating objects/arrays > 85 KB - this is not in my control. So there could be lots of LOH allocations.
3. Memory leak indicators:
# bytes in all Heaps increasing
Gen 2 Heap Size increasing
# GC handles increasing
# of Pinned Objects increasing
# total committed Bytes increasing
# total reserved Bytes increasing
Large Object Heap increasing
My situation:
I have got a 4 GB, 32-bit machine with Win 2K3 Server SP2 on it.
I understand that an application can use <= 2 GB of physical RAM
Increasing the Virtual Memory (pagefile) size has no effect in this scenario.
As it's an OOM issue, I am focusing only on memory-related counters.
Please advise! I really need some help, as I'm stuck because of a lack of good documentation!
Nayan, here are the answers to your questions, plus a couple of additional pieces of advice.
You cannot free them; you can only make them easier for the GC to collect. It seems you already know the way: the key is reducing the number of references to the object.
Fragmentation is one more thing you cannot control directly. But there are several factors which can influence it:
LOH external fragmentation is less dangerous than Gen 2 external fragmentation, because the LOH is not compacted. The free slots of the LOH can be reused instead.
If the 500 KB byte arrays you are referring to are used as IO buffers (e.g. passed to some socket-based API or unmanaged code), there is a high chance they will get pinned. A pinned object cannot be moved by the GC during compaction, and pinned objects are one of the most frequent causes of heap fragmentation.
85K is the limit for an object's size. But remember, a System.Array instance is an object too, so all your 500K byte[] arrays are in the LOH.
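If you want to see that threshold for yourself, here is a small sketch; it relies on the fact that freshly allocated LOH objects are reported as generation 2, and the exact cutoff varies slightly with object overhead:

using System;

class LohThresholdDemo
{
    static void Main()
    {
        byte[] small = new byte[80000]; // below ~85K: ordinary ephemeral heap
        byte[] large = new byte[90000]; // above ~85K: Large Object Heap

        Console.WriteLine(GC.GetGeneration(small)); // typically 0
        Console.WriteLine(GC.GetGeneration(large)); // typically 2 (LOH)
    }
}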
All the counters in your post can give a hint about changes in memory consumption, but in your case I would select BIAH (# Bytes in all Heaps) and LOH size as the primary indicators. BIAH shows the total size of all managed heaps (Gen 1 + Gen 2 + LOH, to be precise - no Gen 0; but who cares about Gen 0, right? :) ), and the LOH is the heap where all large byte[] arrays are placed.
Advice:
Something that has already been proposed: pre-allocate and pool your buffers.
A different approach, which can be effective if you can use any collection instead of a contiguous array of bytes (this is not the case if the buffers are used in IO): implement a custom collection which is internally composed of many smaller arrays. This is similar to std::deque from the C++ STL. Since each individual array is smaller than 85K, the whole collection won't end up in the LOH (see the sketch below). The advantage of this approach is the following: the LOH is only collected when a full GC happens. If the byte[] arrays in your application are not long-lived and (if they were smaller in size) would end up in Gen 0 or Gen 1 before being collected, this would make memory management much easier for the GC, since a Gen 2 collection is much more heavyweight.
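A minimal sketch of such a chunked collection, assuming a simple append/index API; the class name and chunk size are illustrative:

using System.Collections.Generic;

class ChunkedByteList
{
    private const int ChunkSize = 64 * 1024; // safely below the ~85K LOH threshold
    private readonly List<byte[]> _chunks = new List<byte[]>();
    private int _count;

    public void Add(byte value)
    {
        if (_count % ChunkSize == 0)
            _chunks.Add(new byte[ChunkSize]); // each chunk stays off the LOH
        _chunks[_count / ChunkSize][_count % ChunkSize] = value;
        _count++;
    }

    public byte this[int index]
    {
        get { return _chunks[index / ChunkSize][index % ChunkSize]; }
        set { _chunks[index / ChunkSize][index % ChunkSize] = value; }
    }

    public int Count
    {
        get { return _count; }
    }
}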
Advice on the testing and monitoring approach: in my experience, GC behavior, memory footprint, and other memory-related things need to be monitored for quite a long time to get valid and stable data. So each time you change something in the code, run a test that is long enough, while monitoring the memory performance counters, to see the impact of the change.
I would also recommend taking a look at the % Time in GC counter, as it can be a good indicator of the effectiveness of memory management. The larger this value is, the more time your application spends on GC routines instead of processing requests from users or doing other 'useful' operations. I cannot say which absolute values of this counter indicate an issue, but I can share my experience for your reference: for the application I am working on, we usually treat % Time in GC higher than 20% as an issue.
Also, it would be useful if you shared some values of the memory-related perf counters of your application: Private Bytes and Working Set of the process, BIAH, Total committed bytes, LOH size, Gen 0/1/2 sizes, # of Gen 0/1/2 collections, and % Time in GC. This would help us understand your issue better.
You could try pooling and managing the large objects yourself. For example, if you often need <500K arrays and the number of arrays alive at once is well understood, you could avoid ever deallocating them - that way, if you only need, say, 10 of them at a time, you could suffer a fixed 5 MB memory overhead instead of troublesome long-term fragmentation.
As for your three questions:
1. It is just not possible. Only the garbage collector decides when to finalize managed objects and release their memory. That's part of what makes them managed objects.
2. It is possible if you manage your own heap in unsafe code and bypass the large object heap entirely. You will end up doing a lot of work and suffering a lot of inconvenience if you go down this road. I doubt it's worth it for you.
3. It's the size of the object, not the number of elements in the array.
Remember, fragmentation only happens when objects are freed, not when they're allocated. If fragmentation is indeed your problem, reusing the large objects will help. Focus on creating less garbage (especially large garbage) over the lifetime of the app instead of trying to deal with the nuts and bolts of the GC implementation directly.
Another indicator is watching Private Bytes vs. # Bytes in all Heaps. If Private Bytes increases faster than # Bytes in all Heaps, you have an unmanaged memory leak; if # Bytes in all Heaps increases faster than Private Bytes, it is a managed leak.
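If you want to watch this from code rather than in perfmon, here is a hedged sketch using System.Diagnostics.PerformanceCounter (Windows / .NET Framework counter names; note the instance name may carry a #N suffix when several copies of the process are running):

using System;
using System.Diagnostics;

class LeakIndicator
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;

        var privateBytes = new PerformanceCounter("Process", "Private Bytes", instance);
        var managedBytes = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance);

        // Private Bytes outgrowing # Bytes in all Heaps suggests an unmanaged
        // leak; the reverse suggests a managed one.
        Console.WriteLine("Private Bytes:        {0:N0}", privateBytes.NextValue());
        Console.WriteLine("# Bytes in all Heaps: {0:N0}", managedBytes.NextValue());
    }
}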
To correct something that @Alexey Nedilko said:
"LOH external fragmentation is less dangerous than Gen2 external fragmentation, 'cause LOH is not compacted. The free slots of LOH can be reused instead."
is absolutely incorrect. Gen 2 is compacted, which means there is never free space after a collection. The LOH is NOT compacted (as he correctly mentions), and yes, free slots are reused. BUT if the free space is not contiguous enough to fit the requested allocation, then the segment size is increased - and it can continue to grow and grow. So you can end up with gaps in the LOH that are never filled. This is a common cause of OOMs, and I've seen it in many memory dumps I've analyzed.
Though there are now methods in the GC API (as of .NET 4.5.1) that can be called to programmatically compact the LOH, I strongly recommend avoiding this if app performance is a concern. Performing this operation at runtime is extremely expensive and can hurt your app's performance significantly; the default implementation of the GC omits this step precisely so that it stays performant. IMO, if you find that you have to call this because of LOH fragmentation, you are doing something wrong in your app - and it can be improved with pooling techniques, splitting arrays, and other memory allocation tricks instead. If this app is an offline app or some batch process where performance isn't a big deal, maybe it's not so bad, but I'd use it sparingly at best.
A good visual example of how this can happen is here - The Dangers of the Large Object Heap - and here - Large Object Heap Uncovered - by Maoni (GC team lead on the CLR).
