C# Memory Issues

I've got an application that:
Targets C# 6
Targets .NET 4.5.2
Is a Windows Forms application
Builds in AnyCPU mode because it...
Utilizes old 32-bit libraries (unmanaged memory) that cannot be upgraded to 64-bit
Uses DevExpress, a third-party control vendor
Processes many gigabytes of data daily to produce reports
After a few hours of use in jobs that have many plots, the application eventually runs out of memory. I've spent quite a long time cleaning up many leaks found in the code and have gotten the project to a state where, at worst, it may be using upwards of 400,000K of memory at any given time, according to performance counters. Processing the data has not caused any issues so far, since it is handled in jagged arrays, which avoids problems with the Large Object Heap.
Last time this happened the user was at ~305,000K of memory. The application is so "out of memory" that the error dialog cannot even draw the error icon in the MessageBox that comes up; the space where the icon would usually be is all black.
So far I've done the following to clean this up:
Windows Forms utilize the Disposed event to ensure that resources are cleaned up, and Dispose is called manually when required
Business objects utilize IDisposable to remove references
Verified cleanup using ANTS Memory Profiler and SciTech Memory Profiler. The low memory usage suggests a leak is not the problem, but I wanted to see if anything helpful showed up; it did not
Utilized the GCSettings.LargeObjectHeapCompactionMode property to remove any fragmentation left in the Large Object Heap (LOH) by data processing
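For reference, this is the standard pattern for requesting that compaction (just the .NET 4.5.1+ API, shown here for completeness):

using System.Runtime;

// Request a one-time compaction of the Large Object Heap on the next
// blocking full collection (available since .NET 4.5.1).
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // the setting resets to Default after this collection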
Nearly every article that I've used to get to this point suggests that "out of memory" actually means "out of contiguous address space", and given the amount in use, I agree with this. I'm not sure what to do at this point, since from what I understand (and am probably very wrong about) the garbage collector clears this up to make room as the process moves along, with the exception of the LOH, which is now cleaned up manually using the LargeObjectHeapCompactionMode property introduced in .NET 4.5.1.
What am I missing here? I cannot build for 64-bit because the old 32-bit libraries contain proprietary algorithms that we could not even dream of producing a 64-bit version of. Are there any modes in these profilers I should be using to identify exactly what is growing out of control here?
If this address space cannot be cleared up, does this mean that all C# applications will eventually run "out of memory" because of this?

Nearly every article that I've used to get to this point suggests that out of memory actually means out of contiguous address space and given the amount that's in use, I agree with this.
This is a reasonable hypothesis, but even reasonable hypotheses can be wrong. Yours probably is wrong. What should you do?
Test it with science. That is, look for evidence that falsifies your hypothesis: assume the problem is anything else, and let only the evidence you gather force you to the conclusion that your hypothesis is not false.
So:
at the point where your application runs out of memory, is it actually out of contiguous free pages of the necessary size? It sure sounds like your observations do not indicate that this is true, so the hypothesis is probably false.
What is other evidence that the hypothesis might be false?
"After a few hours of use in jobs that have many plots, the application eventually runs out of memory."
"Uses DevExpress, a third party control vendor"
"the error dialog cannot even draw the error icon in the MessageBox"
None of this sounds like an out of memory problem. This sounds like a third party control library leaking OS handles for graphics objects. Unfortunately, such leaks usually surface as "out of memory" errors and not "out of handles" errors.
So, that's a new hypothesis. Look for evidence for and against this hypothesis too. You're doing a good job by using a memory profiler. Use a handle profiler next.
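If you don't have a dedicated handle profiler at hand, a quick first check is to watch the GDI Objects and USER Objects columns in Task Manager, or to log them from the application itself. Here is a minimal sketch using the Win32 GetGuiResources API; hitting the default limit of 10,000 GDI handles per process is exactly the kind of thing that makes icons stop drawing:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class HandleMonitor
{
    const uint GR_GDIOBJECTS = 0;  // count of GDI handles owned by the process
    const uint GR_USEROBJECTS = 1; // count of USER handles owned by the process

    [DllImport("user32.dll")]
    static extern uint GetGuiResources(IntPtr hProcess, uint uiFlags);

    // Call this periodically (e.g. from a debug timer) and watch for steady growth.
    public static void Log()
    {
        IntPtr h = Process.GetCurrentProcess().Handle;
        Debug.WriteLine("GDI: " + GetGuiResources(h, GR_GDIOBJECTS) +
                        ", USER: " + GetGuiResources(h, GR_USEROBJECTS));
    }
}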
If this address space cannot be cleared up does this mean that all c# applications will eventually run "out of memory" because of this?
Nope. The GC does a good job of cleaning up managed memory; lots of applications have no problem running forever without leaking.

Related

How to find cause of high memory usage in a complex C# based Windows service?

I am having problems figuring out the source of a memory problem in a complex C#-based Windows service. Unfortunately the problem does not occur all the time and I still don't know exactly which conditions cause it to happen. Sometimes when I check the system resources used by the service, it takes up multiple gigabytes of memory, to the point where it throws OutOfMemoryExceptions everywhere because there isn't any memory left.
I have a paid version of .NET Memory Profiler available but so far it has been useless because the whole system becomes slow and unstable when the service uses too much memory so I cannot attach the memory profiler to the application.
The solution consists of more than 30 individual projects and hundreds of thousands of lines of code, so there is no way for me to find the source of the problem by simply looking through the source code.
So far the only thing I was able to do is creating a memory dump (.dmp file) of the process while it was using a lot of memory. Is there a way to analyze this dump or anything else that would help me narrow down the source of this problem?
If you could identify some central methods in the main classes of your legacy projects, and you have some kind of logging already in place, you could log the total memory (managed and unmanaged, if your application opens such resources) by calling
Process.GetCurrentProcess().PrivateMemorySize64
At least that would give you a feeling for whether the memory problem is "diffuse" (e.g., objects not being released for garbage collection), or whether it occurs only in certain use cases (a jump in memory consumption when a certain action happens). Then you could nail it down with more logging and by investigating the corresponding code sections. It's tedious, but when you cannot use code instrumentation, as you said, I find it effective. If you want to analyze a specific situation with a memory dump, you can use WinDbg, but that takes some effort to learn the first time and would be a separate topic (see https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/debugger-download-tools).
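A minimal sketch of what such periodic logging might look like, assuming a simple timer; the logLine delegate stands in for whatever logging infrastructure you already have:

using System;
using System.Diagnostics;
using System.Threading;

static class MemoryLogger
{
    static Timer _timer;

    // Call once at service start-up; logLine is a stand-in for your logger.
    public static void Start(Action<string> logLine)
    {
        _timer = new Timer(_ =>
        {
            long bytes = Process.GetCurrentProcess().PrivateMemorySize64;
            logLine("Private memory: " + (bytes / (1024 * 1024)) + " MB");
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }
}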

Is it conceivable that the Virtual Size reported by Process Explorer could cause OutOfMemory exceptions?

I am working to diagnose a series of OutOfMemoryException problems within an application of ours. This is an internal 32-bit (x86) OWIN-hosted WebAPI that runs within a console application and talks to a series of hardware components in parallel. For that reason it's creating around 20 instances of a library, and the sharp increase in "virtual size" memory matches when those instances are created.
From the output of Process Explorer and dotMemory, it does not appear that we're allocating that much actual memory within this application.
From reading many, many SO answers I think I understand that our problem is either fragmentation within the Gen 0, Gen 1, Gen 2 and LOH heaps, or we're possibly bumping into the 2GB addressable memory limit for a 32-bit process running on Windows 7. This application works in batches: it collects a bunch of data from hardware devices, creates collections in memory to aggregate that data into a single object, and then saves it to be retrieved by a client app. This activity is the cause of the spikes in the dotMemory visual, but these data structures are not enormous, which I think the dotMemory chart shows.
Looking at the heaps has shown they rarely grow beyond 10-15MB in size, and I don't see much evidence that the LOH is growing too large or being severely fragmented. I'm really struggling with how to proceed to better understand what's happening here.
So my question is two-fold:
Is it conceivable that we could be hitting that 2GB limit for virtual memory, and that's a cause for these memory exceptions?
If that is a possible cause then am I right in thinking a 64-bit build would get around that?
We are exploring moving to a 64-bit build, but that would require updating some low-level libraries we use to also be 64-bit. It's certainly an option we will explore eventually (if not sooner), but we're trying to understand this situation better before investing the time required.
Update after setting the LARGEADDRESSAWARE flag
Based on a recommendation I set that flag on the binary and, interestingly, saw the virtual size jump immediately to nearly 3GB. I don't know if I should be alarmed by that?!
I will monitor the application with this configuration for the next several hours.
In my case the advice provided by @ThomasWeller was indeed correct, and enabling the "large address aware" flag has allowed this application to run for several days without throwing memory exceptions.
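For anyone else trying this: the flag can be set on an existing 32-bit executable with the editbin tool that ships with Visual Studio, e.g. as a post-build step (MyApp.exe standing in for your binary). On 64-bit Windows, a large-address-aware 32-bit process gets 4GB of address space instead of 2GB:

editbin /LARGEADDRESSAWARE MyApp.exe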

When I use Socket.IO, why do I get the error "An unhandled exception of type 'System.OutOfMemoryException'"?

I coded a program to take a screenshot and send it to the server. Each time, I capture a screenshot, convert it to Base64, and send it using Socket.IO (via SocketIOClient.dll).
Dictionary<string, string> image = new Dictionary<string, string>();
image.Add("image", "");

private void windowMonitorTimer_Tick(object sender, EventArgs e)
{
    image["image"] = windowMonitorManager.MonitorScreen();
    client.getSocket().Emit("Shot", image);
}
windowMonitorManager.MonitorScreen() returns a Base64 string. If I do not call client.getSocket().Emit("Shot", image), the program runs correctly, but if I add this line, the program stops after about 2 seconds (nearly 80 sends) and gives me the error:
An unhandled exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll
If I do not send a string that long, just a short string like "hello", it sends about 1,600 times before the same problem occurs.
Does somebody know how to debug this problem?
Update:
I tried to test socket.Emit() and found it has a limit.
For example, if I send a string of 10,000,000 characters, the out-of-memory problem occurs after 88 sends.
If I send a string of 5,000,000 characters, the same problem occurs after 170 sends.
Out of memory is mostly an exception thrown when the process consumes much more memory than the system allows by default (like 2GB on a 32-bit system). On a 64-bit system the limit is higher, but it is still bound by a practical limit; it's not the theoretical value of 2^64, and it differs from OS to OS and depends on the underlying RAM, but it is large enough for a single process. This situation can happen for multiple reasons:
Memory leak (most prominent), mostly associated with unmanaged code calls. If there is a handle or memory allocation that is never de-allocated or freed, then over a period of time it leads to a huge memory allocation for the process, and thus the exception when the system cannot map any more.
Managed code can leak too, and I have done it myself: when objects get continuously created and are never de-referenced (i.e., they are still reachable in the GC context), you can end up in this scenario.
This is not a null reference or a corruption, so taking a direct stack trace will be of little use here, simply because you may get a different stack every time. It will just be the stack of whichever process thread was executing when the exception happened, which would be mostly misleading, so do not try that route. The method a thread happens to be executing doesn't mean it caused the memory leak, and it will be different for different threads.
How to debug:
A number of simple steps can be taken to narrow it down, but before anything else, ensure that you have a debug build with valid PDB files for all the loaded binaries of the process.
To find out whether it is a leak, monitor the process's "Working Set" and "Virtual Bytes" counters, either via Task Manager or preferably via perfmon, since perfmon is much more accurate and also provides a visual graph.
Now a leak is a leak, so steps like increasing the address space on a 32-bit system from the default 2GB to 3GB can only help for a while, but perfmon will tell you if there is a stabilization point. In a few cases a process needs, say, 2.2GB of memory, so the default 2GB is not enough, but the 3GB boot setting (with UserVA for fine-tuning) will help avoid the exception.
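For reference, on Vista and later the old boot.ini /3GB switch is configured through bcdedit (elevated prompt required; the binary must also be linked large-address-aware to benefit):

bcdedit /set increaseuserva 3072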
If you are using a 64-bit system then that is not a worry, but ensure that your binary is compiled for x64 or AnyCPU; a 32-bit binary will run as a WOW64 process and will have the same limitation on a 64-bit system too.
Also try the small handle utility from Sysinternals; running it multiple times in a batch against a process will provide details of leaking handles, in terms of the number of handles (file, mutex, etc.) allocated.
Once you have confirmed a genuine leak, and not a settings, configuration or system issue, then come the memory profilers. Among free tools you get a lot of information from windbg, UMDH and LeakDiag; they in fact point you to the exact stack trace that is leaking. UMDH and LeakDiag are both very good tools that let you know the leaking function. LeakDiag is more exhaustive than UMDH, but for runtime heap analysis UMDH is good enough.
Professional memory profilers like:
Dot Memory - http://www.jetbrains.com/dotmemory/
Ants - Red Gate - http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/
are also very good. I personally find dotMemory much more helpful; it quickly points to the leaking function or type with little effort, provided you have correct symbol files. Both have free trial versions available.
Mostly, resolving an out-of-memory exception is a gradual and iterative process, because the cause can be hidden deep inside, leaking a chunk of memory with every execution and bringing the complete process to its knees. Let me know if you need help using a specific tool, and we can see what more can be done to debug the issue further. Happy debugging!
Sounds to me like there's a bug in the SocketIOClient DLL. Without the DLL I cannot reproduce the problem, but tracking it down sounds easy.
Since C# is a garbage-collected language, the only way to get an Out Of Memory (OOM) is if you have too much memory allocated that cannot be traced back to a 'root object'. There are several kinds of root objects:
Static variables (or thread-static)
Method variables (locals / arguments) in your stack trace
All objects that you reference from these two (directly or indirectly) contribute to your memory pressure. If you allocate memory that is not available, the GC will first attempt to free memory before throwing an OOM; if not enough memory is free after the GC completes, an OOM will be thrown.
One obvious reason why this might happen is that you're running a 32-bit process, which is the default in Visual Studio nowadays. This can be fixed in the project properties. However, most processes don't need more than 2 GB of memory, so it's more likely that you've 'leaked' memory somewhere. So let's break it down:
Locals or arguments that leak
Ways to solve this kind of OOM:
Open Visual Studio, press Ctrl+D, E (or Debug -> Exceptions)
Click OutOfMemoryException -> check the 'Thrown' box
Run the program.
When the out of memory (OOM) exception hits, browse through the nodes in the stack trace (or 'Parallel Stacks') and check the size of the variables. In most cases it's a single buffer or collection that causes the problem; e.g., in your case a buffer might be filling up with socket data and never being emptied.
Static variables that leak
Other cases of OOM are usually buffers that gradually fill up and keep a reference to the main object tree. The easiest way to find these is to use a memory profiler, like the Red Gate ANTS memory profiler. Run your program in the profiler, take some snapshots and check for 'large instances'.
In general, I usually try to avoid static variables altogether, which solves a whole world of problems.
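To illustrate the kind of static leak meant here (the names are made up for the example): a static collection that only ever grows keeps every element reachable from a GC root forever:

using System.Collections.Generic;

static class FrameCache // hypothetical example, not from the question's code
{
    // Everything added here stays reachable from a static root,
    // so the GC can never reclaim it; the process grows until OOM.
    static readonly List<string> _frames = new List<string>();

    public static void Add(string base64Frame)
    {
        _frames.Add(base64Frame); // leak: nothing ever removes old frames
    }
}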
Oh and in this case...
Perhaps it's worth noting that there are a lot of good socket libraries out there. Even though I don't know the specifics of SocketIOClient, you might want to consider a widely supported, proven stack like WCF/SOAP or Protobuf. There's a lot of material online on how to use these in just about any scenario, so if the problem is in SocketIOClient, switching might be worth considering.
I would guess you are running your timer too frequently and consuming memory faster than it can be reclaimed. Did you try lowering its frequency?
If lowering the frequency doesn't help, your code or the SocketIOClient.dll library may be leaking memory. I suggest you first review your usage of that library to verify that you are not leaving resources open; a sketch of the kind of cleanup to check for follows.
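For example, a screen-capture routine like MonitorScreen typically creates Bitmap and Graphics objects, which wrap unmanaged GDI handles and must be disposed. The question doesn't show that code, so this is only an assumption about what a leak-free version might look like:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Windows.Forms;

// Hypothetical reconstruction of MonitorScreen: the using blocks ensure the
// GDI-backed Bitmap/Graphics objects are released after every capture.
static string MonitorScreen()
{
    Rectangle bounds = Screen.PrimaryScreen.Bounds;
    using (var bmp = new Bitmap(bounds.Width, bounds.Height))
    using (var g = Graphics.FromImage(bmp))
    using (var ms = new MemoryStream())
    {
        g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
        bmp.Save(ms, ImageFormat.Png);
        return Convert.ToBase64String(ms.ToArray());
    }
}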

MemoryFailPoint fires too early on WinXP 64

I have created a volume class (called VoxelVolume) with self-organizing memory management, since the GC in C# didn't provide a good mechanism for managing the contents of the volume for mapping, unmapping and remapping. Although I could have used the mechanisms of virtual memory, the problem is that the files are often too large to fit into the page file, and I don't want to force users to increase their pagefile size.
Currently this system is working quite well, and there is no problem with lacking resources or OutOfMemoryExceptions, since the InsufficientMemoryException raised via MemoryFailPoint works quite well. This was all tested on a 32-bit WinXP system with 2GB of main memory.
Running the same mechanism on a 64-bit system with 32GB of main memory also works well, but when the application runs, the MemoryFailPoint suddenly throws an exception although 24GB of main memory are still free. Another point: once the MemoryFailPoint has fired, it fires every time, and there is no way to get rid of it.
What I have read so far is that there is a small object heap and a large object heap (SOH and LOH), but only the SOH is really taken care of by the GC, and I can free the SOH of unused objects by calling GC.Collect() and GC.WaitForPendingFinalizers(). The MemoryFailPoint is obviously the only way to get a little bit of control over the LOH, but since there is enough memory left on the system, I see no reason why the MemoryFailPoint should fire.
Is there any experience around here using the MemoryFailPoint?
Thank you for your help
Martin
I suppose the MemoryFailPoint fires due to memory fragmentation.
Even on a 64-bit system you still cannot allocate a single chunk bigger than 2GB, as far as I know.
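For context, the usual MemoryFailPoint pattern looks like this; it checks up front for the requested amount of memory and address space and throws InsufficientMemoryException instead of a later OutOfMemoryException (the 512 MB figure and ProcessLargeVolume are placeholders):

using System;
using System.Runtime;

try
{
    // Check that 512 MB can be obtained before starting the large operation;
    // the gate is released when the MemoryFailPoint is disposed.
    using (var gate = new MemoryFailPoint(512))
    {
        ProcessLargeVolume(); // hypothetical placeholder for the real work
    }
}
catch (InsufficientMemoryException ex)
{
    Console.WriteLine("Not enough memory to start the operation: " + ex.Message);
}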

System.OutOfMemory being thrown. How to find the culprit?

I am using Visual C# Express 2008 and I have an application that starts up on a form, but uses a thread with a delegated display function to take care of essentially all the processing. That way my form doesn't lock up while tasks are being processed.
Semi-recently, after going through a repeated process a number of times (the program processes incoming data, so when data comes in, the process repeats), my app will crash with a System.OutOfMemoryException.
The stack trace in the error message is useless because it only directs me to the line where I call the delegated form control function.
I've heard people say they use ProcMon from Sysinternals to see why errors like this happen. But for the life of me, I can't figure it out. The amount of memory I am using doesn't change as the program runs; if it goes up, it comes back down. Plus, even if it were going up, how do I figure out which part of my program is the problem?
How can I go about investigating this problem?
EDIT:
So, after delving further into this issue, I looked through anything that I was ever re-declaring. There were a few instances where I had hugematrix = new uint[gigantic], so I got rid of about 3 of those.
Instead of getting rid of the error, it is now far more obscured and confusing.
My application takes the incoming data, and renders it using OpenGL. Now, instead of throwing "System.OutOfMemory" it simply does not render anything with OpenGL.
The only difference in my code is that I do not make new matrices for holding the data I plot. That way, I hope, my array stays in the same place in memory and doesn't do anything suicidal to my LOH.
Unfortunately, this twists the beast far beyond my meager means. With zero errors popping up, and all my data structures apparently still properly filled, how can I find my problem? Does OpenGL use memory in an obscure way so as to not throw exceptions when it fails? Is memory still a problem? How do I find out? All the memory profilers in the world seem to tell me very little.
EDIT:
With the boatloads of support from this community (with extra kudos to Amissico) the error has finally been rooted out. Apparently I was adding items to an OpenGL list, and never taking them off the list.
The app that finally clued me in was .NET Memory Profiler. At the time of the crash it showed 1.5GB of data in the <unknown> category. Through a process of elimination (everything else in the list was named), the last thing to check off the list was the OpenGL rendering pipeline. The rest is history.
Based on the description in your comments, I would suspect that you are either not disposing of your images correctly or that you have severe Large Object Heap fragmentation and, when trying to allocate for a new image, don't have enough contiguous space available. See this question for more info - Large Object Heap Fragmentation
You need to use a memory profiler, such as the ANTS memory profiler, to find out what causes this error.
Are you re-registering an event handler on every loop and not un-registering it?
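For illustration, this is the pattern that question is getting at (dataSource, OnDataReceived and ProcessIncoming are made-up names): each subscription keeps the handler, and everything it references, reachable from the publisher:

// Leaky pattern: a new subscription is added on every iteration and never
// removed, so handlers pile up and are all invoked on each event.
foreach (var batch in incomingBatches)
{
    dataSource.DataReceived += OnDataReceived; // leak: re-registered each loop
    ProcessIncoming(batch);
}

// Fix: subscribe once, outside the loop, and unsubscribe when done.
dataSource.DataReceived += OnDataReceived;
try
{
    foreach (var batch in incomingBatches)
        ProcessIncoming(batch);
}
finally
{
    dataSource.DataReceived -= OnDataReceived;
}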
CLR Profiler for the .NET Framework 2.0 at https://github.com/MicrosoftArchive/clrprofiler
The most common cause of memory fragmentation is excessive string creation.
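A small illustration of that point: concatenating in a loop allocates a brand-new string per iteration, while a StringBuilder grows one internal buffer (GetLine is a hypothetical line producer):

using System.Text;

// Each += allocates a new string and copies the old contents, churning the heap:
string report = "";
for (int i = 0; i < 100000; i++)
    report += GetLine(i);

// StringBuilder appends into one reusable buffer instead:
var sb = new StringBuilder();
for (int i = 0; i < 100000; i++)
    sb.Append(GetLine(i));
string report2 = sb.ToString();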
Some considerations:
Make sure that the threads you spawn are destroyed (aborted, or their function returns). Too many threads can make an application fail, even though the memory shown as used in Task Manager is not too high.
Memory leaks. Yes, yes, you can cause them in .NET quite easily without ever setting a reference to null. These can be found using memory profilers like dotTrace or ANTS Memory Profiler.
I had an OutOfMemoryException-problem as well:
Microsoft Visual C# 2008 Reducing number of loaded dlls
The reason was fragmentation of the 2GB virtual address space, and poster nobugz suggested Sysinternals' VMMap utility, which has been very helpful for diagnostics. You can use it to check whether your free memory areas become more fragmented over time. (First sort by size, then by type -> refresh, repeat the sorting, and you can see if the contiguous free memory blocks become smaller.)
