MemoryFailPoint fires too early on 64-bit WinXP - C#

I have created a volume class (called VoxelVolume) with self-organizing memory management, since the GC in C# doesn't provide a good mechanism for managing the volume's contents for mapping, unmapping and remapping. Although I could have relied on virtual memory, the problem is that the files are often too large to fit into the page file and I don't want to force users to increase the pagefile size.
Currently this system is working quite well: there is no lack of resources and no OutOfMemoryExceptions, since the InsufficientMemoryException raised via the MemoryFailPoint works quite well as an early warning. This was all tested on a 32-bit WinXP system with 2GB of main memory.
Running the same mechanism on a 64-bit system with 32GB of main memory also works well, but while the application runs the MemoryFailPoint suddenly throws an exception although 24GB of main memory are still free. Another point is that once the MemoryFailPoint has fired, it fires every time and there is no way to get rid of it.
From what I have read so far, there is a small object heap and a large object heap (SOH and LOH), but the GC only really takes care of the SOH, and I can free the SOH of unused objects by calling GC.Collect() and GC.WaitForPendingFinalizers(). The MemoryFailPoint is obviously the only way to get a little bit of control over the LOH, but since there is enough memory left on the system I see no reason why the MemoryFailPoint should fire.
Is there any experience around here using the MemoryFailPoint?
Thank you for your help
Martin

I suppose the MemoryFailPoint fires due to memory fragmentation.
Even on a 64-bit system you still cannot allocate a single chunk bigger than 2GB, as far as I know.
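For reference, a MemoryFailPoint is normally used as a gate around a large allocation, roughly like this (a minimal sketch; the 512 MB estimate and the mapping work are illustrative):

    using System;
    using System.Runtime;

    // Rough estimate of the memory the next volume mapping will need, in MB (illustrative).
    const int estimatedMegabytes = 512;

    try
    {
        // MemoryFailPoint only checks whether the allocation is likely to succeed;
        // it does not reserve or allocate the memory itself.
        using (new MemoryFailPoint(estimatedMegabytes))
        {
            // Perform the large allocation / remapping work here.
        }
    }
    catch (InsufficientMemoryException ex)
    {
        // Raised before any work is done, so the caller can unmap other blocks
        // and retry instead of running into an OutOfMemoryException mid-operation.
        Console.WriteLine("Projected memory shortage: " + ex.Message);
    }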


C# Memory Issues

I've got an application that:
Targets C# 6
Targets .net 4.5.2
Is a Windows Forms application
Builds in AnyCPU mode because it...
Utilizes old 32-bit libraries that cannot be upgraded to 64-bit and use unmanaged memory
Uses DevExpress, a third party control vendor
Processes many gigabytes of data daily to produce reports
After a few hours of use in jobs that have many plots, the application eventually runs out of memory. I've spent quite a long time cleaning up many leaks found in the code and have gotten the project to a state where, at worst, it may be using upwards of 400,000 KB of memory at any given time, according to performance counters. Processing this data has not caused any issues at this point, since data is processed in jagged arrays, preventing any issues with the Large Object Heap.
Last time this happened the user was using ~305,000 KB of memory. The application is so "out of memory" that the error dialog cannot even draw the error icon in the MessageBox that comes up; the space where the icon would usually be is all black.
So far I've done the following to clean this up:
Windows Forms utilize the Disposed event to ensure that resources are cleaned up; Dispose is called manually when required
Business objects utilize IDisposable to remove references
Verified cleanup using ANTS memory profiler and SciTech memory profiler.
The low memory usage suggests this is not the case, but I wanted to see if anything helpful showed up; it did not
Utilized the GCSettings.LargeObjectHeapCompactionMode property to remove any fragmentation in the Large Object Heap (LOH) left over from processing data
Nearly every article that I've used to get to this point suggests that out of memory actually means out of contiguous address space, and given the amount that's in use, I agree with this. I'm not sure what to do at this point since, from what I understand (and am probably very wrong about), the garbage collector clears this up to make room as the process moves along, with the exception of the LOH, which is now cleaned up manually using the new LargeObjectHeapCompactionMode property introduced in .NET 4.5.1.
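For reference, the compaction step described above boils down to something like this (a minimal sketch; it requires .NET 4.5.1 or later):

    using System;
    using System.Runtime;

    // Ask the GC to compact the Large Object Heap on the next blocking full collection.
    // The setting applies to exactly one collection and then resets itself.
    GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
    GC.Collect();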
What am I missing here? I cannot build for 64-bit due to the old 32-bit libraries, which contain proprietary algorithms we have no possibility of producing a 64-bit version of. Are there any modes in these profilers I should be using to identify exactly what is growing out of control here?
If this address space cannot be cleared up, does this mean that all C# applications will eventually run "out of memory" because of this?
Nearly every article that I've used to get to this point suggests that out of memory actually means out of contiguous address space and given the amount that's in use, I agree with this.
This is a reasonable hypothesis, but even reasonable hypotheses can be wrong. Yours probably is wrong. What should you do?
Test it with science. That is, look for evidence that falsifies your hypothesis. Assume the cause is anything else, and only accept the hypothesis when the evidence you've gathered forces you to conclude that it is not false.
So:
At the point where your application runs out of memory, is it actually out of contiguous free pages of the necessary size? It sure sounds like your observations do not indicate that this is true, so the hypothesis is probably false.
What is other evidence that the hypothesis might be false?
"After a few hours of use in jobs that have many plots, the application eventually runs out of memory."
"Uses DevExpress, a third party control vendor"
"the error dialog cannot even draw the error icon in the MessageBox"
None of this sounds like an out of memory problem. This sounds like a third party control library leaking OS handles for graphics objects. Unfortunately, such leaks usually surface as "out of memory" errors and not "out of handles" errors.
So, that's a new hypothesis. Look for evidence for and against this hypothesis too. You're doing a good job by using a memory profiler. Use a handle profiler next.
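If a dedicated handle profiler is not at hand, a quick way to gather evidence is to log the process's GDI and USER object counts over time; if they climb steadily while managed memory stays flat, the handle-leak hypothesis gains support. A minimal sketch using the Win32 GetGuiResources API (the logging scheme is illustrative):

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    static class HandleMonitor
    {
        const uint GR_GDIOBJECTS = 0;   // GDI objects: pens, brushes, bitmaps, device contexts, ...
        const uint GR_USEROBJECTS = 1;  // USER objects: windows, menus, cursors, ...

        [DllImport("user32.dll")]
        static extern uint GetGuiResources(IntPtr hProcess, uint uiFlags);

        public static void Log()
        {
            using (var p = Process.GetCurrentProcess())
            {
                uint gdi = GetGuiResources(p.Handle, GR_GDIOBJECTS);
                uint user = GetGuiResources(p.Handle, GR_USEROBJECTS);
                Debug.WriteLine("GDI: " + gdi + ", USER: " + user + ", kernel handles: " + p.HandleCount);
            }
        }
    }

A 32-bit GUI process will typically fail long before the 2GB address limit if it approaches the per-process GDI object quota, which is around 10,000 by default.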
If this address space cannot be cleared up, does this mean that all C# applications will eventually run "out of memory" because of this?
Nope. The GC does a good job of cleaning up managed memory; lots of applications have no problem running forever without leaking.

.NET and native applications memory management

As far as I know, a single .NET application can allocate a lot of the available memory. It will be released by the GC at some point. I never had to care much about the details; it just works.
But what happens if a native application is started while most or all memory is in use by a .NET application? Will the GC respect this and free memory beforehand? Or will Windows "take care" of it and swap the memory of the .NET application out to the swap file?
I have a single PC with a slow HDD, where my WPF application (MVVM, bitmaps, database, pretty memory-intensive) occupies 200-2000 MB (up to 80%) of RAM, and I get reports that the PC becomes slow when running Office, antivirus, etc.
In Photoshop, for example, there is a setting to limit the amount of RAM used. Now I am wondering whether such a thing makes sense in my WPF application.
Is uncontrolled GC memory allocation a problem or not? Should I limit the amount of memory used by my application?
The problem described does not look related to .NET memory management or the GC, assuming that the application only holds data it actually uses in memory.
A .NET app is no different from any other user app, so the OS will treat them all the same way: infrequently used memory blocks are moved to the swap file.
If the application occupies 80% of RAM and works with memory intensively, competing with other applications, the whole process will generate a lot of page faults, causing heavy traffic between the swap file and memory. This leads to severe performance degradation, especially on a slow HDD.
The .NET part in this game is just to clean the application's memory from time to time of data that is no longer in use. If a large amount of data is simply required for the app to run and adding more RAM is not an option, then redesigning the application to limit the amount of loaded data is a reasonable approach.
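If such a redesign is pursued, one common pattern is a byte-budgeted cache that evicts the least recently used items once a configurable limit is exceeded, similar in spirit to Photoshop's RAM setting. A minimal sketch (the class name, budget and size estimate are all illustrative):

    using System;
    using System.Collections.Generic;

    // Keeps its total payload under a byte budget by evicting least recently used entries.
    class ByteBudgetCache<TKey, TValue>
    {
        private readonly long _budgetBytes;
        private readonly Func<TValue, long> _sizeOf;
        private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map =
            new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
        private readonly LinkedList<KeyValuePair<TKey, TValue>> _lru =
            new LinkedList<KeyValuePair<TKey, TValue>>();
        private long _usedBytes;

        public ByteBudgetCache(long budgetBytes, Func<TValue, long> sizeOf)
        {
            _budgetBytes = budgetBytes;
            _sizeOf = sizeOf;
        }

        public void Add(TKey key, TValue value)
        {
            Remove(key); // replacing an existing entry must not leak its size
            var node = _lru.AddFirst(new KeyValuePair<TKey, TValue>(key, value));
            _map[key] = node;
            _usedBytes += _sizeOf(value);

            // Evict the oldest entries until we are back under budget.
            while (_usedBytes > _budgetBytes && _lru.Last != null)
                Remove(_lru.Last.Value.Key);
        }

        public bool TryGet(TKey key, out TValue value)
        {
            LinkedListNode<KeyValuePair<TKey, TValue>> node;
            if (_map.TryGetValue(key, out node))
            {
                _lru.Remove(node);   // mark as most recently used
                _lru.AddFirst(node);
                value = node.Value.Value;
                return true;
            }
            value = default(TValue);
            return false;
        }

        private void Remove(TKey key)
        {
            LinkedListNode<KeyValuePair<TKey, TValue>> node;
            if (_map.TryGetValue(key, out node))
            {
                _usedBytes -= _sizeOf(node.Value.Value);
                _lru.Remove(node);
                _map.Remove(key);
            }
        }
    }

For example, decoded bitmaps could be capped at roughly 500 MB by constructing the cache with budgetBytes = 500L * 1024 * 1024 and a sizeOf delegate that estimates 4 bytes per pixel.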

Is it conceivable that the Virtual Size reported by Process Explorer could cause OutOfMemory exceptions?

I am working to diagnose a series of OutOfMemoryException problems within an application of ours. This is an internal 32-bit (x86) OWIN-hosted WebAPI that runs within a console application and talks to a series of hardware components in parallel. For that reason it's creating around 20 instances of a library, and the sharp increase in "virtual size" memory matches when those instances are created.
From the output of Process Explorer and dotMemory, it does not appear that we're allocating that much actual memory within this application.
From reading many, many SO answers I think I understand that our problem is either fragmentation within the Gen 0, Gen 1, Gen 2 and LOH heaps, or we're possibly bumping into the 2GB addressable memory limit for a 32-bit process running on Windows 7. This application works in batches: it collects a bunch of data from hardware devices, creates collections in memory to aggregate that data into a single object, and then saves it to be retrieved by a client app. This activity is the cause of the spikes in the dotMemory visual, but these data structures are not enormous, which I think the dotMemory chart shows.
Looking at the heaps has shown they rarely grow beyond 10-15MB in size, and I don't see much evidence that the LOH is growing too large or being severely fragmented. I'm really struggling with how to proceed to better understand what's happening here.
So my question is two-fold:
Is it conceivable that we could be hitting that 2GB limit for virtual memory, and that's a cause for these memory exceptions?
If that is a possible cause then am I right in thinking a 64-bit build would get around that?
We are exploring moving to a 64-bit build, but that would require updating some low-level libraries we use to also be 64-bit. It's certainly an option we will explore eventually (if not sooner), but we're trying to understand this situation better before investing the time required.
Update after setting the LARGEADDRESSAWARE flag
Based on a recommendation I set that flag on the binary and, interestingly, saw the virtual size jump immediately to nearly 3GB. I don't know if I should be alarmed by that?!
I will monitor the application with this configuration for the next several hours.
In my case the advice provided by @ThomasWeller was indeed correct, and enabling the "large address aware" flag has allowed this application to run for several days without throwing memory exceptions.
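One way to sanity-check the effect of the flag, and to see how close a 32-bit process is to exhausting its address space, is to query the process's virtual address space via the Win32 GlobalMemoryStatusEx call; a minimal sketch (the helper class is illustrative):

    using System;
    using System.Runtime.InteropServices;

    static class AddressSpaceInfo
    {
        [StructLayout(LayoutKind.Sequential)]
        class MEMORYSTATUSEX
        {
            public uint dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUSEX));
            public uint dwMemoryLoad;
            public ulong ullTotalPhys;
            public ulong ullAvailPhys;
            public ulong ullTotalPageFile;
            public ulong ullAvailPageFile;
            public ulong ullTotalVirtual;        // total user address space for this process
            public ulong ullAvailVirtual;        // address space not yet reserved or committed
            public ulong ullAvailExtendedVirtual;
        }

        [DllImport("kernel32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        static extern bool GlobalMemoryStatusEx([In, Out] MEMORYSTATUSEX lpBuffer);

        public static void Print()
        {
            var status = new MEMORYSTATUSEX();
            if (GlobalMemoryStatusEx(status))
            {
                // Roughly 2 GB for a plain 32-bit process, roughly 4 GB with
                // LARGEADDRESSAWARE set when running on 64-bit Windows.
                Console.WriteLine("Total virtual: {0} MB", status.ullTotalVirtual / 1024 / 1024);
                Console.WriteLine("Available virtual: {0} MB", status.ullAvailVirtual / 1024 / 1024);
            }
        }
    }

Logging ullAvailVirtual periodically alongside the dotMemory snapshots would show whether address space, rather than managed heap size, is what runs out before the exceptions appear.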

How do you increase the heap size of a Mono for Android application?

I have a Mono for Android app that I think is running out of memory when I load and parse an XML document using the XmlDocument class multiple times in a row.
I see that the garbage collector is reporting that I only have 7367K of memory available, which seems quite low.
How can I increase this either through configuration or at runtime?
I'm afraid the Android virtual machine memory available to each application is quite limited: 16MB in most cases and 24MB on some devices. I also ran into that limitation. First you should check that your application has no memory leaks. If that's not enough, then you may need to consider forcing calls to the garbage collector: http://docs.xamarin.com/android/advanced_topics/garbage_collection. You should also bear in mind that calling the GC will make your application slower.
If anyone has a better option I'd be very happy to know about it!
I found that there is a bug in the XmlDocument that causes it to crash in some situations (loading large XML files (~180K) quickly in sequence). I will be reporting this to Xamarin to see if they can investigate it further.
After I converted my code to use XmlTextReader instead, the memory behavior changed. Now the system dynamically increases the heap size reported during GC cycles. The size goes up and down as necessary and nothing crashes.
With the XmlDocument code, instead of increasing the heap size, it just crashed.
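For completeness, the shape of that change is roughly the following: instead of materializing the whole document with XmlDocument, stream it with XmlTextReader so only the current node is held in memory (the element and attribute names are illustrative):

    using System.Xml;

    static void ParseItems(string path)
    {
        // Old approach, whole DOM in memory:
        //   var doc = new XmlDocument();
        //   doc.Load(path);

        // Forward-only streaming; only the current node is kept in memory.
        using (var reader = new XmlTextReader(path))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "item")
                {
                    string id = reader.GetAttribute("id");  // hypothetical attribute
                    // handle one element at a time here
                }
            }
        }
    }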

Random RAM usage amounts

I was hoping someone could explain why my application uses varying amounts of RAM when loaded. I'm speaking about a compiled version that runs the exe directly. It's a pretty basic application and there are no conditional branches in its startup. Yet every time I start it up, the RAM usage varies from 6MB to 16MB.
I know it's on the small end of usage anyway, but I'm curious why this happens.
Edit: to give a bit more clarification on what the app actually does.
It is a WinForm project.
It connects to a database using SqlClient to retrieve a list of servers.
Based on that list a series of buttons are created to start and stop a service on those servers.
It uses a System.Timers.Timer to audit the status of the services on those servers every 20 seconds (sketched below).
The application at this point just sits there and waits for user input via one of the button clicks to start or stop the service.
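A minimal sketch of that polling setup (the service name, server list and button wiring are placeholders):

    using System.ServiceProcess;   // add a reference to System.ServiceProcess.dll
    using System.Timers;

    class ServiceAuditor
    {
        private readonly string[] _servers;
        private readonly Timer _timer;

        public ServiceAuditor(string[] servers)
        {
            _servers = servers;
            _timer = new Timer(20000);            // audit every 20 seconds
            _timer.Elapsed += (s, e) => Audit();
            _timer.Start();
        }

        private void Audit()
        {
            foreach (var server in _servers)
            {
                using (var sc = new ServiceController("MyService", server))  // "MyService" is illustrative
                {
                    ServiceControllerStatus status = sc.Status;   // Running, Stopped, ...
                    // update the corresponding start/stop button state here
                }
            }
        }
    }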
The trick here is that the amount of RAM reported by Task Manager is not the amount of RAM used by your application. Rather, it is the amount of RAM reserved for use by your application.
Remember that with managed frameworks like .NET, you don't request or release memory directly. Rather, a garbage collector manages the memory for you. The amount of memory reserved for your application at a given time can vary and depends on a lot of different factors, including memory pressure created at the time by other programs.
Think of it this way: if you need 10 MB of RAM for your app, is it faster to request and return it to the operating system 1 MB at a time over 10 requests/releases, or to reserve the block at once with one request/release? Now extend that to a scenario where you don't know exactly how much RAM you'll need, only that it's somewhere in the neighborhood of 10 MB. Additionally, your computer has 1 GB sitting there unused. Of course the best thing to do is take a good-sized chunk of that available RAM. Even 20 or 30 MB wouldn't be unreasonable relative to the RAM that's sitting there unused, because unused RAM is wasted performance.
If your system later starts to feel some memory pressure, then .NET can easily return some RAM to the system. This is one of the ways managed languages can sometimes give better performance than languages like C++ with traditional memory management: a garbage collector can more easily take the health of the entire system into account when allocating memory.
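To see the difference between what the managed heap is actually using and what the OS has set aside for the process, compare GC.GetTotalMemory with the process counters; a minimal sketch:

    using System;
    using System.Diagnostics;

    class MemorySnapshot
    {
        static void Main()
        {
            long managedBytes = GC.GetTotalMemory(false);   // live managed heap, no forced collection

            using (var p = Process.GetCurrentProcess())
            {
                Console.WriteLine("Managed heap:  {0} MB", managedBytes / 1024 / 1024);
                Console.WriteLine("Working set:   {0} MB", p.WorkingSet64 / 1024 / 1024);        // RAM currently resident
                Console.WriteLine("Private bytes: {0} MB", p.PrivateMemorySize64 / 1024 / 1024); // committed private memory
            }
        }
    }

The working set and private bytes will normally be noticeably larger than the managed heap, and it is those OS-level numbers that Task Manager reports.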
What are you using to determine how much memory is being "used"? Even with regular applications, Windows will aggressively allocate unused memory in advance; with .NET applications it's even more complicated to tell how much memory is actually being used and how much Windows is just tacking on so that it will be available instantly when needed. If another application actually asks for memory, this reserved memory will be repurposed.
One way to check is to minimize the application (at least on XP). If you are looking at the memory use in something like Task Manager, you'll notice it drops off right away, eliminating the seemingly "random" amount allocated.
It may be related to the JIT compiler: after the first load the jitter has already created a compiled version and doesn't need to run again. Other than that, you would have to give us some more details about the app and which kind of memory you are referring to.
