ASP.NET application out of memory exception for no reason - C#

Here is the deal: when my web server starts up, it creates a couple of long arrays (around 20 million elements each) of really small objects (one to three ints each). The total size of any individual array is NOT larger than 2GB (the CLR limitation; see the link below for details). The w3wp.exe process does grow to close to 2GB of memory usage (never more than that). The code is compiled for the Any CPU platform and runs on Windows 7 x64 with 8GB of RAM.
What on earth makes it throw an OutOfMemoryException while creating my lists? Does it make any difference whether I host the process through IIS or VS? This does not appear to happen in PROD, but I am experiencing it on my dev machine all the time. (Will try to restart now...)
This may be related but I don't seem to have objects that big:
Very large collection in .Net causes out-of-memory exception
EDIT:
It does make a difference whether I run under IIS or VS - I don't see it happening when the process is started in IIS. So could it be a Visual Studio debugger limitation?

Based on your updated question, it's clear that Visual Studio does not run your process in 64-bit mode. So your limitation is 2GB under Visual Studio.
This post probably contains some code helpful to prove this fact:
How to detect Windows 64-bit platform with .NET?
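As a rough illustration (a minimal sketch, assuming .NET 4.0 or later, where these properties exist), the check boils down to something like:

using System;

class BitnessCheck
{
    static void Main()
    {
        // True only when the code is actually running as a 64-bit process;
        // under a 32-bit host such as the VS hosting process this will be false.
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);

        // True whenever the underlying OS is 64-bit, regardless of the process bitness.
        Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);

        // IntPtr.Size is 8 in a 64-bit process and 4 in a 32-bit one (works on older frameworks too).
        Console.WriteLine("IntPtr.Size:    " + IntPtr.Size);
    }
}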

Probably the memory allocations are not optimized (i.e. they are done in small steps with repeated resizes). This can fragment the heap to the point where there is no longer a contiguous free block large enough to hold the 'semi-large' array.
That allocation then fails, and this situation is by definition an OOM, even though plenty of fragmented heap space may still be available. Excessive use of LINQ is a common cause; at a certain point deferred execution loses its appeal, and you can buy a lot of performance/resources by doing one or two '.ToList()' calls at strategic places (in my experience, often close to the beginning of your generating process, where the bulk of the data arrives).
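To illustrate that last point, here is a minimal sketch (the pipeline and data are invented for illustration) of materializing a deferred LINQ query once instead of re-executing it every time it is enumerated:

using System;
using System.Collections.Generic;
using System.Linq;

class LinqMaterializationSketch
{
    static void Main()
    {
        IEnumerable<int> source = Enumerable.Range(0, 1000);

        // Deferred: every consumer re-runs the whole pipeline and re-allocates iterators.
        IEnumerable<int> deferred = source.Where(i => i % 2 == 0).Select(i => i * i);

        // Materialized once, early on, so later stages work against one stable list
        // instead of re-executing the query (and re-allocating) each time it is enumerated.
        List<int> materialized = deferred.ToList();

        Console.WriteLine(materialized.Count);
    }
}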

Check if you have the app pool recycling threshold set to 2GB:
http://technet.microsoft.com/en-us/library/cc732519%28WS.10%29.aspx

Related

Besides LARGE_ADDRESS_AWARE, what other factors will limit C# process memory consumption?

I took over a C# project which loads 3D models into memory, so I need a lot of memory. My platform is 64-bit Windows 10, the C# program is 32-bit, and I use Visual Studio 2013 to develop. My laptop has 8GB of memory.
Before I used editbin /largeaddressaware $(TargetPath) to add the LARGE_ADDRESS_AWARE flag to the C# program, it could only consume approximately 1GB of memory before throwing an OutOfMemory exception; after adding the LARGE_ADDRESS_AWARE flag, it can consume approximately 1.5GB.
I know that with LARGE_ADDRESS_AWARE on a 32-bit process running on a 64-bit platform, the memory limit is 4GB. I have also read some articles which say that, because of .NET background work and memory fragmentation, the process is not able to allocate all the way up to 4GB.
But I think 1.5GB is still far too far from 4GB, so I want to ask whether there is any other factor that will limit memory usage. Thank you for your answer.
If you're trying to debug your application, your application will not run with LARGEADDRESSAWARE (because the vshost.exe is not properly flagged).
How to: Disable the Hosting Process
Also, be mindful of the GC; it won't aggressively clean up memory in these sorts of situations. So it might be one of the few situations where it is beneficial to call:
GC.Collect();
GC.WaitForPendingFinalizers();
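If you do go down that route, the commonly suggested sequence is to collect, wait for pending finalizers, and then collect again so that objects released by finalizers are also reclaimed; a minimal sketch:

using System;

class ForcedCollectionSketch
{
    static void ForceFullCollection()
    {
        GC.Collect();                     // collect unreachable objects (queues finalizable ones)
        GC.WaitForPendingFinalizers();    // let their finalizers run
        GC.Collect();                     // collect the now-finalized objects as well
    }

    static void Main()
    {
        ForceFullCollection();
        Console.WriteLine("Managed heap (approx.): " + GC.GetTotalMemory(false) + " bytes");
    }
}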
Additional Resources
GC.Collect Method ()
GC.WaitForPendingFinalizers Method ()
Also take a look at this question if you haven't:
Can I set LARGEADDRESSAWARE from within Visual Studio?
I finally found the problem.
My C# project has some code that monitors its own memory usage: when it occupies more than 1GB, it throws an OutOfMemoryException itself. After I commented out that code, the program can reach about 3GB of memory usage.
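For anyone hitting something similar, such a guard might look roughly like this (a hypothetical reconstruction for illustration, not the actual project code) - a self-imposed limit that throws long before the real 32-bit/LARGEADDRESSAWARE ceiling is reached:

using System;
using System.Diagnostics;

class SelfImposedLimitSketch
{
    // Hypothetical guard: bail out once the process uses more than 1 GB.
    const long SelfImposedLimitBytes = 1L * 1024 * 1024 * 1024;

    static void CheckMemoryUsage()
    {
        long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;
        if (privateBytes > SelfImposedLimitBytes)
        {
            // This throw looks exactly like a genuine OOM to the caller,
            // even though the address space is nowhere near exhausted.
            throw new OutOfMemoryException("Self-imposed 1 GB limit exceeded");
        }
    }

    static void Main()
    {
        CheckMemoryUsage();
        Console.WriteLine("Still under the self-imposed limit.");
    }
}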

Is it conceivable that the Virtual Size reported by Process Explorer could cause OutOfMemory exceptions?

I am working to diagnose a series of OutOfMemoryException problems within an application of ours. This is an internal 32-bit (x86) OWIN-hosted WebAPI that runs within a console application and talks to a series of hardware components in parallel. For that reason it's creating around 20 instances of a library, and the sharp increase in "virtual size" memory matches when those instances are created.
From the output of Process Explorer, and dotMemory, it does not appear that we're allocating that much actual memory within this application:
From reading many, many SO answers I think I understand that our problem is either fragmentation within the Gen 0, Gen 1, Gen 2 and LOH heaps, or that we're bumping into the 2GB addressable memory limit for a 32-bit process running on Windows 7. This application works in batches: it collects a bunch of data from hardware devices, creates collections in memory to aggregate that data into a single object, and then saves it to be retrieved by a client app. This activity causes the spikes in the dotMemory visual, but these data structures are not enormous, which I think the dotMemory chart shows.
Looking at the heaps has shown they rarely grow beyond 10-15MB in size, and I don't see much evidence that the LOH is growing too large or being severely fragmented. I'm really struggling with how to proceed to better understand what's happening here.
So my question is two-fold:
Is it conceivable that we could be hitting that 2GB limit for virtual memory, and that's a cause for these memory exceptions?
If that is a possible cause then am I right in thinking a 64-bit build would get around that?
We are exploring moving to a 64-bit build, but that would require updating some low-level libraries we use to also be 64-bit. It's certainly an option we will explore eventually (if not sooner), but we're trying to understand this situation better before investing the time required.
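One cheap sanity check before investing in the 64-bit work (a sketch; how and where you log these figures is up to you) is to periodically record how much of the 32-bit address space the process has actually reserved and committed:

using System;
using System.Diagnostics;

class AddressSpaceSnapshot
{
    static void LogMemoryFigures()
    {
        Process p = Process.GetCurrentProcess();

        // VirtualMemorySize64 ~ "Virtual Size" in Process Explorer: total reserved address space.
        // For a plain 32-bit process this is what runs into the 2 GB wall.
        Console.WriteLine("Virtual size : {0:N0} MB", p.VirtualMemorySize64 / (1024 * 1024));

        // PrivateMemorySize64 ~ committed private bytes, i.e. memory actually backed by RAM/pagefile.
        Console.WriteLine("Private bytes: {0:N0} MB", p.PrivateMemorySize64 / (1024 * 1024));

        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
    }

    static void Main()
    {
        LogMemoryFigures();
    }
}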
Update after setting the LARGEADDRESSAWARE flag
Based on a recommendation I set that flag on the binary and, interestingly, saw the virtual size jump immediately to nearly 3GB. I don't know if I should be alarmed by that?!
I will monitor the application with this configuration for the next several hours.
In my case the advice provided by @ThomasWeller was indeed correct, and enabling the "large address aware" flag has allowed this application to run for several days without throwing memory exceptions.

When I use Socket.IO, why do I get the error "An unhandled exception of type 'System.OutOfMemoryException'"?

I wrote a program that takes a screenshot and sends it to the server. Each time, I take a screenshot, convert it to base64, and then send it using Socket.IO (using SocketIOClient.dll).
Dictionary<string, string> image = new Dictionary<string, string>();
image.Add("image", "");

private void windowMonitorTimer_Tick(object sender, EventArgs e)
{
    image["image"] = windowMonitorManager.MonitorScreen();   // base64 screenshot
    client.getSocket().Emit("Shot", image);                  // send it via Socket.IO
}
windowMonitorManager.MonitorScreen() returns a base64 string. If I do not call client.getSocket().Emit("Shot", image), the program runs correctly, but if I add this line, the program stops after about 2 seconds (sending nearly 80 times) and gives me the error:
An unhandled exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll
If I do not send a string this long, just a short string like "hello", it sends about 1,600 times and then the same problem occurs.
Does anybody know how to debug this problem?
EDIT:
I tried to test socket.Emit() and found that it has a limit.
For example, when I send a string of 10,000,000 characters, the out-of-memory problem occurs after 88 sends.
If I send a string of 5,000,000 characters, the same problem occurs after 170 sends.
Out of memory is mostly an exception thrown when the process consumes much more memory than the system allows by default (like 2GB on a 32-bit system). On a 64-bit system the limit is higher, but it is still bound by a practical limit rather than the theoretical 2^64; it differs from OS to OS and also depends on the underlying RAM, but it is large enough for a single process. This situation can happen for multiple reasons:
Memory leak (the most prominent), mostly associated with unmanaged code calls: if there is a handle or memory allocation that is never de-allocated or freed, then over time it leads to huge memory allocation for the process and thus the exception, once the system cannot map any more.
Managed code can leak too: when objects get continuously created and are never de-referenced, i.e. they remain reachable in the GC context, you can end up in this scenario. I have done this in my own code :)
This is not a null reference or a corruption, so taking a direct stack trace will be of little use here, simply because you may get a different stack every time. It will just be the stack of whichever thread was running when the exception happened, which is mostly misleading, so do not go down that path. The method executing on a thread at that moment did not necessarily cause the leak, and it will be different for different threads.
How to debug:
A number of simple steps can be taken to narrow it down, but before anything else ensure that you have a debug build with valid PDB files for all the binaries loaded in the process.
To know whether it is a leak, monitor the process's "Working Set" and "Virtual Bytes" counters, either via Task Manager or preferably via perfmon, since the latter is much more accurate and also provides a visual graph.
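If you would rather capture the same figures from code than watch perfmon, a rough equivalent is sketched below (the instance lookup assumes only one process with that name is running; with several, perfmon appends suffixes like "#1"):

using System;
using System.Diagnostics;

class CounterSketch
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;

        // The same counters perfmon exposes under the "Process" category.
        var workingSet   = new PerformanceCounter("Process", "Working Set", instance);
        var virtualBytes = new PerformanceCounter("Process", "Virtual Bytes", instance);

        Console.WriteLine("Working Set  : {0:N0} MB", workingSet.NextValue() / (1024 * 1024));
        Console.WriteLine("Virtual Bytes: {0:N0} MB", virtualBytes.NextValue() / (1024 * 1024));
    }
}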
Now a leak is a leak, so steps like increasing the address space on a 32-bit system from the default 2GB to 3GB can only help for a while, but perfmon will tell you whether there is a stabilization point. In some cases a process needs, say, 2.2GB of memory, so the default 2GB is not enough, but the /3GB switch in boot.ini (with /USERVA for fine-tuning) will help avoid the exception.
If you are using a 64-bit system then that is less of a worry, but ensure that your binary is compiled for x64 or Any CPU; a 32-bit binary will run as a WOW64 process and will be subject to the 32-bit limitation even on a 64-bit system.
Also try the small Handles utility from Sysinternals: running it multiple times in a batch for a process will reveal a leaking handle, in terms of the number of handles (file, mutex, etc.) allocated.
Once you have confirmed a genuine leak, and not a settings, configuration, or system issue, the memory profilers come in. Among the free tools you can get a lot of information from WinDbg, UMDH and LeakDiag; they in fact point you to the exact stack trace that is leaking. UMDH and LeakDiag are both very good tools that tell you the leaking function. LeakDiag is more exhaustive than UMDH, but for the runtime heap UMDH is good enough.
Professional memory profilers like:
dotMemory - http://www.jetbrains.com/dotmemory/
ANTS - Red Gate - http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/
are also very good. I personally find dotMemory much more helpful; it quickly points to the leaking function or type with little effort, provided you have the correct symbol files. Both have free download versions available.
Mostly, resolving an out-of-memory exception is a gradual and iterative process, because the culprit can be hidden deep inside the code, leaking a chunk of memory on every execution and eventually bringing the whole process to its knees. Let me know if you need help using a specific tool, and we can see what more can be done to debug the issue further. Happy debugging!
Sounds to me like there's a bug in the SocketIOClient DLL. Without the DLL I cannot reproduce the problem, but tracking it down sounds easy.
Since C# is a garbage-collected language, the only way to get an Out Of Memory (OOM) exception is if you have too much memory allocated that can still be traced back to a 'root object'. There are several kinds of root objects:
Static variables (or [ThreadStatic] ones)
Method variables (locals / arguments) in your stack trace
All objects that you reference from these two (directly or indirectly) contribute to your memory pressure. If you allocate memory that's not available, the GC will first attempt to free memory before throwing an OOM; if not enough memory is free after the GC completes, an OOM is thrown.
One obvious reason why this might happen is that you're running a 32-bit process, which is the default in Visual Studio nowadays. This can be fixed in the project properties. However, most processes don't need more than 2GB of memory, so it's more likely that you 'leaked' memory somewhere. So let's break it down:
Locals or arguments that leak
Ways to solve these kind of OOM:
Open Visual Studio and press Ctrl+D, E (or go to Debug -> Exceptions)
Click OutOfMemoryException and check the 'Thrown' box
Run the program.
When the out of memory (OOM) exception hits, browse through the nodes in the stack trace (or 'Parallel Stacks') and check the size of the variables. In most cases it's a single buffer or collection that causes the problem. For example, in your case a buffer might be filling up with socket data that is never emptied.
Static variables that leak
Other cases of OOM are usually buffers that gradually fill up and are still referenced from the main object tree. The easiest way to find these is to use a memory profiler, like Red Gate's ANTS memory profiler. Run your program in the profiler, take some snapshots and check for 'large instances'.
In general, I usually try to avoid static variables altogether, which solves a whole world of problems.
Oh and in this case...
Perhaps it's worth noting that there are a lot of good socket libraries out there... even though I don't know the specifics of SocketIOClient, you might want to consider using a widely supported, proven socket library like WCF/SOAP or Protobuf. There's a lot of material online on how to use these in just about any scenario, so if the problem is in SocketIOClient you might want to consider that...
I would guess you are running your timer too frequently and eating memory to an unsustainable point. Did you try lowering its frequency?
If lowering the frequency doesn't help, your code or the SocketIOClient.dll library may be leaking memory. I suggest that you first review the usage of that library to verify that you are not leaving resources open.
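As a concrete illustration of the first suggestion, here is a sketch based on the timer handler from the question (the sendInProgress flag is new and purely illustrative; the image, windowMonitorManager and client objects are the ones defined in the original program, and if Emit queues data internally this alone will not cap memory, but it does prevent overlapping ticks from piling up screenshots faster than they are handled):

private bool sendInProgress;

private void windowMonitorTimer_Tick(object sender, EventArgs e)
{
    if (sendInProgress)
        return;                       // skip this tick instead of queuing another large payload

    sendInProgress = true;
    try
    {
        image["image"] = windowMonitorManager.MonitorScreen();
        client.getSocket().Emit("Shot", image);
    }
    finally
    {
        sendInProgress = false;
    }
}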

System.StackOverflowException when moving from an XP machine to a Windows 7 machine

I have a C# application which I used to run on an XP machine.
I recently switched to a Windows 7 machine.
I get the following error message while in the debugger: "System.StackOverflowException". I still have the XP machine and don't have the problem there.
It's overflowing in the middle of a recursive algorithm.
Is anyone familiar with this problem? Is it the OS that has to do with this, or the machine itself?
Many thanks for your help,
Michael
It would be helpful to know just how deep the recursion goes in XP before reaching the base case, and where it errors in Win7.
Theoretically, a Windows 7 process should have more available stack space than a WinXP process; at the very least, they should be the same. However, there are other factors at play here. Check out this blog post: http://blogs.technet.com/b/markrussinovich/archive/2009/07/08/3261309.aspx
In short, the limiting factor is usually "resident available memory"; this is physical RAM (not page file space) that is available for data that must be kept there and can't be swapped to the page file. A lot of things must be kept "resident" on the average computer and cannot be swapped out to the page file; most important is that anything that must be run in "kernel mode" (requiring direct access to the core system) must be kept in RAM to avoid page faults, even when there are no active threads for that process at the time.
Windows 7 has more of these "kernel-mode" processes. For instance, Windows Aero (which wasn't part of WinXP) uses your graphics card to accelerate rendering of the desktop, and so it must run in kernel mode. The Windows 7 kernel itself is larger, because it includes additional security and additional built-in hardware support. Windows 7 also has additional background processes etc that run in kernel mode that weren't in WinXP.
So, all other things being equal (including RAM), a Windows 7 machine will actually have less resident memory available to commit to your recursive algorithm, meaning that the algorithm will not be able to recurse deeply enough to reach the base case before a call triggers a StackOverflowException due to Windows not having enough resident memory to meet the "commit" required for the new call.
In addition, Windows 7 arranges things in memory differently. Older Windows versions (XP and older) reserved a memory space for each new process in roughly sequential fashion; the N+1th process (or thread) is given a memory address one block after the last one reserved for the Nth process/thread. Beginning with Windows Vista, memory was allocated in a more "random" fashion; Windows will choose a location in memory that may or may not be adjacent to any other reserved block (it's only guaranteed not to be a part of any other reserved block). This is a security feature designed to confuse malware and prevent it from successfully snooping around in other processes' memory. However, the less space-efficient allocation scheme means that the OS will more quickly run out of 1MB blocks of contiguous RAM to allocate to each new thread. At that point, it begins allocating the gaps. So, depending on your Windows 7 machine's specific memory usage footprint, the thread for your recursive function may request the usual 1MB of stack space, and be given a pointer by the OS which actually only has 128K of contiguous space. Your program won't be able to tell the difference, until it can't actually commit all the space it thought it had reserved. This can produce Heisenbugs where it'll work one time but fail the next because of non-deterministic differences in the exact memory space Windows reserves for the thread each time.
The answer to all of this is "more RAM". The amount needed by the core kernel-mode processes is relatively static, so every GB of additional RAM you can add is a GB that is available solely for user program processes and threads.
How recursive is recursive?
Anything deeper than about ten or so could be risky.
If you're exhausting the stack and you're sure it's not a bug, you could manage your own stack...
For instance:
void Process(SomeType foo)
{
    DoWork(foo); // work on foo
    foreach (var child in foo.Children)
    {
        Process(child);
    }
}
could become
void Process(SomeType foo)
{
    Stack<SomeType> bar = new Stack<SomeType>();
    bar.Push(foo);
    while (bar.Any())
    {
        var item = bar.Pop();
        DoWork(item); // work on item
        foreach (var child in item.Children)
        {
            bar.Push(child);
        }
    }
}
thus eliminating any CLR call-stack problems.
Of course, this won't fix an unbounded recursion.
I don't believe this has anything to do with the physical RAM on your PC. I suspect the reason you didn't happen to see it on XP is simply that Windows 7 probably has a (slightly?) different version of .Net.
Clearly, you need to somehow limit the depth of your recursion (or substitute a non-recursive loop).
But you can potentially configure your .Net stack(s). Please look at these links:
http://www.atalasoft.com/cs/blogs/rickm/archive/2008/04/22/increasing-the-size-of-your-stack-net-memory-management-part-3.aspx
http://msdn.microsoft.com/en-us/library/5cykbwz4.aspx
How does the .NET IL .maxstack directive work?
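For what it's worth, one way to give a recursive computation more stack from managed code (a sketch; DeepRecursion stands in for your real algorithm and the 64 MB figure is arbitrary) is to run it on a thread created with an explicit maximum stack size, since the Thread constructor accepts one:

using System;
using System.Threading;

class BigStackSketch
{
    static long Depth;

    static void DeepRecursion(int n)   // stand-in for the real recursive algorithm
    {
        if (n >= 100000) { Depth = n; return; }
        DeepRecursion(n + 1);
        Depth = Math.Max(Depth, n);    // keeps the recursive call out of tail position
    }

    static void Main()
    {
        // Thread(ThreadStart, maxStackSize): 64 MB of stack instead of the ~1 MB default.
        var worker = new Thread(() => DeepRecursion(0), 64 * 1024 * 1024);
        worker.Start();
        worker.Join();
        Console.WriteLine("Reached depth " + Depth);
    }
}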

When and how is the .NET managed heap getting swapped?

My small stress test, which allocates arrays of random length (100-200MB each) in a loop, shows different behaviour on a 64-bit Win7 machine and on a 32-bit XP machine (in a VM). Both systems at first allocate as many arrays as will fit into the LOH. Then the LOH gets bigger and bigger until the available virtual address space is filled up. Expected behaviour so far. But then, on further requests, they behave differently:
While on Win7 an OutOfMemoryException (OOM) is thrown, on XP it seems the heap keeps growing and is even swapped to disk - at least no OOM is thrown. (I don't know whether this may have to do with XP running in a virtual machine.)
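For reference, a minimal version of the kind of stress loop described above (a sketch, not the original test code) looks like this:

using System;
using System.Collections.Generic;

class LohStressSketch
{
    static void Main()
    {
        var rng = new Random();
        var keepAlive = new List<byte[]>();   // roots the arrays so the GC cannot reclaim them

        try
        {
            while (true)
            {
                int size = rng.Next(100, 201) * 1024 * 1024;   // 100..200 MB, lands on the LOH
                keepAlive.Add(new byte[size]);
                Console.WriteLine("Allocated {0} arrays, last one {1:N0} MB",
                                  keepAlive.Count, size / (1024 * 1024));
            }
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("OOM after {0} arrays", keepAlive.Count);
        }
    }
}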
Question:
How does the runtime (or the OS?) decide, for a managed memory allocation request that is too large to be satisfied, whether an OOM is thrown or whether the large object heap keeps growing - eventually even getting swapped to disk?
And if it is swapped, when does an OOM occur then?
IMO this question is important for any production environment that potentially deals with larger datasets. Somehow it feels "safer" to know that in such situations the system would rather slow down dramatically (by swapping) than simply throw an OOM. At least it should be deterministic in some way, right?
EDIT: the app is a 32-bit application and therefore runs in 32-bit mode on Win7.
The normal rules apply; a managed process is not treated differently by the Windows memory manager. The ultimate source for chunks of memory is the Windows memory manager. If it cannot find a hole in the virtual address space to fit the requested memory allocation, then the VirtualAlloc() call fails and the CLR generates an OOM.
Same for swapping behaviour: if pages in RAM are needed to map pages of other processes, or even other pages of the same process, they'll get swapped out. This is not otherwise associated with OOM.
You cannot assume it will work exactly the same on XP as it does on Win7 x64. Getting an OOM on x64 when you build your program targeting AnyCPU is quite unusual; a 64-bit operating system has a very large virtual address space, with the upper limit set by the maximum size of the paging file. A 32-bit program will run in the WOW64 emulation layer; it can have a 4GB address space if you set the LARGEADDRESSAWARE option bit with Editbin.exe.
You can use Sysinternals' VMMap utility to see how the address space of your process is carved up.
