As far as I know, a single .NET application can allocate most of the available memory, and the GC will release it at some point. I never had to care much about the details; it just works.
But what happens if a native application is started while most or all of the memory is in use by the .NET application? Will the GC respect this and free memory first? Or will Windows "take care" of it and swap the memory of the .NET application out to the swap file?
I have one PC with a slow HDD where my WPF application (MVVM, bitmaps, database, pretty memory-intensive) occupies 200-2000 MB (up to 80%) of RAM, and I get reports that the PC becomes slow when running Office, antivirus, etc.
In Photoshop, for example, there is a setting to limit the amount of RAM used. Now I am wondering whether such a thing makes sense in my WPF application.
Is uncontrolled GC memory allocation a problem or not? Should I limit the amount of memory used by my application?
The problem described does not look related to .NET memory management or the GC, assuming the application only holds data that is actually in use in memory.
A .NET app is no different from any other user-mode app, so the OS treats them all the same way: infrequently used memory pages get moved to the swap file.
If the application occupies 80% of RAM and works with that memory intensively while competing with other applications, the whole system will generate a lot of page faults, causing heavy traffic between the swap file and memory. This leads to severe performance degradation, especially with a slow HDD.
The .NET part in this game is simply to clean the data that is no longer in use out of the application's memory from time to time. If the app genuinely needs a large amount of data to run and adding more RAM is not an option, then redesigning the application to limit how much data is loaded at once is an approach worth considering.
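If you do want a Photoshop-style cap, one option worth sketching (assuming .NET Framework, a reference to System.Runtime.Caching, and that the bitmaps/data can live in a cache rather than in long-lived fields; the names and limits below are only illustrative) is System.Runtime.Caching.MemoryCache, which can be bounded by size or by a percentage of physical memory and evicts entries on its own:

```csharp
using System;
using System.Collections.Specialized;
using System.Runtime.Caching;

class CacheLimitSketch
{
    static void Main()
    {
        // Cap the cache at 500 MB or 25% of physical RAM, whichever is reached first;
        // MemoryCache evicts entries on its own when a limit is approached.
        var config = new NameValueCollection
        {
            { "CacheMemoryLimitMegabytes", "500" },
            { "PhysicalMemoryLimitPercentage", "25" },
            { "PollingInterval", "00:00:30" } // how often the limits are checked
        };

        using (var bitmapCache = new MemoryCache("BitmapCache", config))
        {
            // Hypothetical entry: a decoded bitmap keyed by its file path.
            bitmapCache.Set(@"C:\images\photo1.png", new byte[10 * 1024 * 1024],
                new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(10) });
        }
    }
}
```

Whether this helps depends on how much of the 200-2000 MB is genuinely cacheable data rather than data the open views currently need.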
Related
I am working to diagnose a series of OutOfMemoryException problems within an application of ours. This is an internal 32-bit (x86) OWIN-hosted WebAPI that runs within a console application and talks to a series of hardware components in parallel. For that reason it's creating around 20 instances of a library, and the sharp increase in "virtual size" memory matches when those instances are created.
From the output of Process Explorer and dotMemory, it does not appear that we're allocating that much actual memory within this application.
From reading many, many SO answers I think I understand that our problem is either from fragmentation within the G0, G1, G2 & LOH heaps, or we're possibly bumping into the 2GB addressable memory limit for a 32-bit process running on Windows 7. This application works in batches where it collects a bunch of data from hardware devices, creates collections in memory to aggregate that data into a single object, and then saves it to be retrieved by a client app. This activity is the cause of the spikes in the dotMemory visual, but these data structures are not enormous, which I think the dotMemory chart shows.
Looking at the heaps has shown they rarely grow beyond 10-15MB in size, and I don't see much evidence that the LOH is growing too large or being severely fragmented. I'm really struggling with how to proceed to better understand what's happening here.
So my question is two-fold:
Is it conceivable that we could be hitting that 2GB limit for virtual memory, and that's a cause for these memory exceptions?
If that is a possible cause then am I right in thinking a 64-bit build would get around that?
We are exploring moving to a 64-bit build, but that would require updating some low-level libraries we use to also be 64-bit. It's certainly an option we will explore eventually (if not sooner), but we're trying to understand this situation better before investing the time required.
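To sanity-check whether we're anywhere near that ceiling, I'm considering logging the virtual size alongside the managed heap from inside the process; a rough sketch (the warning threshold is just an illustrative number):

```csharp
using System;
using System.Diagnostics;

class AddressSpaceCheck
{
    static void Main()
    {
        var proc = Process.GetCurrentProcess();

        long virtualBytes = proc.VirtualMemorySize64;  // reserved + committed address space
        long privateBytes = proc.PrivateMemorySize64;  // committed private memory
        long managedBytes = GC.GetTotalMemory(false);  // managed heap only

        Console.WriteLine("Virtual size : {0} MB", virtualBytes / (1024 * 1024));
        Console.WriteLine("Private bytes: {0} MB", privateBytes / (1024 * 1024));
        Console.WriteLine("Managed heap : {0} MB", managedBytes / (1024 * 1024));

        // A 32-bit process has ~2 GB of usable address space (~3-4 GB with
        // /LARGEADDRESSAWARE), so a virtual size near that range means allocations
        // can fail even though the managed heaps look small.
        if (!Environment.Is64BitProcess && virtualBytes > 1800L * 1024 * 1024)
            Console.WriteLine("Warning: close to the 32-bit address-space limit.");
    }
}
```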
Update after setting the LARGEADDRESSAWARE flag
Based on a recommendation, I set that flag on the binary and, interestingly, saw the virtual size jump immediately to nearly 3 GB. I don't know if I should be alarmed by that?!
I will monitor the application with this configuration for the next several hours.
In my case the advice provided by @ThomasWeller was indeed correct, and enabling the "large address aware" flag has allowed this application to run for several days without throwing memory exceptions.
We are designing an enterprise application which caches a lot of data from the back end. Users are allowed to open an arbitrary number of app windows, and each loads its own data and caches it. To manage memory consumption somehow and prevent overall OS performance from degrading, we decided to write a cache manager that automatically monitors the app's memory footprint and removes data from the cache when needed.
The problem is that we have difficulty identifying when it is time to free up memory. Currently we use a very simple approach: we just start throwing away stuff from the cache when the app's memory usage exceeds 80% of physical memory.
Are there any (alternative?) established practices for dealing with such kind of problem?
This is basically OK. There is no really good strategy. If there are multiple competing applications, this can lead to cache competition and false evictions.
If you pick the threshold too low you waste cache space. If it's too high nothing else might fit into memory including the file cache, DLLs, ...
What do you mean by "available physical memory"? Do you mean installed memory or memory that's free? How can an app use 80% of free memory? I'm unclear on the definition that you are using.
SQL Server uses memory until the OS signals that it's low on memory (I believe that happens when 95% of "something" is being used).
You certainly do not want to use the GC to free memory. It will routinely kill your entire cache.
Maybe you can move the cache contents to disk entirely? Or, you could share the cache between .NET processes by having a hidden cache-server process that can be queried by the app processes.
I want to stress that if your app consumes 99% of installed RAM (as an example) performance will be very bad because the file cache is almost empty. This means that even DLLs and .NET NGEN'ed code will be paged out and in frequently.
Maybe a better strategy is to assume that 1 GB will be needed to appropriately cache the OS and app files. So you can consume memory until only 10% of the installed RAM minus 1 GB remains free.
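As a rough illustration of that heuristic (an assumption-laden sketch: the 1 GB reserve and the 10% floor are the figures above, the installed-RAM value is hard-coded for brevity, and free memory is read from the standard Memory\Available MBytes performance counter):

```csharp
using System;
using System.Diagnostics;

class EvictionPolicySketch
{
    static readonly PerformanceCounter AvailableMb =
        new PerformanceCounter("Memory", "Available MBytes");

    // Evict while free physical memory is below 10% of (installed RAM - 1 GB).
    static bool ShouldEvict(long installedRamMb)
    {
        long reservedMb = 1024;                            // ~1 GB left for the OS, file cache, DLLs
        long floorMb = (installedRamMb - reservedMb) / 10; // keep at least 10% of the rest free
        return AvailableMb.NextValue() < floorMb;
    }

    static void Main()
    {
        long installedRamMb = 8 * 1024; // hypothetical 8 GB box; read the real value via WMI or GlobalMemoryStatusEx
        Console.WriteLine(ShouldEvict(installedRamMb)
            ? "Memory is tight - trim the cache."
            : "Enough headroom - keep caching.");
    }
}
```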
I have a backend application (windows service) built on top of .NET Framework 4.5 (C#). The application runs on Windows Server 2008 R2 server, with 64GB of memory.
Due to dependencies I had, I used to compile and run this application as a 32-bit process (compiling it as x86) and use the /LARGEADDRESSAWARE flag to let the application use more than 2 GB of memory in user space. Using this configuration, the average memory consumption (according to the "memory (private working set)" column in the task manager) was about 300-400 MB.
The reason I needed the LARGEADDRESSAWARE flag, and the reason I changed it to 64-bit, is that although 300-400 MB is the average, once in a while this app does work that involves loading a lot of data into memory (and it's much easier to develop and manage this kind of work when you're not very limited memory-wise).
Recently (after removing those x86 native dependencies), I changed the application compilation to "Any CPU", so now, on the production server, it runs as a 64-bit process. Since I made this change, the average memory consumption (according to the task manager) has reached new levels: 3-4 GB, with no other change that could explain this new behavior.
Here are some additional facts about the current state:
According to the "# Bytes in all Heaps" counter, the total amount of memory is about 600 MB.
When debugging the process with WinDbg+SOS, !dumpheap -stat showed that there are about 250-300 MB of free blocks, but all the other objects together accounted for much less than the total amount of memory the process used.
According to the GC performance counters, there are Gen0 collections on a regular basis. In fact, the "% Time in GC" counter indicates that on average 10-20% of the time is spent in GC (which makes sense given the nature of the application: a lot of allocations of information and data structures that are in use for a short time).
I'm using Server GC in this app.
There is no memory problem on the server. It uses about 50-60% of the available memory (64GB).
My questions:
Why is there such a great difference between the memory allocated to the process (according to the task manager) and the actual size of the CLR heap (there is no unmanaged code in the process that could explain this)?
Why does the 64-bit process take more memory than the same process running as a 32-bit process? Even considering that pointers take twice the space, there's a big difference.
Can I do something to lower the memory consumption, or to get a better understanding of the issue?
Thanks!
There are a few things to consider:
1) You mentioned you're using server GC mode. In server GC mode, the CLR creates one heap for every CPU core on the machine, which is more efficient for multi-threaded processing in server processes, e.g. ASP.NET processes. Each heap has two segments: one for small objects and one for large objects. Each segment starts with 4 GB of reserved memory. Basically, server GC mode trades higher memory usage for overall system performance.
2) Pointers are bigger on 64-bit, of course.
3) Foreground Gen2 GCs become very expensive in server GC mode because the heaps are much larger, so the CLR tries hard to reduce the number of foreground Gen2 GCs, sometimes using background Gen2 GCs instead.
4) Depending on usage, fragmentation can become a real issue. I've seen heaps with 98% fragmentation (98% of the heap is free blocks).
To really solve your problem, you need to get an ETW trace + a memory dump, and then use tools like PerfView for detailed analysis.
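As a cheap first step before the ETW trace, you can confirm at runtime which GC mode is actually in effect and how collection activity lines up with the counters you mentioned (a small read-only sketch; it changes nothing):

```csharp
using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // Server GC creates a heap per logical core, which pushes reserved/committed
        // memory well beyond what "# Bytes in all Heaps" reports.
        Console.WriteLine("Server GC       : {0}", GCSettings.IsServerGC);
        Console.WriteLine("Latency mode    : {0}", GCSettings.LatencyMode);
        Console.WriteLine("Logical cores   : {0}", Environment.ProcessorCount);

        Console.WriteLine("Gen0 collections: {0}", GC.CollectionCount(0));
        Console.WriteLine("Gen1 collections: {0}", GC.CollectionCount(1));
        Console.WriteLine("Gen2 collections: {0}", GC.CollectionCount(2));
        Console.WriteLine("Managed heap    : {0} MB", GC.GetTotalMemory(false) / (1024 * 1024));
    }
}
```

If the per-core server heaps turn out to be the main contributor, trying workstation GC (gcServer disabled in the app config) is one experiment that can shrink the reservations, at the cost of GC throughput.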
A 64-bit process will naturally use 64-bit pointers, effectively doubling the memory usage of every reference. Certain platform-dependent variables such as IntPtr will also take up double the space.
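The pointer-size difference is easy to see for yourself (a trivial check, nothing specific to your application):

```csharp
using System;

class PointerSizeCheck
{
    static void Main()
    {
        // Prints 4 in a 32-bit process and 8 in a 64-bit process;
        // every object reference and IntPtr field doubles accordingly.
        Console.WriteLine("IntPtr.Size    : {0} bytes", IntPtr.Size);
        Console.WriteLine("64-bit process : {0}", Environment.Is64BitProcess);
    }
}
```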
The first and best thing you can do is to run a memory profiler to see where exactly the extra memory footprint is coming from. Anything else is speculative!
I wrote a hello-world program (a Windows application without any UI) in C#. The release-build executable doesn't do anything but Thread.Sleep(50000) // 50 seconds.
I opened a Sysinternals tool (a profiler like Task Manager). This executable ate 7 MB of memory (private bytes)!!
Can anybody explain what is happening and how to make the memory usage smaller?
P.S. I also tried to use NGEN to pre-compile the .exe but still got the same memory usage.
Thanks a lot
C# (like other JIT-compiled/interpreted languages) usually ends up doing a lot of things for you automatically in the background. While what you've written is simple, there's a lot of stuff going on behind the scenes to support the JIT nature of the application.
That 7 MB of memory is relatively small given that 2 GB of RAM is fairly commonplace these days. And it probably won't go up much more unless you do something unusual or allocate lots of arrays and data structures.
If it's a Hello World based on the C# WindowsApplication project type, and there's a Main in Program.cs calling Application.Run on a Windows Form, then not only is there a lot of JIT overhead, but there's a lot of Windows Forms overhead too.
At the end of the day, I'm sure everything is dandy.
.Net apps have a lot of basic overhead, but your memory increase should be relatively small after that point.
For example: one of my apps consumes 10 MB of memory on load, but only 40 MB after loading 60k rows from a database, each row containing multiple strings and many values.
A small fixed upfront cost isn't that bad on modern computers. Scalability is the primary issue nowadays, and .NET scales fairly well for a managed platform.
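If you want to separate that fixed overhead from what your own data adds, a before/after measurement along these lines works (a hedged sketch; the string array is just a stand-in for a real data load):

```csharp
using System;

class OverheadVsData
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true);  // managed heap before loading data

        // Stand-in for loading 60k rows: allocate a bunch of strings.
        var rows = new string[60000];
        for (int i = 0; i < rows.Length; i++)
            rows[i] = "row " + i;

        long after = GC.GetTotalMemory(true);   // managed heap after loading data

        Console.WriteLine("Data added roughly {0} KB to the managed heap", (after - before) / 1024);
        Console.WriteLine("Working set: {0} MB (includes the fixed runtime overhead)",
            Environment.WorkingSet / (1024 * 1024));
        GC.KeepAlive(rows);
    }
}
```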
I was hoping someone could explain why my application uses varying amounts of RAM when loaded. I'm speaking about a compiled version that runs the exe directly. It's a pretty basic application and there are no conditional branches in its startup. Yet every time I start it up, the RAM amount varies from 6 MB to 16 MB.
I know it's on the small end of usage anyway, but I'm curious why this happens.
Edit: to give a bit more clarification on what the app actually does:
It is a WinForms project.
It connects to a database using SqlClient to retrieve a list of servers.
Based on that list, a series of buttons is created to start and stop a service on those servers.
It uses a System.Timers.Timer to audit the status of the services on those servers every 20 seconds.
At this point the application sits there and waits for user input via one of the button clicks to start/stop the service.
The trick here is that the amount of RAM reported by Task Manager is not the amount of RAM used by your application. Rather, it is the amount of RAM reserved for use by your application.
Remember that with managed frameworks like .Net, you don't request or release memory directly. Rather, a garbage collector manages the memory for you. The amount of memory reserved for your application at a given time can vary and depends on a lot of different factors, including memory pressure created at the time by other programs.
Think of it this way: if you need 10 MB of RAM for your app, is it faster to request and return it to the operating system 1 MB at a time over 10 requests/releases, or to reserve the block at once with one request/release? Now extend that to a scenario where you don't know exactly how much RAM you'll need, only that it's somewhere in the neighborhood of 10 MB. Additionally, your computer has 1 GB sitting there unused. Of course the best thing to do is take a good-sized chunk of that available RAM. Even 20 or 30 MB wouldn't be unreasonable relative to the RAM that's sitting there unused, because unused RAM is wasted performance.
If your system later starts to feel memory pressure, then .NET can easily return some RAM to the system. This is one of the ways managed languages can sometimes give better performance than languages like C++ with traditional memory management: a garbage collector can more easily take the health of the entire system into account when allocating memory.
What are you using to determine how much memory is being "used"? Even with regular applications, Windows will aggressively allocate unused memory in advance; with .NET applications it's even more complicated to tell how much memory is actually being used and how much Windows is just tacking on so that it will be available instantly when needed. If another application actually asks for memory, this reserved memory will be repurposed.
One way to check is to minimize the application (at least on XP). If you are looking at the memory use in something like Task Manager, you'll notice it drops off right away, eliminating the seemingly "random" amount allocated.
It may be related to the JITter: after the first load, the JITter has already created a compiled version and doesn't need to run again. Other than that, you would have to give us some more details about the app and what kind of memory you are referring to.
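The same working-set trim that minimizing triggers can also be requested programmatically (a hedged sketch using P/Invoke; note it only pages the working set out, it does not free any managed memory):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class WorkingSetTrim
{
    // Passing (-1, -1) asks Windows to trim the process's working set,
    // which is also what happens automatically when a window is minimized.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetProcessWorkingSetSize(IntPtr process, IntPtr minSize, IntPtr maxSize);

    static void Main()
    {
        Console.WriteLine("Working set before: {0} MB", Environment.WorkingSet / (1024 * 1024));

        SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle,
            new IntPtr(-1), new IntPtr(-1));

        Console.WriteLine("Working set after : {0} MB", Environment.WorkingSet / (1024 * 1024));
    }
}
```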