We are designing an enterprise application which caches a lot of data from the back end. Users are allowed to open an arbitrary number of app windows, and each loads and caches its own data. To manage memory consumption and avoid degrading overall OS performance, we decided to write a cache manager that will automatically monitor the app's memory footprint and remove data from the cache when needed.
The problem is that we have difficulty deciding when it is time to free up memory. Currently we use a very simple approach: we start throwing away stuff from the cache when the app's memory usage exceeds 80% of physical memory.
Are there any (alternative?) established practices for dealing with this kind of problem?
This is basically OK; there is no really good strategy. If multiple applications compete this way, it can lead to contention for cache space and spurious evictions.
If you pick the threshold too low, you waste cache space. If it's too high, nothing else might fit into memory, including the file cache, DLLs, and so on.
What do you mean by "available physical memory"? Do you mean installed memory or memory that's free? How can an app use 80% of free memory? I'm unclear on the definition that you are using.
SQL Server uses memory until the OS signals that it's low on memory (I believe that happens when 95% of "something" is being used).
You certainly do not want to use the GC to free memory. It will routinely kill your entire cache.
Maybe you can move the cache contents to disk entirely? Or, you could share the cache between .NET processes by having a hidden cache server process that can be queried by the app processes.
I want to stress that if your app consumes 99% of installed RAM (as an example), performance will be very bad because the file cache is almost empty. This means that even DLLs and .NET NGEN'ed code will be paged out and back in frequently.
Maybe a better strategy is to assume that 1 GB is needed to appropriately cache the OS and app files. Then you can consume memory until only 10% of (the installed RAM minus that 1 GB) remains free.
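As a rough illustration of that kind of threshold policy, here is a minimal C# sketch (the evictSome callback and the polling interval are hypothetical placeholders, not an established API): it watches the standard Windows "Available MBytes" performance counter and asks the cache to shed entries whenever free RAM falls below the floor described above.

using System;
using System.Diagnostics;
using System.Threading;

class MemoryWatcher
{
    // Standard Windows performance counter: free physical memory in MB.
    static readonly PerformanceCounter AvailableMb =
        new PerformanceCounter("Memory", "Available MBytes");

    // evictSome is a hypothetical app-specific callback that drops the
    // coldest cache entries; installedBytes is the machine's physical RAM.
    public static void Watch(Action evictSome, long installedBytes)
    {
        // Keep at least max(10% of installed RAM, 1 GB) free.
        long floorBytes = Math.Max(installedBytes / 10, 1L << 30);

        while (true)
        {
            long availableBytes = (long)AvailableMb.NextValue() << 20; // MB -> bytes
            if (availableBytes < floorBytes)
                evictSome();
            Thread.Sleep(2000); // polling interval chosen arbitrarily
        }
    }
}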
As far as I know, a single .NET application can allocate a lot of the available memory. It will be released by the GC at some point. I never had to care much about the details; it just works.
But what happens if a native application is started while most or all memory is in use by the .NET application? Will the GC respect this and free memory beforehand? Or will Windows "take care" of it and swap the .NET application's memory out to the page file?
I have a single PC with a slow HDD, where my WPF application (MVVM, bitmaps, database, pretty memory-intensive) occupies 200-2000 MB (up to 80%) of RAM, and I get reports that the PC becomes slow when running Office, antivirus, etc.
In Photoshop, for example, there is a setting to limit the amount of RAM used. Now I am wondering whether such a thing makes sense in my WPF application.
Is uncontrolled GC memory allocation a problem or not? Should I limit the amount of memory used by my application?
The problem described does not look related to .NET memory management or the GC, assuming the application only holds its operational (in-use) data in memory.
A .NET app is no different from any other user app, so the OS will treat them all the same way: infrequently used memory blocks get moved to the page file.
If the application occupies 80% of RAM and works with memory intensively while competing with other applications, the whole process will generate a lot of page faults, causing heavy traffic between the page file and memory. This leads to severe performance degradation, especially with a slow HDD.
The .NET part in this game is just to clean data that is no longer in use out of the application's memory from time to time. If a big amount of data is genuinely required for the app to run and adding more RAM is not an option, then redesigning the application to limit the amount of loaded data somehow is worth considering.
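One common shape for such a redesign is a cache with an explicit byte budget. Below is a minimal, hypothetical C# sketch of an LRU cache that evicts the least recently used entries once an approximate size cap is exceeded; the sizeOf delegate is a stand-in for whatever size estimate the application can make.

using System;
using System.Collections.Generic;

class BoundedCache<TKey, TValue>
{
    private class Entry { public TKey Key; public TValue Value; }

    private readonly long _capBytes;
    private readonly Func<TValue, long> _sizeOf;
    private readonly Dictionary<TKey, LinkedListNode<Entry>> _map =
        new Dictionary<TKey, LinkedListNode<Entry>>();
    private readonly LinkedList<Entry> _lru = new LinkedList<Entry>();
    private long _usedBytes;

    public BoundedCache(long capBytes, Func<TValue, long> sizeOf)
    {
        _capBytes = capBytes;
        _sizeOf = sizeOf;
    }

    public void Put(TKey key, TValue value)
    {
        LinkedListNode<Entry> node;
        if (_map.TryGetValue(key, out node))
        {
            _usedBytes -= _sizeOf(node.Value.Value);
            _lru.Remove(node);
            _map.Remove(key);
        }
        node = _lru.AddFirst(new Entry { Key = key, Value = value });
        _map[key] = node;
        _usedBytes += _sizeOf(value);

        // Evict least recently used entries until back under budget.
        while (_usedBytes > _capBytes && _lru.Last != null)
        {
            Entry evicted = _lru.Last.Value;
            _usedBytes -= _sizeOf(evicted.Value);
            _map.Remove(evicted.Key);
            _lru.RemoveLast();
        }
    }

    public bool TryGet(TKey key, out TValue value)
    {
        LinkedListNode<Entry> node;
        if (_map.TryGetValue(key, out node))
        {
            _lru.Remove(node);   // move to front: most recently used
            _lru.AddFirst(node);
            value = node.Value.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }
}

The point is not the exact data structure, but that eviction is driven by an app-defined budget rather than by GC behavior.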
We are testing a 4-process WCF IIS application (x3 release versions) for memory stability (leaks) by simply pinging it every ~1 s, as a load balancer might. It runs fine for more than 12 hours if nothing else is running on the server.
However, if we purposely reduce the total available memory (fixing the page file, reducing physical memory, launching other apps) and push physical memory usage to 97% and leave it there for 5 minutes or more, Windows will often sense the condition and shut down one of the processes. Note that it also fails if the total memory is pushed to 97% (by using a fixed page file).
However, analyzing the memory footprint of one of the surviving processes using the RedGate tool shows this:
Since the requests are just a steady ping, there seems to be no practical reason for .NET to hold on to the 269 MB of free memory while the server is starved. About 50% of the IIS processes appear to be in this state (~1.8 GB).
The app is compiled against .NET 4.0 with gcServer true. IIS gate checking was set to 0% (minFreeMemoryPercentageToActivateService="0"), although we would probably set it to 2% in production.
The server is 2008 R2, ~4 GB physical, 4 GB fixed page file; it was tested with 4.0 and then 4.5.1 (it didn't matter).
There is an answer to a similar question by @atanamir claiming:
".NET will free its heap back to the OS once you're running low on
physical memory."
Anyone know of any reference for that claim? Could it be version specific?
Refs:
.NET Free memory usage (how to prevent overallocation / release memory to the OS)
When is memory, allocated by .NET process, released back to Windows
This is not exactly an answer to the question you asked (why) but it should be a way to achieve what you’re trying to do.
There is something new with .NET Framework 4.5 - source
Once a site is running, its use of the garbage-collector (GC) heap can be a significant factor in its memory consumption. Like any garbage collector, the .NET Framework GC makes tradeoffs between CPU time (frequency and significance of collections) and memory consumption (extra space that is used for new, freed, or free-able objects). For previous releases, we have provided guidance on how to configure the GC to achieve the right balance.
For the .NET Framework 4.5, instead of multiple standalone settings, a workload-defined configuration setting is available that enables all of the previously recommended GC settings as well as new tuning that delivers additional performance for the per-site working set. For example, there is no need to set gcServer, gcConcurrent, etc.
Also here they state:
Tuning GC for high-density Web hosting: GC can impact a site’s memory consumption, but it can be tuned to enable better performance. You can tune or configure GC for better CPU performance (slow down frequency of collections) or lower memory consumption (that is, more frequent collections to free up memory sooner) […] in order to achieve smaller memory consumption (working set) per site
To enable GC memory tuning, add the following setting to the Windows\Microsoft.NET\Framework\v4.0.30319\aspnet.config file and Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet.config file:
<configuration>
  <!-- ... -->
  <runtime>
    <performanceScenario value="HighDensityWebHosting" />
  </runtime>
  <!-- ... -->
</configuration>
Basically, based on the tests I made, this setting consumes more CPU and less memory, and makes the GC more aggressive about cleaning up and releasing memory.
We tested this setting within our infrastructure (IIS 7.5, the new 4.5 framework) and the results were impressive. High memory usage leading to out-of-memory exceptions is no longer an issue.
Hope it helps.
The information given by atanamir can be found on this MSDN page.
Garbage collection occurs when one of the following conditions is true:
The system has low physical memory.
The memory that is used by allocated objects on the managed heap surpasses an acceptable threshold. This threshold is continuously adjusted as the process runs.
The GC.Collect method is called. In almost all cases, you do not have to call this method, because the garbage collector runs continuously. This method is primarily used for unique situations and testing.
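For completeness, the usual pattern when GC.Collect is used for the testing scenarios the docs mention looks like this (a sketch for measurement code only, not something to call in production paths):

using System;

static class GcProbe
{
    // For measurement/testing only (per the docs above): force a full,
    // blocking collection so a before/after comparison is meaningful.
    public static long CollectAndMeasure()
    {
        GC.Collect();                    // collect all generations
        GC.WaitForPendingFinalizers();   // let pending finalizers run
        GC.Collect();                    // collect objects freed by those finalizers
        return GC.GetTotalMemory(false); // bytes currently on the managed heap
    }
}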
I have a backend application (a Windows service) built on top of .NET Framework 4.5 (C#). The application runs on a Windows Server 2008 R2 server with 64 GB of memory.
Due to dependencies I had, I used to compile and run this application as a 32-bit process (compiled as x86) with the /LARGEADDRESSAWARE flag, to let the application use more than 2 GB of memory in user space. Using this configuration, the average memory consumption (according to the "memory (private working set)" column in Task Manager) was about 300-400 MB.
The reason I needed the LARGEADDRESSAWARE flag, and the reason I changed it to 64-bit, is that although 300-400 MB is the average, once in a while this app does work that involves loading a lot of data into memory (and it's much easier to develop and manage that kind of work when you're not so limited memory-wise).
Recently (after removing those x86 native dependencies), I changed the compilation target to "Any CPU", so now, on the production server, it runs as a 64-bit process. Since I made this change, the average memory consumption (according to Task Manager) has reached new levels: 3-4 GB, with no other change that might explain this difference in behavior.
Here are some additional facts about the current state:
According to the "# Bytes in all Heaps" counter, the total amount of managed memory is about 600 MB.
When debugging the process with WinDbg+SOS, !dumpheap -stat showed about 250-300 MB free, and all the other objects together accounted for much less than the total amount of memory the process used.
According to the GC performance counters, there are Gen 0 collections on a regular basis. In fact, the "% Time in GC" counter indicates that 10-20% of the time, on average, is spent on GC (which makes sense given the nature of the application: a lot of allocations of information and data structures that are in use for a short time).
I'm using Server GC in this app.
There is no memory problem on the server. It uses about 50-60% of the available memory (64GB).
My questions:
Why is there such a great difference between the memory allocated to the process (according to Task Manager) and the actual size of the CLR heap (there is no unmanaged code in the process that could explain this)?
Why does the 64-bit process take more memory than the same process running as a 32-bit process? Even considering that pointers take twice the space, there's a big difference.
Can I do something to lower the memory consumption, or at least to get a better understanding of the issue?
Thanks!
There are a few things to consider:
1) You mentioned you're using server GC mode. In server GC mode, the CLR creates one heap for every CPU core on the machine, which is more efficient for multi-threaded processing in server processes, e.g. ASP.NET processes. Each heap has two segments: one for small objects, one for large objects. Each segment starts with 4 GB of reserved memory. Basically, server GC mode tries to use more memory on the system to trade for overall system performance.
2) Pointers are bigger on 64-bit, of course.
3) Foreground Gen 2 GC becomes very expensive in server GC mode because the heaps are much larger. So the CLR tries very hard to reduce the number of foreground Gen 2 GCs, sometimes using background Gen 2 GC instead.
4) Depending on usage, fragmentation can become a real issue. I've seen heaps with 98% fragmentation (98% of the heap is free blocks).
To really solve your problem, you need to get an ETW trace + a memory dump, and then use tools like PerfView for detailed analysis.
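If the per-core heaps turn out to dominate the footprint, one knob worth experimenting with (a configuration sketch, not a definitive fix; measure before and after) is opting out of server GC in the application's config file:

<!-- app.config / web.config: workstation GC uses a single heap instead of
     one per core, which usually lowers the memory footprint at some cost
     in server throughput. -->
<configuration>
  <runtime>
    <gcServer enabled="false" />
    <gcConcurrent enabled="true" />
  </runtime>
</configuration>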
A 64-bit process will naturally use 64-bit pointers, effectively doubling the memory usage of every reference. Certain platform-dependent types such as IntPtr will also take up twice the space.
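You can see the pointer-width difference directly; this tiny program prints 4 when run as a 32-bit process and 8 as a 64-bit one:

using System;

class PointerSize
{
    static void Main()
    {
        // IntPtr.Size reflects the native pointer width of the process.
        Console.WriteLine("IntPtr.Size = {0} bytes", IntPtr.Size);
        Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
    }
}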
The first and best thing you can do is to run a memory profiler to see where exactly the extra memory footprint is coming from. Anything else is speculative!
I understand there are many questions related to this, so I'll be very specific.
I create a console application with two instructions: create a List with some large capacity and fill it with sample data, then clear that List or set it to null.
What I want to know is whether there is a way for me to know/measure/profile (while debugging or not) whether the actual memory used by the application after the list was cleared and nulled is about the same as before the list was created and populated. I know for sure that the application has disposed of the information and the GC has finished collecting, but can I know for sure how much memory my application consumes after this?
I understand that during the process of filling the list a lot of memory is allocated, and after it's been cleared that memory may become available to other processes if they need it, but is it possible to measure the real memory consumed by the application at the end?
Thanks
Edit: OK, here is my real scenario and objective. I work on a WPF application that works with large amounts of data read through a USB device. At some point, the application allocates about 700+ MB of memory to store all the List data, which it parses, analyzes and then writes to the filesystem. When I write the data to the filesystem, I clear all the Lists and dispose of all the collections that previously held the large data, so I can start another round of data processing. I want to know that I won't run into performance issues or eventually use up all the memory. I'm fine with my program using a lot of memory, but I'm not fine with it using all of it after a few USB runs.
How can I go about controlling this? Are memory or process profilers used in cases like this? Simply using Task Manager, I see my application taking up 800 MB of memory, but after I clear the collections, the memory stays the same. I understand this won't go down unless Windows needs it, so I was wondering whether I can know for sure that the memory is cleared and free to be used (by my application or by Windows).
It is very hard to measure "real memory" usage on Windows if you mean physical memory. Most likely you want something else, like:
Amount of memory allocated for the process (see Zooba's answer)
Amount of Managed memory allocated - CLR Profiler, or any other profiler listed in this one - Best .NET memory and performance profiler?
What Task Manager reports for your application
Note that it is not guaranteed that the amount of memory allocated to your process (1) changes after garbage collection finishes - the GC may keep the allocated memory for future managed allocations (this behavior is not specific to the CLR; most memory allocators keep free blocks for later use unless forced to release them by some means). The http://blogs.msdn.com/b/maoni/ blog is an excellent source for details on GC/memory.
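To see the difference between (1) and (2) in a running app, a small sketch like this can help; it contrasts the managed heap size with the OS-level numbers exposed by System.Diagnostics.Process (the gap is memory the GC keeps reserved, plus any unmanaged allocations):

using System;
using System.Diagnostics;

class MemoryReport
{
    static void Main()
    {
        // true: force a collection first, so the managed figure is settled.
        long managed = GC.GetTotalMemory(true);

        Process p = Process.GetCurrentProcess();
        Console.WriteLine("Managed heap:  {0:N0} bytes", managed);
        Console.WriteLine("Private bytes: {0:N0} bytes", p.PrivateMemorySize64);
        Console.WriteLine("Working set:   {0:N0} bytes", p.WorkingSet64);
    }
}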
Process Explorer will give you all the information you need. Specifically, you will probably be most interested in the "private bytes history" graph for your process.
Alternatively, you can use Windows' Performance Monitor to track your specific application. This should give the same information as Process Explorer, though it will let you write the actual numbers out to a separate file.
I personally use SciTech Memory Profiler.
It has a real-time option that you can use to watch your memory usage. It has helped me find a number of memory-leak problems.
Try ANTS Profiler. It's not free, but you can try the trial version.
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/
I was hoping someone could explain why my application uses varying amounts of RAM when loaded. I'm speaking about a compiled version that runs the exe directly. It's a pretty basic application and there are no conditional branches in its startup. Yet every time I start it up, the RAM usage varies from 6 MB to 16 MB.
I know it's on the small end of usage anyway, but I'm curious why this happens.
Edit: to give a bit more clarification on what the app actually does.
It is a WinForm project.
It connects to a database using SqlClient to retrieve a list of servers.
Based on that list, a series of buttons are created to start and stop a service on those servers.
It uses the System.Timers class to audit the status of the services on those servers every 20 seconds (see the sketch after this list).
The application at this point sits there and waits for user input, via one of the button clicks, to start/stop a service.
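For reference, the audit step described above might look roughly like this (a hypothetical sketch; the server and service names are placeholders, and it assumes a reference to System.ServiceProcess.dll):

using System;
using System.ServiceProcess;
using System.Timers;

class ServiceAuditor
{
    static void Main()
    {
        var timer = new Timer(20000); // poll every 20 seconds
        timer.Elapsed += (s, e) =>
        {
            // "SomeService" / "SomeServer" are placeholder names.
            var sc = new ServiceController("SomeService", "SomeServer");
            sc.Refresh(); // re-read the current status from the SCM
            Console.WriteLine("{0} on {1}: {2}",
                sc.ServiceName, sc.MachineName, sc.Status);
        };
        timer.Start();
        Console.ReadLine(); // keep the console app alive
    }
}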
The trick here is that the amount of RAM reported by Task Manager is not the amount of RAM used by your application. Rather, it is the amount of RAM reserved for use by your application.
Remember that with managed frameworks like .NET, you don't request or release memory directly. Rather, a garbage collector manages the memory for you. The amount of memory reserved for your application at a given time can vary and depends on many different factors, including the memory pressure created at the time by other programs.
Think of it this way: if you need 10 MB of RAM for your app, is it faster to request and return it to the operating system 1 MB at a time over 10 requests/releases, or to reserve the whole block at once with a single request/release? Now extend that to a scenario where you don't know exactly how much RAM you'll need, only that it's somewhere in the neighborhood of 10 MB. Additionally, your computer has 1 GB sitting there unused. Of course the best thing to do is take a good-sized chunk of that available RAM; even 20 or 30 MB wouldn't be unreasonable relative to the RAM sitting there unused, because unused RAM is wasted performance.
If your system later starts to feel memory pressure, .NET can easily return some RAM to the system. This is one of the ways managed languages can sometimes give better performance than languages like C++ with traditional memory management: a garbage collector can more easily take the health of the entire system into account when allocating memory.
What are you using to determine how much memory is being "used"? Even with regular applications, Windows will aggressively allocate unused memory in advance; with .NET applications it's even more complicated to tell how much memory is actually being used and how much Windows is just tacking on so that it will be available instantly when needed. If another application actually asks for memory, this reserved memory will be repurposed.
One way to check is to minimize the application (at least on XP). If you are watching the memory use in something like Task Manager, you'll notice it drops off right away, eliminating the seemingly "random" amount allocated.
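That minimize trick is essentially Windows trimming the process working set. If you want to trigger the same trim programmatically, to see how much of the reported number is merely reserved, a sketch like the following works (SetProcessWorkingSetSize is a real Win32 API; calling it routinely is not recommended):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class WorkingSetTrim
{
    [DllImport("kernel32.dll")]
    static extern bool SetProcessWorkingSetSize(IntPtr process,
        IntPtr minWorkingSet, IntPtr maxWorkingSet);

    static void Main()
    {
        Console.WriteLine("Before: {0:N0}", Process.GetCurrentProcess().WorkingSet64);
        // (-1, -1) asks Windows to trim the working set as much as possible,
        // the same effect you see when the app's window is minimized.
        SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle,
            (IntPtr)(-1), (IntPtr)(-1));
        Console.WriteLine("After:  {0:N0}", Process.GetCurrentProcess().WorkingSet64);
    }
}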
It may be related to the JIT compiler: after the first load the jitter has already created a compiled version and doesn't need to run again. Other than that, you would have to give us some more details about the app and which kind of memory you are referring to.