C# - GC.GetTotalMemory() Question

I'm creating a C# based Windows service that will run 24x7 for months on end. I'd like to be able to track general memory usage of my service. It doesn't need to be exact down to the byte. A general amount allocated will suffice. I'll be monitoring this for trends in memory consumption. Is GC.GetTotalMemory() an appropriate way to monitor this?
I'm aware of Performance Monitor and the use of counters. However, this service is going to be running on at least 12 different servers. I don't want to track PM on 12 different servers. The service will persist all performance and memory usage information, from all instances, to a central DB to avoid this, and for easier analysis.

Only if it's a purely managed-memory application. If any of it is calling off to unmanaged code, the memory allocated there will never be registered with the garbage collector (unless someone remembers to call GC.AddMemoryPressure).

GC.GetTotalMemory retrieves the amount of memory thought to be allocated. It only knows about memory allocated by the managed components, unless you call GC.AddMemoryPressure to tell it about other memory allocated.
You can get a better idea of the real amount of memory allocated by reading Process.WorkingSet64, Process.VirtualMemorySize64, and other such properties of the Process class. Just call Process.GetCurrentProcess, and then get what you need.
That said, you're probably better off using the performance counters.
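For illustration, here is a minimal sketch of that Process-based approach; the class name is mine, and the Console.WriteLine is a stand-in for whatever central-DB write your service already does:

    using System;
    using System.Diagnostics;

    class MemorySampler
    {
        // Call periodically, e.g. from a timer; replace Console.WriteLine
        // with the service's own write to the central DB.
        public static void Sample()
        {
            using (Process p = Process.GetCurrentProcess())
            {
                p.Refresh();  // re-read the process counters
                Console.WriteLine(
                    "managed={0} workingSet={1} privateBytes={2} virtualBytes={3}",
                    GC.GetTotalMemory(false),   // managed heap only
                    p.WorkingSet64,             // physical memory currently in use
                    p.PrivateMemorySize64,      // committed memory private to this process
                    p.VirtualMemorySize64);     // reserved + committed address space
            }
        }
    }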

This isn't a direct answer to your question, but was prompted by your intent to keep your service running 24x7 for months.
I would be very wary of heap fragmentation; see this article for an explanation.

Use performance counters. You can set them up to log to file.
Alternatively you can read performance counters in your code, and save out the data to a central location.
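A minimal sketch of that second option, assuming the Process / Private Bytes counter and a local file as a stand-in for the central location:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Threading;

    class CounterLogger
    {
        static void Main()
        {
            // Counter instance names follow the executable name; this sketch
            // ignores the case where several processes share the same name.
            string instance = Process.GetCurrentProcess().ProcessName;
            using (var privateBytes =
                new PerformanceCounter("Process", "Private Bytes", instance))
            {
                while (true)
                {
                    // In the real service this would be an INSERT into the DB.
                    File.AppendAllText("memory.log",
                        DateTime.UtcNow.ToString("o") + "\t" +
                        privateBytes.NextValue() + Environment.NewLine);
                    Thread.Sleep(TimeSpan.FromMinutes(1));
                }
            }
        }
    }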

Related

Why does .NET reserve so much memory for my application?

When I run my application, in a profiler I see that it uses about 80 MB of memory (total committed bytes, performance counter). But when I look at the size of the allocated memory, it is over 400 MB!
So my question is, why is .NET reserving so much memory for my application? Is this normal?
You should read Memory Mystery. I had similar questions a while ago and stopped asking them after reading it.
I read other sources as well, but I can't find them now; search for keywords like "unreasonable allocation of memory Windows OS". In a nutshell, the OS gives your app more than it requires, depending on the physically available memory resources.
For example, if you run your app on two machines with different amounts of RAM, they will very likely end up with different memory allocations.
As you no doubt know, there is a massive difference between actual memory used and allocated. An application's allocated memory doesn't mean that it's actually being used anywhere; all it really means is that the OS has 'marked' a zone of virtual memory (which is exactly that - virtual) ready for use by the application.
The memory isn't necessarily being used or starving other processes - it just could if the app starts to fill it.
This allocated number, also, will likely scale based on the overall memory ecosystem of the machine. If there's plenty of room when an app starts up, then it'll likely grab a larger allocation than if there's less.
That principle is the same as the one which says it's good practice to create a List<T>, say, with a reasonable initial capacity, so that a decent number of items can be added before resizing needs to take place. The OS takes the same approach with memory usage.
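The List<T> analogy in code, for what it's worth:

    using System.Collections.Generic;

    // Reserving capacity up front avoids repeated internal array resizes as
    // items are added, much as the OS over-reserves address space up front.
    var items = new List<int>(10000);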
"Reserving" memory is by no means the same as "allocated" ram. Read the posts Steve and Krishna linked to.
The part your client needs to look at is Private Bytes. But even that isn't exactly a hard number as parts of your app may be swapped to the virtual disk.
In short, unless your Private Bytes figure is pretty well out of control OR you have leaks (i.e., undisposed unmanaged resources), you (and your client) should ignore this and let the OS manage what is allocated, what's in physical RAM, and what's swapped out to disk.
It's fairly common for software to issue one large memory request to the underlying operating system, then internally manage its own use of the allocated memory block. So common, in fact, that Windows' (and other operating systems') memory manager explicitly supports the concept, called "uncommitted memory" -- memory that the process has requested but hasn't made use of yet. That memory really doesn't exist as far as bits taking up space on your DRAM chips until the process actually makes use of it. The preallocation of memory effectively costs nothing.
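You can see this directly. Below is a minimal, Windows-only sketch using VirtualAlloc through P/Invoke: reserving a large region barely moves the working set, and only committed (and touched) pages cost real memory:

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    class ReserveVsCommit
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize,
                                          uint flAllocationType, uint flProtect);

        const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, PAGE_READWRITE = 0x04;

        static void Main()
        {
            // Reserve 1 GB of address space: virtual size jumps, working set doesn't.
            IntPtr region = VirtualAlloc(IntPtr.Zero, (UIntPtr)(1L << 30),
                                         MEM_RESERVE, PAGE_READWRITE);

            // Commit (and touch) just 1 MB of it: only now do pages really exist.
            VirtualAlloc(region, (UIntPtr)(1L << 20), MEM_COMMIT, PAGE_READWRITE);
            Marshal.WriteByte(region, 0, 1);

            using (Process p = Process.GetCurrentProcess())
            {
                p.Refresh();
                Console.WriteLine("virtual={0:N0} workingSet={1:N0}",
                                  p.VirtualMemorySize64, p.WorkingSet64);
            }
        }
    }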
Applications do this for many reasons, though it's primarily done for performance. An application with knowledge of its own memory usage patterns can optimize its allocator for that pattern. There are also address-locality reasons: successive memory requests to the OS won't always be 'next' to each other in memory, which can hurt the performance of the CPU cache and can even preclude some optimizations.
.NET in particular allocates space for the managed heap ahead of time, for both of the reasons listed above. In most cases, allocating memory on the managed heap merely involves incrementing a top-of-heap pointer, which is incredibly fast. That isn't possible with the standard memory allocator (which has a more general design, so that it performs acceptably on a fragmented heap, whereas the CLR's GC compacts memory to sharply limit fragmentation of the managed heap), and it also isn't possible if the managed heap itself is fragmented across the process address space by multiple allocations made at different points in time.
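A conceptual sketch of that bump-pointer idea (this is an illustration only, not the CLR's actual allocator):

    using System;

    class BumpAllocator
    {
        private readonly byte[] _heap;
        private int _top;  // the "top-of-heap pointer"

        public BumpAllocator(int size) { _heap = new byte[size]; }

        public int Allocate(int bytes)
        {
            if (_top + bytes > _heap.Length)
                throw new OutOfMemoryException();  // a real GC would collect/compact here
            int offset = _top;
            _top += bytes;   // allocation is just one addition
            return offset;   // the "address" of the new object
        }
    }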

C# - Method of programmatically attempting to check for memory leak in block of code

I'm trying to see how feasible it is to attempt to accurately determine that there is a potential memory leak in a block of managed .NET code programmatically. The reason to do this would be to isolate some block of code that appears to be leaking memory, and to then use a standard profiler to further determine the actual cause of the leak. In my particular business case, I would be loading a 3rd party class that extends one of mine to check it for leaks.
The approach that first comes to mind is something like this (a code sketch follows the list):
Wait for GC to run.
Get the current allocated memory from the GC.
[Run block of managed code.]
Wait for GC to run.
Get the current allocated memory from the GC and subtract the amount recorded before running the block of code. Is it correct that the difference should theoretically be (near) zero if all objects allocated in the block of code that was run were dereferenced appropriately and collected?
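In code, the approach might look like the sketch below. GC.GetTotalMemory(true) forces a collection and blocks until it completes, which sidesteps some, but not all, of the waiting problem discussed next:

    using System;

    static class LeakCheck
    {
        public static long MeasureRetainedBytes(Action blockUnderTest)
        {
            // Extra Collect/WaitForPendingFinalizers pass so finalizable
            // objects get a chance to be reclaimed too.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            long before = GC.GetTotalMemory(forceFullCollection: true);

            blockUnderTest();

            GC.Collect();
            GC.WaitForPendingFinalizers();
            long after = GC.GetTotalMemory(forceFullCollection: true);

            return after - before;  // roughly 0 if nothing was kept alive
        }
    }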
Certainly the immediate issue with this is that there will likely be waiting...and waiting...and waiting for the non-deterministic GC to run. Setting that aspect aside, the calculation for determining whether the block of code leaked any memory can still vary wildly and won't necessarily be accurate, since some objects may not have been collected at the time of measurement.
Does the above seem like my best option of attempting to determine somewhat accurately if a block of code is leaking memory? Or are there other working methods that are used in real-life? Thanks.
Personally, I would never dare to do memory profiling on my own; I fear I don't have the full knowledge, and it would take endless time. Instead, I have successfully used memory profilers like Red Gate's ANTS Memory Profiler.
While ANTS Profiler is awesome, it doesn't help if your problem is only seen in production.
Tess Ferrandez has a series of labs that demonstrate how to debug production problems, including memory leaks. They focus on ASP.NET, but the techniques can be used for other types of applications as well.
You really need a memory profiler like this one. With it, you can:
start your application, take a memory snapshot (manually or from your code)
[Run block of managed code]
take another memory snapshot
compare the two snapshots and see which new objects are now on the managed heap
I believe it does exactly what you want to do, only far less painfully. It also has some helpful filters like "show objects that are kept alive by delegates". It can also analyze memory dumps from a production system.

.NET memory measuring and profiling

I understand there are many questions related to this, so I'll be very specific.
I create a console application with two instructions: create a List with some large capacity and fill it with sample data, then clear that List or set it to null.
What I want to know is whether there is a way to know/measure/profile, while debugging or not, whether the actual memory used by the application after the list is cleared and nulled is about the same as before the list was created and populated. I know for sure that the application has disposed of the information and the GC has finished collecting, but can I know for sure how much memory my application consumes after this?
I understand that during the process of filling the list, a lot of memory is allocated and after it's been cleared that memory may become available to other process if it needs it, but is it possible to measure the real memory consumed by the application at the end?
Thanks
Edit: OK, here is my real scenario and objective. I work on a WPF application that works with large amounts of data read through a USB device. At some point, the application allocates about 700+ MB of memory to store all the List data, which it parses, analyzes, and then writes to the filesystem. When I write the data to the filesystem, I clear all the Lists and dispose of all collections that previously held the large data, so I can do another round of data processing. I want to know that I won't run into performance issues or eventually use up all the memory. I'm fine with my program using a lot of memory, but I'm not fine with it using all of it after a few USB processing runs.
How can I go about controlling this? Are memory or process profilers used in cases like this? Simply using Task Manager, I see my application taking up 800 MB of memory, but after I clear the collections, the memory stays the same. I understand this won't go down unless Windows needs it, so I was wondering whether I can know for sure that the memory is cleared and free to be used (by my application or by Windows).
It is very hard to measure "real memory" usage on Windows if you mean physical memory. Most likely you want something else, such as:
Amount of memory allocated for the process (see Zooba's answer)
Amount of Managed memory allocated - CLR Profiler, or any other profiler listed in this one - Best .NET memory and performance profiler?
What Task Manager reports for your application
Note that it is not necessarily the case that the amount of memory allocated for your process (1) changes after garbage collection finishes; the GC may keep the allocated memory for future managed allocations (this behavior is not specific to the CLR; most memory allocators keep free blocks for later use unless forced to release them by some means). The http://blogs.msdn.com/b/maoni/ blog is an excellent source for details on the GC and memory.
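To see this in action, here is a small sketch in the spirit of the console experiment you describe (numbers will vary; the CLR may or may not return freed segments to the OS): the managed number falls back after the collection, while the process-level private bytes often stay up.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class Demo
    {
        static void Main()
        {
            var data = new List<byte[]>();
            for (int i = 0; i < 500; i++)
                data.Add(new byte[1024 * 1024]);   // roughly 500 MB of managed data

            Console.WriteLine("managed before clear: {0:N0}", GC.GetTotalMemory(false));

            data.Clear();
            data = null;
            long managedAfter = GC.GetTotalMemory(forceFullCollection: true);

            using (Process p = Process.GetCurrentProcess())
            {
                p.Refresh();
                Console.WriteLine("managed after GC:  {0:N0}", managedAfter);
                Console.WriteLine("private bytes:     {0:N0}", p.PrivateMemorySize64);
            }
        }
    }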
Process Explorer will give you all the information you need. Specifically, you will probably be most interested in the "private bytes history" graph for your process.
Alternatively, it is possible to use Windows' Performance Monitor to track your specific application. This should give identical information to Process Explorer, though it will let you write the actual numbers out to a separate file.
I personally use SciTech Memory Profiler.
It has a real-time option that you can use to watch your memory usage, and it has helped me find a number of memory-leak problems.
Try ANTS Profiler. It's not free, but you can try the trial version.
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/

C# memory usage

How can I get the actual memory used by my C# application?
Task Manager shows different metrics.
Process Explorer shows increased usage of private bytes.
Performance counters (perfmon.msc) showed different metrics again.
When I used .NET Memory Profiler, it showed that most of the memory had been garbage collected, with only a few live bytes remaining.
I do not know which to believe.
Memory usage is somewhat more complicated than displaying a single number or two. I suggest you take a look at Mark Russinovich's excellent post on the different kinds of counters in Windows.
.NET only complicates matters further. A .NET process is just another Windows process, so obviously it will have all the regular metrics, but in addition to that the CLR acts as a memory manager for the managed application. So depending on the point of view these numbers will vary.
The CLR effectively allocates and frees virtual memory in big chunks on behalf of the .NET application and then hands out bits of memory to the application as needed. So while your application may use very little memory at a given point in time this memory may or may not have been released to the OS.
On top of that the CLR itself uses memory to load IL, compile IL to native code, store all the type information and so forth. All of this adds to the memory footprint of the process.
If you want to know how much memory your managed application uses for data, the # Bytes in all Heaps counter (in the .NET CLR Memory category) is useful. Private Bytes may be used as a somewhat rough estimate of the application's memory usage at the process level.
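A small sketch reading both counters for the current process (counter instance names can be ambiguous when several processes share one executable name; that wrinkle is ignored here):

    using System;
    using System.Diagnostics;

    class HeapCounters
    {
        static void Main()
        {
            string instance = Process.GetCurrentProcess().ProcessName;

            using (var heaps = new PerformanceCounter(
                       ".NET CLR Memory", "# Bytes in all Heaps", instance, readOnly: true))
            using (var priv = new PerformanceCounter(
                       "Process", "Private Bytes", instance, readOnly: true))
            {
                Console.WriteLine("# Bytes in all Heaps: {0:N0}", heaps.NextValue());
                Console.WriteLine("Private Bytes:        {0:N0}", priv.NextValue());
            }
        }
    }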
You may also want to check out these related questions:
Reducing memory usage of .NET applications?
How to detect where a Memory Leak is?
If you are using VS 2010, you can use the Visual Studio 2010 Profiler.
This tool can create very informative reports for you.
If you want to know approximately how many bytes are allocated on the GC heap (ignoring memory used by the runtime, the JIT compiler, etc.), you can call GC.GetTotalMemory. We've used this when tracking down memory leaks.
Download VADump (if you do not have it yet).
Usage: VADUMP.EXE -sop [PID]
Well, what is "actual memory used in my C# application" ?
Thanks to Virtual memory and (several) Memory management layers in Windows and the CLR, this is a rather complicated question.
Of the sources you mention, the CLR Profiler will give you the most detailed breakdown; I would call that the most accurate.
But there is no single-number answer; the question of whether application A uses more or less memory than application B can be impossible to answer.
So what do you actually want to know? Do you have a concrete performance problem to solve?

Points to consider when implementing shared memory

I am planning to implement Boost's shared memory between a server (C++) and a client (C# application). There is only one reader and one writer, and data is shared (read and written) thousands of times per millisecond.
What are the risks involved?
Thousands of times per ms doesn't say much. If it's one byte at a time, that's not a lot. If it's more... well, it all depends on how much.
I would advise against sharing memory. I would suggest "don't communicate by sharing, share by communicating". If, once you're done, profiling shows that the extra memory copy is indeed the bottleneck, then maybe some interop-based shared-memory solution is the fix. Often you'll find that's not the case, though.
I just want to make some comments about shared memory in general:
Expect the shared memory to be mapped into different places in virtual memory in each process. This means that passing pointers between one process and the next is useless; you must use offsets from the shared memory base address.
You will not have access to the malloc/new and free/delete heap management functions, but you do have a nice block of memory in which to set up your own memory management objects.
You must devise a clear ownership model of which process has access to which piece of memory.
Any access to objects shared across the processes (such as bookkeeping objects) must be protected by a mutex.
Look for strategies that minimize the time spent while the mutex is locked, freelists are your friend.
In any producer consumer model ensure that you have thought clearly about flow control. You must not overflow or underflow the shared space. Semaphores are your friend.
I think that about covers it. And I agree with the above: rather use IPC to copy memory unless you really have to use shared memory; the pitfalls may eat you alive.
Well, as of .NET 3.5, there's no support for shared memory. You'd have to use P/Invoke, which is a pain. The bigger problem is that C#'s memory model is not very conducive to sharing with C++.
Edit:
As an additional risk, it's going to require holding OS handles, which means that any mistake could result in a leak that will not be fixed by anything short of killing the process. You can protect against much of this by using SafeHandle instead of IntPtr.
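As a footnote, .NET 4.0 later added System.IO.MemoryMappedFiles, which removes most of that P/Invoke pain on the C# side. Below is a minimal sketch of a C# reader; the region name, size, and layout are hypothetical and would have to match whatever the C++ server creates (and whether the region is visible under a plain name like this depends on how the C++ side sets it up, e.g. Boost's windows_shared_memory uses a native named section while its portable shared_memory_object does not):

    using System;
    using System.IO.MemoryMappedFiles;
    using System.Threading;

    class SharedMemoryClient
    {
        static void Main()
        {
            // Hypothetical names and layout; they must match the C++ writer.
            using (var mmf = MemoryMappedFile.OpenExisting("MySharedRegion"))
            using (var view = mmf.CreateViewAccessor(0, 4096))
            using (var mutex = new Mutex(false, @"Global\MySharedRegionMutex"))
            {
                mutex.WaitOne();
                try
                {
                    // Work with offsets from the base, never raw pointers.
                    long sequence = view.ReadInt64(0);
                    long payload = view.ReadInt64(8);
                    Console.WriteLine("seq={0} payload={1}", sequence, payload);
                }
                finally
                {
                    mutex.ReleaseMutex();
                }
            }
        }
    }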
