A while back I wrote a simple full TCP connect scanner in C# and compiled it to a DLL. It emits PSObjects to the pipeline to show the results of the scan. If I scan a /16 CIDR subnet, the data I get back pushes memory usage well past 512 MiB according to Task Manager. I would like to find a way to deallocate this memory once I am done analyzing the results, to free up space for other tasks (we have to work with less than 8 GiB of memory...). The problem is that I cannot get PowerShell to release its hold on this memory. Does anyone know a good way to free up this memory consumption?
I've tried setting the variable to $null, Remove-Variable, and calling [GC]::Collect(), but the memory usage in Task Manager still remains at the same (or higher) levels. I am at a loss with this seemingly simple task. Maybe the memory is deallocated, but Task Manager just reports the previous allocation?
The only deallocation of managed memory is going to be done by the CLR garbage collector. Since you've already tried a GC collect, I'd say you have a managed memory hoard; that is, you haven't found all of the roots that still refer to your objects. I suggest you use a tool like Red Gate's memory profiler or the free CLRProfiler to determine what is still referencing the objects you have created. I believe even Visual Studio 2013 can analyze managed memory from a dump file; that may also help you find all objects holding a reference to your data objects. BTW, do any of your objects wrap native resources (handles, memory, etc.)?
Related
I'm currently working on a website that makes heavy use of cached data to avoid round trips.
At startup we get a "large" graph (hundreds of thousands of objects of different kinds).
Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization).
I'm using Red Gate's memory profiler to debug memory issues (the memory usage didn't seem to match how much memory we should need "after" we're done initializing) and ended up with this report.
Now what we can gather from this report is that:
1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialization, but now that it's free, I'd like it to be returned to the OS).
2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).
3) I have no clue why the space is fragmented, because when I look at the large object heap there are only 30 instances: 15 object[] are attached directly to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining ones are mine but are not the cause of this, as I get the same report if I comment them out in code.
So my question is: what can I do to go further with this? I'm not really sure what to look for in terms of debugging or tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET, which I can't release.
Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to release memory that is already free in .NET back to the system, as well as to defragment said memory.
Note that a slow solution is fine; if it's possible to manually defragment the large object heap I'd be all for it, as I can call it at the end of RefreshCache, and it's OK if it takes 1 or 2 seconds to run.
Thanks for your help!
A few notes I forgot:
1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and the same if I run it in a .NET 4 pool, convert the project to .NET 4, and recompile.
2) These are the results of a release build, so a debug build cannot be the issue.
3) And this is probably quite important: I do not get these issues at all in the WebDev server, only in IIS. In the WebDev server I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!).
Objects allocated on the large object heap (objects >= 85,000 bytes, normally arrays) are not compacted by the garbage collector. Microsoft decided that the cost of moving those objects around would be too high.
The recommendation is to reuse large objects if possible to avoid fragmentation on the managed heap and the VM space.
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
I'm assuming that your large objects are temporary byte arrays created by your deserialization library. If the library allows you to supply your own byte arrays, you could preallocate them at the start of the program and then reuse them.
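For example, a minimal sketch of that idea, assuming a fixed 4 MB buffer and a caller-supplied deserialize delegate (neither is part of the poster's actual protobuf setup):

using System;
using System.IO;

class ReusableBuffer
{
    // Allocated once at startup; it lives on the LOH (>= 85,000 bytes) but is never churned.
    private readonly byte[] buffer = new byte[4 * 1024 * 1024];

    // Copies the payload into the shared buffer and hands it to the caller's deserializer,
    // instead of letting the serialization layer allocate a fresh large array on every refresh.
    public T Read<T>(Stream source, int length, Func<byte[], int, T> deserialize)
    {
        if (length > buffer.Length)
            throw new ArgumentOutOfRangeException("length");

        int total = 0;
        while (total < length)
        {
            int read = source.Read(buffer, total, length - total);
            if (read == 0)
                break;
            total += read;
        }
        return deserialize(buffer, total);
    }
}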
I know this isn't the answer you'd like to hear, but you can't forcefully release the memory back to the OS. However, for what reason do you want to do so? .NET will free its heap back to the OS once you're running low on physical memory. But if there's an ample amount of free physical memory, .NET will keep its heap to make future allocation of objects faster. If you really wanted to force .NET to release its heap back to the OS, I suppose you could write a C program which just mallocs until it runs out of memory. This should cause the OS to signal .NET to free its unused portion of the heap.
It's better that unused memory be reserved for .NET so that your application will have better allocation performance (since the runtime knows what memory is free and what isn't, allocation can just use the free memory without having to syscall into the OS to get more memory).
The garbage collector is in charge of defragmenting the heap. Every so often (usually during collection runs), it will move objects around the heap if it determines this needs to be done. (This is why C++/CLI has the pin_ptr construct for "pinning" objects).
Fragmentation usually isn't a big issue with memory, though, since memory provides fast random access.
As for your OutOfMemoryException, I don't have a good answer. Ordinarily I'd suspect that your old object graph isn't being collected (some object somewhere is holding a reference to it, a "memory leak"). But since you're using a profiler, I don't know.
As of .NET 4.5.1 you can set a one-time flag to compact the LOH before issuing a GC collect, i.e.
System.Runtime.GCSettings.LargeObjectHeapCompactionMode = System.Runtime.GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // This will cause the LOH to be compacted (once).
Some testing and some C++ later, I've found the reason why I get so much free memory: IIS instantiates the CLR with VM hoarding enabled (providing a DLL that instantiates it without VM hoarding takes up just as much initial memory, but does release most of it as time goes on, which is the behavior I expect).
So this does fix my reported memory issue; however, I still get about 100 MB of free memory no matter what, and I still think this is due to fragmentation, with fragments only being released all at once, because the profiler still reports memory fragmentation. So I'm not marking my own answer as accepted, in the hope that someone can shed some light on this or point me to tools that can either fix it or help me debug the root cause.
It's intriguing that it behaves differently on the WebDev server than in IIS...
Is it possible that IIS is using the server garbage collector, and the WebDev server the workstation garbage collector? The method of garbage collection can affect fragmentation. It'll probably be set in your aspnet.config file. See: http://support.microsoft.com/kb/911716
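If IIS does turn out to be on the server GC, the setting lives in the runtime section of aspnet.config; a minimal sketch of the relevant elements (the exact file location depends on your framework version, see the KB article):

<configuration>
  <runtime>
    <gcServer enabled="false" />      <!-- workstation GC; "true" selects the server GC -->
    <gcConcurrent enabled="true" />
  </runtime>
</configuration>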
If you haven't found your answer, I think the following clues can help you:
Back to basics: we sometimes forget that objects can be explicitly set free. Call the Dispose method of your objects explicitly (since you didn't mention it, I suppose you use an "object = null" assignment instead).
Use the inherited method; you don't need to implement one unless your class doesn't have it, which I doubt.
MSDN Help states about this method:
... There is no performance benefit in implementing the Dispose method on types that use only managed resources (such as arrays) because they are automatically reclaimed by the garbage collector. Use the Dispose method primarily on managed objects that use native resources and on COM objects that are exposed to the .NET Framework. ...
Because it says that "they are automatically reclaimed by the garbage collector", we can infer that calling the method is what does the "releasing" (again, I'm only trying to give you clues).
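To make that concrete, here is a minimal sketch of calling Dispose deterministically instead of just nulling the reference (StreamReader and FileStream are only stand-ins for whatever type actually wraps the native or COM resources):

using System.IO;

class DisposeExample
{
    static void Process()
    {
        using (var reader = new StreamReader("data.txt"))
        {
            // ... work with the object ...
        } // Dispose() runs here and releases the underlying handle immediately

        // The equivalent without "using":
        var stream = new FileStream("data.bin", FileMode.Open);
        try
        {
            // ... work with the stream ...
        }
        finally
        {
            stream.Dispose(); // preferable to "stream = null", which only drops the reference
        }
    }
}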
Besides, I found this interesting article (I suppose... I didn't read it... completely): Garbage Collection: Automatic Memory Management in the Microsoft .NET Framework (http://msdn.microsoft.com/en-us/magazine/bb985010.aspx), which states the following in the "Forcing an Object to Clean Up" section:
..., it is also recommended that you add an additional method to the type that allows a user of the type to explicitly clean up the object when they want. By convention, this method should be called Close or Dispose ....
Maybe the answer lies in this article if you read it carefully or just keep investigating in this direction.
I have a C#/.NET service running in production. The service functions as a TCP server to which clients register and against which they make requests. Looking at Task Manager, it appears to be leaking about 10 MB/day. I don't seem to notice this in dev (perhaps because of far less traffic and client activity). In searching around I've read that Task Manager can be seriously wrong, but I'm not sure how accurate that is or in what circumstances it would display incorrect information.
To solve this problem I need to monitor memory consumption more closely. The problem is that the leak only seems to appear in production, where the deployed service was built for Release. Also, since it's a service, it can't be run directly from VS with an attached profiler/debugger, so I'm not sure how to best pinpoint the problem with something more precise than Task Manager.
Any group wisdom would be much appreciated, thanks.
EDIT:
I've added perfmon counters for the private bytes of the service (7 MB to start out) as well as CLR memory in all heaps (30 MB to start out).
Task Manager puts the total memory at ~37 MB, so this seems to make sense.
The first part of this is to let the service run for a day and check my counters again.
If my private bytes get huge but CLR memory stays roughly static, this would indicate an unmanaged leak. If both get huge, then it's a managed leak.
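For reference, the same counters the perfmon graphs show can also be read from code; a minimal sketch, assuming the standard perfmon category and counter names and using the current process as the instance:

using System;
using System.Diagnostics;

class MemoryCounters
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;
        using (var privateBytes = new PerformanceCounter("Process", "Private Bytes", instance))
        using (var clrHeaps = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance))
        {
            Console.WriteLine("Private bytes: {0:N0}", privateBytes.NextValue()); // process-level allocation
            Console.WriteLine("CLR heaps:     {0:N0}", clrHeaps.NextValue());     // managed heap only
        }
    }
}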
Thanks guys.
Your first task is figuring out whether the process is actually leaking memory. You can do this with perfmon by measuring Private Bytes:
http://www.goldstarsoftware.com/papers/CapturingVirtualBytesToALogFile.pdf
If the graph is consistently rising (for, say, half an hour), you have a memory leak. You can then use other counters to figure out whether this is a .NET leak (.NET memory), though this is unlikely. I find that in most of these cases there is a COM component that is being invoked but not released.
If you truly have a memory leak (and this isn't just variable memory usage), the process will shut down with an out-of-memory exception after running for a while.
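If a COM component does turn out to be the culprit, the usual fix is to release the runtime callable wrapper explicitly once you're done with it. A minimal sketch (the helper class is illustrative, not part of any library):

using System.Runtime.InteropServices;

static class ComCleanup
{
    // Releases the runtime callable wrapper for a COM object once the caller is done with it.
    public static void ReleaseIfComObject(object o)
    {
        if (o != null && Marshal.IsComObject(o))
        {
            Marshal.ReleaseComObject(o); // decrements the RCW reference count
        }
    }
}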
You need one of the memory profilers below in order to monitor it:
http://www.jetbrains.com/profiler/
http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/
There are other choices, but these are very capable, and you can profile a remote application's memory with them (at least JetBrains' solution handles that).
Follow this guide: http://blogs.msdn.com/b/tess/archive/2008/03/25/net-debugging-demos-lab-7-memory-leak.aspx
It goes over exactly what you're describing: a memory leak in production. As was mentioned, you first have to determine whether it's unmanaged code or managed code that's leaking, using perfmon and Private Bytes.
In general, make sure you wrap networking objects in using statements so that they're properly disposed.
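For example, a minimal sketch of that pattern, assuming a plain TcpClient-based handler (the method and names are illustrative only):

using System.Net.Sockets;

static class Network
{
    // Connect, do the work, and guarantee the socket is released even if an exception is thrown.
    static void HandleRequest(string host, int port)
    {
        using (var client = new TcpClient(host, port))
        using (var stream = client.GetStream())
        {
            // ... read the request, write the response ...
        } // NetworkStream and TcpClient are both disposed here
    }
}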
A workflow I often use for managed memory leaks is to start the server on a test machine and hit it with a known number of connections (say 123,456 connections). Then take a memory snapshot by going to Task Manager, right-clicking on the process name, and selecting 'Create dump'. Open this dump with WinDbg and SOS and run the command !dumpheap -stat. Look for objects whose instance count is a multiple of 123,456. Should these objects still be in memory? If not, run !gcroot on an instance of those objects to find out why it's still in memory.
Get a dump of the memory when it's in a leaked state: using Task Manager, right-click on the process and select 'Create dump file'. You can also use ProcDump, which gives you more options.
Use the SOS extension in either WinDbg or Visual Studio to inspect the memory.
I am writing an editor in C#/.NET using AvalonDock.
If I close a document, the memory consumption of my program doesn't decrease, even if I call the garbage collector manually. So I assume that there is still a reference to the document somewhere.
The software is huge and the document is a very central component, so it's not easy to find every reference to it.
Does the Visual Studio 2010 debugger have a functionality to search for objects of a certain class in memory or something?
Alternatively, what would you do, if faced with such a problem?
You need to use a memory profiler to find out what objects are in memory and what holds a reference to them.
There are several different options - commercial and free.
You can do what you want using free tools.
The basic steps are as follows:
Run Your Application
Attach windbg to its process
Load the "sos" helper module (.loadby sos mscorwks)
Dump the heap (!DumpHeap -stat)
Find the type you're interested in and see if it's actually the thing using memory
Dump the heap for your particular type (!DumpHeap -type MyNameSpace.MyType)
Find the memory address of an object you think should be disposed, and see if it is "rooted" somewhere. (!gcroot "whatever the address was")
I've personally used this technique to great effect when tracking down memory leaks in graphics-intensive C# programs.
I learned this from Rico Mariani of Microsoft. Here is a blog entry that describes it in detail.
* http://blogs.msdn.com/b/ricom/archive/2004/12/10/279612.aspx
Remember that even when .NET cleans itself up, Windows may not decide to actually release the memory. Often it only does so when another application actually needs it. So, use a memory profiler :)
I understand there are many questions related to this, so I'll be very specific.
I create a console application with two instructions: create a List with some large capacity and fill it with sample data, and then clear that List or set it to null.
What I want to know is whether there is a way to measure or profile (while debugging or not) whether the actual memory used by the application after the list has been cleared and nulled is about the same as before the list was created and populated. I know for sure that the application has disposed of the information and the GC has finished collecting, but can I know for sure how much memory my application consumes after this?
I understand that during the process of filling the list a lot of memory is allocated, and after it's been cleared that memory may become available to other processes if they need it, but is it possible to measure the real memory consumed by the application at the end?
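One way to get a rough before-and-after number from inside the process itself is GC.GetTotalMemory, which with forceFullCollection set to true collects first and then reports the managed heap size. A minimal sketch of the console experiment described above (note this measures the managed heap only, not what Task Manager shows):

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true);

        var list = new List<byte[]>(100000);
        for (int i = 0; i < 100000; i++)
        {
            list.Add(new byte[1024]); // sample data
        }
        long afterFill = GC.GetTotalMemory(true);

        list.Clear();
        list = null;
        long afterClear = GC.GetTotalMemory(true);

        Console.WriteLine("before: {0:N0}  filled: {1:N0}  cleared: {2:N0}", before, afterFill, afterClear);
    }
}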
Thanks
Edit: OK, here is my real scenario and objective. I work on a WPF application that works with large amounts of data read through a USB device. At some point, the application allocates about 700+ MB of memory to store all the List data, which it parses, analyzes, and then writes to the filesystem. When I write the data to the filesystem, I clear all the Lists and dispose of all collections that previously held the large data, so I can do another round of data processing. I want to know that I won't run into performance issues or eventually use up all the memory. I'm fine with my program using a lot of memory, but I'm not fine with it using all of it after a few USB processing runs.
How can I go about controlling this? Are memory or process profilers used in cases like this? Simply using Task Manager, I see my application taking up 800 MB of memory, but after I clear the collections the memory stays the same. I understand this won't go down unless Windows needs it, so I was wondering if I can know for sure that the memory is cleared and free to be used (by my application or Windows).
It is very hard to measure "real memory" usage on Windows if you mean physical memory. Most likely you want something else, like:
Amount of memory allocated for the process (see Zooba's answer)
Amount of Managed memory allocated - CLR Profiler, or any other profiler listed in this one - Best .NET memory and performance profiler?
What Task Manager reports for your application
Note that the amount of memory allocated for your process (1) does not necessarily change after garbage collection finishes: the GC may keep allocated memory for future managed allocations (this behavior is not specific to the CLR; most memory allocators keep free blocks for later use unless forced to release them by some means). The http://blogs.msdn.com/b/maoni/ blog is an excellent source for details on GC/memory.
Process Explorer will give you all the information you need. Specifically, you will probably be most interested in the "private bytes history" graph for your process.
Alternatively, it is possible to use Windows' Performance Monitor to track your specific application. This should give identical information to Process Explorer, though it will let you write the actual numbers out to a separate file.
I personally use SciTech Memory Profiler.
It has a real-time option that you can use to watch your memory usage. It has helped me find a number of memory-leak problems.
Try ANTS Profiler. It's not free, but you can try the trial version.
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/
How can I get the actual memory used by my C# application?
Task Manager shows different metrics.
Process Explorer shows increased usage of private bytes.
Performance counters (perfmon.msc) showed different metrics again.
When I used .NET Memory Profiler, it showed that most of the memory had been garbage collected and only a few live bytes remained.
I do not know which to believe.
Memory usage is somewhat more complicated than displaying a single number or two. I suggest you take a look at Mark Russinovich's excellent post on the different kinds of counters in Windows.
.NET only complicates matters further. A .NET process is just another Windows process, so obviously it will have all the regular metrics, but in addition to that the CLR acts as a memory manager for the managed application. So depending on the point of view these numbers will vary.
The CLR effectively allocates and frees virtual memory in big chunks on behalf of the .NET application and then hands out bits of memory to the application as needed. So while your application may use very little memory at a given point in time this memory may or may not have been released to the OS.
On top of that the CLR itself uses memory to load IL, compile IL to native code, store all the type information and so forth. All of this adds to the memory footprint of the process.
If you want to know how much memory your managed application uses for data, the "# Bytes in all Heaps" counter is useful. Private bytes may be used as a somewhat rough estimate for the application's memory usage at the process level.
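If you want to see both levels side by side from inside the application, a minimal sketch (GC.GetTotalMemory approximates the managed heap; PrivateMemorySize64 is the process-level private bytes figure):

using System;
using System.Diagnostics;

class MemorySnapshot
{
    static void Main()
    {
        long managedBytes = GC.GetTotalMemory(false);                         // managed heap estimate, no forced collection
        long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;  // what Process Explorer calls private bytes

        Console.WriteLine("Managed heap:  {0:N0} bytes", managedBytes);
        Console.WriteLine("Private bytes: {0:N0} bytes", privateBytes);
    }
}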
You may also want to check out these related questions:
Reducing memory usage of .NET applications?
How to detect where a Memory Leak is?
If you are using VS 2010, you can use the Visual Studio 2010 Profiler.
This tool can create very informative reports for you.
If you want to know approximately how many bytes are allocated on the GC heap (ignoring memory used by the runtime, the JIT compiler, etc.), you can call GC.GetTotalMemory. We've used this when tracking down memory leaks.
Download VADump (If you do not have it yet)
Usage: VADUMP.EXE -sop [PID]
Well, what is the "actual memory used in my C# application"?
Thanks to virtual memory and the (several) memory-management layers in Windows and the CLR, this is a rather complicated question.
Of the sources you mention, the CLR Profiler will give you the most detailed breakdown; I would call that the most accurate.
But there is no 'single number' answer; the question of whether application A uses more or less memory than application B can be impossible to answer.
So what do you actually want to know? Do you have a concrete performance problem to solve?