I have a Java program that uses a WeakHashMap for caching some things; if Java needs more memory, the WeakHashMap is cleared. This works fine for me. Now I also have a C# program running on the same computer, and I noticed the following.
When the Java program is running, my C# program does not run correctly at times when the computer is under heavy load. On the other hand, my C# program runs fine at those same times when the Java program is not running.
Could it be that my Java program is holding on to memory that my C# program could otherwise use? How can I find this out?
Your Java program will expand its heap up to a given size. Garbage collection will free objects, returning them to the heap's free space, but it will not reduce the overall memory used by the Java process.
Use your OS's capabilities to investigate the memory consumed by the C# and Java apps.
You can use command-line options on your JVM (such as -Xmx to cap the maximum heap size) to control how hungry it will be. Of course, if you genuinely need a huge heap, then it's possible that everything simply won't fit on the one machine.
You can't set a maximum bound for your .NET CLR's heap in the same way that you can for your JVM's heap. See this question for some more info. The CLR will simply attempt to expand its heap until it hits your OS-imposed process memory limit, or your machine's free memory is used up.
So yes, when you increase your JVM heap size you could reserve memory that your CLR would otherwise use. The JVM will start at the lower bound and expand to the upper bound if needed. As mentioned above, that memory is not freed outside the JVM; it is not made available to the .NET CLR.
You'll need to do some more monitoring to see if memory really is the cause of the problem though.
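As a starting point for that monitoring, here's a minimal C# sketch that reads the OS-level counters for both processes. It assumes the JVM shows up under the process name "java" on your machine; adjust as needed.

```csharp
using System;
using System.Diagnostics;

class MemoryWatch
{
    static void Main()
    {
        // The JVM process -- "java" is an assumption; use the real process name on your machine.
        foreach (Process jvm in Process.GetProcessesByName("java"))
        {
            Console.WriteLine("JVM pid {0}: working set {1} MB, private bytes {2} MB",
                jvm.Id,
                jvm.WorkingSet64 / (1024 * 1024),
                jvm.PrivateMemorySize64 / (1024 * 1024));
        }

        // This (C#) process.
        Process self = Process.GetCurrentProcess();
        Console.WriteLine("This process: working set {0} MB, private bytes {1} MB",
            self.WorkingSet64 / (1024 * 1024),
            self.PrivateMemorySize64 / (1024 * 1024));
    }
}
```

Run it while the machine is under load and you can see which process is actually holding the memory.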
Related
I have a C# application that does a few bits and pieces, but the main task it performs is done by a Delphi DLL which it calls.
This Delphi DLL is a total memory hog, which needs to cache a lot of DB-information locally for speed. I'm happy that it's not leaky, as FastMM4 isn't reporting any memory leaks when the code is run within Delphi.
I am starting to run into problems, however, when control is returned to C#. The C# code attempts to do some calculations on the results of the Delphi app (all results marshalled via a DB). These calculations usually involve a million or so doubles, so not extreme memory usage, yet the code keeps throwing out-of-memory exceptions.
I assume that FastMM4 in the Delphi code still hasn't returned the freed memory to Windows (and hence made it available to the C# code), so the process is still at its maximum 32-bit memory allocation and C# can't obtain more when it needs to.
So, how do I make the memory used (and freed) by Delphi usable again by the C# code? I thought we might want to do one of the following:
Force an unload of the Delphi DLL from the C# side (my colleague doesn't think this will work, as he thinks it'll just unload the code rather than the memory used on the heap) - probably LoadLibrary/FreeLibrary?
Make a call at the end of the Delphi DLL to release the memory back to Windows (I tried SetProcessWorkingSetSize before, but it didn't seem to do anything; should I use a different call?)
Wrap the Delphi DLL in a C# DLL and call it in a different AppDomain (I don't like this from a style perspective, as we're creating wrappers just to hold wrappers.)
Anything else I've missed?
Force an unload of the Delphi DLL from the C# side (my colleague doesn't think this will work, as he thinks it'll just unload the code rather than the memory used on the heap) - probably LoadLibrary/FreeLibrary?
This will just work. When the DLL unloads, FastMM will finalize and return the memory that it reserved and committed.
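For reference, here is a minimal sketch of doing that from the C# side via P/Invoke. The DLL file name and the exported function are placeholders for whatever your Delphi library actually exposes.

```csharp
using System;
using System.Runtime.InteropServices;

static class DelphiLibrary
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr LoadLibrary(string lpFileName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr GetProcAddress(IntPtr hModule, string procName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool FreeLibrary(IntPtr hModule);

    // Hypothetical signature for the Delphi export -- replace with the real one.
    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    delegate int DoWorkDelegate();

    public static void RunAndUnload()
    {
        IntPtr module = LoadLibrary("MyDelphiLib.dll");          // placeholder file name
        if (module == IntPtr.Zero) throw new InvalidOperationException("LoadLibrary failed");
        try
        {
            IntPtr proc = GetProcAddress(module, "DoWork");      // placeholder export name
            if (proc == IntPtr.Zero) throw new InvalidOperationException("GetProcAddress failed");

            var doWork = (DoWorkDelegate)Marshal.GetDelegateForFunctionPointer(proc, typeof(DoWorkDelegate));
            doWork();
        }
        finally
        {
            // Unloading the DLL lets FastMM finalize and hand its heap back to Windows.
            FreeLibrary(module);
        }
    }
}
```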
One thing I would do is make a call to GC.Collect before calling your library. .NET knows what to do when more managed memory is requested than can fit and calls the collector automatically; however, it has no clue what you're doing in native code, so a lot of memory may stay allocated needlessly.
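A quick sketch of that suggestion (CallDelphiDll is a placeholder for however you invoke the native library):

```csharp
// Ask the CLR to collect and finalize before the native code starts
// making large allocations of its own.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect(); // pick up anything released by finalizers

CallDelphiDll(); // placeholder for however you invoke the native library
```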
I would also move away from a 32-bit architecture. It's not that you ran out of memory; it's that you ran out of contiguous address space large enough to fit whatever you're trying to do in Delphi. A larger virtual address space will fix that issue for you, and there hasn't been a processor made in the past 6 years that wasn't 64-bit capable. It's time to take those shy steps into the bright future ahead of us.
A while back I wrote a simple full TCP connect scanner in C# and compiled it to a DLL. It spits out PSObjects to the pipeline to show the results of the scan. If I am scanning a CIDR /16 subnet, the amount of data I get back is well into the 512 MiB range according to Task Manager. I would like to find a way to deallocate this memory once I am done analyzing the results, to free up space for other tasks (we have to work with less than 8 GiB of memory...). The problem is that I cannot get PowerShell to release its hold on this memory. Does anyone know a good way to free up this memory consumption?
I've tried setting the variable to $null, Remove-Variable, and calling [GC]::Collect(), but the memory usage in Task Manager still remains at the same (or higher) levels. I am at a loss with this seemingly simple task. Maybe the memory is deallocated, but Task Manager just reports the previous allocation?
The only deallocation of managed memory is going to be done by the CLR garbage collector. Since you've already tried a GC collect, I'd say you have a managed memory hoard. That is, you haven't found all of the roots that refer to your objects. I suggest you use a tool like RedGate's Memory Profiler or the free ClrProfiler to determine what is still referencing the objects you have created. I believe even Visual Studio 2013 has the ability to analyze managed memory from a dump file. That may also help you find all objects holding a reference to your data objects. BTW do any of your objects wrap native resources (handles, memory, etc)?
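To illustrate what such a hoard looks like, here's a minimal C# sketch (all names are made up); the same applies whether you collect from PowerShell or C#. As long as some root still references the data, a collection cannot reclaim it; only clearing the last root makes it collectable.

```csharp
using System;

static class ScanResults
{
    // A static field is a GC root: everything reachable from it stays alive.
    public static byte[] Cache;
}

class Program
{
    static void Main()
    {
        ScanResults.Cache = new byte[256 * 1024 * 1024]; // ~256 MB of scan data

        GC.Collect();
        Console.WriteLine("Still rooted:  {0:N0} bytes on the GC heap",
            GC.GetTotalMemory(forceFullCollection: true));

        // Only after the last root is cleared can the collector reclaim the array.
        ScanResults.Cache = null;
        Console.WriteLine("Root cleared:  {0:N0} bytes on the GC heap",
            GC.GetTotalMemory(forceFullCollection: true));
    }
}
```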
When I run my application, a profiler shows that it uses about 80 MB of memory (total committed bytes, performance counter). But when I look at the size of the allocated memory, it is over 400 MB!
So my question is, why is .NET reserving so much memory for my application? Is this normal?
You should read Memory Mystery. I had similar questions a while ago and stopped asking myself after reading it.
I read other sources too, but I can't find them now; try the keywords "unreasonable allocation of memory windows OS". In a nutshell, the OS gives your app more than it requires, depending on the physically available memory resources.
For example, if you run your app on two machines with different amounts of RAM, you can be fairly sure the two machines will end up with different memory allocations.
As you no doubt know, there is a massive difference between actual memory used and allocated. An application's allocated memory doesn't mean that it's actually being used anywhere; all it really means is that the OS has 'marked' a zone of virtual memory (which is exactly that - virtual) ready for use by the application.
The memory isn't necessarily being used or starving other processes - it just could if the app starts to fill it.
This allocated number, also, will likely scale based on the overall memory ecosystem of the machine. If there's plenty of room when an app starts up, then it'll likely grab a larger allocation than if there's less.
That principle is the same as the one which says it's good practice to create a List<T>, say, with a reasonable initial capacity, meaning a decent number of items can be added before any resizing needs to take place. The OS takes the same approach with memory.
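A tiny sketch of that analogy: capacity is space set aside up front, count is what's actually in use.

```csharp
using System;
using System.Collections.Generic;

var items = new List<int>(10000); // capacity reserved up front
items.Add(42);

Console.WriteLine(items.Capacity); // 10000 -- space set aside, mostly unused
Console.WriteLine(items.Count);    // 1     -- what is actually in use
```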
"Reserving" memory is by no means the same as "allocated" ram. Read the posts Steve and Krishna linked to.
The part your client needs to look at is Private Bytes. But even that isn't exactly a hard number as parts of your app may be swapped to the virtual disk.
In short, unless your Private Bytes section is pretty well out of control OR you have leaks (ie: undisposed unmanaged resources) you (and your client) should ignore this and let the OS manage what is allocated, what's in physical ram and what's swapped out to disk.
It's fairly common for software to issue one large memory request to the underlying operating system, then internally manage its own use of the allocated memory block. So common, in fact, that Windows' (and other operating systems') memory manager explicitly supports the concept, called "uncommitted memory" -- memory that the process has requested but hasn't made use of yet. That memory really doesn't exist as far as bits taking up space on your DRAM chips until the process actually makes use of it. The preallocation of memory effectively costs nothing.
Applications do this for many reasons -- though it's primarily done for performance reasons. An application with knowledge of its own memory usage patterns can optimize its allocator for that pattern; similarly, for address locality reasons, as successive memory requests from the OS won't always be 'next' to each other in memory, which can affect the performance of the CPU cache and could even preclude you from using some optimizations.
.NET in particular allocates space for the managed heap ahead of time, for both of the reasons listed above. In most cases, allocating memory on the managed heap merely involves incrementing a top-of-heap pointer, which is incredibly fast. That is not possible with the standard memory allocator (which has a more general design so it performs acceptably in a fragmented heap, whereas the CLR's GC uses compaction to sharply limit fragmentation of the managed heap), and it is also not possible if the managed heap itself is fragmented across the process address space due to multiple allocations at different points in time.
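To make the reserve/commit distinction concrete, here's a minimal Windows-only sketch that calls VirtualAlloc directly (the sizes are arbitrary). Reserving address space costs essentially nothing until pages are committed and touched.

```csharp
using System;
using System.Runtime.InteropServices;

class ReserveVsCommit
{
    const uint MEM_RESERVE    = 0x2000;
    const uint MEM_COMMIT     = 0x1000;
    const uint PAGE_READWRITE = 0x04;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize,
                                      uint flAllocationType, uint flProtect);

    static void Main()
    {
        var size = new UIntPtr(256u * 1024 * 1024); // 256 MB of address space

        // 1. Reserve address space only: no physical memory or page file is used yet.
        IntPtr region = VirtualAlloc(IntPtr.Zero, size, MEM_RESERVE, PAGE_READWRITE);
        if (region == IntPtr.Zero) throw new InvalidOperationException("reserve failed");

        // 2. Commit one page of it: only now do those bytes count against the process.
        VirtualAlloc(region, new UIntPtr(4096), MEM_COMMIT, PAGE_READWRITE);

        // 3. Touching the committed page is what finally brings it into the working set.
        Marshal.WriteByte(region, 0, 1);
    }
}
```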
How can I get the actual memory used by my C# application?
Task Manager shows different metrics.
Process Explorer shows increased usage of private bytes.
Performance counter (perfmon.msc) showed different metrics
When I used a .NET memory profiler, it showed that most of the memory had been garbage collected and there were only a few live bytes.
I do not know which to believe.
Memory usage is somewhat more complicated than displaying a single number or two. I suggest you take a look at Mark Russinovich's excellent post on the different kinds of counters in Windows.
.NET only complicates matters further. A .NET process is just another Windows process, so obviously it will have all the regular metrics, but in addition to that the CLR acts as a memory manager for the managed application. So depending on the point of view these numbers will vary.
The CLR effectively allocates and frees virtual memory in big chunks on behalf of the .NET application and then hands out bits of memory to the application as needed. So while your application may use very little memory at a given point in time this memory may or may not have been released to the OS.
On top of that the CLR itself uses memory to load IL, compile IL to native code, store all the type information and so forth. All of this adds to the memory footprint of the process.
If you want to know how much memory your managed application uses for data, the "# Bytes in all Heaps" counter is useful. Private Bytes may be used as a somewhat rough estimate of the application's memory usage at the process level.
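If you'd rather read those two counters from code than from perfmon, here is a small sketch. It uses the standard English category and counter names and assumes the counter instance matches the process name.

```csharp
using System;
using System.Diagnostics;

class HeapCounters
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;

        // Managed data only vs. everything the process has committed privately.
        using (var heapBytes = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance))
        using (var privateBytes = new PerformanceCounter("Process", "Private Bytes", instance))
        {
            Console.WriteLine("# Bytes in all Heaps: {0:N0}", heapBytes.NextValue());
            Console.WriteLine("Private Bytes:        {0:N0}", privateBytes.NextValue());
        }
    }
}
```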
You may also want to check out these related questions:
Reducing memory usage of .NET applications?
How to detect where a Memory Leak is?
If you are using VS 2010, you can use the Visual Studio 2010 Profiler.
This tool can create very informative reports for you.
If you want to know approximately how many bytes are allocated on the GC heap (ignoring memory used by the runtime, the JIT compiler, etc.), you can call GC.GetTotalMemory. We've used this when tracking down memory leaks.
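A minimal example of that call; passing true forces a collection first, so the number reflects live objects rather than garbage waiting to be collected:

```csharp
using System;

long liveBytes = GC.GetTotalMemory(forceFullCollection: true);
Console.WriteLine("Approx. live bytes on the GC heap: {0:N0}", liveBytes);
```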
Download VADump (if you do not have it yet).
Usage: VADUMP.EXE -sop [PID]
Well, what is "actual memory used in my C# application" ?
Thanks to Virtual memory and (several) Memory management layers in Windows and the CLR, this is a rather complicated question.
From the sources you mention the CLR profiler will give you the most detailed breakdown, I would call that the most accurate.
But there is no 'single number' answer, the question whether Application A use more or less memory than B can be impossible to answer.
So what do you actually want to know? Do you have a concrete performance problem to solve?
I am mostly curious, and this isn't a problem. Typically my (C++) apps use very little memory. I thought my current app would take little memory, but it uses 3.7 MB, with a VM size of 17.3 MB. The app has 4 icons in its resource file, 4 ints in the local (user) settings, and the app is under 1k LoC. It detects key input and writes a line to a listbox when the user goes idle (by calling a Windows function). It puts itself in the system tray and has a timer set to 100 ms.
There are no arrays or any other storage except for a few structs that come to less than 256 bytes in total. Why is my app using 17 MB+ of VM?
Because it's a managed application, a part of the CLR will also be loaded in memory. Also, the CLR will allocate a bunch of memory so that it may satisfy new object requests (it does not allocate each object from the system). There's also a bunch of other objects that get allocated for each application in a managed model (for instance the thread pool, the garbage collector, etc).
I'm not sure you can do much about reducing that, but on the flip side, you won't see it scale linearly with the app complexity (as in if you make it twice the complexity, it won't use twice the memory).
17 megs sounds about right for a simple C# app.
I guess it's the perennial 'hardware use versus programmer productivity' argument.
Grab the .NET memory profiler if you care to see exactly what's taking up that memory.
Programs written with the .NET framework inherently have more overhead.
Something to bear in mind is that each managed thread has a 1MB stack, too. If you're doing anything with threads, that's a couple of MB right away.
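If those per-thread stack reservations ever matter, they can be shrunk for threads you create yourself; a small sketch (256 KB is an arbitrary choice):

```csharp
using System;
using System.Threading;

class SmallStackThread
{
    static void Main()
    {
        // Second argument is the maximum stack size in bytes (the default reservation is 1 MB).
        var worker = new Thread(DoWork, 256 * 1024);
        worker.Start();
        worker.Join();
    }

    static void DoWork()
    {
        Console.WriteLine("Running with a smaller stack reservation.");
    }
}
```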
Don't worry about memory consumption for a Hello World app.
A managed-language app handles its memory usage differently than, say, C, where every memory allocation carries the risk of never being deallocated.
In some cases a .NET app may even run faster than an equivalent app written in C++, if the app spends a lot of time in malloc/free, because the CLR can put off deallocation/garbage collection until the app is idle.