I have a long-running console app running through millions of iterations. I want to benchmark whether memory usage increases linearly as the number of iterations increases.
What would be the best way to go about this?
I think I really only need to be concerned with peak memory usage during a run, right? I basically need to work out the maximum number of iterations I can run on this hardware, given the memory on the server.
I am going to set up a batch of runs, log the results over different iteration sizes, and then graph the results to identify a memory usage trend which can then be extrapolated for any given hardware.
I'm looking for advice on the best way to implement this: which .NET methods and classes to use, or whether I should use external tools. This article http://www.itwriting.com/dotnetmem.php suggests I should profile my own app via code to factor out shared memory used by the .NET runtime across other apps on the box.
Thanks
There are several ways to do this:
Perfmon UI
You can use the Performance Monitor control panel applet (in Administrative Tools) that comes with Windows to monitor your application. Have a look at the .NET CLR Memory category and the counters inside it. You can also limit the monitoring to just your process. This is probably the easiest option, as it requires no code changes.
Perfmon API
You can programmatically use performance counters from .NET. For this, you'll need the PerformanceCounter class. This is just an API to the same underlying information that the above UI presents.
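A minimal sketch of this (the harness is mine; note that the "Process" category instance name can get a "#1"-style suffix when several processes share an executable name):

    using System;
    using System.Diagnostics;

    class PrivateBytesSample
    {
        static void Main()
        {
            // The "Process" category exposes the same data the perfmon UI shows.
            string instance = Process.GetCurrentProcess().ProcessName;
            using (var pc = new PerformanceCounter("Process", "Private Bytes", instance))
            {
                Console.WriteLine("Private bytes: {0:N0}", pc.NextValue());
            }
        }
    }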
Memory Profiler
You can use a memory profiler to profile your application while it is running. The two I have used with success are the ANTS Memory Profiler from Red Gate and the .NET Memory Profiler from SciTech. Neither requires code changes, but they may cost you money (although there are free trial versions). There is also the CLR Profiler (a how-to can be found here).
Roll Your Own
You can get hold of some limited memory information from the Process class. Get the current process using Process.GetCurrentProcess(), and then look at its memory-related properties (MinWorkingSet, MaxWorkingSet, PagedMemorySize64, PeakPagedMemorySize64, PeakVirtualMemorySize64, PeakWorkingSet64, PrivateMemorySize64, VirtualMemorySize64, WorkingSet64). This is probably the worst solution, as you have to do everything yourself, including data collection and reporting.
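A minimal sketch of what such a logger could look like (the harness is mine; the property names are straight from the Process class):

    using System;
    using System.Diagnostics;

    class MemorySnapshot
    {
        static void Main()
        {
            using (Process p = Process.GetCurrentProcess())
            {
                p.Refresh(); // re-read the cached memory figures
                Console.WriteLine("Private bytes:     {0:N0}", p.PrivateMemorySize64);
                Console.WriteLine("Working set:       {0:N0}", p.WorkingSet64);
                Console.WriteLine("Peak working set:  {0:N0}", p.PeakWorkingSet64);
                Console.WriteLine("Peak virtual size: {0:N0}", p.PeakVirtualMemorySize64);
            }
        }
    }

You could call something like this at the end of each batch of iterations and write the figures to a CSV for graphing.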
If all you want to do is verify that your application's memory usage doesn't increase linearly as the number of iterations increases, I would recommend monitoring using the Performance Monitor UI in Windows. It will show you what you need with minimal effort on your part.
Windows perfmon is great for this kind of stuff. You have counters for all the managed heaps and for private bytes, and if you need to cross-correlate with iteration counts, you can publish your own perfmon counters from .NET.
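A rough sketch of publishing such a counter (the category and counter names are invented; creating a category requires admin rights and is normally a one-time setup step):

    using System.Diagnostics;

    class IterationCounter
    {
        public static PerformanceCounter Create()
        {
            const string category = "MyApp"; // hypothetical category name
            if (!PerformanceCounterCategory.Exists(category))
            {
                PerformanceCounterCategory.Create(
                    category, "MyApp counters",
                    PerformanceCounterCategoryType.SingleInstance,
                    "Iterations", "Iterations completed so far");
            }
            return new PerformanceCounter(category, "Iterations", readOnly: false);
        }
    }

In the main loop you would then call Increment() on the returned counter and watch it alongside the memory counters in perfmon.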
When you want to profile the memory usage of your app, here is a piece of code you can use.
First, you have to declare an instance of the PerformanceCounter class:
    using System.Diagnostics;   // PerformanceCounter
    using System.Reflection;    // Assembly

    private PerformanceCounter _PerfCounter;

    // The counter's instance name is the executing assembly's name:
    Assembly a = Assembly.GetExecutingAssembly();
    _PerfCounter = new PerformanceCounter(".NET CLR Memory",
                                          "# Bytes in all Heaps",
                                          a.GetName().Name,
                                          true);
and each time you want to log the memory consumption, you can call:

    _PerfCounter.NextValue();
Hope this helps.
Maybe this tool can do everything you want.
Better late than never, but in response to M3ntat's comment, you'll need to add ".vshost" onto the assembly name parameter if you're running from within Visual Studio, e.g.
a.GetName().Name + ".vshost"
I understand there are many questions related to this, so I'll be very specific.
I created a console application with two instructions: create a List with some large capacity and fill it with sample data, then clear that List or set it to null.
What I want to know is whether there is a way for me to know/measure/profile, while debugging or not, whether the actual memory used by the application after the list was cleared and nulled is about the same as before the list was created and populated. I know for sure that the application has disposed of the information and the GC has finished collecting, but can I know for sure how much memory my application would consume after this?
I understand that during the process of filling the list a lot of memory is allocated, and after it's been cleared that memory may become available to other processes if needed, but is it possible to measure the real memory consumed by the application at the end?
Thanks
Edit: OK, here is my real scenario and objective. I work on a WPF application that works with large amounts of data read through a USB device. At some point, the application allocates about 700+ MB of memory to store all the List data, which it parses, analyzes, and then writes to the filesystem. When I write the data to the filesystem, I clear all the Lists and dispose of all the collections that previously held the large data, so I can do another round of data processing. I want to know that I won't run into performance issues or eventually use up all the memory. I'm fine with my program using a lot of memory, but I'm not fine with it using all of it after a few USB processing runs.
How can I go about controlling this? Are memory or process profilers used in cases like this? Simply using Task Manager, I see my application taking up 800 MB of memory, but after I clear the collections the memory stays the same. I understand this won't go down unless Windows needs it, so I was wondering if I can know for sure that the memory is cleared and free to be used (by my application or Windows).
It is very hard to measure "real memory" usage on Windows if you mean physical memory. Most likely you want something else, such as:
1. The amount of memory allocated for the process (see Zooba's answer)
2. The amount of managed memory allocated - CLR Profiler, or any other profiler listed in this question: Best .NET memory and performance profiler?
3. What Task Manager reports for your application
Note that it is not necessarily the case that the amount of memory allocated to your process (1) changes after garbage collection finishes - the GC may keep the memory around for future managed allocations (this behavior is not specific to the CLR - most memory allocators keep free blocks for later use unless forced to release them by some means). The http://blogs.msdn.com/b/maoni/ blog is an excellent source for details on GC/memory.
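To see that effect for yourself, here is a small sketch (sizes and structure invented) that compares the managed heap with process-level private bytes before and after clearing a large list:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class ListClearDemo
    {
        static void Main()
        {
            long managedBefore = GC.GetTotalMemory(true);

            var list = new List<byte[]>();
            for (int i = 0; i < 10000; i++)
                list.Add(new byte[10 * 1024]); // roughly 100 MB in total

            list.Clear();
            list = null;
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            long managedAfter = GC.GetTotalMemory(true);
            using (Process p = Process.GetCurrentProcess())
            {
                p.Refresh();
                Console.WriteLine("Managed heap: {0:N0} -> {1:N0} bytes",
                                  managedBefore, managedAfter);
                // Private bytes typically stay high: the CLR keeps the freed
                // segments around for future allocations.
                Console.WriteLine("Private bytes: {0:N0}", p.PrivateMemorySize64);
            }
        }
    }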
Process Explorer will give you all the information you need. Specifically, you will probably be most interested in the "private bytes history" graph for your process.
Alternatively, it is possible to use Windows' Performance Monitor to track your specific application. This should give identical information to Process Explorer, though it will also let you write the actual numbers out to a separate file.
I personally use the SciTech Memory Profiler.
It has a real-time option that you can use to watch your memory usage, and it has helped me find a number of memory-leak problems.
Try ANTS Profiler. It's not free, but you can try the trial version.
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/
How can I get the actual memory used by my C# application?
Task Manager shows different metrics.
Process Explorer shows increased usage of private bytes.
The performance counters (perfmon.msc) showed different metrics again.
When I used the .NET Memory Profiler, it showed that most of the memory had been garbage collected and only a few live bytes remained.
I do not know which to believe.
Memory usage is somewhat more complicated than displaying a single number or two. I suggest you take a look at Mark Russinovich's excellent post on the different kinds of counters in Windows.
.NET only complicates matters further. A .NET process is just another Windows process, so obviously it will have all the regular metrics, but in addition to that the CLR acts as a memory manager for the managed application. So depending on the point of view these numbers will vary.
The CLR effectively allocates and frees virtual memory in big chunks on behalf of the .NET application and then hands out bits of memory to the application as needed. So while your application may use very little memory at a given point in time this memory may or may not have been released to the OS.
On top of that the CLR itself uses memory to load IL, compile IL to native code, store all the type information and so forth. All of this adds to the memory footprint of the process.
If you want to know how much memory your managed application uses for data, the "# Bytes in all Heaps" counter is useful. Private bytes may be used as a somewhat rough estimate of the application's memory usage at the process level.
You may also want to check out these related questions:
Reducing memory usage of .NET applications?
How to detect where a Memory Leak is?
If you are using VS 2010, you can use the Visual Studio 2010 Profiler.
This tool can create very informative reports for you.
If you want to know approximately how many bytes are allocated on the GC heap (ignoring memory used by the runtime, the JIT compiler, etc.), you can call GC.GetTotalMemory. We've used this when tracking down memory leaks.
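For example (a sketch only - the figure is approximate, since other threads may allocate between the two calls):

    long before = GC.GetTotalMemory(true);
    var data = new int[1000000];          // the operation you want to measure
    long after = GC.GetTotalMemory(true);
    Console.WriteLine("Approx. allocated: {0:N0} bytes", after - before);
    GC.KeepAlive(data);                   // keep the array alive past the measurement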
Download VADump (If you do not have it yet)
Usage: VADUMP.EXE -sop [PID]
Well, what is the "actual memory used by my C# application"?
Thanks to virtual memory and the (several) memory-management layers in Windows and the CLR, this is a rather complicated question.
Of the sources you mention, the CLR Profiler will give you the most detailed breakdown; I would call that the most accurate.
But there is no single-number answer, and the question of whether application A uses more or less memory than application B can be impossible to answer.
So what do you actually want to know? Do you have a concrete performance problem to solve?
I was hoping someone could explain why my application uses varying amounts of RAM when loaded. I'm speaking about a compiled version that runs the exe directly. It's a pretty basic application, and there are no conditional branches in its startup. Yet every time I start it up, the amount of RAM used varies from 6 MB to 16 MB.
I know it's on the small end of usage anyway, but I'm curious why this happens.
Edit: to give a bit more clarification on what the app actually does:
It is a WinForm project.
It connects to a database using sqlclient to retrieve a list of servers.
Based on that list a series of buttons are created to start and stop a service on those servers.
It uses a System.Timers.Timer to audit the status of the services on those servers every 20 seconds (a sketch of this step follows below).
At this point the application sits there and waits for user input via one of the button clicks to start/stop a service.
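For what it's worth, the audit step described above might look roughly like this (a sketch; the service name "MyService" and the surrounding structure are invented):

    using System;
    using System.ServiceProcess; // requires a reference to System.ServiceProcess.dll
    using System.Timers;

    class ServiceAuditor
    {
        private readonly Timer _timer = new Timer(20000); // 20-second audit interval
        private readonly string[] _servers;

        public ServiceAuditor(string[] servers)
        {
            _servers = servers;
            _timer.Elapsed += OnElapsed;
            _timer.Start();
        }

        private void OnElapsed(object sender, ElapsedEventArgs e)
        {
            foreach (string server in _servers)
            {
                using (var sc = new ServiceController("MyService", server))
                {
                    Console.WriteLine("{0}: {1}", server, sc.Status);
                }
            }
        }
    }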
The trick here is that the amount of RAM reported by Task Manager is not the amount of RAM used by your application. Rather, it is the amount of RAM reserved for use by your application.
Remember that with managed frameworks like .Net, you don't request or release memory directly. Rather, a garbage collector manages the memory for you. The amount of memory reserved for your application at a given time can vary and depends on a lot of different factors, including memory pressure created at the time by other programs.
Think of it this way: if you need 10 MB of RAM for your app, is it faster to request and return it to the operating system 1 MB at a time over ten requests/releases, or to reserve the whole block at once with a single request/release? Now extend that to a scenario where you don't know exactly how much RAM you'll need, only that it's somewhere in the neighborhood of 10 MB, and your computer has 1 GB sitting there unused. Of course the best thing to do is take a good-sized chunk of that available RAM. Even 20 or 30 MB wouldn't be unreasonable relative to the RAM that's sitting there unused, because unused RAM is wasted performance.
If your system later starts to feel memory pressure, .NET can easily return some RAM to the system. This is one of the ways managed languages can sometimes give better performance than languages like C++ with traditional memory management: a garbage collector can more easily take the health of the entire system into account when allocating memory.
What are you using to determine how much memory is being "used"? Even with regular applications, Windows will aggressively allocate unused memory in advance; with .NET applications it's even more complicated to tell how much memory is actually being used and how much Windows is just tacking on so that it will be available instantly when needed. If another application actually asks for memory, this reserved memory will be repurposed.
One way to check is to minimize the application (at least on XP). If you are watching the memory use in something like Task Manager, you'll notice it drops off right away, eliminating the seemingly "random" amount allocated.
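You can trigger the same trimming from code, which makes it easy to confirm in Task Manager that the "extra" memory was only a generously sized working set (illustration only - this pages memory out rather than freeing it, and is rarely worth doing in production):

    using System;
    using System.Runtime.InteropServices;

    static class WorkingSetTrimmer
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        private static extern bool SetProcessWorkingSetSize(
            IntPtr hProcess, IntPtr minSize, IntPtr maxSize);

        // Passing -1 for both sizes asks Windows to trim the working set,
        // mimicking what happens when the window is minimized.
        public static void Trim()
        {
            SetProcessWorkingSetSize(
                System.Diagnostics.Process.GetCurrentProcess().Handle,
                (IntPtr)(-1), (IntPtr)(-1));
        }
    }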
It may be related to the JIT compiler: after the first load the jitter has already created a compiled version, so it has less work to do. Other than that, you would have to give us some more details about the app and which kind of memory you are referring to.
We've built a web application which performs horribly even with a lot of resources available. My boss doesn't believe me that the application is consuming a lot of hardware I/O, so I have to prove that the hardware is OK but the web app is really crap.
The app is using:
SQL Server 2000 with SP4
The main web application (.NET 3.5)
Two Web Services (.NET 1.1)
Biztalk 2004
There are 30 people using this application.
How can I prove I am right?
You can hook up a profiler like ANTS profiler or JetBrains DotTrace and see where the application's performance bottlenecks are.
One place you could start is getting a performance profiler like Red Gate's ANTS Profiler. I've used this tool, and it's very useful in weeding out performance bottlenecks.
Randy
You could start by using SQL Server Profiler to get an impression of the amount of database traffic that is going on.
I'm not saying that database interaction is the bottleneck, but it often is, and the tool is already there if you are using SQL Server, so it may be a good idea to take a look at that before you go out and buy a lot of profiling tools.
Visual Studio 2008 also have built-in performance analysis tools.
Windows performance counters are a good way to get some basic information about general system performance. The proper counters will show you whether it's really the I/O that's doing lots of work. If you pull the numbers from the counters and compare them to the hardware's specs, you should be able to tell whether the system is maxing out or not.
If the system is maxing out, it's a problem with the web application, and it should be profiled to find out where to start optimizing.
You could use the System Performance Monitor built into Windows since at least XP. You can get almost any information you could possibly need, including CPU time, .NET memory usage (gen 0, gen 1, and gen 2), native memory usage, time spent in garbage collection, disk access times, etc. If you search CodeProject or the web, there are many examples of using these counters to test for just about anything you want.
One of the benefits of this is that you don't have to change your code, and it can be used with an existing system.
I find this is the best starting point for pinpointing bottlenecks and issues.
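For instance, a sketch that samples two of the relevant counters from code (the counter names are as they appear in perfmon; the harness is mine - for a web app you would substitute the worker process's instance name rather than the current process):

    using System;
    using System.Diagnostics;

    class CounterSample
    {
        static void Main()
        {
            string instance = Process.GetCurrentProcess().ProcessName;
            using (var gcTime = new PerformanceCounter(".NET CLR Memory", "% Time in GC", instance))
            using (var diskTime = new PerformanceCounter("PhysicalDisk", "% Disk Time", "_Total"))
            {
                // The first NextValue() on a percentage counter returns 0;
                // sample once, wait, then read the real value.
                gcTime.NextValue();
                diskTime.NextValue();
                System.Threading.Thread.Sleep(1000);
                Console.WriteLine("% Time in GC: {0:F1}", gcTime.NextValue());
                Console.WriteLine("% Disk Time:  {0:F1}", diskTime.NextValue());
            }
        }
    }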
How do you calculate memory (RAM) bandwidth used? Which performance counters are required?
I came across a tool that was able to do it, the "RightMark multi-threaded memory test", but unlike the rest of RightMark's tests, I haven't found the source code for it, just the binaries.
If your code can run on Linux, use Cachegrind:
Cachegrind is a cache profiler. It performs detailed simulation of the I1, D1 and L2 caches in your CPU and so can accurately pinpoint the sources of cache misses in your code. It identifies the number of cache misses, memory references and instructions executed for each line of source code, with per-function, per-module and whole-program summaries. It is useful with programs written in any language. Cachegrind runs programs about 20-100x slower than normal.
You may want to use the KCacheGrind GUI.
It is very difficult to 'calculate' memory bandwidth usage. There are lots of non-trivial cache and MMU issues to contend with. The only real way to do it is either through simulation or real-world measurement.
You can get a 'rough' idea by debugging the code and counting the number of memory load and store operations performed. However, knowing whether it was a cache hit/miss is another issue.
It depends on your purpose. If you want a guesstimate, you can use the rule of thumb that about 30% of general-purpose code consists of memory loads and stores. If you're trying to get a worst case, you can assume that the caches miss every time and work it out from there.
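As a back-of-envelope illustration of that rule of thumb (the numbers are invented for the example): a core retiring on the order of 10^9 instructions per second, with ~30% of those being loads or stores of, say, 8 bytes each, requests roughly 0.3 × 10^9 × 8 ≈ 2.4 GB/s. That is an upper bound on actual RAM traffic, since the caches will absorb most of those accesses.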
One potential thing you could do is look at virtualisation. There are several open-source options (QEMU comes to mind). It may be possible to export certain hardware measurements from them.