We are designing a stress test application which will send a large volume of 1 MB HTTP requests to a particular web service. To generate the stress, the application uses multiple threads. The structure is roughly this: X enqueue threads create the HTTP request data and add it to a queue, and Y worker threads dequeue the requests and submit them to the web service.
All requests are asynchronous.
The problem is that the enqueue threads work much faster than the worker threads, so without a stop/wait condition they keep adding requests until an OutOfMemoryException is thrown, which also slows down the injector (the machine where this utility runs).
At present we are handling the OutOfMemoryException and making the enqueue threads sleep for some time. Another way I can think of is to limit the queue size.
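A minimal sketch of the bounded-queue approach, using BlockingCollection<T> (the capacity of 100 is a placeholder to tune; the 1 MB byte array stands in for your real request data):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BoundedQueueSketch
{
    static void Main()
    {
        // Bounded to 100 pending requests: Add blocks the enqueue
        // threads once the queue is full, so memory use stays capped.
        var queue = new BlockingCollection<byte[]>(boundedCapacity: 100);

        // Enqueue thread: blocks instead of exhausting memory.
        Task.Run(() =>
        {
            while (true)
                queue.Add(new byte[1024 * 1024]); // ~1 MB request payload
        });

        // Worker thread: GetConsumingEnumerable blocks until an item is available.
        Task.Run(() =>
        {
            foreach (var request in queue.GetConsumingEnumerable())
            {
                // submit the request to the web service here
            }
        });

        Console.ReadLine();
    }
}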
However, I would like to hear views on what the best approach is for making use of the limited system resources (especially memory).
Thanks in advance.
You can, and probably should, use the MemoryFailPoint class in a scenario like this.
If you get an OutOfMemoryException, the application state could be corrupt and you shouldn't try to recover from it. MemoryFailPoint is designed to avoid this by letting you decide how much to slow your application down so that you avoid running out of memory. You let the framework determine whether you can perform the operation, rather than guessing how much you "think" you can get away with based on how much memory your app is using.
You should also check memory usage through the garbage collector, not the process, to get an accurate reading of how much managed memory is actually allocated. Using the private memory size will give you a much lower reading, and you could still end up in an out-of-memory situation while it appears you have plenty to spare.
The code sample on the MSDN page shows how to estimate the memory usage of an operation and use that information to wait until memory is available before processing more requests. If you can identify the areas of code with large memory requirements, this is a good way to constrain them and avoid running out of memory.
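A condensed version of that pattern (the 10 MB per-request estimate and the request-building placeholder are assumptions to adapt):

using System;
using System.Runtime;
using System.Threading;

static void ProcessOneRequestSafely()
{
    const int estimatedMbPerRequest = 10; // assumption: tune for your payload plus overhead

    try
    {
        // Throws InsufficientMemoryException *before* anything is allocated
        // if the estimate cannot be satisfied, so state is never corrupted.
        using (new MemoryFailPoint(estimatedMbPerRequest))
        {
            // build and enqueue one request here (placeholder)
        }
    }
    catch (InsufficientMemoryException)
    {
        // Safe to catch, unlike OutOfMemoryException: back off and retry.
        Thread.Sleep(1000);
    }
}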
Well, going by the topic of the question, the best way to avoid an out-of-memory exception is not to create the objects that fill up that memory in the first place.
Handling the exception is the easiest solution, though it may introduce difficulties and inconsistencies into the application over time. Another way is to check the size of the available memory resources, like this:
using System.Diagnostics;

Process currentProcess = Process.GetCurrentProcess();
long memorySize = currentProcess.PrivateMemorySize64; // private bytes committed to this process
Then you can calculate the length of your queue from an estimate of the memory footprint of one object.
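For example (the 512 MB budget is an arbitrary assumption; the 1 MB per request comes from the question):

using System;
using System.Diagnostics;

const long bytesPerRequest = 1024 * 1024;     // ~1 MB per queued request (from the question)
const long memoryBudget = 512L * 1024 * 1024; // assumption: cap the app at 512 MB

long used = Process.GetCurrentProcess().PrivateMemorySize64;
long headroom = Math.Max(0, memoryBudget - used);
int maxQueueLength = (int)(headroom / bytesPerRequest);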
Another way would be to check the memory size in each worker thread. Whenever there's no memory left, the thread can simply finish. This way many threads would be spawned and die, but the application would stay at its maximum available capacity.
I have a server app which receives lots of UDP packets from several sensors 50 times per second (a new packet every 20 ms), does some analysis, stores them, does some debug logging and other "server stuff".
The problem is that the GC performs a full blocking collection every once in a while, suspending all threads for up to 200 ms (perhaps even more at some rare percentiles). I don't have a problem with lagging a couple of milliseconds behind each packet (even a sustained latency of, say, 10 ms per packet wouldn't be an issue), but the long suspensions are really annoying.
According to MSDN, there is a SustainedLowLatency mode for the GC:
Enables garbage collection that tries to minimize latency over an extended period. The collector tries to perform only generation 0, generation 1, and concurrent generation 2 collections.
Full blocking collections may still occur if the system is under memory pressure.
Does "extended period" still mean I cannot simply set the mode to SustainedLowLatency and forget it?
Is there any way to prevent full blocking collections, at least for a single core?
Basically, this mode will try to use only background gen 2 collections, and do a full GC only when the system starts running out of memory. The delay between blocking collections depends entirely on your code: the less you use gen 2, the less the memory gets fragmented, and the longer you'll last. Unfortunately, sockets use pinned buffers, which are a typical cause of memory fragmentation. You can try to reuse your buffers, but that involves writing tricky low-level code against the socket API.
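Enabling the mode is a one-liner (it is a hint to the collector, not a guarantee):

using System.Runtime;

// Prefer background gen 2 collections for as long as the process can sustain it.
GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;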
The only way to prevent full blocking collections is to use TryStartNoGCRegion. There again, how long you can last depends on how much memory you allocate. Use small objects. Use structs instead of classes whenever possible. Use pooling whenever you need large arrays.
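A sketch of the no-GC-region pattern (the 16 MB budget is a placeholder; the call fails up front if the budget cannot be reserved):

using System;
using System.Runtime;

// Reserve an allocation budget; no collection runs until the region
// ends or the budget is exhausted.
if (GC.TryStartNoGCRegion(16 * 1024 * 1024))
{
    try
    {
        // latency-critical work: receive and analyze packets here
    }
    finally
    {
        // EndNoGCRegion throws if the region already ended, so check first.
        if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
            GC.EndNoGCRegion();
    }
}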
Profiling (for instance, with JetBrains dotTrace) will help you spot and optimize the code paths that allocate a lot of memory.
I have a Windows service written in C#/.NET. When the service is started, I spawn a new thread as shown below:
new Thread(new ThreadStart(Function1)).Start();
This thread loops infinitely and performs the duties expected of my service. Once a day, I need to simultaneously perform a different operation, for which my thread spawns a second thread as shown below:
new Thread(new ThreadStart(Function2)).Start();
This second thread performs a very simple function. It reads all the lines of a text file using File.ReadAllLines, quickly processes the information, and exits.
My problem is that the memory used by the second thread, the one which reads the file, is not getting collected. I let my service run for 3 hours hoping that the GC would run, but nothing happened and Task Manager still shows my service using 150 MB of memory. The function that reads and processes the text file is a very simple one, and I am sure there are no hidden references to the string array containing the text. Could someone shed some light on why this is happening? Could it be that a thread spawned by another spawned thread cannot clean up after itself?
Thanks
Trust the garbage collector and stop worrying. 150 MB is nothing, and you aren't even measuring the size of the file in that; most of it will be code.
If you are concerned about where memory is going, start by understanding how memory works in a modern operating system. You need to understand the difference between virtual and physical memory, and between committed and allocated memory, before you start throwing around numbers like "150 MB of allocated memory". Remember, you have 2000 MB of virtual address space in a 32-bit process; I would not consider a 150 MB process large by any means.
Like Jon says, what you want to watch for is a slow, steady rise in private bytes. If that's not happening, then you don't have a memory leak. Let the garbage collector do its job and don't worry about it.
If you are still worried, good heavens, do not use Task Manager. Get a memory profiler and learn how to use it. Task Manager is for inspecting processes by looking down on them from 30,000 feet. You need a microscope, not a telescope, to analyze how the process frees the bytes of a single file.
If you're using Windows Task Manager to try to work out the memory used, it's likely to be deceiving you. As far as I'm aware, memory used by the CLR isn't generally returned to the operating system... so you'll potentially still see a high working set, even though most of that memory is still available to be reused within the process.
If you let the service run for a week, does the memory usage climb steadily through the week, or does it increase on the first day and then plateau? If it keeps climbing, and you definitely view that as a problem, you may need to put your second task in a separate process.
I understand there are many questions related to this, so I'll be very specific.
I create a console application with two instructions: create a List with some large capacity and fill it with sample data, then clear that List or set it to null.
What I want to know is whether there is a way for me to measure or profile, while debugging or not, whether the actual memory used by the application after the list was cleared and nulled is about the same as before the list was created and populated. I know for sure that the application has let go of the data and that the GC has finished collecting, but can I know for sure how much memory my application consumes after this?
I understand that during the process of filling the list a lot of memory is allocated, and after it's been cleared that memory may become available to other processes that need it; but is it possible to measure the real memory consumed by the application at the end?
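One way to run this experiment from inside the process is GC.GetTotalMemory(true), which forces a collection before reading the managed heap size; a minimal sketch of the console test described above:

using System;
using System.Collections.Generic;

long before = GC.GetTotalMemory(forceFullCollection: true);

var list = new List<byte[]>(capacity: 10000);
for (int i = 0; i < 10000; i++)
    list.Add(new byte[1024]); // sample data

list.Clear();
list = null;

// Forces a full collection and waits for it, so the cleared list's
// memory has actually been reclaimed by the time we read the number.
long after = GC.GetTotalMemory(forceFullCollection: true);
Console.WriteLine($"Before: {before:N0} bytes, after: {after:N0} bytes");

Note this reports the managed heap, not the working set Task Manager shows; that distinction is exactly what the answers below get at.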
Thanks
Edit: OK, here is my real scenario and objective. I work on a WPF application that works with large amounts of data read through a USB device. At some point, the application allocates about 700+ MB of memory to store all the List data, which it parses, analyzes, and then writes to the file system. When I write the data to the file system, I clear all the Lists and dispose of all the collections that previously held the large data, so I can do another round of processing. I want to know that I won't run into performance issues or eventually use up all the memory. I'm fine with my program using a lot of memory, but I'm not fine with it using all of it after a few USB runs.
How can I go about controlling this? Are memory or process profilers used in a case like this? Simply using Task Manager, I see my application taking up 800 MB of memory, but after I clear the collections the number stays the same. I understand this won't go down unless Windows needs the memory, so I was wondering whether I can know for sure that the memory is cleared and free to be used (by my application or by Windows).
It is very hard to measure "real memory" usage on Windows if you mean physical memory. Most likely you want something else, such as:
Amount of memory allocated for the process (see Zooba's answer)
Amount of managed memory allocated - CLR Profiler, or any of the profilers listed in Best .NET memory and performance profiler?
What Task Manager reports for your application
Note that it is not necessarily the case that the amount of memory allocated to your process (1) changes after garbage collection finishes - the GC may keep the allocated memory for future managed allocations (this behavior is not specific to the CLR - most memory allocators keep free blocks for later use unless forced to release them by some means). The http://blogs.msdn.com/b/maoni/ blog is an excellent source for details on the GC and memory.
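You can see that distinction directly by reading both numbers (a quick sketch):

using System;
using System.Diagnostics;

// Managed heap actually in use, measured after a forced collection.
long managed = GC.GetTotalMemory(forceFullCollection: true);

// Memory committed to the process by the OS; typically stays higher,
// because the CLR keeps freed segments around for future allocations.
long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;

Console.WriteLine($"Managed: {managed:N0} bytes, private bytes: {privateBytes:N0}");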
Process Explorer will give you all the information you need. Specifically, you will probably be most interested in the "private bytes history" graph for your process.
Alternatively, you can use Windows Performance Monitor to track your specific application. This should give the same information as Process Explorer, though it will let you write the actual numbers out to a separate file.
I personally use SciTech Memory Profiler.
It has a real-time option that you can use to watch your memory usage. It has helped me find a number of memory leak problems.
Try ANTS Profiler. It's not free, but you can try the trial version.
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/
I have a multithreaded Windows service application for indexing which has six threads. It is working fine except for a memory leak. When the service is started, it consumes 12,584 KB of memory; after some time it is using 61,584 KB. But after the indexing process is complete, it does not release the memory.
I need it to come back to where it started once indexing is complete, i.e. it should use the memory it started with, 12,584 KB in this case.
I have tried forcing garbage collection, but it does not do what I want.
Can anyone please help me?
First advice would be to throw a memory profiler at it. I've always been happy with Red Gate's ANTS profiler, which will help identify which objects are leaking, if any.
Do bear in mind that there may be first-time initialisation happening all over the place, so you probably want to track it over time.
.NET doesn't release memory to satisfy people staring at Task Manager. Allocating memory is expensive, so the CLR is designed to do it sparingly and to hold onto any memory it allocates for as long as possible. Think of it this way: what's the point of having 4 GB of memory if you're not using half of it?
Unless you actually KNOW that you have a memory leak (as in, your app crashes after two days of uptime), let me give you a piece of advice... Close Task Manager. Don't optimize for memory before you know you need to. Relax, guy. Everything is fine.
61 MB does not sound out of the ordinary. You will need to let the service run for a while and monitor its memory usage for a rising trend. If you see the application leveling out at a certain value, there is probably nothing to worry about.
I agree with Will completely; however, I'll offer two small pieces of advice:
Don't use GC.Collect. Trust me, you are not going to improve on the CLR's garbage collection algorithm by calling it. In fact, objects that are ready to be collected but can't be for some reason get promoted to generation 2, where they have to wait even longer before being collected (thereby potentially making any leak you may have worse!).
Check everywhere in your code that you create some type of stream (usually MemoryStream), and verify that the streams are being closed properly, preferably in a "finally" block, as in the sketch below.
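For the second point, the using statement gives you the try/finally for free (a generic sketch):

using System.IO;

// "using" guarantees Dispose/Close even if an exception is thrown,
// equivalent to closing the stream in a finally block.
using (var stream = new MemoryStream())
using (var writer = new StreamWriter(stream))
{
    writer.WriteLine("indexing data"); // placeholder payload
} // both are closed here, exception or not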
Here I am using log4net 1.2.10 for exception handling and Lucene.Net 2.1.0.3 for indexing; there is one IndexWriter class used to add documents to the index or delete documents from it.
private static readonly object syncRoot = new object();

lock (syncRoot) // lock on a dedicated object; "lock (object)" does not compile
{
    indexWriter.AddDocument(document); // indexWriter is an instance of IndexWriter
}
We also use Microsoft Message Queuing (MSMQ) for retrieving messages.
I had the same problem until I changed the application from STA to MTA:
[MTAThread]
public static void Main()
instead of
[STAThread]
public static void Main()
(I've used Red Gate's ANTS profiler...)
Related to my previous question:
Preventing Memory issues when handling large amounts of text
Is there a way to determine how much memory space my program is occupying? I end up processing large text files and usually store the processed objects in memory. There are times when there is too much information, and I run out of memory. I have a solution that avoids the memory allocation problem, but I only want to use it when necessary, to avoid paging, which would ultimately decrease my performance when it's not needed. Is there a way to figure out how much memory I am occupying, so that I page my information only when necessary?
NOTE: I am looking for a solution that my program can use to begin paging when necessary.
long bytes = System.Diagnostics.Process.GetCurrentProcess().WorkingSet64;
You can try GC.GetTotalMemory:
Retrieves the number of bytes currently thought to be allocated. A parameter indicates whether this method can wait a short interval before returning, to allow the system to collect garbage and finalize objects.
The important thing to note is this part: "Retrieves the number of bytes currently thought to be allocated". This means the method may not be 100% accurate; as long as you know this going in, you can get a rough idea of your managed memory utilization at a given point in your application's execution.
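For the "page only when necessary" goal, a rough gate could look like this (the 500 MB threshold is an arbitrary assumption to tune):

using System;

const long threshold = 500L * 1024 * 1024; // assumption: tune to your machine

// Cheap check (no forced collection); the figure is approximate, as the
// documentation above warns, but good enough to trigger paging early.
if (GC.GetTotalMemory(forceFullCollection: false) > threshold)
{
    // switch to the disk-backed/paging code path here
}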
Edit: Let me now offer a different solution that will probably be more productive: use perfmon and the CLR performance counters.
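Those counters can also be read in code via System.Diagnostics (category and counter names as they appear in perfmon on .NET Framework):

using System;
using System.Diagnostics;

// ".NET CLR Memory" is the perfmon category for managed-heap counters;
// the instance name is the process name.
string instance = Process.GetCurrentProcess().ProcessName;
using (var counter = new PerformanceCounter(
           ".NET CLR Memory", "# Bytes in all Heaps", instance, readOnly: true))
{
    Console.WriteLine($"Managed heap: {counter.NextValue():N0} bytes");
}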
You really need to use a code profiler. It will tell you exactly what's happening, where the memory is being used up, etc.
FYI: it's rarely where you think it is.