Memory leak in a multithreaded C# Windows service application - c#

I have a multithreaded Windows service application, used for indexing, that runs six threads. It works fine except for a memory leak. When the service starts it consumes 12,584 KB of memory; after some time it is using 61,584 KB. But after the indexing process completes, the memory is not released.
I need it to return to where it started once indexing is complete, i.e. it should go back to the memory it started with, e.g. 12,584 KB in this case.
I have tried forcing garbage collection, but it does not do what I want.
Can anyone please help me?

First advice would be to throw a memory profiler at it. I've always been happy with Red Gate's ANTS profiler, which will help identify which objects are leaking, if any.
Do bear in mind that there may be first-time initialisation happening all over the place, so you probably want to track usage over time.

.NET doesn't release memory to satisfy people staring at Task Manager. Allocating memory is expensive. The CLR is designed to do so sparingly, and to hold onto any memory it allocates as long as possible. Think of it this way: what's the point of having 4 GB of memory when you're not using half of it?
Unless you actually KNOW that you have a memory leak (as in, your app crashes after two days of uptime), let me give you a piece of advice... Close Task Manager. Don't optimize for memory before you know you need to. Relax, guy. Everything is fine.

61 MB does not sound out of the ordinary. You will need to let the service run for a while and monitor its memory usage for a rising trend. If you see the application leveling out at a certain value, then there is probably nothing to worry about.

I agree with Will completely - however I offer 2 small pieces of advice:
Don't use GC.Collect. Trust me, you are not going to improve on the CLR's garbage collection algorithm by doing this. In fact, objects that are ready to be collected but can't be for some reason will be promoted to Gen 2, where they will have to wait even longer before being collected (thereby potentially making any leak you may have worse!)
Check everywhere in your code that you have created some type of stream (usually MemoryStream) and verify that the streams are being closed properly (preferably in a "finally" block)
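For illustration, a minimal sketch of that pattern (hypothetical method; any Stream type behaves the same way):

using System.IO;

static void ProcessData(byte[] data)
{
    MemoryStream stream = null;
    try
    {
        stream = new MemoryStream(data);
        // ... read from the stream ...
    }
    finally
    {
        if (stream != null)
            stream.Dispose(); // runs even if an exception was thrown above
    }

    // Equivalent and more idiomatic: "using" compiles to the same try/finally.
    using (MemoryStream stream2 = new MemoryStream(data))
    {
        // ... read from the stream ...
    }
}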

Here I am using "log4net-1.2.10" for exception handling and "Lucene.Net-2.1.0.3" for indexing. There is one "IndexWriter" class used to add documents to the index or delete documents from it:
lock (lockObject) // lockObject is the shared synchronization object
{
    indexWriter.AddDocument(document); // indexWriter is an instance of "IndexWriter"
}
Also, we use Microsoft Message Queue (MSMQ) for retrieving messages.

I had the same problem until I changed the application from STA to MTA:
[MTAThread]
public static void Main()
instead of
[STAThread]
public static void Main()
(I've used Red Gate's ANTS profiler...)

Related

Tracking Down a .NET Windows Service Memory Leak

Before installing my Windows service in production, I was looking for reliable tests that I can perform to make sure my code doesn't contain memory leaks.
However, all I could find on the net was using Task Manager to look at used memory, or some paid memory-profiler tools.
From my understanding, looking at Task Manager is not really helpful and cannot confirm memory leakage (in case there is one).
How to confirm whether there is a memory leak or not?
Are there any free tools to find the source of memory leaks?
Note: I'm using .Net Framework 4.6 and Visual Studio 2015 Community
Well, you can use Task Manager.
Garbage-collected apps can still leak memory, and it will show up there.
But...
Free tool - ".Net CLR profiler"
There is a free tool, and it's from Microsoft, and it's awesome. This is a must-use for all programs that leak references. Search MS' site.
Leaking references means you forgot to set object references to null, or they never leave scope; this is almost as likely to occur in garbage-collected languages as not - lists building up and not being cleared, event handlers pointing to delegates, and so on.
It's the GC equivalent of memory leaks and has the same result. This program tells you what references are taking up tons of memory - and you will know if it's supposed to be that way or not, and if not, you can go find them and fix the problem!
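To make that concrete, a small sketch (hypothetical class names) of the two classic shapes of a leaked reference - a static collection that only grows, and an event subscription that is never undone:

using System;
using System.Collections.Generic;

class Publisher
{
    public event EventHandler DataArrived;
}

class Subscriber
{
    // Entries are added but never removed, so everything ever put
    // here stays reachable (and uncollectable) forever.
    static readonly List<byte[]> Cache = new List<byte[]>();

    public Subscriber(Publisher pub)
    {
        // The publisher's event now references this subscriber; until
        // something calls pub.DataArrived -= OnData, the GC cannot
        // collect the subscriber, no matter who else forgot about it.
        pub.DataArrived += OnData;
    }

    void OnData(object sender, EventArgs e)
    {
        Cache.Add(new byte[1024]); // grows forever
    }
}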
It even has a cool visualization of which objects allocate what memory (so you can track down mistakes). I believe there are YouTube videos of this if you need an explanation.
Wikipedia page with download links...
NOTE: You will likely have to run your app not as a service to use this; the profiler starts first and then launches your app. You can do this with TopShelf, or by putting the guts in a DLL that runs from an EXE that implements the service integrations (service host pattern).
Although managed code implies no direct memory management, you still have to manage your instances. Those instances 'claim' memory, and it is all about how those instances are used and whether they are kept alive when you don't expect them to be.
Just one of many examples: wrong usage of disposable classes can result in a lot of instances claiming memory. For a Windows service, a slow but steady increase in instances can eventually result in too much memory usage.
Yes, there is a tool to analyze memory leaks. It just isn't free. However, you might be able to identify your problem within the 7-day trial.
I would suggest taking a look at the .NET Memory Profiler.
It is great to analyze memory leaks during development. It uses the concept of snapshots to compare new instances, disposed instances etc. This is a great help to understand how your service uses its memory. You can then dig deeper into why new instances get created or are kept alive.
Yes, you can test to confirm whether memory leaks are introduced.
However, out of the box this will not be very useful, because no one can anticipate what will happen at runtime. The tool can analyze your app for common issues, but nothing is guaranteed.
However, you can use this tool to integrate memory consumption into your unit test framework like NUnit or MSTest.
Of course a memory profiler is the first kind of tool to try, but it will only tell you whether your instances keep increasing. You still want to know whether it is normal that they are increasing. Also, once you have established that some instances keep increasing for no good reason, (meaning, you have a leak,) you will want to know precisely which call trees lead to their allocation, so that you can troubleshoot the code that allocates them and fix it so that it does eventually release them.
Here is some of the knowledge I have collected over the years in dealing with such issues:
Test your service as a regular executable as much as possible. Trying to test the service as an actual service just makes things too complicated.
Get in the habit of explicitly undoing everything that you do at the end of the scope of that thing which you are doing. For example, if you register an observer to the event of some observee, there should always be some point in time (the disposal of the observer or the observee?) at which you de-register it. In theory, garbage collection should take care of that by collecting the entire graph of interconnected observers and observees, but in practice, if you don't kick the habit of forgetting to undo the things that you do, you get memory leaks.
Use IDisposable as much as possible, and make your destructors report if someone forgot to invoke Dispose(). More about this method here: Mandatory disposal vs. the "Dispose-disposing" abomination. Disclosure: I am the author of that article.
Have regular checkpoints in your program where you release everything that should be releasable (as if the program is performing an orderly shutdown in order to terminate) and then force a garbage collection to see whether you have any leaks.
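A minimal sketch of such a checkpoint (hypothetical helper name); a number that keeps rising across checkpoints suggests a leak:

static long LeakCheckpoint()
{
    GC.Collect();                  // collect unreachable objects
    GC.WaitForPendingFinalizers(); // let pending finalizers run
    GC.Collect();                  // collect what the finalizers released
    long live = GC.GetTotalMemory(false);
    Console.WriteLine("Live managed bytes: {0:N0}", live);
    return live;
}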
If instances of some class appear to be leaking, use the following trick to discover the precise call tree that caused their allocation: within the constructor of that class, capture the current stack trace and store it. If you discover later that an instance has been leaked, you have the necessary stack trace. Just don't do this with too many objects, because capturing a stack trace is ridiculously slow; only Microsoft knows why.
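A sketch of that trick (hypothetical class name). Note that in .NET an exception object that was allocated but never thrown reports a null StackTrace, so System.Diagnostics.StackTrace is the direct way to capture the allocation site - it is just as slow, hence the guard flag:

using System.Diagnostics;

class SuspectedLeaker
{
    // Only pay the (very high) capture cost while investigating.
    public static bool TrackAllocations = false;

    public readonly string AllocationStackTrace;

    public SuspectedLeaker()
    {
        if (TrackAllocations)
            AllocationStackTrace = new StackTrace(true).ToString();
    }
}

// When a profiler shows SuspectedLeaker instances piling up, inspect
// AllocationStackTrace on a leaked instance to see the exact call
// tree that allocated it.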
You could try the free Memoscope memory profiler
https://github.com/fremag/MemoScope.Net
I do not agree that you can trust Task Manager to check whether you have a memory leak or not. The problem with a garbage collector is that it can decide, based on heuristics, to keep the memory after a memory spike and not return it to the OS. You might have a 2 GB commit size, but 90% of it can be free.
You should use VMMap to check during your tests what type of memory your process contains. You do not only have the managed heap; there are also the unmanaged heap, private bytes, stacks (thread leaks), shared files, and much more that needs to be tracked.
VMMap also has a command-line interface, which makes it possible to create snapshots at regular intervals that you can examine later. If you have memory growth you can find out which type of memory is leaking, which, depending on the leak type, calls for different debugging tools and approaches.
I would not say that the garbage collector is infallible. There are times when it fails, and those failures are not so straightforward. Memory streams are a common cause of memory leaks: you can open them in one context and they may never get closed, even though the usage is wrapped in a using statement (the definition of a disposable object that should be cleaned up immediately after its usage falls out of scope). If you are experiencing crashes due to running out of memory, Windows does create dump files that you can sift through.
This is by no means fun or easy and is quite tedious but it tends to be your best bet.
Common areas where it is easy to create memory leaks are anything that uses the System.Drawing dll, memory streams, and serious multi-threading.
If you use Entity Framework and a DI pattern, perhaps using Castle Windsor, you can easily get memory leaks.
The main thing is to use the using( ){ } statement wherever you can, to automatically dispose objects.
Also, you want to turn off automatic change tracking in Entity Framework where you are only reading and not writing. Best to isolate your writes: use a using() {} at that point, get a DbContext (with tracking on), and write your data.
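A hedged sketch of what that looks like (hypothetical context and entity names, EF6-style API):

using System.Linq;
using System.Data.Entity; // EF6 extension methods such as AsNoTracking()

// Reads: no change-tracker entries, so the context stays small.
using (var db = new ShopContext())
{
    var products = db.Products.AsNoTracking()
                     .Where(p => p.IsActive)
                     .ToList();
}

// Writes: a separate short-lived context with tracking on.
using (var db = new ShopContext())
{
    var product = db.Products.Find(42);
    product.Price = 9.99m;
    db.SaveChanges();
} // the context, and everything it tracked, becomes collectible here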
If you want to investigate what is on the heap, the best tool I've used is Red Gate ANTS http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/solving-memory-problems/getting-started - not cheap, but it works.
However, by using the using() {} pattern wherever you can (don't make a static or singleton DbContext, and never keep one context alive through a massive loop of updates; dispose of them as often as you can!), you'll find memory isn't often an issue.
Hope this helps.
Unless you're dealing with unmanaged code, I would be so bold as to say you don't have to worry about memory leaks. Any unreferenced object in managed code will be removed by the garbage collector, and if you were to find a genuine memory leak within the .NET framework itself, I would say you should consider yourself very lucky (well, unlucky). You don't have to worry about memory leaks in the classic sense.
However, you can still encounter ever-growing memory usage if references to objects are never released. For example, say you keep an internal log structure and just keep adding entries to a log list; then every entry is still referenced from the log list and will therefore never be collected.
From my experience, you can definitely use Task Manager as an indicator of whether your system has growing memory issues; if the memory usage keeps steadily rising, you know you have a problem. If it grows to a point but eventually converges to a certain size, it has simply reached its operating threshold.
If you want a more detailed view of managed memory usage, you can download Process Explorer here, developed by Microsoft (Sysinternals). It is still quite blunt, but it gives a somewhat better statistical view than Task Manager.

Gen2 collection not always collecting dead objects?

By monitoring the CLR #Bytes in all Heaps performance counter of a brand new .NET 4.5 server application over the last few days, I have noticed a pattern that makes me think that Gen2 collection is not always collecting dead objects, but I am having trouble understanding what exactly is going on.
Server application is running in .NET Framework 4.5.1 using Server GC / Background.
This is a console application hosted as a Windows Service (with the help of Topshelf framework)
The server application is processing messages, and the throughput is fairly constant for now.
What I can see looking at the graph of CLR #Bytes in all Heaps is that the memory starts around 18 MB, then grows up to 35 MB over approx. 20-24 hours (with 20-30 Gen2 collections during that time frame), then all of a sudden drops back to the nominal value of 18 MB, then grows again up to ~35 MB over 20-24 hours and drops back to 18 MB, and so on (I can see the pattern repeating over the last 6 days the app has been running). The growth of memory is not linear: it takes approx. 5 hours to grow by 10 MB, and then 15-17 hours for the remaining 10 MB or so.
The thing is that I can see, by looking at the perfmon counters for #Gen0/#Gen1/#Gen2 collections, that a bunch of Gen2 collections happen during the 20-24 hour period (maybe around 30), and none of them makes the memory drop back to the nominal 18 MB.
However, what is strange is that when using an external tool to force a GC (PerfView in my case), I can see #Induced GC go up by 1 (GC.Collect was called, so this is normal) and immediately the memory goes back to the nominal 18 MB.
Which leads me to think that either the perfmon counter for #Gen2 collections is not right and only a single Gen2 collection happens after 20-22 hours or so (meeehhh, I really don't think so), or the Gen2 collection does not always collect dead objects (seems more plausible)... but in that case, why would forcing a GC via GC.Collect do the trick? What would be the difference between explicitly calling GC.Collect vs. automatically triggered collections during the lifetime of the application?
I am sure there is a very good explanation, but from the different sources of documentation I have found about the GC - too few :( - a Gen2 collection collects dead objects in any case. So maybe the docs are not up to date, or I have misread... Any explanation is welcome. Thanks!
EDIT: Please see this screenshot of the #Bytes in all Heaps graph over 4 days; it is easier than trying to graph things in your head. What you can see on the graph is what I described above: memory increasing over 20-24 hours (with 20-30 Gen2 collections during that time frame) until reaching ~35 MB, then dropping all of a sudden. Note, at the end of the graph, the induced GC I triggered via an external tool, immediately dropping memory back to nominal.
EDIT #2: I did a lot of cleaning in the code, mainly regarding finalizers. I had a lot of classes that were holding references to disposable types, so I had to implement IDisposable on these types. However, I was misguided by some articles into implementing the disposable pattern with a finalizer in every case. After reading some MSDN documentation I came to understand that a finalizer is only required when the type itself holds native resources (and even then it can be avoided with SafeHandle). So I removed all finalizers from all these types. There were some other modifications in the code, but mainly business logic, nothing ".NET framework" related.
Now the graph is very different: it has been a flat line around 20 MB for days now... exactly what I was expecting to see!
So the problem is now fixed, but I still have no idea what the problem was due to... It seems like it might have been related to the finalizers, but that still does not explain what I was seeing: even if we weren't calling Dispose(true) - suppressing the finalizer - the finalizer thread is supposed to kick in between collections, not every 20-24 hours?!
Considering we have now moved away from the problem, it will take time to go back to the "buggy" version and reproduce it. I may try to do it some time, though, and get to the bottom of it.
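For reference, a minimal sketch (hypothetical type and field names) of the shape the cleanup above converged on - IDisposable that forwards disposal, with no finalizer because nothing holds a native resource directly:

using System;
using System.IO;

class MessageProcessor : IDisposable
{
    // Managed disposable resource owned by this type.
    private readonly MemoryStream _buffer = new MemoryStream();
    private bool _disposed;

    public void Dispose()
    {
        if (_disposed) return;
        _buffer.Dispose(); // forward disposal to what we own
        _disposed = true;
        // No finalizer (and no GC.SuppressFinalize) needed: a finalizer
        // is only required when the type itself owns a raw native handle,
        // and SafeHandle usually removes even that need.
    }
}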
EDIT: Added the Gen2 collection graph.
From
http://msdn.microsoft.com/en-us/library/ee787088%28v=VS.110%29.aspx#workstation_and_server_garbage_collection
Conditions for a garbage collection
Garbage collection occurs when one of the following conditions is true:
The system has low physical memory.
The memory that is used by allocated objects on the managed heap surpasses an acceptable threshold. This threshold is continuously adjusted as the process runs.
The GC.Collect method is called. In almost all cases, you do not have to call this method, because the garbage collector runs continuously. This method is primarily used for unique situations and testing.
It seems that you are hitting the second one, and 35 MB is the threshold. You should be able to configure the threshold to something else if 35 MB is too large.
There isn't anything special about gen2 collections that would cause them to deviate from these rules. (cf https://stackoverflow.com/a/8582251/215752)
Are any of your objects "large" objects? There's a separate Large Object Heap (for objects of 85,000 bytes or more) which has different rules:
Large Object Heap Fragmentation
It was improved in .NET 4.5.1, though:
http://blogs.msdn.com/b/dotnet/archive/2011/10/04/large-object-heap-improvements-in-net-4-5.aspx
This could easily be explained if gcTrimCommitOnLowMemory is enabled. Normally, the GC keeps some extra memory allocated to the process. However, when the memory reaches a certain threshold, the GC will then "trim" the extra memory.
From the docs:
When the gcTrimCommitOnLowMemory setting is enabled, the garbage collector evaluates the system memory load and enters a trimming mode when the load reaches 90%. It maintains the trimming mode until the load drops under 85%.
This could easily explain your scenario: the memory reserves are being kept (and used) until your application reaches a certain point, which seems to be once every 20-24 hours, at which point the 90% load is detected and the memory is trimmed to its minimum requirements (the 18 MB).
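If you want to experiment with that setting, it goes in the service's app.config (it is off by default; shown here assuming a .NET Framework host, same shape as the runtime config block further down):

<configuration>
  <runtime>
    <gcTrimCommitOnLowMemory enabled="true" />
  </runtime>
</configuration>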
Reading your first version, I would say that this is normal behavior.
...but in that case why would forcing a GC via GC.Collect do the trick? What would be the difference between explicitly calling GC.Collect vs. automatically triggered collections during the lifetime of the application?
There are two types of collections: a full collection and a partial collection. What is automatically triggered is usually a partial collection, but calling GC.Collect performs a full collection.
Meanwhile, I might have the reason, now that you have told us that you were using finalizers on all of your objects. If for any reason one of those objects is promoted to Gen 2, its finalizer will only run when a Gen 2 collection occurs.
The following example will demonstrate what I just said.
public class ClassWithFinalizer
{
    ~ClassWithFinalizer()
    {
        Console.WriteLine("hello from finalizer");
        //do nothing
    }
}

static void Main(string[] args)
{
    ClassWithFinalizer a = new ClassWithFinalizer();
    Console.WriteLine("Class a is on #{0} generation", GC.GetGeneration(a));
    GC.Collect();
    Console.WriteLine("Class a is on #{0} generation", GC.GetGeneration(a));
    GC.Collect();
    Console.WriteLine("Class a is on #{0} generation", GC.GetGeneration(a));
    a = null;
    Console.WriteLine("Collecting 0 Gen");
    GC.Collect(0);
    GC.WaitForPendingFinalizers();
    Console.WriteLine("Collecting 0 and 1 Gen");
    GC.Collect(1);
    GC.WaitForPendingFinalizers();
    Console.WriteLine("Collecting 0, 1 and 2 Gen");
    GC.Collect(2);
    GC.WaitForPendingFinalizers();
    Console.Read();
}
The output will be:
Class a is on #0 generation
Class a is on #1 generation
Class a is on #2 generation
Collecting 0 Gen
Collecting 0 and 1 Gen
Collecting 0, 1 and 2 Gen
hello from finalizer
As you can see, the memory of an object with a finalizer is only reclaimed when a collection runs on the generation the object currently lives in (and after its finalizer has run).
Just figured I'd throw in my 2 cents here. I'm not an expert at this, but maybe this might help your investigation.
If you're using a 64-bit platform, try adding this to your .config file. I read that it can be an issue.
<configuration>
  <runtime>
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
The only other thing I would point out is that you could prove your hypothesis by troubleshooting from the inside if you are in control of the source code.
Calling something along these lines from your app's main memory-consuming class, set to run at timed intervals, could shed some light on what's really going on.
private void LogGCState()
{
    int gen = GC.GetGeneration(this);
    //------------------------------------------
    // Comment out the GC.GetTotalMemory(true) line to see what's happening
    // without any interference
    //------------------------------------------
    StringBuilder sb = new StringBuilder();
    sb.Append(DateTime.Now.ToString("G")).Append('\t');
    sb.Append("MaxGens: ").Append(GC.MaxGeneration).Append('\t');
    sb.Append("CurGen: ").Append(gen).Append('\t');
    sb.Append("CurGenCount: ").Append(GC.CollectionCount(gen)).Append('\t');
    sb.Append("TotalMemory: ").Append(GC.GetTotalMemory(false)).Append('\t');
    sb.Append("AfterCollect: ").Append(GC.GetTotalMemory(true)).Append("\r\n");
    File.AppendAllText(@"C:\GCLog.txt", sb.ToString());
}
Also, there is a pretty good article here on using the GC.RegisterForFullGCNotification method. Obviously this would also let you capture the time span of a full collection, so that you could tune performance and collection frequency to your specific needs. This method also lets you specify a heap threshold to trigger notifications (or collections?).
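A hedged sketch of that notification API (thresholds are 1-99; larger values mean earlier notification; on .NET Framework this call throws if concurrent GC is enabled in config):

// Typically polled from a dedicated background thread.
GC.RegisterForFullGCNotification(10, 10);

var sw = System.Diagnostics.Stopwatch.StartNew();
if (GC.WaitForFullGCApproach() == GCNotificationStatus.Succeeded)
{
    sw.Restart();
    // optionally divert work away from this process here
}
if (GC.WaitForFullGCComplete() == GCNotificationStatus.Succeeded)
{
    Console.WriteLine("Full GC took ~{0} ms", sw.ElapsedMilliseconds);
}
GC.CancelFullGCNotification();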
There's also probably a way to set that in the app's .config file, but I haven't looked. For the most part, 35 MB is a pretty small footprint for a server application these days. Heck, my web browser gets up to 300-400 MB sometimes :) So the Framework might just see 35 MB as a good default point at which to free up memory.
Anyhow, I can tell by the thoughtfulness of your question that I'm probably just pointing out the obvious. But it feels worth mentioning. I wish you luck!
On a funny note
At the top of this post I had originally written "if (you're using a 64-bit platform)". That made me crack up. Take care!
I have exactly the same situation in my WPF application. No finalizers in my code, btw. However, it seems that the ongoing GC actually does collect Gen 2 objects: I can see GC.GetTotalMemory() results drop by up to 150 MB after a Gen 2 collection is triggered.
So I'm under the impression that the Gen 2 heap size does not show the number of bytes used by live objects. It is rather just the heap size, i.e. the number of bytes allocated for Gen 2 purposes. You may have plenty of free memory there.
Under some conditions (not on every Gen 2 collection) this heap size is trimmed, and at that particular moment my application takes a huge performance hit - it may hang for up to several seconds. I wonder why...

Targetting memory leak in a .NET production service

I have a C#.NET service running in production. The service functions as a TCP server to which clients register and make requests. Looking at Task Manager, it appears to be leaking about 10 MB/day. I don't seem to notice this in dev (perhaps because of far less traffic and client activity). In searching around I've read that Task Manager can be seriously wrong, but I'm not sure how accurate that is, or in what circumstances the TM would display incorrect information.
To solve this problem I need to monitor memory consumption more closely. The problem is that the leak only seems to appear in production, where the deployed service was built for Release. Also, since it's a service, it can't be run directly from VS with an attached profiler/debugger, so I'm not sure how best to pinpoint the problem with something more precise than TM.
Any group wisdom would be much appreciated, thanks.
EDIT:
I've added perfmon counters for the private bytes of the service (7 MB to start out) as well as CLR memory in all heaps (30 MB to start out).
Task Manager says the total memory is ~37 MB, so this seems to make sense.
The first part of this is to let the service go for a day and check out my counters again.
If my private bytes get huge but CLR memory stays roughly static, this would indicate an unmanaged leak. If both get huge, then it's a managed leak.
Thanks guys.
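In case it helps, a minimal sketch of sampling those same two counters from code (category/counter names as they appear in perfmon; the instance name is assumed to match the process name):

using System;
using System.Diagnostics;

static void SampleMemoryCounters()
{
    string instance = Process.GetCurrentProcess().ProcessName;

    using (var privateBytes = new PerformanceCounter("Process", "Private Bytes", instance))
    using (var clrHeaps = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance))
    {
        Console.WriteLine("{0:G}  Private Bytes: {1:N0}  CLR Heaps: {2:N0}",
            DateTime.Now, privateBytes.NextValue(), clrHeaps.NextValue());
    }
}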
Your first task is figuring out if the process is actually leaking memory. You can do this with perfmon, measuring Private Bytes:
http://www.goldstarsoftware.com/papers/CapturingVirtualBytesToALogFile.pdf
If the graph is consistently rising (for, say, half an hour), you have a memory leak. You can then use other counters to figure out if this is a .NET leak (.NET memory), though this is unlikely. I find that in most of these cases there is a COM component that is being invoked but not released.
If you truly have a memory leak (and this isn't just variable memory usage), the process will eventually shut down with an out-of-memory exception after running for a while.
You need one of the memory profilers below in order to monitor it:
http://www.jetbrains.com/profiler/
http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/
There are other choices, but these are very capable, and you can profile a remote application's memory with them (at least JetBrains's solution handles that).
Follow this guide: http://blogs.msdn.com/b/tess/archive/2008/03/25/net-debugging-demos-lab-7-memory-leak.aspx
It goes over exactly what you're describing, a memory leak in production. As was mentioned you have to first determine whether it's unmanaged code or managed code that's leaking using perfmon and Private Bytes.
In general make sure for networking objects you're wrapping them in using statements so that they're properly disposed.
A workflow I often use for managed memory leaks is to start the server on a test machine and hit it with a known number of connections (say 123,456 connections). Then take a memory snapshot by going to Task Manager, right-clicking on the process name, and selecting 'Create dump'. Open this dump with WinDbg and SOS and run the command !dumpheap -stat. Look for objects that have a multiple of 123,456 instances. Should these objects still be in memory? If not, run !gcroot on an instance of those objects to find out why it's still in memory.
Get a dump of the memory when it's in a leaked state using Task Manager: right-click on the process and select "Create dump file". You can also use ProcDump, which gives you more options.
Use the SOS extension in either WinDbg or Visual Studio to inspect the memory.

System.OutOfMemoryException being thrown. How to find the culprit?

I am using Visual C# Express 2008 and I have an application that starts up on a form, but uses a thread with a delegated display function to take care of essentially all the processing. That way my form doesn't lock up while tasks are being processed.
Semi-recently, after going through a repeated process a number of times (the program processes incoming data, so when data comes in, the process repeats) my app will crash with a System.OutOfMemory error.
The stack trace in the error message is useless because it only directs me to the line where I call the delegated form control function.
I've heard people say they use ProcMon from SysInternals to see why errors like this happen. But I, for the life of me, can't figure it out. The amount of memory I am using doesn't change as the program runs, if it goes up, it comes back down. Plus, even if it was going up, how do I figure out which part of my program is the problem?
How can I go about investigating this problem?
EDIT:
So, after delving further into this issue, I looked through anything that I was ever re-declaring. There were a few instances where I had hugematrix = new uint[gigantic], so I got rid of about 3 of those.
Instead of getting rid of the error, it is now far more obscure and confusing.
My application takes the incoming data, and renders it using OpenGL. Now, instead of throwing "System.OutOfMemory" it simply does not render anything with OpenGL.
The only difference in my code is that I do not make new matrices for holding the data I plot. That way, I hope, my array stays in the same place in memory and doesn't do anything suicidal to my LOH.
Unfortunately, this twists the beast far beyond my meager means. With zero errors popping up, and all my data structures apparently still properly filled, how can I find my problem? Does OpenGL use memory in an obscure way so as to not throw exceptions when it fails? Is memory still a problem? How do I find out? All the memory profilers in the world seem to tell me very little.
EDIT:
With the boatloads of support from this community (with extra kudos to Amissico) the error has finally been rooted out. Apparently I was adding items to an OpenGL list, and never taking them off the list.
The app that finally clued me in was .NET Memory Profiler. At the time of the crash it showed 1.5 GB of data in the <unknown> category. Through a process of elimination (everything else in the list was named), the last thing to be checked off the list was the OpenGL rendering pipeline. The rest is history.
Based on the description in your comments, I would suspect that you are either not disposing of your images correctly or that you have severe Large Object Heap fragmentation and, when trying to allocate for a new image, don't have enough contiguous space available. See this question for more info - Large Object Heap Fragmentation
You need to use a memory profiler, such as the ANTS Memory Profiler, to find out what causes this error.
Are you re-registering an event handler on every loop and not un-registering it?
CLR Profiler for the .NET Framework 2.0 at https://github.com/MicrosoftArchive/clrprofiler
The most common cause of memory fragmentation is excessive string creation.
A few considerations:
Make sure that the threads you spawn are destroyed (aborted, or the function returns). Too many threads can cause an application to fail, even though the memory used as shown in Task Manager is not very high.
Memory leaks. Yes, yes, you can cause them in .NET quite easily by never releasing references. This can be diagnosed using memory profilers like dotTrace or ANTS Memory Profiler.
I had an OutOfMemoryException-problem as well:
Microsoft Visual C# 2008 Reducing number of loaded dlls
The reason was fragmentation of the 2 GB virtual address space, and poster nobugz suggested Sysinternals' VMMap utility, which has been very helpful for diagnostics. You can use it to check whether your free memory areas become more fragmented over time. (First sort by size, then by type -> refresh, repeat the sorting, and you can see whether contiguous free memory blocks become smaller.)

Out of Memory Exception

I am working on a web app using C# and ASP.NET, and I have been receiving an out-of-memory exception. What the app does is read a bunch of records (products) from a data source, could be hundreds/thousands, process those records through the settings in a wizard, and then update a different data source with the processed product information. Although there are multiple DB classes, right now all the logic is in one big class. The only reason for this is that all the information has to do with one thing: a product. Would it help with memory if I divided my app into different classes? I don't think it would, because if I divided the business logic into two classes, both classes would remain alive the entire time sending messages to each other, so I don't see how this would help. I guess my other solution would be to find out what's sucking up all the memory. Is there a good tool you could recommend?
Thanks
Are you using datareaders to stream through your data? (to avoid loading too much into memory)
My gut is telling me this is a trivial issue to fix: don't pump DataTables with a million records; work through tables one row at a time, or in small batches, and release and dispose of objects when you are done with them. (Example: don't have static List<Customer> allCustomers = AllCustomers().)
Have a development rule that ensures no one reads tables into memory when more than X rows are involved.
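For example, a minimal sketch of the data-reader approach (hypothetical connection string, table, and handler):

using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Id, Name FROM Products", conn))
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Only one row is materialized at a time; process it and
            // move on instead of accumulating a DataTable or List.
            ProcessProduct(reader.GetInt32(0), reader.GetString(1));
        }
    }
}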
If you need a tool to debug this, look at .NET Memory Profiler or WinDbg with the SOS extension; both will allow you to sniff through your managed heaps.
Another note: if you care about maintainability and would like to reduce your defect count, get rid of the SuperDuperDoEverything class and model the information correctly, in a way that is better aligned with your domain. The SuperDuperDoEverything class is a bomb waiting to explode.
Also note that you may not actually be running out of memory. What happens is that .NET goes looking for a contiguous block of memory, and if it doesn't find one, it throws an OOM - even if you have plenty of total memory to cover the request.
Someone referenced both Perfmon and WinDbg. You could also set up ADPlus to capture a memory dump on crash - I believe the syntax is adplus -crash -iis. Once you have the memory dump, you can do something like:
.symfix C:\symbols
.reload
.loadby sos mscorwks
!dumpheap -stat
And that will give you an idea for what your high-memory objects are.
And of course, check out Tess Ferrandez's excellent blog, for example this article on memory leaks with XML serializers and how to troubleshoot them.
If you are able to repro this in your dev environment, and you have VS Team Edition for Developers, there are memory profilers built right in. Just launch a new performance session, and run your app. It will spit out a nice report of what's hanging around.
Finally, make sure your objects don't define a destructor. This isn't C++, and there's nothing deterministic about it, other than the guarantee that your object will survive a round of garbage collection, since it has to be placed in the finalizer queue and then cleaned up on the next round.
A very basic thing you might want to try: restart Visual Studio (assuming you are using it) and see if the same thing happens. And yes, releasing objects without waiting for the garbage collector is always good practice.
To sum it up:
release objects
close connections
and you can always try this,
http://msdn.microsoft.com/en-us/magazine/cc337887.aspx
I found the problem. While doing my loop I had a collection that wasn't being cleared, so data just kept being added to it.
Start with Perfmon; there are a number of counters for GC-related info. More than likely you are leaking memory (otherwise the GC would be deleting objects), meaning you are still referencing data structures that are no longer needed.
You should split into multiple classes anyways, just for the sake of a sane design.
Are you closing your DB connections? If you are reading from files, are you closing/releasing them once you are done reading/writing? The same goes for other objects.
You could cycle your class objects routinely just to release memory.
