I'm running a very small program that throws "System.OutOfMemoryException" when there's clearly a lot of memory available:
According to Task Manager, I have plenty of memory left, and the process is only using around 45 MB.
Is there a way to increase this process memory limit?
EDIT1:
I moved some code around, and that made the error trigger in several different spots. The latest one: after loading a form and filling it with data from the database, I just change from one tab to another in a TabControl, and that triggers the exception.
I'm using Entity Framework 6 and a Postgres database, if that helps. Maybe the framework is creating too many object instances and I'm exhausting some memory resource.
That's why I think the code itself is not relevant to the question.
EDIT2:
I took another screenshot of the memory analyser before and after the exception occurs:
So the problem is that I have over 100K objects in memory and there isn't enough room to create new ones? I'm still quite lost on how to handle this.
EDIT3:
Well, I found the culprit of the memory leak: Entity Framework. The memory consumption increase happens right after I first load data from the database. Now I'll have to properly debug whether the problem is in the Postgres driver or in Entity Framework itself. I will have to test with a different database to check.
Here are two screenshots showing the difference before and after I use EF. The code loads some items into two comboboxes, both using the same List as a data source. In the first example I use EF; in the second I loaded the same types of objects but created them myself, with no connection to the database. The difference is astonishing!
EDIT 4:
Well, this is embarrassing. I moved to a different solution. Now I'm working with Simple.Migration to manage changes in the database and Dapper (with Dapper.Contrib) to handle CRUD. I hope they see this and maybe figure out what's going wrong...
Memory usage can be different from memory reserved. The system can reserve memory (it says "I have dibs on that megabyte") without actually utilizing it. I'm not certain whether Visual Studio shows the memory used or the memory reserved, but you can check it through Task Manager: go to the Performance tab and click the Resource Monitor button near the bottom of the window. The Memory tab in Resource Monitor will show how much memory is committed.
If you have this problem, and I bet you do, then you are repeatedly declaring something and not instantiating or deleting it, or you are adding endless objects to a list. It's most likely happening inside a loop. It is very unlikely that Visual Studio is hitting a memory limit, unless your computer is a potato.
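As an illustration (not your actual code, just the classic shape of this kind of leak): a collection that outlives the loop that fills it keeps every added object reachable forever.

using System.Collections.Generic;

class DataCollector
{
    // The list is a long-lived field, so everything added to it inside
    // the loop stays reachable and is never garbage collected.
    private readonly List<byte[]> history = new List<byte[]>();

    public void ProcessForever()
    {
        while (true)
        {
            var buffer = new byte[1024 * 1024];   // 1 MB allocated every iteration
            history.Add(buffer);                  // never removed or cleared -> leak
            // ... do work with buffer ...
        }
    }
}

If the collection really is needed, cap its size or clear it once its contents have been consumed.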
Related
I have a WPF application which uses MEF to load some dialogs.
I noticed that after some time it starts to build up more and more WeakReferences which do not seem to be freed. The test scenario includes a memory pressure item that builds up to ~3 GB. The dialog calls a DataService and reloads the contents of a DataGrid.
What can be the cause for such amount of WeakReferences not being freed?
I cannot see any application specific references being kept.
Below is a picture of the memory profiling session with a view of the last added items. The number of WeakReferences added is noticeable.
Thank you in advance.
UPDATE:
The profiler I used on a remote computer shows that the build-up is mainly caused by new WeakReferences. The WaitCallback entries are probably because of a loop I made to exaggerate the problem and cause a refresh every second. Otherwise the Objects delta is clean.
I'm working on an app for windows phone 8, and I'm having a memory leak problem. But first some background. The app works (unfortunately) using WebBrowsers as pages. The pages are pretty complex with a lot of javascript involved.
The native part of the app, written in C#, is responsible for some simple communication with the JavaScript (e.g. the native side acts as a delegate for the JavaScript to communicate with a server), animations for page transitions, tracking, persistence, etc. Everything is done in a single PhoneApplicationPage.
After I had some crashes due to out-of-memory exceptions, I started profiling the app. I can see that the WebBrowsers, which are the biggest part of the app, are being disposed correctly.
But the problem I'm seeing is that memory continues to increase. What's worse, I have little feedback from the profiler. From what I understand, the profiler graph says there is a big problem, while the profiler numbers say there's no problem at all...
Note: each step represents a navigation from one WebBrowser to another. The spike is created (I suppose) by the animation between the two controls. In the span I've selected in the image, I was doing one navigation forward and one backward, with a maximum of 5 WebBrowsers (2 for menus that are always there, 1 for the index page, 1 for the page I navigate from and 1 for the page I navigate to). At every navigation the profiler shows the correct number of WebBrowsers: 5 after navigating forward, 4 after navigating backward.
Note 2: I have added the red line to make clearer that the memory is going up in that span of time
As you can see from the image, the memory usage is pretty big, but the numbers say it's low, and in that span of time the retained allocation is lower than when it started...
I hope I've included enough information. I want some ideas on what could cause this problem. My ideas so far are:
- the JavaScript in the WebBrowser is doing something wrong (e.g. not cleaning up some event handler). Even if this is the case, shouldn't the WebBrowser release the memory when it is destroyed?
- using a single PhoneApplicationPage is something evil that is not supposed to be done, and changing its structure may cause this.
- other?
Another question: why does the graph show the correct amount of memory use while the number doesn't?
If you need more info about the profiler, ask and I will post it tomorrow.
OK, after a lot of investigation I was finally able to find the leak.
The leak is created by the WebBrowser control itself, which seems to have some event handlers that are not removed when you remove it from a Panel. In fact, the leak is reproducible by following these steps:
1. Create a new WebBrowser
2. Add it to a Panel (or whatever container)
3. Navigate to a page with an image that is big and heavy
4. Tap somewhere in the blank space of the browser (tapping on the image does not seem to create the leak)
5. Remove and collect the browser
6. Repeat from step 1
At every iteration the memory for the image is never collected, and memory continues to grow.
A ticket has already been sent to Microsoft.
The problem was resolved by using a pool of WebBrowsers.
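Conceptually the pool looks something like the sketch below (illustrative only, not the production code, written against the Windows Phone 8 WebBrowser control): a fixed set of WebBrowser instances is reused instead of being created and destroyed, so the handlers leaked inside the control are only paid for once per pooled instance.

using System.Collections.Generic;
using Microsoft.Phone.Controls;

public class WebBrowserPool
{
    private readonly Queue<WebBrowser> available = new Queue<WebBrowser>();

    public WebBrowserPool(int size)
    {
        for (int i = 0; i < size; i++)
            available.Enqueue(new WebBrowser());
    }

    public WebBrowser Acquire()
    {
        // Fall back to a fresh instance if the pool happens to be empty.
        return available.Count > 0 ? available.Dequeue() : new WebBrowser();
    }

    public void Release(WebBrowser browser)
    {
        // Point the control at an empty document before reuse so the
        // previous page's content can be released.
        browser.NavigateToString("<html></html>");
        available.Enqueue(browser);
    }
}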
I don't think there is enough information here to find the cause of your leak, and without you posting your entire solution I am not sure there can be, since the question is about locating the root cause...
What I can offer is the approach I used when I had my own memory leak.
The technique was to:
Open a memory profiler. From your screenshot I see you are already using one; I used Perfmon. This article has some material about setting up Perfmon, and #fmunkert also explains it rather well.
Locate an area in the code where you suspect the leak is likely to be. This part mostly depends on you having good guesses about which part of the code is responsible for the issue.
Push the leak to the extreme: use labels and "goto" to isolate an area / function and repeat the suspicious code many times (a loop will work too; I find goto more convenient for this).
Inside the loop I used a breakpoint that halted every 50 hits so I could examine the delta in memory usage. Of course you can change that value to fit a noticeable leak in your application. (A small sketch follows after the notes below.)
If you have located the area that causes the leak, the memory usage should spike rapidly. If it does not spike, repeat stages 1-4 with another area of code you suspect to be the root cause. If it does, continue to 6.
In the area you have found to be the cause, use the same technique (goto + labels) to zoom in and isolate smaller parts of that area until you find the source of the leak.
Note that the downsides of this method are:
If you are allocating an object in the loop, its disposal should also be contained in the loop.
If you have more than one source of leak, it is harder to spot (yet still possible).
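Here is the sketch mentioned above: a minimal loop that hammers one suspicious section and reports the managed heap size every 50 iterations (the method name is invented; substitute whatever code you are isolating).

using System;

class LeakIsolation
{
    static void Main()
    {
        for (int i = 1; ; i++)
        {
            LoadAndBindData();   // the code section under suspicion (placeholder name)

            if (i % 50 == 0)
            {
                // Put a breakpoint here, or just watch the printed delta grow.
                Console.WriteLine("Iteration {0}: {1:N0} bytes on the managed heap",
                                  i, GC.GetTotalMemory(true));
            }
        }
    }

    static void LoadAndBindData()
    {
        // ... the suspicious code goes here; dispose anything it allocates ...
    }
}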
Did you clean up your event handlers? You may inadvertently still have some references if the controls are rooted.
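As an illustration (names invented), a long-lived publisher keeps every subscriber alive until the handler is removed, so cleanup code should unsubscribe:

using System;

public static class DataService
{
    public static event EventHandler DataChanged;

    public static void RaiseDataChanged()
    {
        var handler = DataChanged;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

public class DialogViewModel : IDisposable
{
    public DialogViewModel()
    {
        DataService.DataChanged += OnDataChanged;   // the static event now roots this object
    }

    private void OnDataChanged(object sender, EventArgs e)
    {
        // ... refresh the dialog's data ...
    }

    public void Dispose()
    {
        DataService.DataChanged -= OnDataChanged;   // without this, the dialog is never collected
    }
}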
I have an event driven app that I was tasked with maintaining.
About 100 events run every 30 seconds, on separate timers. Over time the events alias into a constant stream of about 1-3 events per second.
Memory usage does not appear dependent on the number of events firing in any given second.
Each event polls data from a web service, checks the data against the previously polled data using a LINQ2SQL DataContext (I do not dispose of or null out the DataContext when done), and, if the data is different, updates the database and pushes the new data as an XML message to a receiver service via TCP.
This app appears to have a memory leak which
only manifests after 30m+ of running (either debug or release)
won't manifest when profiling [I'm using .NET Memory Profiler 4.5]
Characteristics:
On startup the program uses ~30MB. As time progresses, the memory usage in Task Manager begins pogoing, first only slightly, between 50 and 150MB, and eventually it gets worse, oscillating between 200MB and 1GB+. When this happens, it happens a few times within a second or two, then settles down at ~150MB for the next 10-20 or so seconds.
I've been trying to catch this behavior in action using memory profiling. So far I've been unsuccessful; I can't get the app to pogo or oscillate in memory usage anywhere near like I can when the profiler isn't watching.
However, I've been noticing a square-wave sort of pattern in the memory usage as the Garbage Collector stages 1 and 2 run, which looks very similar to what I see in Task Manager, except the memory usage oscillations in the square wave are 10MB wide instead of 800MB+ (200MB to 1GB+). Now, according to Google Images, garbage collection in a properly functioning app looks more like a sawtooth wave than a square wave.
I frankly don't see any way that my app could be pogoing between 200MB and 1GB+ of memory usage within a second and NOT be spiking the CPU to 100%.
I have read about some problems that can manifest between garbage collection + event handling, but I have several paths I could go investigate and am trying to narrow down which one to spend time on. I'm still pretty slow at .NET and haven't developed the "intuition" I have for embedded devices running C that generally helps me filter what I should investigate first.
What it FEELS like is that perhaps some event handlers are losing and re-gaining references to [massive amounts of data] (I don't know how this could even happen?), seeing as memory usage appears to spike back up to 1GB soon after the garbage collector runs and drops memory usage back to 200MB.
Previous versions of this app did not have these problems. Two changes I have made since then include
utilizing LINQ2SQL instead of our own data manager (which had an ADORecordSetHelper object we utilized to execute hardcoded SQL statements)
changing the piece of software we use to send the TCP XML messages to a receiver.
Due to the simplicity of what we're doing in #2, it COULD be the source of the problem, but this memory usage behavior makes me think otherwise.
I guess my main questions at this point are
Should I be calling dispose on my LINQ2SQL DataContexts before I return from the method I create them in?
Should I null them out instead?
if an exception were occurring somewhere in a method after the creation of a DataContext, could it cause the DataContext to be kept in memory indefinitely?
if I store a result from a LINQ query in a value type (i.e. int, not var), is it evaluated right then, or lazily when the variable is used?
how possible is it for event-driven frameworks to hypothetically lose and regain references?
edit: the events have instance-based subscriptions like discussed here and are never unsubscribed for the life of the app.
edit2: finally managed to catch it in the profiler, appears to be a 200MB system.string that's being created somehow. Thanks everyone for ruling out GC behavior.
Most of the time, memory leaks are caused by weird references between objects (events and delegates are included here).
What I think you could try is the following:
Run the application and reproduce the issue. When the private working set of memory hits a very high value, right-click the process in Task Manager and select "Create Dump File". This will be a lot less intrusive than profiling the application live.
Download WinDBG and run it.
Open the memory dump by going to the File menu and selecting Open dump file (I cannot remember exactly what the menu option is called... it should be easy to spot though).
Run the following commands:
.symfix
.loadby sos clr
!dumpheap -type [YourAssemblyNameSpacePrefix] -stat
The last command will give you all the instances in memory which are not CLR types, only your types. Look at the types which have a very high number of instances and try to see if anything doesn't look right.
If you see a very high number of objects of the same type run the following command which will show you all instances' addresses:
!dumpheap -type [TheFullObjectTypeName]
You will need to select one single instance address. Now run the following command to see the references to that instance:
!gcroot [InstanceAddress]
Repeat step 6 a few times for different instances so that you can confirm the leak is coming from the same place or to help you identify what is causing those instances to not be collected (still being referenced by other objects).
If you don't see anything weird with your own types, change the !dumpheap command in step 4 to: !dumpheap -stat. This way you are not filtering by type and you will also see CLR types and third party libraries types.
This is a little bit complex but hopefully I was able to give you a method to help you figure out how to find memory leaks.
I am using Visual C# Express 2008 and I have an application that starts up on a form, but uses a thread with a delegated display function to take care of essentially all the processing. That way my form doesn't lock up while tasks are being processed.
Semi-recently, after going through a repeated process a number of times (the program processes incoming data, so when data comes in, the process repeats), my app crashes with a System.OutOfMemory error.
The stack trace in the error message is useless because it only directs me to the line where I call the delegated form control function.
I've heard people say they use ProcMon from SysInternals to see why errors like this happen, but I, for the life of me, can't figure it out. The amount of memory I am using doesn't change as the program runs; if it goes up, it comes back down. Plus, even if it were going up, how do I figure out which part of my program is the problem?
How can I go about investigating this problem?
EDIT:
So, after delving further into this issue, I looked through anything that I was ever re-declaring. There were a few instances where I had hugematrix = new uint[gigantic], so I got rid of about 3 of those.
Instead of getting rid of the error, it is now far more obscured and confusing.
My application takes the incoming data, and renders it using OpenGL. Now, instead of throwing "System.OutOfMemory" it simply does not render anything with OpenGL.
The only difference in my code is that I do not make new matrices for holding the data I plot. That way, I hope, my array stays in the same place in memory and doesn't do anything suicidal to my LOH.
Unfortunately, this twists the beast far beyond my meager means. With zero errors popping up, and all my data structures apparently still properly filled, how can I find my problem? Does OpenGL use memory in an obscure way so as to not throw exceptions when it fails? Is memory still a problem? How do I find out? All the memory profilers in the world seem to tell me very little.
EDIT:
With the boatloads of support from this community (with extra kudos to Amissico) the error has finally been rooted out. Apparently I was adding items to an OpenGL list, and never taking them off the list.
The app that finally clued me in was .NET Memory Profiler. At the time of the crash it showed 1.5GB of data in the <unknown> category. Through a process of elimination (everything else in the list was named), the last thing to be checked off the list was the OpenGL rendering pipeline. The rest is history.
Based on the description in your comments, I would suspect that you are either not disposing of your images correctly or that you have severe Large Object Heap fragmentation and, when trying to allocate for a new image, don't have enough contiguous space available. See this question for more info - Large Object Heap Fragmentation
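If image disposal turns out to be the issue, the usual pattern is to dispose each image deterministically as soon as it is no longer needed; a minimal sketch, assuming System.Drawing images (the method is illustrative):

using System.Drawing;

class ImageHelper
{
    // Dispose each image deterministically instead of waiting for the
    // finalizer, so its unmanaged GDI+ memory is released promptly.
    public static void ProcessImage(string path)
    {
        using (var image = Image.FromFile(path))
        {
            // ... draw or analyze the image here ...
        }   // Dispose() runs here, even if an exception is thrown above
    }
}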
You need to use a memory profiler, such as ANTS Memory Profiler, to find out what causes this error.
Are you re-registering an event handler on every loop and not un-registering it?
CLR Profiler for the .NET Framework 2.0 at https://github.com/MicrosoftArchive/clrprofiler
The most common cause of memory fragmentation is excessive string creation.
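For instance, building a large string by repeated concatenation creates many intermediate strings (and, past roughly 85,000 bytes, large object heap allocations), whereas a StringBuilder uses one growing buffer. A quick sketch:

using System;
using System.Text;

class StringExample
{
    static void Main()
    {
        // Wasteful: each += allocates a brand new, ever larger string,
        // leaving thousands of dead intermediates behind.
        string xml = "";
        for (int i = 0; i < 10000; i++)
            xml += "<row id=\"" + i + "\"/>";

        // Better: one growing buffer, one final string.
        var sb = new StringBuilder();
        for (int i = 0; i < 10000; i++)
            sb.Append("<row id=\"").Append(i).Append("\"/>");
        string xml2 = sb.ToString();

        Console.WriteLine(xml.Length == xml2.Length);   // True
    }
}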
Some considerations:
Make sure that the threads you spawn are destroyed (aborted, or their function returns). Too many threads can bring an application down even though the memory used in Task Manager is not very high (see the sketch below).
Memory leaks. Yes, yes, you can cause them in .NET quite easily, even though you never have to set references to null. They can be tracked down with memory profilers like dotTrace or ANTS Memory Profiler.
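A sketch of the first point (names invented): give each worker thread a clean way to exit instead of letting threads accumulate.

using System.Threading;

class Worker
{
    private volatile bool stopRequested;
    private Thread thread;

    public void Start()
    {
        thread = new Thread(Run) { IsBackground = true };   // background threads don't keep the process alive
        thread.Start();
    }

    private void Run()
    {
        while (!stopRequested)
        {
            // ... poll, process ...
            Thread.Sleep(1000);
        }
    }   // the thread and its stack are released once this method returns

    public void Stop()
    {
        stopRequested = true;   // ask the loop to finish
        thread.Join();          // wait for it to actually return
    }
}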
I had an OutOfMemoryException problem as well:
Microsoft Visual C# 2008 Reducing number of loaded dlls
The reason was fragmentation of the 2 GB virtual address space, and poster nobugz suggested Sysinternals' VMMap utility, which has been very helpful for diagnostics. You can use it to check whether your free memory areas become more fragmented over time. (First sort by size, then by type -> refresh, repeat the sorting, and you can see whether the contiguous free memory blocks become smaller.)
I am working on a web app using C# and ASP.NET, and I have been receiving an out-of-memory exception. What the app does is read a bunch of records (products) from a data source, which could be hundreds or thousands, process those records through settings in a wizard, and then update a different data source with the processed product information. Although there are multiple DB classes, right now all the logic is in one big class. The only reason for this is that all the information has to do with one thing: a product. Would it help the memory situation if I divided my app into different classes? I don't think it would, because if I divided the business logic into two classes, both classes would remain alive the entire time, sending messages to each other, so I don't see how that would help. I guess my other option is to find out what's sucking up all the memory. Is there a good tool you could recommend?
Thanks
Are you using data readers to stream through your data (to avoid loading too much into memory)?
My gut tells me this is a trivial issue to fix: don't pump DataTables with 1 million records; work through tables one row at a time, or in small batches ... Release and dispose of objects when you are done with them. (Example: don't have static List<Customer> allCustomers = AllCustomers().)
Have a development rule that ensures no one reads tables into memory if there are more than X rows involved.
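For the streaming point, a minimal sketch of reading rows one at a time with a data reader instead of filling a DataTable (the connection string and table are made up for illustration):

using System;
using System.Data.SqlClient;

class ProductReaderExample
{
    static void Main()
    {
        // Hypothetical connection string and query, for illustration only.
        const string connectionString = "Server=.;Database=Shop;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM Products", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Only one row is held in memory at a time.
                    int id = reader.GetInt32(0);
                    string name = reader.GetString(1);
                    ProcessProduct(id, name);
                }
            }
        }   // connection, command, and reader are all disposed here
    }

    static void ProcessProduct(int id, string name)
    {
        Console.WriteLine("{0}: {1}", id, name);
    }
}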
If you need a tool to debug this, look at .NET Memory Profiler or WinDbg with the SOS extension; both will allow you to sniff through your managed heaps.
Another note: if you care about maintainability and would like to reduce your defect count, get rid of the SuperDuperDoEverything class and model the information in a way that is better aligned with your domain. The SuperDuperDoEverything class is a bomb waiting to explode.
Also note that you may not actually be running out of memory. What happens is that .NET goes to look for contiguous blocks of memory, and if it doesn't find any, it throws an OOM - even if you have plenty of total memory to cover the request.
Someone referenced both Perfmon and WinDBG. You could also set up adplus to capture a memory dump on crash - I believe the syntax is adplus -crash -iis. Once you have the memory dump, you can do something like:
.symfix C:\symbols
.reload
.loadby sos mscorwks
!dumpheap -stat
And that will give you an idea for what your high-memory objects are.
And of course, check out Tess Fernandez's excellent blog, for example this article on Memory Leaks with XML Serializers and how to troubleshoot them.
If you are able to repro this in your dev environment, and you have VS Team Edition for Developers, there are memory profilers built right in. Just launch a new performance session, and run your app. It will spit out a nice report of what's hanging around.
Finally, make sure your objects don't define a destructor. This isn't C++, and there's nothing deterministic about it, other than it guarantees your object will survive a round of Garbage Collection since it has to be placed in the finalizer queue, and then cleaned up the next round.
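In other words, prefer deterministic cleanup via IDisposable and skip the finalizer unless you directly own an unmanaged handle. A minimal sketch (illustrative class names):

using System;
using System.IO;

// No destructor/finalizer: cleanup is deterministic via Dispose(), so the
// object can be reclaimed in a single GC pass instead of surviving into
// the finalizer queue.
class ReportWriter : IDisposable
{
    private readonly StreamWriter writer;

    public ReportWriter(string path)
    {
        writer = new StreamWriter(path);
    }

    public void Write(string line)
    {
        writer.WriteLine(line);
    }

    public void Dispose()
    {
        writer.Dispose();
    }
}

class Program
{
    static void Main()
    {
        // The using block guarantees Dispose() runs, even on exceptions.
        using (var report = new ReportWriter("report.txt"))
        {
            report.Write("hello");
        }
    }
}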
A very basic thing you might want to try: restart Visual Studio (assuming you are using it) and see if the same thing happens. And yes, releasing objects without waiting for the garbage collector is always good practice.
to sum it up,
release objects
close connections
and you can always try this,
http://msdn.microsoft.com/en-us/magazine/cc337887.aspx
I found the problem. Inside my loop I had a collection that wasn't being cleared, so data just kept being added to it.
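For anyone hitting the same thing, the pattern looked roughly like this (illustrative names, not the real code):

using System.Collections.Generic;

class WizardProcessor
{
    // Declared once, outside the loop, and originally never cleared:
    // every batch of products stayed reachable, so memory grew forever.
    private readonly List<string> processed = new List<string>();

    public void Run(IEnumerable<IEnumerable<string>> batches)
    {
        foreach (var batch in batches)
        {
            processed.AddRange(batch);
            // ... update the target data source from 'processed' ...

            processed.Clear();   // the fix: release the batch once it has been written out
        }
    }
}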
Start with Perfmon; there are a number of counters for GC-related info. More than likely you are leaking memory (otherwise the GC would be deleting objects), meaning you are still referencing data structures that are no longer needed.
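You can also read the same ".NET CLR Memory" counters from code and log them over time; a small sketch, assuming the snippet runs inside the process being monitored (the counter names are the standard ones shown in Perfmon):

using System;
using System.Diagnostics;
using System.Threading;

class GcCounterLogger
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;

        using (var bytesInAllHeaps = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance))
        using (var gen2Collections = new PerformanceCounter(".NET CLR Memory", "# Gen 2 Collections", instance))
        {
            while (true)
            {
                Console.WriteLine("Heap: {0:N0} bytes, Gen 2 collections: {1}",
                                  bytesInAllHeaps.NextValue(), gen2Collections.NextValue());
                Thread.Sleep(5000);
            }
        }
    }
}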
You should split it into multiple classes anyway, just for the sake of a sane design.
Are you closing your DB connections? If you are reading from or writing to files, are you closing/releasing them once you are done? The same goes for other objects.
You could cycle your class objects routinely just to release memory.