Help finding a leak with Remote Performance Monitor - c#

I'm trying to find the source of a memory leak in a Compact Framework application using the Remote Performance Monitor. I managed to remove a few minor ones related to brushes and other graphic objects, but I still don't have a clear idea of what is causing the main problem.
At this point the only objects that never seem to go back to their original count are System.String objects. I find this very weird, since I would've thought that for these objects to remain uncollected, the objects that contain them would have to remain as well, and yet no other type of object seems to be increasing along with the System.Strings.
I'm trying to find out which new String objects are the ones that remain after the application returns to its original state (i.e. the login screen). The problem is that originally, the application loads with about 2200 string objects, and then after process "X" it increases another 70 or so which never get collected. I don't know how to identify those 70 new objects, in order to find out who is holding on to them and make the appropriate corrections.
Does anyone have any experience with Strings not being collected? Is there any way to separate the new objects created during process "X" from those originally required by the application, so that I can know which ones are leaking? Any advice would be appreciated.
Thanks
**UPDATE**
Ok... there is something very strange going on. I'm starting to doubt whether there is a leak at all.
Let's say that I take a memory snapshot at the Login screen which is the original starting point of the application. Imagine that there are 1000 string objects in memory at this point. Now, if I login and choose an option from the menu, I'll take a snapshot once the new screen is loaded. Let's say that loading this form increases the string count by say 50 objects. When I logout and take a snapshot again at the login screen, only 25 of those objects have been collected, the rest will stay in memory from then on.
The strange thing is that if I keep repeating this process, no further string objects will accumulate. Instead of the string count increasing by 50, only 25 will be added at this point, and those same 25 will be collected once I go back to the login screen. I'm thinking that if this were an actual leak, then each time that I opened that screen the string count would increase permanently by 25, but this only happens the first time.
This happens to me on every new screen that I open. At first there is a small permanent increase in the overall string count, but once I've loaded that particular screen, any increase in the string count during its execution will be collected once I go back to the login screen.
All of this has led me to believe that perhaps those strings are part of the inner workings of the CLR or something like that. Could it be some sort of caching done by the runtime? Perhaps it is storing my string constants for faster loading? Something like that? I hope this wasn't too confusing.

If you're using CF 3.5, use the CLR Profiler instead of RPM. It will show you all objects and the roots that spawned them, and it will let you walk back the root tree to figure out where they were allocated.
EDIT
So you're saying that you aren't actually getting OOMs or any aberrant behavior? It sounds like you're trying to optimize when it's not necessary. Those strings are likely things like Form captions, etc., that the JITter is creating, and they would get collected if/when the containing objects get collected. What they are, though, really isn't terribly important if you're not actually seeing memory problems.

Related

C#/MonoGame - Destroying large objects in memory when I unload my game level

I'm working on a game using C#/MonoGame and I'm wondering how to solve a garbage collection problem relating to large game objects in memory.
Every time I load a new game, I create and store a World object which itself has a private field containing a quadtree for my LOD terrain system. The quadtree recursively contains vertex information for all its possible child nodes, down to the smallest level I've decided I want. This means that each new World takes ~10 seconds to create (done on a background thread) and comes to ~150MB in RAM size.
I'd like to be assured that every time I load a new game, the old World is disposed, but as far as I can tell right now, this is never happening. I just tested it by pressing 'load game' six times in a row, and the memory used by my game was touching 1GB without ever dropping.
I know this could mean that I'm still referencing my old World objects, but I can't see how this is happening. My main game app has a single private World which is re-created from scratch as a new object on my background loader thread every time I press 'load game':
// This belongs to the game and is referenced when I want to interact with and draw the current World
private World _world;
.
.
// Executes on background loader thread when I press 'load game'
void LoadGame()
{
// This is where the old World is replaced with the new one, so I want the old one to be disposed as it contains ~150MB of useless data
_world = new World(worldParameters);
}
.
.
void Draw()
{
// Rendering the active World in our draw loop
DrawWorld(_world);
}
I tried setting _world to null and forcing the GC to collect at the top of the LoadGame() method, but that did nothing.
Is there a way I can force the memory for the old object to be freed, or even just see if I'm inadvertently pointing to any of its fields in the rest of my game, causing it to stay alive?
Memory allocation can be a tricky thing in garbage collected languages like C#. Ironically, these languages are designed so that you don't have to think too much about memory allocations. Unfortunately, for games having the garbage collector kick in during gameplay can be a frustrating experience for the player if keeping a consistent frame rate is important for your game.
There are two approaches that I'm aware of to deal with this kind of issue. The first is to explicitly call the garbage collector yourself.
GC.Collect();
GC.WaitForPendingFinalizers();
Keep in mind, this is a quick and dirty approach and it might not work the way you expect. Many people on the internet will tell you it's not recommended or a bad idea to force garbage collection. I'm no expert in the pros and cons, but I just wanted to make you aware that it's an option so you can do your own research.
The second approach is being smart about your data structures. Some circles might call this "implementing your own memory allocation strategy". Essentially what it boils down to is pre-allocating a big chunk of memory up front and reusing that space over and over again.
What the exact strategy looks like for you could be very simple, or very complex. My advice is to start with the simplest thing you can get away with and see how it goes. One of the patterns that might be helpful to look into is often called "object pooling".
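As a starting point, here is a minimal sketch of the object-pooling pattern mentioned above. The `Projectile` and `ProjectilePool` names are illustrative, not from the question; the idea is simply that instances are allocated once up front and recycled, so steady-state gameplay produces no garbage for the collector to chase.

```csharp
using System.Collections.Generic;

class Projectile
{
    public float X, Y;
    public bool Active;
}

class ProjectilePool
{
    private readonly Stack<Projectile> _free = new Stack<Projectile>();

    public ProjectilePool(int capacity)
    {
        // One up-front allocation burst instead of many small ones at runtime.
        for (int i = 0; i < capacity; i++)
            _free.Push(new Projectile());
    }

    // Take a recycled instance instead of calling 'new' during gameplay.
    public Projectile Rent()
    {
        var p = _free.Count > 0 ? _free.Pop() : new Projectile();
        p.Active = true;
        return p;
    }

    // Hand the instance back so the next Rent() can reuse it.
    public void Return(Projectile p)
    {
        p.Active = false;
        _free.Push(p);
    }
}
```

The same shape works for any short-lived game object; the only discipline required is that every `Rent()` is eventually paired with a `Return()`.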

Free memory explicitly

I know this may seem like a duplicate, but I think this specific case may be a bit different.
In my MVC application, I have a ViewModel containing a big List&lt;MyComplexClass&gt; used to display a table similar to an Excel sheet. Each MyComplexClass contains a list of MyComplexClass and a list of MyComplexColumn.
This makes a pretty big memory usage, but now I have a problem. I am supposed to write a method that cleans my big table, and load a different set of data into it. I could write something like:
MyViewModel.Lines = new List<MyComplexClass>();
MyViewModel.LoadNewSetOfData(SetOfData);
But, coming from a C background, explicitly losing my reference to the old List is not something I can do and continue looking at my face in the mirror every morning.
I saw Here, Here, and Here, that I can set the reference to null, and then call
GC.Collect()
But I was told that this was not really best practice, especially because it may really affect performance, and because I have no way to know whether the GC has disposed of this specific memory allocation.
Isn't there any way I can just call something like Free() and live on?
Thank you
Don't worry about it! Trying to "encourage" the garbage collector to reclaim memory isn't a great idea - just leave it to do its work in the background.
Just load your different set of data. If memory is getting low the GC will kick in to reclaim the memory without you having to do anything, and if there's plenty of memory available then you'll be able to load the new set without taking the hit of a garbage collection.
The short answer - don't worry about garbage collection. The only time I'd be concerned is if you're putting massive amounts of objects and data in a place where they might not get properly garbage collected at all. For example, you store tons of per-user data in ASP.NET session, and it sits there for a long time even though the user visits a few pages and leaves.
But as long as references to objects are released (as when they go out of scope) then garbage collection will do its job. It might not be perfect, but it's really, really good. The work and thought that went into developing good garbage collection in most cases probably exceeds the work that we put into developing the applications themselves.
Although we have the ability to trigger it, the design of the .NET framework deliberately allows us to focus on higher-level abstractions without having to worry about smaller details like when the memory is freed.

Objects in memory & general memory management

I'm having trouble grasping some concepts relating to objects in memory and I would be very grateful if someone could put me on the right track. I realize how important managing memory is and I'm concerned that I'm adopting bad programming habits.
In my game loop I use lambda expressions in the following format to remove objects from my game:
ObjectsMan.lstExplosionParticles.RemoveAll(particle => particle.TTL <= 0);
These objects are usually instantiated inside list add methods, for example:
ObjectsMan.EnemyShots.Add(new EnemShot(current.SpritePosition + position.Key,
Logic_ReturnValue.Angle_TarPlayer(current), position.Value));
As far as I understand it, the list is storing the object's memory location. So when I remove it from the list, the object still exists in memory. Is that correct?
If that is indeed the case, could many of these objects sitting in memory cause game lag (for example, thousands of individual projectile objects) even if I'm not drawing them? Do I need to manually assign each object to null?
Additionally, if I don't impose a frame rate cap on my game's draw method, am I correct in thinking that I'm losing a massive amount of performance due to frames being drawn that the human eye can't see?
Another question: if I stop my game using the debugger while a sound is playing, my sound driver locks up. I thought that calling my sound effect's Stop method would prevent this. Does the debugger bypass XNA's unload content method when it stops? What is happening here?
Lastly, if I include extra using statements that I don't technically need, is that impacting my memory? For example most of my classes include a few using statements that they don't actually need. Is there a performance related reason to clean this up, or is it just good programming practice?
I would greatly appreciate it if someone could give me some help or point me in the right direction with this.
Thanks.
As far as I understand it, the list is storing the object's memory location. So when I remove it from the list, the object still exists in memory. Is that correct?
Yes. You are just removing references to the objects.
If that is indeed the case, could many of these objects sitting in memory cause game lag (for example, thousands of individual projectile objects) even if I'm not drawing them?
Directly? No. But they will cause lags when the GC finally kicks in, simply because the GC has many more objects to take care of. I had exactly the same problem on the Xbox 360, which didn't have a generational GC.
Do I need to manually assign each object to null?
That is just removing the reference. It's the same as removing it from the list.
Lastly, if I include extra using statements that I don't technically need, is that impacting my memory? For example most of my classes include a few using statements that they don't actually need. Is there a performance related reason to clean this up, or is it just good programming practice?
There should be no problem here.
The GC doesn't kick in until the end of the program, is this correct?
Wrong. The GC can run at any point in time. Usually, if the application decides it is running low on memory and the OS can't provide any more, it will run the GC.
Just because I've removed the objects from a list, that shouldn't give the GC grounds to remove my object from memory?
If this list held the only reference, then the GC is free to remove the object. But that is unrelated to whether the GC actually runs at that moment.
Also, when programming a game, you should usually limit the number of objects you create, simply because running the GC can create a lot of unpredictable lag. Either use structs or reuse existing instances. In the case of a particle system, the best way would be to have a struct for each particle and a field saying whether the particle is active. When removing a particle, you just set this field to false. When adding a new particle, you find the first one that is not active, set the appropriate fields, and activate it.
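The struct-based particle scheme described above can be sketched like this (the type and field names are illustrative). The particles live in one pre-allocated array, so activating and deactivating them never allocates and never feeds the GC:

```csharp
struct Particle
{
    public float X, Y;
    public float TimeToLive;
    public bool Active;
}

class ParticleSystem
{
    private readonly Particle[] _particles;

    public ParticleSystem(int capacity)
    {
        _particles = new Particle[capacity]; // single up-front allocation
    }

    // "Adding" a particle just claims the first inactive slot.
    public bool Spawn(float x, float y, float ttl)
    {
        for (int i = 0; i < _particles.Length; i++)
        {
            if (!_particles[i].Active)
            {
                _particles[i].X = x;
                _particles[i].Y = y;
                _particles[i].TimeToLive = ttl;
                _particles[i].Active = true;
                return true;
            }
        }
        return false; // pool exhausted
    }

    // "Removing" a particle just clears its flag; no memory is freed or collected.
    public void Update(float dt)
    {
        for (int i = 0; i < _particles.Length; i++)
        {
            if (!_particles[i].Active) continue;
            _particles[i].TimeToLive -= dt;
            if (_particles[i].TimeToLive <= 0) _particles[i].Active = false;
        }
    }
}
```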

What could cause this memory issue?

I'm working on an app for windows phone 8, and I'm having a memory leak problem. But first some background. The app works (unfortunately) using WebBrowsers as pages. The pages are pretty complex with a lot of javascript involved.
The native part of the app, written in C#, is responsible for doing some simple communication with the javascript (e.g. native code acts as a delegate for the javascript to communicate with a server), animating page transitions, tracking, persistence, etc. All of this is done in a single PhoneApplicationPage.
After I had some crashes from out-of-memory exceptions, I started profiling the app. I can see that the WebBrowsers, which are the biggest part of the app, are being disposed correctly.
But the problem I'm seeing is that memory continues to increase. What's worse, I have little feedback from the profiler. From what I understand, the profiler graph says there is a big problem, while the profiler numbers say there's no problem at all...
Note: the step represents a navigation from one WebBrowser to another. The spike is created (I suppose) by the animation between the two controls. In the span I've selected in the image, I was navigating forward once and backward once, with a maximum of 5 WebBrowsers (2 for menus that are always there, 1 for the index page, 1 for the page I navigate from and 1 for the page I navigate to). At every navigation the profiler shows the correct number of WebBrowsers: 5 after navigating forward, 4 after navigating backward.
Note 2: I have added the red line to make clearer that the memory is going up in that span of time
As you can see from the image, the memory usage is pretty big, but the numbers say it's low, and in that span of time the retained allocation is lower than when it started...
I hope I've included enough information. I want some ideas on what could cause this problem. My ideas so far are:
-the javascript in the WebBrowser is doing something wrong (e.g. not cleaning some event handler). Even if this is the case, shouldn't the WebBrowser release the memory when it is destroyed?
-using a unique PhoneApplicationPage is something evil that is not supposed to be done, and changing its structure may cause this.
-other?
Another question: why does the graph show the correct amount of memory use while the number doesn't?
If you need more info about the profiler, ask and I will post them tomorrow.
OK, after a lot of investigation I was finally able to find the leak.
The leak is created by the WebBrowser control itself, which seems to have some event handlers that are not removed when you remove it from a Panel. In fact, the leak is reproducible by following these steps:
1. Create a new WebBrowser
2. Add it to a Panel or whatever
3. Navigate to a page with a big, heavy image
4. Tap somewhere in the blank space of the browser (tapping on the image seems not to create the leak)
5. Remove and collect the browser
6. Repeat from 1
At every iteration the memory for the image is never collected, and the memory continues to grow.
A ticket to Microsoft was already sent.
The problem was resolved using a pool of WebBrowsers
I don't think there is enough information to find the cause of your leak, and without your posting your entire solution I am not sure there can be, since the question is about locating its root cause...
What I can offer is the approach I used when I had my own memory leak.
The technique was to:
1. Open a memory profiler. From your screenshot I see you are using one; I used perfmon. This article has some material about setting up perfmon, and #fmunkert also explains it rather well.
2. Locate an area in the code where you suspect the leak is likely to be. This part mostly depends on you having good guesses about which part of the code is responsible for the issue.
3. Push the leak to the extreme: use labels and "goto" to isolate an area / function and repeat the suspicious code many times (a loop will work too; I find goto more convenient for this matter).
4. In the loop, use a breakpoint that halts every 50 hits to examine the delta in memory usage. Of course you can change the value to fit a noticeable leak change in your application.
5. If you have located the area that causes the leak, the memory usage should spike rapidly. If the memory usage does not spike, repeat stages 1-4 with another area of code that you suspect of being the root cause. If it does spike, continue to 6.
6. In the area you have found to be the cause, use the same technique (goto + labels) to zoom in and isolate smaller parts of the area until you find the source of the leak.
Note that the downsides of this method are:
- If you are allocating an object in the loop, its disposal should also be contained in the loop.
- If you have more than one source of leak, it makes the leak harder to spot (yet still possible).
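The goto-and-breakpoint technique above can be sketched as follows. `SuspectOperation` and `Retained` are placeholders for whatever code and state you are isolating; here the operation deliberately leaks so the growth is visible between stops:

```csharp
using System;
using System.Collections.Generic;

class LeakHarness
{
    // Stands in for whatever long-lived state the suspect code leaks into.
    public static readonly List<byte[]> Retained = new List<byte[]>();

    // Placeholder for the code under suspicion; here it leaks on purpose.
    static void SuspectOperation() => Retained.Add(new byte[1024]);

    public static void Run()
    {
        int iterations = 0;

    Repeat:
        SuspectOperation();
        iterations++;

        if (iterations % 50 == 0)
        {
            // Put a breakpoint here (or log it) and compare the managed heap
            // size between stops; a real leak grows steadily with the count.
            Console.WriteLine($"After {iterations} runs: {GC.GetTotalMemory(false)} bytes");
        }

        if (iterations < 500) goto Repeat;
    }
}
```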
Did you clean up your event handlers? You may inadvertently still have some references if the controls are rooted.
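To illustrate how an un-removed handler roots an object (the `Publisher` and `Page` types here are made up for the sketch): as long as the publisher lives, every subscriber that never unsubscribed stays reachable through the event's invocation list and can never be collected.

```csharp
using System;

class Publisher
{
    public event EventHandler Updated;
    public void Raise() => Updated?.Invoke(this, EventArgs.Empty);
    public int SubscriberCount => Updated?.GetInvocationList().Length ?? 0;
}

class Page
{
    private readonly Publisher _publisher;

    public Page(Publisher publisher)
    {
        _publisher = publisher;
        _publisher.Updated += OnUpdated;   // roots this Page in the publisher
    }

    private void OnUpdated(object sender, EventArgs e) { /* ... */ }

    // Without this, the Page (and everything it references) lives for the
    // lifetime of the publisher, even after nothing else points at it.
    public void Close() => _publisher.Updated -= OnUpdated;
}
```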

"being moved to page file" event

Is there an event (or similar) in C# that tells you when the current process is about to be moved from memory to the page file? And, for that matter, an event for coming back from the page file?
If those events do not exist, is there perhaps a better way to accomplish this, or any other suggestions?
The reason I would like to accomplish this:
I have an application / inventory management program which allows you to look through everything, and it mostly keeps all that information in a large List. I would like to clear out that list before it gets written to disk. It just becomes stale information and slows the program down when it has to be resumed. I would rather query the database for fresh information instead of loading stale info.
No, there is no such event. Even if there were, memory is paged out at the, err, page level, and there is no easy way to know what objects reside in which pages. Add to that the fact that even if you knew that object X is in page Y, X likely has references to other objects that may reside in other pages. So even if X is currently paged in, actually using it may require paging in other memory. And the garbage collector can move objects in memory over their lifetime, making the whole problem more complicated.
That said, for your use case it sounds like you shouldn't even be storing the data in a large List. Can you just use normal data binding?
There is no such event. However, if your program happens to be using ASP.Net then you can use System.Web.Caching.Cache: it will dump your cached objects when there's memory pressure.
I don't know of a good one outside of ASP.Net, but perhaps someone else can suggest something?
edit: meklarian suggests trying out System.Runtime.Caching, probably specifically MemoryCache. Thank you meklarian!
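A minimal sketch of the System.Runtime.Caching.MemoryCache suggestion: instead of being tied to paging, the cache evicts entries under memory pressure (or on expiry), at which point you fall back to the database. The key name and expiration value here are illustrative:

```csharp
using System;
using System.Runtime.Caching;

class InventoryCache
{
    private static readonly MemoryCache _cache = MemoryCache.Default;

    public static void Store(string key, object items)
    {
        // Entries can be dropped by the cache under memory pressure, or
        // after the sliding expiration elapses without the entry being read.
        _cache.Set(key, items, new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromMinutes(10)
        });
    }

    public static object Load(string key)
    {
        // Returns null if the entry was evicted; re-query the database then.
        return _cache.Get(key);
    }
}
```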