Objects in memory & general memory management - C#

I'm having trouble grasping some concepts relating to objects in memory and I would be very grateful if someone could put me on the right track. I realize how important managing memory is and I'm concerned that I'm adopting bad programming habits.
In my game loop I use lambda expressions in the following format to remove objects from my game:
ObjectsMan.lstExplosionParticles.RemoveAll(particle => particle.TTL <= 0);
These objects are usually instantiated inside list add methods, for example:
ObjectsMan.EnemyShots.Add(new EnemShot(current.SpritePosition + position.Key,
Logic_ReturnValue.Angle_TarPlayer(current), position.Value));
As far as I understand it, the list is storing the object's memory location. So when I remove it from the list, the object still exists in memory. Is that correct?
If that is indeed the case, could many of these objects sitting in memory cause game lag (for example, thousands of individual projectile objects) even if I'm not drawing them? Do I need to manually assign each object to null?
Additionally, if I don't impose a frame rate cap on my game's draw method, am I correct in thinking that I'm losing a massive amount of performance due to frames being drawn that the human eye can't see?
Another question: when I stop my game using the debugger while a sound is playing, my sound driver locks up. I thought that calling my sound effect's Stop method would prevent this. Does the debugger bypass XNA's UnloadContent method when it stops? What is happening here?
Lastly, if I include extra using statements that I don't technically need, is that impacting my memory? For example most of my classes include a few using statements that they don't actually need. Is there a performance related reason to clean this up, or is it just good programming practice?
I would greatly appreciate it if someone could give me some help or point me in the right direction with this.
Thanks.

As far as I understand it, the list is storing the object's memory location. So when I remove it from the list, the object still exists in memory. Is that correct?
Yes. You are just removing references to the objects.
If that is indeed the case, could many of these objects sitting in memory cause game lag (for example, thousands of individual projectile objects) even if I'm not drawing them?
Directly? No. But they will cause lag when the GC finally kicks in, simply because the GC has many more objects to take care of. I had exactly the same problem on the Xbox 360, which didn't have a generational GC.
Do I need to manually assign each object to null?
No. That just removes the reference, which is the same as removing it from the list.
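To make the distinction concrete, here is a small sketch; the Particle type and its TTL field are stand-ins for the question's objects, not code from it:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for the game objects in the question.
class Particle { public int TTL; }

static class RemovalDemo
{
    // Removing from the list only drops the list's reference; the object
    // stays alive while anything else still references it.
    public static (int listCount, int ttlViaOtherRef) Run()
    {
        var particles = new List<Particle>
        {
            new Particle { TTL = 0 },   // "dead" particle
            new Particle { TTL = 5 }
        };
        Particle extraRef = particles[0];      // a second reference to the dead one

        particles.RemoveAll(p => p.TTL <= 0);  // list no longer references it

        // extraRef still keeps the object alive. Only once no reference
        // remains does it become *eligible* for collection, and even then
        // the memory is freed only when the GC actually runs.
        return (particles.Count, extraRef.TTL);
    }
}
```

Setting a variable to null is the same kind of operation: it removes one reference, nothing more.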
Lastly, if I include extra using statements that I don't technically need, is that impacting my memory? For example most of my classes include a few using statements that they don't actually need. Is there a performance related reason to clean this up, or is it just good programming practice?
There should be no problem here.
The GC doesn't kick in until the end of the program, is this correct?
Wrong. The GC can run at any point in time. Usually, if the application decides it is running low on memory and the OS can't provide any more, it will run the GC.
Just because I've removed the objects from a list, that shouldn't give the GC grounds to remove my object from memory?
If this list held the only reference, then the GC is free to reclaim the object. But that is unrelated to whether the GC actually runs at that moment.
Also, when programming a game, you should usually limit the number of objects you create, simply because running the GC can cause a lot of unpredictable lag. Either use structs or reuse existing instances. In the case of a particle system, the best way is to have a struct for each particle with a field saying whether the particle is active. To remove a particle, you just set this field to false. To add a new particle, you find the first one that is not active, set the appropriate fields, and activate it.
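A minimal sketch of that struct-based pool; all names here (ParticlePool, Spawn, Active) are invented for illustration, not from the question's code:

```csharp
using System;

// Particles are structs living in one pre-allocated array, so updating and
// "removing" them never allocates and never gives the GC anything to collect.
struct Particle
{
    public bool Active;
    public float X, Y;
    public int TTL;
}

class ParticlePool
{
    private readonly Particle[] _particles;

    public ParticlePool(int capacity) => _particles = new Particle[capacity];

    // "Adding" a particle just activates the first inactive slot; no heap allocation.
    public bool Spawn(float x, float y, int ttl)
    {
        for (int i = 0; i < _particles.Length; i++)
        {
            if (!_particles[i].Active)
            {
                _particles[i] = new Particle { Active = true, X = x, Y = y, TTL = ttl };
                return true;
            }
        }
        return false; // pool exhausted
    }

    // "Removing" a particle just flips the flag; the slot is reused later.
    public void Update()
    {
        for (int i = 0; i < _particles.Length; i++)
        {
            if (_particles[i].Active && --_particles[i].TTL <= 0)
                _particles[i].Active = false;
        }
    }

    public int ActiveCount
    {
        get
        {
            int n = 0;
            foreach (var p in _particles) if (p.Active) n++;
            return n;
        }
    }
}
```

The pool's capacity is fixed up front, which is exactly the point: all the memory is allocated once, before gameplay starts.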

Related

C#/MonoGame - Destroying large objects in memory when I unload my game level

I'm working on a game using C#/MonoGame and I'm wondering how to solve a garbage collection problem relating to large game objects in memory.
Every time I load a new game, I create and store a World object which itself has a private field containing a quadtree for my LOD terrain system. The quadtree recursively contains vertex information for all its possible child nodes, down to the smallest level I've decided I want. This means that each new World takes ~10 seconds to create (done on a background thread) and comes to ~150MB in RAM size.
I'd like to be assured that every time I load a new game, the old World is disposed, but as far as I can tell right now, this is never happening. I just tested it by pressing 'load game' six times in a row, and the memory used by my game was touching 1GB without ever dropping.
I know this could mean that I'm still referencing my old World objects, but I can't see how this is happening. My main game app has a single private World which is re-created from scratch as a new object on my background loader thread every time I press 'load game':
// This belongs to the game and is referenced when I want to interact with and draw the current World
private World _world;
.
.
// Executes on background loader thread when I press 'load game'
private void LoadGame()
{
// This is where the old World is replaced with the new one, so I want the old one to be disposed as it contains ~150MB of useless data
_world = new World(worldParameters);
}
.
.
Draw()
{
// Rendering the active World in our draw loop
DrawWorld(_world);
}
I tried setting _world to null and forcing the GC to collect at the top of the LoadGame() method, but that did nothing.
Is there a way I can force the memory for the old object to be freed, or even just see if I'm inadvertently pointing to any of its fields in the rest of my game, causing it to stay alive?
Memory allocation can be a tricky thing in garbage collected languages like C#. Ironically, these languages are designed so that you don't have to think too much about memory allocations. Unfortunately, for games having the garbage collector kick in during gameplay can be a frustrating experience for the player if keeping a consistent frame rate is important for your game.
There are two approaches that I'm aware of to deal with this kind of issue. The first is to explicitly call the garbage collector yourself.
GC.Collect();
GC.WaitForPendingFinalizers();
Keep in mind, this is a quick and dirty approach and it might not work the way you expect. Many people on the internet will tell you it's not recommended or a bad idea to force garbage collection. I'm no expert in the pros and cons, but I just wanted to make you aware that it's an option so you can do your own research.
The second approach is being smart about your data structures. Some circles might call this "implementing your own memory allocation strategy". Essentially what it boils down to is pre-allocating a big chunk of memory up front and reusing that space over and over again.
What the exact strategy looks like for you could be very simple, or very complex. My advice is to start with the simplest thing you can get away with and see how it goes. One of the patterns that might be helpful to look into is often called "object pooling".

Free memory explicitly

I know this may seem like a duplicate, but I think this specific case may be a bit different.
In my MVC application, I have a ViewModel containing a big List<MyComplexClass> used to display a table similar to an Excel sheet. Each MyComplexClass contains a list of MyComplexClass and a list of MyComplexColumn.
This makes a pretty big memory usage, but now I have a problem. I am supposed to write a method that cleans my big table, and load a different set of data into it. I could write something like:
MyViewModel.Lines = new List<MyComplexClass>();
MyViewModel.LoadNewSetOfData(SetOfData);
But, coming from a C background, explicitly losing my reference to the old List is not something I can do and still look at my face in the mirror every morning.
I saw Here, Here, and Here, that I can set the reference to null, and then call
GC.Collect()
But I was told that this is not really best practice, especially because it may really affect performance, and because I have no way to know whether the GC has actually disposed of this specific memory allocation.
Isn't there any way I can just call something like Free() and live on?
Thank you
Don't worry about it! Trying to "encourage" the garbage collector to reclaim memory isn't a great idea - just leave it to do its work in the background.
Just load your different set of data. If memory is getting low the GC will kick in to reclaim the memory without you having to do anything, and if there's plenty of memory available then you'll be able to load the new set without taking the hit of a garbage collection.
The short answer - don't worry about garbage collection. The only time I'd be concerned is if you're putting massive amounts of objects and data in a place where they might not get properly garbage collected at all. For example, you store tons of per-user data in ASP.NET session, and it sits there for a long time even though the user visits a few pages and leaves.
But as long as references to objects are released (as when they go out of scope) then garbage collection will do its job. It might not be perfect, but it's really, really good. The work and thought that went into developing good garbage collection in most cases probably exceeds the work that we put into developing the applications themselves.
Although we have the ability to trigger it, the design of the .NET framework deliberately allows us to focus on higher-level abstractions without having to worry about smaller details like when the memory is freed.

Is it inefficient to highly frequently create short-lived new instances of a class?

I have a C# program that tracks a player's position in a game. In this program I have a class called Waypoint (X, Y, Z), which represents a location on the game map. In one of the threads I spawn, I keep checking the player's distance from a certain target Waypoint, quite rapidly in while(true) loops. The Waypoint class has a method called public double Distance(Waypoint wp), which calculates the distance from the current waypoint to the waypoint passed as a parameter.
Question: Is it okay to create a new Waypoint for the player's position, every time I want to check the distance from player to the target waypoint? The program would then potentially, in a while(true) loop, create this player Waypoint over and over again, just for the purpose of calculating the distance.
PS: My program probably needs to smartly use resources, as it is running multiple threads with continuous while loops doing various work such as posting the player's X,Y,Z location to the UI.
Thanks a lot!
What the other answers are saying is:
- maybe you should make stack-local instances because it shouldn't cost much, and
- maybe you shouldn't do so, because memory allocation can be costly.
These are guesses - educated guesses - but still guesses.
You are the only one who can answer the question, by actually finding out (not guessing) if those news are taking a large enough percent of wall-clock time to be worth worrying about.
The method I (and many others) rely on for answering that kind of question is random pausing.
The idea is simple.
Suppose those news, if somehow eliminated, would save - pick a percent, like 20% - of time.
That means if you simply hit the pause button and display the call stack, you have at least a 20% chance of catching it in the act.
So if you do that 20 times, you will see it doing it roughly 4 times, give or take.
If you do that, you will see what's accounting for the time.
- If it's the news, you will see it.
- If it's something else, you will see it.
You won't know exactly how much it costs, but you don't need to know that.
What you need to know is what the problem is, and that's what it tells you.
ADDED: If you'll bear with me to explain how this kind of performance tuning can go, here's an illustration of a hypothetical situation:
When you take stack samples, you may find a number of things that could be improved, one of which could be memory allocation, and it might not even be very big; in this hypothetical case it is item (C), taking only 14%.
It tells you something else is taking a lot more time, namely (A).
So if you fix (A) you get a speedup factor of 1.67x. Not bad.
Now if you repeat the process, it tells you that (B) would save you a lot of time.
So you fix it and (in this example) get another 1.67x, for an overall speedup of 2.78x.
Now you do it again, and you see that the original thing you suspected, memory allocation, is indeed a large fraction of the time.
So you fix it and (in this example) get another 1.67x, for an overall speedup of 4.63x.
Now that is serious speedup.
So the point is 1) keep an open mind about what to speed up - let the diagnostic tell you what to fix, and 2) repeat the process, to make multiple speedups. That's how you get real speedup, because things that were small to begin with become much more significant when you've trimmed away other stuff.
The actual creation of an object with a very short lifetime is minuscule. Creating a new object pretty much just involves incrementing the heap pointer by the size of the object and zeroing out those bits. This isn't going to be a problem.
As for the actual collection of those objects: when the garbage collector performs a collection, it takes all of the objects still alive and copies them. Any objects not "alive" are not touched by the GC, so they aren't adding work to collections. If the objects you create are never, or only very rarely, alive during a collection, then they aren't adding any cost there.
The one thing they can do is decrease the amount of currently available memory such that the GC needs to perform collections noticeably more often than it otherwise would. The GC will actually perform a collection when it needs more memory. If you're constantly using up your available memory creating these short-lived objects, then you could increase the rate of collections in your program.
Of course, it would take a lot of objects to actually have a meaningful impact on how often collections take place. If you're concerned you should spend some time measuring how often your program is performing collections with and without this block of code, to see what effect it is having. If you really are causing collections to happen much more frequently than they otherwise would, and you're noticing performance problems as a result, then consider trying to address the problem.
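One way to do that measuring is with the standard GC.CollectionCount API: compare the number of gen-0 collections that happen with and without the block of code in question. A small helper might look like this (the GcMeasure name is invented for illustration):

```csharp
using System;

static class GcMeasure
{
    // Returns how many gen-0 collections occurred while the action ran.
    // GC.CollectionCount(generation) is part of the standard .NET API.
    public static int Gen0CollectionsDuring(Action action)
    {
        int before = GC.CollectionCount(0);
        action();
        return GC.CollectionCount(0) - before;
    }
}
```

Running this once over an allocating loop and once over an equivalent loop that reuses an instance shows whether the allocations meaningfully change collection frequency.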
There are two avenues that come to mind for resolving this problem, if you have found a measurable increase in the number of collections. You could investigate the possibility of using value types instead of reference types. This may or may not make sense conceptually in your context, and it may or may not actually help the problem; it depends too much on specifics not mentioned, but it's at least something to look into. The other possibility is aggressively caching the objects so that they can be re-used over time. This also needs to be considered carefully, because it can greatly increase the complexity of a program and make it much harder to write programs that are correct, maintainable, and easy to reason about, but it can be an effective tool for reusing memory if applied correctly.
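For the Waypoint case above, the value-type option might look roughly like this. The fields are assumed from the question's (X, Y, Z) description, since the original class isn't shown:

```csharp
using System;

// Waypoint as a struct: creating one is just a stack copy, so the per-check
// "new Waypoint" in the while(true) loop never touches the GC heap at all.
struct Waypoint
{
    public readonly double X, Y, Z;

    public Waypoint(double x, double y, double z)
    {
        X = x; Y = y; Z = z;
    }

    // Euclidean distance between this waypoint and another.
    public double Distance(Waypoint wp)
    {
        double dx = X - wp.X, dy = Y - wp.Y, dz = Z - wp.Z;
        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```

With a struct, the tight loop can create a fresh player Waypoint every iteration with no allocations for the GC to track.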

C# Garbage Collection -> to C++ delete

I'm converting a C# project to C++ and have a question about deleting objects after use. In C# the GC of course takes care of deleting objects, but in C++ it has to be done explicitly using the delete keyword.
My question is: is it OK to just follow each object's usage throughout a method and then delete it as soon as it goes out of scope (i.e. method end/re-assignment)?
I know, though, that the GC waits until a certain amount of garbage has accumulated (~1MB) before collecting; does it do this because there is an overhead to using delete?
As this is a game I am creating, there will potentially be lots of objects being created and deleted every second, so would it be better to keep track of pointers that go out of scope, and once their total size reaches 1MB, delete them all?
(as a side note: later when the game is optimised, objects will be loaded once at startup so there is not much to delete during gameplay)
Your problem is that you are using pointers in C++.
This is a fundamental problem that you must fix, and then all your problems go away. As chance would have it, I got so fed up with this general trend that I created a set of presentation slides on this issue (CC BY, so feel free to use them).
Have a look at the slides. While they are certainly not entirely serious, the fundamental message is still true: don't use pointers. But more accurately, the message should read: don't use delete.
In your particular situation you might find yourself with a lot of long-lived small objects. This is indeed a situation which a modern GC handles quite well, and which reference-counting smart pointers (shared_ptr) handle less efficiently. If (and only if!) this becomes a performance problem, consider switching to a small object allocator library.
You should be using RAII as much as possible in C++ so that you never have to explicitly delete anything.
Once you use RAII through smart pointers and your own resource-managing classes, every dynamic allocation you make will live only as long as there are references to it; you do not have to manage any resources explicitly.
Memory management in C# and C++ is completely different. You shouldn't try to mimic the behavior of .NET's GC in C++. In .NET allocating memory is super fast (basically moving a pointer) whereas freeing it is the heavy task. In C++ allocating memory isn't that lightweight for several reasons, mainly because a large enough chunk of memory has to be found. When memory chunks of different sizes are allocated and freed many times during the execution of the program the heap can get fragmented, containing many small "holes" of free memory. In .NET this won't happen because the GC will compact the heap. Freeing memory in C++ is quite fast, though.
Best practices in .NET don't necessarily work in C++. For example, pooling and reusing objects in .NET isn't recommended most of the time, because the objects get promoted to higher generations by the GC. The GC works best for short lived objects. On the other hand, pooling objects in C++ can be very useful to avoid heap fragmentation. Also, allocating a larger chunk of memory and using placement new can work great for many smaller objects that need to be allocated and freed frequently, as it can occur in games. Read up on general memory management techniques in C++ such as RAII or placement new.
Also, I'd recommend getting the books "Effective C++" and "More effective C++".
Well, the simplest solution might be to just use garbage collection in
C++. The Boehm collector works well, for example. Still, there are
pros and cons (but porting code originally written in C# would be a
likely candidate for a case where the pros largely outweigh the cons.)
Otherwise, if you convert the code to idiomatic C++, there shouldn't be
that many dynamically allocated objects to worry about. Unlike C#, C++
has value semantics by default, and most of your short lived objects
should be simply local variables, possibly copied if they are returned,
but not allocated dynamically. In C++, dynamic allocation is normally
only used for entity objects, whose lifetime depends on external events;
e.g. a Monster is created at some random time, with a probability
depending on the game state, and is deleted at some later time, in
reaction to events which change the game state. In this case, you
delete the object when the monster ceases to be part of the game. In
C#, you probably have a dispose function, or something similar, for
such objects, since they typically have concrete actions which must be
carried out when they cease to exist—things like deregistering as
an Observer, if that's one of the patterns you're using. In C++, this
sort of thing is typically handled by the destructor, and instead of
calling dispose, you delete the object.
Substituting a shared_ptr in every instance that you use a reference in C# would get you the closest approximation at probably the lowest effort input when converting the code.
However, you specifically mention following an object's use through a method and deleting it at the end. A better approach is not to new up the object at all, but simply to instantiate it inline/on the stack. If you take this approach, then with the newer copy/move semantics being introduced this becomes an efficient way to deal with returned objects as well, so there is no need to use pointers in almost any scenario.
There are a lot more things to take into consideration when deallocating objects than just calling delete whenever a pointer goes out of scope. You have to make sure that delete is called exactly once, and only after all pointers to that object have gone out of scope. The garbage collector in .NET handles all of that for you.
The construct in C++ that corresponds most closely is tr1::shared_ptr<>, which keeps a reference count for the object and deallocates it when the count drops to zero. A first approach to getting things running would be to turn all C# references into C++ tr1::shared_ptr<>. Then you can go into the places that are performance bottlenecks (only after you've verified with a profiler that they actually are bottlenecks) and change to more efficient memory handling.
Garbage collection in C++ has been discussed a lot on Stack Overflow.
Try reading through this:
Garbage Collection in C++

Garbage Collection on one object, C#

I need to dispose of an object so it can release everything it owns, but it doesn't implement the IDisposable so I can't use it in a using block. How can I make the garbage collector collect it?
You can force a collection with GC.Collect(). Be very careful using this, since a full collection can take some time. The best-practice is to just let the GC determine when the best time to collect is.
Does the object contain unmanaged resources but does not implement IDisposable? If so, it's a bug.
If it doesn't, it shouldn't matter if it gets released right away, the garbage collector should do the right thing.
If it "owns" anything other than memory, you need to fix the object to use IDisposable. If it's not an object you control, this is something worth picking a different vendor over, because it speaks to the core of how well your vendor really understands .NET.
If it does just own memory, even a lot of it, all you have to do is make sure the object goes out of scope. Don't call GC.Collect(); it's one of those things where, if you have to ask, you shouldn't do it.
You can't perform garbage collection on a single object. You could request a garbage collection by calling GC.Collect(), but this will affect all objects subject to cleanup. It is also highly discouraged, as it can have a negative effect on the performance of later collections.
Also, calling Dispose on an object does not clean up its memory. It only allows the object to release references to unmanaged resources. For example, calling Dispose on a StreamWriter closes the stream and releases the Windows file handle. The memory for the object on the managed heap does not get reclaimed until a subsequent garbage collection.
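A small illustration of that split, using the standard StreamWriter and File APIs: Dispose releases the OS file handle immediately, while the writer object's managed memory is reclaimed later, whenever the GC runs:

```csharp
using System;
using System.IO;

static class DisposeDemo
{
    public static long WriteAndDispose(string path)
    {
        // 'using' calls Dispose at the end of the block: the OS file handle
        // is released right away, so the file can be reopened or deleted...
        using (var writer = new StreamWriter(path))
        {
            writer.WriteLine("hello");
        }
        // ...but the StreamWriter object's managed memory is only reclaimed
        // at some later garbage collection, not at the moment of Dispose.
        long length = new FileInfo(path).Length;
        File.Delete(path); // works because Dispose already released the handle
        return length;
    }
}
```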
Chris Sells also discussed this on .NET Rocks. I think it was during his first appearance but the subject might have been revisited in later interviews.
http://www.dotnetrocks.com/default.aspx?showNum=10
This article by Francesco Balena is also a good reference:
When and How to Use Dispose and Finalize in C#
http://www.devx.com/dotnet/Article/33167/0/page/1
Garbage collection in .NET is non deterministic, meaning you can't really control when it happens. You can suggest, but that doesn't mean it will listen.
Tell us a little bit more about the object and why you want to do this; we can make some suggestions based on that. Code always helps. And depending on the object, there might be a Close method or something similar, in which case the intended usage is to call that. If there is no Close or Dispose type of method, you probably don't want to rely on that object, as you will likely get memory leaks if it does in fact hold resources that need to be released.
If the object goes out of scope and has no external references, it will be collected rather quickly (likely on the next collection).
BEWARE of fragmentation: in many cases GC.Collect() or IDisposable is not very helpful, especially for large objects (the LOH is for objects ~80KB+, performs no compaction, and is subject to high levels of fragmentation in many common use cases), which can then lead to out-of-memory (OOM) issues even with potentially hundreds of MB free. As time marches on, things get bigger, though perhaps not this size threshold (80-something KB) for LOH-relegated objects. High degrees of parallelism exacerbate the issue, simply because more objects (likely varying in size) are instantiated and released in less time.
Arrays are the usual suspects for this problem (it's also often hard to identify due to non-specific exceptions and assertions from the runtime; something like "high % of large object heap fragmentation" would be swell). The prognosis for code suffering from this problem is to implement an aggressive re-use strategy.
A class in System.Collections.Concurrent.ObjectPool from the Parallel Extensions Beta 1 samples helps (unfortunately there is no simple ubiquitous pattern that I have seen, like maybe some attached property/extension methods?). It is simple enough to drop in or re-implement for most projects: you assign a generator Func<> and use Get/Put helper methods to re-use your previous objects and forgo the usual garbage collection. It is usually sufficient to focus on arrays, not individual array elements.
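Re-implementing that Get/Put pattern really is only a few lines. A minimal sketch (written from the description above, not the actual sample code):

```csharp
using System;
using System.Collections.Concurrent;

// Sketch of the Get/Put pool described above: a generator Func<T> creates
// instances on demand, and Put recycles them so later Gets skip allocation.
class ObjectPool<T>
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();
    private readonly Func<T> _generator;

    public ObjectPool(Func<T> generator) =>
        _generator = generator ?? throw new ArgumentNullException(nameof(generator));

    // Hand out a recycled instance if one is available, else create a new one.
    public T Get() => _items.TryTake(out T item) ? item : _generator();

    // Return an instance for later reuse instead of letting the GC reclaim it.
    public void Put(T item) => _items.Add(item);
}
```

ConcurrentBag makes Get/Put safe to call from multiple threads, which matters if pooled buffers are shared across parallel work.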
It would be nice if .NET 4 updated all of the .ToArray() methods everywhere to include .ToArray(T target).
Getting the hang of using SOS/windbg (.loadby sos mscoreei for CLRv4) to analyze this class of issue can help. Thinking about it, the current garbage collection system is more like garbage recycling (using the same physical memory again), while ObjectPool is analogous to garbage re-using. If anybody remembers the 3 R's, reducing your memory use is a good idea too, for performance's sake ;)
