My code has to generate millions of objects to run an algorithm (millions of objects will be created, and at the same time roughly 2/3 of them will be destroyed).
I know that object creation causes performance problems.
Could someone recommend how to manage such a huge number of objects, garbage collection, and so on?
Thank you.
Elaborating a bit on my "make them a value type" comment above.
If you have a struct Foo, then preparing for the algorithm with e.g. var storage = new Foo[1000000] will only allocate one big block of memory (I'm assuming the required amount of contiguous memory will be available).
You can then manually manage the memory inside that block to avoid performing more memory allocations:
Keep a count of how many slots in the array are actually used
To "create" a new Foo, put it at the first unused slot and increment the counter
To "delete" a Foo, swap it with the one in last used slot and decrement the counter
Of course, making an algorithm work with value types instead of reference types is not as simple as changing class to struct. But where it is workable, it will allow you to side-step all of this overhead for a one-time startup cost.
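A minimal sketch of that slot-management scheme (Foo's fields and the pool type are placeholders, not part of the original answer):

struct Foo { public int X, Y; }

class FooPool
{
    private readonly Foo[] storage = new Foo[1000000]; // one big allocation, up front
    private int count;                                 // number of slots actually used

    // "Create": put the new Foo in the first unused slot and increment the counter.
    public int Create(int x, int y)
    {
        storage[count] = new Foo { X = x, Y = y };
        return count++;
    }

    // "Delete": swap the dead slot with the last used one and decrement the counter.
    // Note that the swapped-in element changes index as a result.
    public void Delete(int index)
    {
        storage[index] = storage[--count];
    }
}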
If it is possible in your algorithm, then try to reuse objects: if 2/3 are destroyed immediately, you can try to use them again.
You can implement the IDisposable interface on the type whose objects are being created. Then you can use the using keyword and write whatever logic involving the object within the using scope. The following links will give you a fair idea of what I am trying to say. Hope they are of some help.
http://www.codeguru.com/csharp/csharp/cs_syntax/interfaces/article.php/c8679
Am I implementing IDisposable correctly?
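A minimal sketch of the pattern being described (MyResource is a made-up type for illustration):

using System;

class MyResource : IDisposable
{
    public void DoWork() { /* logic involving the object */ }

    public void Dispose()
    {
        // release whatever the object holds (handles, buffers, ...)
    }
}

class Program
{
    static void Main()
    {
        using (var resource = new MyResource())
        {
            resource.DoWork();
        } // Dispose() runs here, at the end of the using scope
    }
}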
Regards,
Samar
I was reading a description of some code written in C that gains speed by allocating temporary arrays on the stack instead of the heap for use in very hot loops (it was described as being similar to small-buffer optimization, SBO). The object in question is similar to a List<T> in that it's just an array with some basic convenience functionality on top. It allocates a small section of memory to use, and if the list is expanded past the size of the array, it allocates a new array on the heap, copies the data, and updates the pointer.
I would like to do the same thing in C#, but I'm not sure how to accomplish it: I want to keep this in a safe context, so I can't use a pointer to update the data reference if it's expanded, and Span<int> doesn't have an implicit cast to int[]. Specifically:
stackalloc memory is released on method exit, so I'm not sure if there's a simpler way to use a struct like this than giving it a Span field and assigning it after creation, within the method that uses it.
How do I neatly switch between backing fields of different types (Span<int> and int[]) without changing the public-facing interface?
I managed to come up with a solution, not sure if it's the best implementation, but it seems to work. I also have a couple of alternatives.
Note: This is useful for increasing speed only when you have a function that needs to create a temporary array and is called very frequently. The ability to switch to a heap allocated object is just a fallback in case you overrun the buffer.
Option 1 - Using Span and stackalloc
If you're building for .NET Core 2.1 or later or .NET Standard 2.1 or later, or can pull in the System.Memory package via NuGet, the solution is really simple.
Instead of a class, use a ref struct (this is necessary to have a Span<T> field; neither can leave the method where it is declared. If you need a long-lived class, then there's no reason to try to allocate on the stack, since you'd just have to move it to the heap anyway.)
public ref struct SmallList
{
    private Span<int> data;
    private int count;
    //...
}
Then add in all your list functionality. Add(), Remove(), etc. In Add or any functions that might expand the list, add a check to make sure you don't overrun the span.
if (count == data.Length)
{
    int[] newArray = new int[data.Length * 2]; // double the capacity
    data.CopyTo(newArray);                     // copy the existing elements
    data = newArray;                           // implicit cast from int[] to Span<int>! Easy peasy!
}
Span<T> can be used to work with stack allocated memory, but it can also point to heap allocated memory. So if you can't guarantee your list will always be small enough to fit on the stack, the snippet above gives you a nice fallback that shouldn't happen often enough to cause noticeable problems. If it does, either increase the initial stack allocation size (within reason; don't overflow the stack!) or use another solution, like an array pool.
Using the struct just requires an extra line and a constructor that takes a span to assign to the data field. I'm not sure if there's a way to do it all in one shot, but it's easy enough:
Span<int> span = stackalloc int[32];
SmallList list = new SmallList(span);
And if you need to use it in a nested function (which was part of my issue) you just pass it in as a parameter instead of having the nested function return a list.
void DoStuff(SmallList results) { /* do stuff */ }
DoStuff(list);
//use results...
Option 2: ArrayPool
The System.Memory package also includes the ArrayPool class, which lets you store a pool of small arrays that your class/struct could take out without bothering the garbage collector. This has comparable speed depending on the use case. It also has the benefit that it would work for classes that have to live beyond a single method. It's also fairly easy to write your own if you can't use System.Memory.
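For illustration, a short sketch of renting and returning a buffer from the shared pool (the size and the work done with the buffer are arbitrary):

using System;
using System.Buffers;

class Example
{
    static void Process()
    {
        // Rent may return an array larger than requested; use only the first 32 slots.
        int[] buffer = ArrayPool<int>.Shared.Rent(32);
        try
        {
            for (int i = 0; i < 32; i++)
                buffer[i] = i * i;
        }
        finally
        {
            // Return the buffer so later callers can reuse it, avoiding GC pressure.
            ArrayPool<int>.Shared.Return(buffer);
        }
    }
}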
Option 3: Pointers
You can do something like this with pointers and other unsafe code, but the question was technically asking about safe code. I just like my lists to be thorough.
Option 4: Without System.Memory
If, like me, you're using Unity / Mono, you can't use System.Memory and related features until at least 2021. Which leaves you to roll your own solution. An array pool is fairly straightforward to implement, and does the job of avoiding garbage allocations. A stack allocated array is a bit more complicated.
Luckily, someone has already done it, specifically with Unity in mind. The page linked is quite long, but includes both sample code demonstrating the concept and a code generation tool that can make a SmallBuffer class specific to your exact use case. The basic idea is to just create a struct with individual variables that you index as if they were an array.
Update: I tried both these solutions and the array pool was slightly faster (and a lot easier) than the SmallBuffer in my case, so remember to profile!
In the C and C++ languages the developer decides in which memory an object is going to be instantiated: stack or heap.
In C#, it is determined by the author of the data type.
You can achieve your goal using Span<T> and pointers: https://learn.microsoft.com/en-us/dotnet/api/system.span-1?view=netcore-3.1.
But I would not recommend doing that, because such code is not safe, meaning the CLR gives you all the responsibility for managing it, at the very least cleaning up the memory when you no longer need the object. C# developers usually resort to such tricks when they want to optimise really big data collections that allocate a lot of memory on the heap.
If that is still what you are looking for, then C# is probably not the best option to use.
Even more, if you have a big collection and somehow find a way to put it in stack memory, you can easily face a StackOverflowException.
In my game I can iterate using either a list of game objects or tags, but I'd like to know which is the most efficient way.
Does keeping a list use more memory, or does Unity require a lot of resources to do a search by tag?
public List<City> _Citys = new List<City>();
or
foreach(GameObject go in GameObject.FindGameObjectsWithTag("City"))
You're better off using a List of City objects and doing a standard for loop to iterate over the City objects. The List simply holds references to the City objects, so the impact on memory should be minimal; you could also use an array (GameObject[], which is what FindGameObjectsWithTag returns) instead of a List.
It's better for performance to use a populated List/Array than to search by tags, and of course you're explicitly pointing to an object rather than using 'magic' strings: if you change the tag name later on, the FindGameObjectsWithTag method will silently break, as it will no longer find any objects.
Also, avoid using a foreach loop in Unity, as this unfortunately creates a lot of garbage (the garbage collector in Unity isn't great, so it's best to create as little garbage as possible); instead just use a standard for loop:
Replace the “foreach” loops with simple “for” loops. For some reason, every iteration of every “foreach” loop generated 24 Bytes of garbage memory. A simple loop iterating 10 times left 240 Bytes of memory ready to be collected which was just unacceptable
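For example, a sketch using the _Citys list from the question (the loop body is illustrative):

for (int i = 0; i < _Citys.Count; i++)
{
    City city = _Citys[i];
    // work with city...
}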
EDIT: As mentioned in pid's answer - measure. You can use the built-in Unity profiler to inspect memory usage: http://docs.unity3d.com/Manual/ProfilerMemory.html
Per Microsoft's C# API guidelines, verbs such as Find* or Count* denote active code, while terms such as Length stand for actual values that require no code execution.
Now, whether the Unity3D folks respected those guidelines is a matter of debate, but from the name of the method I can already tell that it has a cost and should not be taken too lightly.
On the other side, your question is about performance, not correctness. Both ways are correct per se, but one is supposed to have better performance.
So, the main rule of refactoring for performance is: MEASURE.
It depends on memory allocation and garbage collection; it is impossible to tell which really is faster without measuring.
So the best advice I could give you is pretty general. Whenever you feel the need to enhance performance of code you have to actually measure what you are about to improve, before and after.
Your code examples are two distinctly different things. One is instantiating a list, and one is enumerating over an IEnumerable returned from a function call.
I assume you mean the difference between iterating over your declared list vs. iterating over the return value of GameObject.FindGameObjectsWithTag(), in which case:
storing a List as a member variable in your class, populating it once, and then iterating over it several times is more efficient than iterating over GameObject.FindGameObjectsWithTag several times.
This is because you keep your List and your references to the objects in your list at all times without having to repopulate it.
GameObject.FindGameObjectsWithTag will search your entire object hierarchy and compile a list of all the objects it finds that match your search criteria. This is done every time you call the function, so there is additional overhead even if the number of objects it finds is the same, as it still searches your whole hierarchy.
To be honest, you could just cache your results in a List once using GameObject.FindGameObjectsWithTag, provided the set of objects returned will not change (that is, you are not instantiating or destroying any of those objects).
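A sketch of that caching approach (assuming a "City" tag exists in the scene; the class name is made up):

using UnityEngine;

public class CityManager : MonoBehaviour
{
    private GameObject[] cities;

    void Start()
    {
        // Search the hierarchy once, up front...
        cities = GameObject.FindGameObjectsWithTag("City");
    }

    void Update()
    {
        // ...then iterate over the cached array with no per-frame search.
        for (int i = 0; i < cities.Length; i++)
        {
            // work with cities[i]...
        }
    }
}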
Just a short question: if you have a class with just one property and lots of (non-static) methods, does an entirely new object get stored every time you say 'new object()', or just the property, with the methods in some 'common' memory space that every instance of the type can reference?
In other words, does a large class always perform worse than a small class in terms of instantiation time?
Memory allocation may prove to be time consuming indeed.
Still, I believe a cleaner, more obvious measurement of resource consumption would be occupied space, not instantiation time.
As you have stated yourself already, it is the case that methods, static or not, occupy memory space just once. The this reference is just a hidden parameter, which gets sent from caller to called code just like any other parameter, and in the end all methods are just plain ol' functions (or routines).
In a simplistic way of putting it, so do all static fields.
Don't think about properties: they are just high-level wrappers for methods which, in the end, access fields.
Instance fields are what occupies space, per instance.
But there are other things, like runtime type information which get allocated also.
In short, your assumption is correct.
EDIT
Just as a recap:
If by "large class" you mean a class which defines a lot of methods, then no, instantiation time will not be affected.
On the other hand, if by that term, you mean a class which defines a lot of instance fields, then yeah, instantiation time will be affected.
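To make the fields-vs-methods distinction concrete, a small sketch (the Person type is illustrative):

class Person
{
    private string name;        // instance field: occupies space in every object

    public string Name          // property: compiled to get/set methods, stored once in the type
    {
        get { return name; }
        set { name = value; }
    }

    public void Greet()         // method: stored once; 'this' is passed as a hidden parameter
    {
        System.Console.WriteLine("Hello, " + name);
    }
}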
Although this is not my happy place (I know almost nothing of how good ol' malloc actually manages to defragment memory), the idea that allocating a lot of memory would take longer strikes me, in a strange way I can't put my finger on, as being like saying that
"adding the numbers 1024 and 2048 takes a bit longer than adding the numbers 3 and 4"
(given all four numbers are stored in variables of the same numerical type).
So I would worry more about memory consumption. I'm sure time is somehow affected too, but maybe logarithmically.
Methods are shared. All other things being equal, instantiating a class with many methods has pretty much the same cost as instantiating one with few. It's their non-static fields and the amount of work performed by the constructor (and some other minor factors) that determine creation cost.
Instance fields are the only thing stored in the object itself. Methods are stored in the type, which means that they only exist in one place.
In fact, instance methods are just syntactic sugar (at the IL level) for static methods that accept an instance as a parameter.
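Conceptually, a rough sketch of that (not actual compiler output; the names are made up):

// This instance method...
class Counter
{
    public int Value;
    public void Increment() { Value++; }
}

// ...behaves at the IL level much like a static method that takes the instance explicitly:
static class CounterIl
{
    public static void Increment(Counter self) { self.Value++; }
}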
I think you will find the information you need (and probably more) here. The code of the instance methods will be shared.
I've got a few global arrays I use in a simple WinForms game. The arrays are initialized when a new game starts. When a player is in the middle of the game (the arrays are filled with data), he clicks the StartNewGame() button, which restarts the game. What should happen next?
Is it OK to reinitialize the whole array for the new game, or should I just set every array item to null and reuse the already initialized array (which would be slower)?
I mean is it okay to do something like this?
MyClass[,] gameObjects;

public Form1()
{
    StartNewGame();
    // game flow... simplified here; normally divided into functions and events
    StartNewGame();
    // other game flow
}

public void StartNewGame()
{
    gameObjects = new MyClass[10, 10];
    // some work with gameObjects
}
This almost entirely depends upon MyClass: specifically, how many data members it contains, how much processing its constructor (and its members' constructors) requires, and whether it is a relatively simple operation to (re)set an object of this class to an "initialized" state. A more objective answer can be obtained through benchmarking.
From your question, I understand that there are not that many arrays; in that case I would say reinitialize the whole array.
In cases where you have a lot of setup work that can take, say, 30 seconds, you might do a cleanup instead of reinitializing everything.
If you choose to set items to null, you can get some ugly exceptions, so I think you'd rather clean the objects inside the array than set them to null.
If there are only 100 elements as in your example, then there shouldn't really be a noticeable performance hit.
If you reinitialize the array, you will perform n constructions for n objects. The garbage collector will come clean up the old array and de-allocate those old n objects at some later time. (So you have n allocations upfront, and n deallocations by the GC).
If you set each reference in the array to null, the garbage collector will still do the same amount of work and come clean up those n objects at some later time. The only difference is you're not deallocating the array here, but that single deallocation is negligible.
From my point of view, the best way to achieve performance in this case is to not reallocate the objects at all, but to use the same ones. Add a valid bit to mark whether or not an object is valid (in use), and to reinitialize you simply set all the valid bits to false. In a similar fashion, programs don't go through and write 0's to all your memory when it's not in use. They just leave it as garbage and overwrite data as necessary.
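A sketch of that valid-bit idea applied to the question's array (the inUse flags and Game class are illustrative; MyClass is the question's type):

using System;

public class Game
{
    private MyClass[,] gameObjects = new MyClass[10, 10]; // allocated and populated once
    private bool[,] inUse = new bool[10, 10];             // the "valid bits"

    public void StartNewGame()
    {
        // Reuse the same objects: mark every slot as free instead of
        // reallocating the array or constructing new MyClass instances.
        Array.Clear(inUse, 0, inUse.Length);
    }
}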
But again, if your number of objects isn't going into the thousands, I'd say you really won't notice the performance hit.
gameObjects = new MyClass[10,10];
... is the way to go. It is definitely faster than looping through the array and setting the items to null, and it is also simpler to code and to understand. But both variants are very fast anyway, unless you have tens of millions of entries! [10, 10] is very small, so forget about performance and do what seems more appropriate and more understandable to you. Clean coding is more important than performance in most cases.
I have some places where implementing some sort of cache might be useful. For example in cases of doing resource lookups based on custom strings, finding names of properties using reflection, or to have only one PropertyChangedEventArgs per property name.
A simple example of the last one:
public static class Cache
{
    private static Dictionary<string, PropertyChangedEventArgs> cache;

    static Cache()
    {
        cache = new Dictionary<string, PropertyChangedEventArgs>();
    }

    public static PropertyChangedEventArgs GetPropertyChangedEventArgs(
        string propertyName)
    {
        if (cache.ContainsKey(propertyName))
            return cache[propertyName];
        return cache[propertyName] = new PropertyChangedEventArgs(propertyName);
    }
}
But will this work well? For example, if we had a whole load of different propertyNames, we would end up with a huge cache sitting there, never being garbage collected or anything. I'm imagining that if the cached values are larger and the application is a long-running one, this might end up being a real problem... or what do you think? How should a good cache be implemented? Is this one good enough for most purposes? Any examples of nice cache implementations that are not too hard to understand or way too complex to implement?
This is a large problem space; you need to determine the domain of the problem and apply the correct techniques. For instance, how would you describe the expiration of the objects? Do they become stale over a fixed interval of time? Do they become stale from an external event? How frequently does this happen? Additionally, how many objects do you have? Finally, how much does it cost to generate the object?
The simplest strategy would be to do straight memoization, as you have above. This assumes that objects never expire, and that there are not so many as to run your memory dry and that you think the cost to create these objects warrants the use of a cache to begin with.
The next layer might be to limit the number of objects, and use an implicit expiration policy, such as LRU (least recently used). To do this you'd typically use a doubly linked list in addition to your dictionary, and every time an objects is accessed it is moved to the front of the list. Then, if you need to add a new object, but it is over your limit of total objects, you'd remove from the back of the list.
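A minimal LRU sketch along those lines (illustrative, not production-ready; a Dictionary gives O(1) lookup, a LinkedList tracks recency):

using System.Collections.Generic;

class LruCache<TKey, TValue>
{
    private readonly int capacity;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> map;
    private readonly LinkedList<KeyValuePair<TKey, TValue>> order;

    public LruCache(int capacity)
    {
        this.capacity = capacity;
        map = new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
        order = new LinkedList<KeyValuePair<TKey, TValue>>();
    }

    public bool TryGet(TKey key, out TValue value)
    {
        if (map.TryGetValue(key, out var node))
        {
            order.Remove(node);   // accessed: move the node...
            order.AddFirst(node); // ...to the front of the list
            value = node.Value.Value;
            return true;
        }
        value = default;
        return false;
    }

    public void Add(TKey key, TValue value)
    {
        if (map.Count >= capacity)
        {
            var oldest = order.Last; // over the limit: remove from
            order.RemoveLast();      // the back of the list
            map.Remove(oldest.Value.Key);
        }
        map[key] = order.AddFirst(new KeyValuePair<TKey, TValue>(key, value));
    }
}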
Next, you might need to enforce explicit expiration, either based on time, or some external stimulus. This would require you to have some sort of expiration event that could be called.
As you can see, there is a lot of design involved in caching, so you need to understand your domain and engineer appropriately. I felt you did not provide enough detail for me to discuss specifics.
P.S. Please consider using Generics when defining your class so that many types of objects can be stored, thus allowing your caching code to be reused.
You could wrap each of your cached items in a WeakReference. This would allow the GC to reclaim items if-and-when required, however it doesn't give you any granular control of when items will disappear from the cache, or allow you to implement explicit expiration policies etc.
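A sketch of that WeakReference idea applied to the question's cache (the class name is made up; entries the GC has reclaimed are simply recreated):

using System;
using System.Collections.Generic;
using System.ComponentModel;

static class WeakCache
{
    private static readonly Dictionary<string, WeakReference> cache =
        new Dictionary<string, WeakReference>();

    public static PropertyChangedEventArgs GetPropertyChangedEventArgs(string propertyName)
    {
        // If the entry exists and hasn't been collected yet, reuse it.
        if (cache.TryGetValue(propertyName, out var weak) &&
            weak.Target is PropertyChangedEventArgs alive)
            return alive;

        // Otherwise (re)create it; the GC may reclaim it again under memory pressure.
        var args = new PropertyChangedEventArgs(propertyName);
        cache[propertyName] = new WeakReference(args);
        return args;
    }
}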
(Ha! I just noticed that the example given on the MSDN page is a simple caching class.)
Looks like .NET 4.0 now supports System.Runtime.Caching for caching many types of things. You should look into that first, instead of re-inventing the wheel. More details:
http://msdn.microsoft.com/en-us/library/system.runtime.caching%28VS.100%29.aspx
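For example, a short sketch using System.Runtime.Caching (the key, value, and expiration are arbitrary):

using System;
using System.Runtime.Caching;

class Program
{
    static void Main()
    {
        ObjectCache cache = MemoryCache.Default;

        var policy = new CacheItemPolicy
        {
            // Expire entries 10 minutes after their last access.
            SlidingExpiration = TimeSpan.FromMinutes(10)
        };

        cache.Set("greeting", "hello", policy);
        Console.WriteLine(cache.Get("greeting")); // "hello"
    }
}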
This is a nice debate to have, but depending on your application, here are some tips:
You should define the maximum size of the cache and what to do with old items when the cache is full, have a scavenging strategy, determine a time-to-live for objects in the cache, and decide whether the cache can/must be persisted somewhere other than memory in case of abnormal application termination, ...
This is a common problem that has many solutions depending on your application need.
It is so common that Microsoft released a whole library to address it.
You should check out Microsoft Velocity before rolling your own cache.
http://msdn.microsoft.com/en-us/data/cc655792.aspx
Hope this helps.
You could use a WeakReference, but if your object is not that large, don't: the WeakReference would take more memory than the object itself, which is not a good technique. Also, if the object is short-lived and will never make it from generation 0 to generation 1 of the GC, there is not much need for a WeakReference; implementing the IDisposable interface on the object (with GC.SuppressFinalize on release) would serve better.
If you want to control the lifetime, you need a timer that checks the datetime/timespan against the desired expiration time of each object in your cache.
The important thing is: if the object is large, opt for the WeakReference; otherwise use a strong reference. Also, you can set a capacity on the Dictionary and create a queue of requests for additional objects in a temp bin, serializing each object and loading it when there is room in the Dictionary, then clearing it from the temp directory.