Optimizing use of C# events? - c#

I have a game in which players connect and see a list of challenges. In the code, challenges are stored in a dictionary by their status so I have a Dictionary<string, List<Challenge>> to store them.
I have a class that handles all challenge-related actions, such as retrieving them from a server, and this class sends events to notify interested objects about the update. My problem is that the delegate for the event passes the whole dictionary, and I'm realizing this is probably very bad in terms of garbage created or even execution speed.
My app will run on mobile and I'm working in Unity, so I'm limited to .NET 3.5 (some 4.0 features). Performance is critical here, so I would like to know what would be a more efficient way of doing this?
The only thing I see right away would be to remove the dictionary from the event delegate and just access my singleton's instance variable instead of having it as a parameter. But I want to be sure there is no better way and that this will indeed be more efficient.

Classes are reference types, and Dictionary<TKey, TValue> is a class; therefore, you're not copying the dictionary but just pointing to the same one from different references (for example, the event args providing a reference to that dictionary).
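For illustration, here is a minimal sketch (ChallengesUpdatedEventArgs is a made-up name, and Challenge is the type from the question): the event args hold only a reference to the existing dictionary, so raising the event copies a single reference, not the data.

using System;
using System.Collections.Generic;

public class ChallengesUpdatedEventArgs : EventArgs
{
    private readonly Dictionary<string, List<Challenge>> _challenges;

    public ChallengesUpdatedEventArgs(Dictionary<string, List<Challenge>> challenges)
    {
        _challenges = challenges;   // stores the reference only, nothing is copied
    }

    public Dictionary<string, List<Challenge>> Challenges
    {
        get { return _challenges; }
    }
}

// publisher side:
// public event EventHandler<ChallengesUpdatedEventArgs> ChallengesUpdated;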
If you're very worried about memory issues with event handlers, you can take a look at the weak event pattern, which can help prevent memory leaks in some use cases (but you'll need .NET 4.0 at least...).

Related

How to create a Weak Referenced Event Handler?

I am doing research on how to properly create a weak referenced event handler. Since WPF already has a solution for avoiding memory leaks with events, I decompiled the "WeakEventManager" class and spent some time analysing it.
As I found out, the "WeakEventManager" class relies on creating and storing a weak reference to the target object as well as to the event handler delegate. Here are some code segments:
this._list.Add(new WeakEventManager.Listener(target, handler));

public Listener(object target, Delegate handler)
{
    this._target = new WeakReference(target);
    this._handler = new WeakReference((object) handler);
}
I am asking myself whether this simple solution would already work, or did I overlook an important aspect, since most other solutions I found on the Internet are complex and hard to understand. The best solution I could find so far
uses an unbound delegate. That is some kind of wrapper delegate that takes the event handler and the event subscriber instance as parameters (yes, it requires the event subscriber object to be passed in during delegate invocation).
You can find this great article here:
http://diditwith.net/CommentView,guid,aacdb8ae-7baa-4423-a953-c18c1c7940ab.aspx#commentstart
The "WeakEventManager" class doesnt't rely on knowing the subscriber class or any additional information. Beyond that it also works with anonymous delegates. Why do developers spent so much time on writing a solution that is not only working but also convenient to use, if a solution only requires them to store a weak reference to the delegate? What's the catch?
Update: Because someone downvoted this question (maybe it is a little unspecific), I want to ask a more precise question:
Is that source code above all it takes to create a working weak event handler? If not, what is missing?
did I overlook an important aspect
Yes, an important one. You traded one "leak" for another. Instead of preventing the event subscriber objects from getting garbage collected, you now prevent WeakReference objects from getting collected. You are not ahead.
What is also required is a mechanism to get those stale WeakReferences cleaned up. You don't need them anymore once their IsAlive property returns false; you then remove them from _list. But you have to check for that separately, and some code needs to take care of it. An obvious choice is to check when a handler is added or when the event is fired. But that isn't enough, since client code typically only adds handlers at initialization, and you have no guarantee that the event is fired consistently.
So some kind of scheme is required to do it later. Anything is possible and nothing is quite ideal, because it is a lot of busy-work that often doesn't accomplish anything. Making the right choice is important. And that certainly includes not doing this at all; it tends to be a band-aid over a design problem.
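As a rough illustration of that housekeeping (assuming the Listener entries expose their target WeakReference, which the decompiled class keeps in a private field), something along these lines has to run somewhere, e.g. whenever a handler is added or the event is raised:

// Illustrative only: purge entries whose subscriber has been collected,
// otherwise the stale WeakReference/Listener objects pile up instead.
private void Purge()
{
    for (int i = _list.Count - 1; i >= 0; i--)
    {
        WeakReference target = _list[i].Target;   // assumes Listener exposes its target reference
        if (!target.IsAlive)
            _list.RemoveAt(i);
    }
}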
The idea behind the WeakEventManager is that it keeps a list of weak references to the target objects of the events and a list of handlers it must call, but does not "tie them together" (as directly subscribing to events typically does), allowing the targets to be garbage collected (and periodically checked so that the "subscribed" delegate list is pruned accordingly when a target is garbage collected).
This, which sounds easy, requires a lot of plumbing code, and that's all the WeakEventManager does (doing the plumbing for you in an easy-to-use manner).
If you are cloning WeakEventManager into your own implementation (or using the built-in one) and it does that, then yes, that's all you need to do. If that's not what you are asking, then the question is unclear to me.
Is that source code above all it takes to create a working weak event handler? If not, what is missing?
That is the most important bit, but the rest of the Listener class matters as well, not to mention the manual plumbing required to support this non-delegate event handler.
For instance, check out ListenerList.DeliverEvent, which is a bit more involved than the usual Invoke call.
Basically, the normal event system assumes strong references, so in order to use WeakReference you end up having to add a lot of the translation logic yourself, as well as deciding how to handle handlers that never fire because their WeakReference target has been collected.
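To make that concrete, a weak delivery loop has to look roughly like this (a simplified sketch, not the actual WPF code; it assumes the Listener exposes its two WeakReferences):

// Simplified sketch of weak delivery: both weak references must be resolved
// and dead entries skipped before the handler can be invoked.
foreach (Listener listener in _list)
{
    object target = listener.Target.Target;                  // null if the subscriber was collected
    Delegate handler = (Delegate)listener.Handler.Target;    // null if the delegate was collected
    if (target != null && handler != null)
        handler.DynamicInvoke(sender, args);                 // noticeably slower than a direct Invoke
}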

Is this a good use for a static class?

I have a read-only list that is shared across all instances of the application and won't change very often. Is it good practice to expose this list as a property on a static class? The list is filled from the database in the static constructor. Setting the app pool to recycle every night would guarantee the list is up to date every day, correct? Are there any reasons this is a bad idea? Thanks!
Nothing wrong with a static class. You could also use the cache, which would work in a similar way. The cache gives you the added bonus of being able to invalidate the cache on a timed basis of your choosing.
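For example, a sketch of the cache variant in ASP.NET (the cache key and LoadFromDatabase are made up): the list is reloaded at most once per day without relying on the nightly app pool recycle.

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class LookupData
{
    public static List<string> Items
    {
        get
        {
            List<string> items = (List<string>)HttpRuntime.Cache["LookupItems"];
            if (items == null)
            {
                items = LoadFromDatabase();
                HttpRuntime.Cache.Insert(
                    "LookupItems", items, null,
                    DateTime.UtcNow.AddHours(24),     // absolute expiration: refresh daily
                    Cache.NoSlidingExpiration);
            }
            return items;
        }
    }

    private static List<string> LoadFromDatabase()
    {
        // placeholder for the real database call
        return new List<string>();
    }
}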
This looks like a good solution. You may want to use a sealed class instead, to avoid sub-classes messing with it.
The issue with global state is when it is being changed by the application. In this case, that's not a problem.
You should understand how static properties are stored.
Roughly, all static state is placed in the instance of RuntimeType (which, in turn, is created when the static ctor is called). CLR via C# describes this mechanism in detail.
In light of that, this collection will be shared across all instances, but you should keep potential memory leaks in mind (just imagine subscribing to the collection's events: all the pages become reachable even after they're closed, etc.).
The second disadvantage of this approach is that the collection will not always be up to date.
The third disadvantage is that you need to take care of thread safety for this collection, which, in turn, can hurt performance.
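On the thread-safety point, a simple lock around the lazy load is usually enough for a list that is only ever read (a sketch with made-up names):

using System.Collections.Generic;
using System.Collections.ObjectModel;

public static class SharedList
{
    private static readonly object Sync = new object();
    private static ReadOnlyCollection<string> _items;

    public static ReadOnlyCollection<string> Items
    {
        get
        {
            lock (Sync)                       // serializes the one-time load
            {
                if (_items == null)
                    _items = LoadFromDatabase().AsReadOnly();
                return _items;
            }
        }
    }

    private static List<string> LoadFromDatabase()
    {
        // placeholder for the real database call
        return new List<string>();
    }
}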

.Net Memory Management Issue : Objects Stuck in Generation 2

I have profiled my app with the VS2010 profiler, with object lifetime collection enabled.
I was heavily surprised to see that most instances of a particular struct named "Record" are collected by the GC as Gen 2 instances. I am very upset, as instances of the "Record" struct should live less than 500 ms each (theoretically).
These structs are simple time-series data of 6 x Int32 or so. They are read on the fly, queued/dequeued in a Queue with a size of 1000, and passed to a processor that fires some logic sequentially over those few million "Records". I do not need to keep more than 50 records at a time.
So my question is: why could these objects live long enough to mostly end up as second-generation references, and what could I do to ensure they REALLY get dumped after each computation?
EDIT :
I am asking this because I have noticed a drastic performance drop for bigger sample sizes (i.e. number of Records): if N takes T minutes, 2N takes 2.5T minutes or so, and so on. So there is obviously a leak somewhere.
EDIT 2 :
My bad: creating a struct instance cannot cause a garbage collection.
I've changed it to classes and did not notice any significant improvement so far.
I'll run the profiler again (with classes this time, not structs) and see what it gives.
EDIT 3 :
Many answers suspect Boxing/Unboxing to take place somewhere.
I DO use typed generic collections and typed Queues. And "Records" are never attached to any class as members. They are individually handled by events.
The ex-struct (now class) implemented an interface and was cast through it when called (this is rather common usage), and I dropped that interface. No improvement.
EDIT 4 :
I have run the profiler again, replacing the struct with a class. I get the same results: most instances of the CLASS "Record" still end up being collected as Gen 2 instances.
EDIT 5 :
Producers of the Record instances are many parallel BackgroundWorkers (byte readers), and there is one consumer thread that dispatches the Records to other methods after performing a few checks. Besides, I use events and delegates to communicate between the different parts. I do not unregister those events because they are useful all along the process (I may be wrong on that point).
If you stored them just as local variables (which may or may not be possible depending on your scenario), they would never end up on the heap at all.
If that's possible, I recommend trying that. You might get a more in depth response, if you post a code sample.
As a sanity check, are you forgetting to unregister static events, or using some other class that may be doing this (some classes fix this via Dispose)?
Also have you looked into the possibility of using the Flyweight pattern?
Edit: Since you now say you are doing something with events, this is highly likely to be a cause of your issues. Are you forgetting to unregister the events?
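For example (the names below are made up, this is just the shape to look for): a consumer that hooks a static or long-lived event should unhook it when it is done, typically in Dispose.

using System;

public static class Feed
{
    // a long-lived/static event publisher (illustrative)
    public static event EventHandler RecordReady;
}

public class RecordConsumer : IDisposable
{
    public RecordConsumer()
    {
        Feed.RecordReady += OnRecordReady;    // subscribing keeps this instance reachable via the publisher
    }

    private void OnRecordReady(object sender, EventArgs e)
    {
        // process the record...
    }

    public void Dispose()
    {
        Feed.RecordReady -= OnRecordReady;    // unhook so the consumer can be collected
    }
}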
If you are "heavy" on memory use, and you are using C# 4.0, you could try the "server" GC. Merge your app.config (or web.config) with:
<configuration>
  <runtime>
    <gcServer enabled="true"/>
  </runtime>
</configuration>
(By merge I mean that if you already have some of these sections, use them; otherwise create them. "configuration" is the first-level element of the app.config.) This is better for some apps (apps that don't need heavy interaction with the user).
They end up in the second generation because the GC simply decided not to collect them. I don't think it is really a problem to worry about. If you really wanted to, you could reuse the 1000 objects instead of continuously creating new ones, so that these objects wouldn't need to be garbage collected at all. You could force garbage collection to ensure the objects "REALLY" get dumped, but that would degrade performance, as GC is an expensive operation (or so I am told).
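A rough sketch of the reuse idea (Record is the OP's type; the pool itself is made up and not thread safe as written):

using System.Collections.Generic;

public class RecordPool
{
    private readonly Queue<Record> _pool = new Queue<Record>();

    public RecordPool(int size)
    {
        for (int i = 0; i < size; i++)
            _pool.Enqueue(new Record());      // assumes Record has a parameterless constructor
    }

    public Record Take()
    {
        return _pool.Count > 0 ? _pool.Dequeue() : new Record();
    }

    public void Return(Record record)
    {
        _pool.Enqueue(record);                // caller must overwrite the fields before reuse
    }
}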

Why doesn't .NET have a SoftReference as well as a WeakReference, like Java?

I really love WeakReferences. But I wish there were a way to tell the CLR how weak (say, on a scale of 1 to 5) you consider the reference to be. That would be brilliant.
Java has SoftReference, WeakReference and, I believe, a third type called a "phantom reference". That's three levels right there, each of which the GC treats with a different behaviour algorithm when deciding whether that object gets the chop.
I am thinking of subclassing .NET's WeakReference (luckily and slightly bizarrely it isn't sealed) to make a pseudo-SoftReference that is based on an expiration timer or something.
I believe the fundamental reason that .NET does not have soft references is that it can rely on an operating system with virtual memory. A Java process must specify its maximum OS memory (e.g. with -Xmx128M), and it never takes more OS memory than that. Whereas a .NET process keeps taking the OS memory it needs, which the OS supplies with disk-backed virtual memory when RAM runs out. If .NET allowed soft references, the .NET runtime would not know when to release them unless it either peeked deep into the OS to see whether its memory is actually paged to disk (a nasty OS/CLR dependency) or required a maximum process memory footprint to be specified (e.g. an equivalent of -Xmx). I guess Microsoft does not want to add -Xmx to .NET because they think the OS should decide how much RAM each process gets (by choosing which virtual memory pages to hold in RAM or on disk), not the process itself.
Java SoftReferences are used in the creation of memory sensitive caches (they serve no other purpose).
As of .NET 4, .NET has a class System.Runtime.Caching.MemoryCache which will probably meet any such needs.
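A minimal sketch of that (System.Runtime.Caching, .NET 4; the key and loader are made up): cache entries can be evicted under memory pressure, which is roughly the behaviour a SoftReference-based cache gives you in Java.

using System;
using System.Runtime.Caching;

public static class SoftCacheDemo
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static byte[] GetData(string key)
    {
        byte[] data = Cache.Get(key) as byte[];
        if (data == null)
        {
            data = LoadExpensively(key);
            Cache.Set(key, data, new CacheItemPolicy
            {
                SlidingExpiration = TimeSpan.FromMinutes(10)   // eviction also happens under memory pressure
            });
        }
        return data;
    }

    private static byte[] LoadExpensively(string key)
    {
        // placeholder for the expensive load
        return new byte[1024];
    }
}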
Having a WeakReference with varying levels of weakness (priority) sounds nice, but it might also make the GC's job harder, not easier. (I've no idea about the GC internals, but) I would assume some sort of additional access statistics are kept for WeakReference objects so that the GC can clean them up efficiently (e.g. it might get rid of the least-used items first).
More than likely the added complexity wouldn't make anything any more efficient because the most efficient way is to get rid of infrequently used WeakReferences first. If you could assign a priority, how would you do it? This smells like a premature optimization: the programmer doesn't really know most of the time and is guessing; the result is a slower GC collection cycle that is probably reclaiming the wrong objects.
It raises the question, though: if you care about the WeakReference.Target object being reclaimed, is WeakReference really a good fit in the first place?
It's like a cache. You shove stuff into the cache and ask the cache to make it stale after x minutes, but most caches never guarantee to keep it around at all. It just guarantees that if it does, it will expire it according to the policy requested.
My guess as to why this isn't there already would be simplicity. Most people, I think, would call it a virtue that there is only one type of reference, not four.
Maybe the ASP.NET Cache class (System.Web.Caching.Cache) might help achieve what you want? It automatically removes objects if memory gets low:
ASP.NET Caching Overview
Here's an article that shows how to use the Cache class in a windows forms application.
quoted from: Equivalent to SoftReference in .net?
Don't forget that you also have your standard references (the ones that you use on a daily basis). This gives you one more level.
WeakReferences should be used when you don't really care if the object goes away, while SoftReferences should really only be used where you would use a normal reference, but you would rather your object be cleared than run out of memory. I'm not sure of the specifics, but I suspect that the GC normally traces through SoftReferences but not WeakReferences when determining which objects are live, and when running low on memory it will also skip the SoftReferences.
My guess is that the .NET designers felt that the difference was confusing to most people and/or that SoftReferences add more complexity than they really wanted, so they decided to leave them out.
As a side note, AFAIK PhantomReferences are mostly designed for internal use by the virtual machine and are not intended for actual client use.
Maybe there should be a property where you can specify the minimum generation the object must reach before it can be collected. If you specify 1, it is the weakest possible reference; if you specify 3, the object would need to survive at least 3 prior collections before it can be considered for collection itself.
I thought the trackResurrection flag was no good for this because by that time the object has already been finalized? I may be wrong though...
(PS: I am the OP, just signed up. PITA that it doesn't inherit your history from "unregistered" accounts.)
Looking for the 'trackResurrection' option passed to the constructor perhaps?
The GC class also offers some assistance.
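For what it's worth, a tiny illustration of those two pieces (the trackResurrection difference only matters for objects with finalizers, and the output is indicative rather than guaranteed):

using System;

public static class WeakRefDemo
{
    public static void Main()
    {
        object data = new byte[1024];
        WeakReference shortWeak = new WeakReference(data, false); // cleared once the object is collected
        WeakReference longWeak = new WeakReference(data, true);   // also tracks the object through finalization

        Console.WriteLine(GC.GetGeneration(data));                 // generation the object currently lives in

        data = null;
        GC.Collect();                                              // for demonstration only
        Console.WriteLine(shortWeak.IsAlive);                      // likely false after collection
        Console.WriteLine(longWeak.IsAlive);                       // same here, since byte[] has no finalizer
    }
}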
I don't know why .NET does not have SoftReferences.
BUT in Java, SoftReferences are IMHO overused. The reason is that, at least in an application server, you would want to be able to influence per application how long your SoftReferences live. That's currently not possible in Java.

Customization hooks: virtual functions or events?

Performance- and design-wise, what would be the pros and cons of using a sealed class with events versus an abstract class with a virtual function?
There will only be one listener to the events...
You shouldn't worry too much about performance in terms of abstract classes, inheritance, and event subscription. These are language constructs designed to make development & maintenance simpler. They aren't really designed with performance completely in mind.
There are better things to worry about when it comes to performance. A few things come to mind:
Boxing & unboxing - Try to avoid boxing & unboxing objects too much if you're doing a lot of repetitive or iterative tasks.
Reference Types vs. Value Types - Objects created as "structs" are stored by value. This means the entire value of the object is copied when it is passed around. This can be more expensive, but its lifetime is more deterministic, so it has the advantage of usually existing only within certain scopes. Objects created as "classes" are stored by reference. When passing a by-reference object around through code, you only pass the reference to the object, which means less memory to move around. The downside is that, because it is allocated on the heap, its lifespan in memory is less deterministic.
Subscribing/unsubscribing to events - This is not so much a performance issue as a general development mistake. Objects will not be GC'ed while event subscriptions keep them reachable. If you leave subscriptions open, your object can remain in memory forever, causing a memory leak. Microsoft has good documentation on a WeakEvent pattern to help work around this problem.
You should also read Microsoft's MSDN documentation on Performance. It's a pretty good reference for understanding the real performance killers in .NET. Sealed & abstract classes and event handlers are usually not a performance concern.
Generally, code structure is more important to worry about. Think about how you're working with your data and what patterns you use that could be heavy to execute.
They don't exactly look like similar alternatives... If the question is whether virtual methods are faster than calling an event, then the answer is yes, but only slightly.
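To make the comparison concrete, these are the two shapes being weighed (names are illustrative):

using System;

// 1) Abstract base class with a virtual hook: the subclass is the customization point.
public abstract class ProcessorBase
{
    public void Run(int item)
    {
        // ...shared work...
        OnItemProcessed(item);          // direct virtual call
    }

    protected virtual void OnItemProcessed(int item) { }
}

// 2) Sealed class with an event: the single subscriber is the customization point.
public sealed class Processor
{
    public event Action<int> ItemProcessed;

    public void Run(int item)
    {
        // ...shared work...
        Action<int> handler = ItemProcessed;   // copy to avoid a race with unsubscription
        if (handler != null)
            handler(item);                     // delegate invocation, marginally slower than a virtual call
    }
}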
