There are basically two patterns for avoiding duplicate registration of event handlers:
(According to this discussion: C# pattern to prevent an event handler hooked twice)
Using the System.Linq namespace, check whether the handler is already registered by calling GetInvocationList().Contains(MyEventHandlerMethod) (see the sketch below).
Do the unregistering before registering, like this:
MyEvent -= MyEventHandlerMethod;
MyEvent += MyEventHandlerMethod;
My question is, performance-wise, which one is better, or is there a significant difference between them in performance?
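For reference, here is a minimal sketch of the first pattern. The names Publisher, MyEvent and SubscribeOnce are placeholders, and note that an event's invocation list can only be read from inside the declaring type:

using System;
using System.Linq;

public class Publisher
{
    public event EventHandler MyEvent;

    // Subscribe only if this exact handler is not already registered.
    public void SubscribeOnce(EventHandler handler)
    {
        // MyEvent is null until the first subscription.
        if (MyEvent == null || !MyEvent.GetInvocationList().Contains(handler))
            MyEvent += handler;
    }
}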
I don't think this matters a lot, both in assumed performance gain and actual difference.
Both GetInvocationList and -= walk the internal array _invocationList. (See source)
The LINQ extension method Contains will take more time, since the entire invocation list needs to be copied into a new array, returned, and then walked by Contains itself. On the other hand, Contains has the advantage that it skips the add entirely when the handler is already registered, which means a small performance gain.
The first option won't work for external callers, and is not very efficient anyway.
The second option should be fine (note that it creates two delegate instances each time, though); however, also consider:
in most scenarios, it should be easy to know whether you are already subscribed; if you can't know, then that suggests an architectural problem
The typical usage would be "subscribe {some usage} [unsubscribe]" where the unsubscribe may not be necessary, depending on the relative lifetimes of the event publisher and subscriber; if you actually have a re-entrant scenario, then "subscribe if not already subscribed" is itself problematic, because when unsubscribing later, you don't know if you're preventing an outer iteration receiving the event.
According to the documentation, the invocation list is stored as an array or something similar to it, and the order of the event handlers is preserved too. Maybe there is some inner structure there to allow fast lookup of a particular method.
So in the worst case, GetInvocationList().Contains(MyEventHandlerMethod) is O(1) (we simply get the reference to the array) plus O(n) for searching for the method, assuming there is no optimization for the lookup. I suspect there is some optimizing code, though, and that it is actually O(log n).
The second approach has the additional operation of adding, which is, I think, O(1), since we add the event handler to the end.
So to see any difference between these operations, you would need a lot of event handlers.
But! If you use the second approach, as I said, you'll move the event handler to the end of the list, which can be wrong in some cases. So use the first one, and have no doubts about it.
MyEvent -= MyEventHandlerMethod first needs to find the registered event handler in the invocation list in order to remove it.
So GetInvocationList().Contains is better, but the difference is truly insignificant.
But notice that for an event declared as event EventHandler Foo, you can't access its invocation list from outside the declaring class...
We are developing a game with the Unity3D engine (which uses Mono for user code - our code is written in C#).
The scenario is that we have a class exposing an event, with around ~ 250 registrations to that event (each level object on the game's map registers itself to that event):
// Every level registers itself (around 250 levels)
ScoresDataHelper.OnScoresUpdate += HandleOnScoresUpdate;
When destroying this scene, every object unregisters itself from the event:
ScoresDataHelper.OnScoresUpdate -= HandleOnScoresUpdate;
When using the built-in profiler, I am seeing a huge memory allocation; digging deeper shows that it is due to the delegates being unregistered.
I suspect this is due to the fact that delegates are immutable, so chaining them together creates new instances?
(A screenshot from Unity's profiler showed these allocations.)
Is there any way to avoid these memory allocations when dealing with a large number of event subscriptions?
Since you confirmed in the comments that you want to remove all of the event subscriptions, there is no reason to unsubscribe them one by one.
You could just set the event to null. This will unsubscribe everything from the event without allocating any memory.
this.OnScoresUpdate = null;
One thing to note is that you can't do this from outside of the ScoresDataHelper class; it must be done inside ScoresDataHelper itself.
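A minimal sketch, assuming a ScoresDataHelper shaped like the one in the question (the static declaration and the Action event type are assumptions, since the question doesn't show the declaration):

public static class ScoresDataHelper
{
    public static event Action OnScoresUpdate;

    public static void ClearSubscriptions()
    {
        // One assignment drops the whole invocation list, with none of the
        // per-handler Delegate.Remove allocations seen in the profiler.
        OnScoresUpdate = null;
    }
}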
A simple solution would be not to use events in the first place.
// T, S, U stand in for whatever your listener function takes and returns.
private List<Func<T, S, U>> Listeners = new List<Func<T, S, U>>();

public void OnScoresUpdate(Func<T, S, U> listener)
{
    Listeners.Add(listener);
}

// When you want to fire the event:
foreach (var listener in Listeners)
{
    listener(param1, param2);
}

// When you want to unsubscribe all the listeners
// (Clear avoids allocating a replacement list):
Listeners.Clear();
You can also use a weak collection if you want to avoid memory issues, removing listeners automatically as soon as the element gets garbage collected.
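A rough sketch of the weak-collection idea, reusing the Func<T,S,U> shape from above. Beware the classic weak-event pitfall: if nothing else holds a strong reference to the delegate itself, it can be collected prematurely:

private readonly List<WeakReference<Func<T, S, U>>> weakListeners =
    new List<WeakReference<Func<T, S, U>>>();

public void AddListener(Func<T, S, U> listener)
{
    weakListeners.Add(new WeakReference<Func<T, S, U>>(listener));
}

public void Fire(T param1, S param2)
{
    // Prune entries whose targets were garbage collected, then fire the rest.
    weakListeners.RemoveAll(wr => !wr.TryGetTarget(out _));
    foreach (var wr in weakListeners)
        if (wr.TryGetTarget(out var listener))
            listener(param1, param2);
}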
If you want to be able to unsubscribe a particular listener, you can change the collection to a HashSet. Then you would remove a listener like this:

private HashSet<Func<T, S, U>> listeners = new HashSet<Func<T, S, U>>();

public void RemoveListener(Func<T, S, U> listener)
{
    listeners.Remove(listener);
}
Adding this answer because this is the top result on Google right now:
According to these tests, adding a function to a delegate creates 208 bytes of garbage, and adding delegates to an event creates 104 bytes multiplied by the number of events added. So the 10th delegate added to an event creates 1kb of garbage! I imagine a similar test for unsubscribing would have the same results. The two answers above implementing a custom event class using a List or HashSet have been an easy and effective solution for my own version of this problem in Unity.
When I use Reactive Extensions (Rx) with a LINQ filter, what happens under the hood?
Is this,
var move = Observable.FromEventPattern<MouseEventArgs>(frm, "MouseMove");
IObservable<System.Drawing.Point> points = from evt in move
select evt.EventArgs.Location;
var overfirstbisector = from pos in points
where pos.X == pos.Y
select pos;
var movesub = overfirstbisector.Subscribe(pos => Console.WriteLine("mouse at " + pos));
more efficient than this?
// MouseEventArgs (rather than EventArgs) is needed for the Location property.
private void MouseMove(object sender, MouseEventArgs args)
{
    if (args.Location.X == args.Location.Y)
        Console.WriteLine("mouse at " + args.Location);
}
I'm not talking about the filtering logic itself, but about the event behavior of the two methods.
In Rx, is the event raised in exactly the same way as a regular event, just with a wrapper, or is there something special under the hood?
In this case, there's no algorithmic performance benefit for using the Rx query over the typical event handler - in fact, your Rx query may actually be marginally slower than the typical event handler. "Under the hood" the Rx query is basically doing the same thing as the typical event handler, but in a cleaner way.
The Rx query is not more efficient than the directly subscribing the events. Under the hood, the Rx query is still subscribing to the events and adding a bit of logic (e.g. for the schedulers), so I would say you are trading a bit of performance for increased readability, flexibility (since you can quickly change and adapt the query) and testability (since the Rx query can be much more easily unit-tested).
There is nothing "special" about Rx. Rx is just a library, not a language feature. If you wanted to, you could have built Rx yourself in a normal old C# project, it just happened that the smart people at Microsoft thought of it first. The code is open source now so you can just download it and see how it all works (admittedly it got a lot more complex in v2)
In your example, the Rx code will need to do the following:
Reflectively look for an event called "MouseMove" on the frm object
Create an observable sequence (IObservable<MouseEventArgs>) from the event
Ensure the safe semantics of the implicit IObservable contract, e.g. that values are sequential, that subscriptions are thread safe, etc.
Do the condition check
Subscribe to the sequence (safely)
Print to the console when a value is pushed.
In contrast, the non-rx code does the following:
Receives a virtual call from a base class
Does the condition check
Prints the value to the console.
So no reflection & no safety checks, but the same result. In practice the performance will be very fast for both so you are unlikely to see any performance difference.
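To make the first list concrete, here is a conceptual sketch of roughly what Observable.FromEventPattern does for the example above. This is not Rx's actual source, just the shape of it (assumes System.Reflection, System.Reactive.Linq and System.Reactive.Disposables):

// Reflectively find the "MouseMove" event on frm, then bridge the CLR
// event into an observable sequence.
IObservable<EventPattern<MouseEventArgs>> move =
    Observable.Create<EventPattern<MouseEventArgs>>(observer =>
    {
        EventInfo evt = frm.GetType().GetEvent("MouseMove");
        MouseEventHandler handler =
            (s, e) => observer.OnNext(new EventPattern<MouseEventArgs>(s, e));
        evt.AddEventHandler(frm, handler);

        // Unsubscribe from the CLR event when the Rx subscription is disposed.
        return Disposable.Create(() => evt.RemoveEventHandler(frm, handler));
    });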
With regard to unit testing, I think any argument for or against is nonsense. We are talking about a MouseMove event; how are you going to unit test that? Putting all that Rx in your code base doesn't appear to pay for itself in my opinion (slower, more code, another framework for a dev to understand, etc...)
According to Microsoft event naming guidelines, the sender parameter in a C# event handler "is always of type object, even if it is possible to use a more specific type".
This leads to lots of event handling code like:
RepeaterItem item = sender as RepeaterItem;
if (item != null) { /* Do some stuff */ }
Why does the convention advise against declaring an event handler with a more specific type?
class MyType
{
    public event MyEventHandler MyEvent;
}

...

delegate void MyEventHandler(MyType sender, MyEventArgs e);
Am I missing a gotcha?
For posterity: I agree with the general sentiment in the answers that the convention is to use object (and to pass data via the EventArgs) even when it is possible to use a more specific type, and in real-world programming it is important to follow the convention.
Edit: bait for search: RSPEC-3906 rule "Event Handlers should have the correct signature"
Well, it's a pattern rather than a rule. It does mean that one component can forward on an event from another, keeping the original sender even if it's not the normal type raising the event.
I agree it's a bit strange - but it's probably worth sticking to the convention just for familiarity's sake. (Familiarity for other developers, that is.) I've never been particularly keen on EventArgs myself (given that on its own it conveys no information) but that's another topic. (At least we've got EventHandler<TEventArgs> now - although it would help if there were also an EventArgs<TContent> for the common situation where you just need a single value to be propagated.)
EDIT: It does make the delegate more general purpose, of course - a single delegate type can be reused across multiple events. I'm not sure I buy that as a particularly good reason - particularly in the light of generics - but I guess it's something...
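For what it's worth, the EventArgs<TContent> idea would only take a few lines; a sketch (this is not a framework type):

public class EventArgs<TContent> : EventArgs
{
    public EventArgs(TContent content) { Content = content; }

    // The single value being propagated with the event.
    public TContent Content { get; }
}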
I think there's a good reason for this convention.
Let's take (and expand on) #erikkallen's example:
void SomethingChanged(object sender, EventArgs e) {
    EnableControls();
}
...
MyRadioButton.Click += SomethingChanged;
MyCheckbox.Click += SomethingChanged;
MyDropDown.SelectionChanged += SomethingChanged;
...
This is possible (and has been since .NET 1, before generics) because the handler's (object, EventArgs) signature is compatible with all of these events.
Your question makes total sense if you're going top-down - i.e. you need the event in your code, so you add it to your control.
However the convention is to make it easier when writing the components in the first place. You know that for any event the basic pattern (object sender, EventArgs e) will work.
When you add the event you don't know how it will be used, and you don't want to arbitrarily constrain the developers using your component.
Your example of a generic, strongly typed event makes good sense in your code, but won't fit with other components written by other developers. For instance if they want to use your component with those above:
//this won't work
GallowayClass.Changed += SomethingChanged;
In this example the additional type-constraint is just creating pain for the remote developer. They now have to create a new delegate just for your component. If they're using a load of your components they might need a delegate for each one.
I reckon the convention is worth following for anything external or anything you expect to be used outside of a close-knit team.
I like the idea of the generic event args - I already use something similar.
I use the following delegate when I would prefer a strongly-typed sender.
/// <summary>
/// Delegate used to handle events with a strongly-typed sender.
/// </summary>
/// <typeparam name="TSender">The type of the sender.</typeparam>
/// <typeparam name="TArgs">The type of the event arguments.</typeparam>
/// <param name="sender">The control where the event originated.</param>
/// <param name="e">Any event arguments.</param>
public delegate void EventHandler<TSender, TArgs>(TSender sender, TArgs e) where TArgs : EventArgs;
This can be used in the following manner:
public event EventHandler<TypeOfSender, TypeOfEventArguments> CustomEvent;
Generics and history would play a big part, especially with the number of controls (etc) that expose similar events. Without generics, you would end up with a lot of events exposing Control, which is largely useless:
you still have to cast to do anything useful (except maybe a reference check, which you can do just as well with object)
you can't re-use the events on non-controls
If we consider generics, then again all is well, but you then start getting into issues with inheritance; if class B : A, then should events on A be EventHandler<A, ...>, and events on B be EventHandler<B, ...>? Again, very confusing, hard for tooling, and a bit messy in terms of language.
Until there is a better option that covers all of these, object works; events are almost always on class instances, so there is no boxing etc - just a cast. And casting isn't very slow.
I guess that's because you should be able to do something like
void SomethingChanged(object sender, EventArgs e) {
    EnableControls();
}
...
MyRadioButton.Click += SomethingChanged;
MyCheckbox.Click += SomethingChanged;
...
Why do you do the safe cast in your code? If you know that you only use the function as an event handler for the repeater, you know that the argument is always of the correct type and you can use a throwing cast instead, e.g. (Repeater)sender instead of (sender as Repeater).
No good reason at all; now that there's covariance and contravariance, I think it's fine to use a strongly typed sender. See the discussion in this question.
Conventions exist only to impose consistency.
You CAN strongly type your event handlers if you wish, but ask yourself if doing so would provide any technical advantage?
You should consider that event handlers don't always need to cast the sender... most of the event handling code I've seen in actual practice don't make use of the sender parameter. It is there IF it is needed, but quite often it isn't.
I often see cases where different events on different objects will share a single common event handler, which works because that event handler isn't concerned with who the sender was.
If those delegates were strongly typed, even with clever use of generics, it would be VERY difficult to share an event handler like that. In fact, by strongly typing it you are imposing the assumption that the handlers should care what the sender is, when that isn't the practical reality.
I guess what you should be asking is why WOULD you strongly type the event handling delegates? By doing so would you be adding any significant functional advantages? Are you making the usage more "consistent"? Or are you just imposing assumptions and constraints just for the sake of strong-typing?
You say:
This leads to lots of event handling code like:

RepeaterItem item = sender as RepeaterItem;
if (item != null) { /* Do some stuff */ }
Is it really lots of code?
I'd advise never to use the sender parameter to an event handler. As you've noticed, it's not statically typed. It's not necessarily the direct sender of the event, because sometimes an event is forwarded. So the same event handler may not even get the same sender object type every time it is fired. It's an unnecessary form of implicit coupling.
When you enlist with an event, at that point you must know what object the event is on, and that is what you're most likely to be interested in:
someControl.Exploded += (s, e) => someControl.RepairWindows();
And anything else specific to the event ought to be in the EventArgs-derived second parameter.
Basically the sender parameter is a bit of historical noise, best avoided.
I asked a similar question here.
It's because you can never be sure who fired the event. There is no way to restrict which types are allowed to fire a certain event.
The pattern of using EventHandler(object sender, EventArgs e) is meant to give all events a common means of identifying the event source (the sender) and a container for the event's specific payload.
The advantage of this pattern is also that it allows generating a number of different events using the same delegate type.
As for the arguments of this default delegate...
The advantage of having a single bag for all the state you want to pass along with the event is fairly obvious, especially if there are many elements in that state.
Using object instead of a strong type allows passing the event along, possibly to assemblies that do not have a reference to your type (in which case you may argue that they won't be able to use the sender anyway, but that's another story - they can still get the event).
In my own experience, I agree with Stephen Redd, very often the sender is not used. The only cases I've needed to identify the sender is in the case of UI handlers, with many controls sharing the same event handler (to avoid duplicating code).
I depart from his position, however, in that I see no problem defining strongly typed delegates, and generating events with strongly typed signatures, in the case where I know that the handler will never care who the sender is (indeed, often it should not have any scope into that type), and I do not want the inconvenience of stuffing state into a bag (EventArg subclass or generic) and unpacking it. If I only have 1 or 2 elements in my state, I'm OK generating that signature.
It's a matter of convenience for me: strong typing means the compiler keeps me on my toes, and it reduces the kind of branching like
Foo foo = sender as Foo;
if (foo != null) { ... }
which does make the code look better :)
This being said, it is just my opinion. I've deviated often from the recommended pattern for events, and I have not suffered any for it. It is important to always be clear about why it is OK to deviate from it.
Good question!
Well, that's a good question. I think because any other type could use your delegate to declare an event, so you can't be sure that the type of the sender is really "MyType".
I tend to use a specific delegate type for each event (or a small group of similar events). The useless sender and EventArgs simply clutter the API and distract from the actually relevant bits of information. Being able to "forward" events across classes is something I have yet to find useful - and if you're forwarding events like that, to an event handler that represents a different type of event, then being forced to wrap the event yourself and provide the appropriate parameters is little effort. Also, the forwarder tends to have a better idea of how to "convert" the event parameters than the final receiver does.
In short, unless there's some pressing interop reason, dump the useless, confusing parameters.
There's a standard pattern for events in .NET - they use a delegate type that takes a plain object called sender and then the actual "payload" in a second parameter, which should be derived from EventArgs.
The rationale for the second parameter being derived from EventArgs seems pretty clear (see the .NET Framework Standard Library Annotated Reference). It is intended to ensure binary compatibility between event sinks and sources as the software evolves. For every event, even if it only has one argument, we derive a custom event arguments class that has a single property containing that argument, so that way we retain the ability to add more properties to the payload in future versions without breaking existing client code. Very important in an ecosystem of independently-developed components.
But I find that the same goes for zero arguments. This means that if I have an event that has no arguments in my first version, and I write:
public event EventHandler Click;
... then I'm doing it wrong. If I change the delegate type in the future to a new class as its payload:
public class ClickEventArgs : EventArgs { ...
... I will break binary compatibility with my clients. The client ends up bound to a specific overload of an internal method add_Click that takes EventHandler, and if I change the delegate type then they can't find that overload, so there's a MissingMethodException.
Okay, so what if I use the handy generic version?
public event EventHandler<EventArgs> Click;
No, still wrong, because an EventHandler<ClickEventArgs> is not an EventHandler<EventArgs>.
So to get the benefit of EventArgs, you have to derive from it, rather than using it directly as is. If you don't, you may as well not be using it (it seems to me).
Then there's the first argument, sender. It seems to me like a recipe for unholy coupling. An event firing is essentially a function call. Should the function, generally speaking, have the ability to dig back through the stack and find out who the caller was, and adjust its behaviour accordingly? Should we mandate that interfaces should look like this?
public interface IFoo
{
    void Bar(object caller, int actualArg1, ...);
}
After all, the implementor of Bar might want to know who the caller was, so they can query for additional information! I hope you're puking by now. Why should it be any different for events?
So even if I am prepared to take the pain of making a standalone EventArgs-derived class for every event I declare, just to make it worth my while using EventArgs at all, I definitely would prefer to drop the object sender argument.
Visual Studio's autocompletion feature doesn't seem to care what delegate you use for an event - you can type += [hit Space, Return] and it writes a handler method for you that matches whatever delegate it happens to be.
So what value would I lose by deviating from the standard pattern?
As a bonus question, will C#/CLR 4.0 do anything to change this, perhaps via contravariance in delegates? I attempted to investigate this but hit another problem. I originally included this aspect of the question in that other question, but it caused confusion there. And it seems a bit much to split this up into a total of three questions...
Update:
Turns out I was right to wonder about the effects of contravariance on this whole issue!
As noted elsewhere, the new compiler rules leave a hole in the type system that blows up at runtime. The hole has effectively been plugged by defining EventHandler<T> differently to Action<T>.
So for events, to avoid that type hole you should not use Action<T>. That doesn't mean you have to use EventHandler<TEventArgs>; it just means that if you use a generic delegate type, don't pick one that is enabled for contravariance.
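To illustrate the declaration difference (a sketch; ClickEventArgs is the hypothetical type from above):

// Action<in T> is contravariant, so a general handler converts freely:
Action<EventArgs> general = e => Console.WriteLine(e);
Action<ClickEventArgs> specific = general;   // compiles

// EventHandler<TEventArgs> has no 'in' annotation, so the equivalent
// delegate-to-delegate conversion is rejected:
EventHandler<EventArgs> g = (s, e) => { };
// EventHandler<ClickEventArgs> s2 = g;      // does not compile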
Nothing, you lose nothing. I've been using Action<> since .NET 3.5 came out and it is far more natural and easier to program against.
I don't even deal with the EventHandler type for generated event handlers anymore; simply write the method signature you want and wire it up with a lambda:
btnCompleteOrder.OnClick += (o,e) => _presenter.CompleteOrder();
I don't like the event-handler pattern either. To my mind, the Sender object isn't really all that helpful. In cases where an event is saying something happened to some object (e.g. a change notification) it would be more helpful to have the information in the EventArgs. The only use I could kinda-sorta see for Sender would be to unsubscribe from an event, but it's not always clear which event one should unsubscribe from.
Incidentally, if I had my druthers, an Event wouldn't be an AddHandler method and a RemoveHandler method; it would just be an AddHandler method which would return a MethodInvoker that could be used for unsubscription. Rather than a Sender argument, I'd have the first argument be a copy of the MethodInvoker required for unsubscription (in case an object finds itself receiving events to which the unsubscribe invoker has been lost). The standard MulticastDelegate wouldn't be suitable for dispatching events (since each subscriber should receive a different unsubscription delegate) but unsubscribing events wouldn't require a linear search through an invocation list.
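A sketch of that idea, using Action in place of MethodInvoker (the Event<TArgs> class is hypothetical): subscription hands back the delegate you invoke to unsubscribe, so unsubscription is a dictionary removal rather than a linear search.

using System;
using System.Collections.Generic;

public class Event<TArgs>
{
    private readonly Dictionary<int, Action<TArgs>> handlers =
        new Dictionary<int, Action<TArgs>>();
    private int nextId;

    // Returns the "token" the subscriber calls later to unsubscribe.
    public Action AddHandler(Action<TArgs> handler)
    {
        int id = nextId++;
        handlers[id] = handler;
        return () => handlers.Remove(id);
    }

    public void Raise(TArgs args)
    {
        // Copy first so a handler may unsubscribe itself while we iterate.
        foreach (var h in new List<Action<TArgs>>(handlers.Values))
            h(args);
    }
}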
I have an app which consists of several different assemblies, one of which holds the various interfaces which the classes obey, and by which the classes communicate across assembly boundaries. There are several classes firing events, and several which are interested in these events.
My question is as follows: is it good practice to implement a central EventConsolidator of some kind? This would be highly coupled, as it would need to know every class (or at least interface) throwing an event, and every consumer of an event would need to have a reference to EventConsolidator in order to subscribe.
Currently I have the situation where class A knows class B (but not C), class B knows class C, etc. Then if C fires an event B needs to pick it up and fire its own event in order for A to respond. These kinds of chains can get quite long, and it may be that B is only interested in the event in order to pass it along. I don't want A to know about C though, as that would break encapsulation.
What is good practice in this situation? Centralise the events, or grin and bear it and define events in each intermediate class? Or what are the criteria by which to make the decision? Thanks!
Edit: Here is another question asking essentially the same thing.
You could put the event itself in an interface, so that A didn't need to know about C directly, but only that it has the relevant event. However, perhaps you mean that the instance of A doesn't have sight of an instance of C...
I would try to steer clear of a centralised event system. It's likely to make testing harder, and introduced tight coupling as you said.
One pattern which is worth knowing about is making event proxying simple. If B only exposes an event to proxy it to C, you can do:
public event FooHandler Foo
{
    add
    {
        c.Foo += value;
    }
    remove
    {
        c.Foo -= value;
    }
}
That way it's proxying the subscription/unsubscription rather than the act of raising the event. This has an impact on GC eligibility, of course - which may be beneficial or not, depending on the situation. Worth thinking about though.
What you could try is using the event brokering of either NInject or the Unity Application Block.
This allows you to, for example:
[Publish("foo://happened")]
public event EventHandler<FooArgs> FooHappened;
[Subscribe("foo://happened")]
public void Foo_Happened(object sender, FooArgs args)
{ }
If both objects are created through the container the events will be hooked up automatically.
I'd probably try to massage the domain so that each class can directly depend on the appropriate event source. What I mean is asking the question: why doesn't A know about C? Is there perhaps a D waiting to emerge?
As an alternative approach you could consider an event broker architecture. It means observers don't know directly about the source. Here's an interesting video.
This would be highly coupled, as it would need to know every class
I think you answered your own question if you consider that coupling is bad! Passing events through a chain of potential handlers is a fairly common pattern in many environments; It may not be the most efficient approach, but it avoids the complexity that your suggested approach would involve.
Another approach you could take is to use a message dispatcher. This involves using a common message format (or at least a common message header format) to represent events, and then placing those messages into a queue. A dispatcher then picks up each of those events in turn (or based on some prioritisation), and routes them directly to the required handler. Each handler must be registered with the dispatcher at startup.
A message in this case could simply be a class with a few specific fields at the start. The specific message could simply be a derivative, or you could pass your message-specific data as an 'object' parameter along with the message header.
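A minimal sketch of such a dispatcher; every name here is illustrative rather than taken from any particular library:

using System;
using System.Collections.Generic;

public class Message
{
    public string Topic;     // common header field used for routing
    public object Payload;   // message-specific data
}

public class Dispatcher
{
    private readonly Dictionary<string, Action<Message>> handlers =
        new Dictionary<string, Action<Message>>();
    private readonly Queue<Message> queue = new Queue<Message>();

    // Handlers register themselves with the dispatcher at startup.
    public void Register(string topic, Action<Message> handler)
    {
        handlers[topic] = handler;
    }

    public void Post(Message message)
    {
        queue.Enqueue(message);
    }

    // Picks up each queued message in turn and routes it to its handler.
    public void DispatchPending()
    {
        while (queue.Count > 0)
        {
            Message m = queue.Dequeue();
            if (handlers.TryGetValue(m.Topic, out Action<Message> handler))
                handler(m);
        }
    }
}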
You can check out the EventBroker object in the M$ patterns and practices lib if you want centralised events.
Personally I think its better to think about your architecture instead and even though we use the EventBroker here, none of our new code uses it and we're hoping to phase it out one sunny day.
We have our own event broker implementation (open source).
Tutorial at: http://sourceforge.net/apps/mediawiki/bbvcommon/index.php?title=Event_Broker
And a performance analysis at: www.planetgeek.ch/2009/07/12/event-broker-performance/
Advantages compared to CAB:
- better logging
- extension support
- better error handling
- extendable handlers (UI, Background Thread, ...)
and some more I cannot recall right now.
Cheers,
Urs