Auto-generate custom events at runtime - C#

I have a class that handles send & receive over a socket between my application and the network. This class uses other classes, including a low-level socket connection class and a PDU handler class that creates the messages to send and handles received data.
Currently I use an event to signal my class that the low-level connection class has data for it; I then need to send that data to the PDU handler to convert it into information the application can use, and hand the result to the application.
For future use, I am trying to keep the class as generic as possible, so that on future server/client projects I only need to change the PDU handler to take the new operations available into account and decide how to handle the data.
All that is well underway, and now I am facing the issue of handing the data back to the app. My logical approach is an event letting the app know data is ready for collection. For that I can either:
a) have one event and let the app sort out what kind of message it is through the operation code (doable)
b) have one event per operation code and have the app subscribe to all of them, so it knows from the start what it is getting
Considering the idea of making things generic, and the approach stated in (b), is there a way to dynamically create events based on a given delegate signature at runtime?
e.g.
Imagine you have opcodes in an enum called MyOperation1 and MyOperation2, and you have defined a delegate like:
public delegate void PDUEventHandler(ParamType Param, [...]);
and I want to define events called:
public event PDUEventHandler MyOperation1;
public event PDUEventHandler MyOperation2;
But if I add a new operation code I will need an event for it.
Can these events be created dynamically, or do I need to do it by hand?
If I need to do it by hand, then I guess a single event would be better, with the sorting handled app-side.
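For illustration, option (a) would boil down to a single event whose args carry the opcode; a rough sketch (PduOpCode and PduReceivedEventArgs are just placeholder names, not my real types):

using System;

// Option (a) sketch: one event, with the operation code carried in the args.
public enum PduOpCode
{
    MyOperation1,
    MyOperation2
}

public class PduReceivedEventArgs : EventArgs
{
    public PduOpCode OpCode { get; set; }
    public byte[] Payload { get; set; }
}

public class PduHandler
{
    public event EventHandler<PduReceivedEventArgs> PduReceived;

    protected void OnPduReceived(PduOpCode opCode, byte[] payload)
    {
        var handler = PduReceived;
        if (handler != null)
            handler(this, new PduReceivedEventArgs { OpCode = opCode, Payload = payload });
    }
}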

Perhaps what you need is a callback: essentially you pass the event handler a delegate for it to execute when the handler is done. Here's a Stack Overflow thread to give you an idea.
In terms of event handlers and reusability, perhaps you can extend EventArgs and have that delegate as a property.
EDIT:
I was thinking of a single PDUEventHandler having common code and a "hole" where custom code runs. That custom code is passed to the handler as a delegate (i.e. a method) or even a class instance. But let's change that a little...
Sounds like you need a factory. In fact, you're practically describing a factory.
Conceptually, let go of the idea of passing special opcodes to an EventHandler per se, or of having multi-signature PDUEventHandlers.
Create a PDUHandlerFactory class. The factory returns a customized instance as a general PDUHandler class reference. Then, instead of a PDUEventHandler, your caller has a PDUHandler reference that points to the factory-built instance.
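A minimal sketch of that idea (all type names here are hypothetical, since I don't know your actual classes):

using System;

// The caller only ever holds the abstract PDUHandler; the factory picks the
// concrete handler for each operation code.
public abstract class PDUHandler
{
    public abstract void Handle(byte[] rawData);
}

public class Operation1Handler : PDUHandler
{
    public override void Handle(byte[] rawData)
    {
        // decoding specific to operation 1 goes here
    }
}

public class Operation2Handler : PDUHandler
{
    public override void Handle(byte[] rawData)
    {
        // decoding specific to operation 2 goes here
    }
}

public static class PDUHandlerFactory
{
    public static PDUHandler Create(int opCode)
    {
        switch (opCode)
        {
            case 1: return new Operation1Handler();
            case 2: return new Operation2Handler();
            default: throw new ArgumentOutOfRangeException("opCode");
        }
    }
}

Adding a new operation then means adding a handler class and a factory case, without touching the caller.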

Related

C#: how to modify a parameter that is passed through a delegate

I have the following delegate
System.Action<SomeMessage> TheDelegate;
It has a couple of subscribers; however, as the message gets passed through all the subscribers, each one does something to it, and that change persists and is passed on to the next subscriber, which is something I don't want.
Is there a way that I can use the original message as the parameter for all subscribers?
Edit:
SomeMessage is a class, so it is passed by reference through the subscribers.
To name two possibilities:
Clone the message each time it is passed to a subscriber. How is the message structured; is it cheap to clone? Maybe you could even pass it as a struct. Cloning could be done by implementing ICloneable and cloning on a per-property basis, which can be automated by a framework, e.g. AutoMapper (http://automapper.org), or, if you already use a JSON library like JSON.NET, you could do something like var clone = FromJson(ToJson(original)). This is certainly not a fast approach, but it is an easy one that works well with deep object graphs.
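If you go the cloning route, one way is to invoke each subscriber individually and hand each its own copy; a sketch, assuming SomeMessage can clone itself via ICloneable (the Publisher class and RaiseWithClones name are just for illustration):

using System;

public class SomeMessage : ICloneable
{
    public string Text { get; set; }
    public object Clone() { return new SomeMessage { Text = Text }; }
}

public static class Publisher
{
    public static void RaiseWithClones(Action<SomeMessage> theDelegate, SomeMessage original)
    {
        if (theDelegate == null) return;

        // GetInvocationList lets us call each subscriber separately, so a
        // mutation made by one subscriber never reaches the next one.
        foreach (Action<SomeMessage> subscriber in theDelegate.GetInvocationList())
        {
            subscriber((SomeMessage)original.Clone());
        }
    }
}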
Another way would be to make the message itself immutable and let each subscriber pass change requests to some kind of collector, e.g. like this:
interface ICommandSequence
{
    void AddCommand(ICommand command);
}
and the Action becomes
System.Action<ImmutableMessage, ICommandSequence>
Each subscriber could now pass command instances to the ICommandSequence instance. After all subscribers have been called, you execute the command sequence and apply the changes to the message object. What the commands look like depends on how your messages and message processing are structured.
If your application cares about a design that is strongly focused on the business domain, you could build commands that represent real-world business events, as is done in CQRS, for example.
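A bare-bones sketch of that collector idea (ICommand, CommandSequence and ImmutableMessage are illustrative names, and ICommandSequence is repeated here so the snippet stands alone):

using System;
using System.Collections.Generic;

public class ImmutableMessage
{
    public ImmutableMessage(string text) { Text = text; }
    public string Text { get; private set; }
}

public interface ICommand
{
    void Execute();
}

public interface ICommandSequence
{
    void AddCommand(ICommand command);
}

public class CommandSequence : ICommandSequence
{
    private readonly List<ICommand> commands = new List<ICommand>();

    public void AddCommand(ICommand command) { commands.Add(command); }

    // Called by the publisher once every subscriber has run.
    public void ExecuteAll()
    {
        foreach (var command in commands)
            command.Execute();
    }
}

// Publisher side, roughly:
//   var sequence = new CommandSequence();
//   TheDelegate(message, sequence);   // now an Action<ImmutableMessage, ICommandSequence>
//   sequence.ExecuteAll();            // apply all collected changes in one place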

LabVIEW .NET events

I have a .NET 4 control/.dll which I am using with LabVIEW.
The control exposes the following event:
public delegate void CommNewDataEventHandler(UInt16 u16StageGroup, UInt32 u32Status, string[] strNewDataTitle, float[] fNewData, string[] strNewDataUnit);
public event CommNewDataEventHandler CommNewDataEvent;
What I would like to do is subscribe to this event within LabVIEW and update a numeric indicator with the value specified in float[] fNewData.
What is the correct way to do this?
[Screenshots: current VI and callback VI]
There is no "correct" way to do this, but you can put code in the callback VI to pass data to where you need it.
For example, you can pass the control reference as the user parameter (this is the terminal on the register node and the control on the FP) and then use Variant to Data to convert it back to a reference (edit: you don't need to convert if you create the VI after wiring the data into the node) and use the Value property. This is inelegant, but it will work.
A more elegant solution would be to pass a user event of your data type to the callback VI (for instance, as the user parameter) and then generate that event with the data you got. This is more cumbersome, but less hidden.
Like so (ignore the missing horizontal wire. It must have blinked when I took the screenshot, but it's there):
You can find the image here if imgur takes it down: https://forums.ni.com/ni/attachments/ni/130/16266/1/event%20callback%20example.PNG
As the previous poster has suggested, there is no "correct" way to do this. There are a number of different approaches you might take depending on the scope of your project. For the general .NET event registration and handling procedure, NI has a good example here: https://decibel.ni.com/content/docs/DOC-9161
This sample code is a "timer handler" (using a native .NET timer API) and illustrates how to register for an event and create your callback VI. Let's modify this to achieve your goal. To do so, we must somehow communicate from our callback VI to another part of the program (the part containing the numeric indicator we want to update). Options for establishing communication between separate parts of our application include:
Global variables
Functional global variable
Control/indicator references
Structured synchronization mechanism (e.g. queue, notifier, etc.)
Messaging system (e.g. UDP via local loopback, managed queues, etc.)
There are many, many options, and this is certainly not an exhaustive list. Every approach has advantages and disadvantages, and your decision should be based on what kind of application you are writing. Note that some are better style than others. My preference (when writing a fairly simple application) would be to use something like a notifier for a single-point data update. My opinion is that this offers a good amount of flexibility and power, and you won't get knocked for style points.
Below is a modified version of NI's example program, using a notifier to pass the data from the callback VI to the top-level VI. When the event fires, the callback pushes some data onto the notifier to signal to the top-level VI that the elapsed time has expired and the event has occurred. The top-level VI waits for the notification and uses the provided data to update the value of the numeric indicator.
Note that this is a very trivial example. In this case I don't really even have to send any data back; I know that if the notifier does not time out, the event has fired, and I can subsequently increment my counter in the top-level VI. However, the notifier allows you the flexibility to stuff arbitrary data into the communication pipe. Hence, it can tell me "hey! your condition occurred!" and "here's some data that was generated".
[Screenshots: callback VI and top-level VI]
If you are writing a larger application, your loop to monitor the notifier can run in parallel with other code. Doing so gives you an asynchronous mechanism for monitoring the status of the event and displaying it on the GUI, and lets you monitor the event without interfering with the rest of your application.

How to decide between a method or event?

I read a question ages ago, "How do C# Events work behind the scenes?", and Jon answered that all events are similar to methods...
In a purely hypothetical situation, I was wondering if someone could explain, or point me to a resource that explains, when to use an event over a method.
Basically, if I want to have a big red/green status picture which is linked to a bool field, and I want to change it based on the value of the bool, should I:
a) Have a method called ChangePicture which is linked to the field and changes the state of the bool and the picture.
b) Have a get/set part to the field and stick an event in the set part.
c) Have a get/set part to the field and stick a method in the set part.
d) Other?
To gain more information about events see this post.
You have options.
If your object already implements INotifyPropertyChanged and your red/green picture is a control which supports data binding, then you can simply raise the PropertyChanged event in the bool's setter and add a data binding on that property to your control.
If you aren't implementing INotifyPropertyChanged, I would still recommend doing something similar, i.e. creating your own event and having the red/green picture subscribe to it. Just calling a method directly from the setter of your property creates tight coupling, which is generally a bad thing to do.
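For example, the data-binding route might look roughly like this (a sketch only; the view model and property names are invented):

using System.ComponentModel;

// The bool property raises PropertyChanged; the red/green picture is updated
// through a binding on IsOk rather than by calling the control directly.
public class StatusViewModel : INotifyPropertyChanged
{
    private bool isOk;

    public event PropertyChangedEventHandler PropertyChanged;

    public bool IsOk
    {
        get { return isOk; }
        set
        {
            if (isOk == value) return;
            isOk = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("IsOk"));
        }
    }
}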
The answer is: It depends.
If your boolean value is in the code-behind class of your visual component (e.g. a WinForm), you can call a method ChangePicture without doing anything strange. But if your boolean value is architecturally farther away from the visual component, an event is the right way to handle the scenario, because you cannot easily call a method on the visual component: the class that contains the boolean value perhaps doesn't even know your visual component exists. :)
The best way to figure out what you should do is to look at classes in the .NET framework and see how they are designed.
Methods are "doers" or "actions", while events can be seen as notification mechanisms. That is, if others could be interested in being notified when something happens in an object, you can surface an event and have one or more subscribers to it.
Since events in .NET are multicast, meaning multiple objects can subscribe and therefore be notified of an event happening, that may be another reason to raise an event in your objects. Events also follow the observer pattern, in that the subject (your class) is unaware of its subscribers (loose coupling), whereas in order to call a method, the secondary object needs a reference to an instance of your class.
Note that a method in your class may itself raise an event. So let's say you have a method in your class called ChangePicture. In the method's implementation, you could then raise an event, PictureChanged. If someone is interested in being notified of this, they can subscribe to the event. That someone is typically not the one that made the method call to change the picture.
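As a minimal sketch of that (class and member names invented for illustration):

using System;

public class StatusIndicator
{
    private bool isGreen;

    // Raised after the picture has been changed, for anyone who cares.
    public event EventHandler PictureChanged;

    // The method is the "doer"; the event is the notification.
    public void ChangePicture(bool green)
    {
        isGreen = green;
        // ...swap the actual image here...

        var handler = PictureChanged;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}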
Events are delegates. Delegates are objects. Events are actually MulticastDelegates (a base class in the .NET Framework). These objects eventually call a method, which is the method that gets invoked as part of the event notification. So they are slightly "heavier" than just a method call, but that should almost never determine your design.

What would I lose by abandoning the standard EventHandler pattern in .NET?

There's a standard pattern for events in .NET - they use a delegate type that takes a plain object called sender and then the actual "payload" in a second parameter, which should be derived from EventArgs.
The rationale for the second parameter being derived from EventArgs seems pretty clear (see the .NET Framework Standard Library Annotated Reference). It is intended to ensure binary compatibility between event sinks and sources as the software evolves. For every event, even if it only has one argument, we derive a custom event arguments class that has a single property containing that argument, so that way we retain the ability to add more properties to the payload in future versions without breaking existing client code. Very important in an ecosystem of independently-developed components.
But I find that the same goes for zero arguments. This means that if I have an event that has no arguments in my first version, and I write:
public event EventHandler Click;
... then I'm doing it wrong. If I change the delegate type in the future to a new class as its payload:
public class ClickEventArgs : EventArgs { ...
... I will break binary compatibility with my clients. The client ends up bound to a specific overload of an internal method add_Click that takes EventHandler, and if I change the delegate type then they can't find that overload, so there's a MissingMethodException.
Okay, so what if I use the handy generic version?
public event EventHandler<EventArgs> Click;
No, still wrong, because an EventHandler<ClickEventArgs> is not an EventHandler<EventArgs>.
So to get the benefit of EventArgs, you have to derive from it, rather than using it directly as is. If you don't, you may as well not be using it (it seems to me).
Then there's the first argument, sender. It seems to me like a recipe for unholy coupling. An event firing is essentially a function call. Should the function, generally speaking, have the ability to dig back through the stack and find out who the caller was, and adjust its behaviour accordingly? Should we mandate that interfaces should look like this?
public interface IFoo
{
    void Bar(object caller, int actualArg1, ...);
}
After all, the implementor of Bar might want to know who the caller was, so they can query for additional information! I hope you're puking by now. Why should it be any different for events?
So even if I am prepared to take the pain of making a standalone EventArgs-derived class for every event I declare, just to make it worth my while using EventArgs at all, I definitely would prefer to drop the object sender argument.
Visual Studio's autocompletion feature doesn't seem to care what delegate you use for an event - you can type += [hit Space, Return] and it writes a handler method for you that matches whatever delegate it happens to be.
So what value would I lose by deviating from the standard pattern?
As a bonus question, will C#/CLR 4.0 do anything to change this, perhaps via contravariance in delegates? I attempted to investigate this but hit another problem. I originally included this aspect of the question in that other question, but it caused confusion there. And it seems a bit much to split this up into a total of three questions...
Update:
Turns out I was right to wonder about the effects of contravariance on this whole issue!
As noted elsewhere, the new compiler rules leave a hole in the type system that blows up at runtime. The hole has effectively been plugged by defining EventHandler<T> differently to Action<T>.
So for events, to avoid that type hole you should not use Action<T>. That doesn't mean you have to use EventHandler<TEventArgs>; it just means that if you use a generic delegate type, don't pick one that is enabled for contravariance.
Nothing, you lose nothing. I've been using Action<> since .NET 3.5 came out, and it is far more natural and easier to program against.
I don't even deal with the EventHandler type for generated event handlers anymore; I simply write the method signature I want and wire it up with a lambda:
btnCompleteOrder.OnClick += (o,e) => _presenter.CompleteOrder();
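On the publishing side that just means declaring the event with whatever delegate fits; a sketch with invented names:

using System;

public class OrderButton
{
    // Payload only: no sender, no EventArgs subclass.
    public event Action<int> OrderCompleted;

    public void Complete(int orderId)
    {
        var handler = OrderCompleted;
        if (handler != null)
            handler(orderId);
    }
}

// Subscribing with a lambda:
//   orderButton.OrderCompleted += id => Console.WriteLine("Order {0} completed", id);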
I don't like the event-handler pattern either. To my mind, the sender object isn't really all that helpful. In cases where an event is saying something happened to some object (e.g. a change notification), it would be more helpful to have that information in the EventArgs. The only use I could kinda-sorta see for sender would be to unsubscribe from an event, but it's not always clear which event one should unsubscribe from.
Incidentally, if I had my druthers, an event wouldn't be an AddHandler method and a RemoveHandler method; it would just be an AddHandler method which would return a MethodInvoker that could be used for unsubscription. Rather than a sender argument, I'd have the first argument be a copy of the MethodInvoker required for unsubscription (in case an object finds itself receiving events for which the unsubscribe invoker has been lost). The standard MulticastDelegate wouldn't be suitable for dispatching such events (since each subscriber should receive a different unsubscription delegate), but unsubscribing wouldn't require a linear search through an invocation list.
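That idea can be approximated in ordinary C# with a subscribe method that returns the unsubscribe action; purely a sketch of the concept (UnsubscribableEvent is not an existing framework type, and Action stands in for MethodInvoker):

using System;
using System.Collections.Generic;

public class UnsubscribableEvent<T>
{
    private readonly List<Action<T>> handlers = new List<Action<T>>();

    // Subscribe returns the "token" that removes the handler again, so the
    // subscriber never needs a sender argument to work out what to drop.
    public Action Subscribe(Action<T> handler)
    {
        handlers.Add(handler);
        return () => handlers.Remove(handler);
    }

    public void Raise(T payload)
    {
        // Copy first, so a handler may unsubscribe while being invoked.
        foreach (var handler in handlers.ToArray())
            handler(payload);
    }
}

// var unsubscribe = clickEvent.Subscribe(args => Console.WriteLine("clicked"));
// ...
// unsubscribe();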

C# Best practice: Centralised event controller or not

I have an app which consists of several different assemblies, one of which holds the various interfaces which the classes obey, and by which the classes communicate across assembly boundaries. There are several classes firing events, and several which are interested in these events.
My question is as follows: is it good practice to implement a central EventConsolidator of some kind? This would be highly coupled, as it would need to know every class (or at least interface) throwing an event, and every consumer of an event would need to have a reference to EventConsolidator in order to subscribe.
Currently I have the situation where class A knows class B (but not C), class B knows class C, and so on. Then if C fires an event, B needs to pick it up and fire its own event in order for A to respond. These chains can get quite long, and it may be that B is only interested in the event in order to pass it along. I don't want A to know about C, though, as that would break encapsulation.
What is good practice in this situation? Centralise the events, or grin and bear it and define events in each intermediate class? Or what are the criteria by which to make the decision? Thanks!
Edit: Here is another question asking essentially the same thing.
You could put the event itself in an interface, so that A didn't need to know about C directly, but only that it has the relevant event. However, perhaps you mean that the instance of A doesn't have sight of an instance of C...
I would try to steer clear of a centralised event system. It's likely to make testing harder, and it introduces tight coupling, as you said.
One pattern which is worth knowing about is making event proxying simple. If B only exposes an event to proxy it to C, you can do:
public event FooHandler Foo
{
    add
    {
        c.Foo += value;
    }
    remove
    {
        c.Foo -= value;
    }
}
That way it's proxying the subscription/unsubscription rather than the act of raising the event. This has an impact on GC eligibility, of course - which may be beneficial or not, depending on the situation. Worth thinking about though.
What you could try is using the event brokering of either NInject or the Unity Application Block.
This allows you to, for example:
[Publish("foo://happened")]
public event EventHandler<FooArgs> FooHappened;
[Subscribe("foo://happened")]
public void Foo_Happened(object sender, FooArgs args)
{ }
If both objects are created through the container the events will be hooked up automatically.
I'd probably try to massage the domain so that each class can directly depend on the appropriate event source. What I mean is asking the question: why doesn't A know about C? Is there perhaps a D waiting to emerge?
As an alternative approach you could consider an event broker architecture. It means observers don't know directly about the source. Here's an interesting video.
This would be highly coupled, as it would need to know every class
I think you answered your own question if you consider that coupling is bad! Passing events through a chain of potential handlers is a fairly common pattern in many environments; it may not be the most efficient approach, but it avoids the complexity that your suggested approach would involve.
Another approach you could take is to use a message dispatcher. This involves using a common message format (or at least a common message header format) to represent events, and then placing those messages into a queue. A dispatcher then picks up each of those events in turn (or based on some prioritisation), and routes them directly to the required handler. Each handler must be registered with the dispatcher at startup.
A message in this case could simply be a class with a few specific fields at the start. The specific message could simply be a derivative, or you could pass your message-specific data as an 'object' parameter along with the message header.
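As a rough illustration of that dispatcher idea (all of the type names below are invented):

using System;
using System.Collections.Generic;

// A common header (here just a Topic string) identifies each message; handlers
// register per topic, and the dispatcher drains the queue and routes each
// message to its registered handler.
public class Message
{
    public string Topic { get; set; }
    public object Body { get; set; }
}

public class MessageDispatcher
{
    private readonly Queue<Message> queue = new Queue<Message>();
    private readonly Dictionary<string, Action<Message>> handlers =
        new Dictionary<string, Action<Message>>();

    public void Register(string topic, Action<Message> handler)
    {
        handlers[topic] = handler;
    }

    public void Post(Message message)
    {
        queue.Enqueue(message);
    }

    public void DispatchPending()
    {
        while (queue.Count > 0)
        {
            var message = queue.Dequeue();
            Action<Message> handler;
            if (handlers.TryGetValue(message.Topic, out handler))
                handler(message);
        }
    }
}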
You can check out the EventBroker object in the M$ patterns and practices lib if you want centralised events.
Personally I think it's better to think about your architecture instead, and even though we use the EventBroker here, none of our new code uses it and we're hoping to phase it out one sunny day.
We have our own event broker implementation (open source).
Tutorial at: http://sourceforge.net/apps/mediawiki/bbvcommon/index.php?title=Event_Broker
And a performance analysis at: www.planetgeek.ch/2009/07/12/event-broker-performance/
Advantages compared to CAB:
- better logging
- extension support
- better error handling
- extendable handlers (UI, Background Thread, ...)
and some more I cannot recall right now.
Cheers,
Urs
