I have an app which consists of several different assemblies, one of which holds the various interfaces that the classes implement and through which the classes communicate across assembly boundaries. There are several classes firing events, and several which are interested in these events.
My question is as follows: is it good practice to implement a central EventConsolidator of some kind? This would be highly coupled, as it would need to know every class (or at least interface) throwing an event, and every consumer of an event would need to have a reference to EventConsolidator in order to subscribe.
Currently I have the situation where class A knows class B (but not C), class B knows class C, etc. Then if C fires an event B needs to pick it up and fire its own event in order for A to respond. These kinds of chains can get quite long, and it may be that B is only interested in the event in order to pass it along. I don't want A to know about C though, as that would break encapsulation.
What is good practice in this situation? Centralise the events, or grin and bear it and define events in each intermediate class? Or what are the criteria by which to make the decision? Thanks!
Edit: Here is another question asking essentially the same thing.
You could put the event itself in an interface, so that A didn't need to know about C directly, but only that it has the relevant event. However, perhaps you mean that the instance of A doesn't have sight of an instance of C...
I would try to steer clear of a centralised event system. It's likely to make testing harder, and it introduces tight coupling, as you said.
One pattern which is worth knowing about is making event proxying simple. If B exposes an event only in order to pass subscriptions through to C, you can do:
public event FooHandler Foo
{
    add
    {
        c.Foo += value;
    }
    remove
    {
        c.Foo -= value;
    }
}
That way it's proxying the subscription/unsubscription rather than the act of raising the event. This has an impact on GC eligibility, of course - which may be beneficial or not, depending on the situation. Worth thinking about though.
What you could try is using the event brokering of either NInject or the Unity Application Block.
This allows you to, for example:
[Publish("foo://happened")]
public event EventHandler<FooArgs> FooHappened;
[Subscribe("foo://happened")]
public void Foo_Happened(object sender, FooArgs args)
{ }
If both objects are created through the container the events will be hooked up automatically.
I'd probably try to massage the domain so that each class can directly depend on the appropriate event source. What I mean is asking the question: why doesn't A know about C? Is there perhaps a D waiting to emerge?
As an alternative approach you could consider an event broker architecture. It means observers don't know directly about the source. Here's an interesting video.
This would be highly coupled, as it would need to know every class
I think you answered your own question if you consider that coupling is bad! Passing events through a chain of potential handlers is a fairly common pattern in many environments; it may not be the most efficient approach, but it avoids the complexity that your suggested approach would involve.
Another approach you could take is to use a message dispatcher. This involves using a common message format (or at least a common message header format) to represent events, and then placing those messages into a queue. A dispatcher then picks up each of those events in turn (or based on some prioritisation), and routes them directly to the required handler. Each handler must be registered with the dispatcher at startup.
A message in this case could simply be a class with a few specific fields at the start. The specific message could simply be a derived class, or you could pass your message-specific data as an 'object' parameter along with the message header.
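A minimal sketch of that dispatcher idea (the Message, IMessageHandler and MessageDispatcher names are illustrative, not from any particular library):

using System.Collections.Generic;

// Common message header: a routing key plus a message-specific payload.
public class Message
{
    public string Topic;
    public object Body;
}

public interface IMessageHandler
{
    void Handle(Message message);
}

public class MessageDispatcher
{
    private readonly Queue<Message> _queue = new Queue<Message>();
    private readonly Dictionary<string, List<IMessageHandler>> _handlers =
        new Dictionary<string, List<IMessageHandler>>();

    // Handlers register with the dispatcher at startup.
    public void Register(string topic, IMessageHandler handler)
    {
        List<IMessageHandler> list;
        if (!_handlers.TryGetValue(topic, out list))
        {
            list = new List<IMessageHandler>();
            _handlers[topic] = list;
        }
        list.Add(handler);
    }

    // Events are placed into the queue as messages...
    public void Post(Message message)
    {
        _queue.Enqueue(message);
    }

    // ...and the dispatcher picks them up in turn and routes each one
    // directly to the handlers registered for its topic.
    public void DispatchPending()
    {
        while (_queue.Count > 0)
        {
            var message = _queue.Dequeue();
            List<IMessageHandler> list;
            if (!_handlers.TryGetValue(message.Topic, out list)) continue;
            foreach (var handler in list)
            {
                handler.Handle(message);
            }
        }
    }
}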
You can check out the EventBroker object in the M$ patterns and practices lib if you want centralised events.
Personally I think it's better to think about your architecture instead, and even though we use the EventBroker here, none of our new code uses it and we're hoping to phase it out one sunny day.
We have our own event broker implementation (open source).
Tutorial at: http://sourceforge.net/apps/mediawiki/bbvcommon/index.php?title=Event_Broker
And a performance analysis at: www.planetgeek.ch/2009/07/12/event-broker-performance/
Advantages compared to CAB:
- better logging
- extension support
- better error handling
- extendable handlers (UI, Background Thread, ...)
and some more I cannot recall right now.
Cheers,
Urs
Related
I have a class that is going to be responsible for generating events at a frequent but irregular interval, which other classes must consume and operate on. I want to use Reactive Extensions for this task.
The consumer side of this is very straightforward; I have my consumer class implementing IObserver<Payload> and all seems well. The problem comes on the producer class.
Implementing IObservable<Payload> directly (that is, writing my own implementation of IDisposable Subscribe(IObserver<Payload> observer)) is, according to the documentation, not recommended. It suggests instead composing with the Observable.Create() set of functions. Since my class will run for a long time, I've tried creating an Observable with var myObservable = Observable.Never(), and then, when I have new Payloads available, calling myObservable.Publish(payloadData). When I do this, though, I don't seem to hit the OnNext implementation in my consumer.
I think, as a work-around, I can create an event in my class, and then create the Observable using the FromEvent function, but this seems like an overly complicated approach (i.e., it seems weird that the new hotness of Observables 'requires' events to work). Is there a simple approach I'm overlooking here? What's the 'standard' way to create your own Observable sources?
Create a new Subject<Payload> and call its OnNext method to send an event. You can Subscribe to the subject with your observer.
The use of Subjects is often debated. For a thorough discussion on the use of Subjects (which links to a primer), see here - but in summary, this use case sounds valid (i.e. it probably meets the local + hot criteria).
As an aside, overloads of Subscribe accepting delegates may remove the need for you to provide an implemention of IObserver<T>.
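A minimal sketch of both suggestions, assuming the Payload type from the question has a parameterless constructor:

// requires: using System; using System.Reactive.Subjects;

// Producer side: the subject is both the thing you push into and the
// IObservable<Payload> you hand out to consumers.
var subject = new Subject<Payload>();
IObservable<Payload> payloads = subject;

// Consumer side: no IObserver<Payload> implementation needed - just pass a delegate.
using (payloads.Subscribe(p => Console.WriteLine("Got payload: " + p)))
{
    // Whenever new data is available, push it to all current subscribers.
    subject.OnNext(new Payload());
    subject.OnCompleted();   // optional: signals that no more data will arrive
}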
Observable.Never() doesn't "send" notifications; you should use Observable.Return(yourValue) instead.
If you need a guide with concrete examples, I recommend reading Intro to Rx.
Unless I come across a better way of doing it, what I've settled on for now is the use of a BlockingCollection.
var _itemsToSend = new BlockingCollection<Payload>();
IObservable<Payload> _deliverer =
    _itemsToSend.GetConsumingEnumerable().ToObservable(Scheduler.Default);
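The producer side then just adds items to the collection; completing the collection ends the observable sequence. A short sketch:

// Whenever new data arrives, queue it; the observable above pushes it to
// subscribers on the default scheduler.
_itemsToSend.Add(new Payload());

// On shutdown, completing the collection causes OnCompleted to be sent.
_itemsToSend.CompleteAdding();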
Hi, I want to pass textboxquantidadehoras.Text, datahorado.SelectedDate and correto.Desenvolvedor (from the child window) to a grid in the main page called datagridhorastotais, but I can't set the ItemsSource to "teste" from the child window... any ideas? Here is the code of the child window:
public partial class ChildWindow2 : ChildWindow, INotifyPropertyChanged
{
    public class Horas : INotifyPropertyChanged
    {
        private string quantidadehoras;
        private DateTime? datahora;
        private string desenvolvedor;

        public string Quantidadehoras
        {
            get { return quantidadehoras; }
            set
            {
                quantidadehoras = value;
                NotifyPropertyChanged("Quantidadehoras");
            }
        }

        public DateTime? Datahora
        {
            get { return datahora; }
            set
            {
                datahora = value;
                NotifyPropertyChanged("Datahora");
            }
        }

        public string Desenvolvedor
        {
            get { return desenvolvedor; }
            set
            {
                desenvolvedor = value;
                NotifyPropertyChanged("Desenvolvedor");
            }
        }

        #region
        public event PropertyChangedEventHandler PropertyChanged;

        private void NotifyPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }

        public class Horas2 : ObservableCollection<Horas>
        {
            public Horas2()
            {
            }
        }
    }
    #endregion

    public ChildWindow2()
    {
        InitializeComponent();
    }

    public class quadrodehorarios : ObservableCollection<ChildWindow2>, INotifyPropertyChanged
    {
    }

    private void OKButton_Click(object sender, RoutedEventArgs e)
    {
        Horas2 teste = new Horas2();
        Horas correto = new Horas();
        correto.Quantidadehoras = textboxquantidadehoras.Text;
        correto.Datahora = datahorado.SelectedDate;
        correto.Desenvolvedor = textboxDesenvolvedor.Text;
        this.DialogResult = true;
    }

    private void CancelButton_Click(object sender, RoutedEventArgs e)
    {
        this.DialogResult = false;
    }

    private void comboBox1_SelectionChanged(object sender, SelectionChangedEventArgs e)
    {
    }

    private void textboxqtdhoras_TextChanged(object sender, TextChangedEventArgs e)
    {
    }
}
}
I’m working up my chapter on the “Event Aggregator” pattern for my book this evening and I’m starting by collecting my thoughts on the subject. If you feel like you have to comment and give me free advice for this chapter, then I guess you should do that (please). I’ve written about the pattern before here and here. I’m also referencing some other patterns that I haven’t written much about yet. Ward Bell and John Papa have both blogged on these patterns. I’ll blog about my StoryTeller implementation tomorrow night.
In case you’re new to the pattern, the Event Aggregator object will:
Channel events from multiple objects into a single object to simplify registration for clients.
In essence, it’s a specialized form of the GoF Mediator pattern. It’s a “hub” object in your application that you can use to do decoupled “publish/subscribe” messaging between different parts of your application. I’ve used the pattern with WinForms apps, WPF with StoryTeller, and even JavaScript in web pages (going to be very important as we implement a dashboard pattern for our web app at work).
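If you haven't seen one before, here's roughly the shape of the thing. This is a deliberately minimal sketch, not the StoryTeller implementation, and the IListener<T>/SendMessage names are just illustrative:

using System.Collections.Generic;
using System.Linq;

// A listener declares which message types it cares about.
public interface IListener<T>
{
    void Handle(T message);
}

// Minimal event aggregator: no thread marshalling, no weak references,
// no filtering - those concerns are discussed below.
public class EventAggregator
{
    private readonly List<object> _listeners = new List<object>();

    public void AddListener(object listener) { _listeners.Add(listener); }
    public void RemoveListener(object listener) { _listeners.Remove(listener); }

    // Route the message to every registered listener that handles this message type.
    public virtual void SendMessage<T>(T message)
    {
        foreach (var listener in _listeners.OfType<IListener<T>>())
        {
            listener.Handle(message);
        }
    }
}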
Here’s my braindump on the pattern:
Registration. Somebody has to be responsible for registering the listeners with the event aggregator hub.
You could have the listeners themselves do the registration by taking a dependency on the event aggregator as is idiomatic with Prism. This is great because it makes it obvious when looking at a class whether or not it is registered as a listener. I’m not a fan of this approach because I think it’s awkward and adds repetitive code to each listener – and repetitive code should be stamped out wherever it pops up.
You could use another object like a “Screen Activator” (more on this pattern later) to do the registration of ViewModels/Presenters, screens, or non-UI elements as listeners. This has the advantage of removing the responsibility of bootstrapping away from the class (ViewModel/Presenter/service) that does the real work.
This is really just a variation of the previous option, but you could use a custom "Registry" class to make the explicit subscriber subscriptions in one place. I like this approach.
Use conventional registration with an IoC tool to automatically add objects to the event aggregator as appropriate. I use a marker interface plus a StructureMap “interceptor” to do this in StoryTeller. This is the “easiest” mechanically, but adds some overhead to understanding how the pieces connect. But, and there’s a big but, conventions are black magic rather than explicit code.
Discoverability/Traceability. An event aggregator is a form of indirection, and indirection almost always makes a system a little harder to understand. At some point you definitely need to understand which objects are publishing and which are receiving events. Strongly typed events are a boon here because it's relatively easy to use advanced IDE features like R#'s "Find Usages" to quickly determine the publishers and subscribers of a particular type of event. The event aggregator analogue in CAB depended on string keys and made troubleshooting harder. Event aggregator implementations in JavaScript and other dynamic languages will have the same issue.
Diagnostics. For those of us using statically typed languages, you might want to add a diagnostic report that can be generated on demand to scan the codebase and identify publishers and subscribers based on a dependency on the event aggregator. Using marker interfaces or common super types for message classes can make the diagnostics much easier.
Event Filtering. Simply put, not every subscriber cares about every instance of a type of event. For example, StoryTeller has several widgets that need to respond to every single test event (queued, executing, finished), but the individual test screens only respond to events involving their one particular test. In this case you need to worry about how events are filtered. The responsibility for the filtering can be in:
The listener itself. The listener knows what it cares about, so let it decide whether or not to continue processing the event.
Filter within the EventAggregator itself. You can register the listeners with a subject, like EventAggregator.Register(this, myTest), but that assumes a specialized event aggregator that "knows" about the subject. Another way is to make the registration with a Predicate or Func to filter within the event aggregator (a rough sketch follows this list). I'm still experimenting with this pattern a little bit inside StoryTeller.
I'm thinking about having either an IoC interceptor or a Screen Activator do the filtered event registration. Again, the point is to move the grunt work of setting up the screen out of the ViewModel/Presenter to keep the ViewModel/Presenter relatively clean.
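Here's a rough sketch of the Predicate/Func style of registration, building on the minimal aggregator sketched earlier. The signature is illustrative, not StoryTeller's actual API:

using System;

public class FilteringEventAggregator : EventAggregator
{
    // Register a listener together with a filter; the listener only sees
    // messages for which the predicate returns true.
    public void AddListener<T>(IListener<T> listener, Func<T, bool> filter)
    {
        AddListener(new FilteredListener<T>(listener, filter));
    }

    private class FilteredListener<T> : IListener<T>
    {
        private readonly IListener<T> _inner;
        private readonly Func<T, bool> _filter;

        public FilteredListener(IListener<T> inner, Func<T, bool> filter)
        {
            _inner = inner;
            _filter = filter;
        }

        public void Handle(T message)
        {
            if (_filter(message)) _inner.Handle(message);
        }
    }
}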
Thread Marshalling. It’s very handy to just let the event aggregator take care of marshalling callbacks to the screen back to the UI thread. The Prism Event Aggregator gives you fine grained control over whether or not the marshalling should take place. Maximum control might be nice when every bit of performance matters, but then again, it makes you error prone in a part of the code that is hard to test.
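As a rough illustration of what that marshalling can look like, again building on the minimal sketch above and assuming the aggregator is constructed on the UI thread:

using System.Threading;

public class MarshallingEventAggregator : EventAggregator
{
    // Captured at construction time, so construction must happen on the UI thread.
    private readonly SynchronizationContext _uiContext = SynchronizationContext.Current;

    public override void SendMessage<T>(T message)
    {
        // Marshal every callback back to the UI thread.
        _uiContext.Post(state => SendOnCurrentThread(message), null);
    }

    private void SendOnCurrentThread<T>(T message)
    {
        base.SendMessage(message);
    }
}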
Queuing. At this point I let the StoryTeller EventAggregator just process things synchronously, but running the event publishing on a background thread or queuing events up may be necessary to conserve resources.
Open/Closed Principle. Using the Event Aggregator makes it much, much easier to add new features to your system without modifying existing code, as you would have to if you were dependent upon direct communication without an event aggregator. This is an important issue for teams doing incremental delivery, or cases where multiple teams are working on the same system in parallel.
Garbage Collection: Your event aggregator has to keep a reference to all the subscribers. This can present you with some serious memory leak issues as screens are closed, but not garbage collected if the event aggregator is keeping a reference. You can beat the issue by using WeakReferences internally inside your event aggregator. The other option is to explicitly un-register listeners. The WeakReference strategy may be more reliable, but has its own issues. Explicit un-registration isn’t that bad if you are using a “Screen Conductor” to manage the screen activation lifecycle. Much more on that later…
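The WeakReference flavour looks roughly like this (a sketch only, reusing the IListener<T> interface from the earlier sketch; a real implementation also has to worry about thread safety):

using System;
using System.Collections.Generic;
using System.Linq;

public class WeakEventAggregator
{
    // Weak references let a closed screen be collected even if it never
    // explicitly un-registers itself.
    private readonly List<WeakReference> _listeners = new List<WeakReference>();

    public void AddListener(object listener)
    {
        _listeners.Add(new WeakReference(listener));
    }

    public void SendMessage<T>(T message)
    {
        // Prune entries whose targets have already been garbage collected.
        _listeners.RemoveAll(reference => !reference.IsAlive);

        foreach (var listener in _listeners
            .Select(reference => reference.Target)
            .OfType<IListener<T>>())
        {
            listener.Handle(message);
        }
    }
}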
Event Latching. Two issues here:
It might be valuable to ignore events at times. I’m specifically thinking about the case of a screen on a tab that is not active/displayed. Let’s say that this screen receives an event about financial market data being updated. The act of updating the hidden screen’s display turns out to be very expensive in terms of resources. You might want to quietly ignore or “latch” events when a screen is deactivated and hidden. That of course adds some complexity to make the hidden screen “know” to update itself when it is activated again. I think this is where the “Screen Activator” and “Screen Conductor” patterns come into play. If there’s a standard workflow that happens whenever a user activates a tab, then the “screen activator” should get an Activate() call.
In some specialized cases you may want the Event Aggregator to “latch” itself while in the midst of responding to an event. This is especially important when a widget responding to an event publishes other events. Think about “change” events getting published during the act of binding a screen to new data. In this case the event aggregator should ignore new events until the first is completely finished.
Event Ordering. It might be important that events be completely processed in the order that they arrive to the event aggregator. For example, Chad & I had an issue last year with a subscriber receiving an event, then publishing other events that were processed before the original event reached all of its subscribers. There might be a code smell in there somewhere, but event ordering may be something you need to consider.
One size does not fit all: It can often be advantageous to have multiple event aggregators within one application. I often find it useful to use an event aggregator that is scoped to a single complex "Composite View" when that screen is very complicated in its own right.
Instrumentation. The EventAggregator is effectively a message bus and has all the same advantages as a message bus. Sending all events through the event aggregator gives you a great centralized place to put instrumentation code. Less repetitive code == fewer mistakes and better productivity.
What about the Prism EventAggregator?
Many people first come into contact with the Event Aggregator pattern through the implementation in Prism. For my regular readers you may be shocked (shocked I say!) to know that I don’t particularly care for the way they implemented the pattern in Prism. I think the Prism implementation is clumsy and awkward to use. I despise that idiom of first getting the event aggregator, then retrieving an “Event” object that I’ll then listen to. Same thing with publishing. I think it’s awkward that I have two steps (get event, then publish) instead of just saying “IEventAggregator.Send().” All of that is unnecessary noise code, and the “get event, then listen/publish” adds a little bit of overhead to every single unit test that I write that involves either listening to or sending events (more mock object setup, and that would add up more than the extra production code). No, that’s not a huge deal, but noise code adds up, and every bit of ceremony/noise code I can remove due to infrastructure will make me more productive and the code easier to deal with by making it easier to read.
All I want is to go:
IEventAggregator.Send( the message ). Nothing else.
The listeners should have little or preferably NO/ZILCH/NADA coupling to the event aggregator.
I think Prism is far better than CAB, but it's still headed for some of the same problems that CAB had. The complexity and awkwardness of the EventAggregator in Prism is directly caused by trying to make the EventAggregator generalized to every possible scenario that you can think of. You will be able to create a better implementation of EventAggregator for your application by tailoring something simpler for only the things you need. At a minimum, you could at least put an application-specific wrapper around Prism's generalized APIs to make them easier to consume. I think you could stand to sacrifice some of the flexibility of the Prism EventAggregator and end up with a simpler-to-consume alternative.
Don’t take this as a specific criticism of Prism itself, because the real issue is that generalized application frameworks are an automatic compromise. The single most important reason that Prism is better than CAB is that you could easily, and I do mean easily, roll your own Event Aggregator and replace the one in Prism while still using the rest of Prism. Hooray for “Composition over Inheritance.” You couldn’t do that with CAB.
Wiki:
God willing and the river don’t rise, I will have a public Wiki up for the Presentation Patterns book by the end of the weekend. On advice from multiple people, I’ll be writing most of the first draft on the public Wiki. I’ll announce it as soon as it exists.
If I understand what you are trying to do... You could use a Mediator pattern such as the Event Aggregator to communicate an event (the data selection) from the childwindow to the parent window. Here is a StackOverflow question that covers the Event Aggregator.
I have been looking for a neat answer to this design question with no success. I could not find help in either the ".NET Framework design guidelines" or the "C# programming guidelines".
I basically have to expose a pattern as an API so the users can define and integrate their algorithms into my framework like this:
1)
// This what I provide
public abstract class AbstractDoSomething
{
    public abstract SomeThing DoSomething();
}
Users need to implement this abstract class; they have to implement the DoSomething method (which I can call from within my framework and use).
2)
I found out that this can also be achieved by using delegates:
public sealed class DoSomething
{
    public String Id;

    // the delegate the user supplies (it cannot itself be named DoSomething,
    // since a member cannot share the name of its enclosing type)
    public Func<SomeThing> Action;
}
In this case, a user can only use the DoSomething class this way:
DoSomething doSomething = new DoSomething
{
    Id = "ThisIsMyID",
    Action = () => new SomeThing()
};
Question
Which of these two options is better for exposing an easy, usable and, most importantly, understandable API?
EDIT
In case of 1: the registration is done this way (assuming MyDoSomething extends AbstractDoSomething):
MyFramework.AddDoSomething("DoSomethingIdentifier", new MyDoSomething());
In case of 2 : The registration is done like this:
MyFramework.AddDoSomething(new DoSomething());
Which of these two options is better for exposing an easy, usable and, most importantly, understandable API?
The first is more "traditional" in terms of OOP, and may be more understandable to many developers. It also can have advantages in terms of allowing the user to manage lifetimes of the objects (ie: you can let the class implement IDisposable and dispose of instances on shutdown, etc), as well as being easy to extend in future versions in a way that doesn't break backwards compatibility, since adding virtual members to the base class won't break the API. Finally, it can be simpler to use if you want to use something like MEF to compose this automatically, which can simplify/remove the process of "registration" from the user's standpoint (as they can just create the subclass, and drop it in a folder, and have it discovered/used automatically).
The second is a more functional approach, and is simpler in many ways. This allows the user to implement your API with far fewer changes to their existing code, as they just need to wrap the necessary calls in a lambda with closures instead of creating a new type.
That being said, if you're going to take the approach of using a delegate, I wouldn't even make the user create a class - just use a method like:
MyFramework.AddOperation("ThisIsMyID", () => DoFoo());
This makes it a little bit more clear, in my opinion, that you're adding an operation to the system directly. It also completely eliminates the need for another type in your public API (DoSomething), which again simplifies the API.
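To make that concrete, the framework side of the delegate-based registration could be as small as this sketch (MyFramework, AddOperation and SomeThing are the names already used above; the dictionary storage is just an assumption):

using System;
using System.Collections.Generic;

public static class MyFramework
{
    private static readonly Dictionary<string, Func<SomeThing>> _operations =
        new Dictionary<string, Func<SomeThing>>();

    // Users register a named operation as a plain delegate.
    public static void AddOperation(string id, Func<SomeThing> operation)
    {
        _operations[id] = operation;
    }

    // Called from inside the framework whenever the operation is needed.
    public static SomeThing Run(string id)
    {
        return _operations[id]();
    }
}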
I would go with the abstract class / interface if:
DoSomething is required
DoSomething will normally get really big (so DoSomething's implementation can be split into several private / protected methods)
I would go with delegates if:
DoSomething can be treated as an event (OnDoingSomething)
DoSomething is optional (so you default it to a no-op delegate)
Though personally, if it were in my hands, I would always go with the delegate model. I just love the simplicity and elegance of higher-order functions. But while implementing the model, be careful about memory leaks. Subscribed events are one of the most common causes of memory leaks in .NET: if an object subscribes to another object's events, the subscriber will not be garbage collected until it unsubscribes, because the event keeps a strong reference to it.
As is typical for most of these types of questions, I would say "it depends". :)
But I think the reason for using the abstract class versus the lambda really comes down to behavior. Usually, I think of the lambda being used as a callback type of functionality -- where you'd like something custom to happen when something else happens. I do this a lot in my client-side code:
- make a service call
- get some data back
- now invoke my callback to handle that data accordingly
You can do the same with the lambdas -- they are specific and are targeted for very specific situations.
Using the abstract class (or interface) really comes down to where your class' behavior is driven by the environment around it. What's happening, what client am I dealing with, etc.? These larger questions could suggest that you should define a set of behaviors and then allow your developers (or consumers of your API) to create their own sets of behavior based upon their requirements. Granted, you could do the same with lambdas, but I think it would be more complex to develop and also more complex to clearly communicate to your users.
So, I guess my rough rule of thumb is:
- use lambdas for specific callback or side-effect customized behaviors;
- use abstract classes or interfaces to provide a mechanism for object behavior customization (or at least the majority of the object's primary behavior).
Sorry I can't give you a clear definition, but I hope this helps. Good luck!
A few things to consider:
How many different functions/delegates would need to be overridden? If many functions, inheritance will group "sets" of overrides in an easier-to-understand way. If you have a single "registration" function, but many sub-portions can be delegated out to the implementor, this is a classic case of the "Template Method" pattern, which makes the most sense to be inherited.
How many different implementations of the same function will be needed? If just one, then inheritance is good, but if you have many implementations a delegate might save overhead.
If there are multiple implementations, will the program need to switch between them? Or will it only use a single implementation? If switching is required, delegates might be easier, but I would caution this, especially depending on the answer to #1. See the Strategy pattern.
If the override needs access to any protected members, then inheritance. If it can rely only on public members, then delegate.
Other choices would be events, and extension methods as well.
I have a class that handles send & receive over a socket between my application and the network. This class uses other classes, including a low-level socket connection class and a PDU handler class that creates the messages to send and handles received data.
Now I use an event to signal my class that the low-level connection class has data for it, and I need to send that data to the PDU handler to convert it to information the application can use and then hand the data to the application.
For future use, I am trying to make the class as generic as possible, so that on future server/client projects I will only need to change the PDU handler to take into account the new operations available and how to handle the data.
All that is well underway and now I am facing the issue of handing the data back to the app. For that, my logical approach is an event letting the app know data is ready for collection. For that I can either:
a) have one event and let the app sort out what kind of message it is through the operation code (doable)
b) have one event per operation code and have the app subscribe to all of them and thus know at the start what it is getting
Considering the idea of making things generic, and the approach stated in b, is there a way to dynamically create events based on a given delegate signature at runtime?
e.g.
Imagine you have opcodes in an enum, with values MyOperation1 and MyOperation2, and you have defined a delegate like:
public delegate void PDUEventHandler(ParamType Param, [...]);
and I want to define events called:
public event PDUEventHandler MyOperation1;
public event PDUEventHandler MyOperation2;
But if I add a new operation code I will need an event for it.
Can these events be created dynamically, or do I need to do it by hand?
If I need to do it by hand, then I guess a single event would be better, and I'd handle things on the app side.
Perhaps what you need is a callback - essentially you pass to the event handler a delegate for it to execute when the handler is done. Here's a stackoverflow thread to give you an idea.
In terms of event handlers & re-usability, perhaps you can extend EventArgs and have that delegate as a property.
EDIT:
I was thinking of a single PDUEventHandler having common code and a "hole" where custom code is run. That custom code is passed to the handler as a delegate (i.e. a method) or even a class instance. But let's change that a little...
Sounds like you need a factory. In fact you're practically describing a factory.
Conceptually let go of the idea of passing special opcodes to an EventHandler per se, or having multi-signature PDUEventHandlers.
Create a PDUHandlerFactory class. The factory returns a customized instance as a general PDUHandler class reference. Then instead of a PDUEventHandler, your caller has a PDUHandler reference that points to the factory-custom instance.
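A rough sketch of that shape; PDUHandler and PDUHandlerFactory come from the description above, while the OperationCode enum and the byte[] parameter are illustrative assumptions:

using System;
using System.Collections.Generic;

public enum OperationCode { MyOperation1, MyOperation2 }

// General handler base class the caller works against.
public abstract class PDUHandler
{
    public abstract void Handle(byte[] pduData);
}

public class PDUHandlerFactory
{
    private readonly Dictionary<OperationCode, Func<PDUHandler>> _creators =
        new Dictionary<OperationCode, Func<PDUHandler>>();

    // Adding a new operation code only needs a new registration here,
    // not a new event on the connection class.
    public void Register(OperationCode opCode, Func<PDUHandler> creator)
    {
        _creators[opCode] = creator;
    }

    public PDUHandler CreateHandler(OperationCode opCode)
    {
        return _creators[opCode]();
    }
}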
I read a question ages ago "How do C# Events work behind the scenes?" and Jon answered that all events are similar to methods...
In a purely hypothetical situation, I was wondering if someone could explain or point me to a resource that says when to use an event over a method?
Basically, If I want to have a big red/green status picture which is linked to a Bool field, and I wanted to change it based on the value of the bool, should I:
a) Have a method called Changepicture which is linked to the field and changes the state of the bool and the picture.
b) Have a get/set part to the field and stick an event in the set part.
c) Have a get/set part to the field and stick a method in the set part.
d) Other?
To gain more information about events see this post.
You have options.
If your object already implements INotifyPropertyChanged and your red/green picture is a control which supports databinding, then you can simply raise the PropertyChanged event in the bool's setter, and add a databinding on that property to your control.
If not implementing INotifyPropertyChanged, I would still recommend doing something similar, i.e. creating your own event, and having the red/green picture subscribe to it. Just straight up calling a method from the set of your property creates a tight coupling, which is generally a bad thing to do.
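For illustration, a minimal sketch of the INotifyPropertyChanged route (StatusModel and IsOk are made-up names):

using System.ComponentModel;

public class StatusModel : INotifyPropertyChanged
{
    private bool _isOk;

    public bool IsOk
    {
        get { return _isOk; }
        set
        {
            if (_isOk == value) return;
            _isOk = value;
            OnPropertyChanged("IsOk");   // the red/green picture reacts via databinding
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}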
The answer is: It depends.
If your boolean value is in the code-behind class of your visual component (e.g. a WinForm), you can call a ChangePicture method without doing anything strange. But if your boolean value is architecturally further away from the visual component, an event is the right way to handle the scenario, because you cannot easily call a method on the visual component: the class that contains the boolean value perhaps doesn't even know your visual component exists. :)
The best way to figure out what you should do is to look at classes in the .NET framework and see how they are designed.
Methods are "doers" or "actions", while you can see events as notification mechanisms. That is, if others could be interested in being notified when something happens in an object, then you can surface an event and have one or more subscribers to these events.
Since events in .NET are multi-cast, meaning multiple objects can subscribe and therefore be notified of an event happening, that may be another reason to raise an event in your objects. Events also follow the observer pattern in that the subject (your class) is really unaware of the subscribers (loosely coupled), while in order to call a method, the secondary object needs to have a reference to an instance of your class.
Note that a method in your class may itself raise an event. So let's say you have a method in your class called ChangePicture. In the method's implementation, you could eventually raise an event PictureChanged. If someone is interested in being notified of this event, they can subscribe to it. This someone is typically not the one that made the method call to change the picture.
Events are delegates. Delegates are objects. Events are actually based on MulticastDelegate (a base class in the .NET framework). These objects eventually call a method, which is the method that gets called as part of the event notification. So they are slightly "heavier" than just a method call, but that should almost never determine your design.