Should I expose IObservable<T> on my interfaces? - c#

My colleague and I have a dispute. We are writing a .NET application that processes massive amounts of data. It receives data elements, groups subsets of them into blocks according to some criterion and processes those blocks.
Let's say we have data items of type Foo arriving from some source (the network, for example) one by one. We wish to gather subsets of related objects of type Foo, construct an object of type Bar from each such subset and process objects of type Bar.
One of us suggested the following design. Its main theme is exposing IObservable<T> objects directly from the interfaces of our components.
// ********* Interfaces **********
interface IFooSource
{
// this is the event-stream of objects of type Foo
IObservable<Foo> FooArrivals { get; }
}
interface IBarSource
{
// this is the event-stream of objects of type Bar
IObservable<Bar> BarArrivals { get; }
}
// ********* Implementations *********
class FooSource : IFooSource
{
// Here we put logic that receives Foo objects from the network and publishes them to the FooArrivals event stream.
}
class FooSubsetsToBarConverter : IBarSource
{
IFooSource fooSource;
public IObservable<Bar> BarArrivals
{
get
{
// Do some fancy Rx operators on fooSource.FooArrivals, like Buffer, Window, Join and others and return IObservable<Bar>
}
}
}
// this class will subscribe to the bar source and do processing
class BarsProcessor
{
public BarsProcessor(IBarSource barSource) { /* ... */ }
public void Subscribe() { /* ... */ }
}
// ******************* Main ************************
class Program
{
public static void Main(string[] args)
{
var fooSource = FooSourceFactory.Create();
var barsProcessor = BarsProcessorFactory.Create(fooSource); // this will create FooSubsetsToBarConverter and BarsProcessor
barsProcessor.Subscribe();
fooSource.Run(); // this enters a loop of listening for Foo objects from the network and notifying about their arrival.
}
}
The other suggested a different design whose main theme is using our own publisher/subscriber interfaces, and using Rx inside the implementations only where needed.
//********** interfaces *********
interface IPublisher<T>
{
void Subscribe(ISubscriber<T> subscriber);
}
interface ISubscriber<T>
{
Action<T> Callback { get; }
}
//********** implementations *********
class FooSource : IPublisher<Foo>
{
public void Subscribe(ISubscriber<Foo> subscriber) { /* ... */ }
// here we put logic that receives Foo objects from some source (the network?) and publishes them to the registered subscribers
}
class FooSubsetsToBarConverter : ISubscriber<Foo>, IPublisher<Bar>
{
void Callback(Foo foo)
{
// here we put logic that aggregates Foo objects and publishes Bars when we have received a subset of Foos that match our criteria
// maybe we use Rx here internally.
}
public void Subscribe(ISubscriber<Bar> subscriber) { /* ... */ }
}
class BarsProcessor : ISubscriber<Bar>
{
void Callback(Bar bar)
{
// here we put code that processes Bar objects
}
}
//********** program *********
class Program
{
public static void Main(string[] args)
{
var fooSource = FooSourceFactory.Create();
var barsProcessor = BarsProcessorFactory.Create(fooSource); // this will create BarsProcessor and perform all the necessary subscriptions
fooSource.Run(); // this enters a loop of listening for Foo objects from the network and notifying about their arrival.
}
}
Which one do you think is better? Exposing IObservable<T> and making our components create new event streams from Rx operators, or defining our own publisher/subscriber interfaces and using Rx internally if needed?
Here are some things to consider about the designs:
In the first design the consumer of our interfaces has the whole power of Rx at his/her fingertips and can perform any Rx operators. One of us claims this is an advantage and the other claims that this is a drawback.
The second design allows us to use any publisher/subscriber architecture under the hood. The first design ties us to Rx.
If we wish to use the power of Rx, it requires more work in the second design because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.

Exposing IObservable<T> does not pollute the design with Rx in any way. In fact the design decision is exactly the same as choosing between exposing an old-school .NET event or rolling your own pub/sub mechanism. The only difference is that IObservable<T> is the newer concept.
Need proof? Look at F#, which is also a .NET language but younger than C#. In F#, every event derives from IObservable<T>. Honestly, I see no sense in abstracting a perfectly suitable .NET pub/sub mechanism - that is, IObservable<T> - away behind your homegrown pub/sub abstraction. Just expose IObservable<T>.
Rolling your own pub/sub abstraction feels like applying Java patterns to .NET code to me. The difference is, in .NET there has always been great framework support for the Observer pattern and there is simply no need to roll your own.

First of all, it's worth noting that IObservable<T> is part of mscorlib.dll and the System namespace, so exposing it is roughly equivalent to exposing IComparable<T> or IDisposable - which in turn amounts to picking .NET as your platform, something you seem to have done already.
Now, instead of suggesting an answer, I want to suggest a different question, and then a different mindset, and I hope (and trust) that you'll manage from there.
You're basically asking: "Do we want to promote scattered use of Rx operators all across our system?" Now obviously that's not very inviting, seeing as you probably conceptually treat Rx as a 3rd-party library.
Either way, the answer doesn't lie in the basal designs you two proposed, but in the users of those designs. I recommend breaking your design down to abstraction levels, and making sure that the use of Rx operators is scoped in just one level. When I talk about abstraction levels, I mean something similar to the OSI Model, only in the same application's code.
The most important thing, in my book, is to not take the design standpoint of "Let's create something that's going to be used and scattered all across the system, and so we need to make sure we do it just once and just right, for all the years to come". I'm more of a "Let's make this abstraction layer produce the minimal API necessary for other layers to currently achieve their goals".
About the simplicity of both of your designs, it's actually hard to judge since Foo and Bar don't tell me much about use cases, and hence readability factors (which are, by the way, different from one use case to another).

In the first design the consumer of our interfaces has the whole power of Rx at his/her fingertips and can perform any Rx operators. One of us claims this is an advantage and the other claims that this is a drawback.
I would agree with the availability of Rx as an advantage. Listing some reasons why it is a drawback could help with determining how to address them. Some advantages I see are:
As Yam and Christoph both touched on, IObservable/IObserver is in mscorlib as of .NET 4.0, so it will (hopefully) become a standard concept that everyone will immediately understand, like events or IEnumerable.
The operators of Rx. Once you need to compose, filter, or otherwise manipulate potentially multiple streams, these become very helpful. You will probably find yourself redoing this work in some form with your own interfaces.
The contract of Rx. The Rx library enforces a well-defined contract and does as much of the enforcing of that contract as it can. Even when you need to make your own operators, Observable.Create will do the work to enforce the contract (which is why implementing IObservable directly is not recommended by the Rx team); see the sketch after this list.
The Rx library has good ways to ensure you end up on the right thread when needed.
I've written my share of operators where the library doesn't cover my case.
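For illustration, here is a minimal sketch of that last point - wrapping a push-based source with Observable.Create so the contract is enforced for you. ReceiveNextFoo is a hypothetical blocking network read, not anything from the question; the snippet assumes the System.Reactive package (using System.Reactive.Linq, System.Threading, System.Threading.Tasks).
// Sketch: Observable.Create enforces the Rx grammar (OnNext* (OnError|OnCompleted)?)
// and wires up unsubscription; ReceiveNextFoo is an invented placeholder.
IObservable<Foo> fooArrivals = Observable.Create<Foo>(observer =>
{
    var cts = new CancellationTokenSource();
    Task.Run(() =>
    {
        try
        {
            while (!cts.IsCancellationRequested)
                observer.OnNext(ReceiveNextFoo()); // hypothetical network read
            observer.OnCompleted();
        }
        catch (Exception ex)
        {
            observer.OnError(ex); // the contract's error path
        }
    });
    return cts.Cancel; // invoked when the subscriber disposes its subscription
});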
The second design allows us to use any publisher/subscriber architecture under the hood. The first design ties us to Rx.
I fail to see how the choice to expose Rx has much, if any, influence on how you implement the architecture under the hood any more than using your own interfaces would. I would assert that you should not be inventing new pub/sub architectures unless absolutely necessary.
Further, the Rx library may have operators that will simplify the "under the hood" parts.
If we wish to use the power of Rx, it requires more work in the second design because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.
Yes and no. The first thing I would think if I saw the second design is: "That's almost like IObservable; let's write some extension methods to convert the interfaces." The glue code is written once, used everywhere.
The glue code is straightforward, but if you think you will use Rx, just expose IObservable and save yourself the hassle.
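To make that concrete, the glue could be a one-time adapter along these lines (a sketch against the IPublisher/ISubscriber interfaces above; AnonymousSubscriber is an invented helper, and note the comment about the missing unsubscription):
// One-time glue: adapt the custom IPublisher<T> to IObservable<T> so the
// full set of Rx operators becomes available everywhere.
static class PubSubGlue
{
    public static IObservable<T> ToObservable<T>(this IPublisher<T> publisher)
    {
        return Observable.Create<T>(observer =>
        {
            publisher.Subscribe(new AnonymousSubscriber<T>(observer.OnNext));
            return () => { /* the custom interfaces offer no unsubscription */ };
        });
    }

    private sealed class AnonymousSubscriber<T> : ISubscriber<T>
    {
        public AnonymousSubscriber(Action<T> callback) { Callback = callback; }
        public Action<T> Callback { get; private set; }
    }
}
With that in place, any IPublisher<T> can be composed with Rx operators via publisher.ToObservable(), whichever design you pick.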
Further Considerations
Basically, your alternate design differs in 3 key ways from IObservable/IObserver.
There is no way to unsubscribe. This may just be an oversight when copying to the question. If not, it's something to strongly consider adding if you go that route.
There is no defined path for errors to flow downstream (eg IObserver.OnError).
There is no way to indicate the completion of a stream (eg IObserver.OnCompleted). This is only relevant if your underlying data is intended to have a termination point.
Your alternate design also exposes the callback as an Action property rather than as a method on the interface, but I don't think the distinction is important.
The Rx library encourages a functional approach. Your FooSubsetsToBarConverter class would be better suited as an extension method to IObservable<Foo> that returns IObservable<Bar>. This reduces clutter slightly (why make a class with one property when a function will do fine) and fits better with the chain-style composition of the rest of the Rx library. You could apply the same approach to the alternate interfaces, but without the operators to help, it may be more difficult.
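For example, something along these lines - the Buffer(3) grouping criterion and the CreateBar helper are invented placeholders for your actual subset logic (requires using System.Reactive.Linq and System.Collections.Generic):
// Sketch: the converter as a composable extension method rather than a class.
public static class FooStreamExtensions
{
    public static IObservable<Bar> ToBars(this IObservable<Foo> foos)
    {
        return foos
            .Buffer(3)                          // placeholder grouping criterion
            .Select(subset => CreateBar(subset));
    }

    private static Bar CreateBar(IList<Foo> subset)
    {
        // build a Bar from a related subset of Foos (details are yours)
        return new Bar();
    }
}
A consumer can then write fooSource.FooArrivals.ToBars().Subscribe(...), in the same chained style as the built-in operators.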

Another alternative could be:
interface IObservableFooSource : IFooSource
{
IObservable<Foo> FooArrivals
{
get;
}
}
class FooSource : IObservableFooSource
{
    // A Subject<Foo> as the backing stream is one possible choice
    // (requires System.Reactive.Subjects)
    private readonly Subject<Foo> fooArrivals = new Subject<Foo>();

    // Implement the interface explicitly
    IObservable<Foo> IObservableFooSource.FooArrivals
    {
        get { return fooArrivals; }
    }
}
This way only clients that expect an IObservableFooSource will see the Rx-specific members; those that expect an IFooSource or a FooSource won't.
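A quick usage sketch of that visibility difference:
// Clients programmed against IFooSource never see the Rx member;
// only those that ask for IObservableFooSource do.
IFooSource plain = new FooSource();
// plain.FooArrivals                 // does not compile: not part of IFooSource
var rxAware = (IObservableFooSource)plain;
rxAware.FooArrivals.Subscribe(foo => Console.WriteLine(foo));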


Delegates (Lambda expressions) Vs Interfaces and abstract classes

I have been looking for a neat answer to this design question without success. I could not find help in either the ".NET Framework design guidelines" or the "C# programming guidelines".
I basically have to expose a pattern as an API so the users can define and integrate their algorithms into my framework like this:
1)
// This what I provide
public abstract class AbstractDoSomething{
public abstract SomeThing DoSomething();
}
Users need to implement this abstract class; they have to implement the DoSomething method (which I can then call from within my framework).
2)
I found out that this can also be achieved by using delegates:
public sealed class DoSomething
{
    public String Id;
    // C# forbids a member named after its enclosing type, so the delegate is exposed as Operation
    public Func<SomeThing> Operation;
}
In this case, a user can only use the DoSomething class this way:
DoSomething doSomething = new DoSomething()
{
    Id = "ThisIsMyID",
    Operation = () => new SomeThing()
};
Question
Which of these two options makes for an easy, usable and, most importantly, understandable API?
EDIT
In case of 1: The registration is done this way (assuming MyDoSomething extends AbstractDoSomething):
MyFramework.AddDoSomething("DoSomethingIdentifier", new MyDoSomething());
In case of 2 : The registration is done like this:
MyFramework.AddDoSomething(new DoSomething());
Which of these two options makes for an easy, usable and, most importantly, understandable API?
The first is more "traditional" in terms of OOP, and may be more understandable to many developers. It also can have advantages in terms of allowing the user to manage lifetimes of the objects (ie: you can let the class implement IDisposable and dispose of instances on shutdown, etc), as well as being easy to extend in future versions in a way that doesn't break backwards compatibility, since adding virtual members to the base class won't break the API. Finally, it can be simpler to use if you want to use something like MEF to compose this automatically, which can simplify/remove the process of "registration" from the user's standpoint (as they can just create the subclass, and drop it in a folder, and have it discovered/used automatically).
The second is a more functional approach, and is simpler in many ways. This allows the user to implement your API with far fewer changes to their existing code, as they just need to wrap the necessary calls in a lambda with closures instead of creating a new type.
That being said, if you're going to take the approach of using a delegate, I wouldn't even make the user create a class - just use a method like:
MyFramework.AddOperation("ThisIsMyID", () => DoFoo());
This makes it a little bit more clear, in my opinion, that you're adding an operation to the system directly. It also completely eliminates the need for another type in your public API (DoSomething), which again simplifies the API.
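For illustration, the registration side of that API might be nothing more than this (a sketch; the dictionary storage is an assumption, not anything from the question):
// Hypothetical sketch of the delegate-based registration surface.
public static class MyFramework
{
    private static readonly Dictionary<string, Func<SomeThing>> operations =
        new Dictionary<string, Func<SomeThing>>();

    public static void AddOperation(string id, Func<SomeThing> operation)
    {
        operations[id] = operation; // invoked later by the framework: operations[id]()
    }
}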
I would go with the abstract class / interface if:
DoSomething is required
DoSomething will normally get really big (so DoSomething's implementation can be split into several private/protected methods)
I would go with delegates if:
DoSomething can be treated as an event (OnDoingSomething)
DoSomething is optional (so you default it to a no-op delegate)
Though personally, if it were in my hands, I would always go with the delegate model. I just love the simplicity and elegance of higher-order functions. But while implementing the model, be careful about memory leaks. Subscribed events are one of the most common causes of memory leaks in .NET: an event subscription stores a strong reference to the subscriber inside the publisher, so subscribers are never collected until they unsubscribe, and a long-lived publisher can thus keep whole object graphs alive.
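A minimal illustration of that (publisher, subscriber and the event are invented names):
// The publisher's event stores a strong reference to the subscriber's
// handler, so the subscriber stays reachable until it unsubscribes.
publisher.SomethingHappened += subscriber.OnSomethingHappened;
// ... later, before letting go of the subscriber:
publisher.SomethingHappened -= subscriber.OnSomethingHappened;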
As is typical for most of these types of questions, I would say "it depends". :)
But I think the reason for using the abstract class versus the lambda really comes down to behavior. Usually, I think of the lambda being used as a callback type of functionality - where you'd like something custom to happen when something else happens. I do this a lot in my client-side code:
- make a service call
- get some data back
- now invoke my callback to handle that data accordingly
You can do the same with the lambdas -- they are specific and are targeted for very specific situations.
Using the abstract class (or interface) really comes down to where your class' behavior is driven by the environment around it. What's happening, what client am I dealing with, etc.? These larger questions could suggest that you should define a set of behaviors and then allow your developers (or consumers of your API) to create their own sets of behavior based upon their requirements. Granted, you could do the same with lambdas, but I think it would be more complex to develop and also more complex to clearly communicate to your users.
So, I guess my rough rule of thumb is:
- use lambdas for specific callback or side-effect customized behaviors;
- use abstract classes or interfaces to provide a mechanism for object behavior customization (or at least the majority of the object's primary behavior).
Sorry I can't give you a clear definition, but I hope this helps. Good luck!
A few things to consider:
How many different functions/delegates would need to be overridden? If many functions, inheritance will group "sets" of overrides in an easier-to-understand way. If you have a single "registration" function but many sub-portions that can be delegated out to the implementor, this is a classic case of the "Template Method" pattern, which makes the most sense to be inherited.
How many different implementations of the same function will be needed? If just one, then inheritance is good, but if you have many implementations a delegate might save overhead.
If there are multiple implementations, will the program need to switch between them, or will it only use a single implementation? If switching is required, delegates might be easier, but I would be cautious here, especially depending on the answer to #1. See the Strategy pattern.
If the override needs access to any protected members, then inheritance. If it can rely only on public members, then delegates.
Other choices would be events, and extension methods as well.

C# Object construction outside the constructor

When it comes to designing classes and "communication" between them, I always try to design them in such a way that all object construction and composition take place in the object's constructor. I don't like the idea of construction and composition happening from the outside, with other objects setting properties and calling methods on my object to initialize it. This gets especially ugly when multiple objects try to do this to your object and you never know in what order your properties/methods will be called.
Unfortunately I stumble on such situations quite often, especially now with the growing popularity of dependency injection frameworks: lots of libraries and frameworks rely on some kind of external object initialization, and quite often require not only constructor injection on our object but property injection too.
My questions are:
Is it OK to have objects that rely on some method or property being called on them, after which they can consider themselves initialized?
Is there some kind of pattern for situations where your object acts as a receiver and must support multiple interfaces that call it, and the order of these calls matters? (Something better than setting flags like ThisWasDone, ThatWasCalled.)
Is it OK to have objects that rely on some method or property being called on them, after which they can consider themselves initialized?
No. Init methods are a pain since there is no guarantee that they will get called. A simple solution is to switch to interfaces and use the factory or builder pattern to compose the implementation.
Mark Seemann has written an article about it: http://blog.ploeh.dk/2011/05/24/DesignSmellTemporalCoupling.aspx
Is there some kind of pattern for situations where your object acts as a receiver and must support multiple interfaces that call it, and the order of these calls matters? (Something better than setting flags like ThisWasDone, ThatWasCalled.)
Builder pattern.
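A skeletal example of how a builder removes the temporal coupling (all names invented): callers can set the parts in any order, but only Build() hands out an object, and it hands out a fully initialized one or throws.
// Sketch: the builder enforces "fully initialized before use".
public class ReceiverBuilder
{
    private Connection connection;
    private Credentials credentials;

    public ReceiverBuilder WithConnection(Connection value) { connection = value; return this; }
    public ReceiverBuilder WithCredentials(Credentials value) { credentials = value; return this; }

    public IReceiver Build()
    {
        if (connection == null || credentials == null)
            throw new InvalidOperationException("Receiver is not fully configured.");
        return new Receiver(connection, credentials); // the ctor receives everything at once
    }
}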
I think it is OK, but there are implications. If this is an object to be used by others, you need to ensure that an exception is thrown any time a method or property is accessed before the required initialization has been performed.
Obviously it is much more convenient and intuitive if you can take care of this in the constructor, then you don't have to implement these checks.
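For completeness, the checks in question would look something like this (a minimal sketch; the initialized flag and method names are invented):
// Sketch: guarding members of a type that relies on a separate Initialize().
public void DoWork()
{
    if (!initialized)
        throw new InvalidOperationException("Call Initialize() before using this object.");
    // ... the actual work ...
}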
I don't see anything wrong with this. It may not be as convenient, but you cannot always do all initialization in the constructor, just as you cannot always drive through a green light. These are decisions you make based on your app's requirements.
It's OK. Imagine, for example, that your object needs to read data from a TCP stream or from a file that could be missing or corrupted. Raising an exception from the constructor is bad.
It's OK. Think, for example, of a compiler for your own DSL; it can look like:
A) find all global variables and check whether their total memory allocation satisfies your device requirements
B) parse for errors
C) check for self-cycling
And so on...
Hope this helps.
Answering (1)
Why not? An engine needs the driver, who must insert the key and later power the car on. Will a car detect its current speed if the engine is stopped? Will it show the remaining oil without being powered on?
Some programming goals cannot have their actors initialized during object construction, and this isn't because it's an improper way of doing things but because it's the natural, regular and/or semantically sound way of representing their whole behavior.
Answering (2)
Decent class-usage documentation will be your best friend. As with the answer to (1), some things in this world must be done in a certain order to be done right, and that's not a problem but a requirement.
Checking objects' state using flags isn't a problem either; it's a good way of adding reliability to your object models, because the objects' own behaviors and their consumers will know whether things were done as expected or not.
First of all, Factory Method.
public class MyClass
{
    private MyClass()
    {
    }

    public static MyClass Create()
    {
        return new MyClass();
    }
}
Second of all, why do you not want another class creating an object for you? (Factory)
public class MyThingFactory
{
public IThing CreateThing(Speed speed)
{
if(speed == Speed.Fast)
{
return new FastThing();
}
return new SlowThing();
}
}
Third, why do multiple classes have side effects on new instances of your class? Don't you have declarative control over what other classes have access to your object?

C# On extending a large class in favor of readability

I have a large abstract class that handles weapons in my game. Combat cycles through a list of basic functions:
OnBeforeSwing
OnSwing
OnHit || OnMiss
What I have in mind is moving all combat damage-related calculations to a separate component that handles just that: combat damage-related calculations.
I was wondering if it would be correct to do so by making the OnHit method an extension one, or what would be the best approach to accomplish this.
Also, portions of the OnHit code are modified periodically; the hit damage formula is large because it takes into account a lot of conditions like resistances, transformation spells, item bonuses, special properties and other, similar, game elements.
This ends up as a 500-line OnHit function, which kind of horrifies me. Even with region directives it's pretty hard to go through it without getting lost in the maze or distracting yourself.
If I were to extend weapons with this function instead of just having the OnHit function, I could try to separate the different portions of the attack into other functions.
Then again, maybe I could do that by calling something like CombatSystem.HandleWeaponHit from the OnHit in the weapon class, and not use extension methods. That might be more appropriate.
Basically my question is if leaving it like this is really the best solution, or if I could (should?) move this part of the code into an extension method or a separate helper class that handles the damage model, and whether I should try and split the function into smaller "task" functions to improve readability.
I'm going to go out on a limb and suggest that your engine may not be abstracted enough. Mind you, I'm suggesting this without knowing anything else about your system aside from what you've told me in the OP.
In similar systems that I've designed, there were Actions and Effects. These were base classes. Each specific action (a machine-gun attack, a specific spell, and so on) was a class derived from Action. Actions had a list of one or more specific effects that could be applied to Targets. This was achieved using dependency injection.
The combat engine didn't do all the math itself. Essentially, it asked the Target to calculate its defense rating, then cycled through all the active Actions and asked them to determine if any of its Effects applied to the Target. If they applied, it asked the Action to apply its relevant Effects to the Target.
Thus, the combat engine is small, and each Effect is very small, and easy to maintain.
If your system is one huge monolithic structure, you might consider a similar architecture.
OnHit should be an event handler, for starters. Any object that is hit should raise a Hit event, and then you can have one or more event handlers associated with that event.
If you cannot split up your current OnHit function into multiple event handlers, you can split it up into a single event handler but refactor it into multiple smaller methods that each perform a specific test or a specific calculation. It will make your code much more readable and maintainable.
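In outline (the types here are illustrative, not your engine's; substitute your own combatant type for object):
// Sketch: Hit becomes an event; resistances, item bonuses, spells etc.
// each live in their own small handler instead of one 500-line method.
public class HitEventArgs : EventArgs
{
    public object Attacker { get; set; }
    public object Defender { get; set; }
}

public abstract class BaseWeapon
{
    public event EventHandler<HitEventArgs> Hit;

    protected virtual void OnHit(HitEventArgs e)
    {
        var handlers = Hit;
        if (handlers != null)
            handlers(this, e);
    }
}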
IMHO Mike Hofer gives the leads.
The real point is not whether it's a matter of an extension method or not. The real point is that a single (extension or regular) method is inconceivable for such a complicated bunch of calculations.
Before thinking about the best implementation, you obviously need to rethink the whole thing to identify the best possible dispatch of responsibilities on objects. Each piece of elemental calculation must be done by the object it applies to. Always keep in mind the GRASP design patterns, especially Information Expert, Low Coupling and High Cohesion.
In general, each method in your project should always be a few lines of code long, no more. For each piece of calculation, think of which are all the classes on which this calculation is applicable. Then make this calculation a method of the common base class of them.
If there is no common base class, create a new interface, and make all these classes implement this interface. The interface might have methods or not : it can be used as a simple marker to identify the mentioned classes and make them have something in common.
Then you can build an elemental extension method like in this fake example:
public interface IExploding { int ExplosionRadius { get; } }
public class Grenade : IExploding { public int ExplosionRadius { get { return 30; } } /* ... */ }
public class StinkBomb : IExploding { public int ExplosionRadius { get { return 10; } } /* ... */ }
public static class Extensions
{
public static int Damages(this IExploding explosingObject)
{
return explosingObject.ExplosionRadius*100;
}
}
This sample is totally cheesy but simply aims to give leads for re-engineering your system in a more abstracted and maintainable way.
Hope this helps!

Thread Safe Class Library Design

I'm working on a class library and have opted for a route with my design to make implementation and thread safety slightly easier, however I'm wondering if there might be a better approach.
A brief background: I have a multi-threaded heuristic algorithm within a class library that, once set up with a scenario, should attempt to solve it. I obviously want it to be thread safe, so that if someone changes anything while it is solving, that doesn't cause crashes or errors.
The current approach I've got is that for each class A, I create a number of InternalA instances, one per A instance. InternalA has many of the important properties of the A class, but is internal and inaccessible outside the library.
The downside of this, is that if I wish to extend the decision making logic (or actually let someone do this outside the library) then it means I need to change the code within the InternalA (or provide some sort of delegate function).
Does this sound like the right approach?
It's hard to really say from just that - but I can say that if you can make everything immutable, your life will be a lot easier. Look at how functional languages approach immutable data structures and collections. The less shared mutable data you have, the simpler the threading will be.
Why not?
Create a generic class that accepts a locking policy with two members (e.g. Lock/Unlock), so you could provide:
a thread-safe implementation (the implementation can use Monitor.Enter/Exit inside),
a system-wide safe implementation (using a Mutex),
an unsafe but fast implementation (using an empty implementation).
A sketch of the idea follows.
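Something along these lines (all names invented; a Mutex-based strategy would follow the same shape; requires using System.Threading):
// Sketch: the locking policy is a type parameter, so callers pick
// thread-safe, cross-process-safe, or no locking at all.
public interface ILockStrategy
{
    void Lock();
    void Unlock();
}

public sealed class MonitorLock : ILockStrategy        // thread-safe
{
    private readonly object gate = new object();
    public void Lock() { Monitor.Enter(gate); }
    public void Unlock() { Monitor.Exit(gate); }
}

public sealed class NoLock : ILockStrategy             // unsafe but fast
{
    public void Lock() { }
    public void Unlock() { }
}

public class Solver<TLock> where TLock : ILockStrategy, new()
{
    private readonly TLock locking = new TLock();

    public void Step()
    {
        locking.Lock();
        try { /* touch shared state */ }
        finally { locking.Unlock(); }
    }
}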
Another way I have had some success with is using interfaces to achieve functional separation. The cost of this approach is that you end up with some fields 'repeated', because each interface requires total separation from the other's fields.
In my case I had two threads that need to pass over a set of data that is potentially large and needs as little garbage collection as possible. I.e. I only want to pass change information from the first stage to the second, and then have the first process the next work unit.
This was achieved by the use of change buffers to pass changes from one interface to the next.
This allows one thread to work away at one interface, make all its changes, and then publish a struct containing the changes that the other interface (thread) needs to apply prior to its work.
By doing this you have a double buffer... (thread 1 produces a change report whilst thread 2 consumes the last report). If you add more interfaces (and threads) it appears like there are pulses of work moving through the threads.
This was based on my research, and I have no doubt that there are better methods available now.
My aim when coming up with this was to avoid the need for locks in the vast majority of code by designing out race conditions. The other major consideration is performance in garbage collection - which may not be an issue for you.
This way is all good until you need complex interactions between threads... then you find that you start forcing the layout of your buffer structures for reuse to get around inheritance, which in turn has an upkeep overhead.
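A stripped-down sketch of that hand-off, using BlockingCollection as a stand-in for the change buffers (ChangeSet and the work methods are invented; requires using System.Collections.Concurrent):
// Sketch: thread 1 publishes only a change report; thread 2 applies it
// before doing its own work unit.
public sealed class ChangeSet
{
    public int[] ChangedIds { get; set; } // whatever delta the next stage needs
}

public class TwoStagePipeline
{
    private readonly BlockingCollection<ChangeSet> buffer =
        new BlockingCollection<ChangeSet>(boundedCapacity: 1);

    public void StageOneLoop()
    {
        while (true)
            buffer.Add(ProduceChangeReport()); // blocks while stage two is behind
    }

    public void StageTwoLoop()
    {
        foreach (var delta in buffer.GetConsumingEnumerable())
            ApplyChanges(delta); // apply the delta, then do this stage's own work
    }

    private ChangeSet ProduceChangeReport() { return new ChangeSet { ChangedIds = new int[0] }; }
    private void ApplyChanges(ChangeSet delta) { /* ... */ }
}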
A little more information on the problem to help...
The heuristic I'm using is to solve TSP-like problems. Right at the start of each calculation, all the aspects that form the problem (salesman/places to visit) are cloned so they aren't affected across threads.
This means each thread can change data (such as stock left on a salesman, etc.), as there are a number of values that change during the calculation as things progress. What I'd quite like to do is allow checks such as HasSufficientStock() (to take a simple example) to be overridden by a developer using the library.
Unfortunately, at present, to add further protection across threads and make some simpler/lightweight classes, I convert them to these internal classes, and those are the things that are actually used and cloned.
For example
class A
{
    public double Stock { get; }

    // Processing and cloning actually work on these InternalA instances
    internal InternalA ConvertToInternal() { return new InternalA { Stock = Stock }; }
}

internal class InternalA : ICloneable
{
    public double Stock { get; set; }

    public bool HasSufficientStock() { return Stock > 0; /* illustrative check */ }

    public object Clone() { return MemberwiseClone(); }
}

C# Best practice: Centralised event controller or not

I have an app which consists of several different assemblies, one of which holds the various interfaces which the classes obey, and by which the classes communicate across assembly boundaries. There are several classes firing events, and several which are interested in these events.
My question is as follows: is it good practice to implement a central EventConsolidator of some kind? This would be highly coupled, as it would need to know every class (or at least interface) throwing an event, and every consumer of an event would need to have a reference to EventConsolidator in order to subscribe.
Currently I have the situation where class A knows class B (but not C), class B knows class C, etc. Then if C fires an event B needs to pick it up and fire its own event in order for A to respond. These kinds of chains can get quite long, and it may be that B is only interested in the event in order to pass it along. I don't want A to know about C though, as that would break encapsulation.
What is good practice in this situation? Centralise the events, or grin and bear it and define events in each intermediate class? Or what are the criteria by which to make the decision? Thanks!
Edit: Here is another question asking essentially the same thing.
You could put the event itself in an interface, so that A didn't need to know about C directly, but only that it has the relevant event. However, perhaps you mean that the instance of A doesn't have sight of an instance of C...
I would try to steer clear of a centralised event system. It's likely to make testing harder, and it introduces tight coupling, as you said.
One pattern which is worth knowing about is making event proxying simple. If B only exposes an event to proxy it to C (where c below is B's reference to the C instance), you can do:
public event FooHandler Foo
{
add
{
c.Foo += value;
}
remove
{
c.Foo -= value;
}
}
That way it's proxying the subscription/unsubscription rather than the act of raising the event. This has an impact on GC eligibility, of course - which may be beneficial or not, depending on the situation. Worth thinking about though.
What you could try is using the event brokering of either NInject or the Unity Application Block.
This allows you to, for example:
[Publish("foo://happened")]
public event EventHandler<FooArgs> FooHappened;
[Subscribe("foo://happened")]
public void Foo_Happened(object sender, FooArgs args)
{ }
If both objects are created through the container the events will be hooked up automatically.
I'd probably try to massage the domain so that each class can directly depend on the appropriate event source. What I mean is asking the question: why doesn't A know about C? Is there perhaps a D waiting to emerge?
As an alternative approach you could consider an event broker architecture. It means observers don't know directly about the source. Here's an interesting video.
This would be highly coupled, as it would need to know every class
I think you answered your own question if you consider that coupling is bad! Passing events through a chain of potential handlers is a fairly common pattern in many environments; It may not be the most efficient approach, but it avoids the complexity that your suggested approach would involve.
Another approach you could take is to use a message dispatcher. This involves using a common message format (or at least a common message header format) to represent events, and then placing those messages into a queue. A dispatcher then picks up each of those events in turn (or based on some prioritisation), and routes them directly to the required handler. Each handler must be registered with the dispatcher at startup.
A message in this case could simply be a class with a few specific fields at the start. The specific message could simply be a derivative, or you could pass your message-specific data as an 'object' parameter along with the message header.
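In skeletal form (all names invented):
// Sketch of a queue-plus-dispatcher: handlers register per topic at startup,
// messages carry a common header and are routed by it.
public class Message
{
    public string Topic { get; set; }   // the common header
    public object Payload { get; set; } // message-specific data
}

public class Dispatcher
{
    private readonly Dictionary<string, List<Action<Message>>> handlers =
        new Dictionary<string, List<Action<Message>>>();
    private readonly Queue<Message> queue = new Queue<Message>();

    public void Register(string topic, Action<Message> handler)
    {
        List<Action<Message>> list;
        if (!handlers.TryGetValue(topic, out list))
            handlers[topic] = list = new List<Action<Message>>();
        list.Add(handler);
    }

    public void Post(Message message) { queue.Enqueue(message); }

    public void DispatchPending()
    {
        while (queue.Count > 0)
        {
            var message = queue.Dequeue();
            List<Action<Message>> list;
            if (handlers.TryGetValue(message.Topic, out list))
                foreach (var handler in list)
                    handler(message);
        }
    }
}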
You can check out the EventBroker object in the M$ patterns & practices library if you want centralised events.
Personally I think its better to think about your architecture instead and even though we use the EventBroker here, none of our new code uses it and we're hoping to phase it out one sunny day.
We have our own event broker implementation (open source).
Tutorial at: http://sourceforge.net/apps/mediawiki/bbvcommon/index.php?title=Event_Broker
And a performance analysis at: www.planetgeek.ch/2009/07/12/event-broker-performance/
Advantages compared to CAB:
- better logging
- extension support
- better error handling
- extendable handlers (UI, Background Thread, ...)
and some more I cannot recall right now.
Cheers,
Urs
