I have a class that is going to be responsible for generating events at a frequent but irregular interval, which other classes must consume and operate on. I want to use Reactive Extensions for this task.
The consumer side of this is very straightforward; I have my consumer class implementing IObserver<Payload> and all seems well. The problem comes on the producer class.
Implementing IObservable<Payload> directly (that is, writing my own implementation of IDisposable Subscribe(IObserver<Payload> observer)) is, according to the documentation, not recommended. It suggests composing with the Observable.Create() set of functions instead. Since my class will run for a long time, I've tried creating an Observable with var myObservable = Observable.Never(), and then, when I have new Payloads available, calling myObservable.Publish(payloadData). When I do this, though, I don't seem to hit the OnNext implementation in my consumer.
I think, as a work-around, I can create an event in my class, and then create the Observable using the FromEvent function, but this seems like an overly complicated approach (i.e., it seems weird that the new hotness of Observables 'requires' events to work). Is there a simple approach I'm overlooking here? What's the 'standard' way to create your own Observable sources?
Create a new Subject<Payload> and call its OnNext method to send an event. You can Subscribe to the subject with your observer.
The use of Subjects is often debated. For a thorough discussion on the use of Subjects (which links to a primer), see here - but in summary, this use case sounds valid (i.e. it probably meets the local + hot criteria).
As an aside, the overloads of Subscribe that accept delegates may remove the need for you to provide an implementation of IObserver<T>.
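A minimal sketch of that approach (Payload, PayloadProducer and the Publish method are illustrative names, not from your code):

using System;
using System.Reactive.Subjects;

public class Payload
{
    public string Value { get; set; }
}

public class PayloadProducer
{
    // Subject<T> is both an IObserver<T> (for pushing values in) and an IObservable<T> (for subscribing).
    private readonly Subject<Payload> _subject = new Subject<Payload>();

    // Expose only the observable side to consumers.
    public IObservable<Payload> Payloads => _subject;

    // Call this whenever a new payload is available.
    public void Publish(Payload payload) => _subject.OnNext(payload);
}

// Consumers can subscribe with a delegate instead of implementing IObserver<Payload>:
// producer.Payloads.Subscribe(p => Console.WriteLine(p.Value));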
Observable.Never() doesn't "send" notifications; you should use Observable.Return(yourValue) instead.
If you need a guide with concrete examples, I recommend reading Intro to Rx.
Unless I come across a better way of doing it, what I've settled on for now is the use of a BlockingCollection.
var _itemsToSend = new BlockingCollection<Payload>();
IObservable<Payload> _deliverer =
_itemsToSend.GetConsumingEnumerable().ToObservable(Scheduler.Default);
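The producer side then just adds items to the collection and completes it on shutdown (using the field names from the snippet above); when the collection completes, the observable sequence completes as well:

// Whenever a new payload is available:
_itemsToSend.Add(payloadData);

// On shutdown, so that subscribers receive OnCompleted:
_itemsToSend.CompleteAdding();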
I'm starting with reactive extensions and I'm having a problem where I'm not sure if I'm on the right track.
I'm using an Observable to create and consume a listener for an event broker in .NET. I created an "IncomingMessage" class which contains the messages from the event broker as they come in, and I start creating the listener in the Observable.Create function. That works very well.
Now I also want to get status notifications from the listener, such as "Connecting...", "Connected." and "Closing...", which are not an IncomingMessage, so I created a "BrokerEvent" class with a "Message" property and a common interface for "IncomingMessage" and "BrokerEvent". Now I send both via observer.OnNext(...) as they occur. That also works well so far.
However, on the subscriber side I'm now having a bit of a problem filtering the events I need.
I do:
GetObservable().Where(x => x is BrokerEvent ||
(x is IncomingMessage msg &&
msg.User == "test")).Subscribe(...)
That works, but I then need to figure out the type in Subscribe again, which seems a bit ugly.
After trying a bit I ended up doing this now...
var observable = GetObservable().Publish();
observable.OfType<BrokerEvent>().Subscribe(...);
observable.OfType<IncomingMessage>().Where(x=>x.User == "test").Subscribe(...);
var disposable = observable.Connect();
This also seems to work, but as I'm new to Reactive Extensions I'm not quite sure whether it has any unwanted side effects. I'm also not sure if it's the "right" way to include status messages in the stream at all. Is there a better way to handle this (possibly without using Publish), or is that the way to go?
And to stop listening, is it enough to just dispose the disposable I get from .Connect(), or do I also have to dispose both disposables I get from .Subscribe()?
Thanks!
I'm assuming GetObservable returns IObservable<object>, which isn't ideal.
The best way to do the code you have above is as follows:
var observable = GetObservable().Publish();
var sub1 = observable.OfType<BrokerEvent>().Subscribe(_ => { });
var sub2 = observable.OfType<IncomingMessage>().Where(x => x.User == "test").Subscribe(_ => { });
var connectSub = observable.Connect();
var disposable = new CompositeDisposable(sub1, sub2, connectSub);
The composite disposable will then dispose of all the children when it is disposed.
If the two message streams have nothing to do with each other, that approach will work. However, since you basically have a control-message stream, and a data-message stream, I'm assuming the messages from one may be important when handling the messages in the other. In this case you may want to treat it like one stream.
In that case, you may want to create a Discriminated-Union type for your observable, which may make handling easier.
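As a rough sketch of what such a union type could look like (BrokerNotification, Status and Data are made-up names; BrokerEvent and IncomingMessage are the classes from the question):

// One wrapper type that every notification flows through.
public abstract class BrokerNotification
{
    public sealed class Status : BrokerNotification
    {
        public Status(BrokerEvent statusEvent) { Event = statusEvent; }
        public BrokerEvent Event { get; private set; }
    }

    public sealed class Data : BrokerNotification
    {
        public Data(IncomingMessage message) { Message = message; }
        public IncomingMessage Message { get; private set; }
    }
}

// The subscriber then handles both cases in one place, e.g.:
// GetObservable().Subscribe(n =>
// {
//     var status = n as BrokerNotification.Status;
//     if (status != null) { /* handle "Connecting...", "Connected.", ... */ return; }
//     var data = n as BrokerNotification.Data;
//     if (data != null && data.Message.User == "test") { /* handle the message */ }
// });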
What you can do is create an 'event handler' class with three overloads of a 'process message' method: one for object (the default) that does nothing, one for the status type, and one for the incoming message. In .Subscribe, use this syntax:
m=>processor.Process((dynamic)m)
This will call the correct implementation or do nothing, as required.
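A rough sketch of that handler class (MessageProcessor and the Process overloads are hypothetical names; BrokerEvent and IncomingMessage come from the question):

public class MessageProcessor
{
    // Fallback for anything we don't care about.
    public void Process(object message) { }

    public void Process(BrokerEvent statusEvent)
    {
        // handle "Connecting...", "Connected.", etc.
    }

    public void Process(IncomingMessage message)
    {
        // handle broker messages
    }
}

// var processor = new MessageProcessor();
// GetObservable().Subscribe(m => processor.Process((dynamic)m));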
If you want to filter before calling Process, you can introduce a common base class (ProcessableMessage or some such), or you can call .Merge on your OfType streams, or you can take the same approach as above by having a dynamic MessageFilter class.
I am developing a C# WinForms application that contains a web browser control. The application contains a "scripting bridge" class that allows Javascript code to call into the web browser control (I need to know when certain functions are called in the JS code in order to perform some action in the WinForms application). Most of these operations are asynchronous because when I launch a request from the WinForms application, it will typically perform an ajax request within the JS code (not the C# code). Since this is an asynchronous operation, I was trying to come up with a better/easier way to manage the subscriptions/timeouts/error handling, etc. for these asynchronous events. I came across Reactive Extensions and decided to try it out.
I'm trying to determine if I am doing this correctly or not. I'm trying to wrap my head around Reactive Extensions. It's difficult to find simpler examples on the net for a lot of the Observable extension methods. Here is what I am doing right now:
public void SetupObservable()
{
IConnectableObservable<string> javascriptResponseObservable = Observable.Create<string>(
(IObserver<string> observer) =>
{
observer.OnNext("Testing");
observer.OnCompleted();
return Disposable.Create(() => Console.WriteLine("Observer has unsubscribed"));
})
.Timeout(DateTimeOffset.UtcNow.AddSeconds(5))
.Finally(() => Console.WriteLine("Observable sequence completed"))
.Publish();
IObserver<string> testObserver = Observer.Create<string>(
(value) => Console.WriteLine(value),
(e) => Console.WriteLine("Exception occurred: " + e.Message),
() => Console.WriteLine("Completed")
);
IDisposable unsubscriber = javascriptResponseObservable.Subscribe(testObserver);
}
// The following will be executed later (once the ajax request is completed)...
// Fire the event and notify all observers. If it took too long to get to this point, the sequence will time out with an exception.
public void OnSomeJavascriptFunctionCall()
{
// Somehow get the javascriptResponseObservable object...
javascriptResponseObservable.Connect();
}
I feel like I am doing this the wrong way or that there is a better way to accomplish this. For example, how do you retrieve the IObservable that was created earlier so that you can call more methods on it? Would I have to persist it in the class or somewhere else? It seems like a lot of the examples don't do this, so it seems like I am doing something fundamentally wrong. Also, if several observers are subscribing to the IObservable from different classes, etc., again, how do you keep track of the IObservable? It seems like it needs to be persisted somewhere after it is created. Is there an Observable.GetExistingObservable() method of some sort that I am missing?
I feel like I am doing this the wrong way or that there is a better way to accomplish this.
Wrong is always a point of view, but I would argue that yes, there is a better way to solve what you are doing.
I assume that your JavaScript bridge has some sort of way of raising events, and this is how it is able to call you back? If so, then you will want to leverage that callback and bridge it to Rx using either Observable.Create, Observable.FromEvent*, or another Rx factory method.
That would be your first step; then you would need to pass your "commands" to your JS layer. This is where you would need to remember to subscribe to your callback sequence before you issue the command, to mitigate any race conditions.
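As a hedged sketch of what that could look like, assuming the bridge exposes a plain .NET event named ResponseReceived whose args carry the JSON string (all names here are illustrative):

// using System;
// using System.Reactive.Linq;

// Bridge the callback event into Rx once:
IObservable<string> responses = Observable
    .FromEventPattern<ResponseEventArgs>(
        h => scriptingBridge.ResponseReceived += h,
        h => scriptingBridge.ResponseReceived -= h)
    .Select(e => e.EventArgs.Json);

// Per command: subscribe (with a timeout) *before* issuing the command to the JS layer.
IDisposable subscription = responses
    .Take(1)
    .Timeout(TimeSpan.FromSeconds(5))
    .Subscribe(
        json => Console.WriteLine("Got response: " + json),
        ex => Console.WriteLine("Timed out or failed: " + ex.Message));

scriptingBridge.InvokeJavascriptCommand();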
It is difficult to help any more, as you only show Rx code that seems to serve no purpose except trying to understand Rx, and no code that shows what you are trying to achieve in the C#-to-JS bridge. Please provide a "Minimal, Complete, and Verifiable example" - https://stackoverflow.com/help/mcve
I can register a single registration item with an instanceCreator context (aka Func<T>), but there doesn't seem to be the same allowance with RegisterAll.
TL;DR - Find the accepted answer and look at update 2 (or skip down to Update 3 on this question)
This is what I want to do:
container.RegisterAll<IFileWatcher>(
new List<Func<IFileWatcher>>
{
() => new FileWatcher(
#".\Triggers\TriggerWatch\SomeTrigger.txt",
container.GetInstance<IFileSystem>()),
() => new FileWatcher(
#".\Triggers\TriggerWatch\SomeOtherTrigger.txt",
container.GetInstance<IFileSystem>())
});
I tried adding an extension based on a previous Stack Overflow answer for multiple registrations, but it seems that the last one in wins:
public static class SimpleInjectorExtensions
{
public static void RegisterAll<TService>(this Container container,
IEnumerable<Func<TService>> instanceCreators)
where TService : class
{
foreach (var instanceCreator in instanceCreators)
{
container.RegisterSingle(typeof(TService),instanceCreator);
}
container.RegisterAll<TService>(typeof (TService));
}
}
I'm also curious why there is a need for RegisterAll to exist in the first place. This is the first dependency injection container, out of the five I've used, that makes the distinction. The others just allow you to register multiple types against a service and then load them all up by calling Resolve<IEnumerable<TService>> (Autofac) or GetAllInstances<TService> (both Simple Injector and Ninject).
Update
For more clarity: I'm trying to build a list of items that I can pass to a composite that handles each of the individual items. It suffers from the same problem as the above, since it falls into a group of tasks that all get registered to be run based on schedules, triggers, and events (Rx). Setting the RegisterAll aside for a moment and ripping out some of the other stuff:
container.Register<ITask>(() => new FileWatchTask(
container.GetInstance<IFileSystem>(),
container.GetInstance<IMessageSubscriptionManagerService>(),
configuration,
container.GetAllInstances<IFileWatcher>()));
You can see that I am grabbing all instances of the previously registered file watchers.
What I need to know is a simple workaround for this issue and when it will be implemented (or if not, why it won't be). I will also accept that this is not possible given the current limitations of Simple Injector's design. What I will not accept is that I need to change and adapt my architecture to meet the limitations of a tool.
Update 2
Let's talk about OCP (the Open/Closed Principle, aka the O in SOLID) and the impression I'm getting of how Simple Injector breaks this particular principle in some cases.
The Open/Closed Principle is just that: open for extension, but closed for modification. What this means is that you can alter the behavior of an entity without altering its source code.
Now let's shift to an example that is relevant here:
var tasks = container.GetAllInstances<ITask>();
foreach (var task in tasks.OrEmptyListIfNull())
{
//registers the task with the scheduler, Rx Event Messaging, or another trigger of some sort
task.Initialize();
}
Notice how clean that is. To be able to do this though, I need to be able to register all instances of an interface:
container.RegisterAll<ITask>(
new List<Func<ITask>>{
() => new FileWatchTask(container.GetInstance<IFileSystem>(),container.GetInstance<IMessageSubscriptionManagerService>(),configuration,container.GetAllInstances<IFileWatcher>()),
() => new DefaultFtpTask(container.GetInstance<IFtpClient>(),container.GetInstance<IFileSystem>()),
() => new DefaultImportFilesTask(container.GetInstance<IFileSystem>())
}
);
Right? So the lesson here is that this is good and meets OCP. I can change the behavior of the task runner simply by adding or removing items that are registered. Open for extension, closed for modification.
Now let's focus on trying to do it the way suggested in the answer below (prior to the second update, which finally answers this question), which the author gives the impression is the better design.
Let's start with what the maintainer's answer presents as good design for registration. The viewpoint I'm getting is that I have to make a sacrifice in my code to somehow make the ITask more flexible to work with Simple Injector:
container.Register<ITask<SomeGeneric1>>(() => new FileWatchTask(container.GetInstance<IFileSystem>(),container.GetInstance<IMessageSubscriptionManagerService>(),configuration,container.GetAllInstances<IFileWatcher>()));
container.Register<ITask<SomeGeneric2>>(() => new DefaultFtpTask(container.GetInstance<IFtpClient>(),container.GetInstance<IFileSystem>()));
container.Register<ITask<SomeGeneric3>>(() => new DefaultImportFilesTask(container.GetInstance<IFileSystem>()));
Now let's see how that makes our design change:
var task1 = container.GetInstance<ITask<SomeGeneric1>>();
task1.Initialize();
var task2 = container.GetInstance<ITask<SomeGeneric2>>();
task2.Initialize();
var task3 = container.GetInstance<ITask<SomeGeneric3>>();
task3.Initialize();
Ouch. You can see how every time I add or remove an item from the container registration, I now also need to update another section of code. That's two places of modification for one change, which violates multiple design principles.
You might ask why I am asking the container for this at all. Well, this is in the startup area, but let's explore what happens if it weren't.
So I will use constructor injection to illustrate why this is bad. First, let's see my example with constructor injection.
public class SomeClass {
public SomeClass(IEnumerable<ITask> tasks){}
}
Nice and clean.
Now, let's switch back to my understanding of the accepted answer's view (again prior to update 2):
public class SomeClass {
public SomeClass(ITask<Generic1> task1,
ITask<Generic2> task2,
ITask<Generic3> task3
) {}
}
Ouch. Every time, I have to edit multiple areas of code, and let's not even get started on how poor this design is.
What's the lesson here? I'm not the smartest guy in the world. I maintain (or try to maintain :)) multiple frameworks and I don't try to pretend I know more than or better than others. My sense of design might be skewed, or I might be limiting others in some unknown way that I have not even thought of yet. I'm sure the author means well when he gives advice on design, but in some cases it may come across as annoying (and a little condescending), especially for those of us who know what we are doing.
Update 3
So the question was answered in Update 2 from the maintainer. I was trying to use RegisterAll because it hadn't occurred to me that I could just use Register<IEnumerable<T>> (and unfortunately the documentation didn't point this out). It seems totally obvious now, but when people are making the jump from other IoC frameworks, they are carrying some baggage with them and may miss this awesome simplification in design! I missed it, with 4 other DI containers under my belt. Hopefully he adds it to the documentation or calls it out a little better.
From your first example (using the List<Func<IFileWatcher>>) I understand that you want to register a collection of transient file watchers. In other words, every time you iterate the list, a new file watcher instance should be created. This is of course very different from registering a list with two (singleton) file watchers (where the same instances are always returned). There's however some ambiguity in your question, since in the extension method you seem to register them as singletons. For the rest of my answer, I'll assume you want transient behavior.
The common use case for which RegisterAll was created is to register a list of implementations for a common interface. For instance, an application that has multiple IEventHandler<CustomerMoved> implementations that all need to be triggered when a CustomerMoved event is raised. In that case you supply the RegisterAll method with a list of System.Type instances, and the container is completely in control of wiring those implementations for you. Since the container is in control of the creation, the collection is called 'container-controlled'.
That RegisterAll overload, however, merely forwards the creation back to the container, which means that by default the list results in the creation of transient instances (since unregistered concrete types are resolved as transient). This seems awkward, but it allows you to register a list with elements of different lifestyles, since you can register each item explicitly with the lifestyle of your choice. It also allows you to supply RegisterAll with abstractions (for instance typeof(IService)), and that will work as well, since the request is forwarded back to the container.
Your use case however is different. You want to register a list of elements of the exact same type, but each with a different configuration value. And to make things more difficult, you seem to want to register them as transients instead of singletons. By not passing RegisterAll a list of types but an IEnumerable<TService>, the container does not create and auto-wire those types; we call this a 'container-uncontrolled' collection.
Long story short: how do we register this? There are multiple ways to do this, but I personally like this approach:
string[] triggers = new[]
{
#".\Triggers\TriggerWatch\SomeTrigger.txt",
#".\Triggers\TriggerWatch\SomeOtherTrigger.txt"
};
container.RegisterAll<IFileWatcher>(
from trigger in triggers
select new FileWatcher(trigger,
container.GetInstance<IFileSystem>())
);
Here we register a LINQ query (which is just an IEnumerable<T>) using the RegisterAll method. Every time someone resolves an IEnumerable<IFileWatcher>, that same query is returned, but since the select clause of that query contains a new FileWatcher, new instances are returned on every iteration. This effect can be seen using the following test:
var watchers = container.GetAllInstances<IFileWatcher>();
var first1 = watchers.First();
var first2 = watchers.First();
Assert.AreNotEqual(first1, first2, "Should be different instances");
Assert.AreEqual(first1.Trigger, first2.Trigger);
As this test shows, we resolve the collection once, but every time we iterate it (.First() iterates the collection), a new instance is created, but both instances have the same #".\Triggers\TriggerWatch\SomeTrigger.txt" value.
So as you can see, there is no limitation that prevents you from doing this effectively. However, you might need to think differently.
I'm also curious why there is a need for RegisterAll to exist in the
first place.
This is a very explicit design decision. You are right that most other containers just allow you to do a bunch of registrations of the same type, and when asked for a collection, all registrations are returned. The problem with this is that it is easy to accidentally register a type again, and this is something I wanted to prevent.
Furthermore, all containers have different behavior regarding which registration is returned when requesting a single instance instead of the collection. Some return the first registration, others return the last. I wanted to prevent this ambiguity as well.
Last but not least, please note that registering collections of items of the same type should usually be the exception. In my experience, 90% of the time when developers want to register multiple types for the same abstraction, there is some ambiguity in their design. By making the registration of collections explicit, I hoped to make this stand out.
What I will not accept is that I need to change and adapt my
architecture to meet the limitations of some tool.
I do agree with this. Your architecture should be leading, not the tools. You should choose your tools accordingly.
But please do note that Simple Injector has many limitations and most of those limitations are chosen deliberately to stimulate users to have a clean design. For instance, every time you violate one of the SOLID principles in your code, you will have problems. You will have problems keeping your code flexible, your tests readable, and your Composition Root maintainable. This in fact holds for all DI containers, but perhaps even more for Simple Injector. This is deliberate and if the developers are not interested in applying the SOLID principles and want a DI container that just works in any given circumstance, perhaps Simple Injector is not the best tool for the job. For instance, applying Simple Injector to a legacy code base can be daunting.
I hope this gives some perspective on the design of Simple Injector.
UPDATE
If you need singletons instead, this is even simpler. You can register them as follows:
var fs = new RealFileSystem();
container.RegisterSingle<IFileSystem>(fs);
container.RegisterAll<IFileWatcher>(
new FileWatcher(#".\Triggers\TriggerWatch\SomeTrigger.txt", fs),
new FileWatcher(#".\Triggers\TriggerWatch\SomeOtherTrigger.txt", fs)
);
UPDATE 2
You explicitly asked for RegisterAll<T>(Func<T>) support to lazily create a collection. In fact there already is support for this, just by using RegisterSingle<IEnumerable<T>>(Func<IEnumerable<T>>), as you can see here:
container.RegisterSingle<IEnumerable<IFileWatcher>>(() =>
{
var list = new List<IFileWatcher>
{
new FileWatcher(#".\Triggers\TriggerWatch\SomeTrigger.txt", container.GetInstance<IFileSystem>()),
new FileWatcher(#".\Triggers\TriggerWatch\SomeOtherTrigger.txt", container.GetInstance<IFileSystem>())
};
return list.AsReadOnly();
});
The RegisterAll<T>(IEnumerable<T>) is in fact a convenient overload that eventually calls into RegisterSingle<IEnumerable<T>>(collection).
Note that I explicitly return a read-only list. This is optional, but it is an extra safety mechanism that prevents the collection from being altered by any application code. When using RegisterAll<T>, collections are automatically wrapped in a read-only iterator.
The only catch with using RegisterSingle<IEnumerable<T>> is that the container will not iterate the collection when you call container.Verify(). However, in your case this would not be a problem, since when an element of the collection fails to initialize, the call to GetInstance<IEnumerable<IFileWatcher>> will fail as well, and with it the call to Verify().
UPDATE 3
I apologize if I gave the impression that I meant your design is wrong. I have no way of knowing this. Since you explicitly asked why some features were missing, I tried my best to explain the rationale behind this. That doesn't mean however that I think your design is bad, since there is no way for me to know.
let's switch back to what that would look like with the maintainer's view of good design
I'm not sure why you think that this is my view on good design? Having a SomeClass with a constructor that needs to be changed every time you add a task to the system is definitely not a good design. We can safely agree on this. That breaks OCP. I would never advise anyone to do such a thing. Besides, having a constructor with many arguments is a design smell at the very least. The next minor release of Simple Injector even adds a diagnostic warning concerning types with too many dependencies, since this is often an indication of an SRP violation. But again, see how Simple Injector tries to ‘help’ developers here by providing guidance.
Still, I do promote the use of generic interfaces, and that’s a case the Simple Injector design is especially optimized for. An ITask interface is a good example of this. In that case, the ITask<T> will often be an abstraction over some business behavior you wish to execute, and the T is a parameter object that holds all parameters of the operation to execute (you can see it as a message with a message handler). This however is only useful when a consumer needs to execute an operation with a specific set of parameters (a specific version of T), for instance when it wants to execute ITask<ShipOrder>. Since you are executing a batch of all tasks without supplying parameters, a design based on ITask<T> would probably be awkward.
But let's assume for a second that it is appropriate. Let's assume this, so I can explain how Simple Injector is optimized in this case. At the end of this update, I’ll show you how Simple Injector might still be able to help in your case, so hold your breath. In your code sample, you register your generic tasks as follows:
container.Register<ITask<SomeGeneric1>>(() => new FileWatchTask(container.GetInstance<IFileSystem>(),container.GetInstance<IMessageSubscriptionManagerService>(),configuration,container.GetAllInstances<IFileWatcher>()));
container.Register<ITask<SomeGeneric2>>(() => new DefaultFtpTask(container.GetInstance<IFtpClient>(),container.GetInstance<IFileSystem>()));
container.Register<ITask<SomeGeneric3>>(() => new DefaultImportFilesTask(container.GetInstance<IFileSystem>()));
This is a rather painful way of registering all tasks in the system, since every time you change the constructor of a task implementation, you'll have to change this code. Simple Injector allows you to auto-wire types by looking at their constructors. In other words, Simple Injector allows you to simplify this code to the following:
container.Register<ITask<SomeGeneric1>, FileWatchTask>();
container.Register<ITask<SomeGeneric2>, DefaultFtpTask>();
container.Register<ITask<SomeGeneric3>, DefaultImportFilesTask>();
This already is much more maintainable, results in better performance, and allows you to add other interesting scenarios later on, such as context-based injection (since Simple Injector is in control of the whole object graph). This is the advised way of registering things in Simple Injector (prevent the use of a Func if possible).
Still, when you have an architecture where tasks are a central element, you would probably add new task implementations quite regularly. This will result in dozens of registration lines and in having to come back to this code to add a line every time you add a task. Simple Injector however has a batch-registration feature that allows you to shrink this back to one single line of code:
// using SimpleInjector.Extensions;
container.RegisterManyForOpenGeneric(typeof(ITask<>), typeof(ITask<>).Assembly);
By calling this line, the container will search for all ITask<T> implementations that are located in the interface’s assembly and it will register them for you. Since this is done at runtime using reflection, the line does not have to be altered when new tasks are added to the system.
And since you're talking about the OCP, IMO Simple Injector has great support for the OCP. At some points it even beats all other frameworks out there. When I think about OCP, I particularly think about one specific pattern: the decorator pattern. The decorator pattern is a very important pattern to use when applying the OCP. Cross-cutting concerns for instance should not be added by changing some piece of business logic itself, but can best be added by wrapping classes with decorators. With Simple Injector, a decorator can be added with just a single line of code:
// using SimpleInjector.Extensions;
container.RegisterDecorator(typeof(ITask<>), typeof(TransactionTaskDecorator<>));
This ensures that a (transient) TransactionTaskDecorator<T> is wrapped around all ITask<T> implementations when they get resolved. Those decorators are integrated into the container’s pipeline, which means that they can have dependencies of their own, can have initializers, and can have a specific lifestyle. And decorators can be stacked easily:
container.RegisterDecorator(typeof(ITask<>), typeof(TransactionTaskDecorator<>));
container.RegisterDecorator(typeof(ITask<>), typeof(DeadlockRetryTaskDecorator<>));
This wraps all tasks in a transaction decorator and wraps that transaction decorator again in a deadlock retry decorator. And you can even apply decorators conditionally:
container.RegisterDecorator(typeof(ITask<>), typeof(ValidationTaskDecorator<>),
context => ShouldApplyValidator(context.ServiceType));
And if your decorator has a generic type constraint, Simple Injector will automatically apply the decorator only where the generic type constraints match; there is nothing you have to do for this. And since Simple Injector generates expression trees and compiles them down to delegates, this is all a one-time cost. That doesn’t mean it’s free, but you’ll pay only once and not per resolve.
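For example, a decorator constrained to auditable messages could (hypothetically) look like this, assuming ITask<T> exposes a single Execute(T) method and IAuditable is a marker interface in your code; Simple Injector would then skip it for any T that doesn't satisfy the constraint:

// Applied only where T implements the (hypothetical) IAuditable marker interface.
public class AuditingTaskDecorator<T> : ITask<T> where T : IAuditable
{
    private readonly ITask<T> decoratee;

    public AuditingTaskDecorator(ITask<T> decoratee)
    {
        this.decoratee = decoratee;
    }

    public void Execute(T message)
    {
        // write an audit record here, then run the real task
        this.decoratee.Execute(message);
    }
}

// container.RegisterDecorator(typeof(ITask<>), typeof(AuditingTaskDecorator<>));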
There's no other DI library that makes adding decorators as easy and flexible as Simple Injector does.
So this is where Simple Injector really shines, but that doesn't help you much :-). Generic interfaces don't help you in this case, but still, even in your case, you might be able to make your registration more maintainable. If you have many task implementations in the system (that is, many more than three), you might be able to automate things like this:
var taskTypes = (
from type in typeof(ITask).Assembly.GetTypes()
where typeof(ITask).IsAssignableFrom(type)
where !type.IsAbstract && !type.IsGenericTypeDefinition
select type)
.ToList();
// Register all task types as singletons
taskTypes.ForEach(type => container.Register(type, type, Lifestyle.Singleton));
// registers a list of all those (singleton) tasks.
container.RegisterAll<ITask>(taskTypes);
Alternatively, with Simple Injector 2.3 and up, you can pass in Registration instances directly into the RegisterAll method:
var taskTypes =
from type in typeof(ITask).Assembly.GetTypes()
where typeof(ITask).IsAssignableFrom(type)
where !type.IsAbstract && !type.IsGenericTypeDefinition
select type;
// registers a list of all those (singleton) tasks.
container.RegisterAll(typeof(ITask),
from type in taskTypes
select Lifestyle.Singleton.CreateRegistration(type, type, container));
This does assume, however, that all those task implementations have a single public constructor and that all constructor arguments are resolvable (no configuration values such as int and string). If this is not the case, there are ways to change the default behavior of the framework, but if you want to know more about this, it would be better to move that discussion to a new SO question.
Again, I’m sorry if I have annoyed you, but I’d rather annoy some developers than miss the opportunity to help a lot of others :-)
I have been looking for a neat answer to this design question with no success. I could not find help in either the ".NET Framework Design Guidelines" or the "C# Programming Guidelines".
I basically have to expose a pattern as an API so that users can define and integrate their algorithms into my framework, like this:
1)
// This what I provide
public abstract class AbstractDoSomething{
public abstract SomeThing DoSomething();
}
Users need to implement this abstract class; they have to implement the DoSomething method (which I can then call from within my framework and use).
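For instance, a consumer's implementation might look like this (SomeThing stands for whatever result type the framework defines):

public class MyDoSomething : AbstractDoSomething
{
    public override SomeThing DoSomething()
    {
        // the user's algorithm goes here
        return new SomeThing();
    }
}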
2)
I found out that this can also be achieved by using delegates:
public sealed class DoSomething{
public String Id;
public Func<SomeThing> DoSomething;
}
In this case, a user can only use DoSomething class this way:
DoSomething doSomething = new DoSomething()
{
Id = "ThisIsMyID",
DoSomething = (() => new SomeThing())
};
Question
Which of these two options is best for an easy, usable and most importantly understandable to expose as an API?
EDIT
In case of 1: the registration is done this way (assuming MyDoSomething extends AbstractDoSomething):
MyFramework.AddDoSomething("DoSomethingIdentifier", new MyDoSomething());
In case of 2: the registration is done like this:
MyFramework.AddDoSomething(new DoSomething());
Which of these two options is best for an easy, usable and most importantly understandable to expose as an API?
The first is more "traditional" in terms of OOP, and may be more understandable to many developers. It also can have advantages in terms of allowing the user to manage lifetimes of the objects (ie: you can let the class implement IDisposable and dispose of instances on shutdown, etc), as well as being easy to extend in future versions in a way that doesn't break backwards compatibility, since adding virtual members to the base class won't break the API. Finally, it can be simpler to use if you want to use something like MEF to compose this automatically, which can simplify/remove the process of "registration" from the user's standpoint (as they can just create the subclass, and drop it in a folder, and have it discovered/used automatically).
The second is a more functional approach, and is simpler in many ways. This allows the user to implement your API with far fewer changes to their existing code, as they just need to wrap the necessary calls in a lambda with closures instead of creating a new type.
That being said, if you're going to take the approach of using a delegate, I wouldn't even make the user create a class - just use a method like:
MyFramework.AddOperation("ThisIsMyID", () => DoFoo());
This makes it a little bit more clear, in my opinion, that you're adding an operation to the system directly. It also completely eliminates the need for another type in your public API (DoSomething), which again simplifies the API.
I would go with the abstract class / interface if:
DoSomething is required
DoSomething will normally get really big (so DoSomething's implementation can be split into several private / protected methods)
I would go with delegates if:
DoSomething can be treated as an event (OnDoingSomething)
DoSomething is optional (so you default it to a no-op delegate)
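A tiny sketch of that 'default it to a no-op delegate' idea (the names are illustrative):

public sealed class DoSomethingOptions
{
    // Optional hook; defaults to a no-op so the framework can always invoke it safely.
    public Action OnDoingSomething = delegate { };
}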
Though personally, if it were in my hands, I would always go with the delegate model. I just love the simplicity and elegance of higher-order functions. But while implementing the model, be careful about memory leaks. Subscribed events are one of the most common causes of memory leaks in .NET. This means that if you have an object that exposes some events, the original object will never be disposed until all events are unsubscribed, since an event subscription creates a strong reference.
As is typical for most of these types of questions, I would say "it depends". :)
But I think the reason for using the abstract class versus the lambda really comes down to behavior. Usually, I think of the lambda as being used for callback-type functionality -- where you'd like something custom to happen when something else happens. I do this a lot in my client-side code:
- make a service call
- get some data back
- now invoke my callback to handle that data accordingly
You can do the same with the lambdas -- they are specific and are targeted for very specific situations.
Using the abstract class (or interface) really comes down to where your class' behavior is driven by the environment around it. What's happening, what client am I dealing with, etc.? These larger questions could suggest that you should define a set of behaviors and then allow your developers (or consumers of your API) to create their own sets of behavior based upon their requirements. Granted, you could do the same with lambdas, but I think it would be more complex to develop and also more complex to clearly communicate to your users.
So, I guess my rough rule of thumb is:
- use lambdas for specific callback or side-effect customized behaviors;
- use abstract classes or interfaces to provide a mechanism for object behavior customization (or at least the majority of the object's primary behavior).
Sorry I can't give you a clear definition, but I hope this helps. Good luck!
A few things to consider :
How many different functions/delegates would need to be overridden? If many functions, inheritance will group "sets" of overrides in an easier-to-understand way. If you have a single "registration" function, but many sub-portions can be delegated out to the implementor, this is a classic case of the "Template" pattern, which makes the most sense to be inherited.
How many different implementations of the same function will be needed? If just one, then inheritance is good, but if you have many implementations, a delegate might save overhead.
If there are multiple implementations, will the program need to switch between them? Or will it only use a single implementation? If switching is required, delegates might be easier, but I would urge caution here, especially depending on the answer to #1. See the Strategy pattern.
If the override needs access to any protected members, then use inheritance. If it can rely only on public members, then use a delegate.
Other choices would be events, and extension methods as well.
I have an app which consists of several different assemblies, one of which holds the various interfaces which the classes obey, and by which the classes communicate across assembly boundaries. There are several classes firing events, and several which are interested in these events.
My question is as follows: is it good practice to implement a central EventConsolidator of some kind? This would be highly coupled, as it would need to know every class (or at least interface) throwing an event, and every consumer of an event would need to have a reference to EventConsolidator in order to subscribe.
Currently I have the situation where class A knows class B (but not C), class B knows class C, etc. Then if C fires an event B needs to pick it up and fire its own event in order for A to respond. These kinds of chains can get quite long, and it may be that B is only interested in the event in order to pass it along. I don't want A to know about C though, as that would break encapsulation.
What is good practice in this situation? Centralise the events, or grin and bear it and define events in each intermediate class? Or what are the criteria by which to make the decision? Thanks!
Edit: Here is another question asking essentially the same thing.
You could put the event itself in an interface, so that A doesn't need to know about C directly, but only that it has the relevant event. However, perhaps you mean that the instance of A doesn't have sight of an instance of C...
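A brief sketch of that idea (the interface and event names are made up for illustration):

using System;

public interface IFooSource
{
    event EventHandler FooOccurred;
}

// C implements IFooSource; A only needs an IFooSource reference,
// so it can subscribe without knowing about the concrete class C.
public class A
{
    public A(IFooSource source)
    {
        source.FooOccurred += (sender, args) => { /* react to the event */ };
    }
}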
I would try to steer clear of a centralised event system. It's likely to make testing harder, and it introduces tight coupling, as you said.
One pattern which is worth knowing about is making event proxying simple. If B only exposes an event to proxy it to C, you can do:
public event FooHandler Foo
{
add
{
c.Foo += value;
}
remove
{
c.Foo -= value;
}
}
That way it's proxying the subscription/unsubscription rather than the act of raising the event. This has an impact on GC eligibility, of course - which may be beneficial or not, depending on the situation. Worth thinking about though.
What you could try is using the event brokering of either Ninject or the Unity Application Block.
This allows you to, for example:
[Publish("foo://happened")]
public event EventHandler<FooArgs> FooHappened;
[Subscribe("foo://happened")]
public void Foo_Happened(object sender, FooArgs args)
{ }
If both objects are created through the container the events will be hooked up automatically.
I'd probably try to massage the domain so that each class can directly depend on the appropriate event source. What I mean is asking the question: why doesn't A know about C? Is there perhaps a D waiting to emerge?
As an alternative approach you could consider an event broker architecture. It means observers don't know directly about the source. Here's an interesting video.
This would be highly coupled, as it would need to know every class
I think you answered your own question if you consider that coupling is bad! Passing events through a chain of potential handlers is a fairly common pattern in many environments; it may not be the most efficient approach, but it avoids the complexity that your suggested approach would involve.
Another approach you could take is to use a message dispatcher. This involves using a common message format (or at least a common message header format) to represent events, and then placing those messages into a queue. A dispatcher then picks up each of those events in turn (or based on some prioritisation), and routes them directly to the required handler. Each handler must be registered with the dispatcher at startup.
A message in this case could simply be a class with a few specific fields at the start. The specific message could simply be a derivative, or you could pass your message-specific data as an 'object' parameter along with the message header.
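A stripped-down sketch of that dispatcher idea (all names here are illustrative, not from an existing library):

using System;
using System.Collections.Generic;

public class Message
{
    // Common header; message-specific data travels in Body (or in a derived class).
    public string Topic { get; set; }
    public object Body { get; set; }
}

public class MessageDispatcher
{
    private readonly Queue<Message> queue = new Queue<Message>();
    private readonly Dictionary<string, Action<Message>> handlers =
        new Dictionary<string, Action<Message>>();

    // Handlers register themselves at startup.
    public void Register(string topic, Action<Message> handler)
    {
        handlers[topic] = handler;
    }

    public void Post(Message message)
    {
        queue.Enqueue(message);
    }

    // Pump the queue, routing each message directly to its registered handler.
    public void DispatchPending()
    {
        while (queue.Count > 0)
        {
            Message message = queue.Dequeue();
            Action<Message> handler;
            if (handlers.TryGetValue(message.Topic, out handler))
            {
                handler(message);
            }
        }
    }
}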
You can check out the EventBroker object in the M$ patterns and practices lib if you want centralised events.
Personally I think it's better to think about your architecture instead, and even though we use the EventBroker here, none of our new code uses it and we're hoping to phase it out one sunny day.
We have our own event broker implementation (open source).
Tutorial at: http://sourceforge.net/apps/mediawiki/bbvcommon/index.php?title=Event_Broker
And a performance analysis at: www.planetgeek.ch/2009/07/12/event-broker-performance/
Advantages compared to CAB:
- better logging
- extension support
- better error handling
- extendable handlers (UI, Background Thread, ...)
and some more I cannot recall right now.
Cheers,
Urs