I have been looking for a neat answer to this design question without success. I could not find help in either the ".NET Framework design guidelines" or the "C# programming guidelines".
I basically have to expose a pattern as an API so the users can define and integrate their algorithms into my framework like this:
1)
// This is what I provide
public abstract class AbstractDoSomething
{
    public abstract SomeThing DoSomething();
}
Users need to implement this abstract class; that is, they implement the DoSomething method (which I can then call from within my framework and use).
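For illustration, a user's implementation might look like the following sketch (MyDoSomething is the same subclass name that reappears in the registration example under EDIT below):

public class MyDoSomething : AbstractDoSomething
{
    public override SomeThing DoSomething()
    {
        return new SomeThing(); // the user's algorithm goes here
    }
}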
2)
I found out that this can also be achieved by using delegates:
public sealed class DoSomething
{
    public String Id;
    public Func<SomeThing> DoSomethingFunc; // renamed: a member can't share its enclosing type's name
}
In this case, a user can only use the DoSomething class this way:
DoSomething doSomething = new DoSomething()
{
    Id = "ThisIsMyID",
    DoSomethingFunc = () => new SomeThing()
};
Question
Which of these two options is easiest, most usable and, most importantly, most understandable to expose as an API?
EDIT
In case of 1: the registration is done this way (assuming MyDoSomething extends AbstractDoSomething):
MyFramework.AddDoSomething("DoSomethingIdentifier", new MyDoSomething());
In case of 2: the registration is done like this:
MyFramework.AddDoSomething(new DoSomething());
The first is more "traditional" in terms of OOP, and may be more understandable to many developers. It also can have advantages in terms of allowing the user to manage the lifetimes of the objects (i.e., you can let the class implement IDisposable and dispose of instances on shutdown, etc.), as well as being easy to extend in future versions in a way that doesn't break backwards compatibility, since adding virtual members to the base class won't break the API. Finally, it can be simpler to use if you want to use something like MEF to compose this automatically, which can simplify or remove the process of "registration" from the user's standpoint (they can just create the subclass, drop it in a folder, and have it discovered and used automatically).
The second is a more functional approach, and is simpler in many ways. This allows the user to implement your API with far fewer changes to their existing code, as they just need to wrap the necessary calls in a lambda with closures instead of creating a new type.
That being said, if you're going to take the approach of using a delegate, I wouldn't even make the user create a class - just use a method like:
MyFramework.AddOperation("ThisIsMyID", () => DoFoo());
This makes it a little bit more clear, in my opinion, that you're adding an operation to the system directly. It also completely eliminates the need for another type in your public API (DoSomething), which again simplifies the API.
I would go with the abstract class / interface if:
DoSomething is required
DoSomething will normally get really big (so DoSomething's implementation can be split into several private/protected methods)
I would go with delegates if:
DoSomething can be treated as an event (OnDoingSomething)
DoSomething is optional (so you default it to a no-op delegate)
Though personally, if it were in my hands, I would always go with the delegate model. I just love the simplicity and elegance of higher-order functions. But while implementing the model, be careful about memory leaks. Subscribed events are one of the most common causes of memory leaks in .NET: an event subscription creates a strong reference from the publisher to the subscriber, so if you have an object that exposes events, its subscribers will never be collected until they unsubscribe (or until the publisher itself becomes unreachable).
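A minimal sketch of the leak and its fix (all type names here are hypothetical): the event delegate keeps the subscriber alive as long as the publisher lives, so the subscriber detaches itself when disposed:

using System;

public class Publisher
{
    public event EventHandler SomethingHappened;
}

public class Subscriber : IDisposable
{
    private readonly Publisher publisher;

    public Subscriber(Publisher publisher)
    {
        this.publisher = publisher;
        publisher.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { /* handle */ }

    public void Dispose()
    {
        // without this line, the publisher keeps this instance alive for its whole lifetime
        publisher.SomethingHappened -= OnSomethingHappened;
    }
}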
As is typical for most of these types of questions, I would say "it depends". :)
But I think the reason for using the abstract class versus the lambda really comes down to behavior. Usually, I think of the lambda as being used for callback-type functionality -- where you'd like something custom to happen when something else happens. I do this a lot in my client-side code:
- make a service call
- get some data back
- now invoke my callback to handle that data accordingly
You can do the same with the lambdas -- they are specific and are targeted for very specific situations.
Using the abstract class (or interface) really comes down to where your class' behavior is driven by the environment around it. What's happening, what client am I dealing with, etc.? These larger questions could suggest that you should define a set of behaviors and then allow your developers (or consumers of your API) to create their own sets of behavior based upon their requirements. Granted, you could do the same with lambdas, but I think it would be more complex to develop and also more complex to clearly communicate to your users.
So, I guess my rough rule of thumb is:
- use lambdas for specific callback or side-effect customized behaviors;
- use abstract classes or interfaces to provide a mechanism for object behavior customization (or at least the majority of the object's primary behavior).
Sorry I can't give you a clear definition, but I hope this helps. Good luck!
A few things to consider:
How many different functions/delegates would need to be overridden? If many functions, inheritance will group "sets" of overrides in an easier-to-understand way. If you have a single "registration" function, but many sub-portions that can be delegated out to the implementor, this is a classic case of the "Template Method" pattern, which makes the most sense to be inherited.
How many different implementations of the same function will be needed? If just one, then inheritance is good, but if you have many implementations a delegate might save overhead.
If there are multiple implementations, will the program need to switch between them? Or will it only use a single implementation? If switching is required, delegates might be easier, but I would be cautious here, especially depending on the answer to #1. See the Strategy pattern.
If the override needs access to any protected members, then inheritance. If it can rely only on public members, then delegate.
Other choices would be events, and extension methods as well.
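For the event-based option, a rough sketch (the names here are hypothetical, not part of the question): the framework raises an event at its extension point and reads whatever result a handler supplies:

public class SomethingEventArgs : EventArgs
{
    public SomeThing Result { get; set; } // the handler fills this in
}

public class MyFramework
{
    public event EventHandler<SomethingEventArgs> DoingSomething;

    internal SomeThing RaiseDoingSomething()
    {
        var args = new SomethingEventArgs();
        var handler = DoingSomething;
        if (handler != null)
            handler(this, args);
        return args.Result;
    }
}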
Related
In my (simplified) problem I have a method "Reading" that can use many different implementations of some IDisposableThing. I am passing constructor delegates in right now so that I can use the using statement.
Is this approach of passing a delegate for the constructor of my object appropriate?
My problem is that things like List<Func<IDisposable>> etc. start looking a bit scary (because delegate types look like crap in C#), and passing in an object seems more usual and a clearer statement of intent.
Is there a better/different way of managing this situation without delegates?
public void Main()
{
    Reading(() => new DisposableThingImplementation());
    Reading(() => new AnotherDisposableThingImplementation());
}

public void Reading(Func<IDisposableThing> constructor)
{
    using (IDisposableThing streamReader = constructor())
    {
        //do things
    }
}
As I said in the comment, it's difficult to say what's best for your situation, so instead I'll just list your options so you can make an informed decision:
Continue doing what you're doing
Having to pass around objects with an unpleasantly complicated-looking type is maybe not ideal visually, but in your situation it may well be perfectly appropriate.
Use a custom delegate type
You can define a delegate like:
public delegate IDisposableThing DisposableThingConstructor();
Then anywhere you would write Func<IDisposableThing>, you can just write DisposableThingConstructor instead. For a commonly used delegate type, this may improve code readability, though this too is a matter of taste.
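For example, the Reading signature from the question would then read (a sketch only; behavior is unchanged):

public void Reading(DisposableThingConstructor constructor)
{
    using (IDisposableThing thing = constructor())
    {
        //do things
    }
}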
Move the using statements out of Reading
This really depends on whether it's sensible for the lifecycle management of these objects to be a responsibility of the Reading method or not. Given what we have of your code at the moment, we can't really judge this for you. An implementation with the lifecycle management moved out would look like:
public void Main()
{
    using (var disposableThing = new DisposableThingImplementation())
        Reading(disposableThing);
}

public void Reading(IDisposableThing disposableThing)
{
    //do things
}
Use a factory pattern
In this option, you create a class which returns new IDisposableThing implementations. Lots of information can be found on the factory pattern which you may well already know, so I won't repeat it all here. This option may well be overkill for your purposes here, adding a lot of pointless complexity, but depending on how those DisposableThings are constructed, it may have additional benefits which make it worthwhile.
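A minimal sketch of what that could look like (the interface and class names here are hypothetical):

public interface IDisposableThingFactory
{
    IDisposableThing Create();
}

public class DisposableThingFactory : IDisposableThingFactory
{
    public IDisposableThing Create()
    {
        return new DisposableThingImplementation();
    }
}

public void Reading(IDisposableThingFactory factory)
{
    using (IDisposableThing thing = factory.Create())
    {
        //do things
    }
}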
Use a generic argument
This option will only work if all of your IDisposableThing implementations have a parameterless constructor. I'm guessing that's not the case, but in case it is, it's a relatively straightforward approach:
public void Reading<T>() where T : IDisposableThing, new()
{
    using (var disposableThing = new T())
    {
        //do things
    }
}
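Call sites then reduce to a single generic invocation, assuming DisposableThingImplementation has a parameterless constructor:

Reading<DisposableThingImplementation>();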
Use an Inversion of Control container
This is another option which would certainly be overkill if used for this purpose alone. I include it mostly for completeness. Inversion of control containers like Ninject will give you easy ways to manage the lifecycles of objects passed into others.
I very much doubt this would be an appropriate solution in your case, especially since the disposable objects are not being used in another class's constructor. If you later run into a situation where you're trying to manage object lifecycle in a larger, complex object graph, this option might be worth revisiting.
Construct the objects outside of the using statement
This is specifically described as "not a best practice" in the MSDN documentation, but it is an option. You can do:
public void Main()
{
    Reading(new DisposableThingImplementation());
}

public void Reading(IDisposableThing disposableThing)
{
    using (disposableThing)
    {
        //do things
    }
}
At the end of the using statement, the Dispose method will be called, but the object will not be garbage collected because it is still in scope. Trying to use the object after that would be very likely to cause problems because it has already been disposed. So again, while this is an option, it's unlikely to be a good one.
Is this approach of passing a delegate for the constructor of my object appropriate? My problem is that things like List<Func<IDisposable>> etc. start looking a bit scary (because delegate types look like crap in C#), and passing in an object seems more usual and a clearer statement of intent.
Yes, it's fine. However I understand your concern about passing a list of those things... Perhaps creating a custom delegate with the same signature as Func<IDisposable> and a more explicit name (e.g. SomethingFactory) would be clearer.
Is there a better/different way of managing this situation without delegates?
You could pass a factory or a list of factories to the method. I don't think it's really "better", though; it's mostly the same, since your factory would typically be represented as an interface with a single method, which is essentially the same as a delegate.
I can register a single registration item with instanceCreator context (aka Func<T>), but there doesn't seem to be the same allowance with a RegisterAll.
TL;DR - Find the accepted answer and look at update 2 (or skip down to Update 3 on this question)
This is what I want to do:
container.RegisterAll<IFileWatcher>(
    new List<Func<IFileWatcher>>
    {
        () => new FileWatcher(
            @".\Triggers\TriggerWatch\SomeTrigger.txt",
            container.GetInstance<IFileSystem>()),
        () => new FileWatcher(
            @".\Triggers\TriggerWatch\SomeOtherTrigger.txt",
            container.GetInstance<IFileSystem>())
    });
I tried adding an extension based on a previous Stack Overflow answer for multiple registrations, but it seems that the last one in wins:
public static class SimpleInjectorExtensions
{
    public static void RegisterAll<TService>(this Container container,
        IEnumerable<Func<TService>> instanceCreators)
        where TService : class
    {
        foreach (var instanceCreator in instanceCreators)
        {
            container.RegisterSingle(typeof(TService), instanceCreator);
        }

        container.RegisterAll<TService>(typeof(TService));
    }
}
I'm also curious why there is a need for RegisterAll to exist in the first place. This is the first dependency injection container out of 5 that I've used that makes the distinction. The others just allow you to register multiple types against a service and then load them all up by calling Resolve<IEnumerable<TService>> (autofac) or GetAllInstances<TService> (both SimpleInjector and Ninject).
Update
For more clarity: I'm trying to build a list of items that I can pass to a composite that handles each of the individual items. It suffers from the same problem as the above, since it falls into a group of tasks that all get registered to be run based on schedules, triggers, and events (Rx). Removing the RegisterAll for a moment and ripping out some of the other stuff:
container.Register<ITask>(() => new FileWatchTask(
    container.GetInstance<IFileSystem>(),
    container.GetInstance<IMessageSubscriptionManagerService>(),
    configuration,
    container.GetAllInstances<IFileWatcher>()));
You can see that I am grabbing all instances of the previously registered file watchers.
What I need to know is a simple workaround for this issue and when it will be implemented (or if not, why it won't be). I will also accept that this is not possible given the current limitations of Simple Injector's design. What I will not accept is that I need to change and adapt my architecture to meet the limitations of a tool.
Update 2
Let's talk about OCP (the Open Closed Principle, aka the O in SOLID) and the impression I'm getting of how SimpleInjector breaks this particular principle in some cases.
The Open Closed Principle is just that: open for extension, but closed for modification. What this means is that you can alter the behavior of an entity without altering its source code.
Now let's shift to an example that is relevant here:
var tasks = container.GetAllInstances<ITask>();

foreach (var task in tasks.OrEmptyListIfNull())
{
    // registers the task with the scheduler, Rx event messaging, or another trigger of some sort
    task.Initialize();
}
Notice how clean that is. To be able to do this though, I need to be able to register all instances of an interface:
container.RegisterAll<ITask>(
    new List<Func<ITask>>
    {
        () => new FileWatchTask(container.GetInstance<IFileSystem>(), container.GetInstance<IMessageSubscriptionManagerService>(), configuration, container.GetAllInstances<IFileWatcher>()),
        () => new DefaultFtpTask(container.GetInstance<IFtpClient>(), container.GetInstance<IFileSystem>()),
        () => new DefaultImportFilesTask(container.GetInstance<IFileSystem>())
    });
Right? So the lesson here is that this is good and meets OCP. I can change the behavior of the task runner simply by adding or removing items that are registered. Open for extension, closed for modification.
Now let's focus on trying to do it the way suggested in the answer below (prior to the second update, which finally answers this question), which the author gives the impression is a better design.
Let's start with what the maintainer's answer presents as good design for registration. The impression I'm getting is that I have to sacrifice something in my code to make ITask flexible enough to work with SimpleInjector:
container.Register<ITask<SomeGeneric1>>(() => new FileWatchTask(container.GetInstance<IFileSystem>(), container.GetInstance<IMessageSubscriptionManagerService>(), configuration, container.GetAllInstances<IFileWatcher>()));
container.Register<ITask<SomeGeneric2>>(() => new DefaultFtpTask(container.GetInstance<IFtpClient>(), container.GetInstance<IFileSystem>()));
container.Register<ITask<SomeGeneric3>>(() => new DefaultImportFilesTask(container.GetInstance<IFileSystem>()));
Now let's see how that makes our design change:
var task1 = container.GetInstance<ITask<SomeGeneric1>>();
task1.Initialize();
var task2 = container.GetInstance<ITask<SomeGeneric2>>();
task2.Initialize();
var task3 = container.GetInstance<ITask<SomeGeneric3>>();
task3.Initialize();
Ouch. You can see how every time I add or remove an item from the container registration, I now also need to update another section of code. Two places to modify for one change; I'm introducing multiple design problems.
You might ask why I'm requesting this from the container at all. Well, this is in the startup area, but let's explore what happens if it weren't.
So I will use constructor injection to illustrate why this is bad. First let's see my example with constructor injection.
public class SomeClass
{
    public SomeClass(IEnumerable<ITask> tasks) { }
}
Nice and clean.
Now, let's switch back to my understanding of the accepted answer's view (again prior to update 2):
public class SomeClass
{
    public SomeClass(ITask<Generic1> task1,
                     ITask<Generic2> task2,
                     ITask<Generic3> task3) { }
}
Ouch. Every time, I would have to edit multiple areas of code, and let's not even get started on how poor this design is.
What's the lesson here? I'm not the smartest guy in the world. I maintain (or try to maintain :)) multiple frameworks, and I don't pretend to know more or better than others. My sense of design might be skewed, or I might be limiting others in some unknown way that I haven't even thought of yet. I'm sure the author means well when he gives advice on design, but in some cases it may come across as annoying (and a little condescending), especially for those of us who know what we are doing.
Update 3
So the question was answered in Update 2 from the maintainer. I was trying to use RegisterAll because it hadn't occurred to me that I could just use Register<IEnumerable<T>> (and unfortunately the documentation didn't point this out). It seems totally obvious now, but when people are making the jump from other IoC frameworks, they are carrying some baggage with them and may miss this awesome simplification in design! I missed it, with 4 other DI containers under my belt. Hopefully he adds it to the documentation or calls it out a little better.
From your first example (using the List<Func<IFileWatcher>>) I understand that you want to register a collection of transient filewatchers. In other words, every time you iterate the list, a new file watcher instance should be created. This is of course very different than registering a list with two (singleton) filewatchers (the same instances that are always returned). There's however some ambiguity in your question, since in the extension method you seem to register them as singleton. For the rest of my answer, I'll assume you want transient behavior.
The common use case for which RegisterAll is created, is to register a list of implementations for a common interface. For instance an application that has multiple IEventHandler<CustomerMoved> implementations that all need to be triggered when a CustomerMoved event got raised. In that case you supply the RegisterAll method with list of System.Type instances, and the container is completely in control of wiring those implementations for you. Since the container is in control of the creation, the collection is called 'container-controlled'.
The RegisterAll however merely forwards the creation back to the container, which means that by default the list results in the creation of transient instances (since unregistered concrete types are resolved as transient). This seems awkward, but it allows you to register a list with elements of different lifestyles, since you can register each item explicitly with the lifestyle of choice. It also allows you to supply the RegisterAll with abstractions (for instance typeof(IService)), and that will work as well, since the request is forwarded back to the container.
Your use case however is different. You want to register a list of elements of the exact same type, but each with a different configuration value. And to make things more difficult, you seem to want to register them as transients instead of singletons. By not passing the RegisterAll a list of types, but an IEnumerable<TService>, the container does not create and auto-wire those types; we call this a 'container-uncontrolled' collection.
Long story short: how do we register this? There are multiple ways to do this, but I personally like this approach:
string[] triggers = new[]
{
    @".\Triggers\TriggerWatch\SomeTrigger.txt",
    @".\Triggers\TriggerWatch\SomeOtherTrigger.txt"
};

container.RegisterAll<IFileWatcher>(
    from trigger in triggers
    select new FileWatcher(trigger,
        container.GetInstance<IFileSystem>()));
Here we register a LINQ query (which is just an IEnumerable<T>) using the RegisterAll method. Every time someone resolves an IEnumerable<IFileWatcher> it returns that same query, but since the select of that query contains a new FileWatcher, on iteration new instances are always returned. This effect can be seen using the following test:
var watchers = container.GetAllInstances<IFileWatcher>();
var first1 = watchers.First();
var first2 = watchers.First();
Assert.AreNotEqual(first1, first2, "Should be different instances");
Assert.AreEqual(first1.Trigger, first2.Trigger);
As this test shows, we resolve the collection once, but every time we iterate it (.First() iterates the collection), a new instance is created, but both instances have the same @".\Triggers\TriggerWatch\SomeTrigger.txt" value.
So as you can see, there is no limitation that prevents you from doing this effectively. However, you might need to think differently.
I'm also curious why there is a need for RegisterAll to exist in the
first place.
This is a very explicit design decision. You are right that most other containers just allow you to do a bunch of registrations of the same type and when asked for a collection, all registrations are returned. Problem with this is that it is easy to accidentally register a type again and this is something I wanted to prevent.
Furthermore, all containers have different behavior concerning which registration is returned when requesting a single instance instead of the collection. Some return the first registration, others return the last. I wanted to prevent this ambiguity as well.
Last but not least, please note that registering collections of items of the same type should usually be an exception. In my experience 90% of the time when developers want to register multiple types of the same abstraction, there is some ambiguity in their design. By making registering collections explicit, I hoped to let this stick out.
What I will not accept is that I need to change and adapt my
architecture to meet the limitations of some tool.
I do agree with this. Your architecture should be leading, not the tools. You should choose your tools accordingly.
But please do note that Simple Injector has many limitations and most of those limitations are chosen deliberately to stimulate users to have a clean design. For instance, every time you violate one of the SOLID principles in your code, you will have problems. You will have problems keeping your code flexible, your tests readable, and your Composition Root maintainable. This in fact holds for all DI containers, but perhaps even more for Simple Injector. This is deliberate and if the developers are not interested in applying the SOLID principles and want a DI container that just works in any given circumstance, perhaps Simple Injector is not the best tool for the job. For instance, applying Simple Injector to a legacy code base can be daunting.
I hope this gives some perspective on the design of Simple Injector.
UPDATE
If you need singletons instead, this is even simpler. You can register them as follows:
var fs = new RealFileSystem();

container.RegisterSingle<IFileSystem>(fs);

container.RegisterAll<IFileWatcher>(
    new FileWatcher(@".\Triggers\TriggerWatch\SomeTrigger.txt", fs),
    new FileWatcher(@".\Triggers\TriggerWatch\SomeOtherTrigger.txt", fs));
UPDATE 2
You explicitly asked for RegisterAll<T>(Func<T>) support to lazily create a collection. In fact there already is support for this, just by using RegisterSingle<IEnumerable<T>>(Func<IEnumerable<T>>), as you can see here:
container.RegisterSingle<IEnumerable<IFileWatcher>>(() =>
{
    var list = new List<IFileWatcher>
    {
        new FileWatcher(@".\Triggers\TriggerWatch\SomeTrigger.txt", container.GetInstance<IFileSystem>()),
        new FileWatcher(@".\Triggers\TriggerWatch\SomeOtherTrigger.txt", container.GetInstance<IFileSystem>())
    };

    return list.AsReadOnly();
});
The RegisterAll<T>(IEnumerable<T>) is in fact a convenience overload that eventually calls into RegisterSingle<IEnumerable<T>>(collection).
Note that I explicitly return a read-only list. This is optional, but it is an extra safety mechanism that prevents the collection from being altered by any application code. When using RegisterAll<T>, collections are automatically wrapped in a read-only iterator.
The only catch with using RegisterSingle<IEnumerable<T>> is that the container will not iterate the collection when you call container.Verify(). However, in your case this would not be a problem, since when an element of the collection fails to initialize the call to GetInstance<IEnumerable<IFileWatcher>> will fail as well and with that the call to Verify().
UPDATE 3
I apologize if I gave you the impression that I meant your design is wrong. I have no way of knowing this. Since you explicitly asked why some features were missing, I tried my best to explain the rationale behind this. That doesn't mean however that I think your design is bad, since there is no way for me to know.
let's switch back to what that would look like with the maintainer's view of good design
I'm not sure why you think that this is my view on good design? Having a SomeClass with a constructor that needs to be changed every time you add a task to the system is definitely not a good design. We can safely agree on this. That breaks OCP. I would never advise anyone to do such a thing. Besides, having a constructor with many arguments is a design smell at the least. The next minor release of Simple Injector even adds a diagnostic warning concerning types with too many dependencies, since this is often an indication of an SRP violation. But again, see how Simple Injector tries to 'help' developers here by providing guidance.
Still, I do promote the use of generic interfaces, and that's a case the Simple Injector design is especially optimized for. An ITask interface is a good example of this. In that case, the ITask<T> will often be an abstraction over some business behavior you wish to execute, and the T is a parameter object that holds all parameters of the operation to execute (you can see it as a message with a message handler). This however is only useful when a consumer needs to execute an operation with a specific set of parameters (a specific version of T), for instance when it wants to execute ITask<ShipOrder>. Since you are executing a batch of all tasks without supplying parameters, a design based on ITask<T> would probably be awkward.
But let's assume for a second that it is appropriate. Let's assume this, so I can explain how Simple Injector is optimized in this case. At the end of this update, I’ll show you how Simple Injector might still be able to help in your case, so hold your breath. In your code sample, you register your generic tasks as follows:
container.Register<ITask<SomeGeneric1>>(() => new FileWatchTask(container.GetInstance<IFileSystem>(), container.GetInstance<IMessageSubscriptionManagerService>(), configuration, container.GetAllInstances<IFileWatcher>()));
container.Register<ITask<SomeGeneric2>>(() => new DefaultFtpTask(container.GetInstance<IFtpClient>(), container.GetInstance<IFileSystem>()));
container.Register<ITask<SomeGeneric3>>(() => new DefaultImportFilesTask(container.GetInstance<IFileSystem>()));
This is a rather painful way of registering all tasks in the system, since every time you change a constructor of a task implementation, you'll have to change this code. Simple Injector allows you to auto-wire types by looking at their constructor. In other words, Simple Injector allows you to simplify this code to the following:
container.Register<ITask<SomeGeneric1>, FileWatchTask>();
container.Register<ITask<SomeGeneric2>, DefaultFtpTask>();
container.Register<ITask<SomeGeneric3>, DefaultImportFilesTask>();
This already is much more maintainable, results in better performance, and allows you to add other interesting scenarios later on, such as context-based injection (since Simple Injector is in control of the whole object graph). This is the advised way of registering things in Simple Injector (prevent the use of a Func if possible).
Still, when you have an architecture where a task is the central element, you will probably add new task implementations quite regularly. This results in dozens of registration lines and having to come back to this code to add a line every time you add a task. Simple Injector however has a batch-registration feature that allows you to shrink this back to a single line of code:
// using SimpleInjector.Extensions;
container.RegisterManyForOpenGeneric(typeof(ITask<>), typeof(ITask<>).Assembly);
By calling this line, the container will search for all ITask<T> implementations that are located in the interface’s assembly and it will register them for you. Since this is done at runtime using reflection, the line does not have to be altered when new tasks are added to the system.
And since you're talking about the OCP, IMO Simple Injector has great support for the OCP. At some points it even beats all other frameworks out there. When I think about OCP, I particularly think about one specific pattern: the decorator pattern. The decorator pattern is a very important pattern to use when applying the OCP. Cross-cutting concerns for instance should not be added by changing some piece of business logic itself, but can best be added by wrapping classes with decorators. With Simple Injector, a decorator can be added with just a single line of code:
// using SimpleInjector.Extensions;
container.RegisterDecorator(typeof(ITask<>), typeof(TransactionTaskDecorator<>));
This ensures that a (transient) TransactionTaskDecorator<T> is wrapped around all ITask<T> implementations when they get resolved. Those decorators are integrated in the container's pipeline, which means that they can have dependencies of their own, can have initializers, and can have a specific lifestyle. And decorators can be stacked easily:
container.RegisterDecorator(typeof(ITask<>), typeof(TransactionTaskDecorator<>));
container.RegisterDecorator(typeof(ITask<>), typeof(DeadlockRetryTaskDecorator<>));
This wraps all tasks in a transaction decorator and wraps that transaction decorator again in a deadlock retry decorator. And you can even apply decorators conditionally:
container.RegisterDecorator(typeof(ITask<>), typeof(ValidationTaskDecorator<>),
context => ShouldApplyValidator(context.ServiceType));
And if your decorator has a generic type constraint, Simple Injector would automatically apply the decorator when the generic type constraints match, nothing you have to do about this. And since Simple Injector generates expression trees and compiles them down to delegates, this is all a one-time cost. That doesn’t mean it’s for free, but you’ll pay only once and not per resolve.
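For example, a decorator constrained like the following sketch would only be applied to tasks whose T satisfies the constraint (the IAuditable constraint and the Execute member are hypothetical here, since ITask<T>'s members aren't shown in this discussion):

public class AuditingTaskDecorator<T> : ITask<T> where T : IAuditable
{
    private readonly ITask<T> decoratee;

    public AuditingTaskDecorator(ITask<T> decoratee)
    {
        this.decoratee = decoratee;
    }

    public void Execute(T parameters) // hypothetical ITask<T> member
    {
        // write an audit record, then forward to the wrapped task
        decoratee.Execute(parameters);
    }
}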
There's no other DI library that makes adding decorators as easy and flexible as Simple Injector does.
So this is where Simple Injector really shines, but that doesn't help you much :-). Generic interfaces don't help you in this case, but still, even in your case, you might be able to make your registration more maintainable. If you have many task implementations in the system (that is, many more than three), you might be able to automate things like this:
var taskTypes = (
    from type in typeof(ITask).Assembly.GetTypes()
    where typeof(ITask).IsAssignableFrom(type)
    where !type.IsAbstract && !type.IsGenericTypeDefinition
    select type)
    .ToList();
// Register all as task types singleton
taskTypes.ForEach(type => container.Register(type, type, Lifestyle.Singleton));
// registers a list of all those (singleton) tasks.
container.RegisterAll<ITask>(taskTypes);
Alternatively, with Simple Injector 2.3 and up, you can pass in Registration instances directly into the RegisterAll method:
var taskTypes =
    from type in typeof(ITask).Assembly.GetTypes()
    where typeof(ITask).IsAssignableFrom(type)
    where !type.IsAbstract && !type.IsGenericTypeDefinition
    select type;
// registers a list of all those (singleton) tasks.
container.RegisterAll(typeof(ITask),
    from type in taskTypes
    select Lifestyle.Singleton.CreateRegistration(type, type, container));
This does assume however that all those task implementations have a single public constructor and that all constructor arguments are resolvable (no configuration values such as int and string). If this is not the case, there are ways to change the default behavior of the framework, but if you want to know more about this, it would be better to move that discussion to a new SO question.
Again, I’m sorry if I have annoyed you, but I’d rather annoy some developers than miss the opportunity to help a lot of others :-)
My colleague and I have a dispute. We are writing a .NET application that processes massive amounts of data. It receives data elements, groups subsets of them into blocks according to some criterion, and processes those blocks.
Let's say we have data items of type Foo arriving from some source (the network, for example) one by one. We wish to gather subsets of related Foo objects, construct an object of type Bar from each such subset, and process the Bar objects.
One of us suggested the following design. Its main theme is exposing IObservable<T> objects directly from the interfaces of our components.
// ********* Interfaces **********

interface IFooSource
{
    // this is the event-stream of objects of type Foo
    IObservable<Foo> FooArrivals { get; }
}

interface IBarSource
{
    // this is the event-stream of objects of type Bar
    IObservable<Bar> BarArrivals { get; }
}
// ********* Implementations *********

class FooSource : IFooSource
{
    // Here we put logic that receives Foo objects from the network and publishes them to the FooArrivals event stream.
}

class FooSubsetsToBarConverter : IBarSource
{
    IFooSource fooSource;

    public IObservable<Bar> BarArrivals
    {
        get
        {
            // Do some fancy Rx operators on fooSource.FooArrivals, like Buffer, Window, Join and others and return IObservable<Bar>
        }
    }
}
// this class will subscribe to the bar source and do processing
class BarsProcessor
{
    BarsProcessor(IBarSource barSource);
    void Subscribe();
}
// ******************* Main ************************

class Program
{
    public static void Main(string[] args)
    {
        var fooSource = FooSourceFactory.Create();
        var barsProcessor = BarsProcessorFactory.Create(fooSource); // this will create FooSubsetToBarConverter and BarsProcessor
        barsProcessor.Subscribe();
        fooSource.Run(); // this enters a loop of listening for Foo objects from the network and notifying about their arrival.
    }
}
The other suggested another design whose main theme is using our own publisher/subscriber interfaces, and using Rx inside the implementations only when needed.
//********** interfaces *********

interface IPublisher<T>
{
    void Subscribe(ISubscriber<T> subscriber);
}

interface ISubscriber<T>
{
    Action<T> Callback { get; }
}
//********** implementations *********

class FooSource : IPublisher<Foo>
{
    public void Subscribe(ISubscriber<Foo> subscriber) { /* ... */ }

    // here we put logic that receives Foo objects from some source (the network?) and publishes them to the registered subscribers
}

class FooSubsetsToBarConverter : ISubscriber<Foo>, IPublisher<Bar>
{
    void Callback(Foo foo)
    {
        // here we put logic that aggregates Foo objects and publishes Bars when we have received a subset of Foos that match our criteria
        // maybe we use Rx here internally.
    }

    public void Subscribe(ISubscriber<Bar> subscriber) { /* ... */ }
}

class BarsProcessor : ISubscriber<Bar>
{
    void Callback(Bar bar)
    {
        // here we put code that processes Bar objects
    }
}
//********** program *********

class Program
{
    public static void Main(string[] args)
    {
        var fooSource = FooSourceFactory.Create();
        var barsProcessor = BarsProcessorFactory.Create(fooSource); // this will create BarsProcessor and perform all the necessary subscriptions
        fooSource.Run(); // this enters a loop of listening for Foo objects from the network and notifying about their arrival.
    }
}
Which one do you think is better? Exposing IObservable<T> and making our components create new event streams from Rx operators, or defining our own publisher/subscriber interfaces and using Rx internally if needed?
Here are some things to consider about the designs:
In the first design the consumer of our interfaces has the whole power of Rx at his/her fingertips and can perform any Rx operators. One of us claims this is an advantage and the other claims that this is a drawback.
The second design allows us to use any publisher/subscriber architecture under the hood. The first design ties us to Rx.
If we wish to use the power of Rx, it requires more work in the second design because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.
Exposing IObservable<T> does not pollute the design with Rx in any way. In fact the design decision is exactly the same as choosing between exposing an old-school .NET event or rolling your own pub/sub mechanism. The only difference is that IObservable<T> is the newer concept.
Need a proof? Look at F# which is also a .NET language but younger than C#. In F# every event derives from IObservable<T>. Honestly, I see no sense in abstracting a perfectly suitable .NET pub/sub mechanism - that is IObservable<T> - away with your homegrown pub/sub abstraction. Just expose IObservable<T>.
Rolling your own pub/sub abstraction feels like applying Java patterns to .NET code to me. The difference is, in .NET there has always been great framework support for the Observer pattern and there is simply no need to roll your own.
First of all, it's worth noting that IObservable<T> is part of mscorlib.dll and the System namespace, and thus exposing it would be somewhat equivalent to exposing IComparable<T> or IDisposable. Which is equivalent to picking .NET as your platform, which you seem to have done already.
Now, instead of suggesting an answer, I want to suggest a different question, and then a different mindset, and I hope (and trust) that you'll manage from there.
You're basically asking: do we want to promote scattered use of Rx operators all across our system? Now obviously that's not very inviting, seeing as you probably conceptually treat Rx as a 3rd-party library.
Either way, the answer doesn't lie in the basal designs you two proposed, but in the users of those designs. I recommend breaking your design down to abstraction levels, and making sure that the use of Rx operators is scoped in just one level. When I talk about abstraction levels, I mean something similar to the OSI Model, only in the same application's code.
The most important thing, in my book, is to not take the design standpoint of "Let's create something that's going to be used and scattered all across the system, and so we need to make sure we do it just once and just right, for all the years to come". I'm more of a "Let's make this abstraction layer produce the minimal API necessary for other layers to currently achieve their goals".
About the simplicity of both of your designs, it's actually hard to judge since Foo and Bar don't tell me much about use cases, and hence readability factors (which are, by the way, different from one use case to another).
In the first design the consumer of our interfaces has the whole power of Rx at his/her fingertips and can perform any Rx operators. One of us claims this is an advantage and the other claims that this is a drawback.
I would agree with the availability of Rx as an advantage. Listing some reasons why it is a drawback could help with determining how to address them. Some advantages I see are:
As Yam and Christoph both touched on, IObservable/IObserver is in mscorlib as of .NET 4.0, so it will (hopefully) become a standard concept that everyone will immediately understand, like events or IEnumerable.
The operators of Rx. Once you need to compose, filter, or otherwise manipulate potentially multiple streams, these become very helpful. You will probably find yourself redoing this work in some form with your own interfaces.
The contract of Rx. The Rx library enforces a well-defined contract and does as much of the enforcing of that contract as it can. Even when you need to make your own operators, Observable.Create will do the work to enforce the contract (which is why implementing IObservable directly is not recommended by the Rx team).
The Rx library has good ways to ensure you end up on the right thread when needed.
I've written my share of operators where the library doesn't cover my case.
The second design allows us to use any publisher/subscriber architecture under the hood. The first design ties us to Rx.
I fail to see how the choice to expose Rx has much, if any, influence on how you implement the architecture under the hood any more than using your own interfaces would. I would assert that you should not be inventing new pub/sub architectures unless absolutely necessary.
Further, the Rx library may have operators that will simplify the "under the hood" parts.
If we wish to use the power of Rx, it requires more work in the second design because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.
Yes and no. The first thing I would think if I saw the second design is: "That's almost like IObservable; let's write some extension methods to convert the interfaces." The glue code is written once, used everywhere.
The glue code is straightforward, but if you think you will use Rx, just expose IObservable and save yourself the hassle.
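A sketch of that one-time glue code (DelegateSubscriber is a hypothetical helper; and since the question's interfaces offer no unsubscription, the returned disposable can't detach anything):

using System;
using System.Reactive.Disposables;
using System.Reactive.Linq;

public static class PublisherExtensions
{
    private class DelegateSubscriber<T> : ISubscriber<T>
    {
        public DelegateSubscriber(Action<T> callback) { Callback = callback; }
        public Action<T> Callback { get; private set; }
    }

    public static IObservable<T> ToObservable<T>(this IPublisher<T> publisher)
    {
        return Observable.Create<T>(observer =>
        {
            publisher.Subscribe(new DelegateSubscriber<T>(observer.OnNext));
            return Disposable.Empty; // no way to unsubscribe from IPublisher<T>
        });
    }
}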
Further Considerations
Basically, your alternate design differs in 3 key ways from IObservable/IObserver.
There is no way to unsubscribe. This may just be an oversight when copying to the question. If not, it's something to strongly consider adding if you go that route.
There is no defined path for errors to flow downstream (eg IObserver.OnError).
There is no way to indicate the completion of a stream (eg IObserver.OnCompleted). This is only relevant if your underlying data is intended to have a termination point.
Your alternate design also returns the callback as an action rather than having it as a method on the interface, but I don't think the distinction is important.
The Rx library encourages a functional approach. Your FooSubsetsToBarConverter class would be better suited as an extension method to IObservable<Foo> that returns IObservable<Bar>. This reduces clutter slightly (why make a class with one property when a function will do fine) and fits better with the chain-style composition of the rest of the Rx library. You could apply the same approach to the alternate interfaces, but without the operators to help, it may be more difficult.
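A sketch of that shape (the Buffer call below is a stand-in for whatever grouping criterion actually applies, and it assumes a Bar can be built from a list of Foos):

using System;
using System.Reactive.Linq;

public static class FooObservableExtensions
{
    public static IObservable<Bar> ToBars(this IObservable<Foo> foos)
    {
        return foos
            .Buffer(TimeSpan.FromSeconds(1))     // hypothetical grouping criterion
            .Select(subset => new Bar(subset));  // assumes such a constructor exists
    }
}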
Another alternative could be:
interface IObservableFooSource : IFooSource
{
    IObservable<Foo> FooArrivals { get; }
}

class FooSource : IObservableFooSource
{
    // Implement the interface explicitly
    IObservable<Foo> IObservableFooSource.FooArrivals
    {
        get
        {
            // ...
        }
    }
}
This way only clients that expect an IObservableFooSource will see the RX-specific methods, those that expect an IFooSource or a FooSource won't.
When it comes to designing classes and "communication" between them, I always try to design them in such a way that all object construction and composition take place in the object constructor. I don't like the idea of object construction and composition taking place from outside, i.e. other objects setting properties and calling methods on my object to initialize it. This gets especially ugly when multiple objects try to do this to your object and you never know in what order your properties/methods will be called.
Unfortunately I stumble on such situations quite often, especially now with the growing popularity of dependency injection frameworks: lots of libraries and frameworks rely on some kind of external object initialization, and quite often require not only constructor injection on our object but property injection too.
My questions are:
Is it OK to have objects that rely on some method or property to be called on them, after which they can consider themselves initialized?
Is there some kind of pattern for situations when your object is acting as a receiver and must support multiple interfaces that call into it, and the order of these calls matters? (Something better than setting flags like ThisWasDone, ThatWasCalled.)
Is it OK to have objects that rely on some method or property to be called on them, after which they can consider themselves initialized?
No. Init methods are a pain, since there is no guarantee that they will get called. A simple solution is to switch to interfaces and use the factory or builder pattern to compose the implementation.
Mark Seemann has written an article about it: http://blog.ploeh.dk/2011/05/24/DesignSmellTemporalCoupling.aspx
Is there some kind of pattern for situations when your object is acting as a receiver and must support multiple interfaces that call into it, and the order of these calls matters? (Something better than setting flags like ThisWasDone, ThatWasCalled.)
Builder pattern.
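A minimal sketch of the idea (all names here are hypothetical): the builder absorbs the out-of-order configuration calls, and the real object is only ever constructed fully initialized:

using System;

public class Receiver
{
    public Receiver(string endpoint, int port)
    {
        // fully initialized here; no Init method needed afterwards
    }
}

public class ReceiverBuilder
{
    private string endpoint;
    private int? port;

    public ReceiverBuilder WithEndpoint(string value) { endpoint = value; return this; }
    public ReceiverBuilder WithPort(int value) { port = value; return this; }

    public Receiver Build()
    {
        if (endpoint == null || port == null)
            throw new InvalidOperationException("Builder is not fully configured.");

        return new Receiver(endpoint, port.Value);
    }
}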
I think it is OK, but there are implications. If this is an object to be used by others, you need to ensure that an exception is thrown any time a method or property is set or accessed when the initialization should already have been called but hasn't been.
Obviously it is much more convenient and intuitive if you can take care of this in the constructor; then you don't have to implement these checks.
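A sketch of those guard checks (the names here are hypothetical):

using System;

public class Worker
{
    private bool initialized;

    public void Initialize()
    {
        // ... acquire resources ...
        initialized = true;
    }

    public void DoWork()
    {
        if (!initialized)
            throw new InvalidOperationException("Call Initialize() before using this object.");

        // ... actual work ...
    }
}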
I don't see anything wrong in this. It may be not so convenient, but you cannot ALWAYS use initialization in the ctor, just like you cannot always drive under a green light. These are decisions that you make based on your app requirements.
It's OK. Imagine if your object, for example, needs to read data from a TCP stream or a file that could be absent or corrupted. Raising an exception from a ctor is bad.
It's OK. If you think, for example, about some DSL language compiler of yours, it can look like:
A) find all global variables and check whether their memory allocation sum satisfies your device requirements
B) parse for errors
C) check for self-cycling
And so on...
Hope this helps.
Answering (1)
Why not? An engine needs the driver, who must insert the key and then power the car on. Will a car detect its current speed if the engine is stopped? Will it show the remaining oil without being powered on?
Some programming goals won't be able to have their actors initialized during object construction, and this isn't because it's an improper way of doing things but because it's the natural, regular and/or semantically sensible way of representing their whole behavior.
Answering (2)
A decent class-usage documentation will be your best friend. As with the answer to (1), there are some things in this world that have to be done in a certain order to get them right, and that's not a problem but a requirement.
Checking objects' state using flags isn't a problem either; it's a good way of adding reliability to your object models, because both their own behaviors and the consumers of those behaviors will be aware of whether things got done as expected or not.
First of all, Factory Method.
public class MyClass
{
    private MyClass()
    {
    }

    public static MyClass Create()
    {
        return new MyClass();
    }
}
Second of all, why do you not want another class creating an object for you? (Factory)
public class MyThingFactory
{
    public IThing CreateThing(Speed speed)
    {
        if (speed == Speed.Fast)
        {
            return new FastThing();
        }

        return new SlowThing();
    }
}
Third, why do multiple classes have side effects on new instances of your class? Don't you have declarative control over what other classes have access to your object?
I have an app which consists of several different assemblies, one of which holds the various interfaces which the classes obey, and by which the classes communicate across assembly boundaries. There are several classes firing events, and several which are interested in these events.
My question is as follows: is it good practice to implement a central EventConsolidator of some kind? This would be highly coupled, as it would need to know every class (or at least interface) throwing an event, and every consumer of an event would need to have a reference to EventConsolidator in order to subscribe.
Currently I have the situation where class A knows class B (but not C), class B knows class C, etc. Then if C fires an event B needs to pick it up and fire its own event in order for A to respond. These kinds of chains can get quite long, and it may be that B is only interested in the event in order to pass it along. I don't want A to know about C though, as that would break encapsulation.
What is good practice in this situation? Centralise the events, or grin and bear it and define events in each intermediate class? Or what are the criteria by which to make the decision? Thanks!
Edit: Here is another question asking essentially the same thing.
You could put the event itself in an interface, so that A didn't need to know about C directly, but only that it has the relevant event. However, perhaps you mean that the instance of A doesn't have sight of an instance of C...
I would try to steer clear of a centralised event system. It's likely to make testing harder, and it introduces tight coupling, as you said.
One pattern which is worth knowing about is making event proxying simple. If B only exposes an event to proxy it to C, you can do:
public event FooHandler Foo
{
    add
    {
        c.Foo += value;
    }
    remove
    {
        c.Foo -= value;
    }
}
That way it's proxying the subscription/unsubscription rather than the act of raising the event. This has an impact on GC eligibility, of course - which may be beneficial or not, depending on the situation. Worth thinking about though.
What you could try is using the event brokering of either Ninject or the Unity Application Block.
This allows you to, for example:
[Publish("foo://happened")]
public event EventHandler<FooArgs> FooHappened;
[Subscribe("foo://happened")]
public void Foo_Happened(object sender, FooArgs args)
{ }
If both objects are created through the container the events will be hooked up automatically.
I'd probably try to massage the domain so that each class can directly depend on the appropriate event source. What I mean is asking the question: why doesn't A know about C? Is there perhaps a D waiting to emerge?
As an alternative approach you could consider an event broker architecture. It means observers don't know directly about the source. Here's an interesting video.
This would be highly coupled, as it would need to know every class
I think you answered your own question if you consider that coupling is bad! Passing events through a chain of potential handlers is a fairly common pattern in many environments; it may not be the most efficient approach, but it avoids the complexity that your suggested approach would involve.
Another approach you could take is to use a message dispatcher. This involves using a common message format (or at least a common message header format) to represent events, and then placing those messages into a queue. A dispatcher then picks up each of those events in turn (or based on some prioritisation), and routes them directly to the required handler. Each handler must be registered with the dispatcher at startup.
A message in this case could simply be a class with a few specific fields at the start. The specific message could simply be a derivative, or you could pass your message-specific data as an 'object' parameter along with the message header.
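A minimal sketch of such a dispatcher (all names here are hypothetical): handlers register against a topic from the common header at startup, and the dispatcher drains the queue and routes each message directly:

using System;
using System.Collections.Generic;

public class Message
{
    public string Topic;    // common header field used for routing
    public object Payload;  // message-specific data
}

public class Dispatcher
{
    private readonly Dictionary<string, Action<Message>> handlers =
        new Dictionary<string, Action<Message>>();
    private readonly Queue<Message> queue = new Queue<Message>();

    // each handler registers with the dispatcher at startup
    public void Register(string topic, Action<Message> handler)
    {
        handlers[topic] = handler;
    }

    public void Post(Message message)
    {
        queue.Enqueue(message);
    }

    // picks up each message in turn and routes it to the required handler
    public void Pump()
    {
        while (queue.Count > 0)
        {
            var message = queue.Dequeue();
            Action<Message> handler;
            if (handlers.TryGetValue(message.Topic, out handler))
                handler(message);
        }
    }
}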
You can check out the EventBroker object in the M$ patterns & practices lib if you want centralised events.
Personally I think it's better to think about your architecture instead, and even though we use the EventBroker here, none of our new code uses it and we're hoping to phase it out one sunny day.
We have our own event broker implementation (open source).
Tutorial at: http://sourceforge.net/apps/mediawiki/bbvcommon/index.php?title=Event_Broker
And a performance analysis at: www.planetgeek.ch/2009/07/12/event-broker-performance/
Advantages compared to CAB:
- better logging
- extension support
- better error handling
- extendable handlers (UI, Background Thread, ...)
and some more I cannot recall right now.
Cheers,
Urs