I've been looking for a specific solution for AOP logging. I need an interception mechanism that makes it possible to do something like this:
[MyCustomLogging("someParameter")]
The thing is, I've seen examples in other DI frameworks that make this possible. But my project already uses Autofac for DI, and I don't know if it's a good idea to mix it with Unity (for example). In Autofac.Extras.DynamicProxy2 the class InterceptAttribute is sealed.
Does anyone have an idea for this problem?
P.S.: I would be satisfied with this:
[Intercept(typeof(MyLoggingClass), "anotherParameter")]
Although using attributes to enrich types with metadata that cross-cutting concerns can consume isn't bad, using attributes to mark classes or methods so that some cross-cutting concern runs on them usually is.
Marking code with an attribute the way you showed has some serious downsides:
It makes your code dependent on the interception library you use, which makes the code harder to change and external libraries harder to replace. The number of dependencies that the core of your application has on external libraries should be kept to an absolute minimum. It would be ironic if your code were littered with dependencies on the Dependency Injection library: the very tool that is meant to minimize external dependencies and increase loose coupling.
To apply cross-cutting concerns to a wide range of classes (which is what you usually want to do), you will have to go through the complete code base to add or remove attributes from methods. This is time consuming and error prone. But even worse, making sure that aspects are run in a particular order is hard with attributes. Some frameworks allow you to specify an order to the attribute (using some sort of Order property), but changing the order means making sweeping changes through the code to change the Order of the attributes. Forgetting one will cause bugs. This is a violation of the Open/closed principle.
Since the attribute references the aspect class (in your example typeof(MyLoggingClass)), your code is still statically dependent on the cross-cutting concern. Replacing the class with another will again force you to make sweeping changes to your code base, and keeping the hard dependency makes it much harder to reuse code or to decide at runtime or deployment time whether the aspect should be applied. In many cases you can't have this dependency from your code to the aspect, because the code lives in a base library, while the aspect is specific to the application framework. For instance, you might have the same business logic running both in a web application and in a Windows service. When run in a web application, you want to log in a different way. In other words, you are violating the Dependency Inversion Principle.
I therefore consider applying attributes this way bad practice.
Instead of using attributes like this, apply cross-cutting concerns transparently, using either interception or decorators. Decorators are my preferred approach, because their use is much cleaner, simpler, and therefore more maintainable. Decorators can be written without taking a dependency on any external library, and they can therefore be placed in any suitable place in your application. The downside of decorators, however, is that they are very cumbersome to write and apply when your design isn't SOLID and DRY or you're not following the Reused Abstraction Principle.
But with the right application design, using SOLID and message-based patterns, you'll find that applying a cross-cutting concern such as logging is just a matter of writing a very simple decorator:
public class LoggingCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private readonly ILogger logger;
    private readonly ICommandHandler<T> decoratee;

    public LoggingCommandHandlerDecorator(ILogger logger, ICommandHandler<T> decoratee) {
        this.logger = logger;
        this.decoratee = decoratee;
    }

    public void Handle(T command) {
        this.logger.Log("Handling {0}. Data: {1}", typeof(T).Name,
            JsonConvert.SerializeObject(command));
        this.decoratee.Handle(command);
    }
}
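Since the question mentions Autofac: wiring this decorator up can stay in a single place, the container registration. A sketch, assuming Autofac 4.9+ (which added `RegisterGenericDecorator` in this form) and that the handlers live in the same assembly as the decorator:

```csharp
using Autofac;

var builder = new ContainerBuilder();

// Register every concrete command handler by its closed ICommandHandler<T> interface.
builder.RegisterAssemblyTypes(typeof(LoggingCommandHandlerDecorator<>).Assembly)
       .AsClosedTypesOf(typeof(ICommandHandler<>));

// Wrap each resolved handler in the logging decorator -- no attributes anywhere.
builder.RegisterGenericDecorator(
    typeof(LoggingCommandHandlerDecorator<>),
    typeof(ICommandHandler<>));

var container = builder.Build();
```

Adding or removing the cross-cutting concern is now a one-line change in the composition root instead of a sweep through the code base.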
Without a proper design, you can still use interception (without attributes), because interception allows you to 'decorate' any types that seem to have no relationship in code (share no common interface). Defining which types to intercept and which not can be cumbersome, but you will usually still be able to define this in one place of the application, thus without having to make sweeping changes throughout the code base.
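As a sketch of what attribute-free interception can look like with Autofac.Extras.DynamicProxy and Castle DynamicProxy (CallLogger, IOrderService, and OrderService are hypothetical names for illustration):

```csharp
using System;
using Autofac;
using Autofac.Extras.DynamicProxy;
using Castle.DynamicProxy;

// A Castle DynamicProxy interceptor: the intercepted classes never reference it.
public class CallLogger : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Calling {0}", invocation.Method.Name);
        invocation.Proceed();
    }
}

// Registration -- the only place in the code base that knows about interception:
var builder = new ContainerBuilder();
builder.RegisterType<CallLogger>();
builder.RegisterType<OrderService>()
       .As<IOrderService>()
       .EnableInterfaceInterceptors()
       .InterceptedBy(typeof(CallLogger));
```

The decision of which types get intercepted lives entirely in the registration code, not in attributes scattered over the classes themselves.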
Side note: as I said, using attributes to describe pure metadata is fine and preferable. For instance, take some code that is only allowed to run for users with certain permissions. You can mark that code as follows:
[Permission(Permissions.Crm.ManageCompanies)]
public class BlockCompany : ICommand {
    public Guid CompanyId;
}
This attribute does not describe what aspects are run, nor does it reference any types from an external library (the PermissionAttribute is something you can (and should) define yourself), or any AOP-specific types. It solely enriches the code with metadata.
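One possible shape for such a self-defined attribute (an assumption for illustration; the Guid is passed as a string because attribute arguments must be compile-time constants, and the PermissionId property matches what the decorator later in this answer reads):

```csharp
using System;

// Pure metadata: no AOP or DI types are referenced anywhere.
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public sealed class PermissionAttribute : Attribute
{
    public Guid PermissionId { get; }

    public PermissionAttribute(string permissionId)
    {
        this.PermissionId = Guid.Parse(permissionId);
    }
}

// The constants used at the attribute's call site could then look like:
public static class Permissions
{
    public static class Crm
    {
        // example value, not a real permission id
        public const string ManageCompanies = "f2a2bfbd-4d91-44c4-a04e-5b7151b6d77b";
    }
}
```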
In the end, you obviously want to apply some cross-cutting concern that checks whether the current user has the right permissions, but the attribute doesn't force you into a specific direction. With the attribute above, I could imagine the decorator to look as follows:
public class PermissionCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private static readonly Guid requiredPermissionId =
        typeof(T).GetCustomAttribute<PermissionAttribute>().PermissionId;

    private readonly IUserPermissionChecker checker;
    private readonly ICommandHandler<T> decoratee;

    public PermissionCommandHandlerDecorator(IUserPermissionChecker checker,
        ICommandHandler<T> decoratee) {
        this.checker = checker;
        this.decoratee = decoratee;
    }

    public void Handle(T command) {
        this.checker.CheckPermission(requiredPermissionId);
        this.decoratee.Handle(command);
    }
}
Related
Do you think it might be reasonable to replace my service layer or service classes with MediatR? For example, my service classes look like this:
public interface IEntityService<TEntityDto> where TEntityDto : class, IDto
{
    Task<TEntityDto> CreateAsync(TEntityDto entityDto);
    Task<bool> DeleteAsync(int id);
    Task<IEnumerable<TEntityDto>> GetAllAsync(SieveModel sieveModel);
    Task<TEntityDto> GetByIdAsync(int id);
    Task<TEntityDto> UpdateAsync(int id, TEntityDto entityDto);
}
I want to achieve some sort of modular design so other dynamically loaded modules or plugins can write their own notification or command handlers for my main core application.
Currently, my application is not event-driven at all and there's no easy way for my dynamically loaded plugins to communicate.
I can either incorporate MediatR in my controllers, removing the service layer completely, or use it with my service layer, just publishing notifications so my plugins can handle them.
Currently, my logic is mostly CRUD, but there's a lot of custom logic going on before creating, updating, and deleting.
Possible replacement of my service would look like:
public class CommandHandler : IRequestHandler<CreateCommand, Response>,
                              IRequestHandler<UpdateCommand, Response>,
                              IRequestHandler<DeleteCommand, bool>
{
    private readonly DbContext _dbContext;

    public CommandHandler(DbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public Task<Response> Handle(CreateCommand request, CancellationToken cancellationToken)
    {
        //...
    }

    public Task<Response> Handle(UpdateCommand request, CancellationToken cancellationToken)
    {
        //...
    }

    public Task<bool> Handle(DeleteCommand request, CancellationToken cancellationToken)
    {
        //...
    }
}
Would that be a wrong thing to do?
Basically, I'm struggling with what to choose for my logic flow:
Controller -> Service -> MediatR -> Notification handlers -> Repository
Controller -> MediatR -> Command handlers -> Repository
It seems like with MediatR I can't have a single model for Create, Update, and Delete, so to re-use one I'd need to derive requests like:
public class CreateRequest : MyDto, IRequest<MyDto> { }
public class UpdateRequest : MyDto, IRequest<MyDto> { }
or embed it in my command like:
public class CreateRequest : IRequest<MyDto>
{
    public MyDto MyDto { get; set; }
}
One advantage of MediatR is the ability to plug logic in and plug it out easily which seems like a nice fit for modular architecture but still, I'm a bit confused how to shape my architecture with it.
Update: I'm preserving the answer, but my position on this has changed somewhat as indicated in this blog post.
If you have a class, let's say an API controller, and it depends on
IRequestHandler<CreateCommand, Response>
What is the benefit of changing your class so that it depends on IMediator,
and instead of calling
return requestHandler.Handle(request);
it calls
return mediator.Send(request);
The result is that instead of injecting the dependency we need, we inject a service locator which in turn resolves the dependency we need.
Quoting Mark Seemann's article,
In short, the problem with Service Locator is that it hides a class' dependencies, causing run-time errors instead of compile-time errors, as well as making the code more difficult to maintain because it becomes unclear when you would be introducing a breaking change.
It's not exactly the same as
var commandHandler = serviceLocator.Resolve<IRequestHandler<CreateCommand, Response>>();
return commandHandler.Handle(request);
because the mediator is limited to resolving command and query handlers, but it's close. It's still a single interface that provides access to lots of other ones.
It makes code harder to navigate
After we introduce IMediator, our class still indirectly depends on IRequestHandler<CreateCommand, Response>. The difference is that now we can't tell by looking at it. We can't navigate from the interface to its implementations. We might reason that we can still follow the dependencies if we know what to look for - that is, if we know the conventions of command handler interface names. But that's not nearly as helpful as a class actually declaring what it depends on.
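For contrast, a sketch of what the explicit dependency looks like: the class declares exactly which handler it needs (OrdersController is a hypothetical consumer; CreateCommand and Response are the types from the question above):

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// The dependency is visible in the signature, and "Go to implementation"
// on the interface leads straight to the handler.
public class OrdersController
{
    private readonly IRequestHandler<CreateCommand, Response> handler;

    public OrdersController(IRequestHandler<CreateCommand, Response> handler)
    {
        this.handler = handler;
    }

    public Task<Response> Post(CreateCommand command, CancellationToken ct)
    {
        return this.handler.Handle(command, ct);
    }
}
```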
Sure, we get the benefit of having interfaces wired up to concrete implementations without writing the code, but the savings are trivial and we'll likely lose whatever time we save because of the added (if minor) difficulty of navigating the code. And there are libraries which will register those dependencies for us anyway while still allowing us to inject abstraction we actually depend on.
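As one example of such a library, Scrutor can scan assemblies and register handlers against their own interfaces, so we keep auto-registration without hiding the dependency behind IMediator. A sketch, assuming Microsoft.Extensions.DependencyInjection plus the Scrutor package, with CommandHandler being the handler class from the question:

```csharp
using MediatR;
using Microsoft.Extensions.DependencyInjection;

// Registers every class implementing IRequestHandler<,> by its closed
// interface(s); consumers can then inject the specific handler they need.
services.Scan(scan => scan
    .FromAssemblyOf<CommandHandler>()
    .AddClasses(c => c.AssignableTo(typeof(IRequestHandler<,>)))
    .AsImplementedInterfaces()
    .WithTransientLifetime());
```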
It's a weird, skewed way of depending on abstractions
It's been suggested that using a mediator assists with implementing the decorator pattern. But again, we already gain that ability by depending on an abstraction. We can use one implementation of an interface or another that adds a decorator. The point of depending on abstractions is that we can change such implementation details without changing the abstraction.
To elaborate: The point of depending on ISomethingSpecific is that we can change or replace the implementation without modifying the classes that depend on it. But if we say, "I want to change the implementation of ISomethingSpecific (by adding a decorator), so to accomplish that I'm going to change the classes that depend on ISomethingSpecific, which were working just fine, and make them depend on some generic, all-purpose interface", then something has gone wrong. There are numerous other ways to add decorators without modifying parts of our code that don't need to change.
Yes, using IMediator promotes loose coupling. But we already accomplished that by using well-defined abstractions. Adding layer upon layer of indirection doesn't multiply that benefit. If you've got enough abstraction that it's easy to write unit tests, you've got enough.
Vague dependencies make it easier to violate the Single Responsibility Principle
Suppose you have a class for placing orders, and it depends on ICommandHandler<PlaceOrderCommand>. What happens if someone tries to sneak in something that doesn't belong there, like a command to update user data? They'll have to add a new dependency, ICommandHandler<ChangeUserAddressCommand>. What happens if they want to keep piling more stuff into that class, violating the SRP? They'll have to keep adding more dependencies. That doesn't prevent them from doing it, but at least it shines a light on what's happening.
On the other hand, what if you can add all sorts of random stuff into a class without adding more dependencies? The class depends on an abstraction that can do anything. It can place orders, update addresses, request sales history, whatever, and all without adding a single new dependency. That's the same problem you get if you inject an IoC container into a class where it doesn't belong. It's a single class or interface that can be used to request all sorts of dependencies. It's a service locator.
IMediator doesn't cause SRP violations, and its absence won't prevent them. But explicit, specific dependencies guide us away from such violations.
The Mediator Pattern
Curiously, using MediatR usually has nothing to do with the mediator pattern. The mediator pattern promotes loose coupling by having objects interact with a mediator rather than directly with each other. If we're already depending on an abstraction like ICommandHandler, then the tight coupling that the mediator pattern prevents doesn't exist in the first place.
The mediator pattern also encapsulates complex operations so that they appear simpler from the outside.
return mediator.Send(request);
is not simpler than
return requestHandler.Handle(request);
The complexity of the two interactions is identical. Nothing is "mediated." Imagine that you're about to swipe your credit card at the grocery store, and then someone offers to simplify your complex interaction by leading you to another register where you do exactly the same thing.
What about CQRS?
A mediator is neutral when it comes to CQRS (unless we have two separate mediators, like ICommandMediator and IQueryMediator.) It seems counterproductive to separate our command handlers from our query handlers and then inject a single interface which in effect brings them back together and exposes all of our commands and queries in one place. At the very least it's hard to say that it helps us to keep them separate.
IMediator is used to invoke command and query handlers, but it has nothing to do with the extent to which they are segregated. If they were segregated before we added a mediator, they still are. If our query handler does something it shouldn't, the mediator will still happily invoke it.
I hope it doesn't sound like a mediator ran over my dog. But it's certainly not a silver bullet that sprinkles CQRS on our code or even necessarily improves our architecture.
We should ask, what are the benefits? What undesirable consequences could it have? Do I need that tool, or can I obtain the benefits I want without those consequences?
What I am asserting is that once we're already depending on abstractions, further steps to "hide" a class's dependencies usually add no value. They make it harder to read and understand, and erode our ability to detect and prevent other code smells.
Partly this was answered here: MediatR when and why I should use it? vs 2017 webapi
The biggest benefit of using MediatR (or MicroBus, or any other mediator implementation) is isolating and/or segregating your logic (one of the reasons it's a popular way to implement CQRS) and providing a good foundation for the decorator pattern (similar to ASP.NET Core MVC filters). Since MediatR 3.0 there's built-in support for this (see Behaviors) instead of using IoC decorators.
You can use the decorator pattern with services (classes like FooService) too, and you can use CQRS with services too (FooReadService, FooWriteService).
Other than that, it's opinion-based; use whatever achieves your goal. The end result shouldn't make any difference except for code maintenance.
Additional reading:
Baking Round Shaped Apps with MediatR
(which compares a custom mediator implementation with the one MediatR provides, and describes the porting process)
Is it good to handle multiple requests in a single handler?
I'm pondering the design of a C# library, that will have several different high level functions. Of course, those high-level functions will be implemented using the SOLID class design principles as much as possible. As such, there will probably be classes intended for consumers to use directly on a regular basis, and "support classes" that are dependencies of those more common "end user" classes.
The question is, what is the best way to design the library so it is:
DI Agnostic - Although adding basic "support" for one or two of the common DI libraries (StructureMap, Ninject, etc) seems reasonable, I want consumers to be able to use the library with any DI framework.
Non-DI usable - If a consumer of the library is using no DI, the library should still be as easy to use as possible, reducing the amount of work a user has to do to create all these "unimportant" dependencies just to get to the "real" classes they want to use.
My current thinking is to provide a few "DI registration modules" for the common DI libraries (e.g. a StructureMap registry, a Ninject module), and a set of Factory classes that are non-DI and contain the coupling to those few factories.
Thoughts?
This is actually simple to do once you understand that DI is about patterns and principles, not technology.
To design the API in a DI Container-agnostic way, follow these general principles:
Program to an interface, not an implementation
This principle is actually a quote (from memory though) from Design Patterns, but it should always be your real goal. DI is just a means to achieve that end.
Apply the Hollywood Principle
The Hollywood Principle in DI terms says: Don't call the DI Container, it'll call you.
Never directly ask for a dependency by calling a container from within your code. Ask for it implicitly by using Constructor Injection.
Use Constructor Injection
When you need a dependency, ask for it statically through the constructor:
public class Service : IService
{
    private readonly ISomeDependency dep;

    public Service(ISomeDependency dep)
    {
        if (dep == null)
        {
            throw new ArgumentNullException("dep");
        }

        this.dep = dep;
    }

    public ISomeDependency Dependency
    {
        get { return this.dep; }
    }
}
Notice how the Service class guarantees its invariants. Once an instance is created, the dependency is guaranteed to be available because of the combination of the Guard Clause and the readonly keyword.
Use Abstract Factory if you need a short-lived object
Dependencies injected with Constructor Injection tend to be long-lived, but sometimes you need a short-lived object, or to construct the dependency based on a value known only at run-time.
See this for more information.
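A minimal sketch of the Abstract Factory approach (IUnitOfWorkFactory, IUnitOfWork, OrderProcessor, and Order are hypothetical names; the assumption is that IUnitOfWork is IDisposable so each instance can be scoped to one operation):

```csharp
using System;

// The consumer asks the factory for a fresh, short-lived instance;
// only the factory implementation (wired in the Composition Root)
// knows how the instance is created.
public interface IUnitOfWorkFactory
{
    IUnitOfWork Create();
}

public class OrderProcessor
{
    private readonly IUnitOfWorkFactory factory;

    public OrderProcessor(IUnitOfWorkFactory factory)
    {
        this.factory = factory;
    }

    public void Process(Order order)
    {
        // The consumer controls the short lifetime of the dependency.
        using (IUnitOfWork uow = this.factory.Create())
        {
            uow.Save(order);
        }
    }
}
```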
Compose only at the Last Responsible Moment
Keep objects decoupled until the very end. Normally, you can wait and wire everything up in the application's entry point. This is called the Composition Root.
More details here:
Where should I do Injection with Ninject 2+ (and how do I arrange my Modules?)
Design - Where should objects be registered when using Windsor
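A sketch of a Composition Root in a console application, with no container at all ("Pure DI"); SomeDependency is a hypothetical concrete implementation of the ISomeDependency interface from the Service example above:

```csharp
public static class Program
{
    public static void Main(string[] args)
    {
        // All wiring happens here, and only here. The rest of the
        // application stays decoupled and container-agnostic.
        ISomeDependency dep = new SomeDependency();
        IService service = new Service(dep);

        // ...hand control to the composed object graph...
    }
}
```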
Simplify using a Facade
If you feel that the resulting API becomes too complex for novice users, you can always provide a few Facade classes that encapsulate common dependency combinations.
To provide a flexible Facade with a high degree of discoverability, you could consider providing Fluent Builders. Something like this:
public class MyFacade
{
    private IMyDependency dep;

    public MyFacade()
    {
        this.dep = new DefaultDependency();
    }

    public MyFacade WithDependency(IMyDependency dependency)
    {
        this.dep = dependency;
        return this;
    }

    public Foo CreateFoo()
    {
        return new Foo(this.dep);
    }
}
This would allow a user to create a default Foo by writing
var foo = new MyFacade().CreateFoo();
It would, however, be very discoverable that it's possible to supply a custom dependency, and you could write
var foo = new MyFacade().WithDependency(new CustomDependency()).CreateFoo();
If you imagine that the MyFacade class encapsulates a lot of different dependencies, I hope it's clear how it would provide proper defaults while still making extensibility discoverable.
FWIW, long after writing this answer, I expanded upon the concepts herein and wrote a longer blog post about DI-Friendly Libraries, and a companion post about DI-Friendly Frameworks.
The term "dependency injection" doesn't specifically have anything to do with an IoC container at all, even though you tend to see them mentioned together. It simply means that instead of writing your code like this:
public class Service
{
    public Service()
    {
    }

    public void DoSomething()
    {
        SqlConnection connection = new SqlConnection("some connection string");
        WindowsIdentity identity = WindowsIdentity.GetCurrent();
        // Do something with connection and identity variables
    }
}
You write it like this:
public class Service
{
    public Service(IDbConnection connection, IIdentity identity)
    {
        this.Connection = connection;
        this.Identity = identity;
    }

    public void DoSomething()
    {
        // Do something with Connection and Identity properties
    }

    protected IDbConnection Connection { get; private set; }
    protected IIdentity Identity { get; private set; }
}
That is, you do two things when you write your code:
Rely on interfaces instead of classes whenever you think that the implementation might need to be changed;
Instead of creating instances of these interfaces inside a class, pass them as constructor arguments (alternatively, they could be assigned to public properties; the former is constructor injection, the latter is property injection).
None of this presupposes the existence of any DI library, and it doesn't really make the code any more difficult to write without one.
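The two flavors mentioned above, side by side (ReportService and ExportService are hypothetical names for illustration):

```csharp
using System.Data;
using System.Security.Principal;

// Constructor injection: the dependency is mandatory and fixed
// for the lifetime of the object.
public class ReportService
{
    private readonly IDbConnection connection;

    public ReportService(IDbConnection connection)
    {
        this.connection = connection;
    }
}

// Property injection: the dependency can be supplied (or replaced)
// after construction, which suits optional dependencies that have
// a reasonable local default.
public class ExportService
{
    public IIdentity Identity { get; set; }
}
```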
If you're looking for an example of this, look no further than the .NET Framework itself:
List<T> implements IList<T>. If you design your class to use IList<T> (or IEnumerable<T>), you can take advantage of concepts like lazy-loading, as Linq to SQL, Linq to Entities, and NHibernate all do behind the scenes, usually through property injection. Some framework classes actually accept an IList<T> as a constructor argument, such as BindingList<T>, which is used for several data binding features.
Linq to SQL and EF are built entirely around the IDbConnection and related interfaces, which can be passed in via the public constructors. You don't need to use them, though; the default constructors work just fine with a connection string sitting in a configuration file somewhere.
If you ever work on WinForms components you deal with "services", like INameCreationService or IExtenderProviderService. You don't even really know what the concrete classes are. .NET actually has its own IoC container, IContainer, which gets used for this, and the Component class has a GetService method which is the actual service locator. Of course, nothing prevents you from using any or all of these interfaces without the IContainer or that particular locator. The services themselves are only loosely coupled with the container.
Contracts in WCF are built entirely around interfaces. The actual concrete service class is usually referenced by name in a configuration file, which is essentially DI. Many people don't realize this but it is entirely possible to swap out this configuration system with another IoC container. Perhaps more interestingly, the service behaviors are all instances of IServiceBehavior which can be added later. Again, you could easily wire this into an IoC container and have it pick the relevant behaviors, but the feature is completely usable without one.
And so on and so forth. You'll find DI all over the place in .NET, it's just that normally it's done so seamlessly that you don't even think of it as DI.
If you want to design your DI-enabled library for maximum usability then the best suggestion is probably to supply your own default IoC implementation using a lightweight container. IContainer is a great choice for this because it's a part of the .NET Framework itself.
EDIT 2015: time has passed, I realize now that this whole thing was a huge mistake. IoC containers are terrible and DI is a very poor way to deal with side effects. Effectively, all of the answers here (and the question itself) are to be avoided. Simply be aware of side effects, separate them from pure code, and everything else either falls into place or is irrelevant and unnecessary complexity.
Original answer follows:
I had to face this same decision while developing SolrNet. I started with the goal of being DI-friendly and container-agnostic, but as I added more and more internal components, the internal factories quickly became unmanageable and the resulting library was inflexible.
I ended up writing my own very simple embedded IoC container while also providing a Windsor facility and a Ninject module. Integrating the library with other containers is just a matter of properly wiring the components, so I could easily integrate it with Autofac, Unity, StructureMap, whatever.
The downside of this is that I lost the ability to just new up the service. I also took a dependency on CommonServiceLocator which I could have avoided (I might refactor it out in the future) to make the embedded container easier to implement.
More details in this blog post.
MassTransit seems to rely on something similar. It has an IObjectBuilder interface which is really CommonServiceLocator's IServiceLocator with a couple more methods, then it implements this for each container, i.e. NinjectObjectBuilder and a regular module/facility, i.e. MassTransitModule. Then it relies on IObjectBuilder to instantiate what it needs. This is a valid approach of course, but personally I don't like it very much since it's actually passing around the container too much, using it as a service locator.
MonoRail implements its own container as well, which implements good old IServiceProvider. This container is used throughout this framework through an interface that exposes well-known services. To get the concrete container, it has a built-in service provider locator. The Windsor facility points this service provider locator to Windsor, making it the selected service provider.
Bottom line: there is no perfect solution. As with any design decision, this issue demands a balance between flexibility, maintainability and convenience.
What I would do is design my library in a DI-container-agnostic way to limit the dependency on the container as much as possible. This allows swapping out one DI container for another if need be.
Then expose a layer above the DI logic to the users of the library, so that they can use whatever framework you chose through your interface. This way they can still use the DI functionality you exposed, and they are free to use any other framework for their own purposes.
Allowing the users of the library to plug in their own DI framework seems a bit wrong to me, as it dramatically increases the amount of maintenance. It also becomes more of a plugin environment than straight DI.
I always see people talking about using frameworks like Ninject, Unity, or Windsor to do dependency resolution and injection. Take the following code for example:
public class ProductsController : ApiController
{
    private IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }
}
My question is: why can't we simply write as:
public class ProductsController : ApiController
{
    private IProductRepository _repository;

    public ProductsController() : this(null)
    {
    }

    public ProductsController(IProductRepository repository)
    {
        _repository = repository ?? new ProductRepository();
    }
}
In that case it seems we don't need any framework, and even for unit tests we can easily mock.
So what's the real purpose for those framework?
Thanks in advance!
In that case your ProductsController still depends on a low-level component (the concrete ProductRepository in your case), which is a violation of the Dependency Inversion Principle. Whether or not this is a problem depends on multiple factors, but it causes the following issues:
The creation of the ProductRepository is still duplicated throughout the application, forcing you to make sweeping changes whenever the constructor of ProductRepository changes (assuming that ProductRepository is used in more places, which is quite reasonable). That would be an Open/Closed Principle violation.
It forces you to make sweeping changes whenever you decide to wrap this ProductRepository with a decorator or interceptor that adds cross-cutting concerns (such as logging, audit trailing, or security filtering), since you surely don't want to repeat that code throughout all your repositories (again an OCP violation).
It forces the ProductsController to know about the concrete ProductRepository, which might be a problem depending on the size and complexity of the application you are writing.
So this is not about the use of frameworks, it's about applying software design principles. If you decide to adhere to these principles to make your application more maintainable, the frameworks like Ninject, Autofac and Simple Injector can help you with making the startup path of your application more maintainable. But nothing is preventing you from applying these principles without the use of any tool or library.
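For illustration, a sketch of what that startup path can look like with a container; Autofac is used as the example here, and `RegisterApiControllers` is assumed to come from the Autofac.Integration.WebApi package:

```csharp
using Autofac;
using Autofac.Integration.WebApi;

var builder = new ContainerBuilder();

// One registration replaces every scattered "new ProductRepository()".
builder.RegisterType<ProductRepository>().As<IProductRepository>();

// Let the container construct controllers and their dependency graphs.
builder.RegisterApiControllers(typeof(ProductsController).Assembly);

var container = builder.Build();
```

The same composition could be done by hand with plain `new` calls in one place; the container just makes that startup path cheaper to maintain.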
Small disclaimer: I'm an avid Unity user, and here are my 2 cents.
1st: Violation of SOLID (SRP/OCP/DIP)
Already stated by @democodemonkey and @thumbmunkeys: you couple the two classes tightly. Let's say some classes (call them ProductsThingamajigOne and ProductsThingamajigTwo) use the ProductsController through its default constructor. What if the architect decides that the system should not use a ProductRepository that saves products into files, but should use a database or cloud storage instead? What would the impact be on those classes?
2nd: What if the ProductRepository needs another dependency?
If the repository is based on a database, you might need to provide it with a connection string. If it's based on files, you might need to provide it with a settings class giving the exact path where the files are saved. The truth is that, in general, applications tend to contain dependency trees (A depends on B and C, B depends on D, C depends on E, D depends on F and G, and so on) with more than two levels, so the SOLID violations hurt more, as more code has to be changed to perform a given task. And even before that, can you imagine the code that would create the whole app?
The fact is, classes can have many dependencies of their own, and in this case the issues described earlier multiply.
That's usually the job of the Bootstrapper: it defines the dependency structure and performs (usually) a single Resolve call that brings the whole system up, like a puppet on a string.
3rd: What if the Dependency-Tree is not a tree, but a Graph?
Consider the following case: class A depends on classes B and C, while B and C both depend on class D and expect to use the same instance of D. A common practice was to make D a singleton, but that can cause a lot of issues. The other option is to pass an instance of D into the constructor of A and have it create B and C, or to create B and C outside and pass those instances into A, and the complexity goes on and on.
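With a container, "B and C must share one D" becomes a lifetime declaration instead of a manual wiring exercise. A sketch in Autofac syntax (A, B, C, and D stand for the hypothetical classes in the example above):

```csharp
using Autofac;

var builder = new ContainerBuilder();
builder.RegisterType<D>().SingleInstance(); // one shared D, no hand-rolled singleton
builder.RegisterType<B>();
builder.RegisterType<C>();
builder.RegisterType<A>();

var container = builder.Build();
var a = container.Resolve<A>(); // the B and C inside receive the same D instance
```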
4th: Packing (Assemblies)
Your code assumes that ProductsController can see ProductRepository (assembly-wise). What if there's no reference between them? The assembly map can be non-trivial. Usually the bootstrapping code (assuming for a second that it's in code and not in a configuration file) is written in an assembly that references the entire solution. (This was also described by @Steven.)
5th: Cool stuff you can do with IoC containers
Singletons are made easy (with Unity: simply use a ContainerControlledLifetimeManager when registering),
Lazy instantiation is made really easy (with Unity: register a mapping and ask in the constructor for a Func<T>).
Those are just a couple of examples of things that IoC containers give you for (almost) free.
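To make those two features concrete, here is a self-contained, hand-rolled sketch of what the container automates (the type names are made up for illustration; with Unity you would get the same effect by registering with a ContainerControlledLifetimeManager and taking a Func<T> in the constructor):

```csharp
using System;

public interface IProductRepository { }
public class ProductRepository : IProductRepository { }

public class ReportService
{
    private readonly Func<IProductRepository> repositoryFactory;

    // Lazy instantiation: the repository is only created when the
    // Func is invoked, not when ReportService is constructed.
    public ReportService(Func<IProductRepository> repositoryFactory)
    {
        this.repositoryFactory = repositoryFactory;
    }

    public IProductRepository GetRepository()
    {
        return repositoryFactory();
    }
}

public static class ManualContainer
{
    // Singleton lifetime: every "resolve" hands out the same instance,
    // which is what ContainerControlledLifetimeManager does in Unity.
    private static readonly IProductRepository singleton =
        new ProductRepository();

    public static ReportService Resolve()
    {
        return new ReportService(() => singleton);
    }
}
```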
Of course you could do that, but this would cause the following issues:
The dependency on IProductRepository is no longer explicit; it looks like an optional dependency
Other parts of the code might instantiate a different implementation of IProductRepository, which would probably be a problem in this case
The class becomes tightly coupled to ProductsController as it internally creates a dependency
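The code the bullets above refer to is not shown here; a likely reconstruction of both variants (type names follow the example elsewhere in the discussion):

```csharp
public interface IProductRepository { }
public class ProductRepository : IProductRepository { }

// Problematic: the controller creates its own dependency. The
// dependency is hidden, and the class is tightly coupled to the
// concrete ProductRepository type.
public class ProductsControllerBad
{
    private readonly IProductRepository repository;

    public ProductsControllerBad()
    {
        this.repository = new ProductRepository();
    }
}

// Preferred: the dependency is explicit and required. The caller
// (or the container) decides which implementation to supply.
public class ProductsController
{
    private readonly IProductRepository repository;

    public ProductsController(IProductRepository repository)
    {
        this.repository = repository;
    }
}
```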
In my opinion this is not a question about a framework. The point is to make modules composable, by exposing their dependencies in a constructor or property. Your example somewhat obfuscates that.
If class ProductRepository is not defined in the same assembly as ProductsController (or if you would ever want to move it to a different assembly) then you have just introduced a dependency that you don't want.
That's an anti-pattern described as "Bastard Injection" in the seminal book "Dependency Injection in .NET" by Mark Seemann.
However, if ProductRepository is ALWAYS going to be in the same assembly as ProductsController and if it does not depend on anything that the rest of the ProductsController assembly depends upon, it could be a local default - in which case it would be ok.
From the class names, I'm betting that such a dependency SHOULD NOT be introduced, and you are looking at bastard injection.
Here ProductsController is responsible for creating the ProductRepository.
What happens if ProductRepository requires an additional parameter in its constructor? Then ProductsController will have to change, and this violates the SRP.
As well as adding more complexity to all of your objects.
As well as making it unclear whether a caller needs to pass the child object, or whether it is optional.
The main purpose is to decouple object creation from its usage or consumption. The creation of the object "usually" is taken care of by factory classes. In your case, the factory classes will be designed to return an object of a type which implements IProductRepository interface.
In some frameworks, like Spring.NET, the factory classes instantiate objects that are declaratively described in the configuration (i.e. in the app.config or web.config). This makes the program totally independent of the objects it needs to create, which can be quite powerful at times.
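A hedged sketch of the factory approach described above (the names are illustrative, not from the question); the factory is the only place that knows the concrete type:

```csharp
public interface IProductRepository { }
public class SqlProductRepository : IProductRepository { }

public static class ProductRepositoryFactory
{
    public static IProductRepository Create()
    {
        // The concrete type could also be read from configuration,
        // which is what Spring.NET does declaratively.
        return new SqlProductRepository();
    }
}
```

Consumers then depend only on IProductRepository and the factory, never on SqlProductRepository itself.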
It is important to distinguish between dependency injection and inversion of control: they are not the same thing. You can use dependency injection without an IoC framework like Unity or Ninject by performing the injection manually - what is often called Poor Man's DI.
I recently wrote a blog post about this issue:
http://xurxodeveloper.blogspot.com.es/2014/09/Inyeccion-de-Dependencias-DI.html
Going back to your example, I see weaknesses in the implementation.
1 - ProductsController depends on a concretion and not an abstraction, violating SOLID (the Dependency Inversion Principle).
2 - If the interface and the repository live in different projects, you'd be forced to add a reference to the project where the repository is located.
3 - If in the future you need to add a parameter to the repository's constructor, you would have to modify the controller, even though it is simply a client of the repository.
4 - Controller and repository can be developed by different programmers; the controller programmer would have to know how to create the repository.
Consider this use case:
Suppose, in the future, you want to inject a CustomProductRepository instead of ProductRepository into ProductsController, in software that is already deployed to a client site.
With Spring.NET you can just update the Spring configuration file (XML) to use your CustomProductRepository. You can thereby avoid re-compiling and re-installing the software on the client site, since you have not modified any code.
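A hedged sketch of what such a Spring.NET configuration entry could look like (namespace, type names, and assembly name are illustrative and written from memory; treat this as a shape, not a verified configuration):

```xml
<objects xmlns="http://www.springframework.net">

  <!-- Swap the concrete type here without recompiling. -->
  <object id="productRepository"
          type="MyApp.Repositories.CustomProductRepository, MyApp" />

  <object id="productsController"
          type="MyApp.Controllers.ProductsController, MyApp">
    <constructor-arg ref="productRepository" />
  </object>

</objects>
```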
If I have a class with a service that I want all derived classes to have access to (say a security object, or a repository) then I might do something like this:
public abstract class A
{
    static ISecurity _security;

    public ISecurity Security { get { return _security; } }

    public static void SetSecurity(ISecurity security) { _security = security; }
}

public class Bootstrapper
{
    public Bootstrapper()
    {
        A.SetSecurity(new Security());
    }
}
It seems like lately I see static properties being shunned everywhere as something to absolutely avoid. To me, this seems cleaner than adding an ISecurity parameter to the constructor of every single derived class I make. Given all I've read lately though, I'm left wondering:
Is this an acceptable application of dependency injection, or am I violating some major design principle that could come back to haunt me later? I am not doing unit tests at this point, so maybe if I were I would suddenly realize the answer to my question. To be honest, though, I probably won't change my design over that alone, but if there is some other important reason why I should change it then I very well might.
Edit: I made a couple stupid mistakes the first time I wrote that code... it's fixed now. Just thought I'd point that out in case anyone happened to notice :)
Edit: SWeko makes a good point about all deriving classes having to use the same implementation. In cases where I've used this design, the service is always a singleton so it effectively enforces an already existing requirement. Naturally, this would be a bad design if that weren't the case.
This design could be problematic for a couple of reasons.
You already mention unit testing, which is rather important. Such a static dependency can make testing much harder. Whenever the fake ISecurity has to be anything other than a Null Object implementation, you will find yourself having to remove the fake implementation during test teardown. Removing it during teardown prevents other tests from being influenced when you forget to remove the fake object, but teardown code makes your tests more complicated. Not that much more complicated on its own, but it adds up when many tests have teardown code, and you'll have a hard time finding a bug in your test suite when one test forgets to run its teardown. You will also have to make sure the registered ISecurity fake object is thread-safe and won't influence other tests that might run in parallel (test frameworks such as MSTest run tests in parallel for obvious performance reasons).
Another possible problem with injecting the dependency statically is that you force this ISecurity dependency to be a singleton (and probably thread-safe). This disallows, for instance, applying any interceptors or decorators that have a lifestyle other than singleton.
Another problem is that removing this dependency from the constructor disables any analysis or diagnostics that could be done by the DI framework on your behalf. Since you manually set this dependency, the framework has no knowledge about this dependency. In a sense you move the responsibility of managing dependencies back to the application logic, instead of allowing the Composition Root to be in control over the way dependencies are wired together. Now the application has to know that ISecurity is in fact thread-safe. This is a responsibility that in general belongs to the Composition Root.
The fact that you want to store this dependency in a base type might even be an indication of a violation of a general design principle: The Single Responsibility Principle (SRP). It has some resemblance with a design mistake I made myself in the past. I had a set of business operations that all inherited from a base class. This base class implemented all sorts of behavior, such as transaction management, logging, audit trailing, adding fault tolerance, and.... adding security checks. This base class became an unmanageable God Object. It was unmanageable, simply because it had too many responsibilities; it violated the SRP. Here's my story if you want to know more about this.
So instead of having this security concern (it's probably a cross-cutting concern) implemented in a base class, try removing the base class all together and use a decorator to add security to those classes. You can wrap each class with one or more decorators and each decorator can handle one specific concern. This makes each decorator class easy to follow because they will follow the SRP.
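A sketch of that decorator approach. The `ICommandHandler<T>` abstraction and all type names are illustrative, not from the question; the point is that the security check lives in its own small class that wraps the real operation:

```csharp
using System;

public interface ISecurity
{
    bool CurrentUserIsAllowedTo(string operation);
}

public class ShipOrder { }

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// The real business operation: no base class, no security logic.
public class ShipOrderHandler : ICommandHandler<ShipOrder>
{
    public bool Handled;

    public void Handle(ShipOrder command)
    {
        Handled = true; // real business logic would go here
    }
}

// The cross-cutting concern, applied by wrapping, not by inheriting.
public class SecurityCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;
    private readonly ISecurity security;

    public SecurityCommandHandlerDecorator(
        ICommandHandler<TCommand> decoratee, ISecurity security)
    {
        this.decoratee = decoratee;
        this.security = security;
    }

    public void Handle(TCommand command)
    {
        if (!security.CurrentUserIsAllowedTo(typeof(TCommand).Name))
            throw new UnauthorizedAccessException();

        decoratee.Handle(command);
    }
}

// Trivial stub used in the usage example below.
public class AllowAllSecurity : ISecurity
{
    public bool CurrentUserIsAllowedTo(string operation) { return true; }
}
```

The Composition Root decides which handlers get wrapped and in what order, so adding or removing the concern never touches the business classes.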
The problem is that this is not really dependency injection, even if it is encapsulated in the definition of the class. Admittedly,
static Security _security;
would be even worse than using the ISecurity interface, but still, the instances of A do not get to use whatever security the caller passes to them; they have to depend on the global setting of a static property.
What I'm trying to say is that your usage is not that different from:
public static class Globals
{
    public static ISecurity Security { get; set; }
}
I'm pondering the design of a C# library, that will have several different high level functions. Of course, those high-level functions will be implemented using the SOLID class design principles as much as possible. As such, there will probably be classes intended for consumers to use directly on a regular basis, and "support classes" that are dependencies of those more common "end user" classes.
The question is, what is the best way to design the library so it is:
DI Agnostic - Although adding basic "support" for one or two of the common DI libraries (StructureMap, Ninject, etc) seems reasonable, I want consumers to be able to use the library with any DI framework.
Non-DI usable - If a consumer of the library is using no DI, the library should still be as easy to use as possible, reducing the amount of work a user has to do to create all these "unimportant" dependencies just to get to the "real" classes they want to use.
My current thinking is to provide a few "DI registration modules" for the common DI libraries (e.g. a StructureMap registry, a Ninject module), and a set of factory classes that don't use DI and contain the coupling to those few factories.
Thoughts?
This is actually simple to do once you understand that DI is about patterns and principles, not technology.
To design the API in a DI Container-agnostic way, follow these general principles:
Program to an interface, not an implementation
This principle is actually a quote (from memory though) from Design Patterns, but it should always be your real goal. DI is just a means to achieve that end.
Apply the Hollywood Principle
The Hollywood Principle in DI terms says: Don't call the DI Container, it'll call you.
Never directly ask for a dependency by calling a container from within your code. Ask for it implicitly by using Constructor Injection.
Use Constructor Injection
When you need a dependency, ask for it statically through the constructor:
public class Service : IService
{
    private readonly ISomeDependency dep;

    public Service(ISomeDependency dep)
    {
        if (dep == null)
        {
            throw new ArgumentNullException("dep");
        }

        this.dep = dep;
    }

    public ISomeDependency Dependency
    {
        get { return this.dep; }
    }
}
Notice how the Service class guarantees its invariants. Once an instance is created, the dependency is guaranteed to be available because of the combination of the Guard Clause and the readonly keyword.
Use Abstract Factory if you need a short-lived object
Dependencies injected with Constructor Injection tend to be long-lived, but sometimes you need a short-lived object, or to construct the dependency based on a value known only at run-time.
See this for more information.
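A minimal sketch of the Abstract Factory idea, with illustrative names (none of them come from the text): the consumer takes a long-lived factory through its constructor and asks it for short-lived objects built from run-time values:

```csharp
public interface IOrderProcessor
{
    void Process();
}

public interface IOrderProcessorFactory
{
    // currencyCode is only known at run-time, so the consumer asks
    // the long-lived factory for a short-lived product on demand.
    IOrderProcessor Create(string currencyCode);
}

public class OrderService
{
    private readonly IOrderProcessorFactory factory;

    public OrderService(IOrderProcessorFactory factory)
    {
        this.factory = factory;
    }

    public void PlaceOrder(string currencyCode)
    {
        IOrderProcessor processor = factory.Create(currencyCode);
        processor.Process();
        // processor goes out of scope here: short-lived by design.
    }
}

// Trivial implementations for the usage example below.
public class OrderProcessor : IOrderProcessor
{
    public bool Processed;
    public void Process() { Processed = true; }
}

public class OrderProcessorFactory : IOrderProcessorFactory
{
    public OrderProcessor Last;

    public IOrderProcessor Create(string currencyCode)
    {
        Last = new OrderProcessor();
        return Last;
    }
}
```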
Compose only at the Last Responsible Moment
Keep objects decoupled until the very end. Normally, you can wait and wire everything up in the application's entry point. This is called the Composition Root.
More details here:
Where should I do Injection with Ninject 2+ (and how do I arrange my Modules?)
Design - Where should objects be registered when using Windsor
Simplify using a Facade
If you feel that the resulting API becomes too complex for novice users, you can always provide a few Facade classes that encapsulate common dependency combinations.
To provide a flexible Facade with a high degree of discoverability, you could consider providing Fluent Builders. Something like this:
public class MyFacade
{
    private IMyDependency dep;

    public MyFacade()
    {
        this.dep = new DefaultDependency();
    }

    public MyFacade WithDependency(IMyDependency dependency)
    {
        this.dep = dependency;
        return this;
    }

    public Foo CreateFoo()
    {
        return new Foo(this.dep);
    }
}
This would allow a user to create a default Foo by writing
var foo = new MyFacade().CreateFoo();
It would, however, be very discoverable that it's possible to supply a custom dependency, and you could write
var foo = new MyFacade().WithDependency(new CustomDependency()).CreateFoo();
If you imagine that the MyFacade class encapsulates a lot of different dependencies, I hope it's clear how it would provide proper defaults while still making extensibility discoverable.
FWIW, long after writing this answer, I expanded upon the concepts herein and wrote a longer blog post about DI-Friendly Libraries, and a companion post about DI-Friendly Frameworks.
The term "dependency injection" doesn't specifically have anything to do with an IoC container at all, even though you tend to see them mentioned together. It simply means that instead of writing your code like this:
public class Service
{
    public Service()
    {
    }

    public void DoSomething()
    {
        SqlConnection connection = new SqlConnection("some connection string");
        WindowsIdentity identity = WindowsIdentity.GetCurrent();

        // Do something with connection and identity variables
    }
}
You write it like this:
public class Service
{
    public Service(IDbConnection connection, IIdentity identity)
    {
        this.Connection = connection;
        this.Identity = identity;
    }

    public void DoSomething()
    {
        // Do something with Connection and Identity properties
    }

    protected IDbConnection Connection { get; private set; }
    protected IIdentity Identity { get; private set; }
}
That is, you do two things when you write your code:
Rely on interfaces instead of classes whenever you think that the implementation might need to be changed;
Instead of creating instances of these interfaces inside a class, pass them as constructor arguments (alternatively, they could be assigned to public properties; the former is constructor injection, the latter is property injection).
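The two injection styles can be sketched side by side; the `IMessageSink` interface and class names are made up for illustration:

```csharp
using System.Collections.Generic;

public interface IMessageSink
{
    void Send(string message);
}

// Constructor injection: the dependency is mandatory and explicit.
public class Notifier
{
    private readonly IMessageSink sink;

    public Notifier(IMessageSink sink)
    {
        this.sink = sink;
    }

    public void Notify(string message) { sink.Send(message); }
}

// Property injection: the dependency can be replaced after
// construction, which usually implies it is optional or that the
// class ships with a sensible local default.
public class ConfigurableNotifier
{
    public IMessageSink Sink { get; set; }

    public void Notify(string message)
    {
        if (Sink != null) Sink.Send(message);
    }
}

// Trivial sink used in the usage example below.
public class ListSink : IMessageSink
{
    public readonly List<string> Messages = new List<string>();
    public void Send(string message) { Messages.Add(message); }
}
```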
None of this presupposes the existence of any DI library, and it doesn't really make the code any more difficult to write without one.
If you're looking for an example of this, look no further than the .NET Framework itself:
List<T> implements IList<T>. If you design your class to use IList<T> (or IEnumerable<T>), you can take advantage of concepts like lazy-loading, as Linq to SQL, Linq to Entities, and NHibernate all do behind the scenes, usually through property injection. Some framework classes actually accept an IList<T> as a constructor argument, such as BindingList<T>, which is used for several data binding features.
Linq to SQL and EF are built entirely around the IDbConnection and related interfaces, which can be passed in via the public constructors. You don't need to use them, though; the default constructors work just fine with a connection string sitting in a configuration file somewhere.
If you ever work on WinForms components you deal with "services", like INameCreationService or IExtenderProviderService. You don't even really know what the concrete classes are. .NET actually has its own IoC container, IContainer, which gets used for this, and the Component class has a GetService method which is the actual service locator. Of course, nothing prevents you from using any or all of these interfaces without the IContainer or that particular locator. The services themselves are only loosely coupled with the container.
Contracts in WCF are built entirely around interfaces. The actual concrete service class is usually referenced by name in a configuration file, which is essentially DI. Many people don't realize this but it is entirely possible to swap out this configuration system with another IoC container. Perhaps more interestingly, the service behaviors are all instances of IServiceBehavior which can be added later. Again, you could easily wire this into an IoC container and have it pick the relevant behaviors, but the feature is completely usable without one.
And so on and so forth. You'll find DI all over the place in .NET, it's just that normally it's done so seamlessly that you don't even think of it as DI.
If you want to design your DI-enabled library for maximum usability then the best suggestion is probably to supply your own default IoC implementation using a lightweight container. IContainer is a great choice for this because it's a part of the .NET Framework itself.
EDIT 2015: time has passed, I realize now that this whole thing was a huge mistake. IoC containers are terrible and DI is a very poor way to deal with side effects. Effectively, all of the answers here (and the question itself) are to be avoided. Simply be aware of side effects, separate them from pure code, and everything else either falls into place or is irrelevant and unnecessary complexity.
Original answer follows:
I had to face this same decision while developing SolrNet. I started with the goal of being DI-friendly and container-agnostic, but as I added more and more internal components, the internal factories quickly became unmanageable and the resulting library was inflexible.
I ended up writing my own very simple embedded IoC container while also providing a Windsor facility and a Ninject module. Integrating the library with other containers is just a matter of properly wiring the components, so I could easily integrate it with Autofac, Unity, StructureMap, whatever.
The downside of this is that I lost the ability to just new up the service. I also took a dependency on CommonServiceLocator which I could have avoided (I might refactor it out in the future) to make the embedded container easier to implement.
More details in this blog post.
MassTransit seems to rely on something similar. It has an IObjectBuilder interface which is really CommonServiceLocator's IServiceLocator with a couple more methods, then it implements this for each container, i.e. NinjectObjectBuilder and a regular module/facility, i.e. MassTransitModule. Then it relies on IObjectBuilder to instantiate what it needs. This is a valid approach of course, but personally I don't like it very much since it's actually passing around the container too much, using it as a service locator.
MonoRail implements its own container as well, which implements good old IServiceProvider. This container is used throughout this framework through an interface that exposes well-known services. To get the concrete container, it has a built-in service provider locator. The Windsor facility points this service provider locator to Windsor, making it the selected service provider.
Bottom line: there is no perfect solution. As with any design decision, this issue demands a balance between flexibility, maintainability and convenience.
What I would do is design my library in a DI container-agnostic way to limit the dependency on the container as much as possible. This makes it possible to swap one DI container for another if need be.
Then expose the layer above the DI logic to the users of the library so that they can use whatever framework you chose through your interface. This way they can still use DI functionality that you exposed and they are free to use any other framework for their own purposes.
Allowing the users of the library to plug in their own DI framework seems a bit wrong to me, as it dramatically increases the amount of maintenance. It also turns the library into more of a plugin environment than straight DI.