I am aware that a thousand and one questions relating to this topic have been asked, but I have gone through at least a dozen and am still not connecting the dots. I am trying to set up dependency injection for entity contexts.
I have always created my entity context as I have seen in the MS tutorials, like so:
public class UserController : Controller
{
private DbEntities db = new DbEntities();
}
Recent reading has told me that this is no longer (if it ever was) the best practice, and that a dependency injection approach should be used. Ninject is mentioned often, but I am not seeing how you move from what I have to the example given in the Ninject documentation.
It should look like this when I am done, right?
public class UserController : Controller
{
private DbEntities db;
public UserController(DbEntities context)
{
db = context;
}
}
The documentation starts out with "In the previous step we already prepared anything that is necessary for controller injection." which is confusing as hell, since the previous step was installation. I used the Nuget method to install, but I don't know what it means when it says "Now load your modules or define bindings in RegisterServices method." How do I do that, and is entity a module or a binding? The documentation feels so sparse.
I am sorry if I skipped over something critical in the docs, I've been bouncing between forums for hours trying to figure out this one step.
I used the Nuget method to install, but I don't know what it means
when it says "Now load your modules or define bindings in
RegisterServices method." How do I do that, and is entity a module or
a binding?
The Nuget installation actually does quite a lot of things for you already. The most important thing is that it sets up Ninject as controller factory, which means that Ninject will create your controllers and is able to pass in all dependencies you have registered with it.
If you check the App_Start folder you will find a file NinjectMVC3.cs. There is already an empty method RegisterServices() which you can use to register your dependencies.
For your example you must be able to resolve DbEntities. The easiest, most basic way to do this is:
kernel.Bind<DbEntities>().ToSelf();
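In context, that line goes inside the RegisterServices() method mentioned above. A rough sketch of what the generated NinjectMVC3.cs contains (the surrounding boilerplate may differ slightly between package versions):

private static void RegisterServices(IKernel kernel)
{
    // Whenever a controller asks for DbEntities, Ninject will create one for you.
    kernel.Bind<DbEntities>().ToSelf();
}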
That said, you really should pass an interface into your controller so that the controller does not depend on Entity Framework directly; using abstractions and registering a concrete class against them in the IoC container is one of the main reasons for dependency injection.
This should give you a start - the documentation you link to seems a bit outdated. I would recommend looking at the Ninject MVC3 sample on github.
Dependency Injection can seem confusing at first, but it's actually quite simple.
A Dependency Injection "container" is basically a generic factory, with various object lifetime management features. Ninject, in particular, uses the syntax kernel.Bind() to configure this factory. When you say kernel.Bind<DbEntities>().ToSelf() this means that Ninject will create an instance of the bound type (DbEntities in this case) whenever that type is requested. This request typically looks like this:
var entities = kernel.Get<DbEntities>(); // Note: don't do this, just an example
At its core, this is what Dependency Injection is: a generic factory that can instantiate arbitrary types.
However, there is a lot more to it than that. One nice feature of Dependency Injection is that it will also instantiate any dependent types in the process. So, suppose you have a controller, and that controller has a dependency on DbEntities. Well, when a controller is instantiated by the DI Framework, it will also instantiate the dependent DbEntities. See the code below. When the MyController is instantiated, the DbEntities will automatically get instantiated (assuming you have bound the DbEntities class to self in the DI Configuration)
var controller = kernel.Get<MyController>();
public class MyController : Controller {
private DbEntities _entities;
public MyController(DbEntities entities) {
_entities = entities;
}
}
This is recursive. Any class that gets instantiated has its own dependencies instantiated as well, and so on, until finally everything has what it needs to do its job.
Now, the great thing about MVC is that it has a built-in way to use any DI container automatically. You don't have to call kernel.Get because the framework does it for you when it creates the controllers as requests come in. This hook is called IDependencyResolver, an interface that the MVC framework uses to allow third-party DI containers to be plugged in.
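To make the hook concrete, here is a rough sketch of what a Ninject-backed resolver might look like (for illustration only; as noted below, the Ninject.MVC3 package wires an equivalent up for you, so you normally never write this yourself):

using System;
using System.Collections.Generic;
using System.Web.Mvc;
using Ninject;

public class NinjectDependencyResolver : IDependencyResolver
{
    private readonly IKernel kernel;

    public NinjectDependencyResolver(IKernel kernel)
    {
        this.kernel = kernel;
    }

    // MVC calls this for every controller (and other service) it needs to create.
    public object GetService(Type serviceType)
    {
        return kernel.TryGet(serviceType);
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return kernel.GetAll(serviceType);
    }
}

// Registered once at application startup:
// DependencyResolver.SetResolver(new NinjectDependencyResolver(kernel));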
If you install Ninject by using the Nuget package Ninject.MVC3 then it will automatically configure all this for you, and you need only add your bindings to the RegisterServices() section of NinjectMVC3.cs
There's a lot more to it than this, but it should give you a basic understanding. Dependency Injection allows you to forget about the details of managing when objects are created and destroyed: you just specify in your constructor which dependencies you need, and assuming you have bindings for them in your configuration, MVC will take care of creating and destroying them for you. You just use them.
EDIT:
To be clear, I don't recommend you use the examples I give above. They are just simple illustrations of how DI works. In particular, the Get() syntax is known as "Service Location" and is considered bad practice. However, ultimately some code somewhere must call Get(); it's just buried deep in the framework.
As Adam mentions, binding directly to the data entities context isn't a great idea, and you should eventually move to using an interface based approach.
I would never inject a concrete type here - you are directly coupling to a data access implementation.
Instead, bind to IDbContext (or IUnitOfWork). These are interfaces you define yourself, backed by a concrete implementation built on DbContext; this way you can easily abstract away the technology you are using, making it more swappable, testable, maintainable, etc.
For example:
http://blogs.planetcloud.co.uk/mygreatdiscovery/post/EF-Code-First-Common-Practices.aspx#disqus_thread
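A minimal sketch of that idea, assuming Entity Framework's DbContext and a hypothetical User entity (the interface shape is entirely up to you, not a standard API):

using System;
using System.Data.Entity;

public interface IDbContext : IDisposable
{
    IDbSet<User> Users { get; }   // expose only the sets and members your controllers actually need
    int SaveChanges();
}

public class DbEntities : DbContext, IDbContext
{
    public IDbSet<User> Users
    {
        get { return Set<User>(); }
    }
}

// And in RegisterServices():
// kernel.Bind<IDbContext>().To<DbEntities>();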
Related
I'm pondering the design of a C# library, that will have several different high level functions. Of course, those high-level functions will be implemented using the SOLID class design principles as much as possible. As such, there will probably be classes intended for consumers to use directly on a regular basis, and "support classes" that are dependencies of those more common "end user" classes.
The question is, what is the best way to design the library so it is:
DI Agnostic - Although adding basic "support" for one or two of the common DI libraries (StructureMap, Ninject, etc) seems reasonable, I want consumers to be able to use the library with any DI framework.
Non-DI usable - If a consumer of the library is using no DI, the library should still be as easy to use as possible, reducing the amount of work a user has to do to create all these "unimportant" dependencies just to get to the "real" classes they want to use.
My current thinking is to provide a few "DI registration modules" for the common DI libraries (e.g. a StructureMap registry, a Ninject module), and a set of Factory classes that are non-DI and contain the coupling to those few factories.
Thoughts?
This is actually simple to do once you understand that DI is about patterns and principles, not technology.
To design the API in a DI Container-agnostic way, follow these general principles:
Program to an interface, not an implementation
This principle is actually a quote (from memory though) from Design Patterns, but it should always be your real goal. DI is just a means to achieve that end.
Apply the Hollywood Principle
The Hollywood Principle in DI terms says: Don't call the DI Container, it'll call you.
Never directly ask for a dependency by calling a container from within your code. Ask for it implicitly by using Constructor Injection.
Use Constructor Injection
When you need a dependency, ask for it statically through the constructor:
public class Service : IService
{
private readonly ISomeDependency dep;
public Service(ISomeDependency dep)
{
if (dep == null)
{
throw new ArgumentNullException("dep");
}
this.dep = dep;
}
public ISomeDependency Dependency
{
get { return this.dep; }
}
}
Notice how the Service class guarantees its invariants. Once an instance is created, the dependency is guaranteed to be available because of the combination of the Guard Clause and the readonly keyword.
Use Abstract Factory if you need a short-lived object
Dependencies injected with Constructor Injection tend to be long-lived, but sometimes you need a short-lived object, or to construct the dependency based on a value known only at run-time.
See this for more information.
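A small sketch of the Abstract Factory idea, with made-up names purely for illustration:

// The factory is the long-lived dependency; the objects it creates are short-lived.
public interface IConnectionFactory
{
    IConnection Create(string address); // 'address' is only known at run-time
}

public interface IConnection : IDisposable
{
    void Send(string message);
}

public class Notifier
{
    private readonly IConnectionFactory factory;

    public Notifier(IConnectionFactory factory)
    {
        if (factory == null)
        {
            throw new ArgumentNullException("factory");
        }
        this.factory = factory;
    }

    public void Notify(string address, string message)
    {
        // Create, use and dispose a short-lived instance per call.
        using (var connection = this.factory.Create(address))
        {
            connection.Send(message);
        }
    }
}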
Compose only at the Last Responsible Moment
Keep objects decoupled until the very end. Normally, you can wait and wire everything up in the application's entry point. This is called the Composition Root.
More details here:
Where should I do Injection with Ninject 2+ (and how do I arrange my Modules?)
Design - Where should objects be registered when using Windsor
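A minimal sketch of a Composition Root in a console application, reusing the Service/ISomeDependency names from the snippet above (SomeConcreteDependency is a hypothetical implementation):

public static class Program
{
    public static void Main(string[] args)
    {
        // All the wiring happens here, at the application's entry point, and nowhere else.
        ISomeDependency dependency = new SomeConcreteDependency();
        IService service = new Service(dependency);

        // ... use service ...
    }
}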
Simplify using a Facade
If you feel that the resulting API becomes too complex for novice users, you can always provide a few Facade classes that encapsulate common dependency combinations.
To provide a flexible Facade with a high degree of discoverability, you could consider providing Fluent Builders. Something like this:
public class MyFacade
{
private IMyDependency dep;
public MyFacade()
{
this.dep = new DefaultDependency();
}
public MyFacade WithDependency(IMyDependency dependency)
{
this.dep = dependency;
return this;
}
public Foo CreateFoo()
{
return new Foo(this.dep);
}
}
This would allow a user to create a default Foo by writing
var foo = new MyFacade().CreateFoo();
It would, however, be very discoverable that it's possible to supply a custom dependency, and you could write
var foo = new MyFacade().WithDependency(new CustomDependency()).CreateFoo();
If you imagine that the MyFacade class encapsulates a lot of different dependencies, I hope it's clear how it would provide proper defaults while still making extensibility discoverable.
FWIW, long after writing this answer, I expanded upon the concepts herein and wrote a longer blog post about DI-Friendly Libraries, and a companion post about DI-Friendly Frameworks.
The term "dependency injection" doesn't specifically have anything to do with an IoC container at all, even though you tend to see them mentioned together. It simply means that instead of writing your code like this:
public class Service
{
public Service()
{
}
public void DoSomething()
{
SqlConnection connection = new SqlConnection("some connection string");
WindowsIdentity identity = WindowsIdentity.GetCurrent();
// Do something with connection and identity variables
}
}
You write it like this:
public class Service
{
public Service(IDbConnection connection, IIdentity identity)
{
this.Connection = connection;
this.Identity = identity;
}
public void DoSomething()
{
// Do something with Connection and Identity properties
}
protected IDbConnection Connection { get; private set; }
protected IIdentity Identity { get; private set; }
}
That is, you do two things when you write your code:
Rely on interfaces instead of classes whenever you think that the implementation might need to be changed;
Instead of creating instances of these interfaces inside a class, pass them as constructor arguments (alternatively, they could be assigned to public properties; the former is constructor injection, the latter is property injection).
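A tiny sketch of the property (setter) injection variant just mentioned, with an illustrative class name:

public class AuditLogger
{
    // Property injection: the caller assigns the dependency instead of the class creating it.
    public IIdentity Identity { get; set; }

    public void Log(string message)
    {
        Console.WriteLine(Identity.Name + ": " + message);
    }
}

// Usage:
// var logger = new AuditLogger();
// logger.Identity = WindowsIdentity.GetCurrent();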
None of this presupposes the existence of any DI library, and it doesn't really make the code any more difficult to write without one.
If you're looking for an example of this, look no further than the .NET Framework itself:
List<T> implements IList<T>. If you design your class to use IList<T> (or IEnumerable<T>), you can take advantage of concepts like lazy-loading, as Linq to SQL, Linq to Entities, and NHibernate all do behind the scenes, usually through property injection. Some framework classes actually accept an IList<T> as a constructor argument, such as BindingList<T>, which is used for several data binding features.
Linq to SQL and EF are built entirely around the IDbConnection and related interfaces, which can be passed in via the public constructors. You don't need to use them, though; the default constructors work just fine with a connection string sitting in a configuration file somewhere.
If you ever work on WinForms components you deal with "services", like INameCreationService or IExtenderProviderService. You don't even really know what the concrete classes are. .NET actually has its own IoC container, IContainer, which gets used for this, and the Component class has a GetService method which is the actual service locator. Of course, nothing prevents you from using any or all of these interfaces without the IContainer or that particular locator. The services themselves are only loosely coupled with the container.
Contracts in WCF are built entirely around interfaces. The actual concrete service class is usually referenced by name in a configuration file, which is essentially DI. Many people don't realize this but it is entirely possible to swap out this configuration system with another IoC container. Perhaps more interestingly, the service behaviors are all instances of IServiceBehavior which can be added later. Again, you could easily wire this into an IoC container and have it pick the relevant behaviors, but the feature is completely usable without one.
And so on and so forth. You'll find DI all over the place in .NET, it's just that normally it's done so seamlessly that you don't even think of it as DI.
If you want to design your DI-enabled library for maximum usability then the best suggestion is probably to supply your own default IoC implementation using a lightweight container. IContainer is a great choice for this because it's a part of the .NET Framework itself.
EDIT 2015: time has passed, I realize now that this whole thing was a huge mistake. IoC containers are terrible and DI is a very poor way to deal with side effects. Effectively, all of the answers here (and the question itself) are to be avoided. Simply be aware of side effects, separate them from pure code, and everything else either falls into place or is irrelevant and unnecessary complexity.
Original answer follows:
I had to face this same decision while developing SolrNet. I started with the goal of being DI-friendly and container-agnostic, but as I added more and more internal components, the internal factories quickly became unmanageable and the resulting library was inflexible.
I ended up writing my own very simple embedded IoC container while also providing a Windsor facility and a Ninject module. Integrating the library with other containers is just a matter of properly wiring the components, so I could easily integrate it with Autofac, Unity, StructureMap, whatever.
The downside of this is that I lost the ability to just new up the service. I also took a dependency on CommonServiceLocator which I could have avoided (I might refactor it out in the future) to make the embedded container easier to implement.
More details in this blog post.
MassTransit seems to rely on something similar. It has an IObjectBuilder interface which is really CommonServiceLocator's IServiceLocator with a couple more methods, then it implements this for each container, i.e. NinjectObjectBuilder and a regular module/facility, i.e. MassTransitModule. Then it relies on IObjectBuilder to instantiate what it needs. This is a valid approach of course, but personally I don't like it very much since it's actually passing around the container too much, using it as a service locator.
MonoRail implements its own container as well, which implements good old IServiceProvider. This container is used throughout this framework through an interface that exposes well-known services. To get the concrete container, it has a built-in service provider locator. The Windsor facility points this service provider locator to Windsor, making it the selected service provider.
Bottom line: there is no perfect solution. As with any design decision, this issue demands a balance between flexibility, maintainability and convenience.
What I would do is design my library in a DI container-agnostic way to limit the dependency on the container as much as possible. This allows you to swap one DI container out for another if need be.
Then expose the layer above the DI logic to the users of the library so that they can use whatever framework you chose through your interface. This way they can still use the DI functionality you exposed, and they are free to use any other framework for their own purposes.
Allowing the users of the library to plug in their own DI framework seems a bit wrong to me, as it dramatically increases the amount of maintenance. It also becomes more of a plugin environment than straight DI.
In my app, I create a dbcontext for every call and dependency inject it through Ninject.
I've been thinking of creating a singleton class (ContextManager - BaseController would set the context on every request) to make the context available from everywhere, thus allowing all the services to share the same context. This would furthermore make it easy to, for example, disable proxy creation, etc., seeing as the context is managed from one place only.
However, seeing as the object is a singleton, the context would be overwritten on each request, which won't work for me (I don't want multiple requests sharing a single context).
What would be the best way to do this (preferably keeping a single context per request, ONLY in request scope)?
What you're describing is not a Singleton, but a Request-Scoped object. ASP.NET MVC has strong support for dependency injection, and you should allow your DI bindings to determine where the context comes from, rather than instantiating it yourself. Ninject has binding syntax to support this. I think it goes:
Bind<DataContext>().ToSelf().InRequestScope();
As long as you are using good Dependency-Injection patterns consistently, this should cause the same DataContext instance to be passed to every dependency you have within the same request.
The advantage to relying on Dependency Injection for construction of your context is that if you want to change details like disabling change tracking on the context, you can simply change your DI binding to use a custom method or factory, and the rest of your code doesn't have to change at all.
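For example, a sketch of moving that kind of tweak into the binding itself, assuming DataContext derives from EF's DbContext (Configuration.ProxyCreationEnabled is the standard EF flag):

kernel.Bind<DataContext>()
      .ToMethod(ctx =>
      {
          var db = new DataContext();
          db.Configuration.ProxyCreationEnabled = false; // example: disable proxy creation in one place
          return db;
      })
      .InRequestScope();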
Singleton is not the right approach here, but this is not too hard to implement, by simply instantiating your data context in your controller and injecting it into your service classes, e.g.:
public class SomeController {
private DataContext _context;
private SomeService _service;
public SomeController() {
_context = ...InstantiateContext();
_service = new SomeService(_context);
}
}
This also makes it relatively simple to inject your context into your controller if you wish to unit test it. You can also dispose of your context relatively simply by coding this into the Dispose method of the controller class (as noted above, a base controller class may be useful).
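For instance, a sketch of that Dispose override (Controller already implements IDisposable, so you only override the protected method):

protected override void Dispose(bool disposing)
{
    if (disposing && _context != null)
    {
        _context.Dispose();
    }
    base.Dispose(disposing);
}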
A singleton carries some persistent state - this is anathema to unit testing and will ultimately give you difficulties in your code.
I always see people talking about using frameworks like Ninject, Unity, or Windsor to do dependency resolution and injection. Take the following code for example:
public class ProductsController : ApiController
{
private IProductRepository _repository;
public ProductsController(IProductRepository repository)
{
_repository = repository;
}
}
My question is: why can't we simply write as:
public class ProductsController : ApiController
{
private IProductRepository _repository;
public ProductsController() :this(null)
{}
public ProductsController(IProductRepository repository)
{
_repository = repository ?? new ProductRepository();
}
}
In that case it seems we don't need any framework; even for unit tests we can easily mock.
So what's the real purpose for those framework?
Thanks in advance!
In that case your ProductsController still depends on a low-level component (the concrete ProductRepository), which is a violation of the Dependency Inversion Principle. Whether or not this is a problem depends on multiple factors, but it causes the following problems:
The creation of the ProductRepository is still duplicated throughout the application, forcing you to make sweeping changes whenever the constructor of ProductRepository changes (assuming ProductRepository is used in more places, which is quite reasonable); this is an Open/Closed Principle violation.
It causes you to make sweeping changes whenever you decide to wrap this ProductRepository with a decorator or interceptor that adds cross-cutting concerns (such as logging, audit trailing, or security filtering), since you surely don't want to repeat that code throughout all your repositories (again an OCP violation).
It forces the ProductsController to know about the ProductRepository, which might be a problem depending on the size and complexity of the application you are writing.
So this is not about the use of frameworks; it's about applying software design principles. If you decide to adhere to these principles to make your application more maintainable, frameworks like Ninject, Autofac and Simple Injector can help you make the startup path of your application more maintainable. But nothing is preventing you from applying these principles without the use of any tool or library.
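For completeness, when you do use a container, the controller keeps its single constructor and the concrete type is named in exactly one place. In Ninject syntax the registration might be as small as:

kernel.Bind<IProductRepository>().To<ProductRepository>();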
Small disclaimer: I'm an avid Unity user, and here are my 2 cents.
1st: Violation of SOLID (SRP/OCP/DIP)
As already stated by #democodemonkey and #thumbmunkeys, you couple the two classes tightly. Let's say that some classes (call them ProductsThingamajigOne and ProductsThingamajigTwo) are using the ProductsController through its default constructor. What if the architect decides that the system should not use a ProductRepository that saves products to files, but should use a database or cloud storage instead? What would the impact be on those classes?
2nd: What if the ProductRepository needs another dependency?
If the repository is based on a database, you might need to provide it with a connection string. If it's based on files, you might need to provide it with a settings class giving the exact path of where to save the files. The truth is that, in general, applications tend to contain dependency trees (A depends on B and C, B depends on D, C depends on E, D depends on F and G, and so on) with more than two levels, so the SOLID violations hurt more, as more code has to be changed to perform a given task. Even before that, can you imagine the code that would create the whole app by hand?
The fact is, classes can have many dependencies of their own - and in this case, the issues described earlier multiply.
That's usually the job of the Bootstrapper - it defines the dependency structure, and performs (usually) a single resolve that brings the whole system up, like a puppet on a string.
3rd: What if the Dependency-Tree is not a tree, but a Graph?
Consider the following case: Class A dependent on classes B and C, B and C both are dependent on class D, and are expecting to use the same instance of D. A common practice was to make D a singleton, but that could cause a lot of issues. The other option is to pass an instance of D into the constructor of A, and have it create B and C, or pass instances of B and C to A and create them outside - and the complexity goes on and on.
4th: Packing (Assemblies)
Your code assumes that ProductsController can see ProductRepository (assembly-wise). What if there's no reference between them? The assembly map can be non-trivial. Usually the bootstrapping code (I'm assuming it's in code and not in a configuration file for a second here) is written in an assembly that references the entire solution. (This was also described by #Steven.)
5th: Cool stuff you can do with IoC containers
Singletons are made easy (with Unity: simply use a ContainerControlledLifetimeManager when registering).
Lazy instantiation is made really easy (with Unity: register a mapping for T and ask in the constructor for a Func<T>; both are sketched below).
Those are just a couple of examples of things that IoC containers give you for (almost) free.
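A rough sketch of those two Unity features (type names are reused from the question; ReportBuilder is made up purely for illustration):

using System;
using Microsoft.Practices.Unity;

public class ReportBuilder
{
    private readonly Func<IProductRepository> repositoryFactory;

    // Unity can inject Func<T> automatically, which gives you lazy instantiation for free.
    public ReportBuilder(Func<IProductRepository> repositoryFactory)
    {
        this.repositoryFactory = repositoryFactory;
    }
}

public static class Bootstrapper
{
    public static IUnityContainer Configure()
    {
        var container = new UnityContainer();

        // Singleton: the same ProductRepository instance is returned for every resolve.
        container.RegisterType<IProductRepository, ProductRepository>(
            new ContainerControlledLifetimeManager());

        return container;
    }
}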
Of course you could do that, but this would cause the following issues:
The dependency on IProductRepository is not explicit anymore; it looks like an optional dependency.
Other parts of the code might instantiate a different implementation of IProductRepository, which would probably be a problem in this case.
The ProductsController becomes tightly coupled to ProductRepository, as it internally creates the dependency.
In my opinion this is not a question about a framework. The point is to make modules composable by exposing their dependencies in a constructor or property. Your example somewhat obfuscates that.
If class ProductRepository is not defined in the same assembly as ProductsController (or if you would ever want to move it to a different assembly) then you have just introduced a dependency that you don't want.
That's an anti-pattern described as "Bastard Injection" in the seminal book "Dependency Injection in .NET" by Mark Seemann.
However, if ProductRepository is ALWAYS going to be in the same assembly as ProductsController and if it does not depend on anything that the rest of the ProductsController assembly depends upon, it could be a local default - in which case it would be ok.
From the class names, I'm betting that such a dependency SHOULD NOT be introduced, and you are looking at bastard injection.
Here ProductsController is responsible for creating the ProductRepository.
What happens if ProductRepository requires an additional parameter in its constructor? Then ProductsController will have to change, and this violates the SRP.
It also adds more complexity to all of your objects.
And it makes it unclear whether a caller needs to pass the child object or whether it is optional.
The main purpose is to decouple object creation from its usage or consumption. The creation of the object is "usually" taken care of by factory classes. In your case, the factory classes would be designed to return an object of a type that implements the IProductRepository interface.
In some frameworks, like Spring.NET, the factory classes instantiate objects that are declaratively described in the configuration (i.e. in app.config or web.config), making the program totally independent of the objects it needs to create. This can be quite powerful at times.
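Without a framework, a hand-rolled version of such a factory might look something like this (purely illustrative):

public static class ProductRepositoryFactory
{
    public static IProductRepository Create()
    {
        // The concrete type is chosen in exactly one place; callers only ever see the interface.
        return new ProductRepository();
    }
}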
It is important to understand that dependency injection and inversion of control containers are not the same thing. You can use dependency injection without IoC frameworks like Unity, Ninject, etc., performing the injection manually in what is often called Poor Man's DI.
In my blog I recently wrote a post about this issue
http://xurxodeveloper.blogspot.com.es/2014/09/Inyeccion-de-Dependencias-DI.html
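A minimal sketch of Poor Man's DI with the classes from the question, composing everything by hand in the startup code:

// Somewhere in the composition root / startup code:
IProductRepository repository = new ProductRepository();
var controller = new ProductsController(repository); // the controller itself only knows the interface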
Going back to your example, I see weaknesses in the implementation:
1 - ProductsController depends on a concretion and not an abstraction, a SOLID violation.
2 - If the interface and the repository live in different projects, you'd be forced to reference the project where the repository is located.
3 - If in the future you need to add a parameter to the repository's constructor, you would have to modify the controller even though it is simply a client of the repository.
4 - Controller and repository can be developed by different programmers; the controller programmer would have to know how to create the repository.
Consider this use case:
Suppose that, in the future, you want to inject a CustomProductRepository instead of ProductRepository into ProductsController, in software that is already deployed at a client site.
With Spring.NET you can just update the Spring configuration file (XML) to use your CustomProductRepository. You can therefore avoid re-compiling and re-installing the software at the client site, since you have not modified any code.
This may seem obvious to most people, but I'm just trying to confirm that Dependency Injection (DI) relies on the use of Interfaces.
More specifically, in the case of a class which has a certain Interface as a parameter in its constructor or a certain Interface defined as a property (aka. Setter), the DI framework can hand over an instance of a concrete class to satisfy the needs of that Interface in that class. (Apologies if this description is not clear. I'm having trouble describing this properly because the terminology/concepts are still somewhat new to me.)
The reason I ask is that I currently have a class that has a dependency of sorts. Not so much an object dependency, but a URL. The class looks like this [C#]:
using System.Web.Services.Protocols;
public partial class SomeLibraryService : SoapHttpClientProtocol
{
public SomeLibraryService()
{
this.Url = "http://MyDomainName.com:8080/library-service/jse";
}
}
The SoapHttpClientProtocol class has a Public property called Url (which is a plain old "string") and the constructor here initializes it to a hard-coded value.
Could I possibly use a DI framework to inject a different value at construction? I'm thinking not since this.Url isn't any sort of Interface; it's a String.
[Incidentally, the code above was "auto-generated by wsdl", according to the comments in the code I'm working with. So I don't particularly want to change this code, although I don't see myself re-generating it either. So maybe changing this code is fine.]
I could see myself making an alternate constructor that takes a string as a parameter and initializes this.Url that way, but I'm not sure that's the correct approach regarding keeping loosely coupled separation of concerns. (SoC)
Any advice for this situation?
DI really just means a class won't construct its external dependencies and will not manage the lifetime of those dependencies. Dependencies can be injected either via the constructor or via a method parameter. Interfaces or abstract types are commonly used to clarify the contract the consumer expects from its dependency, but simple types can be injected as well in some cases.
For example, a class in a library might call HttpContext.Current internally, which makes arbitrary assumptions about the application the code will be hosted in. A DI version of the library method would expect an HttpContext instance to be injected via a parameter, etc.
It's not required to use interfaces -- you could use concrete types or abstract base classes. But many of the advantages of DI (such as being able to change the implementation of a dependency) come when using interfaces.
Castle Windsor (the DI framework I know best) allows you to map objects in the IoC container to interfaces, or to just names, which would work in your case.
Dependency Injection is a way of organizing your code. Maybe some of your confusion comes from the fact that there is not one official way to do it. It can be achieved using "regular" c# code , or by using a framework like Castle Windsor. Sometimes (often?) this involves using interfaces. No matter how it is achieved, the big picture goal of DI is usually to make your code easier to test and easier to modify later on.
If you were to inject the URL in your example via a constructor, that could be considered "manual" DI. The Wikipedia article on DI has more examples of manual vs framework DI.
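As a concrete sketch of that manual approach: because the wsdl-generated class is declared partial, you could add an overloaded constructor in a separate, hand-written file without touching the generated code (the file name below is only illustrative; the default constructor and its hard-coded URL stay as they are):

// SomeLibraryService.custom.cs -- hand-written, sits next to the generated file
public partial class SomeLibraryService
{
    public SomeLibraryService(string url)
    {
        this.Url = url; // the URL is now supplied by whoever constructs the service
    }
}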
I would like to answer with a focus on using interfaces in .NET applications. Polymorphism in .NET can be achieved through virtual or abstract methods, or interfaces.
In all cases, there is a method signature with no implementation at all or an implementation that can be overridden.
The 'contract' of a function (or even a property) is defined, but how the method is implemented - the logical guts of the method - can differ at runtime, determined by which subclass is instantiated and passed in to the method or constructor, or set on a property (the act of 'injection').
The official .NET type design guidelines advocate using abstract base classes over interfaces since they have better options for evolving them after shipping, can include convenience overloads and are better able to self-document and communicate correct usage to implementers.
However, care must be taken not to add any logic. The temptation to do so has burned people in the past, so many developers use interfaces instead - and many others use interfaces simply because that's what the programmers sitting around them do.
It's also interesting to point out that while DI itself is rarely over-used, using a framework to perform the injection is quite often over-used, to the detriment of increased complexity: a chain reaction can take place where more and more types are needed in the container even though they are never 'switched'.
IoC frameworks should be used sparingly, usually only when you need to swap out objects at runtime, according to the environment or configuration. This usually means switching major component "seams" in the application such as the repository objects used to abstract your data layer.
For me, the real power of an IoC framework is to switch implementation in places where you have no control over creation. For example, in ASP.NET MVC, the creation of the controller class is performed by the ASP.NET framework, so injecting anything is impossible. The ASP.NET framework has some hooks that IoC frameworks can use to 'get in-between' the creation process and perform their magic.
Luke