Why do we need a framework to do the Dependency Resolver? - c#

I always see people talking about using frameworks like Ninject, Unity, or Windsor to do dependency resolution and injection. Take the following code for example:
public class ProductsController : ApiController
{
    private IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }
}
My question is: why can't we simply write as:
public class ProductsController : ApiController
{
    private IProductRepository _repository;

    public ProductsController() : this(null)
    {
    }

    public ProductsController(IProductRepository repository)
    {
        _repository = repository ?? new ProductRepository();
    }
}
In that case it seems we don't need any framework; even for unit tests we can easily mock.
So what's the real purpose for those framework?
Thanks in advance!

In that case your ProductsController still depends on a low level component (the concrete ProductRepository in your case) which is a violation of the Dependency Inversion Principle. Whether or not this is a problem depends on multiple factors, but it causes the following problems:
The creation of the ProductRepository is still duplicated throughout the application, causing you to make sweeping changes whenever the constructor of ProductRepository changes (assuming that ProductRepository is used in more places, which is quite reasonable). This would be an Open/Closed Principle violation.
It causes you to make sweeping changes whenever you decide to wrap this ProductRepository with a decorator or interceptor that adds cross-cutting concerns (such as logging, audit trailing, or security filtering), since you surely don't want to repeat that code throughout all your repositories (again an OCP violation).
It forces the ProductsController to know about the ProductRepository, which might be a problem depending on the size and complexity of the application you are writing.
So this is not about the use of frameworks; it's about applying software design principles. If you decide to adhere to these principles to make your application more maintainable, frameworks like Ninject, Autofac and Simple Injector can help you make the startup path of your application more maintainable. But nothing is preventing you from applying these principles without the use of any tool or library.
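Composing by hand in this way is often called Pure DI. The sketch below illustrates the idea using the types from the question (the ApiController base class and a real data store are omitted to keep it self-contained, and Show/GetName are hypothetical members added for illustration):

```csharp
using System;

public interface IProductRepository { string GetName(int id); }

// The concrete implementation is named in exactly one place: the Composition Root.
public class ProductRepository : IProductRepository
{
    public string GetName(int id) => $"product-{id}";
}

public class ProductsController
{
    private readonly IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository ?? throw new ArgumentNullException(nameof(repository));
    }

    public string Show(int id) => _repository.GetName(id);
}

// The Composition Root: the single class that knows about concrete types.
public static class CompositionRoot
{
    public static ProductsController CreateProductsController()
        => new ProductsController(new ProductRepository());
}
```

If the ProductRepository constructor later grows a parameter, only CompositionRoot changes; no controller is touched.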

Small disclaimer: I'm an avid Unity user, and here are my 2 cents.
1st: Violation of SOLID (SRP/OCP/DIP)
Already stated by #democodemonkey and #thumbmunkeys, you couple the two classes tightly. Let's say that some classes (call them ProductsThingamajigOne and ProductsThingamajigTwo) are using the ProductsController through its default constructor. What if the architect decides that the system should not use a ProductsRepository that saves products into files, but should use a database or cloud storage instead? What would the impact be on those classes?
2nd: What if the ProductRepository needs another dependency?
If the repository is based on a database, you might need to provide it with a connection string. If it's based on files, you might need to provide it with a settings class holding the exact path where the files should be saved. The truth is that, in general, applications tend to contain dependency trees (A depends on B and C, B depends on D, C depends on E, D depends on F and G, and so on) with more than two levels, so the SOLID violations hurt more, as more code has to be changed to perform a given task. But even before that, can you imagine the code that would create the whole app?
Fact is, classes can have many dependencies of their own, and in this case the issues described earlier multiply.
That's usually the job of the Bootstrapper - it defines the dependency structure, and performs (usually) a single resolve that brings the whole system up, like a puppet on a string.
3rd: What if the Dependency-Tree is not a tree, but a Graph?
Consider the following case: Class A dependent on classes B and C, B and C both are dependent on class D, and are expecting to use the same instance of D. A common practice was to make D a singleton, but that could cause a lot of issues. The other option is to pass an instance of D into the constructor of A, and have it create B and C, or pass instances of B and C to A and create them outside - and the complexity goes on and on.
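The "compose outside" option can be sketched in a few lines; the class names A, B, C, D below are the hypothetical ones from the paragraph above:

```csharp
public class D { }

public class B
{
    public D Shared;
    public B(D d) { Shared = d; }
}

public class C
{
    public D Shared;
    public C(D d) { Shared = d; }
}

public class A
{
    public B B;
    public C C;
    public A(B b, C c) { B = b; C = c; }
}

// Composing outside lets B and C share one D without making D a singleton.
public static class Bootstrap
{
    public static A Compose()
    {
        var d = new D();                    // created exactly once
        return new A(new B(d), new C(d));   // the same instance flows to both
    }
}
```

An IoC container does the same instance-sharing for you by declaring D with a shared (singleton-per-container) lifetime, without any global static state.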
4th: Packing (Assemblies)
Your code assumes that ProductsController can see ProductRepository (assembly-wise). What if there's no reference between them? The assembly map can be non-trivial. Usually, the bootstrapping code (assuming for a moment that it's in code and not in a configuration file) is written in an assembly that references the entire solution. (This was also described by #Steven.)
5th: Cool stuff you can do with IoC containers
Singletons are made easy (with Unity: simply use a ContainerControlledLifetimeManager when registering).
Lazy instantiation is made really easy (with Unity: register a mapping for T and ask in the constructor for a Func<T>).
Those are just a couple of examples of things that IoC containers give you for (almost) free.
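For comparison, here is roughly what a container gives you in those two cases, hand-rolled without any library (all type names are made up for illustration; Lazy<T> stands in for the container's lifetime management):

```csharp
using System;

public class ExpensiveService
{
    // Counts how many instances were ever built, to observe laziness.
    public static int Constructions;
    public ExpensiveService() { Constructions++; }
}

public class Consumer
{
    private readonly Func<ExpensiveService> _factory;

    // Lazy instantiation: nothing is built until Use() actually runs.
    public Consumer(Func<ExpensiveService> factory) { _factory = factory; }

    public ExpensiveService Use() => _factory();
}

public static class ManualContainer
{
    // Singleton lifetime: every resolve yields the same instance.
    private static readonly Lazy<ExpensiveService> _singleton =
        new Lazy<ExpensiveService>(() => new ExpensiveService());

    public static ExpensiveService Resolve() => _singleton.Value;
}
```

A container wires the Func<T> and the lifetime for you at registration time; by hand you repeat this plumbing for every service.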

Of course you could do that, but this would cause the following issues:
The dependency on IProductRepository is no longer explicit; it looks like an optional dependency
Other parts of the code might instantiate a different implementation of IProductRepository, which would probably be a problem in this case
The class becomes tightly coupled to ProductRepository, as it internally creates the dependency
In my opinion this is not a question about a framework. The point is to make modules composable, by exposing their dependencies in a constructor or property. Your example somewhat obfuscates that.

If class ProductRepository is not defined in the same assembly as ProductsController (or if you would ever want to move it to a different assembly) then you have just introduced a dependency that you don't want.
That's an anti-pattern described as "Bastard Injection" in the seminal book "Dependency Injection in .NET" by Mark Seemann.
However, if ProductRepository is ALWAYS going to be in the same assembly as ProductsController and if it does not depend on anything that the rest of the ProductsController assembly depends upon, it could be a local default - in which case it would be ok.
From the class names, I'm betting that such a dependency SHOULD NOT be introduced, and you are looking at bastard injection.

Here ProductsController is responsible for creating the ProductRepository.
What happens if ProductRepository requires an additional parameter in its constructor? Then ProductsController will have to change, and this violates the SRP.
It also adds more complexity to all of your objects.
And it makes it unclear whether a caller needs to pass the child object or whether it is optional.

The main purpose is to decouple object creation from its usage or consumption. The creation of the object "usually" is taken care of by factory classes. In your case, the factory classes will be designed to return an object of a type which implements IProductRepository interface.
In some frameworks, like Spring.Net, the factory classes instantiate objects that are declaratively defined in configuration (i.e. in app.config or web.config), making the program totally independent of the objects it needs to create. This can be quite powerful at times.
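As a minimal sketch of that separation (SqlProductRepository is a hypothetical implementation, not from the question), a factory keeps the concrete type out of consuming code:

```csharp
public interface IProductRepository { }

// Hypothetical concrete implementation; consumers never mention this type.
public class SqlProductRepository : IProductRepository { }

public static class ProductRepositoryFactory
{
    // Callers ask for the abstraction; only the factory names the concrete class.
    // A configuration-driven framework moves this one decision into a config file.
    public static IProductRepository Create() => new SqlProductRepository();
}
```

Swapping implementations then means changing this single method (or a config entry) rather than every call site.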

It is important to understand that dependency injection and inversion of control are not the same thing. You can use dependency injection without IoC frameworks like Unity or Ninject by performing the injection manually, which is often called Poor Man's DI.
In my blog I recently wrote a post about this issue
http://xurxodeveloper.blogspot.com.es/2014/09/Inyeccion-de-Dependencias-DI.html
Going back to your example, I see weaknesses in the implementation.
1 - ProductsController depends on a concretion and not an abstraction, violating the Dependency Inversion Principle.
2 - If the interface and the repository live in different projects, you'd be forced to have a reference to the project where the repository is located.
3 - If in the future you need to add a parameter to the constructor, you would have to modify the controller even though it is simply a client of the repository.
4 - The controller and the repository can be developed by different programmers; the controller programmer must then know how to create the repository.

Consider this use case:
Suppose that, in the future, you want to inject a CustomProductRepository instead of a ProductRepository into the ProductsController, in software that is already deployed at a client site.
With Spring.Net you can just update the Spring configuration file (XML) to use your CustomProductRepository. That way you avoid re-compiling and re-installing the software on the client site, since you have not modified any code.


Architecture: Dependency Injection, Loosely Coupled Assemblies, Implementation Hiding

I've been working on a personal project which, beyond just making something useful for myself, I've tried to use as a way to continue finding and learning architectural lessons. One such lesson has appeared like a Kodiak bear in the middle of a bike path and I've been struggling quite mightily with it.
The problem is essentially an amalgam of issues at the intersection of dependency injection, assembly decoupling and implementation hiding (that is, implementing my public interfaces using internal classes).
At my jobs, I've typically found that various layers of an application hold their own interfaces which they publicly expose, but internally implement. Each assembly's DI code registers the internal class to the public interface. This technique prevents outside assemblies from newing-up an instance of the implementation class. However, some books I've been reading while building this solution have spoken against this.
The main things that conflict with my previous thinking have to do with the DI composition root and where one should keep the interfaces for a given implementation. If I move dependency registration to a single, global composition root (as Mark Seemann suggests), then I can get away from each assembly having to run its own dependency registrations. However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it. As an example, here is a diagram he provided, and, for contrast, a diagram for how I would normally implement the same solution (okay, these aren't quite the same; kindly focus on the arrows and notice when implementation arrows cross assembly boundaries instead of composition arrows).
Martin Style
What I've normally seen
I immediately saw the advantage in Martin's diagram, that it allows the lower assemblies to be swapped out for another, given that it has a class that implements the interface in the layer above it. However, I also saw this seemingly major disadvantage: If you want to swap out the assembly from an upper layer, you essentially "steal" the interface away that the lower layer is implementing.
After thinking about it for a little bit, I decided the best way to be fully decoupled in both directions would be to have the interfaces that specify the contract between layers in their own assemblies. Consider this updated diagram:
Is this nutty? Is it right on? To me, it seems like this solves the problem of interface segregation. It doesn't, however, solve the problem of not being able to hide the implementation class as internal. Is there anything reasonable that can be done there? Should I not be worried about this?
One solution that I'm toying around with in my head is to have each layer implement the proxy layer's interface twice; once with a public class and once with an internal class. This way, the public class could merely wrap/decorate the internal class, like this:
Some code might look like this:
namespace MechanismProxy // Simulates Mechanism Proxy Assembly
{
    public interface IMechanism
    {
        void DoStuff();
    }
}

namespace MechanismImpl // Simulates Mechanism Assembly
{
    using MechanismProxy;

    // This class would be registered to IMechanism in the DI container
    public class Mechanism : IMechanism
    {
        private readonly IMechanism _internalMechanism = new InternalMechanism();

        public void DoStuff()
        {
            _internalMechanism.DoStuff();
        }
    }

    internal class InternalMechanism : IMechanism
    {
        public void DoStuff()
        {
            // Do whatever
        }
    }
}
... of course, I'd still have to address some issues regarding constructor injection and passing the dependencies injected into the public class to the internal one. There's also the problem that outside assemblies could possibly new-up the public Mechanism... I would need a way to ensure only the DI container can do that... I suppose if I could figure that out, I wouldn't even need the internal version. Anyway, if anyone can help me understand how to overcome these architectural problems, it would be mightily appreciated.
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
Unless you are building a reusable library (that gets published on NuGet and gets used by other code bases you have no control over), there is typically no reason to make classes internal. Especially since you program to interfaces, the only place in the application that depends on those classes is the Composition Root.
Also note that, in case you move the abstractions to a different library, and let both the consuming and the implementing assembly depend on that assembly, those assemblies don't have to depend on each other. This means that it doesn't matter at all whether those classes are public or internal.
This level of separation (placing the interfaces in an assembly of its own) however is hardly ever needed. In the end it's all about the required granularity during deployment and the size of the application.
As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it.
This is the Dependency Inversion Principle, which states:
In a direct application of dependency inversion, the abstracts are owned by the upper/policy layers
This is a somewhat opinion based topic, but since you asked, I'll give mine.
Your focus on creating as many assemblies as possible to be as flexible as possible is very theoretical, and you have to weigh the practical value against the costs.
Don't forget that assemblies are only a container for compiled code. They become mostly relevant only when you look at the processes for developing, building and deploying/delivering them. So you have to ask a lot more questions before you can make a good decision on how exactly to split up the code into assemblies.
So here a few examples of questions I'd ask beforehand:
Does it make sense from your application domain to split up the assemblies in this way (e.g. will you really need to swap out assemblies)?
Will you have separate teams in place for developing those?
What will the scope be in terms of size (both LOC and team sizes)?
Is it required to protect the implementation from being available/visible? E.g. are those external interfaces or internal?
Do you really need to rely on assemblies as a mechanism to enforce your architectural separation? Or are there other, better measures (e.g. code reviews, code checkers, etc.)?
Will your calls really only happen between assemblies or will you need remote calls at some point?
Do you have to use private assemblies?
Will sealed classes help enforce your architecture instead?
For a very general view, leaving these additional factors out, I would side with Martin Fowler's diagram, because that is just the standard way of how to provide/use interfaces. If your answers to the questions indicate additional value by further splitting up/protecting the code that may be fine, too. But you'd have to tell us more about your application domain and you'd have to be able to justify it well.
So in a way you are confronted with two old wisdoms:
Architecture tends to follow organizational setups.
It is very easy to over-engineer (over-complicate) architectures but it is very hard to design them as simple as possible. Simple is most of the time better.
When coming up with an architecture you want to consider those factors upfront; otherwise they'll come back to haunt you later in the form of technical debt.
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
That doesn't sound like a downside. Implementation classes, that are bound to abstractions in your Composition Root, could probably be used in an explicit way somewhere else for some other reasons. I don't see any benefit from hiding them.
I would need a way to ensure only the DI container can do that...
No, you don't.
Your confusion probably stems from the fact that you think of DI and the Composition Root as if there must be a container behind them.
In fact, however, the infrastructure can be completely "container-agnostic", in the sense that you still have your dependencies injected but you don't think about "how". A Composition Root that uses a container is one choice, just as good as a Composition Root where you manually compose dependencies. In other words, the Composition Root could be the only place in your code that is aware of a DI container, if one is used at all. Your code is built against the idea of Dependency Inversion, not the idea of a Dependency Inversion container.
A short tutorial of mine can possibly shed some light here
http://www.wiktorzychla.com/2016/01/di-factories-and-composition-root.html

DI in class library [duplicate]

I'm pondering the design of a C# library, that will have several different high level functions. Of course, those high-level functions will be implemented using the SOLID class design principles as much as possible. As such, there will probably be classes intended for consumers to use directly on a regular basis, and "support classes" that are dependencies of those more common "end user" classes.
The question is, what is the best way to design the library so it is:
DI Agnostic - Although adding basic "support" for one or two of the common DI libraries (StructureMap, Ninject, etc) seems reasonable, I want consumers to be able to use the library with any DI framework.
Non-DI usable - If a consumer of the library is using no DI, the library should still be as easy to use as possible, reducing the amount of work a user has to do to create all these "unimportant" dependencies just to get to the "real" classes they want to use.
My current thinking is to provide a few "DI registration modules" for the common DI libraries (e.g. a StructureMap registry, a Ninject module), and a set of Factory classes that are non-DI and contain the coupling to those factories.
Thoughts?
This is actually simple to do once you understand that DI is about patterns and principles, not technology.
To design the API in a DI Container-agnostic way, follow these general principles:
Program to an interface, not an implementation
This principle is actually a quote (from memory though) from Design Patterns, but it should always be your real goal. DI is just a means to achieve that end.
Apply the Hollywood Principle
The Hollywood Principle in DI terms says: Don't call the DI Container, it'll call you.
Never directly ask for a dependency by calling a container from within your code. Ask for it implicitly by using Constructor Injection.
Use Constructor Injection
When you need a dependency, ask for it statically through the constructor:
public class Service : IService
{
    private readonly ISomeDependency dep;

    public Service(ISomeDependency dep)
    {
        if (dep == null)
        {
            throw new ArgumentNullException("dep");
        }
        this.dep = dep;
    }

    public ISomeDependency Dependency
    {
        get { return this.dep; }
    }
}
Notice how the Service class guarantees its invariants. Once an instance is created, the dependency is guaranteed to be available because of the combination of the Guard Clause and the readonly keyword.
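The invariant can be observed directly: constructing the class with null fails immediately, rather than failing later at whatever call site first touches the dependency. A self-contained version of the snippet above (with a stub ISomeDependency added so it compiles on its own):

```csharp
using System;

public interface ISomeDependency { }
public class SomeDependency : ISomeDependency { }

public class Service
{
    private readonly ISomeDependency dep;

    public Service(ISomeDependency dep)
    {
        // Guard Clause: reject an invalid dependency at construction time.
        if (dep == null)
        {
            throw new ArgumentNullException(nameof(dep));
        }
        this.dep = dep;
    }

    // After construction, the dependency is guaranteed non-null and immutable.
    public ISomeDependency Dependency
    {
        get { return this.dep; }
    }
}
```

Any code holding a Service instance can therefore use Dependency without a null check.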
Use Abstract Factory if you need a short-lived object
Dependencies injected with Constructor Injection tend to be long-lived, but sometimes you need a short-lived object, or to construct the dependency based on a value known only at run-time.
See this for more information.
Compose only at the Last Responsible Moment
Keep objects decoupled until the very end. Normally, you can wait and wire everything up in the application's entry point. This is called the Composition Root.
More details here:
Where should I do Injection with Ninject 2+ (and how do I arrange my Modules?)
Design - Where should objects be registered when using Windsor
Simplify using a Facade
If you feel that the resulting API becomes too complex for novice users, you can always provide a few Facade classes that encapsulate common dependency combinations.
To provide a flexible Facade with a high degree of discoverability, you could consider providing Fluent Builders. Something like this:
public class MyFacade
{
    private IMyDependency dep;

    public MyFacade()
    {
        this.dep = new DefaultDependency();
    }

    public MyFacade WithDependency(IMyDependency dependency)
    {
        this.dep = dependency;
        return this;
    }

    public Foo CreateFoo()
    {
        return new Foo(this.dep);
    }
}
This would allow a user to create a default Foo by writing
var foo = new MyFacade().CreateFoo();
It would, however, be very discoverable that it's possible to supply a custom dependency, and you could write
var foo = new MyFacade().WithDependency(new CustomDependency()).CreateFoo();
If you imagine that the MyFacade class encapsulates a lot of different dependencies, I hope it's clear how it would provide proper defaults while still making extensibility discoverable.
FWIW, long after writing this answer, I expanded upon the concepts herein and wrote a longer blog post about DI-Friendly Libraries, and a companion post about DI-Friendly Frameworks.
The term "dependency injection" doesn't specifically have anything to do with an IoC container at all, even though you tend to see them mentioned together. It simply means that instead of writing your code like this:
public class Service
{
    public Service()
    {
    }

    public void DoSomething()
    {
        SqlConnection connection = new SqlConnection("some connection string");
        WindowsIdentity identity = WindowsIdentity.GetCurrent();
        // Do something with connection and identity variables
    }
}
You write it like this:
public class Service
{
    public Service(IDbConnection connection, IIdentity identity)
    {
        this.Connection = connection;
        this.Identity = identity;
    }

    public void DoSomething()
    {
        // Do something with Connection and Identity properties
    }

    protected IDbConnection Connection { get; private set; }
    protected IIdentity Identity { get; private set; }
}
That is, you do two things when you write your code:
Rely on interfaces instead of classes whenever you think that the implementation might need to be changed;
Instead of creating instances of these interfaces inside a class, pass them as constructor arguments (alternatively, they could be assigned to public properties; the former is constructor injection, the latter is property injection).
None of this presupposes the existence of any DI library, and it doesn't really make the code any more difficult to write without one.
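The two injection styles from the list above can be contrasted in a few lines; IClock and the services below are hypothetical types chosen purely for illustration:

```csharp
using System;

public interface IClock { DateTime Now { get; } }
public class SystemClock : IClock { public DateTime Now => DateTime.UtcNow; }

// Constructor injection: the dependency is mandatory and explicit.
// You cannot even build a ReportService without supplying a clock.
public class ReportService
{
    private readonly IClock _clock;
    public ReportService(IClock clock) { _clock = clock; }
    public int CurrentYear() => _clock.Now.Year;
}

// Property injection: the dependency is optional and swappable after
// construction, typically backed by a sensible local default.
public class AuditService
{
    public IClock Clock { get; set; } = new SystemClock();
    public int CurrentYear() => Clock.Now.Year;
}
```

Neither class mentions a container; a caller, a test, or a container can supply the clock the same way.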
If you're looking for an example of this, look no further than the .NET Framework itself:
List<T> implements IList<T>. If you design your class to use IList<T> (or IEnumerable<T>), you can take advantage of concepts like lazy-loading, as Linq to SQL, Linq to Entities, and NHibernate all do behind the scenes, usually through property injection. Some framework classes actually accept an IList<T> as a constructor argument, such as BindingList<T>, which is used for several data binding features.
Linq to SQL and EF are built entirely around the IDbConnection and related interfaces, which can be passed in via the public constructors. You don't need to use them, though; the default constructors work just fine with a connection string sitting in a configuration file somewhere.
If you ever work on WinForms components you deal with "services", like INameCreationService or IExtenderProviderService. You don't even really know what the concrete classes are. .NET actually has its own IoC container, IContainer, which gets used for this, and the Component class has a GetService method which is the actual service locator. Of course, nothing prevents you from using any or all of these interfaces without the IContainer or that particular locator. The services themselves are only loosely-coupled with the container.
Contracts in WCF are built entirely around interfaces. The actual concrete service class is usually referenced by name in a configuration file, which is essentially DI. Many people don't realize this but it is entirely possible to swap out this configuration system with another IoC container. Perhaps more interestingly, the service behaviors are all instances of IServiceBehavior which can be added later. Again, you could easily wire this into an IoC container and have it pick the relevant behaviors, but the feature is completely usable without one.
And so on and so forth. You'll find DI all over the place in .NET, it's just that normally it's done so seamlessly that you don't even think of it as DI.
If you want to design your DI-enabled library for maximum usability then the best suggestion is probably to supply your own default IoC implementation using a lightweight container. IContainer is a great choice for this because it's a part of the .NET Framework itself.
EDIT 2015: time has passed, I realize now that this whole thing was a huge mistake. IoC containers are terrible and DI is a very poor way to deal with side effects. Effectively, all of the answers here (and the question itself) are to be avoided. Simply be aware of side effects, separate them from pure code, and everything else either falls into place or is irrelevant and unnecessary complexity.
Original answer follows:
I had to face this same decision while developing SolrNet. I started with the goal of being DI-friendly and container-agnostic, but as I added more and more internal components, the internal factories quickly became unmanageable and the resulting library was inflexible.
I ended up writing my own very simple embedded IoC container while also providing a Windsor facility and a Ninject module. Integrating the library with other containers is just a matter of properly wiring the components, so I could easily integrate it with Autofac, Unity, StructureMap, whatever.
The downside of this is that I lost the ability to just new up the service. I also took a dependency on CommonServiceLocator which I could have avoided (I might refactor it out in the future) to make the embedded container easier to implement.
More details in this blog post.
MassTransit seems to rely on something similar. It has an IObjectBuilder interface which is really CommonServiceLocator's IServiceLocator with a couple more methods, then it implements this for each container, i.e. NinjectObjectBuilder and a regular module/facility, i.e. MassTransitModule. Then it relies on IObjectBuilder to instantiate what it needs. This is a valid approach of course, but personally I don't like it very much since it's actually passing around the container too much, using it as a service locator.
MonoRail implements its own container as well, which implements good old IServiceProvider. This container is used throughout this framework through an interface that exposes well-known services. To get the concrete container, it has a built-in service provider locator. The Windsor facility points this service provider locator to Windsor, making it the selected service provider.
Bottom line: there is no perfect solution. As with any design decision, this issue demands a balance between flexibility, maintainability and convenience.
What I would do is design my library in a DI-container-agnostic way to limit the dependency on the container as much as possible. This allows you to swap out one DI container for another if need be.
Then expose the layer above the DI logic to the users of the library so that they can use whatever framework you chose through your interface. This way they can still use DI functionality that you exposed and they are free to use any other framework for their own purposes.
Allowing the users of the library to plug in their own DI framework seems a bit wrong to me, as it dramatically increases the amount of maintenance. It also becomes more of a plugin environment than straight DI.

Should a dependency be injected many "levels" up than it is needed?

I'm writing a C# ASP.NET MVC web application using SOLID principles.
I've written a ViewModelService, which depends on an AccountService and a RepositoryService, so I've injected those two services into the ViewModelService.
The PermissionService depends on the HttpContextBase in order to use GetOwinContext() to get an instance of the UserManager. The controller has an instance of HttpContextBase that needs to be used - so it seems like I have to inject the HttpContextBase instance into the ViewModelService which then injects it into the PermissionService.
So, in terms of code I have:
public ViewModelService
public CategoryRepository(ApplicationDbContext context, IPermissionService permissionservice)
public AccountService(HttpContextBase httpcontext, IPrincipal securityprincipal)
to instantiate the ViewModelService, I then do this:
new ViewModelService(
    new CategoryRepository(
        new ApplicationDbContext(),
        new PermissionService(
            new AccountService(HttpContext, Thread.CurrentPrincipal),
            new UserPasswordRepository(new ApplicationDbContext()),
            new ApplicationSettingsService())),
    new PasswordRepository(
        new ApplicationDbContext(),
        new PermissionService(
            new AccountService(HttpContext, Thread.CurrentPrincipal),
            new UserPasswordRepository(new ApplicationDbContext()),
            new ApplicationSettingsService())),
    new ModelValidatorService());
Should a dependency be injected from that many "levels" up, or is there a better way?
There's a balance to be struck.
On the one hand, you have the school of thought which would insist that all dependencies must be exposed by the class to be "properly" injected. (This is the school of thought which considers something like a Service Locator to be an anti-pattern.) There's merit to this, but taken to an extreme you find yourself where you are now. Just the right kind of complexity in some composite models, which themselves have composite models, results in aggregate roots which need tons of dependencies injected solely to satisfy dependencies of deeper models.
Personally, I find that this creates exactly the kind of coupling that DI is intended to resolve, not create.
On the other hand, you have the school of thought which allows for a Service Locator approach, where models can internally invoke some common domain service to resolve a dependency for it. There's merit to this, but taken to an extreme you find that your dependencies are less known and there's a potential for runtime errors if any given dependency can't be resolved. (Basically, you can get errors at a higher level because consuming objects never knew that consumed objects needed something which wasn't provided.)
Personally I've used a service locator approach a lot (mostly because it's a very handy pattern for introducing DI to a legacy domain as part of a larger refactoring exercise, which is a lot of what I do professionally) and have never run into such issues.
There's yin and yang either way. And I think each solution space has its own balance. If you're finding that direct injection is making the system difficult to maintain, it may be worth investigating service location. Conversely, it may also be worth investigating if the overall domain model itself is inherently coupled and this DI issue is simply a symptom of that coupling and not the cause of it.
Yes, the entire intent of Dependency Injection is that you compose big object graphs up-front. You compose object graphs from the Composition Root, which is a place in your application that has the Single Responsibility of composing object graphs. That's not any particular Controller, but a separate class that composes Controllers with their dependencies.
The Composition Root must have access to all types it needs to compose, unless you want to get into late-binding strategies (which I'll generally advise against, unless there's a specific need).
I am firmly of the opinion that Service Locators are worse than Dependency Injection. They can be a useful legacy technique, and a useful stepping stone on to something better, but if you are designing something new, then steer clear.
The main reason for this is that Service Locators lead to code that has implicit dependencies, and this makes the code less clear and breaks encapsulation. It can also lead to run time errors instead of compile time errors, and Interacting Tests.
Your example uses Constructor Injection, which is usually the most appropriate form of Dependency Injection:
public ViewModelService(
    ICategoryRepository categoryRepository,
    IPasswordRepository passwordRepository,
    IModelValidatorService modelValidator)
{ ... }
This has explicit dependencies, which is good. It means that you cannot create the object without passing in its dependencies, and if you try to you will get a compile time error rather than a run time one. It also is good for encapsulation, as just by looking at the interface of the class you know what dependencies it needs.
You could do this using service locators as below:
public ViewModelService()
{
var categoryRepository = CategoryRepositoryServiceLocator.Instance;
var passwordRepository = PasswordRepositoryServiceLocator.Instance;
var modelValidator = ModelValidatorServiceLocator.Instance;
...
}
This has implicit dependencies that you cannot see just by looking at the interface; you must also look at the implementation (this breaks encapsulation). You can also forget to set up one of the Service Locators, which will lead to a run-time exception.
In your example I think your ViewModelService is good. It references abstractions (ICategoryRepository etc) and doesn't care about how these abstractions are created. The code you use to create the ViewModelService is a bit ugly, and I would recommend using an Inversion of Control container (such as Castle Windsor, StructureMap etc) to help here.
In Castle Windsor, you could do something like the following:
container.Register(Classes.FromAssemblyNamed("Repositories").Pick().WithServiceAllInterfaces());
container.Register(Component.For<IAccountService>().ImplementedBy<AccountService>());
container.Register(Component.For<IApplicationDBContext>().ImplementedBy<ApplicationDBContext>());
container.Register(Component.For<IApplicationSettingsService>().ImplementedBy<ApplicationSettingsService>());
var viewModelService = _container.Resolve<ViewModelService>();
Make sure to read and understand the "Register, Resolve, Release" and "Composition Root" patterns before you start.
Good luck!

Does Dependency Injection (DI) rely on Interfaces?

This may seem obvious to most people, but I'm just trying to confirm that Dependency Injection (DI) relies on the use of Interfaces.
More specifically, in the case of a class which has a certain Interface as a parameter in its constructor or a certain Interface defined as a property (aka. Setter), the DI framework can hand over an instance of a concrete class to satisfy the needs of that Interface in that class. (Apologies if this description is not clear. I'm having trouble describing this properly because the terminology/concepts are still somewhat new to me.)
The reason I ask is that I currently have a class that has a dependency of sorts. Not so much an object dependency, but a URL. The class looks like this [C#]:
using System.Web.Services.Protocols;
public partial class SomeLibraryService : SoapHttpClientProtocol
{
public SomeLibraryService()
{
this.Url = "http://MyDomainName.com:8080/library-service/jse";
}
}
The SoapHttpClientProtocol class has a Public property called Url (which is a plain old "string") and the constructor here initializes it to a hard-coded value.
Could I possibly use a DI framework to inject a different value at construction? I'm thinking not since this.Url isn't any sort of Interface; it's a String.
[Incidentally, the code above was "auto-generated by wsdl", according to the comments in the code I'm working with. So I don't particularly want to change this code, although I don't see myself re-generating it either. So maybe changing this code is fine.]
I could see myself making an alternate constructor that takes a string as a parameter and initializes this.Url that way, but I'm not sure that's the correct approach regarding keeping loosely coupled separation of concerns. (SoC)
Any advice for this situation?
DI really just means a class won't construct its external dependencies and won't manage the lifetime of those dependencies. Dependencies can be injected either via constructor, or via method parameter. Interfaces or abstract types are common to clarify the contract the consumer expects from its dependency, however simple types can be injected as well in some cases.
For example, a class in a library might call HttpContext.Current internally, which makes arbitrary assumptions about the application the code will be hosted in. A DI version of the library method would expect an HttpContext instance to be injected via parameter, etc.
It's not required to use interfaces -- you could use concrete types or abstract base classes. But many of the advantages of DI (such as being able to change an implementation of a dependency) come when using interfaces.
Castle Windsor (the DI framework I know best), allows you to map objects in the IoC container to Interfaces, or to just names, which would work in your case.
Dependency Injection is a way of organizing your code. Maybe some of your confusion comes from the fact that there is not one official way to do it. It can be achieved using "regular" C# code, or by using a framework like Castle Windsor. Sometimes (often?) this involves using interfaces. No matter how it is achieved, the big picture goal of DI is usually to make your code easier to test and easier to modify later on.
If you were to inject the URL in your example via a constructor, that could be considered "manual" DI. The Wikipedia article on DI has more examples of manual vs framework DI.
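Because the generated class in the question is declared `partial`, the "manual" constructor injection mentioned above can be added in a separate file without ever touching the wsdl-generated code. The sketch below uses a stand-in base class so it compiles on its own; in the real project the base would be System.Web.Services.Protocols.SoapHttpClientProtocol, which already exposes the Url property.

```csharp
using System;

// Stand-in for SoapHttpClientProtocol so this sketch is self-contained.
public class SoapClientStub
{
    public string Url { get; set; }
}

// The generated half, mirroring the question's code.
public partial class SomeLibraryService : SoapClientStub
{
    public SomeLibraryService()
    {
        this.Url = "http://MyDomainName.com:8080/library-service/jse";
    }
}

// Your half, kept in a separate file in a real project: the generated code
// stays untouched, and the URL becomes an injectable constructor argument.
public partial class SomeLibraryService
{
    public SomeLibraryService(string url)
    {
        this.Url = url;
    }
}
```

Re-running wsdl only regenerates its own half of the partial class, so the hand-written constructor survives.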
I would like to answer with a focus on using interfaces in .NET applications. Polymorphism in .NET can be achieved through virtual or abstract methods, or interfaces.
In all cases, there is a method signature with no implementation at all or an implementation that can be overridden.
The 'contract' of a function (or even a property) is defined, but the implementation (the logical guts of the method) can differ at runtime, determined by which subclass is instantiated and passed in to the method or constructor, or set on a property (the act of 'injection').
The official .NET type design guidelines advocate using abstract base classes over interfaces since they have better options for evolving them after shipping, can include convenience overloads and are better able to self-document and communicate correct usage to implementers.
However, care must be taken not to add any logic. The temptation to do so has burned people in the past so many people use interfaces - many other people use interfaces simply because that's what the programmers sitting around them do.
It's also interesting to point out that while DI itself is rarely over-used, using a framework to perform the injection is quite often over-used, to the detriment of complexity: a chain reaction can take place where more and more types are needed in the container even though they are never 'switched'.
IoC frameworks should be used sparingly, usually only when you need to swap out objects at runtime, according to the environment or configuration. This usually means switching major component "seams" in the application such as the repository objects used to abstract your data layer.
For me, the real power of an IoC framework is to switch implementation in places where you have no control over creation. For example, in ASP.NET MVC, the creation of the controller class is performed by the ASP.NET framework, so injecting anything is impossible. The ASP.NET framework has some hooks that IoC frameworks can use to 'get in-between' the creation process and perform their magic.
Luke

Dependency Inject (DI) "friendly" library

I'm pondering the design of a C# library, that will have several different high level functions. Of course, those high-level functions will be implemented using the SOLID class design principles as much as possible. As such, there will probably be classes intended for consumers to use directly on a regular basis, and "support classes" that are dependencies of those more common "end user" classes.
The question is, what is the best way to design the library so it is:
DI Agnostic - Although adding basic "support" for one or two of the common DI libraries (StructureMap, Ninject, etc) seems reasonable, I want consumers to be able to use the library with any DI framework.
Non-DI usable - If a consumer of the library is using no DI, the library should still be as easy to use as possible, reducing the amount of work a user has to do to create all these "unimportant" dependencies just to get to the "real" classes they want to use.
My current thinking is to provide a few "DI registration modules" for the common DI libraries (e.g. a StructureMap registry, a Ninject module), and a set of Factory classes that are non-DI and contain the coupling to those few factories.
Thoughts?
This is actually simple to do once you understand that DI is about patterns and principles, not technology.
To design the API in a DI Container-agnostic way, follow these general principles:
Program to an interface, not an implementation
This principle is actually a quote (from memory though) from Design Patterns, but it should always be your real goal. DI is just a means to achieve that end.
Apply the Hollywood Principle
The Hollywood Principle in DI terms says: Don't call the DI Container, it'll call you.
Never directly ask for a dependency by calling a container from within your code. Ask for it implicitly by using Constructor Injection.
Use Constructor Injection
When you need a dependency, ask for it statically through the constructor:
public class Service : IService
{
private readonly ISomeDependency dep;
public Service(ISomeDependency dep)
{
if (dep == null)
{
throw new ArgumentNullException("dep");
}
this.dep = dep;
}
public ISomeDependency Dependency
{
get { return this.dep; }
}
}
Notice how the Service class guarantees its invariants. Once an instance is created, the dependency is guaranteed to be available because of the combination of the Guard Clause and the readonly keyword.
Use Abstract Factory if you need a short-lived object
Dependencies injected with Constructor Injection tend to be long-lived, but sometimes you need a short-lived object, or to construct the dependency based on a value known only at run-time.
See this for more information.
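A minimal sketch of the Abstract Factory idea follows, with entirely hypothetical names: the consumer takes a long-lived factory through its constructor, and uses it to create short-lived connections from a value (an address) known only at run time.

```csharp
using System;

// Hypothetical abstractions for a short-lived, run-time-parameterized dependency.
public interface IConnection : IDisposable
{
    void Send(string message);
}

public interface IConnectionFactory
{
    // The run-time value (here, an address) is supplied by the caller.
    IConnection Create(string address);
}

public class Consumer
{
    private readonly IConnectionFactory factory;

    public Consumer(IConnectionFactory factory)
    {
        if (factory == null) throw new ArgumentNullException("factory");
        this.factory = factory;
    }

    public void Notify(string address, string message)
    {
        // The connection is short-lived: created, used and disposed here,
        // while the injected factory lives as long as the Consumer.
        using (IConnection connection = this.factory.Create(address))
        {
            connection.Send(message);
        }
    }
}

// Trivial in-memory implementations used only for demonstration.
public class FakeConnection : IConnection
{
    public static string LastSent;
    public void Send(string message) { LastSent = message; }
    public void Dispose() { }
}

public class FakeConnectionFactory : IConnectionFactory
{
    public IConnection Create(string address) { return new FakeConnection(); }
}
```

The same shape works when the short-lived object is expensive or stateful; only the factory is ever registered with a container.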
Compose only at the Last Responsible Moment
Keep objects decoupled until the very end. Normally, you can wait and wire everything up in the application's entry point. This is called the Composition Root.
More details here:
Where should I do Injection with Ninject 2+ (and how do I arrange my Modules?)
Design - Where should objects be registered when using Windsor
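The Composition Root idea can be sketched in a few lines; all type names below are hypothetical. Everything else in the application deals only in abstractions, and this one method is the single place that references concrete types.

```csharp
using System;

// Abstraction and consumer: neither knows any concrete type.
public interface IOrderRepository { int Count(); }

public class OrderController
{
    private readonly IOrderRepository repository;

    public OrderController(IOrderRepository repository)
    {
        if (repository == null) throw new ArgumentNullException("repository");
        this.repository = repository;
    }

    public string Report() { return "orders: " + this.repository.Count(); }
}

// A concrete implementation, referenced only by the Composition Root.
public class InMemoryOrderRepository : IOrderRepository
{
    public int Count() { return 0; }
}

public static class CompositionRoot
{
    // The entry point wires the whole graph together in one place.
    public static OrderController ComposeController()
    {
        return new OrderController(new InMemoryOrderRepository());
    }
}
```

A DI container, if you use one, simply replaces the body of ComposeController with Register/Resolve calls; the principle is the same.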
Simplify using a Facade
If you feel that the resulting API becomes too complex for novice users, you can always provide a few Facade classes that encapsulate common dependency combinations.
To provide a flexible Facade with a high degree of discoverability, you could consider providing Fluent Builders. Something like this:
public class MyFacade
{
private IMyDependency dep;
public MyFacade()
{
this.dep = new DefaultDependency();
}
public MyFacade WithDependency(IMyDependency dependency)
{
this.dep = dependency;
return this;
}
public Foo CreateFoo()
{
return new Foo(this.dep);
}
}
This would allow a user to create a default Foo by writing
var foo = new MyFacade().CreateFoo();
It would, however, be very discoverable that it's possible to supply a custom dependency, and you could write
var foo = new MyFacade().WithDependency(new CustomDependency()).CreateFoo();
If you imagine that the MyFacade class encapsulates a lot of different dependencies, I hope it's clear how it would provide proper defaults while still making extensibility discoverable.
FWIW, long after writing this answer, I expanded upon the concepts herein and wrote a longer blog post about DI-Friendly Libraries, and a companion post about DI-Friendly Frameworks.
The term "dependency injection" doesn't specifically have anything to do with an IoC container at all, even though you tend to see them mentioned together. It simply means that instead of writing your code like this:
public class Service
{
public Service()
{
}
public void DoSomething()
{
SqlConnection connection = new SqlConnection("some connection string");
WindowsIdentity identity = WindowsIdentity.GetCurrent();
// Do something with connection and identity variables
}
}
You write it like this:
public class Service
{
public Service(IDbConnection connection, IIdentity identity)
{
this.Connection = connection;
this.Identity = identity;
}
public void DoSomething()
{
// Do something with Connection and Identity properties
}
protected IDbConnection Connection { get; private set; }
protected IIdentity Identity { get; private set; }
}
That is, you do two things when you write your code:
Rely on interfaces instead of classes whenever you think that the implementation might need to be changed;
Instead of creating instances of these interfaces inside a class, pass them as constructor arguments (alternatively, they could be assigned to public properties; the former is constructor injection, the latter is property injection).
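The two styles in that last point can be sketched side by side; the types here are hypothetical stand-ins for any dependency.

```csharp
using System;

public interface IGreeter { string Greet(string name); }

public class PoliteGreeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

// Constructor injection: the dependency is mandatory and explicit,
// so a missing dependency fails at construction time.
public class ConstructorInjected
{
    private readonly IGreeter greeter;

    public ConstructorInjected(IGreeter greeter)
    {
        if (greeter == null) throw new ArgumentNullException("greeter");
        this.greeter = greeter;
    }

    public string Welcome(string name) { return this.greeter.Greet(name); }
}

// Property injection: the dependency is replaceable after construction,
// at the cost of a possible null if the caller forgets to set it.
public class PropertyInjected
{
    public IGreeter Greeter { get; set; }

    public string Welcome(string name) { return this.Greeter.Greet(name); }
}
```

Constructor injection is usually the safer default; property injection suits genuinely optional dependencies with a sensible fallback.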
None of this presupposes the existence of any DI library, and it doesn't really make the code any more difficult to write without one.
If you're looking for an example of this, look no further than the .NET Framework itself:
List<T> implements IList<T>. If you design your class to use IList<T> (or IEnumerable<T>), you can take advantage of concepts like lazy-loading, as Linq to SQL, Linq to Entities, and NHibernate all do behind the scenes, usually through property injection. Some framework classes actually accept an IList<T> as a constructor argument, such as BindingList<T>, which is used for several data binding features.
Linq to SQL and EF are built entirely around the IDbConnection and related interfaces, which can be passed in via the public constructors. You don't need to use them, though; the default constructors work just fine with a connection string sitting in a configuration file somewhere.
If you ever work on WinForms components you deal with "services", like INameCreationService or IExtenderProviderService. You don't even really know what the concrete classes are. .NET actually has its own IoC container, IContainer, which gets used for this, and the Component class has a GetService method which is the actual service locator. Of course, nothing prevents you from using any or all of these interfaces without the IContainer or that particular locator. The services themselves are only loosely-coupled with the container.
Contracts in WCF are built entirely around interfaces. The actual concrete service class is usually referenced by name in a configuration file, which is essentially DI. Many people don't realize this but it is entirely possible to swap out this configuration system with another IoC container. Perhaps more interestingly, the service behaviors are all instances of IServiceBehavior which can be added later. Again, you could easily wire this into an IoC container and have it pick the relevant behaviors, but the feature is completely usable without one.
And so on and so forth. You'll find DI all over the place in .NET, it's just that normally it's done so seamlessly that you don't even think of it as DI.
If you want to design your DI-enabled library for maximum usability then the best suggestion is probably to supply your own default IoC implementation using a lightweight container. IContainer is a great choice for this because it's a part of the .NET Framework itself.
EDIT 2015: time has passed, I realize now that this whole thing was a huge mistake. IoC containers are terrible and DI is a very poor way to deal with side effects. Effectively, all of the answers here (and the question itself) are to be avoided. Simply be aware of side effects, separate them from pure code, and everything else either falls into place or is irrelevant and unnecessary complexity.
Original answer follows:
I had to face this same decision while developing SolrNet. I started with the goal of being DI-friendly and container-agnostic, but as I added more and more internal components, the internal factories quickly became unmanageable and the resulting library was inflexible.
I ended up writing my own very simple embedded IoC container while also providing a Windsor facility and a Ninject module. Integrating the library with other containers is just a matter of properly wiring the components, so I could easily integrate it with Autofac, Unity, StructureMap, whatever.
The downside of this is that I lost the ability to just new up the service. I also took a dependency on CommonServiceLocator which I could have avoided (I might refactor it out in the future) to make the embedded container easier to implement.
More details in this blog post.
MassTransit seems to rely on something similar. It has an IObjectBuilder interface which is really CommonServiceLocator's IServiceLocator with a couple more methods; it then implements this for each container, e.g. NinjectObjectBuilder, and a regular module/facility, e.g. MassTransitModule. Then it relies on IObjectBuilder to instantiate what it needs. This is a valid approach of course, but personally I don't like it very much since it's actually passing around the container too much, using it as a service locator.
MonoRail implements its own container as well, which implements good old IServiceProvider. This container is used throughout this framework through an interface that exposes well-known services. To get the concrete container, it has a built-in service provider locator. The Windsor facility points this service provider locator to Windsor, making it the selected service provider.
Bottom line: there is no perfect solution. As with any design decision, this issue demands a balance between flexibility, maintainability and convenience.
What I would do is design my library in a DI container agnostic way to limit the dependency on the container as much as possible. This allows you to swap out one DI container for another if need be.
Then expose the layer above the DI logic to the users of the library so that they can use whatever framework you chose through your interface. This way they can still use DI functionality that you exposed and they are free to use any other framework for their own purposes.
Allowing the users of the library to plug in their own DI framework seems a bit wrong to me as it dramatically increases the amount of maintenance. This also then becomes more of a plugin environment than straight DI.
