Access HttpContext in constructor for fake DI - c#

I am working on an ASP.NET MVC application that does not have DI or unit testing yet, so I started to restructure it to support unit tests by splitting the application into three layers: Controllers - Services - DataAccess.
Some of the controllers were using the Session and Cookies to store and retrieve values, so I created an interface and a class that deal with saving and retrieving values from the Session and Cookies.
I did this using only unit tests and never ran the application.
Since the application did not have DI, I created the ContextService in the constructor of the controller, passing the controller's HttpContext as an input parameter.
However, when I ran the application, the values were not retrieved from or saved in the Session or Cookies. It seems that the HttpContext is null in the constructor.
Question 1:
How should I deal with my ContextService? Should it use the static property HttpContext.Current to access the session and cookies (and how would it be unit tested then), or something else?
Question 2:
If you know another solution, how should it be adapted so that the application can also have DI in the future?

I created the ContextService in the constructor of the controller, passing the controller's HttpContext as an input parameter.
By passing the HttpContext from the controller to the service, you make the controller responsible for the creation of that service. This tightly couples the controller to the service, while loose coupling is the goal.
Should it use the static property HttpContext.Current in order to access the session and cookies
how will it be unit tested
It won't. This is an important reason why we create abstractions. Some parts of our system can't be unit tested, so we want to be able to replace them with fake implementations that we use under test.
The trick, however, is to make the replaced part as small as possible, and preferably don't mix it with business logic, since replacing that will also mean you won't be testing that logic.
You should hide access to HttpContext.Current behind an abstraction. But when you do that, make sure you define the abstraction in a way that suits your application best. For instance, take a close look at what it is that your ContextService wants. Does it really want to access cookies? Probably not. Or does it want the name or ID of the currently logged-in user? That's more likely. So you should model your abstractions around that.
As an example, define an abstraction that allows application code to access information about the logged-in user using an IUserContext:
public interface IUserContext
{
    string UserName { get; }
}
One possible implementation of this abstraction is one that retrieves this information from an HTTP cookie:
public class CookieUserContext : IUserContext
{
    // Note: in System.Web, cookies live on the request, not on the context itself.
    public string UserName => HttpContext.Current.Request.Cookies["name"]?.Value;
}
But you can easily imagine other implementations, for instance when that same application code needs to run outside the context of a web request, such as in a background operation or an isolated Windows service application. This is another important reason to introduce abstractions: whenever the same code needs to be able to run in different environments.
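Under test, you can then swap in a hand-rolled fake; a minimal sketch (FakeUserContext is just an illustrative name):
public class FakeUserContext : IUserContext
{
    // Test double: no HttpContext required; the test sets the value directly.
    public string UserName { get; set; } = "test-user";
}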
If you're interested, the book Dependency Injection in .NET by Mark Seemann goes into great detail about these kinds of patterns and principles, such as the reasons for applying DI and how to prevent tight coupling. The second edition of this book, by Seemann and myself, goes into even more detail about the things you are struggling with, such as preventing leaky abstractions, separating behavior into classes, and designing applications using the SOLID principles. The book's homepage contains a link to the first chapter, which is free to download.

Related

SimpleInjector: End-to-end testing of controller's methods on a test database

I have a web app with several REST API controllers. These controllers get repositories injected as per this tutorial using SimpleInjector. I'd like to add some end-to-end testing to my project to make sure the controllers' method calls affect the database in a predictable manner (I'm using EF6, MySQL, code first). I was going to use this plan to test my app. I like the overall approach, but it seems that in this approach the author is feeding the db context directly into the controller. In my case I have a Controller that gets an injected Repository through its constructor, and in turn the Repository gets an injected DbContext. Obviously I could hard-code the chain of creating the DbContext, instantiating the Repository, and then instantiating the Controller, but that kind of defeats the purpose of using SimpleInjector, doesn't it? I think there should be a way to do it in a more transparent manner.
Basically I would like to inject a separate database into my tests. When the server is running it uses one database; when the tests are running they use another, ad-hoc database.
I have my test classes in a separate project, so I will need a way to instantiate my Controllers and Repositories from the main project. I'm not sure how I can do that either. Is it a good idea to expose my SimpleInjector.Container from another project somehow?
Additional info: I'm using .NET Framework (non-Core), and I would like to manage without mocking for now unless it's required.
You can abstract the DbContext behind an interface and use Simple Injector's option to override registrations for your tests. That allows you to register a different implementation of your context for testing. In your test setup code, call your standard registrations (assuming they're all in your composition root and/or bootstrapping project), then flip the override switch and register the test context.
See the Simple Injector documentation: Override Registrations (for testing only).
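A minimal sketch of that test setup (Bootstrapper, IMyDbContext, and TestDbContext are placeholder names for your own bootstrapping code and test context; AllowOverridingRegistrations is the Simple Injector switch in question):
var container = new Container(); // SimpleInjector.Container

// Run your standard registrations first (hypothetical bootstrapping helper).
Bootstrapper.RegisterProductionDependencies(container);

// Flip the override switch, then swap in the test context.
container.Options.AllowOverridingRegistrations = true;
container.Register<IMyDbContext, TestDbContext>();

container.Verify();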
In your case, I wouldn't expect you to have to do anything in particular. Your end-to-end tests would call into a test version of the web application over HTTP, and that test application is configured with a connection string that points at a test database. This way you can use the exact same DI configuration without making any changes. You certainly don't want to inject a different DbContext during testing.
Another option is to test in-memory, which means you don't call the web application over HTTP but instead request a controller directly from Simple Injector and call its methods. Here the same holds: the only thing you want to change is your connection string, which is something that should already be configurable.
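An in-memory test along those lines could look like this (a sketch using NUnit-style attributes; Bootstrapper, OrdersController, and Order are placeholders for your own types):
[Test]
public void Post_PersistsOrderToTestDatabase()
{
    // Build the container exactly as the application does; only the
    // connection string (read from the test project's config) differs.
    var container = Bootstrapper.BuildContainer();

    var controller = container.GetInstance<OrdersController>();
    controller.Post(new Order());

    // Assert against the ad-hoc test database here.
}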

Seemann's DI Azure Table Data Access

In book "Dependency Injection in .Net" by Mark Seemann, in second chapter, is analysis of some badly written 3-layer asp.net application. The main point is: application fails because the lowest layer, data access, can not be converted from SQL with Entity Framework to Azure with no-SQL database. Here is exact quotation:
To enable the e-commerce application as a cloud application, the Data Access library must be replaced with a module that uses the Table Storage Service. Is this possible? From the dependency graph in figure 2.10, we already know that both User Interface and Domain libraries depend on the Entity Framework-based Data Access library. If we try to remove the Data Access library, the solution will no longer compile, because a required DEPENDENCY is missing.
In a big application with dozens of modules, we could also try to remove those modules that don't compile to see what would be left. In the case of Mary's application, it's evident that we'd have to remove all modules, leaving nothing behind.
Although it would be possible to develop an Azure Table Data Access library that mimics the API exposed by the original Data Access library, there's no way we could inject it into the application.
(Figure 2.10, the dependency graph, is not reproduced here.)
My question is: why can a module that imitates the previous behavior not be injected into the application, and what does that really mean? Is it related to Azure specifics? I have not worked much with no-SQL databases before.
Essentially, what he means is that your UI code directly depends on the code in the Data Access library. An example of how this might be used in the UI layer:
public class SomeController : Controller
{
    [Route("someRoute")]
    [HttpGet]
    public ViewResult SomeRoute()
    {
        // Here we're using the data component directly
        var dataComponent = new DataAccessLayer.DataComponent();
        return View(dataComponent.GetSomeData());
    }
}
If we want to swap out the DataAccess library, it means we would have to go into all our controllers and change the code to use the new component (unless we create exactly the same class names in the same namespaces, but that's unlikely).
On the other hand, we could also write the controller like this:
public class SomeController : Controller
{
    private readonly IDataComponent _dataComponent;

    public SomeController(IDataComponent dataComponent)
    {
        _dataComponent = dataComponent;
    }

    [Route("someRoute")]
    [HttpGet]
    public ViewResult SomeRoute()
    {
        // Now we're using the interface that was injected
        return View(_dataComponent.GetSomeData());
    }
}
By defining the class like this, we can externally specify which concrete class implementing the IDataComponent interface should be injected into the constructor. This allows us to "wire up" our application externally: a concrete class is injected into the class that depends on it.
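That external wiring lives in one central place, the composition root. As a sketch, in Simple Injector syntax (the concrete class names are illustrative):
// Composition root: pick the concrete implementation once, centrally.
container.Register<IDataComponent, EntityFrameworkDataComponent>();

// Swapping to Azure Table Storage becomes a one-line change:
// container.Register<IDataComponent, AzureTableDataComponent>();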
Dependency Injection is one way to make it easier to "program against an interface, not a concrete class".
The example Mark Seemann gives relates to databases vs Azure Table Storage, but it's just that, an example. This is not related to NoSql (or storage mechanisms in general). The same principles apply for everything that depends on other classes (generally service-type classes).
EDIT after comments:
It's indeed true that you could just modify the internals of the DataComponent (or repository if that's what you're using).
However, using DI (and programming against an interface in general) gives you more options:
You could have various implementations at the same time and inject a different implementation depending on which controller it is (for example)
You could reuse the same instance in all your controllers by specifying the lifecycle in the registration (probably not usable in this case)
For testing purposes, you could inject a different implementation into the controller (such as a mock, which you can test for invocations)

Is this an abuse of dependency injection? (when are dependencies not dependencies)

We have a multi-tenant web application in which many pages operate per-tenant. As a result, many of our interfaces look like this:
interface ISprocketDeployer
{
    void DeploySprocket(int tenantId);
}
It occurred to me that it might be better to simplify these interfaces to be unaware of the tenantId. The pages would also then be unaware of the tenantId, like so
[Inject] // Ninject
public ISprocketDeployer SprocketDeployer { get; set; }

private void _button_OnClick(object sender, EventArgs e)
{
    SprocketDeployer.DeploySprocket();
}
The dependency injection framework would then inject the tenant ID as a dependency by looking at the currently authenticated user. Is this a good idea or just an abuse of dependency injection?
It further occurred to me that many implementations also take additional dependencies just for looking up details about the tenant, and that I could reduce the number of dependencies further by injecting that detail directly. For example:
class SprocketDeployer
{
    private readonly ITenantRepository _tenantRepository;

    public SprocketDeployer(ITenantRepository tenantRepository)
    {
        _tenantRepository = tenantRepository;
    }

    public void DeploySprocket(int tenantId)
    {
        var tenantName = _tenantRepository.GetTenant(tenantId).Name;
        // Do stuff with tenantName
    }
}
Would become
class SprocketDeployer
{
    private readonly Tenant _tenant;

    public SprocketDeployer(Tenant tenant)
    {
        _tenant = tenant;
    }

    public void DeploySprocket()
    {
        var tenantName = _tenant.Name;
        // Do stuff with tenantName
    }
}
I then realised that I could also inject other "dependencies", such as details about the currently logged-in user, in the same way.
At that point I became unsure. While it seemed like a fantastic idea at first, I realised that I wasn't sure when to stop adding extra "dependencies". How do I decide what should be a dependency and what should be a parameter?
I would stop short of calling it abuse, but that said:
The general use case of dependency injection (via a container) is to inject pure services that do not directly represent state. One of the immediate problems is informing the container of which instance of your object it should inject at run-time. If your SprocketDeployer requires a Tenant, and your system includes many Tenants, how does the container figure out which tenant to supply at runtime?
If you want to avoid passing Tenant around, consider using Thread Local Storage (TLS). However, there will still be some point in the pipeline where the Tenant needs to be added to TLS.
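As an illustration, a minimal ambient-tenant holder backed by ThreadLocal<T> could look like this (a sketch; the Tenant type and the point in the pipeline where Current gets set are your own):
using System.Threading;

public static class TenantContext
{
    private static readonly ThreadLocal<Tenant> _current = new ThreadLocal<Tenant>();

    // Something early in the request pipeline must set this per request.
    public static Tenant Current
    {
        get { return _current.Value; }
        set { _current.Value = value; }
    }
}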
Edit
From your comment:
I solve the problem of figuring out which tenant to supply at runtime in Ninject by binding the type to a method which examines HttpContext.Current and using InRequestScope. It works fine, but I've not seen anything to indicate that this is (or isn't) a recommended practice.
If I understand you correctly, that sounds like a factory of sorts? If that's the case, I see nothing wrong with it.
A minor nitpick might be: it's nice not to have to be concerned about how your services are scoped. When they are truly stateless services, you can view them as pure swappable components that have no side effects based on container configuration.
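For reference, such a binding might look roughly like this in Ninject (a sketch; TenantResolver is a hypothetical helper that maps the authenticated user to a Tenant):
// Resolve the current Tenant once per web request, based on the logged-in user.
kernel.Bind<Tenant>()
      .ToMethod(ctx => TenantResolver.FromUser(HttpContext.Current.User))
      .InRequestScope();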
As with Phil, I would not call this dependency injection abuse, though it does feel a bit odd.
You have at least a few options. I'll detail a couple that seem the best fit given the detail you've provided, though they may be what you were already referring to when you said 'I then realised that I could also inject in other "dependencies", such as details about the currently logged in user in the same way.'
Option 1: Abstract tenant identification to a factory
It may make perfect sense to have an abstraction that represents the current tenant. This abstraction is a factory, but I prefer the term "provider" because factory connotes creation, whereas a provider may simply retrieve an existing object (note: I realize Microsoft introduced a provider pattern, but that's not what I'm referring to). In this context you're not injecting data; instead you're injecting a service. I'd probably call it ICurrentTenantProvider. The implementation is frequently context specific. Right now, for example, it would come from your HttpContext object. But you could later decide a specific customer needs their own server, and then inject an ICurrentTenantProvider that retrieves it from your web.config file.
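A minimal sketch of that abstraction (the names are illustrative, and GetTenantForUser is a hypothetical repository method):
public interface ICurrentTenantProvider
{
    Tenant GetCurrentTenant();
}

// Web implementation: derive the tenant from the authenticated user.
public class HttpContextTenantProvider : ICurrentTenantProvider
{
    private readonly ITenantRepository _tenantRepository;

    public HttpContextTenantProvider(ITenantRepository tenantRepository)
    {
        _tenantRepository = tenantRepository;
    }

    public Tenant GetCurrentTenant()
    {
        var userName = HttpContext.Current.User.Identity.Name;
        return _tenantRepository.GetTenantForUser(userName);
    }
}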
Option 2: Hide multitenancy entirely
Unless you ever have to do different things based on the tenant[1], it may be better to hide the multitenancy entirely. In this case you'd inject classes, which I'm going to call providers, that are context aware and whose function calls return results based on the current tenant. For example, you might have an ICssProvider and an IImageProvider. These providers alone would be aware that the application supports multitenancy. They may use another abstraction such as the ICurrentTenantProvider referenced above, or may use the HttpContext directly. Regardless of the implementation, they would return content specific to the tenant.
In both cases, I'd recommend injecting a service instead of data. The service provides an abstraction layer and allows you to inject an implementation that's appropriately context aware.
Making the Decision
How do I decide what should be a dependency and what should be a parameter?
I generally only ever inject services and avoid injecting things like value objects. To decide, you might ask yourself some questions:
1. Would it make sense to register this object type (e.g., an int tenantId) in the IoC container?
2. Is this object/type consistent for the standard lifetime of the application (e.g., instance per HTTP request), or does it change?
3. Will most objects end up dependent on this particular object/type?
4. Would this object need to be passed around a lot if made a parameter?
For (1), it doesn't make sense to inject value objects. For (2), if it is consistent, as the tenant would be, it may be better to inject a service that's aware of the tenant. If yes to (3), it may indicate a missing abstraction. If yes to (4), you may again be missing an abstraction.
In the vein of (3) and (4) and depending on the details of the application, I could see ICurrentTenantProvider being injected in a lot of places, which may indicate it's a little low level. At that point the ICssProvider or similar abstractions may be useful.
[1] - If you inject data, like an int, you're forced to query and you may end up in a situation where you'd want to replace conditional with polymorphism.
10/14/15 UPDATE BEGIN
A little over three months later, I've had a bit of a change of heart on the specific situation I mentioned running into with this approach.
I had mentioned that for a long time now I've regularly injected the current "identity" (tenantAccount, user, etc.) wherever it was necessary, but that I had run into a situation where I needed the ability to temporarily change that identity for just a portion of the code being executed (within the same execution thread).
Initially, a clean solution to this situation wasn't obvious to me.
I'm glad to say that in the end I did eventually come up with a viable solution, and it has been happily churning away for some time now.
It will take some time to put together an actual code sample (it's currently implemented in a proprietary system), but in the meantime here is at least a high-level conceptual overview.
Note: Name the interfaces, classes, methods, etc. whatever you like - even combine things if that makes sense for you. It's just the overall concepts that are important.
First, we define an IIdentityService exposing a GetIdentity() method. This becomes the de facto dependency for getting the current identity anywhere we need it (repos, services, etc. - everything uses this).
The IIdentityService implementation takes a dependency on an IIdentityServiceOrchestrator.
In my system the IIdentityServiceOrchestrator implementation makes use of multiple IIdentityResolvers (of which only two are actually applicable to this discussion: authenticatedIdentityResolver and manualIdentityResolver). IIdentityServiceOrchestrator exposes a .Mode property to set the active IIdentityResolver (by default this is set to 'authenticated' in my system).
Now, you could just stop there and inject the IIdentityServiceOrchestrator anywhere you needed to set the identity. But then you'd be responsible for managing the entire process of setting and rolling back the temporary identity (setting the mode, and also backing up and restoring the identity details if it was already in manual mode, etc.).
So the next step is to introduce an IIdentityServiceOrchestratorTemporaryModeSwitcher. Yes, I know the name is long - name it what you want. ;) This exposes two methods: SetTemporaryIdentity() and Rollback(). SetTemporaryIdentity() is overloaded so you can set via mode or manual identity. The implementation takes a dependency on the IIdentityServiceOrchestrator and manages all the details of backing up the existing identity details, setting the new mode/details, and rolling back.
Now, again, you could just stop there and inject the IIdentityServiceOrchestratorTemporaryModeSwitcher anywhere you'd need to set the temporary identity. But then you'd be forced to call .SetTemporaryIdentity() in one place and .Rollback() in another, and in practice this can get messy.
So now we introduce the final pieces of the puzzle: TemporaryIdentityContext and ITemporaryIdentityContextFactory.
TemporaryIdentityContext implements IDisposable and takes a dependency on both the IIdentityServiceOrchestratorTemporaryModeSwitcher and an identity/mode set via an overloaded constructor. In the constructor we use IIdentityServiceOrchestratorTemporaryModeSwitcher.SetTemporaryIdentity() to set the temporary identity, and on dispose we call IIdentityServiceOrchestratorTemporaryModeSwitcher.Rollback() to clean things up.
Now, wherever we need to set the identity, we inject the ITemporaryIdentityContextFactory, which exposes a .Create() method (again overloaded for identity/mode), and this is how we procure our temporary identity contexts. The returned TemporaryIdentityContext object itself isn't really touched; it just exists to control the lifetime of the temporary identity.
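To make the shape of this concrete, here is a minimal sketch of the context piece, using the names from the description above (the Identity type is a placeholder for whatever identity type your system uses):
using System;

public class TemporaryIdentityContext : IDisposable
{
    private readonly IIdentityServiceOrchestratorTemporaryModeSwitcher _switcher;

    public TemporaryIdentityContext(
        IIdentityServiceOrchestratorTemporaryModeSwitcher switcher,
        Identity identity)
    {
        _switcher = switcher;
        // Back up the current identity and activate the temporary one.
        _switcher.SetTemporaryIdentity(identity);
    }

    public void Dispose()
    {
        // Restore whatever identity was active before.
        _switcher.Rollback();
    }
}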
Example flow:
// Original identity
using (_temporaryIdentityContextFactory.Create(manualIdentity))
{
    // Temporary identity now in place
    DoSomeStuff();
}
// Back to the original identity again
That's pretty much it conceptually; obviously a lot of the details have been left out.
There's also the matter of IoC lifetime that should be discussed. In its purest form, as discussed here, each of the components (IIdentityService, IIdentityServiceOrchestrator, ITemporaryIdentityContextFactory) could generally be given a 'per request' lifetime. However, it could get funky if you happen to be spawning multiple threads from a single request, in which case you'd likely want to go with a 'per thread' or similar lifetime to ensure there is no thread crosstalk on the injections.
OK, hope that actually helps someone (and didn't come across as completely convoluted). I'll post a code sample that should clear things up further as I have time.
10/14/15 UPDATE END
Just wanted to chime in and say you're not alone in this practice. I've got a couple of multi-tenant apps in the wild that inject the tenant information where it's needed in the same manner.
However, I have more recently run into an issue where doing this caused me quite a bit of grief.
Just for the sake of example, let's say you have the following (very linear) dependency graph:
ISomeService -> IDep2 -> IDep3 -> ISomeRepository -> ITenantInfoProvider
So ISomeService depends on IDep2, which depends on IDep3... and so on, until way out in some leaf an ITenantInfoProvider is injected.
So, what's the problem? Well, what if in ISomeService you need to act on a tenant other than the one you're currently logged in as? How do you get a different set of TenantInfo injected into ISomeRepository?
Well, some IoC containers DO have context-based conditional support (Ninject's "WhenInjectedInto" and "WhenAnyAncestorNamed" bindings, for example). So in simpler cases you could manage something hacky with those.
But what if in ISomeService you need to initiate two operations, each against a different tenant? The above solutions will fail without the introduction of multiple marker interfaces, etc. Changing your code to this extent for the sake of dependency injection just smells bad on multiple levels.
Now, I did come up with a container based solution, but I don't like it.
You can introduce an ITenantInfoResolverStrategy and have an implementation for each "way" of resolving the TenantInfo (AuthenticationBasedTenantInfoResolverStrategy, UserProvidedTenantInfoResolverStrategy, etc.).
Next you introduce a CurrentTenantInfoResolverStrategy (registered with the container with a per-request lifetime so it's a singleton for the life of your call, etc.). This can be injected anywhere you need to set the strategy that will be used by downstream clients. So, in our example, we inject it into ISomeService and set the strategy to "UserProvided" (feeding it a TenantId, etc.). Now, down the chain, when ISomeRepository asks the ITenantInfoProvider for the TenantInfo, the ITenantInfoProvider turns around and gets it from an injected CurrentTenantInfoResolverStrategy.
Back in ISomeService, the CurrentTenantInfoResolverStrategy could be changed multiple times as needed.
So, why don't I like this?
To me, this is really just an overly complicated global variable. And in my mind just about all the problems associated with globals apply here (unexpected behavior due to it being mutable by anyone at any time, concurrency issues, etc.).
The problem this whole thing sets out to solve (mostly just not having to pass the tenantId/tenantInfo around as a parameter) is probably not worth the inherent issues that come with it.
So what's a better solution? There's probably some elegant thing that I'm just not thinking of (maybe some Chain of Command implementation?).
But really, I don't know.
It may not be elegant, but passing a TenantId/TenantInfo around as a parameter in any tenant-related method calls would definitely avoid this whole debacle.
If anyone else has better ideas please by all means chime in.

Options for IoC Auto Wiring in Domain Driven Design

In my latest ASP.NET MVC 2 application I have been trying to put into practice the concepts of Domain-Driven Design (DDD), the Single Responsibility Principle (SRP), Inversion of Control (IoC), and Test-Driven Development (TDD). As an architecture example I have been following Jeffrey Palermo's "Onion Architecture", which is expanded on greatly in ASP.NET MVC 2 in Action.
While I have begun to successfully apply most (some?) of these principles, I am missing a key piece of the puzzle. I am having trouble determining the best mechanism for auto-wiring a service layer to my domain entities.
As an example: each domain entity that needs the ability to send an email should depend on an IEmailService interface. From my reading, the best practice to reveal this dependency would be constructor injection. In my UI layer I perform a similar injection for repository interface implementations using the StructureMapControllerFactory from ASP.NET MVC Contrib.
Where I am confused is: what is the best mechanism for auto-wiring the injection of the necessary services into domain entities? Should the domain entities even be injected this way? How would I go about using IEmailService if I don't inject it into the domain entities?
Additional Stack Overflow questions which are great DDD, SRP, IoC, TDD references:
IoC Containers and Domain Driven Design
How to avoid having very large objects with Domain Driven Design
Unless I'm misunderstanding your intent (and instead I'm choosing to focus on semantics), I'm going to dissect this statement: "As an example: each domain entity that needs the ability to send an email should depend on an IEmailService interface."
I would have to argue that this is in itself an extreme bastardization of DDD. Why should a domain entity ever need to depend on an email service? IMO it shouldn't. There is no justification for it.
However, there are business operations in conjunction with a domain entity that require the ability to send emails. You should have your IEmailService dependency contained in that class, not in the domain entity. That class would most likely fall under one of a few nearly synonymous names (Model, Service, or Controller), depending on which architecture/layer you're in.
At this point your StructureMapControllerFactory would then correctly auto-wire everything that uses the IEmailService.
While I might be minorly overgeneralizing, it's pretty much standard practice for domain entities to be POCOs, or nearly POCOs (to avoid violating the SRP); however, SRP is frequently violated in domain entities for the sake of serialization and validation. Choosing to violate SRP for those types of cross-cutting concerns is more of a personal belief stance than a "right" or "wrong" decision.
As a final follow-up: if your question is about code that truly operates as a stand-alone service, whether web- or OS-based, and how to wire up the dependencies from that, the normal solution would be to take over the service at a base level and apply IoC to it in a fashion similar to what the StructureMapControllerFactory does in MVC. How to achieve this is entirely dependent on the infrastructure you're working with.
Response:
Let's say you have an IOrderConfirmService which has a method EmailOrderConfirmation(Order order). You would end up with something like this:
public class MyOrderConfirmService : IOrderConfirmService
{
    private readonly IEmailService _mailer;

    public MyOrderConfirmService(IEmailService mailer)
    {
        _mailer = mailer;
    }

    public void EmailOrderConfirmation(Order order)
    {
        var msg = ConvertOrderToMessage(order); // good extension method candidate
        _mailer.Send(msg);
    }
}
You would then have your OrderController class, which would be something like:
public class OrderController : Controller
{
    private readonly IOrderConfirmService _service;

    public OrderController(IOrderConfirmService service)
    {
        _service = service;
    }

    public ActionResult Confirm()
    {
        _service.EmailOrderConfirmation(someOrder); // someOrder: the order being confirmed
        return View();
    }
}
StructureMap will inherently build up your entire architecture chain when you use constructor injection correctly. This is the fundamental difference between tight coupling and inversion of control. When the StructureMapControllerFactory goes to build up your controller, the first thing it will see is that it needs an IOrderConfirmService. At that point it will check whether it can plug in an IOrderConfirmService directly, which it can't, because that service needs an IEmailService. So it will check whether it can plug in an IEmailService and, for argument's sake, let's say it can. At this point it will build the EmailService, then build MyOrderConfirmService and plug in the EmailService, and finally build the OrderController and plug in MyOrderConfirmService. This is where the term inversion of control comes from: StructureMap builds the EmailService first, at the far end of the chain of dependencies, and ends last with the Controller. In a tightly coupled setup this would be the opposite: the Controller would be built first and would have to build the business service, which would then build the email service. Tightly coupled design is very brittle when compared to IoC.
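For completeness, the corresponding StructureMap registrations could look roughly like this (a sketch; EmailService is assumed to be your concrete mailer, and the exact syntax depends on your StructureMap version):
public class AppRegistry : Registry
{
    public AppRegistry()
    {
        // Tell the container which concrete types satisfy each interface.
        For<IOrderConfirmService>().Use<MyOrderConfirmService>();
        For<IEmailService>().Use<EmailService>();
    }
}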

Design - Where should objects be registered when using Windsor [closed]

I will have the following components in my application
DataAccess
DataAccess.Test
Business
Business.Test
Application
I was hoping to use Castle Windsor as the IoC container to glue the layers together, but I am a bit uncertain about the design of the gluing.
My question is: who should be responsible for registering the objects into Windsor?
I have a couple of ideas:
Each layer can register its own objects. To test the BL, the test bench could register mock classes for the DAL.
Each layer can register the object of its dependencies, e.g. the business layer registers the components of the data access layer. To test the BL, the test bench would have to unload the "real" DAL object and register the mock objects.
The application (or test app) registers all objects of the dependencies.
I am seeking some ideas and the pros/cons of the different paths.
In general, all components in an application should be composed as late as possible, because that ensures maximum modularity, and that modules are as loosely coupled as possible.
In practice, this means that you should configure the container at the root of your application.
In a desktop app, that would be in the Main method (or very close to it)
In an ASP.NET (including MVC) application, that would be in Global.asax
In WCF, that would be in a ServiceHostFactory
etc.
The container is simply the engine that composes modules into a working application. In principle, you could write the code by hand (this is called Poor Man's DI), but it is just so much easier to use a DI Container like Windsor.
Such a Composition Root will ideally be the only piece of code in the application's root, making the application a so-called Humble Executable (a term from the excellent xUnit Test Patterns) that doesn't need unit testing in itself.
Your tests should not need the container at all, as your objects and modules should be composable, and you can directly supply Test Doubles to them from the unit tests. It is best if you can design all of your modules to be container-agnostic.
Also, specifically in Windsor, you should encapsulate your component registration logic within installers (types implementing IWindsorInstaller). See the documentation for more details.
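A minimal sketch of such an installer (IOrderService and OrderService are placeholder names for one of your business components):
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

// Each layer's registrations live in an installer like this one.
public class BusinessInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(
            Component.For<IOrderService>().ImplementedBy<OrderService>());
    }
}

// In the Composition Root:
// container.Install(FromAssembly.This());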
While Mark's answer is great for web scenarios, the key flaw in applying it to all architectures (namely rich-client: WPF, WinForms, iOS, etc.) is the assumption that all components needed for an operation can or should be created at once.
For web servers this makes sense since every request is extremely short-lived and an ASP.NET MVC controller gets created by the underlying framework (no user code) for every request that comes in. Thus the controller and all its dependencies can easily be composed by a DI framework, and there is very little maintenance cost to doing so. Note that the web framework is responsible for managing the lifetime of the controller and for all purposes the lifetime of all its dependencies (which the DI framework will create/inject for you upon the controller's creation). It is totally fine that the dependencies live for the duration of the request and your user code does not need to manage the lifetime of components and sub-components itself. Also note that web servers are stateless across different requests (except for session state, but that's irrelevant for this discussion) and that you never have multiple controller/child-controller instances that need to live at the same time to service a single request.
In rich-client apps however this is very much not the case. If using an MVC/MVVM architecture (which you should!) a user's session is long-living and controllers create sub-controllers / sibling controllers as the user navigates through the app (see note about MVVM at the bottom). The analogy to the web world is that every user input (button click, operation performed) in a rich-client app is the equivalent of a request being received by the web framework. The big difference however is that you want the controllers in a rich-client app to stay alive between operations (very possible that the user does multiple operations on the same screen - which is governed by a particular controller) and also that sub-controllers get created and destroyed as the user performs different actions (think about a tab control that lazily creates the tab if the user navigates to it, or a piece of UI that only needs to get loaded if the user performs particular actions on a screen).
Both these characteristics mean that it's the user code that needs to manage the lifetime of controllers/sub-controllers, and that the controllers' dependencies should NOT all be created upfront (ie: sub-controllers, view-models, other presentation components etc.). If you use a DI framework to perform these responsibilities you will end up with not only a lot more code where it doesn't belong (See: Constructor over-injection anti-pattern) but you will also need to pass along a dependency container throughout most of your presentation layer so that your components can use it to create their sub-components when needed.
Why is it bad that my user-code has access to the DI container?
1) The dependency container holds references to a lot of components in your app. Passing this bad boy around to every component that needs to create/manage another sub-component is the equivalent of using globals in your architecture. Worse, any sub-component can also register new components into the container, so soon enough it will become global storage as well. Developers will throw objects into the container just to pass data around between components (either between sibling controllers or between deep controller hierarchies, e.g. a deeply nested controller needs to grab data from a grandparent controller). Note that in the web world, where the container is not passed around to user code, this is never a problem.
2) The other problem with dependency containers versus service locators / factories / direct object instantiation is that resolving from a container makes it completely ambiguous whether you are CREATING a component or simply REUSING an existing one. Instead, it is left up to a centralized configuration (i.e., the bootstrapper / Composition Root) to figure out what the lifetime of the component is. In certain cases this is okay (e.g., web controllers, where it is not user code that needs to manage the component's lifetime but the runtime request-processing framework itself). It is extremely problematic, however, when the design of your components should INDICATE whether it's their responsibility to manage a component and what its lifetime should be. (Example: a phone app pops up a sheet that asks the user for some info. This is achieved by a controller creating a sub-controller which governs the overlaying sheet. Once the user enters some info, the sheet is dismissed, and control returns to the initial controller, which still maintains state from what the user was doing prior.) If DI is used to resolve the sheet sub-controller, it's ambiguous what its lifetime should be or who should be responsible for managing it (the initiating controller). Compare this to the explicit responsibility dictated by the use of other mechanisms.
Scenario A:
// not sure whether I'm responsible for creating the thing or not
DependencyContainer.GimmeA<Thing>()
Scenario B:
// responsibility is clear that this component is responsible for creation
Factory.CreateMeA<Thing>()
// or simply
new Thing()
Scenario C:
// responsibility is clear that this component is not responsible for creation, but rather only consumption
ServiceLocator.GetMeTheExisting<Thing>()
// or simply
ServiceLocator.Thing
As you can see, DI makes it unclear who is responsible for the lifetime management of the sub-component.
NOTE: Technically speaking, many DI frameworks do have some way of creating components lazily (see: How not to do dependency injection - the static or singleton container), which is a lot better than passing the container around. But you are still paying the cost of mutating your code to pass creation functions around everywhere, you lack first-level support for passing valid constructor parameters during creation, and at the end of the day you are still using an indirection mechanism unnecessarily in places where the only benefit is testability, which can be achieved in better, simpler ways (see below).
What does all this mean?
It means DI is appropriate for certain scenarios and inappropriate for others. In rich-client applications it happens to carry a lot of the downsides of DI with very few of the upsides. The further your app scales out in complexity, the bigger the maintenance costs will grow. It also carries the grave potential for misuse, which, depending on how tight your team communication and code-review processes are, can be anywhere from a non-issue to a severe tech-debt cost. There is a myth going around that Service Locators, Factories, and good old instantiation are somehow bad and outdated mechanisms simply because they may not be the optimal mechanism in the web-app world, where perhaps a lot of people play. We should not overgeneralize those learnings to all scenarios and view everything as a nail just because we've learned to wield a particular hammer.
My recommendation FOR RICH-CLIENT APPS is to use the minimal mechanism that meets the requirements for each component at hand. 80% of the time this should be direct instantiation. Service locators can be used to house your main business-layer components (i.e., application services, which are generally singleton in nature), and of course Factories and even the Singleton pattern have their place. There is nothing to say you can't use a DI framework hidden behind your service locator to create your business-layer dependencies and everything they depend on in one go, if that ends up making your life easier in that layer and that layer doesn't exhibit the lazy loading that rich-client presentation layers overwhelmingly do. Just make sure to shield your user code from access to that container so that you can prevent the mess that passing a DI container around can create.
What about testability?
Testability can absolutely be achieved without a DI framework. I recommend using an interception framework such as UnitBox (free) or TypeMock (pricey). These frameworks give you the tools you need to get around the problem at hand (how to mock out instantiation and static calls in C#) and do not require you to change your whole architecture to do so (which unfortunately is where the trend has gone in the .NET/Java world). It is wiser to find a solution to the problem at hand and use the natural language mechanisms and patterns optimal for the underlying component than to try to fit every square peg into the round DI hole. Once you start using these simpler, more specific mechanisms, you will notice there is very little need for DI in your codebase, if any at all.
NOTE: For MVVM architectures
In basic MVVM architectures, view-models effectively take on the responsibility of controllers, so for all purposes consider the 'controller' wording above to apply to 'view-model'. Basic MVVM works fine for small apps, but as the complexity of an app grows you may want to use an MVCVM approach. View-models become mostly dumb DTOs to facilitate data-binding to the view, while interaction with the business layer and between groups of view-models representing screens/sub-screens gets encapsulated into explicit controller/sub-controller components. In either architecture the responsibility of controllers exists and exhibits the same characteristics discussed above.
