Architecture: Dependency Injection, Loosely Coupled Assemblies, Implementation Hiding - C#

I've been working on a personal project which, beyond just making something useful for myself, I've tried to use as a way to continue finding and learning architectural lessons. One such lesson has appeared like a Kodiak bear in the middle of a bike path and I've been struggling quite mightily with it.
The problem is essentially an amalgam of issues at the intersection of dependency injection, assembly decoupling and implementation hiding (that is, implementing my public interfaces using internal classes).
At my jobs, I've typically found that various layers of an application hold their own interfaces which they publicly expose but internally implement. Each assembly's DI code registers the internal class to the public interface. This technique prevents outside assemblies from newing-up an instance of the implementation class.

However, some books I've been reading while building this solution have spoken against this. The main things that conflict with my previous thinking have to do with the DI composition root and where one should keep the interfaces for a given implementation. If I move dependency registration to a single, global composition root (as Mark Seemann suggests), then I can get away from each assembly having to run its own dependency registrations. However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).

As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it. As an example, here is a diagram he provided, and, for contrast, a diagram for how I would normally implement the same solution (okay, these aren't quite the same; kindly focus on the arrows and notice when implementation arrows cross assembly boundaries instead of composition arrows).
Martin Style
What I've normally seen
I immediately saw the advantage in Martin's diagram: it allows the lower assemblies to be swapped out for others, given that each has a class that implements the interface in the layer above it. However, I also saw this seemingly major disadvantage: if you want to swap out the assembly from an upper layer, you essentially "steal" away the interface that the lower layer is implementing.
After thinking about it for a little bit, I decided the best way to be fully decoupled in both directions would be to have the interfaces that specify the contract between layers in their own assemblies. Consider this updated diagram:
Is this nutty? Is it right on? To me, it seems like this solves the problem of interface segregation. It doesn't, however, solve the problem of not being able to hide the implementation class as internal. Is there anything reasonable that can be done there? Should I not be worried about this?
One solution that I'm toying around with in my head is to have each layer implement the proxy layer's interface twice: once with a public class and once with an internal class. The public class merely wraps/decorates the internal one. Some code might look like this:
namespace MechanismProxy // Simulates Mechanism Proxy Assembly
{
    public interface IMechanism
    {
        void DoStuff();
    }
}

namespace MechanismImpl // Simulates Mechanism Assembly
{
    using MechanismProxy;

    // This class would be registered to IMechanism in the DI container
    public class Mechanism : IMechanism
    {
        private readonly IMechanism _internalMechanism = new InternalMechanism();

        public void DoStuff()
        {
            _internalMechanism.DoStuff();
        }
    }

    internal class InternalMechanism : IMechanism
    {
        public void DoStuff()
        {
            // Do whatever
        }
    }
}
... of course, I'd still have to address some issues regarding constructor injection and passing the dependencies injected into the public class to the internal one. There's also the problem that outside assemblies could possibly new-up the public Mechanism... I would need a way to ensure only the DI container can do that... I suppose if I could figure that out, I wouldn't even need the internal version. Anyway, if anyone can help me understand how to overcome these architectural problems, it would be mightily appreciated.
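For what it's worth, a rough sketch of what that pass-through might look like with constructor injection; the IDataStore dependency here is made up purely for illustration:

namespace MechanismImpl // Simulates Mechanism Assembly
{
    using MechanismProxy;

    // Hypothetical dependency, purely for illustration.
    public interface IDataStore
    {
        void Save(string value);
    }

    // The public wrapper receives its dependencies via the constructor and
    // forwards them to the internal implementation it decorates.
    public class Mechanism : IMechanism
    {
        private readonly IMechanism _internalMechanism;

        public Mechanism(IDataStore dataStore)
        {
            _internalMechanism = new InternalMechanism(dataStore);
        }

        public void DoStuff()
        {
            _internalMechanism.DoStuff();
        }
    }

    internal class InternalMechanism : IMechanism
    {
        private readonly IDataStore _dataStore;

        public InternalMechanism(IDataStore dataStore)
        {
            _dataStore = dataStore;
        }

        public void DoStuff()
        {
            _dataStore.Save("did stuff"); // do whatever, using the injected dependency
        }
    }
}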

However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
Unless you are building a reusable library (that gets published on NuGet and gets used by other code bases you have no control over), there is typically no reason to make classes internal. Especially since you program to interfaces, the only place in the application that depends on those classes is the Composition Root.
Also note that, in case you move the abstractions to a different library, and let both the consuming and the implementing assembly depend on that assembly, those assemblies don't have to depend on each other. This means that it doesn't matter at all whether those classes are public or internal.
This level of separation (placing the interfaces in an assembly of their own), however, is hardly ever needed. In the end it's all about the required granularity during deployment and the size of the application.
As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it.
This is the Dependency Inversion Principle, which states:
In a direct application of dependency inversion, the abstracts are owned by the upper/policy layers
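As a rough illustration of that ownership (all names here are invented for the example), the policy assembly declares the abstraction it needs, and the lower-level assembly references the policy assembly in order to implement it:

// Policy assembly (upper layer) owns the abstraction it consumes.
namespace Policy
{
    public interface IMessageSender
    {
        void Send(string message);
    }

    public class NotificationService
    {
        private readonly IMessageSender _sender;

        public NotificationService(IMessageSender sender)
        {
            _sender = sender;
        }

        public void Notify(string text)
        {
            _sender.Send(text);
        }
    }
}

// Mechanism assembly (lower layer) references Policy and implements its interface.
namespace Mechanism
{
    using Policy;

    public class SmtpMessageSender : IMessageSender
    {
        public void Send(string message)
        {
            // send the message via SMTP
        }
    }
}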

This is a somewhat opinion based topic, but since you asked, I'll give mine.
Your focus on creating as many assemblies as possible to be as flexible as possible is very theoretical, and you have to weigh the practical value against the costs.
Don't forget that assemblies are only a container for compiled code. They become mostly relevant only when you look at the processes for developing, building and deploying/delivering them. So you have to ask a lot more questions before you can make a good decision on how exactly to split up the code into assemblies.
So here a few examples of questions I'd ask beforehand:
Does it make sense from your application domain to split up the assemblies in this way (e.g. will you really need to swap out assemblies)?
Will you have separate teams in place for developing those?
What will the scope be in terms of size (both LOC and team sizes)?
Is it required to protect the implementation from being available/visible? E.g. are those external interfaces or internal?
Do you really need to rely on assemblies as a mechanism to enforce your architectural separation? Or are there other, better measures (e.g. code reviews, code checkers, etc.)?
Will your calls really only happen between assemblies or will you need remote calls at some point?
Do you have to use private assemblies?
Will sealed classes help enforce your architecture instead?
For a very general view, leaving these additional factors out, I would side with Martin Fowler's diagram, because that is just the standard way to provide and use interfaces. If your answers to the questions indicate additional value in further splitting up or protecting the code, that may be fine, too. But you'd have to tell us more about your application domain and you'd have to be able to justify it well.
So in a way you are confronted with two old wisdoms:
Architecture tends to follow organizational setups.
It is very easy to over-engineer (over-complicate) architectures but it is very hard to design them as simple as possible. Simple is most of the time better.
When coming up with an architecture you want to consider those factors upfront; otherwise they'll come back to haunt you later in the form of technical debt.

However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
That doesn't sound like a downside. Implementation classes that are bound to abstractions in your Composition Root could well be used explicitly somewhere else, for other reasons. I don't see any benefit from hiding them.
I would need a way to ensure only the DI container can do that...
No, you don't.
Your confusion probably stems from the assumption that DI and a Composition Root necessarily imply a container behind the scenes.
In fact, the infrastructure can be completely "container-agnostic", in the sense that you still have your dependencies injected but you don't think about "how". A Composition Root that uses a container is one valid choice; a Composition Root where you manually compose dependencies is just as valid. In other words, the Composition Root should be the only place in your code that is aware of a DI container, if one is used at all. Your code is built against the idea of Dependency Inversion, not against a particular DI container.
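As a minimal sketch, reusing the IMechanism example from the question above, a Pure DI Composition Root is nothing more than ordinary code that composes the graph by hand:

namespace CompositionRoot
{
    using MechanismProxy;
    using MechanismImpl;

    public static class ApplicationRoot
    {
        // The only code that knows which concrete types are used.
        public static IMechanism ComposeMechanism()
        {
            // Manual composition (Pure DI) -- no container involved.
            // A container could be introduced right here later, without
            // the rest of the code base noticing.
            return new Mechanism();
        }
    }
}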
A short tutorial of mine can possibly shed some light here
http://www.wiktorzychla.com/2016/01/di-factories-and-composition-root.html

Related

Safely making wide-reaching change to IoC/DI config

Specific Question:
How can I unit test my DI configuration against my codebase to ensure that all the wiring-up still works after I make some change to the automated binding detection?
I've been contributing to a small-ish codebase (maybe ~10 pages? and 20-30 services/controllers) which uses Ninject for Ioc/DI.
I've discovered that in the Ninject Kernel it is configured to BindDefaultInterface. That means that if you ask it for an IFoo, it will go looking for a Foo class.
But it does that based on the string pattern, not the C# inheritance. That means that MyFoo : IFoo won't bind, and you could also get other weird "coincidental" bindings, maybe?
It all works so far, because everyone happens to have called their WhateverService interface IWhateverService.
But this seems enormously brittle and unintuitive to me. And it specifically broke when I wanted to rename my live FilePathProvider : IFilePathProvider to be AppSettingsBasedFilePathProvider (as opposed to the RootFolderFilePathProvider, or the NCrunchFilePathProvider which get used in Test) on the basis of that telling you what it did :)
There are a couple of alternative configurations:
BindToDefaultInterfaces (note plural) which will bind MyOtherBar to IMyOtherBar, IOtherBar & IBar (I think)
BindToSingleInterface works if every class implements exactly 1 interface.
BindToAllInterfaces does exactly what it sounds like.
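For reference, the conventions-based setup in question looks roughly like this (sketching from memory with Ninject.Extensions.Conventions; exact overloads may differ between Ninject versions):

using Ninject;
using Ninject.Extensions.Conventions;

public static class NinjectConfig
{
    public static IKernel CreateKernel()
    {
        var kernel = new StandardKernel();

        // Current setup: Foo binds to IFoo purely by naming convention.
        kernel.Bind(x => x.FromThisAssembly()
                          .SelectAllClasses()
                          .BindDefaultInterface());

        // One alternative: bind each class to every interface it implements,
        // so AppSettingsBasedFilePathProvider would still resolve as IFilePathProvider.
        // kernel.Bind(x => x.FromThisAssembly()
        //                   .SelectAllClasses()
        //                   .BindAllInterfaces());

        return kernel;
    }
}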
I'd like to change to those, but I'm concerned about introducing obscure bugs whereby some class somewhere stops binding in the way that it should, but I don't notice.
Is there any way to test this / make this change with a reasonable amount of safety (i.e. more than "do it and hope", anyway!) without just trying to work out how to exercise EVERY possible component?
So, I managed to solve this...
My solution is not without its drawbacks, but it does fundamentally achieve the safety I wanted.
Summary
Roughly speaking there are 2 aspects:
Programmatically Test that every binding that the DI Kernel knows about can be resolved cleanly.
Programmatically Test that every relevant Interface used in your codebase can be resolved cleanly.
Both take roughly the same path:
Refactor your DI configuration code, so that the core portion of it that defines bindings for the meat of your app can be run in isolation from the rest of the Startup Code.
At the start of your test, invoke the above DI config code, so that you have a replica of the kernel object that your site uses, whose bindings you can test.
Perform some amount of Reflection to generate a list of the relevant Type objects which the kernel should be able to provide.
(Optional) Filter that list to ignore some classes and interfaces that you know your tests don't need to concern themselves with (e.g. your code doesn't need to worry about whether the Kernel knows how to bootstrap itself, so it can ignore any Bindings it has in the namespace belonging to your DI framework).
Then loop over the Interface type objects you have left and ensure that kernel.Get(interfaceType) runs without an Exception for each one.
Read on for more of the Gory details...
Validating all defined Kernel Bindings
This is going to be specific to the DI framework in question, but for Ninject it's pretty hairy...
It would be much nicer if a Ninject kernel had a built-in way to expose its collection of Bindings, but alas it doesn't. But the bindings collection is available privately, so if you perform the correct Reflection incantations you can get hold of them. You then have to do some more Reflection to convert its Binding objects into {InterfaceType : ConcreteType} pairs.
I'll post the minutiae of how to extract these objects from Ninject separately, since that is orthogonal to the question of how to set up tests for DI config in general. {#Placeholder for a link to that#}
Other DI Frameworks may make this easier by providing these collections more publicly (or even by providing some sort of Validate() method directly.)
Once you have a list of the interfaces that the kernel thinks it can bind, just loop over them and test out resolving each one.
Details of this will vary by Language and Testing Framework, but I use C# and FluentAssertions, so I assigned Action resolutionAction = (() => testKernel.Get(interfaceType)) and asserted resolutionAction.ShouldNotThrow() or something very similar.
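In sketch form (NUnit plus an older FluentAssertions where ShouldNotThrow() is an extension on Action; CreateConfiguredKernel and GetBoundInterfaceTypes stand in for the refactored config and the reflection helper described above):

using System;
using FluentAssertions;
using Ninject;
using NUnit.Framework;

[TestFixture]
public class DiConfigurationTests
{
    [Test]
    public void Every_binding_the_kernel_knows_about_resolves_cleanly()
    {
        IKernel testKernel = CreateConfiguredKernel();   // placeholder: your refactored DI config

        foreach (Type interfaceType in GetBoundInterfaceTypes(testKernel))   // placeholder: reflection helper
        {
            Action resolutionAction = () => testKernel.Get(interfaceType);

            resolutionAction.ShouldNotThrow(
                "because {0} should be resolvable by the kernel", interfaceType.Name);
        }
    }
}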
Validating all relevant interfaces in your codebase
The first half is all very well, but all it tells you is that the Bindings that your DI has picked up are well-defined. It doesn't tell you whether any Bindings are entirely missing.
You can cover that case by collecting all of the interesting Assemblies in your codebase:
Assembly.GetAssembly(typeof(Main.SampleClassFromMainAssembly))
Assembly.GetAssembly(typeof(Repos.SampleRepoClass))
Assembly.GetAssembly(typeof(Web.SampleController))
Assembly.GetAssembly(typeof(Other.SampleClassFromAnotherSeparateAssemblyInUse))
Then for each Assembly reflect over its classes to find the public Interfaces that it exposes, and ensure that each of those can be resolved by the kernel.
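In rough code (inside the same kind of test fixture, with System.Linq and System.Reflection in scope; IsKnownException stands in for an explicit ignore list):

var assembliesToScan = new[]
{
    Assembly.GetAssembly(typeof(Main.SampleClassFromMainAssembly)),
    Assembly.GetAssembly(typeof(Repos.SampleRepoClass)),
    Assembly.GetAssembly(typeof(Web.SampleController)),
    Assembly.GetAssembly(typeof(Other.SampleClassFromAnotherSeparateAssemblyInUse)),
};

var interfaceTypes = assembliesToScan
    .SelectMany(assembly => assembly.GetExportedTypes())
    .Where(type => type.IsInterface)
    .Where(type => !IsKnownException(type));   // placeholder: explicit ignore list

foreach (Type interfaceType in interfaceTypes)
{
    Action resolve = () => testKernel.Get(interfaceType);
    resolve.ShouldNotThrow("because {0} appears to have no binding", interfaceType.Name);
}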
You've got a couple of issues with this approach:
What if you miss an Assembly, or someone adds a new Assembly, but doesn't add it to the tests?
This isn't directly a problem, but it would mean your tests don't protect you as well as you think. I put in a safety-net test, to assert that every Assembly that the Ninject Kernel knows about should be in this list of Assemblies to be tested. If someone adds a new Assembly, it will likely contain something that is provided by the kernel, so this safety-net test will fail, bringing the developer's attention to this test class.
What about classes that AREN'T provided by the kernel?
I found that mainly these classes were not provided for a clear reason - maybe they're actually provided by Factory classes, or maybe the class is badly used and is manually constructed. Either way these classes were a minority and could be listed as explicit exceptions ("loop over all classes; if classname = foo then ignore it.") relatively painlessly.
Overall, this is moderately hairy, and it is more fragile than I'd generally like tests to be.
But it works.
It might be something that you write before making the change, solely so that you can run it once before your change, once after the change to check that nothing's broken and then delete it?

Should I avoid using Dependency Injection and IoC?

In my mid-size project I used static classes for repositories, services etc. and it actually worked very well, even if most programmers would expect the opposite. My codebase was very compact, clean and easy to understand. Now I tried to rewrite everything and use IoC (Inversion of Control) and I was absolutely disappointed. I have to manually initialize dozens of dependencies in every class, controller etc., add more projects for interfaces and so on. I really don't see any benefits in my project and it seems that it causes more problems than it solves. I found the following drawbacks in IoC/DI:
much bigger codesize
ravioli-code instead of spaghetti-code
slower performance, need to initialize all dependencies in constructor even if the method I want to call has only one dependency
harder to understand when no IDE is used
some errors are pushed to run-time
adding additional dependency (DI framework itself)
new staff have to learn DI first in order to work with it
a lot of boilerplate code, which is bad for creative people (for example copy instances from constructor to properties...)
We do not test the entire codebase, but only certain methods, and we use a real database. So, should Dependency Injection be avoided when no mocking is required for testing?
The majority of your concerns seem to boil down to either misuse or misunderstanding.
much bigger codesize
This is usually a result of properly respecting both the Single Responsibility Principle and the Interface Segregation Principle. Is it drastically bigger? I suspect not as large as you claim. However, what it is doing is most likely boiling down classes to specific functionality, rather than having "catch-all" classes that do anything and everything. In most cases this is a sign of healthy separation of concerns, not an issue.
ravioli-code instead of spaghetti-code
Once again, this is most likely causing you to think in stacks instead of hard-to-see dependencies. I think this is a great benefit since it leads to proper abstraction and encapsulation.
slower performance
Just use a fast container. My favorites are SimpleInjector and LightInject.
need to initialize all dependencies in constructor even if the method I want to call has only one dependency
Once again, this is a sign that you are violating the Single Responsibility Principle. This is a good thing because it is forcing you to logically think through your architecture rather than adding willy-nilly.
harder to understand when no IDE is used
some errors are pushed to run-time
If you are STILL not using an IDE, shame on you. There's no good argument for it with modern machines. In addition, some containers (SimpleInjector) will validate on first run if you so choose. You can easily detect this with a simple unit test.
adding additional dependency (DI framework itself)
You have to pick and choose your battles. If the cost of learning a new framework is less than the cost of maintaining spaghetti code (and I suspect it will be), then the cost is justified.
new staff have to learn DI first in order to work with it
If we shy away from new patterns, we never grow. I think of this as an opportunity to enrich and grow your team, not a way to hurt them. In addition, the tradeoff is learning the spaghetti code which might be far more difficult than picking up an industry-wide pattern.
a lot of boilerplate code which is bad for creative people (for example copy instances from constructor to properties...)
This is plain wrong. Mandatory dependencies should always be passed in via the constructor. Only optional dependencies should be set via properties, and that should only be done in very specific circumstances since oftentimes it is violating the Single Responsibility Principle.
We do not test the entire codebase, but only certain methods, and we use a real database. So, should Dependency Injection be avoided when no mocking is required for testing?
I think this might be the biggest misconception of all. Dependency Injection isn't JUST for making testing easier. It is so you can glance at the signature of a class constructor and IMMEDIATELY know what is required to make that class tick. This is impossible with static classes since classes can call both up and down the stack whenever they like without rhyme or reason. Your goal should be to add consistency, clarity, and distinction to your code. This is the single biggest reason to use DI and it is why I highly recommend you revisit it.
Although IoC/DI is not some silver bullet that works in all cases, it is possible that you didn't apply it correctly. The set of principles behind Dependency Injection take time to master, or at least, it sure did for me. When applied right, it can bring (among others) the following benefits:
Improved testability
Improved flexibility
Improved maintainability
Improved parallel development
From your question, I can already extract some things that might have gone wrong in your case:
I have to manually initialize dozens of dependencies in every class
This implies that each class you create is responsible of creating the dependencies it requires. This is an anti-pattern known as Control Freak. A class should not new up its dependencies itself. You might even have applied the Service Locator anti-pattern where your class requests its dependencies by calling the container (or an abstraction that represents the container) to get a particular dependency. A class should just define the dependencies it requires as constructor arguments.
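To illustrate the difference (the types are invented for the example):

public interface IOrderRepository { }
public class SqlOrderRepository : IOrderRepository { }

// Control Freak: the class news up its own dependency.
public class ControlFreakOrderService
{
    private readonly SqlOrderRepository _repository = new SqlOrderRepository();
}

// Constructor injection: the dependency is declared as a constructor argument
// and supplied by the Composition Root.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }
}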
dozens of dependencies
This statement implies that you are violating the Single Responsibility Principle. This is actually not coupled to IoC/DI; your old code probably already violated the Single Responsibility Principle, causing it to become hard to understand and maintain for other developers. It's often hard for the original author to understand why others have a hard time maintaining the code, since the thing you wrote often fits nicely in your head. Often the violation of the SRP will cause others to have trouble understanding and maintaining code. And testing classes that violate the SRP is often even harder. A class should have half a dozen dependencies at most.
add more projects for interfaces and so on
This implies that you are violating the Reused Abstraction Principle. In general, the majority of components/classes in your application should be covered by a dozen abstractions. For instance, all classes that implement some use case probably deserve one single (generic) abstraction. Classes that implement queries also deserve one abstraction. For the systems that I write, 80% to 95% of my components (classes that contain the application's behavior) are covered by 5 to 12 (mostly generic) abstractions. Most of the time you don't need to create a new project solely for the interfaces.
Most of the time I place those interfaces in the root of the same project.
much bigger codesize
The amount of code you write will initially not be very different. The practice of Dependency Injection, however, only works great when applying SOLID as well, and SOLID promotes small, focused classes: classes with one single responsibility. This means that you will have many small classes that are easy to understand and easy to compose into flexible systems. And don't forget: we shouldn't strive to write less code, but rather more maintainable code.
However, with a good SOLID design and the right abstractions in place, I found I actually had to write much less code than before. For instance, applying certain cross-cutting concerns (like logging, audit trailing, authorization, etc.) can be done by writing just a few lines of code in the infrastructure layer of the application, instead of having it spread out throughout the complete application. It even led me to be able to do things that weren't feasible before, because they would have forced me to make sweeping changes throughout the entire code base, which was so time consuming that management didn't allow me to do so.
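For example, a generic decorator along those lines might be sketched like this; the ICommandHandler<TCommand> and ILogger abstractions are invented here, not something from the original post. The Composition Root wraps every handler with it once, and logging is applied application-wide:

public interface ILogger
{
    void Log(string message);
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// One small infrastructure class adds logging around every handler in the system.
public class LoggingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _decoratee;
    private readonly ILogger _logger;

    public LoggingCommandHandlerDecorator(ICommandHandler<TCommand> decoratee, ILogger logger)
    {
        _decoratee = decoratee;
        _logger = logger;
    }

    public void Handle(TCommand command)
    {
        _logger.Log("Handling " + typeof(TCommand).Name);
        _decoratee.Handle(command);
    }
}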
ravioli-code instead of spaghetti-code
harder to understand when no IDE is used
This is kind of true. Dependency Injection promotes classes becoming decoupled from one another. This can sometimes make it harder to browse through a code base, since a class usually depends on an abstraction instead of a concrete class. In the past I have found that the flexibility DI gives me far outweighs the cost of finding the implementation. With Visual Studio 2015 I can simply do CTRL + F12 to find the implementations of an interface. If there is just one implementation, Visual Studio will jump right to that implementation.
slower performance
This is not true. The performance doesn't have to be any different than working with a code base of only static method calls. You, however, chose to give your classes a Transient lifestyle, which means you new up instances all over the place. In my last applications I created all my classes just once per application, which gives roughly the same performance as only having static method calls, but with the benefit of the application being very flexible and maintainable. But note that even if you decide to new up complete graphs of objects for each (web) request, the performance cost will most likely be orders of magnitude lower than any I/O (database, file system and web service calls) that you perform during that request, even with the slowest DI containers.
some errors are pushed to run-time
adding additional dependency (DI framework itself)
These issues both imply the usage of a DI library. DI libraries do object composition at runtime. A DI library, however, is not a required tool when practicing Dependency Injection. Small applications can benefit from using Dependency Injection without a tool; a practice called Pure DI. Your application might not benefit from using a DI container, but most applications actually benefit from using Dependency Injection (when used correctly) as a practice. Again: tools are optional, writing maintainable code isn't.
But even if you use a DI library, there are libraries that have tools built in that allow you to verify and diagnose your configuration. They won't give you compile-time support, but they allow you to run this analysis either when the application starts up or in a unit test. This prevents you from doing a full regression of the complete application just to verify whether your container is wired correctly. My advice is to pick a DI container that helps you detect these configuration errors.
new staff have to learn DI first in order to work with it
This is kind of true, but Dependency Injection itself isn't actually hard to learn. What is actually hard to learn is applying the SOLID principles correctly, and you need to learn that anyway when you want to write applications that need to be maintained by more than one developer for a considerable period of time. I'd rather invest in teaching the developers on my team to write SOLID code than just letting them crank out code; that will surely cause a maintenance hell later on.
a lot of boilerplate code
There is some boilerplate code when we look at code written in C# 6, but this isn't actually that bad, especially when you consider the advantages it gives. And future versions of C# will remove the boilerplate that is mainly caused by having to define constructors that take in arguments that are null-checked and assigned to private variables. C# 7 or 8 will surely fix this when record types and non-nullable reference types are introduced.
which is bad for creative people
I'm sorry, but this argument is plain bullshit. I've seen this argument used over and over again as an excuse to write bad code by developers who didn't want to learn about design patterns and software principles and practices. Being creative is no excuse for writing code that no one else can understand or code that is impossible to test. We need to apply accepted patterns and practices and within that boundary there is enough room to be creative, while writing good code. Writing code is not an art; it’s a craft.
Like I said, DI is not appropriate in all cases, and the practices around it take time to master. I can advise you to read the book Dependency Injection in .NET by Mark Seemann; it will give many answers and will give you a good sense how and when to apply it, and when not.
Be warned: I hate IoC.
There are many great answers here which are comforting. The main benefits according to Steven (very strong answer) are:
Improved testability
Improved flexibility
Improved maintainability
Improved scalability
My experiences are very different, though; here they are for some balance:
(Bonus) Stupid Repository Pattern
Too often, this is included along with IoC. The repository pattern should only be used to access external data, and where interchangeability is a core expectation.
When you use this with Entity Framework, you disable all the power of Entity Framework; the same happens with Service Layers.
Eg. Calling:
var employees = peopleService.GetPeople(false, false, true, true); //Terrible
It should be:
var employees = db.People.ActiveOnly().ToViewModel();
In this case using extension methods.
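Those extension methods might be sketched roughly like this; Person, IsActive and PersonViewModel are assumptions about the example's model:

using System.Collections.Generic;
using System.Linq;

public static class PersonQueryExtensions
{
    // Stays an IQueryable so Entity Framework can still translate the filter to SQL.
    public static IQueryable<Person> ActiveOnly(this IQueryable<Person> people)
    {
        return people.Where(p => p.IsActive);
    }

    public static List<PersonViewModel> ToViewModel(this IQueryable<Person> people)
    {
        return people
            .Select(p => new PersonViewModel { Id = p.Id, Name = p.Name })
            .ToList();
    }
}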
Who needs flexibility?
If you have no plans to change service implementations, you don't need it. If you think you'll have more than one implementation in the future, perhaps add IoC then, and only for that part.
But "Testability"!
Entity Framework (and probably other ORMs too), allow you to change the connection-string to point to an in-memory database. Granted, that's only available starting EF7. However, it can simply be a new (proper) test database in a staging environment.
Do you have other special test resources and service points? In this day and age, they're probably different WebService URI endpoints, which can also be configured in App.Config / Web.Config.
Automated Tests make your code maintainable
TDD - If it's a Web Application, use Jasmine or Selenium and have automated behaviour tests. This tests everything all the way to the user. It's an investment over time, starting by covering critical features and functions.
DevOps/SysOps - Maintain scripts for provisioning your whole environment (this is also best practice), spin up a staging environment and run all the tests. You can also clone your production environment and run your tests there. Don't make "maintainable" and "testable" your excuse for choosing IoC. Start with those requirements and find the best ways to meet those requirements.
Scalability - in what way?
(I probably need to read the book)
For coder scalability, Distributed Code Version Control, is the norm (although I hate merging).
For human resource scalability, you shouldn't be wasting days designing extra abstract layers for your project.
For production concurrent user scalability, you should be building, testing, then improving.
For server throughput scalability, you need to think a lot higher-level than IoC. Are you going to run a server on the customer LAN? Can you replicate your data? Are you replicating at the database level or application level? Is offline access important while mobile? These are substantial architecture questions, where IoC is rarely the answer.
Try F12
If you're using an IDE (which you should be doing), such as Visual Studio Community Edition, then you'll know how handy F12 can be, to navigate around code.
With IoC you'll be taken to the Interface, and then you'll need to find all references using a particular interface. Only one extra step, but for a tool that's used so much, it frustrates me.
Steven is on the ball
With Visual Studio 2015 I can simply do CTRL + F12 to find the
implementations of an interface.
Yes, but then you have to trawl through a list of both usages as well as the declaration. (Actually I think in the latest VS the declaration is listed separately, but it's still an extra mouse click, taking your hands away from the keyboard.) And I should say this is a limitation of Visual Studio, not being able to take you directly to an interface's only implementation.
There are many 'textbook' arguments in favor of using IoC, but in my personal experience, the gains are/were:
Possibility to test only parts of the project, and mock some other parts. For example, if you have a component returning configuration from DB, it's easy to mock it so that your test can work without a real DB. With static classes this is not possible.
Better visibility and control of dependencies. With static classes it's very easy to add some dependencies without even noticing, which can create problems later. With IoC this is more explicit and visible.
More explicit initialization order. With static classes this can be often a black box, and there can be latent problems due to circular usage.
The only inconvenience for me was that by placing everything behind interfaces it's not possible to navigate directly to the implementation from the usage (F12).
However, it is the developers of a project who can judge best the pros and cons in the particular case.
Was there a reason why you didn't choose to use an IOC Library (StructureMap, Ninject, Autofac, etc)?
Using any of these would have made your life much easier.
Although David L has already made an excellent set of commentaries on your points, I'll add my own as well.
Much bigger codesize
I am not sure how you ended up with a larger codebase; the typical setup for an IOC library is pretty small, and since you are defining your invariants (dependencies) in the class constructors, you are also removing some code (i.e. the "new xyz()" stuff) that you don't need any more.
Ravioli-code instead of spaghetti-code
I happen to quite like ravioli :)
Slower performance, need to initialize all dependencies in constructor even if the method I want to call has only one dependency
If you are doing this then you are not really using Dependency Injection at all. You should be receiving ready-made, fully loaded object graphs via the dependency arguments declared in the constructor parameters of the class itself - not creating them in the constructor!
Most modern IOC libraries are ridiculously fast, and will never, ever be a performance problem.
Here's a good video that proves the point.
Harder to understand when no IDE is used
That's true, but it also means you can take the opportunity to think in terms of abstractions. So for example, you can look at a piece of code
public class Something
{
    readonly IFrobber _frobber;

    public Something(IFrobber frobber)
    {
        _frobber = frobber;
    }

    public void LetsFrobSomething(Thing theThing)
    {
        _frobber.Frob(theThing);
    }
}
When you are looking at this code and trying to figure out if it works, or if it is the root cause of a problem, you can ignore the actual IFrobber implementation; it just represents the abstract capability to Frob something, and you don't need to mentally carry along how any particular Frobber might do its work. You can focus on making sure that this class does what it's supposed to - namely, delegating some work to a Frobber of some kind.
Note also that you don't even need to use interfaces here; you can go ahead and inject concrete implementations as well. However, that tends to violate the Dependency Inversion principle (which is only tangentially related to the DI we are talking about here) because it forces the class to depend on a concretion as opposed to an abstraction.
Some errors are pushed to run-time
No more and no less than they would be when manually constructing graphs in the constructor.
Adding additional dependency (DI framework itself)
That is also true, but most IOC libraries are pretty small and unobtrusive, and at some point you have to decide if the tradeoff of having a slightly larger production artifact is worth it (it really is)
New staff have to learn DI first in order to work with it
That isn't really any different than would be the case with any new technology :) Learning to use an IOC library tends to open the mind to other possibilities like TDD, the SOLID principles and so forth, which is never a bad thing!
A lot of boilerplate code, which is bad for creative people (for example copy instances from constructor to properties...)
I don't understand how you might end up with much boilerplate code; I wouldn't count storing the given dependencies in private readonly members as boilerplate worth talking about, bearing in mind that if you have more than 3 or 4 dependencies per class you are likely to be in violation of the SRP and should rethink your design.
Finally, if you are not convinced by any of the arguments put forth here, I would still recommend you read Mark Seemann's "Dependency Injection in .NET" (or indeed anything else he has to say on DI, which you can find on his blog).
I promise you will learn some useful things and I can tell you, it changed the way I write software for the better.
If you have to initialise dependencies manually in the code, you're doing something wrong. The general pattern for IoC is constructor injection or, possibly, property injection. A class or controller shouldn't know about the DI container at all.
Generally, all you have to do is:
Configure the container, like Interface = Class in Singleton scope
Use it, like Controller(Interface interface) {}
Benefit from controlling all dependencies in one place
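A rough sketch of those three steps; the container API shown is SimpleInjector-style purely as an example, and IOrderRepository, SqlOrderRepository and OrdersController are invented names:

using SimpleInjector;

public interface IOrderRepository { }
public class SqlOrderRepository : IOrderRepository { }

public static class CompositionRoot
{
    // 1. Configure the container in one place.
    public static Container Configure()
    {
        var container = new Container();
        container.Register<IOrderRepository, SqlOrderRepository>(Lifestyle.Singleton);
        container.Verify();
        return container;
    }
}

// 2. Consumers just declare what they need; they never see the container.
public class OrdersController
{
    private readonly IOrderRepository _orders;

    public OrdersController(IOrderRepository orders)
    {
        _orders = orders;
    }
}

// 3. All dependencies are now controlled in one place: the Configure method above.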
I don't see any boilerplate code or slower performance or anything else you described. I can't really imagine how to write a more or less complex app without it.
But generally, you need to decide what is more important. To please "creative people" or build maintainable and robust app.
Btw, to create a property or field from a constructor parameter you can use Alt+Enter in R# and it does all the work for you.

Is this a bad use of a static property?

If I have a class with a service that I want all derived classes to have access to (say a security object, or a repository) then I might do something like this:
public abstract class A
{
    static ISecurity _security;

    public ISecurity Security { get { return _security; } }

    public static void SetSecurity(ISecurity security) { _security = security; }
}

public class Bootstrapper
{
    public Bootstrapper()
    {
        A.SetSecurity(new Security());
    }
}
It seems like lately I see static properties being shunned everywhere as something to absolutely avoid. To me, this seems cleaner than adding an ISecurity parameter to the constructor of every single derived class I make. Given all I've read lately though, I'm left wondering:
Is this an acceptable application of dependency injection, or am I violating some major design principle that could come back to haunt me later? I am not doing unit tests at this point, so maybe if I were then I would suddenly realize the answer to my question. To be honest, though, I probably won't change my design over that, but if there is some other important reason why I should change it then I very well might.
Edit: I made a couple stupid mistakes the first time I wrote that code... it's fixed now. Just thought I'd point that out in case anyone happened to notice :)
Edit: SWeko makes a good point about all deriving classes having to use the same implementation. In cases where I've used this design, the service is always a singleton so it effectively enforces an already existing requirement. Naturally, this would be a bad design if that weren't the case.
This design could be problematic for a couple of reasons.
You already mention unit testing, which is rather important. Such a static dependency can make testing much harder. Whenever the fake ISecurity has to be anything other than a Null Object implementation, you will find yourself having to remove the fake implementation on test tear-down. Removing it during tear-down prevents other tests from being influenced when you forget to remove that fake object. A tear-down makes your test more complicated. Not that much more complicated, but it adds up when many tests have tear-down code, and you'll have a hard time finding a bug in your test suite when one test forgets to run its tear-down. You will also have to make sure the registered ISecurity fake object is thread-safe and won't influence other tests that might run in parallel (test frameworks such as MSTest run tests in parallel for obvious performance reasons).
Another possible problem with injecting the dependency as a static is that you force this ISecurity dependency to be a singleton (and probably to be thread-safe). This disallows, for instance, applying any interceptors or decorators that have a lifestyle other than singleton.
Another problem is that removing this dependency from the constructor disables any analysis or diagnostics that could be done by the DI framework on your behalf. Since you manually set this dependency, the framework has no knowledge about this dependency. In a sense you move the responsibility of managing dependencies back to the application logic, instead of allowing the Composition Root to be in control over the way dependencies are wired together. Now the application has to know that ISecurity is in fact thread-safe. This is a responsibility that in general belongs to the Composition Root.
The fact that you want to store this dependency in a base type might even be an indication of a violation of a general design principle: The Single Responsibility Principle (SRP). It has some resemblance with a design mistake I made myself in the past. I had a set of business operations that all inherited from a base class. This base class implemented all sorts of behavior, such as transaction management, logging, audit trailing, adding fault tolerance, and.... adding security checks. This base class became an unmanageable God Object. It was unmanageable, simply because it had too many responsibilities; it violated the SRP. Here's my story if you want to know more about this.
So instead of having this security concern (it's probably a cross-cutting concern) implemented in a base class, try removing the base class altogether and use a decorator to add security to those classes. You can wrap each class with one or more decorators and each decorator can handle one specific concern. This makes each decorator class easy to follow because it will follow the SRP.
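A sketch of that idea; ICommandHandler<TCommand> and IUserContext are made-up abstractions used only to show the shape of such a decorator:

using System;
using System.Security;

public interface IUserContext
{
    bool IsAuthorizedFor(Type commandType);
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// The security check lives in a decorator instead of a base class,
// so each class keeps a single responsibility.
public class SecurityCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _decoratee;
    private readonly IUserContext _userContext;

    public SecurityCommandHandlerDecorator(
        ICommandHandler<TCommand> decoratee, IUserContext userContext)
    {
        _decoratee = decoratee;
        _userContext = userContext;
    }

    public void Handle(TCommand command)
    {
        if (!_userContext.IsAuthorizedFor(typeof(TCommand)))
        {
            throw new SecurityException("Not allowed to execute " + typeof(TCommand).Name);
        }

        _decoratee.Handle(command);
    }
}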
The problem is that this is not really dependency injection, even if it is encapsulated in the definition of the class. Admittedly,
static Security _security;
would be worse than ISecurity, but still, the instances of A do not get to use whatever security the caller passed to them; they have to depend on the global setting of a static property.
What I'm trying to say is that your usage is not that different from:
public static class Globals
{
    public static ISecurity Security { get; set; }
}

Does Dependency Injection (DI) rely on Interfaces?

This may seem obvious to most people, but I'm just trying to confirm that Dependency Injection (DI) relies on the use of Interfaces.
More specifically, in the case of a class which has a certain Interface as a parameter in its constructor or a certain Interface defined as a property (aka. Setter), the DI framework can hand over an instance of a concrete class to satisfy the needs of that Interface in that class. (Apologies if this description is not clear. I'm having trouble describing this properly because the terminology/concepts are still somewhat new to me.)
The reason I ask is that I currently have a class that has a dependency of sorts. Not so much an object dependency, but a URL. The class looks like this [C#]:
using System.Web.Services.Protocols;

public partial class SomeLibraryService : SoapHttpClientProtocol
{
    public SomeLibraryService()
    {
        this.Url = "http://MyDomainName.com:8080/library-service/jse";
    }
}
The SoapHttpClientProtocol class has a Public property called Url (which is a plain old "string") and the constructor here initializes it to a hard-coded value.
Could I possibly use a DI framework to inject a different value at construction? I'm thinking not since this.Url isn't any sort of Interface; it's a String.
[Incidentally, the code above was "auto-generated by wsdl", according to the comments in the code I'm working with. So I don't particularly want to change this code, although I don't see myself re-generating it either. So maybe changing this code is fine.]
I could see myself making an alternate constructor that takes a string as a parameter and initializes this.Url that way, but I'm not sure that's the correct approach regarding keeping loosely coupled separation of concerns. (SoC)
Any advice for this situation?
DI really just means a class won't construct its external dependencies and will not manage the lifetime of those dependencies. Dependencies can be injected either via the constructor or via a method parameter. Interfaces or abstract types are common to clarify the contract the consumer expects from its dependency; however, simple types can be injected as well in some cases.
For example, a class in a library might call HttpContext.Current internally, which makes arbitrary assumptions about the application the code will be hosted in. A DI version of the library method would expect an HttpContext instance to be injected via a parameter, etc.
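For instance (a sketch; the helper and method names are invented, and HttpContextBase is just the abstract wrapper that makes the injected version easy to fake in tests):

using System.Web;

public static class UserHelper
{
    // Before: reaches out to ambient state and assumes an ASP.NET host.
    public static string GetCurrentUserNameImplicitly()
    {
        return HttpContext.Current.User.Identity.Name;
    }

    // After: the context is injected, so the caller (or a test) decides where it comes from.
    public static string GetCurrentUserName(HttpContextBase context)
    {
        return context.User.Identity.Name;
    }
}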
It's not required to use interfaces -- you could use concrete types or abstract base classes. But many of the advantages of DI (such as being able to change the implementation of a dependency) come when using interfaces.
Castle Windsor (the DI framework I know best), allows you to map objects in the IoC container to Interfaces, or to just names, which would work in your case.
Dependency Injection is a way of organizing your code. Maybe some of your confusion comes from the fact that there is not one official way to do it. It can be achieved using "regular" C# code, or by using a framework like Castle Windsor. Sometimes (often?) this involves using interfaces. No matter how it is achieved, the big-picture goal of DI is usually to make your code easier to test and easier to modify later on.
If you were to inject the URL in your example via a constructor, that could be considered "manual" DI. The Wikipedia article on DI has more examples of manual vs framework DI.
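Applied to the generated proxy above, that manual injection could be as small as an extra constructor in a second file (the class is partial, so the wsdl-generated file stays untouched); the URL shown is just the one from the original snippet:

// SomeLibraryService.custom.cs -- lives alongside the generated file.
public partial class SomeLibraryService
{
    public SomeLibraryService(string url)
    {
        this.Url = url;
    }
}

// The caller (or a Composition Root) now chooses the endpoint:
// var service = new SomeLibraryService("http://MyDomainName.com:8080/library-service/jse");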
I would like to answer with a focus on using interfaces in .NET applications. Polymorphism in .NET can be achieved through virtual or abstract methods, or interfaces.
In all cases, there is a method signature with no implementation at all or an implementation that can be overridden.
The 'contract' of a method (or even a property) is defined, but how the method is implemented (the logical guts of it) can differ at runtime, determined by which subclass is instantiated and passed in to the method or constructor, or set on a property (the act of 'injection').
The official .NET type design guidelines advocate using abstract base classes over interfaces since they have better options for evolving them after shipping, can include convenience overloads and are better able to self-document and communicate correct usage to implementers.
However, care must be taken not to add any logic. The temptation to do so has burned people in the past so many people use interfaces - many other people use interfaces simply because that's what the programmers sitting around them do.
It's also interesting to point out that while DI itself is rarely over-used, using a framework to perform the injection is quite often over-used, to the detriment of increased complexity; a chain reaction can take place where more and more types are needed in the container even though they are never 'switched'.
IoC frameworks should be used sparingly, usually only when you need to swap out objects at runtime, according to the environment or configuration. This usually means switching major component "seams" in the application such as the repository objects used to abstract your data layer.
For me, the real power of an IoC framework is to switch implementation in places where you have no control over creation. For example, in ASP.NET MVC, the creation of the controller class is performed by the ASP.NET framework, so injecting anything is impossible. The ASP.NET framework has some hooks that IoC frameworks can use to 'get in-between' the creation process and perform their magic.
Luke

Interfaces separated from the class implementation in separate projects? [closed]

We work on a middle-size project (3 developers over more than 6 months) and need to make the following decision: we'd like to have interfaces separated from the concrete implementation. The first step is to store the interface in a separate file.
We'd like to go further and separate the data even more: We'd like to have one project (CSPROJ) with interface in one .CS file plus another .CS file with help classes (like some public classes used within this interface, some enums etc.). Then, we'd like to have another project (CSPROJ) with a factory pattern, concrete interface implementation and other "worker" classes.
Any class which wants to create an object implementing this interface must include the first project which contains the interfaces and public classes, not the implementation itself.
This solution has one big disadvantage: it multiplies the number of assemblies by 2, because you would have, for every "normal" project, one project with interfaces and one with implementations.
What would you recommend? Do you think it's a good idea to place all interfaces in one separate project rather than one interface in its own project?
I would distinguish between interfaces like this:
Standalone interfaces whose purpose you can describe without talking about the rest of your project. Put these in a single dedicated "interface assembly", which is probably referenced by all other assemblies in your project. Typical examples: ILogger, IFileSystem, IServiceLocator.
Class coupled interfaces which really only make sense in the context of your project's classes. Put these in the same assembly as the classes they are coupled to.
An example: suppose your domain model has a Banana class. If you retrieve bananas through a IBananaRepository interface, then that interface is tightly coupled to bananas. It is impossible to implement or use the interface without knowing something about bananas. Therefore it is only logical that the interface resides in the same assembly as Banana.
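In code-layout terms, something like this (illustrative only):

// Domain assembly
namespace Domain
{
    public class Banana
    {
        public int Id { get; set; }
    }

    // Coupled to Banana, so it lives alongside it.
    public interface IBananaRepository
    {
        Banana GetById(int id);
    }
}

// Data-access assembly references Domain and supplies the implementation.
namespace DataAccess
{
    using Domain;

    public class SqlBananaRepository : IBananaRepository
    {
        public Banana GetById(int id)
        {
            // fetch from the database
            return new Banana { Id = id };
        }
    }
}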
The previous example has a technical coupling, but the coupling might just be a logical one. For example, an IFecesThrowingTarget interface may only make sense as a collaborator of the Monkey class even if the interface declaration has no technical link to Monkey.
My answer does depend on the notion that it's okay to have some coupling to classes. Hiding everything behind an interface would be a mistake. Sometimes it's okay to just "new up" a class, instead of injecting it or creating it via a factory.
Yes, I think this is a good idea. Actually, we do it here all the time, and we eventually have to do it because of a simple reason:
We use Remoting to access server functionality. So the Remote Objects on the server need to implement the interfaces and the client code has to have access to the interfaces to use the remote objects.
In general, I think you are more loosely coupled when you put the interfaces in a separate project, so just go along and do it. It isn't really a problem to have 2 assemblies, is it?
ADDITION:
Just crossed my mind: By putting the interfaces in a separate assembly, you additionally get the benefit of being able to reuse the interfaces if a few of them are general enough.
I think you should first consider whether ALL interfaces belong to the 'public interface' of your project.
If they are to be shared by multiple projects, executables and/or services, I think it's fair to put them into a separate assembly.
However, if they are for internal use only and are there for your convenience, you could choose to keep them in the same assembly as the implementation, thus keeping the overall number of assemblies relatively low.
I wouldn't do it unless it offers a proven benefit for your application's architecture.
It's good to keep an eye on the number of assemblies you're creating. Even if an interface and its implementation are in the same assembly, you can still achieve the decoupling you rightly seek with a little discipline.
If an implementation of an interface ends up having a lot of dependencies (on other assemblies, etc.), then having the interface in an isolated assembly can simplify life for higher-level consumers.
They can reference the interface without inadvertently becoming dependent on the specific implementation's dependencies.
We used to have quite a number of separate assemblies in our shared code. Over time, we found that we almost invariably referenced these in groups. This made more work for the developers, and we had to hunt to find what assembly a class or interface was in. We ended up combining some of these assemblies based on usage patterns. Life got easier.
There are a lot of considerations here - are you writing a library for developers, are you deploying the DLLs to offsite customers, are you using remoting (thanks, Maximilian Mayerl) or writing WCF services, etc. There is no one right answer - it depends.
In general I agree with Jeff Sternal - don't break up the assemblies unless it offers a proven benefit.
There are pros and cons to the approach, and you will also need to temper the decision with how it best fits into your architectural approach.
On the "pro" side, you can achieve a level of separation to help enforce correct implementations of the interfaces. Consider that if you have junior- or mid-level developer working on implementations, the interfaces themselves can be defined in a project that they only have read access on. Perhaps a senior-level, team lead, or architect is responsible for the design and maintenance of the interfaces. If these interfaces are used on multiple projects, this can help mitigate the risk of unintentional breaking changes on other projects when only working in one. Also, if you work with third party vendors who you distribute an API to, packaging the interfaces is a very good thing to do.
Obviously, there are some down sides. The assembly does not contain executable code. In some shops that I have worked at, they have frowned upon not having functionality in an assembly, regardless of the reason. There definitely is additional overhead. Depending on how you set up your physical file and namespace structure, you might have multiple assemblies doing the same thing (although not required).
On a semi-random note, make sure to document your interfaces well. Documentation inheritance from interfaces using GhostDoc is a beautiful thing.
This is a good idea and I appreciate some of the distinctions in the accepted answer. Since both enumerations and especially interfaces are by their very nature dependency-less, this gives them special properties and makes them immune to circular dependencies and even to the merely complex dependency graphs that make a system "brittle". A co-worker of mine once called a similar technique the "memento pattern" and never failed to point out a useful application of it.
Put an interface into a project that already has many dependencies and that interface, at least with respect to that project, comes with all the dependencies of the project. Do this often and you're more likely to face situations with circular dependencies. The temptation is then to compensate with patches that wouldn't otherwise be needed.
It's as if coupling interfaces with projects having many dependencies contaminates them. The design intent of interfaces is to de-couple so in most cases it makes little sense to couple them to classes.
