There have been several questions already posted with specific questions about dependency injection, such as when to use it and what frameworks are there for it. However,
What is dependency injection and when/why should or shouldn't it be used?
The best definition I've found so far is one by James Shore:
"Dependency Injection" is a 25-dollar
term for a 5-cent concept. [...]
Dependency injection means giving an
object its instance variables. [...].
There is an article by Martin Fowler that may prove useful, too.
Dependency injection is basically providing the objects that an object needs (its dependencies) instead of having it construct them itself. It's a very useful technique for testing, since it allows dependencies to be mocked or stubbed out.
Dependencies can be injected into objects by many means (such as constructor injection or setter injection). One can even use specialized dependency injection frameworks (e.g. Spring) to do that, but they certainly aren't required. You don't need those frameworks to have dependency injection. Instantiating and passing objects (dependencies) explicitly is just as good an injection as injection by framework.
Dependency injection is passing a dependency to the object that needs it, either directly or through a framework (a dependency injector).
Dependency injection makes testing easier. The injection can be done through the constructor.
Suppose SomeClass has its constructor defined as follows:
public SomeClass() {
myObject = Factory.getObject();
}
Problem:
If myObject involves complex work such as disk access or network access, it is hard to unit test SomeClass. Programmers have to mock myObject, which may mean intercepting the factory call.
Alternative solution:
Passing myObject in as an argument to the constructor
public SomeClass (MyClass myObject) {
this.myObject = myObject;
}
myObject can be passed directly which makes testing easier.
One common alternative is defining a do-nothing constructor; dependency injection can then be done through setters (h/t @MikeVella).
Martin Fowler documents a third alternative (h/t @MarcDix), where classes explicitly implement an interface for the dependencies they wish to have injected.
It is harder to isolate components in unit testing without dependency injection.
In 2013, when I wrote this answer, this was a major theme on the Google Testing Blog. It remains the biggest advantage to me, as programmers do not always need the extra flexibility in their run-time design (for instance, for a service locator or similar patterns), but they do often need to isolate their classes during testing.
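For example, with the constructor-injected version of SomeClass above, a test can hand in a hand-written fake instead of the real dependency. This is only a sketch: FakeMyClass, doWork() and run() are names invented here for illustration, and a JUnit 4 style test is assumed.

public class SomeClassTest {

    // A hand-written fake standing in for the real MyClass; it records the
    // interaction instead of touching disk or network.
    static class FakeMyClass extends MyClass {
        boolean wasUsed = false;

        @Override
        public void doWork() {      // hypothetical method on MyClass
            wasUsed = true;
        }
    }

    @org.junit.Test
    public void usesItsInjectedDependency() {
        FakeMyClass fake = new FakeMyClass();
        SomeClass someClass = new SomeClass(fake);  // inject the fake

        someClass.run();                            // hypothetical method that uses myObject

        org.junit.Assert.assertTrue(fake.wasUsed);  // verified without any real I/O
    }
}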
I found this funny example in terms of loose coupling:
Source: Understanding dependency injection
Any application is composed of many objects that collaborate with each other to do something useful. Traditionally, each object is responsible for obtaining its own references to the objects it collaborates with (its dependencies). This leads to highly coupled classes and hard-to-test code.
For example, consider a Car object.
A Car depends on wheels, engine, fuel, battery, etc. to run. Traditionally we define the brand of such dependent objects along with the definition of the Car object.
Without Dependency Injection (DI):
class Car{
private Wheel wh = new NepaliRubberWheel();
private Battery bt = new ExcideBattery();
//The rest
}
Here, the Car object is responsible for creating the dependent objects.
What if we want to change the type of its dependent object - say Wheel - after the initial NepaliRubberWheel() punctures?
We need to recreate the Car object with its new dependency say ChineseRubberWheel(), but only the Car manufacturer can do that.
Then what does the Dependency Injection do for us...?
When using dependency injection, objects are given their dependencies at run time rather than compile time (car manufacturing time).
We can now change the Wheel whenever we want, because the dependency (the wheel) is injected into the Car at run time.
After using dependency injection:
Here, we are injecting the dependencies (Wheel and Battery) at runtime. Hence the term: Dependency Injection. We normally rely on DI frameworks such as Spring, Guice, or Weld to create the dependencies and inject them where needed.
class Car{
private Wheel wh; // Inject an Instance of Wheel (dependency of car) at runtime
private Battery bt; // Inject an Instance of Battery (dependency of car) at runtime
Car(Wheel wh, Battery bt) {
this.wh = wh;
this.bt = bt;
}
//Or we can have setters
void setWheel(Wheel wh) {
this.wh = wh;
}
}
The advantages are:
decoupling the creation of an object from its usage (in other words, separating usage from creation)
ability to replace dependencies (e.g. Wheel, Battery) without changing the class that uses them (Car), as sketched below
promotes the "code to an interface, not to an implementation" principle
ability to create and use mock dependencies during tests (if we want to use a mock Wheel during a test instead of a real instance, we can create a mock Wheel object and let the DI framework inject it into Car)
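Concretely, with the constructor and setter shown above, the same Car can be assembled with different parts without touching the Car class. A minimal sketch, assuming NepaliRubberWheel and ChineseRubberWheel are both Wheel implementations and ExcideBattery is a Battery, as in the earlier snippet:

// Initial assembly: the wheel and battery are created outside the Car and passed in.
Car car = new Car(new NepaliRubberWheel(), new ExcideBattery());

// After the NepaliRubberWheel punctures, swap in a different Wheel implementation
// through the setter; the Car class itself is not changed.
car.setWheel(new ChineseRubberWheel());

// In a unit test, a mock Wheel could be passed in exactly the same way.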
Dependency Injection is a practice where objects are designed so that they receive instances of other objects from other pieces of code, instead of constructing them internally. This means that any object implementing the interface required by the object can be substituted in without changing the code, which simplifies testing and improves decoupling.
For example, consider these classes:
public class PersonService {
public void addManager( Person employee, Person newManager ) { ... }
public void removeManager( Person employee, Person oldManager ) { ... }
public Group getGroupByManager( Person manager ) { ... }
}
public class GroupMembershipService {
public void addPersonToGroup( Person person, Group group ) { ... }
public void removePersonFromGroup( Person person, Group group ) { ... }
}
In this example, the implementation of PersonService::addManager and PersonService::removeManager would need an instance of the GroupMembershipService in order to do its work. Without Dependency Injection, the traditional way of doing this would be to instantiate a new GroupMembershipService in the constructor of PersonService and use that instance attribute in both functions. However, if the constructor of GroupMembershipService has multiple things it requires, or worse yet, there are some initialization "setters" that need to be called on the GroupMembershipService, the code grows rather quickly, and the PersonService now depends not only on the GroupMembershipService but also everything else that GroupMembershipService depends on. Furthermore, the linkage to GroupMembershipService is hardcoded into the PersonService which means that you can't "dummy up" a GroupMembershipService for testing purposes, or to use a strategy pattern in different parts of your application.
With Dependency Injection, instead of instantiating the GroupMembershipService within your PersonService, you'd either pass it in to the PersonService constructor, or else add a Property (getter and setter) to set a local instance of it. This means that your PersonService no longer has to worry about how to create a GroupMembershipService, it just accepts the ones it's given, and works with them. This also means that anything which is a subclass of GroupMembershipService, or implements the GroupMembershipService interface can be "injected" into the PersonService, and the PersonService doesn't need to know about the change.
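A rough sketch of what that looks like in code (the wiring shown in the comments is illustrative, and FakeGroupMembershipService is an invented test double):

public class PersonService {

    private final GroupMembershipService groupMembershipService;

    // The dependency is handed in instead of being constructed inside PersonService.
    public PersonService(GroupMembershipService groupMembershipService) {
        this.groupMembershipService = groupMembershipService;
    }

    public void addManager(Person employee, Person newManager) {
        // ... uses groupMembershipService instead of creating its own instance ...
    }
}

// Production wiring:
//   PersonService service = new PersonService(new GroupMembershipService());
// Test wiring, with a subclass or an implementation of an extracted interface:
//   PersonService service = new PersonService(new FakeGroupMembershipService());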
The accepted answer is a good one - but I would like to add to this that DI is very much like the classic avoiding of hardcoded constants in the code.
When you use some constant like a database name you'd quickly move it from the inside of the code to some config file and pass a variable containing that value to the place where it is needed. The reason to do that is that these constants usually change more frequently than the rest of the code. For example if you'd like to test the code in a test database.
DI is analogous to this in the world of object-oriented programming. The values there are whole objects instead of constant literals, but the reason for moving the code that creates them out of the class is similar: the objects change more frequently than the code that uses them. One important case where such a change is needed is tests.
Let's try a simple example with Car and Engine classes. Any car needs an engine to go anywhere, at least for now. Below is how the code looks without dependency injection.
public class Car
{
public Car()
{
GasEngine engine = new GasEngine();
engine.Start();
}
}
public class GasEngine
{
public void Start()
{
Console.WriteLine("I use gas as my fuel!");
}
}
And to instantiate the Car class we use the following code:
Car car = new Car();
The issue with this code is that we are tightly coupled to GasEngine: if we decide to change it to ElectricityEngine, we will need to rewrite the Car class. And the bigger the application, the more issues and headaches we will have when adding and using new types of engines.
In other words, with this approach our high-level Car class is dependent on the lower-level GasEngine class, which violates the Dependency Inversion Principle (DIP) from SOLID. DIP suggests that we should depend on abstractions, not concrete classes. To satisfy this, we introduce an IEngine interface and rewrite the code as below:
public interface IEngine
{
void Start();
}
public class GasEngine : IEngine
{
public void Start()
{
Console.WriteLine("I use gas as my fuel!");
}
}
public class ElectricityEngine : IEngine
{
public void Start()
{
Console.WriteLine("I am electrocar");
}
}
public class Car
{
private readonly IEngine _engine;
public Car(IEngine engine)
{
_engine = engine;
}
public void Run()
{
_engine.Start();
}
}
Now our Car class is dependent on only the IEngine interface, not a specific implementation of engine.
Now, the only trick is how do we create an instance of the Car and give it an actual concrete Engine class like GasEngine or ElectricityEngine. That's where Dependency Injection comes in.
Car gasCar = new Car(new GasEngine());
gasCar.Run();
Car electroCar = new Car(new ElectricityEngine());
electroCar.Run();
Here we basically inject (pass) our dependency (an Engine instance) into the Car constructor. Now our classes are loosely coupled to their dependencies, and we can easily add new types of engines without changing the Car class.
The main benefit of Dependency Injection is that classes are more loosely coupled, because they do not have hard-coded dependencies. This follows the Dependency Inversion Principle, which was mentioned above. Instead of referencing specific implementations, classes request abstractions (usually interfaces), which are provided to them when the class is constructed.
So in the end Dependency injection is just a technique for
achieving loose coupling between objects and their dependencies.
Rather than directly instantiating dependencies that class needs in
order to perform its actions, dependencies are provided to the class
(most often) via constructor injection.
Also, when we have many dependencies it is very good practice to use an Inversion of Control (IoC) container, which we can tell which interfaces should be mapped to which concrete implementations for all our dependencies, and which can then resolve those dependencies for us when it constructs our objects. For example, we could specify in the IoC container's mapping that the IEngine dependency should be mapped to the GasEngine class, and when we ask the IoC container for an instance of our Car class, it will automatically construct our Car class with a GasEngine dependency passed in.
UPDATE: I recently watched a course about EF Core by Julie Lerman and also liked her short definition of DI.
Dependency injection is a pattern to allow your application to inject
objects on the fly to classes that need them, without forcing those
classes to be responsible for those objects. It allows your code to be
more loosely coupled, and Entity Framework Core plugs in to this same
system of services.
Let's imagine that you want to go fishing:
Without dependency injection, you need to take care of everything yourself. You need to find a boat, to buy a fishing rod, to look for bait, etc. It's possible, of course, but it puts a lot of responsibility on you. In software terms, it means that you have to perform a lookup for all these things.
With dependency injection, someone else takes care of all the preparation and makes the required equipment available to you. You will receive ("be injected") the boat, the fishing rod and the bait - all ready to use.
This is the simplest explanation of Dependency Injection and Dependency Injection Containers I have ever seen:
Without Dependency Injection
Application needs Foo (e.g. a controller), so:
Application creates Foo
Application calls Foo
Foo needs Bar (e.g. a service), so:
Foo creates Bar
Foo calls Bar
Bar needs Bim (a service, a repository, …), so:
Bar creates Bim
Bar does something
With Dependency Injection
Application needs Foo, which needs Bar, which needs Bim, so:
Application creates Bim
Application creates Bar and gives it Bim
Application creates Foo and gives it Bar
Application calls Foo
Foo calls Bar
Bar does something
Using a Dependency Injection Container
Application needs Foo so:
Application gets Foo from the Container, so:
Container creates Bim
Container creates Bar and gives it Bim
Container creates Foo and gives it Bar
Application calls Foo
Foo calls Bar
Bar does something
Dependency Injection and Dependency Injection Containers are different things:
Dependency Injection is a method for writing better code
a DI Container is a tool to help inject dependencies
You don't need a container to do dependency injection. However a container can help you.
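In code, the "With Dependency Injection" sequence above is just this kind of wiring at the application's entry point (Foo, Bar and Bim are the made-up classes from the example, assumed to take their dependencies as constructor parameters):

// Hand-rolled wiring, no container needed:
Bim bim = new Bim();
Bar bar = new Bar(bim);   // give Bar the Bim it needs
Foo foo = new Foo(bar);   // give Foo the Bar it needs

foo.doSomething();        // Foo calls Bar, Bar does something with Bim

// A DI container performs exactly these steps for you, based on its configuration.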
Before going into the technical description, first visualize it with a real-life example, because there is plenty of technical material for learning dependency injection, yet many people still miss its core concept.
In the first picture, assume that you have a car factory with a lot of units. A car is actually built in the assembly unit, but it needs an engine, seats, and wheels. So the assembly unit depends on all of these units; they are the dependencies of the factory.
You can feel that it is now too complicated to maintain all of the tasks in this factory, because along with the main task (assembling a car in the assembly unit) you also have to focus on the other units. It is now very costly to maintain, and the factory building is huge, so it costs you extra money in rent.
Now, look at the second picture. If you find some provider companies that will supply the wheels, seats, and engines more cheaply than your self-production cost, then you no longer need to make them in your factory. You can rent a smaller building just for your assembly unit, which lessens your maintenance work and reduces your rental cost. Now you can also focus only on your main task (car assembly).
Now we can say that all the dependencies for assembling a car are injected into the factory from the providers. This is an example of real-life Dependency Injection (DI).
Now in technical terms, dependency injection is a technique whereby one object (or static method) supplies the dependencies of another object. So transferring the task of creating the object to someone else, and directly using the dependency, is called dependency injection.
This should help you to approach DI through a technical explanation, including when to use it and when not to.
Doesn't "dependency injection" just mean using parameterized constructors and public setters?
James Shore's article shows the following examples for comparison.
Constructor without dependency injection:
public class Example {
private DatabaseThingie myDatabase;
public Example() {
myDatabase = new DatabaseThingie();
}
public void doStuff() {
...
myDatabase.getData();
...
}
}
Constructor with dependency injection:
public class Example {
private DatabaseThingie myDatabase;
public Example(DatabaseThingie useThisDatabaseInstead) {
myDatabase = useThisDatabaseInstead;
}
public void doStuff() {
...
myDatabase.getData();
...
}
}
To make the Dependency Injection concept simple to understand, let's take the example of a switch button that toggles a bulb on and off.
Without Dependency Injection
Switch needs to know beforehand which bulb I am connected to (hard-coded dependency). So,
Switch -> PermanentBulb //switch is directly connected to permanent bulb, testing not possible easily
Switch(){
PermanentBulb = new Bulb();
PermanentBulb.Toggle();
}
With Dependency Injection
Switch only knows I need to turn on/off whichever Bulb is passed to me. So,
Switch -> Bulb1 OR Bulb2 OR NightBulb (injected dependency)
Switch(AnyBulb){ //pass it whichever bulb you like
AnyBulb.Toggle();
}
Modifying James Example for Switch and Bulb:
public class SwitchTest {
TestToggleBulb() {
MockBulb mockBulb = new MockBulb();
// MockBulb is a subclass of Bulb, so we can
// "inject" it here:
Switch mySwitch = new Switch(mockBulb);
mySwitch.ToggleBulb();
mockBulb.AssertToggleWasCalled();
}
}
public class Switch {
private Bulb myBulb;
public Switch() {
myBulb = new Bulb();
}
public Switch(Bulb useThisBulbInstead) {
myBulb = useThisBulbInstead;
}
public void ToggleBulb() {
...
myBulb.Toggle();
...
}
}
What is Dependency Injection (DI)?
As others have said, Dependency Injection(DI) removes the responsibility of direct creation, and management of the lifespan, of other object instances upon which our class of interest (consumer class) is dependent (in the UML sense). These instances are instead passed to our consumer class, typically as constructor parameters or via property setters (the management of the dependency object instancing and passing to the consumer class is usually performed by an Inversion of Control (IoC) container, but that's another topic).
DI, DIP and SOLID
Specifically, in the paradigm of Robert C Martin's SOLID principles of Object Oriented Design, DI is one of the possible implementations of the Dependency Inversion Principle (DIP). The DIP is the D of the SOLID mantra - other DIP implementations include the Service Locator, and Plugin patterns.
The objective of the DIP is to decouple tight, concrete dependencies between classes, and instead, to loosen the coupling by means of an abstraction, which can be achieved via an interface, abstract class or pure virtual class, depending on the language and approach used.
Without the DIP, our code (I've called this 'consuming class') is directly coupled to a concrete dependency and is also often burdened with the responsibility of knowing how to obtain, and manage, an instance of this dependency, i.e. conceptually:
"I need to create/use a Foo and invoke method `GetBar()`"
Whereas after application of the DIP, the requirement is loosened, and the concern of obtaining and managing the lifespan of the Foo dependency has been removed:
"I need to invoke something which offers `GetBar()`"
Why use DIP (and DI)?
Decoupling dependencies between classes in this way allows for easy substitution of these dependency classes with other implementations which also fulfil the prerequisites of the abstraction (e.g. the dependency can be switched with another implementation of the same interface). Moreover, as others have mentioned, possibly the most common reason to decouple classes via the DIP is to allow a consuming class to be tested in isolation, as these same dependencies can now be stubbed and/or mocked.
One consequence of DI is that the lifespan management of dependency object instances is no longer controlled by a consuming class, as the dependency object is now passed into the consuming class (via constructor or setter injection).
This can be viewed in different ways:
If lifespan control of dependencies by the consuming class needs to be retained, control can be re-established by injecting an (abstract) factory for creating the dependency class instances, into the consumer class. The consumer will be able to obtain instances via a Create on the factory as needed, and dispose of these instances once complete.
Or, lifespan control of dependency instances can be relinquished to an IoC container (more about this below).
When to use DI?
Where there likely will be a need to substitute a dependency for an equivalent implementation,
Any time where you will need to unit test the methods of a class in isolation of its dependencies,
Where uncertainty of the lifespan of a dependency may warrant experimentation (e.g. Hey, MyDepClass is thread safe - what if we make it a singleton and inject the same instance into all consumers?)
Example
Here's a simple C# implementation. Given the below Consuming class:
public class MyLogger
{
public void LogRecord(string somethingToLog)
{
Console.WriteLine("{0:HH:mm:ss} - {1}", DateTime.Now, somethingToLog);
}
}
Although seemingly innocuous, it has two static dependencies on two other classes, System.DateTime and System.Console, which not only limit the logging output options (logging to console will be worthless if no one is watching), but worse, it is difficult to automatically test given the dependency on a non-deterministic system clock.
We can however apply the DIP to this class by abstracting out the concern of timestamping as a dependency, and coupling MyLogger only to a simple interface:
public interface IClock
{
DateTime Now { get; }
}
We can also loosen the dependency on Console to an abstraction, such as a TextWriter. Dependency Injection is typically implemented as either constructor injection (passing an abstraction to a dependency as a parameter to the constructor of a consuming class) or Setter Injection (passing the dependency via a setXyz() setter or a .Net Property with {set;} defined). Constructor Injection is preferred, as this guarantees the class will be in a correct state after construction, and allows the internal dependency fields to be marked as readonly (C#) or final (Java). So using constructor injection on the above example, this leaves us with:
public class MyLogger : ILogger // Others will depend on our logger.
{
private readonly TextWriter _output;
private readonly IClock _clock;
// Dependencies are injected through the constructor
public MyLogger(TextWriter stream, IClock clock)
{
_output = stream;
_clock = clock;
}
public void LogRecord(string somethingToLog)
{
// We can now use our dependencies through the abstraction
// and without knowledge of the lifespans of the dependencies
_output.Write("{0:yyyy-MM-dd HH:mm:ss} - {1}", _clock.Now, somethingToLog);
}
}
(A concrete Clock needs to be provided, which of course could revert to DateTime.Now, and the two dependencies need to be provided by an IoC container via constructor injection)
An automated Unit Test can be built, which definitively proves that our logger is working correctly, as we now have control over the dependencies - the time, and we can spy on the written output:
[Test]
public void LoggingMustRecordAllInformationAndStampTheTime()
{
// Arrange
var mockClock = new Mock<IClock>();
mockClock.Setup(c => c.Now).Returns(new DateTime(2015, 4, 11, 12, 31, 45));
var fakeConsole = new StringWriter();
// Act
new MyLogger(fakeConsole, mockClock.Object)
.LogRecord("Foo");
// Assert
Assert.AreEqual("2015-04-11 12:31:45 - Foo", fakeConsole.ToString());
}
Next Steps
Dependency injection is invariably associated with an Inversion of Control (IoC) container, which injects (provides) the concrete dependency instances and manages instance lifespans. During the configuration / bootstrapping process, IoC containers allow the following to be defined:
mapping between each abstraction and the configured concrete implementation (e.g. "any time a consumer requests an IBar, return a ConcreteBar instance")
policies can be set up for the lifespan management of each dependency, e.g. to create a new object for each consumer instance, to share a singleton dependency instance across all consumers, to share the same dependency instance only across the same thread, etc.
In .Net, IoC containers are aware of protocols such as IDisposable and will take on the responsibility of Disposing dependencies in line with the configured lifespan management.
Typically, once IoC containers have been configured / bootstrapped, they operate seamlessly in the background allowing the coder to focus on the code at hand rather than worrying about dependencies.
The key to DI-friendly code is to avoid static coupling of classes, and not to use new() for the creation of Dependencies
As per above example, decoupling of dependencies does require some design effort, and for the developer, there is a paradigm shift needed to break the habit of newing dependencies directly, and instead trusting the container to manage dependencies.
But the benefits are many, especially in the ability to thoroughly test your class of interest.
Note : The creation / mapping / projection (via new ..()) of POCO / POJO / Serialization DTOs / Entity Graphs / Anonymous JSON projections et al - i.e. "Data only" classes or records - used or returned from methods are not regarded as Dependencies (in the UML sense) and not subject to DI. Using new to project these is just fine.
The whole point of Dependency Injection (DI) is to keep application source code clean and stable:
clean of dependency initialization code
stable regardless of dependency used
Practically, every design pattern separates concerns to make future changes affect minimum files.
The specific domain of DI is delegation of dependency configuration and initialization.
Example: DI with shell script
If you occasionally work outside of Java, recall how source is often used in many scripting languages (Shell, Tcl, etc., or even import in Python misused for this purpose).
Consider a simple dependent.sh script:
#!/bin/sh
# Dependent
touch "one.txt" "two.txt"
archive_files "one.txt" "two.txt"
The script is dependent: it won't execute successfully on its own (archive_files is not defined).
You define archive_files in archive_files_zip.sh implementation script (using zip in this case):
#!/bin/sh
# Dependency
function archive_files {
zip files.zip "$@"
}
Instead of source-ing implementation script directly in the dependent one, you use an injector.sh "container" which wraps both "components":
#!/bin/sh
# Injector
source ./archive_files_zip.sh
source ./dependent.sh
The archive_files dependency has just been injected into the dependent script.
You could equally have injected a dependency that implements archive_files using tar or xz.
Example: removing DI
If dependent.sh script used dependencies directly, the approach would be called dependency lookup (which is opposite to dependency injection):
#!/bin/sh
# Dependent
# dependency look-up
source ./archive_files_zip.sh
touch "one.txt" "two.txt"
archive_files "one.txt" "two.txt"
Now the problem is that the dependent "component" has to perform the initialization itself.
The "component"'s source code is neither clean nor stable, because every change in the initialization of its dependencies requires a new release of the "component"'s source code file as well.
Last words
DI is not as heavily emphasized and popularized elsewhere as it is in Java frameworks.
But it's a generic approach to split concerns of:
application development (single source code release lifecycle)
application deployment (multiple target environments with independent lifecycles)
Using configuration alone with dependency lookup does not help, because the number of configuration parameters may change per dependency (e.g. a new authentication type), as well as the number of supported dependency types (e.g. a new database type).
All the above answers are good; my aim is to explain the concept in a simple way so that anyone without programming knowledge can also understand it.
Dependency injection is one of the design patterns that help us create complex systems in a simpler manner.
We can see a wide variety of applications of this pattern in our day-to-day life.
Some examples are the tape recorder, VCD player, CD drive, etc.
The above image shows a reel-to-reel portable tape recorder from the mid-20th century. Source.
The primary purpose of a tape recorder machine is to record or play back sound.
When designing such a system, it requires a reel to record or play back sound or music. There are two possibilities for designing this system:
we can place the reel inside the machine
we can provide a hook for the reel where it can be placed.
If we use the first one, we need to open the machine to change the reel.
If we opt for the second one, that is, providing a hook for the reel, we get the added benefit of playing any music by changing the reel, and the machine's function is reduced to simply playing whatever is on the reel.
Likewise, dependency injection is the process of externalizing the dependencies so that a component can focus only on its specific functionality, and so that independent components can be coupled together to form a complex system.
The main benefits we achieve by using dependency injection:
High cohesion and loose coupling.
Externalizing dependencies and focusing only on responsibilities.
Making things into components that can be combined to form large systems with rich capabilities.
It helps to develop high-quality components, since they are developed independently and properly tested.
It helps to replace a component with another if one fails.
Nowadays this concept forms the basis of well-known frameworks in the programming world.
Spring, Angular, etc. are well-known software frameworks built on top of this concept.
Dependency injection is a pattern used to create instances of objects that other objects rely upon, without knowing at compile time which class will be used to provide that functionality. Put simply, the way of injecting properties into an object is called dependency injection.
Example for Dependency injection
Previously we would write code like this:
public class MyClass {
DependentClass dependentObject;
public void doSomething() {
/*
Somewhere in our code we need to instantiate
the object with the new operator in order to use it or call some method.
*/
dependentObject = new DependentClass();
dependentObject.someMethod();
}
}
With dependency injection, the dependency injector takes care of the instantiation for us:
public class MyClass {
/* The dependency injector will instantiate the object */
DependentClass dependentObject;
public void doSomething() {
/*
Somewhere in our code we just call the method.
The process of instantiation is handled by the dependency injector.
*/
dependentObject.someMethod();
}
}
You can also read
Difference between Inversion of Control & Dependency Injection
For example, say we have two classes, Client and Service, where Client will use Service:
public class Service {
public void doSomeThingInService() {
// ...
}
}
Without Dependency Injection
Way 1)
public class Client {
public void doSomeThingInClient() {
Service service = new Service();
service.doSomeThingInService();
}
}
Way 2)
public class Client {
Service service = new Service();
public void doSomeThingInClient() {
service.doSomeThingInService();
}
}
Way 3)
public class Client {
Service service;
public Client() {
service = new Service();
}
public void doSomeThingInClient() {
service.doSomeThingInService();
}
}
Using 1), 2) or 3):
Client client = new Client();
client.doSomeThingInClient();
Advantages
Simple
Disadvantages
Hard to test the Client class
When we change the Service constructor, we need to change the code in every place where a Service object is created
Use Dependency Injection
Way 1) Constructor injection
public class Client {
Service service;
Client(Service service) {
this.service = service;
}
// Example Client has 2 dependency
// Client(Service service, IDatabase database) {
// this.service = service;
// this.database = database;
// }
public void doSomeThingInClient() {
service.doSomeThingInService();
}
}
Using
Client client = new Client(new Service());
// Client client = new Client(new Service(), new SqliteDatabase());
client.doSomeThingInClient();
Way 2) Setter injection
public class Client {
Service service;
public void setService(Service service) {
this.service = service;
}
public void doSomeThingInClient() {
service.doSomeThingInService();
}
}
Using
Client client = new Client();
client.setService(new Service());
client.doSomeThingInClient();
Way 3) Interface injection
Check https://en.wikipedia.org/wiki/Dependency_injection
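Interface injection is not shown above, so here is a rough sketch (the ServiceSetter interface is invented for illustration, following the pattern described on that page): the injector talks to the Client through an injection interface that the Client implements.

interface ServiceSetter {
    void setService(Service service);
}

public class Client implements ServiceSetter {
    Service service;

    @Override
    public void setService(Service service) {
        this.service = service;
    }

    public void doSomeThingInClient() {
        service.doSomeThingInService();
    }
}

// The injector can now inject into anything that implements ServiceSetter:
//   ServiceSetter client = new Client();
//   client.setService(new Service());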
===
Now, this code already follows Dependency Injection, and it is easier to test the Client class.
However, we still write new Service() in many places, and that is not good when the Service constructor changes. To prevent this, we can use a DI injector like:
1) Simple manual Injector
public class Injector {
public static Service provideService(){
return new Service();
}
public static IDatabase provideDatatBase(){
return new SqliteDatabase();
}
public static ObjectA provideObjectA(){
return new ObjectA(provideService(...));
}
}
Using
Service service = Injector.provideService();
2) Use a library: for Android, Dagger 2
Advantages
Makes testing easier
When you change the Service, you only need to change it in the Injector class
If you use Constructor Injection, when you look at the constructor of Client you can see how many dependencies the Client class has
Disadvantages
If you use Constructor Injection, the Service object is created when the Client is created; sometimes we use functions in the Client class that don't use the Service, so the created Service is wasted
Dependency Injection definition
https://en.wikipedia.org/wiki/Dependency_injection
A dependency is an object that can be used (Service)
An injection is the passing of a dependency (Service) to a dependent object (Client) that would use it
What is dependency Injection?
Dependency Injection (DI) means decoupling objects that depend on each other. Say object A is dependent on object B; the idea is to decouple these objects from each other. We don't need to hard-code the object using the new keyword; instead, dependencies are shared with objects at runtime rather than at compile time.
If we talk about
How Dependency Injection works in Spring:
We don't need to hard-code the object using the new keyword; rather, we define the bean dependency in the configuration file. The Spring container will be responsible for hooking it all up.
Inversion of Control (IOC)
IoC is a general concept that can be expressed in many different ways; Dependency Injection is one concrete example of IoC.
Two types of Dependency Injection:
Constructor Injection
Setter Injection
1. Constructor-based dependency injection:
Constructor-based DI is accomplished when the container invokes a class constructor with a number of arguments, each representing a dependency on another class.
public class Triangle {
private String type;
public String getType(){
return type;
}
public Triangle(String type){ //constructor injection
this.type=type;
}
}
<bean id=triangle" class ="com.test.dependencyInjection.Triangle">
<constructor-arg value="20"/>
</bean>
2. Setter-based dependency injection:
Setter-based DI is accomplished by the container calling setter methods on your beans after invoking a no-argument constructor or no-argument static factory method to instantiate your bean.
public class Triangle{
private String type;
public String getType(){
return type;
}
public void setType(String type){ //setter injection
this.type = type;
}
}
<!-- setter injection -->
<bean id="triangle" class="com.test.dependencyInjection.Triangle">
<property name="type" value="equilateral"/>
</bean>
NOTE:
It is a good rule of thumb to use constructor arguments for mandatory dependencies and setters for optional dependencies. Note that if we use annotation-based configuration, the @Required annotation on a setter can be used to mark the setter as a required dependency.
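As a plain-Java sketch of that rule of thumb (all class and dependency names here are invented for illustration):

public class ReportService {

    private final DataSource dataSource;                           // mandatory dependency
    private ReportFormatter formatter = new PlainTextFormatter();  // optional, with a sensible default

    // Mandatory dependency: supplied through the constructor, so the object
    // can never exist in an unusable state.
    public ReportService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Optional dependency: can be overridden through a setter when needed.
    public void setFormatter(ReportFormatter formatter) {
        this.formatter = formatter;
    }
}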
The best analogy I can think of is the surgeon and his assistant(s) in an operating theater, where the surgeon is the main person and his assistant provides the various surgical components when he needs them, so that the surgeon can concentrate on the one thing he does best (surgery). Without the assistant, the surgeon has to get the components himself every time he needs one.
DI, for short, is a technique that removes from components the common extra responsibility (burden) of fetching their dependent components, by providing those components to them.
DI brings you closer to the Single Responsibility (SR) principle, like the surgeon who can concentrate on surgery.
When to use DI: I would recommend using DI in almost all production projects (small or big), particularly in ever-changing business environments :)
Why: Because you want your code to be easily testable, mockable, etc., so that you can quickly test your changes and push them to market. Besides, why wouldn't you, when there are lots of awesome free tools/frameworks to support you on your journey to a codebase where you have more control?
Loose coupling means that objects should only have as many dependencies as are needed to do their job, and the dependencies should be few. Furthermore, an object's dependencies should be on interfaces and not on "concrete" objects, when possible. (A concrete object is any object created with the keyword new.) Loose coupling promotes greater reusability, easier maintainability, and allows you to easily provide "mock" objects in place of expensive services.
"Dependency Injection" (DI), a form of "Inversion of Control" (IoC), can be used as a technique for encouraging this loose coupling.
There are two primary approaches to implementing DI:
Constructor injection
Setter injection
Constructor injection
It's the technique of passing an object's dependencies to its constructor.
In the constructor-injected version of the Person class (whose constructor takes an IDAO orderDao), the constructor accepts an interface and not a concrete object, and an exception is thrown if the orderDao parameter is null. This emphasizes the importance of receiving a valid dependency. Constructor Injection is, in my opinion, the preferred mechanism for giving an object its dependencies. It is clear to the developer invoking the object which dependencies need to be given to the "Person" object for proper execution.
Setter Injection
But consider the following example… Suppose you have a class with ten methods that have no dependencies, but you're adding a new method that does have a dependency on IDAO. You could change the constructor to use Constructor Injection, but this may force you to change all constructor calls all over the place. Alternatively, you could just add a new constructor that takes the dependency, but then how does a developer easily know when to use one constructor over the other? Finally, if the dependency is very expensive to create, why should it be created and passed to the constructor when it may only be used rarely? "Setter Injection" is another DI technique that can be used in situations such as this.
Setter Injection does not force dependencies to be passed to the constructor. Instead, the dependencies are set onto public properties exposed by the object in need. As implied previously, the primary motivators for doing this include:
Supporting dependency injection without having to modify the constructor of a legacy class.
Allowing expensive resources or services to be created as late as possible and only when needed.
Here is an example of how the above code would look:
public class Person {
public Person() {}
public IDAO Address {
set { addressdao = value; }
get {
if (addressdao == null)
throw new MemberAccessException("addressdao" +
" has not been initialized");
return addressdao;
}
}
public Address GetAddress() {
// ... code that uses the addressdao object
// to fetch address details from the datasource ...
}
// Should not be called directly;
// use the public property instead
private IDAO addressdao;
}
I know there are already many answers, but I found this very helpful: http://tutorials.jenkov.com/dependency-injection/index.html
No Dependency:
public class MyDao {
protected DataSource dataSource = new DataSourceImpl(
"driver", "url", "user", "password");
//data access methods...
public Person readPerson(int primaryKey) {...}
}
Dependency:
public class MyDao {
protected DataSource dataSource = null;
public MyDao(String driver, String url, String user, String password) {
this.dataSource = new DataSourceImpl(driver, url, user, password);
}
//data access methods...
public Person readPerson(int primaryKey) {...}
}
Notice how the DataSourceImpl instantiation is moved into a constructor. The constructor takes four parameters, which are the four values needed by the DataSourceImpl. Though the MyDao class still depends on these four values, it no longer satisfies these dependencies itself. They are provided by whatever class creates a MyDao instance.
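A possible further step (not shown in the quoted snippet) is to inject the fully constructed DataSource itself rather than its four configuration values, so that MyDao no longer knows about DataSourceImpl at all:

public class MyDao {
    protected DataSource dataSource;

    // Whoever creates MyDao decides which DataSource implementation to use
    // and how it is configured; MyDao only uses the abstraction.
    public MyDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    //data access methods...
    public Person readPerson(int primaryKey) {...}
}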
I think since everyone has written about DI, let me ask a few questions.
When you have a DI configuration where all the actual implementations (not interfaces) that are going to be injected into a class are specified (e.g. services for a controller), why is that not some sort of hard-coding?
What if I want to change the object at runtime? For example, my config already says that when I instantiate MyController, it should inject FileLogger as the ILogger. But I might want to inject DatabaseLogger.
Every time I want to change what objects my AClass needs, I now need to look into two places: the class itself and the configuration file. How does that make life easier?
If a property of AClass is not injected, is it harder to mock it out?
Going back to the first question: if using new object() is bad, how come we inject the implementation and not the interface? I think a lot of you are saying we're in fact injecting the interface, but the configuration makes you specify the implementation of that interface. Not at runtime; it is hardcoded during compile time.
This is based on the answer @Adam N posted.
Why does PersonService no longer have to worry about GroupMembershipService? You just mentioned GroupMembership has multiple things(objects/properties) it depends on. If GMService was required in PService, you'd have it as a property. You can mock that out regardless of whether you injected it or not. The only time I'd like it to be injected is if GMService had more specific child classes, which you wouldn't know until runtime. Then you'd want to inject the subclass. Or if you wanted to use that as either singleton or prototype. To be honest, the configuration file has everything hardcoded as far as what subclass for a type (interface) it is going to inject during compile time.
EDIT
A nice comment by Jose Maria Arranz on DI
DI increases cohesion by removing any need to determine the direction of dependency and write any glue code.
False. The direction of dependencies is in XML form or as annotations, your dependencies are written as XML code and annotations. XML and annotations ARE source code.
DI reduces coupling by making all of your components modular (i.e. replaceable) and have well-defined interfaces to each other.
False. You do not need a DI framework to build a modular code based on interfaces.
About replaceable: with a very simple .properties file and Class.forName you can define which classes can change. If ANY class of your code can be changed, Java is not for you; use a scripting language. By the way: annotations cannot be changed without recompiling.
In my opinion there is only one reason for DI frameworks: boilerplate reduction. With a well-done factory system you can do the same, in a more controlled and more predictable way than with your preferred DI framework; DI frameworks promise code reduction (but XML and annotations are source code too). The problem is that this boilerplate reduction is only real in very, very simple cases (one instance per class and similar); sometimes, in the real world, picking the appropriate service object is not as easy as mapping a class to a singleton object.
The popular answers are unhelpful, because they define dependency injection in a way that isn't useful. Let's agree that by "dependency" we mean some pre-existing other object that our object X needs. But we don't say we're doing "dependency injection" when we say
$foo = Foo->new($bar);
We just call that passing parameters into the constructor. We've been doing that regularly ever since constructors were invented.
"Dependency injection" is considered a type of "inversion of control", which means that some logic is taken out of the caller. That isn't the case when the caller passes in parameters, so if that were DI, DI would not imply inversion of control.
DI means there is an intermediate level between the caller and the constructor which manages dependencies. A Makefile is a simple example of dependency injection. The "caller" is the person typing "make bar" on the command line, and the "constructor" is the compiler. The Makefile specifies that bar depends on foo, and it does a
gcc -c foo.cpp; gcc -c bar.cpp
before doing a
gcc foo.o bar.o -o bar
The person typing "make bar" doesn't need to know that bar depends on foo. The dependency was injected between "make bar" and gcc.
The main purpose of the intermediate level is not just to pass in the dependencies to the constructor, but to list all the dependencies in just one place, and to hide them from the coder (not to make the coder provide them).
Usually the intermediate level provides factories for the constructed objects, which must provide a role that each requested object type must satisfy. That's because by having an intermediate level that hides the details of construction, you've already incurred the abstraction penalty imposed by factories, so you might as well use factories.
Dependency injection means a way (actually, any way) for one part of the code (e.g. a class) to have access to its dependencies (other parts of the code, e.g. other classes it depends upon) in a modular way, without them being hardcoded (so they can change or be overridden freely, or even be loaded at another time, as needed).
(And P.S.: yes, it has become an overly-hyped $25 name for a rather simple concept.) My .25 cents.
From the book The Well-Grounded Java Developer: Vital Techniques of Java 7 and Polyglot Programming:
DI is a particular form of IoC, whereby the process of finding your dependencies is
outside the direct control of your currently executing code.
Dependency injection is one possible solution to what could generally be termed the "Dependency Obfuscation" requirement. Dependency Obfuscation is a method of taking the 'obvious' nature out of the process of providing a dependency to a class that requires it and therefore obfuscating, in some way, the provision of said dependency to said class. This is not necessarily a bad thing. In fact, by obfuscating the manner by which a dependency is provided to a class then something outside the class is responsible for creating the dependency which means, in various scenarios, a different implementation of the dependency can be supplied to the class without making any changes to the class. This is great for switching between production and testing modes (eg., using a 'mock' service dependency).
Unfortunately the bad part is that some people have assumed you need a specialized framework to do dependency obfuscation and that you are somehow a 'lesser' programmer if you choose not to use a particular framework to do it. Another, extremely disturbing myth, believed by many, is that dependency injection is the only way of achieving dependency obfuscation. This is demonstrably and historically and obviously 100% wrong but you will have trouble convincing some people that there are alternatives to dependency injection for your dependency obfuscation requirements.
Programmers have understood the dependency obfuscation requirement for years, and many alternative solutions have evolved both before and after dependency injection was conceived. There are Factory patterns, but there are also many options using ThreadLocal where no injection into a particular instance is needed: the dependency is effectively injected into the thread, which has the benefit of making the object available (via convenient static getter methods) to any class that requires it, without having to add annotations to the classes that require it or set up intricate XML 'glue' to make it happen. When your dependencies are required for persistence (JPA/JDO or whatever), it allows you to achieve 'transparent persistence' much more easily, with domain model and business model classes made up purely of POJOs (i.e. no framework-specific / locked-in annotations).
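A rough sketch of the ThreadLocal approach described above (all names are invented; this only illustrates the mechanism, not any particular framework):

// A hypothetical dependency.
interface PersistenceService {
    void save(Object entity);
}

// Thread-scoped "injection": the dependency is bound to the current thread by code
// at the application's edge (a request filter, a test fixture, and so on).
final class CurrentServices {
    private static final ThreadLocal<PersistenceService> PERSISTENCE = new ThreadLocal<>();

    static void bind(PersistenceService service) { PERSISTENCE.set(service); }
    static void unbind() { PERSISTENCE.remove(); }

    // Any class running on that thread can obtain the dependency through a static
    // getter, with no constructor parameters, annotations or XML required.
    static PersistenceService persistence() { return PERSISTENCE.get(); }
}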
Dependency Injection for 5 year olds.
When you go and get things out of the refrigerator for yourself, you can cause problems. You might leave the door open, you might get something Mommy or Daddy doesn't want you to have. You might be even looking for something we don't even have or which has expired.
What you should be doing is stating a need, "I need something to drink with lunch," and then we will make sure you have something when you sit down to eat.
In simple words, dependency injection (DI) is a way to remove dependencies, or tight coupling, between different objects. Dependency injection gives cohesive behavior to each object.
DI is Spring's implementation of the IoC principle, which says "Don't call us, we'll call you". Using dependency injection, the programmer doesn't need to create objects using the new keyword.
Objects are loaded once into the Spring container, and then we reuse them whenever we need them by fetching them from the Spring container using the getBean(String beanName) method.
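For example, with a classic XML-configured Spring application context, the usage looks roughly like this (the bean name, the PersonService type and its doSomething() method are illustrative):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        // The container builds and wires the whole object graph from the XML configuration.
        ApplicationContext context =
                new ClassPathXmlApplicationContext("applicationContext.xml");

        // We only fetch the ready-made bean; its dependencies were injected by Spring.
        PersonService personService = (PersonService) context.getBean("personService");
        personService.doSomething();
    }
}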
From the book Spring Persistence with Hibernate (Apress, October 2010):
The purpose of dependency injection is to decouple the work of
resolving external software components from your application business
logic. Without dependency injection, the details of how a component
accesses required services can get muddled in with the component’s
code. This not only increases the potential for errors, adds code
bloat, and magnifies maintenance complexities; it couples components
together more closely, making it difficult to modify dependencies when
refactoring or testing.
Dependency Injection (DI) is part of the Dependency Inversion Principle (DIP) practice, which is closely related to Inversion of Control (IoC). Basically you need to do DIP because you want to make your code more modular and unit-testable, instead of just one monolithic system. So you start identifying parts of the code that can be separated from the class and abstracted away. Now the implementation of the abstraction needs to be injected from outside the class. Normally this can be done via the constructor. So you create a constructor that accepts the abstraction as a parameter, and this is called dependency injection (via constructor). For more explanation about DIP, DI, and IoC containers you can read Here.
Dependency Injection (DI) is one of the design patterns that use a basic feature of OOP: the relationship of one object with another object. While inheritance derives one object from another to make it more complex and specific, a relationship (association) simply creates a pointer to another object from one object using an attribute. The power of DI lies in its combination with other features of OOP, such as interfaces and code hiding.
Suppose we have a customer (subscriber) of a library who, for simplicity, can borrow only one book.
Interface of book:
package com.deepam.hidden;
public interface BookInterface {
public BookInterface setHeight(int height);
public BookInterface setPages(int pages);
public int getHeight();
public int getPages();
public String toString();
}
Next, we can have many kinds of books; one type is fiction:
package com.deepam.hidden;
public class FictionBook implements BookInterface {
int height = 0; // height in cm
int pages = 0; // number of pages
/** constructor */
public FictionBook() {
// TODO Auto-generated constructor stub
}
@Override
public FictionBook setHeight(int height) {
this.height = height;
return this;
}
@Override
public FictionBook setPages(int pages) {
this.pages = pages;
return this;
}
@Override
public int getHeight() {
// TODO Auto-generated method stub
return height;
}
@Override
public int getPages() {
// TODO Auto-generated method stub
return pages;
}
@Override
public String toString(){
return ("height: " + height + ", " + "pages: " + pages);
}
}
Now subscriber can have association to the book:
package com.deepam.hidden;
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
public class Subscriber {
BookInterface book;
/** constructor*/
public Subscriber() {
// TODO Auto-generated constructor stub
}
// injection I
public void setBook(BookInterface book) {
this.book = book;
}
// injection II
public BookInterface setBook(String bookName) {
try {
Class<?> cl = Class.forName(bookName);
Constructor<?> constructor = cl.getConstructor(); // use it for parameters in constructor
book = (BookInterface) constructor.newInstance(); // assign to the field, not a new local variable
//book = (BookInterface) Class.forName(bookName).newInstance();
} catch (InstantiationException e) {
e.printStackTrace();
} catch (IllegalAccessException e) {
e.printStackTrace();
} catch (ClassNotFoundException e) {
e.printStackTrace();
} catch (NoSuchMethodException e) {
e.printStackTrace();
} catch (SecurityException e) {
e.printStackTrace();
} catch (IllegalArgumentException e) {
e.printStackTrace();
} catch (InvocationTargetException e) {
e.printStackTrace();
}
return book;
}
public BookInterface getBook() {
return book;
}
public static void main(String[] args) {
}
}
Each of the three classes can hide its own implementation. Now we can use this code for DI:
package com.deepam.implement;
import com.deepam.hidden.Subscriber;
import com.deepam.hidden.FictionBook;
public class CallHiddenImplBook {
public CallHiddenImplBook() {
// TODO Auto-generated constructor stub
}
public void doIt() {
Subscriber ab = new Subscriber();
// injection I
FictionBook bookI = new FictionBook();
bookI.setHeight(30); // cm
bookI.setPages(250);
ab.setBook(bookI); // inject
System.out.println("injection I " + ab.getBook().toString());
// injection II
FictionBook bookII = ((FictionBook) ab.setBook("com.deepam.hidden.FictionBook")).setHeight(5).setPages(108); // inject and set
System.out.println("injection II " + ab.getBook().toString());
}
public static void main(String[] args) {
CallHiddenImplBook kh = new CallHiddenImplBook();
kh.doIt();
}
}
There are many different ways to use dependency injection. It is possible to combine it with a Singleton, etc., but at its core it is still just an association, realized by creating an attribute of an object type inside another object.
The usefulness lies solely in the fact that code which we would otherwise have to write again and again is always prepared and done for us in advance. This is why DI is so closely bound to Inversion of Control (IoC), which means that our program passes control to another running module, which performs the injection of beans into our code. (Each object that can be injected can be marked or considered a Bean.) For example, in Spring this is done by creating and initializing an ApplicationContext container, which does this work for us. In our code we simply create the context and invoke the initialization of the beans. At that moment the injection is done automatically.
I would propose a slightly different, short and precise definition of what Dependency Injection is, focusing on the primary goal, not on the technical means (following along from here):
Dependency Injection is the process of creating the static, stateless
graph of service objects, where each service is parametrised by its
dependencies.
The objects that we create in our applications (regardless if we use Java, C# or other object-oriented language) usually fall into one of two categories: stateless, static and global “service objects” (modules), and stateful, dynamic and local “data objects”.
The module graph - the graph of service objects - is typically created on application startup. This can be done using a container, such as Spring, but can also be done manually, by passing parameters to object constructors. Both ways have their pros and cons, but a framework definitely isn’t necessary to use DI in your application.
One requirement is that the services must be parametrised by their dependencies. What this means exactly depends on the language and approach taken in a given system. Usually, this takes the form of constructor parameters, but using setters is also an option. This also means that the dependencies of a service are hidden (when invoking a service method) from the users of the service.
When to use? I would say whenever the application is large enough that encapsulating logic into separate modules, with a dependency graph between the modules gives a gain in readability and explorability of the code.
Related
I'm currently reading the book Dependency Injection in .NET by Mark Seeman. In this book he recommends the Register, Resolve, Release pattern and also recommends that each of these operations should appear only once in your application's code.
My situation is the following: I'm creating an application that communicates with a PLC (a kind of industrial embedded computer) using a proprietary communication protocol for which the PLC manufacturer provides a library. The library's documentation recommends creating a connection to the PLC and maintaining it open; then, using a timer or a while loop, a request should be periodically sent to read the contents of the PLC's memory, which changes over time.
The values read from the PLC's memory should be used to operate on a database, for which I intend to use Entity Framework. As I understand it, the best option is to create a new DbContext on every execution of the loop in order to avoid a stale cache or concurrency problems (the loop could potentially be executing every few milliseconds for a long time while the connection is kept open the whole time).
My first option was calling Resolve on application construction to create a long-lived object that would be injected with the PLC communication object and would handle loop execution and keep the connection alive. Then, at the beginning of every loop execution I intended to call Resolve again to create a short-lived object that would be injected with a new dbContext and which would perform the operations on the database. However, after reading the advice on that book I'm doubting whether I'm on the right track.
My next idea was to pass a delegate to the long-lived object upon its construction that would allow it to build new instances of the short-lived object (I believe this is the factory pattern), thus removing the dependency on the DI container from my long-lived object. However, this construct still violates the aforementioned pattern.
Which is the right way of handling Dependency Injection in this situation?
My first attempt without DI:
class NaiveAttempt
{
private PlcCommunicationObject plcCommunicationObject;
private Timer repeatedExecutionTimer;
public NaiveAttempt()
{
plcCommunicationObject = new PlcCommunicationObject("192.168.0.10");
plcCommunicationObject.Connect();
repeatedExecutionTimer = new Timer(100); //Read values from PLC every 100ms
repeatedExecutionTimer.Elapsed += (_, __) =>
{
var memoryContents = plcCommunicationObject.ReadMemoryContents();
using (var ctx = new DbContext())
{
// Operate upon database
ctx.SaveChanges();
}
};
}
}
Second attempt using Poor man's DI.
class OneLoopObject
{
private PlcCommunicationObject plcCommunicationObject;
private DbContext dbContext;
public OneLoopObject(PlcCommunicationObject plcCommunicationObject, DbContext dbContext)
{
this.plcCommunicationObject = plcCommunicationObject;
this.dbContext = dbContext;
}
public void Execute()
{
var memoryContents = plcCommunicationObject.ReadMemoryContents();
// Operate upon database
}
}
class LongLivedObject
{
private PlcCommunicationObject plcCommunicationObject;
private Timer repeatedExecutionTimer;
private Func<PlcCommunicationObject, OneLoopObject> oneLoopObjectFactory;
public LongLivedObject(PlcCommunicationObject plcCommunicationObject, Func<PlcCommunicationObject, OneLoopObject> oneLoopObjectFactory)
{
this.plcCommunicationObject = plcCommunicationObject;
this.oneLoopObjectFactory = oneLoopObjectFactory;
this.repeatedExecutionTimer = new Timer(100);
this.repeatedExecutionTimer.Elapsed += (_, __) =>
{
var loopObject = oneLoopObjectFactory(plcCommunicationObject);
loopObject.Execute();
};
}
}
static class Program
{
static void Main()
{
Func<PlcCommunicationObject, OneLoopObject> oneLoopObjectFactory = plc => new OneLoopObject(plc, new DbContext());
var myObject = new LongLivedObject(new PlcCommunicationObject("192.168.1.1"), oneLoopObjectFactory);
Console.ReadLine();
}
}
The first edition states (chapter 3, page 82):
In its pure form, the Register Resolve Release pattern states that you should only make a single method call in each phase [...] an application should only contain a single call to the Resolve method.
This description stems from the idea that your application only contains either one root object (typically when writing a simple console application), or one single logical group of root types, e.g. MVC controllers. With MVC controllers, for instance, you would have a custom Controller Factory, which is provided by the MVC framework with a controller type to build. That factory will, in that case, only have a single call to Resolve while supplying the type.
There are cases, however, where your application has multiple groups of root types. For instance, a web application could have a mix of API Controllers, MVC Controllers and View Components. For each logical group you would likely have a single call to Resolve, and thus multiple calls to Resolve (typically because each root type gets its own factory) in your application.
There are other valid reasons for calling back into the container. For instance, you might want to defer building part of the object graph, to combat the issue of Captive Dependencies. This seems to be your case. Another reason for having an extra resolve is when you use the Mediator pattern to dispatch messages to a certain implementation (or implementations) that can handle that message. In that case your Mediator implementation would typically wrap the container and call Resolve. The Mediator's abstraction would likely be defined in your Domain library, while the Mediator's implementation, with its knowledge of the container, should be defined inside the Composition Root.
The advice of having a single call to Resolve should, therefore, not be taken literally. The actual goal here is to build a single object graph as much as possible in one call, compared to letting classes themselves call back into the container to resolve their dependencies (i.e. the Service Locator anti-pattern).
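As a minimal sketch of that idea (all types here are invented; Simple Injector is used only because it comes up later in this discussion, any container would do), the Composition Root resolves the root type with one call and the container builds the whole graph behind it:
// using SimpleInjector;
public interface IOrderRepository { void Save(string order); }

public class SqlOrderRepository : IOrderRepository
{
    public void Save(string order) { /* write to the database */ }
}

public class OrderController
{
    private readonly IOrderRepository repository;
    public OrderController(IOrderRepository repository) { this.repository = repository; }
    public void Handle(string order) { repository.Save(order); }
}

public static class CompositionRoot
{
    public static void Main()
    {
        var container = new Container();
        container.Register<IOrderRepository, SqlOrderRepository>();
        container.Register<OrderController>();
        container.Verify();

        // One Resolve for the root type; the container builds the entire graph.
        var controller = container.GetInstance<OrderController>();
        controller.Handle("order-42");
    }
}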
The other important point that (the second edition of) the book makes is
Querying for Dependencies, even if through a DI Container, becomes a Service Locator if used incorrectly. When application code (as opposed to infrastructure code) actively queries a service in order to be provided with required Dependencies, then it has become a Service Locator.
A DI Container encapsulated in a Composition Root isn't a Service Locator—it's an infrastructure component.
(Note: this quote is from the second edition; although the first edition contains this information as well, it might be formulated differently.)
So the goal of the RRR pattern is to promote encapsulation of the DI Container within the Composition Root, which is why it insists on having a single call to Resolve.
Do note that while writing the second edition, Mark and I wanted to rewrite the discussion of the RRR pattern. The main reason was that we found the text to be confusing (as your question indicates). However, we eventually ran out of time, so we decided to simply remove that elaborate discussion. We felt that the most important points were already made.
Combining factories with DI is a common solution. There is absolutely nothing wrong with creating and disposing objects dynamically in your program (it's much more difficult and limiting to try to account for every bit of memory you'll need up front).
I found a post by Mark Seemann about the Register, Resolve, Release pattern (RRR) here: http://blog.ploeh.dk/2010/09/29/TheRegisterResolveReleasepattern/
He states that...
The names originate with Castle Windsor terminology, where we:
Register components with the container
Resolve root components
Release components from the container
So the RRR pattern is limited to the DI Container. You do indeed Register and Release components with the container one time in your application. This says nothing about objects not injected through DI, i.e. those objects created dynamically in the normal execution of your program.
I have seen various articles use distinct terminology for the two different types of things you create in your program with relation to DI. There are Service Objects, i.e. those global objects injected via DI into your application. Then there are Data or Value Objects. These are created by your program dynamically as needed and are generally limited to some local scope. Both are perfectly valid.
It sounds like you want to be able to both resolve objects from the container and then release them, all without directly referencing the container.
You can do that by having both a Create and a Release method in your factory interface.
public interface IFooFactory
{
Foo Create();
void Release(Foo created);
}
This allows you to hide references to the container within the implementation of IFooFactory.
You can create your own factory implementation, but for convenience some containers, like Windsor, will create the factory implementation for you.
var container = new WindsorContainer();
container.AddFacility<TypedFactoryFacility>();
container.Register(Component.For<Foo>());
container.Register(
Component.For<IFooFactory>()
.AsFactory()
);
You can inject the factory, call Create to obtain an instance of whatever the factory creates, and when you're done with it, pass that instance to the Release method.
Windsor does this by convention. The method names don't matter. If you call a method of the interface that returns something, it attempts to resolve it. If a method returns void and takes an argument then it tries to release the argument from the container.
Behind the scenes it's roughly the same as if you wrote this:
public class WindsorFooFactory : IFooFactory
{
private readonly IWindsorContainer _container;
public WindsorFooFactory(IWindsorContainer container)
{
_container = container;
}
public Foo Create()
{
return _container.Resolve<Foo>();
}
public void Release(Foo created)
{
_container.Release(created);
}
}
The factory implementation "knows" about the container, but that's okay. Its job is to create objects. The factory interface doesn't mention the container, so classes that depend on the interface aren't coupled to the container. You could create an entirely different implementation of the factory that doesn't use a container. If the object didn't need to be released you could have a Release method that does nothing.
So, in a nutshell, the factory interface is what enables you to follow the resolve/release part of the pattern without directly depending on the container.
Here's another example that shows a little bit more of what you can do with these abstract factories.
Autofac uses Func<> as the factory pattern so you could always do the same:
public class Foo
{
private readonly Func<Bar> _barFactory;
public Foo(Func<Bar> barFactory)
{
_barFactory = barFactory;
}
}
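For completeness, the registration side in Autofac could look roughly like this (assuming Bar is a concrete class, which the example above doesn't show); once Bar itself is registered, Autofac supplies the Func<Bar> relationship automatically:
// using Autofac;
var builder = new ContainerBuilder();
builder.RegisterType<Bar>();
builder.RegisterType<Foo>();

var container = builder.Build();
var foo = container.Resolve<Foo>(); // Foo receives a Func<Bar> it can invoke on demand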
Adding factory interfaces for factories is not something I think anyone should need to do most of the time; it's extra work for little to no reward.
Then you simply need to keep track of which entities are externally owned or DI owned for your release (Dispose in C#).
I'm trying to improve the performance of my IoC container. We are using Unity and SimpleInjector and we have a class with this constructor:
public AuditFacade(
IIocContainer container,
Func<IAuditManager> auditManagerFactory,
Func<ValidatorFactory> validatorCreatorFactory,
IUserContext userContext,
Func<ITenantManager> tenantManagerFactory,
Func<IMonitoringComponent> monitoringComponentFactory)
: base(container, auditManagerFactory, GlobalContext.CurrentTenant,
validatorCreatorFactory, userContext, tenantManagerFactory)
{
_monitoringComponent = new Lazy<IMonitoringComponent>(monitoringComponentFactory);
}
I also have another class with this constructor:
public AuditTenantComponent(Func<IAuditTenantRepository> auditTenantRepository)
{
_auditTenantRepository = new Lazy<IAuditTenantRepository>(auditTenantRepository);
}
I'm seeing that the second one gets resolved in 1 millisecond most of the time, whereas the first one takes 50-60 milliseconds on average. I'm sure the reason the first one is slower is the parameters: it has more of them. But how can I improve the performance of the slower one? Is it the fact that we are using Func<T> as parameters? What can I change if that is causing the slowness?
There is possibly a lot to improve on your current design. These improvements can be placed in five different categories, namely:
Possible abuse of base classes
Use of Service Locator anti-pattern
Use of Ambient Context anti-pattern
Leaky abstractions
Doing too much in injection constructors
Possible abuse of base classes
The general consensus is that you should prefer composition over inheritance. Inheritance is often overused and often adds more complexity compared to using composition. With inheritance the derived class is strongly coupled to the base class implementation. I often see a base class being used as practical utility class containing all sorts of helper methods for cross-cutting concerns and other behavior that some of the derived classes may need.
An often better approach is to remove the base class altogether and inject a service into the implementation (the AuditFacade class in your case) that exposes just the functionality that the implementation needs. Or, in the case of cross-cutting concerns, don't inject that behavior at all, but wrap the implementation with a decorator that extends the class's behavior with cross-cutting concerns.
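As a sketch of that decorator idea (the interfaces and class names here are invented for illustration), a cross-cutting concern such as logging can wrap the real implementation without the implementation knowing about it:
public interface ILogger { void Log(string message); }
public interface IAuditManager { void Audit(string action); }

public class AuditManager : IAuditManager
{
    public void Audit(string action) { /* write the audit record */ }
}

public class LoggingAuditManagerDecorator : IAuditManager
{
    private readonly IAuditManager decoratee;
    private readonly ILogger logger;

    public LoggingAuditManagerDecorator(IAuditManager decoratee, ILogger logger)
    {
        this.decoratee = decoratee;
        this.logger = logger;
    }

    public void Audit(string action)
    {
        logger.Log("Auditing: " + action); // cross-cutting concern
        decoratee.Audit(action);           // delegate to the wrapped implementation
    }
}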
In your case, I think this complication is clearly happening, since six out of seven injected dependencies are not used by the implementation but are only passed on to the base class. In other words, those six dependencies are implementation details of the base class, while the implementation is still forced to know about them. By abstracting (part of) that base class behind a service, you can reduce the number of dependencies that AuditFacade needs to two: the Func<IMonitoringComponent> and the new abstraction. The implementation behind that abstraction will have six constructor dependencies, but AuditFacade (and other implementations) are oblivious to that.
Use of Service Locator anti-pattern
The AuditFacade depends on an IIocContainer abstraction, and this looks very much like an implementation of the Service Locator pattern. Service Locator should be considered an anti-pattern because:
it hides a class' dependencies, causing run-time errors instead of
compile-time errors, as well as making the code more difficult to
maintain because it becomes unclear when you would be introducing a
breaking change.
There are always better alternatives to injecting your container, or an abstraction over your container, into application code. Do note that at times you might want to inject the container into factory implementations, but as long as those are placed inside your Composition Root, there's no harm in that, since Service Locator is about roles, not mechanics.
Use of Ambient Context anti-pattern
The static GlobalContext.CurrentTenant property is an implementation of the Ambient Context anti-pattern. Mark Seemann and I write about this pattern in our book:
The problems with AMBIENT CONTEXT are related to the problems with SERVICE
LOCATOR. The main problems are:
The DEPENDENCY is hidden.
Testing becomes more difficult.
It becomes very hard to change the DEPENDENCY based on its context. [paragraph 5.3.3]
The use in this case is really weird IMO, because you grab the current tenant from some static property from inside your constructor to pass it on to the base class. Why doesn't the base class call that property itself?
But no one should call that static property. The use of such static properties makes your code harder to read and maintain. It makes unit testing harder, and since your code base will usually be littered with calls to such statics, they become hidden dependencies; this has the same downsides as the use of Service Locator.
Leaky abstractions
A Leaky Abstraction is a Dependency Inversion Principle violation, where the abstraction violates the second part of the principle, namely:
B. Abstractions should not depend on details. Details should depend on
abstractions.
Although Lazy<T> is not an abstraction by itself (Lazy<T> is a concrete type), it can become a leaky abstraction when used as a constructor argument. For instance, if you inject a Lazy<IMonitoringComponent> instead of an IMonitoringComponent directly (which is what you are basically doing in your code), the Lazy<IMonitoringComponent> dependency leaks implementation details. This Lazy<IMonitoringComponent> communicates to the consumer that the used IMonitoringComponent implementation is expensive or time-consuming to create. But why should the consumer care about this?
But there are more problems with this. If at some point the used IUserContext implementation becomes costly to create, we must start making sweeping changes throughout the application (a violation of the Open/Closed Principle), because all IUserContext dependencies need to be changed to Lazy<IUserContext> and all consumers of that IUserContext must be changed to use userContext.Value instead. And you'll have to change all your unit tests as well. And what happens if you forget to change one IUserContext reference to Lazy<IUserContext>, or when you accidentally depend on IUserContext when you create a new class? You have a bug in your code, because at that point the user context implementation is created right away, which causes exactly the performance problem that made you reach for Lazy<T> in the first place.
So why are we exactly making sweeping changes to our code base and polluting it with that extra layer of indirection? There is no reason for this. The fact that a dependency is costly to create is an implementation detail. You should hide it behind an abstraction. Here's an example:
public class LazyMonitoringComponentProxy : IMonitoringComponent {
private Lazy<IMonitoringComponent> component;
public LazyMonitoringComponentProxy(Lazy<IMonitoringComponent> component) {
this.component = component;
}
void IMonitoringComponent.MonitoringMethod(string someVar) {
this.component.Value.MonitoringMethod(someVar);
}
}
In this example we've hidden the Lazy<IMonitoringComponent> behind a proxy class. This allows us to replace the original IMonitoringComponent implementation with this LazyMonitoringComponentProxy without having to make any change to the rest of the application. With Simple Injector, we can register this type as follows:
container.Register<IMonitoringComponent>(() => new LazyMonitoringComponentProxy(
new Lazy<IMonitoringComponent>(container.GetInstance<CostlyMonitoringComp>)));
And just as Lazy<T> can be abused as a leaky abstraction, the same holds for Func<T>, especially when you're doing this for performance reasons. When applying DI correctly, there is most of the time no need to inject factory abstractions such as Func<T> into your code.
Do note that if you are injecting Lazy<T> and Func<T> all over the place, you are complicating your code base unnecessarily.
Doing too much in injection constructors
But besides Lazy<T> and Func<T> being leaky abstractions, the fact that you need them a lot is an indication of a problem with your application, because injection constructors should be simple. If constructors take a long time to run, your constructors are doing too much. Constructor logic is often hard to test, and if such a constructor makes a call to the database or requests data from HttpContext, verification of your object graphs becomes much harder, to the point that you might skip verification altogether. Skipping verification of the object graph is a terrible thing to do, because that forces you to click through the complete application to find out whether or not your DI container is configured correctly.
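As a small illustration of what a "simple" injection constructor looks like (the repository and report types here are hypothetical): store the dependencies and nothing more, and let the expensive work happen when a method is invoked:
public interface IReportRepository { IEnumerable<string> LoadCurrentMonth(); }
public class Report { public Report(IEnumerable<string> lines) { /* hold the data */ } }

public class ReportService
{
    private readonly IReportRepository repository;

    public ReportService(IReportRepository repository)
    {
        // Only a null check and a field assignment; no I/O, no queries.
        this.repository = repository ?? throw new ArgumentNullException(nameof(repository));
    }

    public Report BuildMonthlyReport()
    {
        // The costly work happens here, not during construction.
        var data = repository.LoadCurrentMonth();
        return new Report(data);
    }
}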
I hope this gives you some ideas about improving the design of your classes.
You can hook into Simple Injector's pipeline and add profiling, which allows you to spot which types are slow to create. Here's an extension method that you can use:
public struct ProfileData {
public readonly ExpressionBuildingEventArgs Info;
public readonly TimeSpan Elapsed;
public ProfileData(ExpressionBuildingEventArgs info, TimeSpan elapsed) {
this.Info = info;
this.Elapsed = elapsed;
}
}
static void EnableProfiling(Container container, List<ProfileData> profileLog) {
container.ExpressionBuilding += (s, e) => {
Func<Func<object>, object> profilingWrapper = creator => {
var watch = Stopwatch.StartNew();
var instance = creator.Invoke();
profileLog.Add(new ProfileData(e, watch.Elapsed));
return instance;
};
Func<object> instanceCreator =
Expression.Lambda<Func<object>>(e.Expression).Compile();
e.Expression = Expression.Convert(
Expression.Invoke(
Expression.Constant(profilingWrapper),
Expression.Constant(instanceCreator)),
e.KnownImplementationType);
};
}
And you can use this as follows:
var container = new Container();
// TODO: Your registrations here.
// Hook the profiler
List<ProfileData> profileLog = new List<ProfileData>(1000);
// Call this after all registrations.
EnableProfiling(container, profileLog);
// Trigger verification to allow everything to be precompiled.
container.Verify();
profileLog.Clear();
// Resolve a type:
container.GetInstance<AuditFacade>();
// Display resolve time in order of time.
var slowestFirst = profileLog.OrderByDescending(line => line.Elapsed);
foreach (var line in slowestFirst)
{
Console.WriteLine("{0} ms: {1}",
line.Elapsed.TotalMilliseconds,
line.Info.KnownImplementationType.Name);
}
Do note that the shown times include the time it takes to resolve the dependencies, but this should let you see pretty easily which type causes the delay.
There are two important things I want to note about the code given here:
This code will have severely negative impact on the performance of resolving object graphs, and
The code is NOT thread-safe.
So don't use it in your production environment.
Everything you do has a cost associated with it. Typically, more constructor parameters that are resolved recursively take longer than fewer parameters. But you must decide if the cost is ok or too high.
In your case, will the 50 ms cause a bottleneck? Are you only creating one instance, or are you churning them out in a tight loop? Just comparing the 1 ms with the 50 ms might cause you to condemn the slower one, but if the user cannot tell that 50 ms passed and it doesn't cause a problem elsewhere in your app, why jump through hoops to make it faster if you don't know it will ever be needed?
I've been reading up on how to write testable code and stumbled upon the Dependency Injection design pattern.
This design pattern is really easy to understand, and there is really nothing to it: the object asks for its values rather than creating them itself.
However, now that I'm thinking about how this could be used in the application I'm currently working on, I realize that there are some complications to it. Imagine the following example:
public class A{
public string getValue(){
return "abc";
}
}
public class B{
private A a;
public B(A a){
this.a=a;
}
public void someMethod(){
String str = a.getValue();
}
}
Unit testing someMethod() would now be easy, since I can create a mock of A and have getValue() return whatever I want.
Class B's dependency on A is injected through the constructor, but this means that A has to be instantiated outside of class B, so this dependency has moved to another class instead. This would be repeated many layers down, and at some point the instantiation has to be done.
Now to the question: is it true that when using Dependency Injection, you keep passing the dependencies through all these layers? Wouldn't that make the code less readable and more time-consuming to debug? And when you reach the "top" layer, how would you unit test that class?
I hope I understand your question correctly.
Injecting Dependencies
No we don't pass the dependencies through all the layers. We only pass them to layers that directly talk to them. For example:
public class PaymentHandler {
private CustomerRepository customerRepository;
public PaymentHandler(CustomerRepository customerRepository) {
this.customerRepository = customerRepository;
}
public void handlePayment(CustomerId customerId, Money amount) {
Customer customer = customerRepository.findById(customerId);
customer.charge(amount);
}
}
public interface CustomerRepository {
public Customer findById(CustomerId customerId);
}
public class DefaultCustomerRepository implements CustomerRepository {
private Database database;
public DefaultCustomerRepository(Database database) {
this.database = database;
}
public Customer findById(CustomerId customerId) {
Result result = database.executeQuery(...);
// do some logic here
return customer;
}
}
public interface Database {
public Result executeQuery(Query query);
}
PaymentHandler does not know about the Database, it only talks to CustomerRepository. The injection of Database stops at the repository layer.
Readability of the code
When doing manual injection without a framework or libraries to help, we might end up with Factory classes that contain a lot of boilerplate code like return new D(new C(new B(), new A())); which at some point becomes less readable. To solve this problem we tend to use DI frameworks like Guice to avoid writing so many factories.
However, for classes that actually do work / business logic, they should be more readable and understandable as they only talk to their direct collaborators and do the work they need to do.
Unit Testing
I assume that by "Top" layer you mean the PaymentHandler class. In this example, we can create a stub CustomerRepository class and have it return a Customer object that we can check against, then pass the stub to the PaymentHandler to check whether the correct amount is charged.
The general idea is to pass in fake collaborators to control their output so that we can safely assert the behavior of the class under test (in this example the PaymentHandler class).
Why interfaces
As mentioned in the comments above, it is preferable to depend on interfaces instead of concrete classes; they provide better testability (easy to mock/stub) and easier debugging.
Hope this helps.
Well yes, that would mean you have to pass the dependencies through all the layers. However, that's where Inversion of Control containers come in handy. They allow you to register all components (classes) in the system. Then you can ask the IoC container for an instance of class B (in your example), and it will automatically call the correct constructor for you, creating any objects the constructor depends upon (in your case class A).
A nice discussion can be found here: Why do I need an IoC container as opposed to straightforward DI code?
IMO, your question demonstrates that you understand the pattern.
Used correctly, you would have a Composition Root where all dependencies are resolved and injected. Use of an IoC container here would resolve dependencies and pass them down through the layers for you.
This is in direct opposition to the Service Location pattern, which is considered by many as an anti-pattern.
Use of a Composition Root shouldn't make your code less readable/understandable, as well-designed classes with clear and relevant dependencies should be reasonably self-documenting. I'm not sure about unit testing a Composition Root. It has a discrete role, so it should be testable.
There seems to be a stigma on SO regarding use of Singletons. I've never personally bought into it but for the sake of open mindedness I'm attempting to give IoC concepts a try as an alternative because I'm frankly bored with my everyday work and would like to try something different. Forgive me if my interpretation of IoC concepts are incorrect or misguided.
Here's the situation: I'm building a simple HttpListener based web server in a windows service that utilizes a plug-in model to determine how a request should be handled based on the URL requested (just like everyone else that asks about HttpListener). My approach to discovering the plug-ins is to query a configured directory for assemblies decorated with a HttpModuleAssemblyAttribute. These assemblies can contain 0 or more IHttpModule children who in addition are decorated with a HttpModuleAttribute used to specify the module's name, version, human readable description and various other information. Something like:
[HttpModule(/*Some property values that matter */)]
public class SimpleHttpModule : IHttpModule
{
public void Execute(HttpListenerContext context)
{
/* Do Something Special */
}
}
When an HttpModule is discovered I would typically add it to a Dictionary<string, Type> object whose sole purpose is to keep track of which modules we know about. This dictionary would typically live in my variety of a Singleton which takes on the persona of an ACE-style Singleton (a legacy from my C++ days, when I learned about Singletons).
Now what I am trying to implement is something similar using (my understanding of) general IoC concepts. Basically what I have is an AppService collection where IAppService is defined as:
public interface IAppService : IDisposable
{
void Initialize();
}
And my plug-in AppService would look something like:
[AppService("Plugins")]
internal class PluginAppService : IAppService, IDictionary<string, Type>
{
/* Common IDictionary Implementation consisting of something like: */
internal Type Item(string modName)
{
Type modType;
if (!this.TryGetValue(modName, out modType))
return null;
return modType;
}
internal void Initialize()
{
// Find internal and external plug-ins and add them to myself
}
// IDisposable clean up method that attempts to dispose all known plug-ins
}
Then during service OnStart I instantiate an instance of AppServices which is locally known but passed to the constructor of all instantiated plug-ins:
public class AppServices : IDisposable, IDictionary<string, IAppService>
{
/* Simple implementation of IDictionary */
public void Initialization()
{
// Find internal IAppService implementations, instantiate them (passing this as a constructor parameter), initialize them and add them to this.
// Somewhere in there would be something like
Add(appSvcName, appSvc);
}
}
Our once single method implementation becomes an abstract implementation + a constructor on the child:
[HttpModule(/*Some property values that matter */)]
public abstract class HttpModule : IHttpModule
{
protected AppServices appServices = null;
public HttpModule(AppServices services)
{
appServices = services;
}
public abstract void Execute(HttpListenerContext context);
}
[HttpModule(/*Some property values that matter */)]
public class SimpleHttpModule : HttpModule
{
public SimpleHttpModule(AppServices services) : base(services) { }
public override void Execute(HttpListenerContext context)
{
/* Do Something Special */
}
}
And any access to commonly used application services becomes:
var plugType = appServices["Plugins"][plugName];
rather than:
var plugType = PluginManager.Instance[plugName];
Am I missing some basic IoC concept here that would simplify this all or is there really a benefit to all of this additional code? In my world, Singletons are simple creatures that allow code throughout a program to access needed (relatively static) information (in this case types).
To pose the questions more explicitly:
Is this a valid implementation of a Factory Singleton translated to IoC/DI concepts?
If it is, where is the payback/benefit for the additional code required and imposition of a seemingly more clunky API?
IoC is a generic term. Dependency Injection is the more preferred term these days.
Dependency Injection really shines in several circumstances. First, it defines a more testable architecture than solutions that have hard-coded instantiations of dependencies. Singletons are difficult to unit test because they are static, and static data cannot be "unloaded".
Second, Dependency Injection not only instantiates the type you want, but all dependent types. Thus, if class A needs class B, and class B needs classes C and D, then a good DI framework will automatically create all dependencies and control their lifetimes (for instance, making them live for the lifetime of a single web request).
DI Containers can be thought of as generic factories that can instantiate any kind of object (so long as it's properly configured and meets the requirements of the DI framework). So you don't have to write a custom factory.
Like with any generic solution, it's designed to give 90% of the use cases what they need. Sure, you could create a hand-crafted custom linked list data structure every time you need a collection, but 90% of the time a generic one will work just fine. The same is true of DI and custom factories.
IoC becomes more interesting when you get round to writing unit tests. Sorry to answer a question with more questions, but... What would the unit tests look like for both of your implementations? Would you be able to unit test classes that used the PluginManager without looking up assemblies from disk?
EDIT
Just because you can achieve the same functionality with singletons doesn't mean it's as easy to maintain. By using IoC (at least this style with constructors) you're explicitly stating the dependencies an object has. By using singletons that information is hidden within the class. It also makes it harder to replace those dependencies with alternate implementations.
So, with a singleton PluginManager it would be difficult to test your HTTP server with mock plugins, rather than having it look them up from some location on disk. With the IoC version, you could pass in an alternate version of the IAppService that just looks the plugins up from a pre-populated Dictionary.
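For example, a rough sketch of such a test double (reusing the IAppService interface and SimpleHttpModule from the question) could simply inherit Dictionary<string, Type> and pre-populate itself, so no disk access is ever needed:
internal class FakePluginAppService : Dictionary<string, Type>, IAppService
{
    public FakePluginAppService()
    {
        this["Simple"] = typeof(SimpleHttpModule); // pre-populated, no assembly scanning
    }

    public void Initialize() { /* nothing to discover */ }
    public void Dispose() { }
}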
While I'm still not really convinced that IoC/DI is better in this situation, I definitely have seen benefit as the project's scope crept. For things like logging and configurability it most certainly is the right approach.
I look forward to experimenting with it more in future projects.
Pretty new to dependency injection, and I'm trying to figure out if this is an anti-pattern.
Let's say I have 3 assemblies:
Foo.Shared - this has all the interfaces
Foo.Users - references Foo.Shared
Foo.Payment - references Foo.Shared
Foo.Users needs an object that is built within Foo.Payment, and Foo.Payment also needs stuff from Foo.Users. This creates some sort of circular dependency.
I have defined an interface in Foo.Shared that proxies the Dependency Injection framework I'm using (in this case NInject).
public interface IDependencyResolver
{
T Get<T>();
}
In the container application, I have an implementation of this interface:
public class DependencyResolver:IDependencyResolver
{
private readonly IKernel _kernel;
public DependencyResolver(IKernel kernel)
{
_kernel = kernel;
}
public T Get<T>()
{
return _kernel.Get<T>();
}
}
The configuration looks like this:
public class MyModule:StandardModule
{
public override void Load()
{
Bind<IDependencyResolver>().To<DependencyResolver>().WithArgument("kernel", Kernel);
Bind<Foo.Shared.ISomeType>().To<Foo.Payment.SomeType>(); // <- binding to different assembly
...
}
}
This allows me to instantiate a new object of Foo.Payment.SomeType from inside Foo.Users without needing a direct reference:
public class UserAccounts:IUserAccounts
{
private ISomeType _someType;
public UserAccounts(IDependencyResolver dependencyResolver)
{
_someType = dependencyResolver.Get<ISomeType>(); // <- this essentially creates a new instance of Foo.Payment.SomeType
}
}
This makes it unclear what the exact dependencies of the UserAccounts class are in this instance, which makes me think it's not a good practice.
How else can I accomplish this?
Any thoughts?
Although somewhat controversial: yes, this is an anti-pattern. It's known as a Service Locator, and while some consider it a proper design pattern, I consider it an anti-pattern.
The issue is that the usage of, e.g., your UserAccounts class becomes implicit instead of explicit. While the constructor states that it needs an IDependencyResolver, it doesn't state what should go into it. If you pass it an IDependencyResolver that can't resolve ISomeType, it's going to throw.
What's worse is that at later iterations, you may be tempted to resolve some other type from within UserAccounts. It's going to compile just fine, but is likely to throw at run-time if/when the type can't be resolved.
Don't go that route.
From the information given, it's impossible to tell you exactly how you should solve your particular problem with circular dependencies, but I'd suggest that you rethink your design. In many cases, circular references are a symptom of Leaky Abstractions, so perhaps if you remodel your API a bit, it will go away - it's often surprising how small changes are required.
In general, the solution to any problem is adding another layer of indirection. If you truly need to have objects from both libraries collaborating tightly, you can typically introduce an intermediate broker.
In many cases, a Publish/subscribe model works well.
The Mediator pattern may provide an alternative if communication must go both ways.
You can also introduce an Abstract Factory to retrieve the instance you need as you need it, instead of requiring it to be wired up immediately.
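As a rough sketch of that last idea (all names invented, with the interfaces living in Foo.Shared so that neither assembly references the other): both sides depend only on abstractions, and an Abstract Factory lets one side obtain its collaborator when it actually needs it:
namespace Foo.Shared
{
    public interface IPaymentService
    {
        void Charge(Guid userId, decimal amount);
    }

    public interface IUserService
    {
        string GetUserName(Guid userId);
    }

    // Abstract Factory: a consumer can ask for the collaborator as it needs it,
    // instead of requiring it to be wired up immediately.
    public interface IPaymentServiceFactory
    {
        IPaymentService Create();
    }
}

// Foo.Users implements IUserService and depends only on IPaymentService / IPaymentServiceFactory;
// Foo.Payment implements IPaymentService and depends only on IUserService.
// The container application binds each interface to the concrete type from the other assembly.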
I agree with ForeverDebugging - it would be good to eliminate the circular dependency. See if you can separate the classes like this:
Foo.Payment.dll: Classes that deal only with payment, not with users
Foo.Users.dll: Classes that deal only with users, not with payment
Foo.UserPayment.dll: Classes that deal with both payment and users
Then you have one assembly that references two others, but no circle of dependencies.
If you do have a circular dependency between assemblies, it doesn't necessarily mean you have a circular dependency between classes. For example, suppose you have these dependencies:
Foo.Users.UserAccounts depends on Foo.Shared.IPaymentHistory, which is implemented by Foo.Payment.PaymentHistory.
A different payment class, Foo.Payment.PaymentGateway, depends on Foo.Shared.IUserAccounts. IUserAccounts is implemented by Foo.Users.UserAccounts.
Assume there are no other dependencies.
Here there is a circle of assemblies that will depend on each other at runtime in your application (though they don't depend on each other at compile time, since they go through the shared DLL). But there is no circle of classes that depend on each other, at compile time or at runtime.
In this case, you should still be able to use your IoC container normally, without adding an extra level of indirection. In your MyModule, just bind each interface to the appropriate concrete type. Make each class accept its dependencies as arguments to the constructor. When your top-level application code needs an instance of a class, let it ask the IoC container for the class. Let the IoC container worry about finding everything that class depends on.
If you do end up with a circular dependency between classes, you probably need to use property injection (aka setter injection) on one of the classes, instead of constructor injection. I don't use Ninject, but it does support property injection - here is the documentation.
Normally IoC containers use constructor injection - they pass dependencies in to the constructor of the class that depends on them. But this doesn't work when there is a circular dependency. If classes A and B depend on each other, you'd need to pass an instance of class A in to the constructor of class B. But in order to create an A, you need to pass an instance of class B into its constructor. It's a chicken-and-egg problem.
With property injection, you tell your IoC container to first call the constructor, and then set a property on the constructed object. Normally this is used for optional dependencies, such as loggers. But you could also use it to break a circular dependency between two classes that require each other.
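A minimal sketch of that (with invented classes, wired by hand rather than by Ninject): one side takes its dependency through the constructor, the other receives it through a property after both objects exist:
public class OrderProcessor
{
    private readonly InvoiceService invoices;
    public OrderProcessor(InvoiceService invoices) { this.invoices = invoices; }
}

public class InvoiceService
{
    // Property (setter) injection: filled in after construction,
    // which breaks the chicken-and-egg problem described above.
    public OrderProcessor Processor { get; set; }
}

// Wiring:
// var invoices = new InvoiceService();
// var processor = new OrderProcessor(invoices);
// invoices.Processor = processor;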
This isn't pretty though, and I'd definitely recommend refactoring your classes to eliminate circular dependencies.
That does seem a little odd to me. Is it possible to separate the logic which needs both references out into a third assembly to break the dependencies and avoid the risk?