Is a dependency injection container a singleton? - c#

I am learning DI in .NET Core and I find that all the examples use only one instance of ServiceCollection. I wonder whether this instance must be a singleton, but I get confused because we can invoke new. Perhaps, because of my lack of knowledge, I'm missing a case where it really makes sense to have multiple instances of ServiceCollection. Any comments and suggestions are welcome!

It's both more efficient and less dangerous than creating multiple service providers. Creating one instance allows you to have all your services in one place instead of divided over multiple provider instances.
A service provider doesn't have to be a singleton, but having just one makes users of dependency injection frameworks less likely to go down the bad road.
The bad road in this case is splitting up your dependencies and later having to pass around, or know, the right dependency provider to choose from when getting your dependencies.
This makes your code more complicated than it has to be, and it creates no benefit for you or, especially, for other people who join your project later and have to figure out which provider holds the object that can access the database.
Most frameworks make their service providers accessible statically, which also lets you retrieve services and integrate the service provider into your project far more easily. Having multiple instances would make this difficult.
Normally with dependency injection you would, for example, have the dependency passed directly into your constructor.
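As a rough sketch only (the IGreeter, Greeter and GreetingService types are made up for illustration; ServiceCollection, BuildServiceProvider and GetRequiredService are the real .NET Core APIs): you register everything in a single ServiceCollection and let the provider supply constructor arguments.

using Microsoft.Extensions.DependencyInjection;

// One ServiceCollection, one provider, composed in one place.
var services = new ServiceCollection();
services.AddSingleton<IGreeter, Greeter>();
services.AddTransient<GreetingService>();

var provider = services.BuildServiceProvider();
var greetingService = provider.GetRequiredService<GreetingService>();
System.Console.WriteLine(greetingService.Welcome("world"));

public interface IGreeter { string Greet(string name); }
public class Greeter : IGreeter { public string Greet(string name) => $"Hello, {name}"; }

// The consumer receives its dependency through the constructor;
// it never asks a provider for anything itself.
public class GreetingService
{
    private readonly IGreeter _greeter;
    public GreetingService(IGreeter greeter) { _greeter = greeter; }
    public string Welcome(string name) => _greeter.Greet(name);
}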
So in short:
It's convenient
It's efficient
It's easy to read and understand
It's hard to use the wrong way
It can easily be used as a static object

Related

Should a dependency be injected many "levels" up from where it is needed?

I'm writing a C# ASP.NET MVC web application using SOLID principles.
I've written a ViewModelService, which depends on an AccountService and a RepositoryService, so I've injected those two services into the ViewModelService.
The PermissionService depends on the HttpContextBase in order to use GetOwinContext() to get an instance of the UserManager. The controller has an instance of HttpContextBase that needs to be used - so it seems like I have to inject the HttpContextBase instance into the ViewModelService which then injects it into the PermissionService.
So, in terms of code I have:
public ViewModelService(ICategoryRepository categoryrepository, IPasswordRepository passwordrepository, IModelValidatorService modelvalidatorservice)
public CategoryRepository(ApplicationDbContext context, IPermissionService permissionservice)
public AccountService(HttpContextBase httpcontext, IPrincipal securityprincipal)
To instantiate the ViewModelService, I then do this:
new ViewModelService(
    new CategoryRepository(
        new ApplicationDbContext(),
        new PermissionService(
            new AccountService(HttpContext, Thread.CurrentPrincipal),
            new UserPasswordRepository(new ApplicationDbContext()),
            new ApplicationSettingsService())),
    new PasswordRepository(
        new ApplicationDbContext(),
        new PermissionService(
            new AccountService(HttpContext, Thread.CurrentPrincipal),
            new UserPasswordRepository(new ApplicationDbContext()),
            new ApplicationSettingsService())),
    new ModelValidatorService());
Should a dependency be injected from that many "levels" up, or is there a better way?
There's a balance to be struck.
On the one hand, you have the school of thought which would insist that all dependencies must be exposed by the class to be "properly" injected. (This is the school of thought which considers something like a Service Locator to be an anti-pattern.) There's merit to this, but taken to an extreme you find yourself where you are now. Just the right kind of complexity in some composite models, which themselves have composite models, results in aggregate roots which need tons of dependencies injected solely to satisfy dependencies of deeper models.
Personally, I find that this creates coupling in situations like this, which is what DI is intended to resolve, not create.
On the other hand, you have the school of thought which allows for a Service Locator approach, where models can internally invoke some common domain service to resolve a dependency for it. There's merit to this, but taken to an extreme you find that your dependencies are less known and there's a potential for runtime errors if any given dependency can't be resolved. (Basically, you can get errors at a higher level because consuming objects never knew that consumed objects needed something which wasn't provided.)
Personally I've used a service locator approach a lot (mostly because it's a very handy pattern for introducing DI to a legacy domain as part of a larger refactoring exercise, which is a lot of what I do professionally) and have never run into such issues.
There's yin and yang either way. And I think each solution space has its own balance. If you're finding that direct injection is making the system difficult to maintain, it may be worth investigating service location. Conversely, it may also be worth investigating if the overall domain model itself is inherently coupled and this DI issue is simply a symptom of that coupling and not the cause of it.
Yes, the entire intent of Dependency Injection is that you compose big object graphs up-front. You compose object graphs from the Composition Root, which is a place in your application that has the Single Responsibility of composing object graphs. That's not any particular Controller, but a separate class that composes Controllers with their dependencies.
The Composition Root must have access to all types it needs to compose, unless you want to get into late-binding strategies (which I'll generally advise against, unless there's a specific need).
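As a rough sketch only (the CompositionRoot class and its factory method are invented for illustration; in ASP.NET MVC this code would typically live in a custom controller factory or dependency resolver), the graph from the question could be composed in one place like this:

using System.Threading;
using System.Web;

public static class CompositionRoot
{
    // The single place that knows how to turn an HttpContextBase into a
    // fully wired ViewModelService (all other types come from the question).
    public static ViewModelService CreateViewModelService(HttpContextBase httpContext)
    {
        PermissionService CreatePermissionService() =>
            new PermissionService(
                new AccountService(httpContext, Thread.CurrentPrincipal),
                new UserPasswordRepository(new ApplicationDbContext()),
                new ApplicationSettingsService());

        return new ViewModelService(
            new CategoryRepository(new ApplicationDbContext(), CreatePermissionService()),
            new PasswordRepository(new ApplicationDbContext(), CreatePermissionService()),
            new ModelValidatorService());
    }
}

The controllers themselves never see this wiring; they only ask for a ViewModelService in their own constructors.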
I am firmly of the opinion that Service Locators are worse than Dependency Injection. They can be a useful legacy technique, and a useful stepping stone on to something better, but if you are designing something new, then steer clear.
The main reason for this is that Service Locators lead to code that has implicit dependencies, and this makes the code less clear and breaks encapsulation. It can also lead to run time errors instead of compile time errors, and Interacting Tests.
Your example uses Constructor Injection, which is usually the most appropriate form of Dependency Injection:
public ViewModelService(ICategoryRepository categoryRepository, IPasswordRepository passwordRepository, IModelValidatorService modelValidator) { ... }
This has explicit dependencies, which is good. It means that you cannot create the object without passing in its dependencies, and if you try to you will get a compile time error rather than a run time one. It also is good for encapsulation, as just by looking at the interface of the class you know what dependencies it needs.
You could do this using service locators as below:
public ViewModelService()
{
    var categoryRepository = CategoryRepositoryServiceLocator.Instance;
    var passwordRepository = PasswordRepositoryServiceLocator.Instance;
    var modelValidator = ModelValidatorServiceLocator.Instance;
    ...
}
This has implicit dependencies that you cannot see just by looking at the interface; you must also look at the implementation (this breaks encapsulation). You can also forget to set up one of the Service Locators, which will lead to a run time exception.
In your example I think your ViewModelService is good. It references abstractions (ICategoryRepository etc.) and doesn't care about how these abstractions are created. The code you use to create the ViewModelService is a bit ugly, and I would recommend using an Inversion of Control container (such as Castle Windsor, StructureMap etc.) to help here.
In Castle Windsor, you could do something like the following:
container.Register(Classes.FromAssemblyNamed("Repositories").Pick().WithServiceAllInterfaces());
container.Register(Component.For<IAccountService>().ImplementedBy<AccountService>());
container.Register(Component.For<IApplicationDbContext>().ImplementedBy<ApplicationDbContext>());
container.Register(Component.For<IApplicationSettingsService>().ImplementedBy<ApplicationSettingsService>());
var viewModelService = container.Resolve<ViewModelService>();
Make sure to read and understand the "Register, Resolve, Release" and "Composition Root" patterns before you start.
Good luck!

Dependency injection: single class (WCF service) having multiple dependencies (DB repositories) how to handle?

I've read the book "Dependency Injection in .NET" by Mark Seemann and it opened my eyes to many things. But a few questions still remain. Here is one of them:
Let's say we have a WCF service exposing API for working with some database:
public class MyService : IMyService
{
    private ITableARepository _reposA;
    private ITableBRepository _reposB;
    //....

    public IEnumerable<EntityA> GetAEntities()
    {
        return _reposA.GetAll().Select(x => x.ToDTO());
    }

    public IEnumerable<EntityB> GetBEntities()
    {
        return _reposB.GetAll().Select(x => x.ToDTO());
    }
    //...
}
There may be dozens of repositories the service depends on. Some methods use one, some methods use another, and some methods use several repositories.
And my question is: how do I correctly organize the injection of repository dependencies into the service?
Options I see:
Constructor injection. Create a huge constructor with dozens of arguments. Easy to use, but hard to manage the parameter list. Also it's extremely bad for performance, as each unused repository is a waste of resources even if it doesn't use a separate DB connection.
Property injection. Optimizes performance, but usage becomes non-obvious. How should the creator of the service know which properties to initialize for a specific method call? Moreover, this creator should be universal for every method call and be located in the composition root. So the logic there becomes very complicated and error-prone.
Somewhat non-standard (not described in the book) approach: create a repository factory and depend on it instead of on concrete repositories. But the book says factories are very often used incorrectly as a workaround for problems that can be solved much better with proper DI usage. So this approach looks suspicious to me (while achieving both the performance and transparency objectives).
Or is there a conceptual problem with this one-to-many dependency relation?
I assume the answer should differ depending on the service instance context mode (probably when it's Single instance, constructor injection is just fine; for PerCall, option 3 looks best if we ignore the above warning; for PerSession everything depends on the session lifetime: whether it's closer to Single instance or PerCall).
If it really depends on the instance context mode, then the mode becomes hard to change, because such a change requires large changes to the code (to move from constructor injection to property injection or to a repository factory). But the whole design of WCF services makes it simple to change the instance context mode (and it's not so unlikely that I will need to change it). That makes me even more confused about the combination of DI and WCF.
Could anyone explain how this case should be resolved correctly?
Create a huge constructor with dozens of arguments
You should not create classes with a huge number of constructor arguments. This is the constructor over-injection code-smell. Having constructors with a huge amount of arguments is an indication that such class does too much: violates the Single Responsibility Principle. This leads to code that is hard to maintain and extend.
Also it's extremely bad for performance as each unused repository is a waste of resources
Have you measured this? The number of constructor arguments should be largely irrelevant to the performance of the application. This should not cause any noticeable difference in performance. And if it does, it is time to look at the amount of work your constructors do (since injection constructors should be simple), or it's time to switch to a faster DI container if your constructors are simple. Creating a bunch of service classes should normally be blazingly fast.
even if it doesn't use separate DB connection.
The constructors should not open connections in the first place. Again: they should be simple.
Property injection. Optimizes performance
How should the creator of the service know which properties to initialize for a specific method call
The caller can't reliably determine which dependencies are required, since only constructor arguments are typically required. Requiring properties results in temporal coupling and you lose compile-time support.
Since the caller can't determine which properties are needed, all properties need to be injected, and this makes the performance equivalent to constructor injection, which, as I said, should not be a problem at all.
Somewhat non-standard (not described in the book) approach: create a repository factory and depend on it instead of on concrete repositories.
Instead of injecting a repository factory, you could inject a repository provider, a pattern which is better known as the Unit of Work pattern. The unit of work may give access to repositories.
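For illustration, a minimal sketch of such a unit of work (the IUnitOfWork interface and its member names are invented here, not taken from the book; the repository and entity types come from the question):

using System.Collections.Generic;
using System.Linq;

// Hypothetical unit-of-work abstraction that gives access to the repositories.
public interface IUnitOfWork
{
    ITableARepository TableA { get; }
    ITableBRepository TableB { get; }
}

// The service now has a single constructor dependency. A typical implementation
// creates each repository lazily, so repositories a given call doesn't touch
// cost nothing.
public class MyService : IMyService
{
    private readonly IUnitOfWork _unitOfWork;

    public MyService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public IEnumerable<EntityA> GetAEntities()
    {
        return _unitOfWork.TableA.GetAll().Select(x => x.ToDTO());
    }

    public IEnumerable<EntityB> GetBEntities()
    {
        return _unitOfWork.TableB.GetAll().Select(x => x.ToDTO());
    }
}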
I assume the answer should differ depending on service instance context mode
No, since you should never use the WCF 'Single' mode. In most cases the dependencies you inject into your WCF services are not thread-safe and should not outlive a single request. Injecting them into a singleton WCF service causes Captive Dependencies and this is bad because it leads to all kinds of concurrency bugs.
The core problem here seems to be that your WCF service classes are big and violate the Single Responsibility Principle, which makes them hard to create, maintain, and test. Fix this violation by either:
Splitting them up into multiple smaller classes, or
Moving functionality out of them into aggregate services and applying patterns such as the command/handler and query/handler patterns (a sketch follows below).
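A rough sketch of the query/handler idea, using the types from the question (the IQueryHandler interface and the query/handler class names are invented for illustration): each operation becomes a small class that depends only on the repositories it actually needs.

using System.Collections.Generic;
using System.Linq;

// A generic abstraction: a query object in, a result out.
public interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}

// The query itself carries the parameters of the operation (none here).
public class GetAEntitiesQuery { }

// The handler depends only on ITableARepository, not on dozens of repositories.
public class GetAEntitiesQueryHandler : IQueryHandler<GetAEntitiesQuery, IEnumerable<EntityA>>
{
    private readonly ITableARepository _reposA;

    public GetAEntitiesQueryHandler(ITableARepository reposA)
    {
        _reposA = reposA;
    }

    public IEnumerable<EntityA> Handle(GetAEntitiesQuery query)
    {
        return _reposA.GetAll().Select(x => x.ToDTO());
    }
}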

ASP.NET MVC3 Hand coding IoC

Ninject, Spring.NET, Unity, Autofac, and Castle.Windsor are all examples of IoC frameworks that are available. However, I like the learning curve and control of writing my own. It is definitely common practice not to "re-invent the wheel" and just use pre-existing structures. If your comment is along those lines, please be gentle.
Can IoC be implemented without the use of XML? It seems to me most, if not all, of the aforementioned frameworks use XML but I would much rather just write mine in C# instead of using XML to load a .dll. The C# is all converted into one .dll eventually anyway.
From my understanding, if wrong please correct, IoC can be used with DI to make the functionality of classes be based off of their definition and implementation while allowing for a separation of concerns.
This is accomplished in C# using microsoft's library System.ComponentModel.IContainer by having a class which inherits it. A class, such as Product, would have an interface IProduct. A generic constructor would then inherit from IContainer and in the constructor, allow a repository to be passed in, an instantiated object to be passed in, and a function to be passed in. This would allow a controller action to then instantiate an interface (IProduct), instantiate the generic constructor with the current repository instance, and then pass it the interface and function.
Is this setup accurate?
I am still trying to learn more about this topic, and have read the wiki articles on IoC, DI, and read about Castle.Windsor, ninject, Unity, and looked over multiple definitions from the MSDN regarding C# libraries which are used. Any assistance, corrections, or suggestions, are greatly appreciated. Thanks
Can IoC be implemented without the use of XML?
Yes: Ninject, Unity, Castle Windsor and Autofac can be configured without using any XML at all. (I'm not sure about Spring.NET; the last time I used it, version 1.3, it was impossible.)
From my understanding, if wrong please correct, IoC can be used with DI to make the functionality of classes be based off of their definition and implementation while allowing for a separation of concerns.
If under "IoC" you mean "IoC container" then yes, it can be used with DI, but since DI is a particular case of Inversion Of Control your IoC container will be just a container for you dependencies. By just having it your will not magically get any DI-friendly types. It's just a support for managing your inverted dependencies.
Edit
As Mystere Man pointed out in his answer, you need to improve your understanding of IoC containers. So I would recommend reading this wonderful book (by Mark Seemann) about all that stuff.
I think it is a great exercise to start without a DI container. Before focusing on using a DI framework, focus on best patterns and practices. In particular, design all classes around Dependency Injection and make sure your code follows the SOLID principles. Both sound pretty easy, but this takes a shift in mindset and a lot of practice before you get it right (and it is well worth it).
When you do this, and do this well, you will quickly notice that your application evolves in amazing ways. Your code will be testable and extendable in ways that you never imagined before, without your code rotting over time (although it takes constant focus to prevent code from rotting).
Still, when you do all this right (which –again- takes a lot of practice), you will still have one part of your application that, despite your best efforts, will get more complex and harder to maintain, as the application grows. This is the part of the application where you wire all dependencies together: the Composition Root.
And this is where DI containers come in. They have fancy names and compete with each other over features, but their goal can be stated in a single sentence:
The goal of a DI container is to keep the Composition Root maintainable.
Although you can write your own simple DI container to wire up your dependencies, to prevent your Composition Root from becoming a big, fragile, ever-changing ball of mud, the container must have at least one crucial feature: Automatic Constructor Injection (a.k.a. auto-wiring). With auto-wiring, the container looks at the constructor arguments of a type that it needs to create, and it injects the dependencies based on the types of those arguments. This feature makes the difference between a maintenance nightmare and a healthy Composition Root. Although creating your own container that supports auto-wiring isn't that hard (with expression trees it takes about 20 lines of code), the moment you start needing auto-wiring is the moment to start using one of the existing DI frameworks.
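For a feel of what auto-wiring involves, here is a minimal sketch of such a container (the TinyContainer name and API are invented; it uses plain reflection rather than the expression trees mentioned above, and it ignores lifetimes, error handling, and everything else a real container does):

using System;
using System.Collections.Generic;
using System.Linq;

public class TinyContainer
{
    private readonly Dictionary<Type, Type> _registrations = new Dictionary<Type, Type>();

    // Map an abstraction to the concrete type that should be created for it.
    public void Register<TService, TImplementation>() where TImplementation : TService
    {
        _registrations[typeof(TService)] = typeof(TImplementation);
    }

    public TService Resolve<TService>()
    {
        return (TService)Resolve(typeof(TService));
    }

    private object Resolve(Type serviceType)
    {
        Type implementationType;
        if (!_registrations.TryGetValue(serviceType, out implementationType))
        {
            implementationType = serviceType; // assume an unregistered type is concrete
        }

        // Auto-wiring: pick a constructor and recursively resolve each parameter by type.
        var constructor = implementationType.GetConstructors().First();
        var arguments = constructor.GetParameters()
            .Select(parameter => Resolve(parameter.ParameterType))
            .ToArray();

        return constructor.Invoke(arguments);
    }
}

// Usage, reusing the IProduct/Product example from the question:
// var container = new TinyContainer();
// container.Register<IProduct, Product>();
// var product = container.Resolve<IProduct>();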
So in conclusion, if you feel it helps you in the learning experience by doing this by hand, please do, as long as you stick to SOLID, DI, DRY, and TDD. When the burden of changing your Composition Root for each change in the application gets too big (which will be sooner than you might expect), switch to an established framework.
I would suggest using an existing DI container first, to understand how it works from the end user's perspective. Then you can go about re-designing the wheel. My favorite saying is "You have to know the rules before you can break them".
Some of what you've said doesn't make a lot of sense. You don't have to use System.ComponentModel.IContainer in any framework I know of. Maybe Unity (Microsoft's container) requires that, but none of the others do. I'm not familiar with Unity though.

Should I use objects passed to Constructor during Injection?

I understand one of the (maybe best) ways of using inversion of control is by injecting the dependent objects through the constructor (constructor injection).
However, if I make calls to these objects outside of the object using them, I feel like I am violating some sort of rule - is this the case? I don't think there is any way of preventing that from happening, but should I establish a rule that (outside of mocked objects) we should never call methods from these objects?
[EDIT] Here's a simplified example of what I am doing. I have a FileController object that basically is used for cataloging files. It uses a FileDal object that talks to the database to insert/query/update File and Directory tables.
In my real implementation I build the controller by instructing Castle to use a SQL Server version of the DAL; in my unit test I use an in-memory SQLite version of the DAL. However, due to the way the DAL is implemented, I need to call BeginTransaction and Commit around the usage of the FileController so the connection does not get closed and I can later make retrievals and asserts. Why I have to do that is not very important, but it led me to think that calling methods on a DAL object that is used by other clients (controllers) didn't sound kosher. Here's an example:
FileDal fileDal = CastleFactory.CreateFileDal();
fileDal.BeginTransaction();
FileController fileController = new FileController(fileDal);
fileController.CallInterestingMethodThatUsesFileDal();
fileDal.Commit();
It really depends on the type of object - but in general, I'd expect that to be okay.
Indeed, quite often the same dependency will be injected into many objects. For example, if you had an IAuthenticator and several classes needed to use authentication, it would make sense to create a single instance and inject it into each of the dependent classes, assuming they needed the same configuration.
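A rough sketch of that situation (IAuthenticator and the two consumer classes are invented for illustration): one instance is created in the composition code, handed to every class that needs it, and the composition code is free to call it directly as well.

public interface IAuthenticator
{
    bool Authenticate(string userName, string password);
}

public class LoginController
{
    private readonly IAuthenticator _authenticator;
    public LoginController(IAuthenticator authenticator) { _authenticator = authenticator; }
}

public class AdminController
{
    private readonly IAuthenticator _authenticator;
    public AdminController(IAuthenticator authenticator) { _authenticator = authenticator; }
}

// Composition (hypothetical factory method):
// var authenticator = CreateConfiguredAuthenticator();
// var login = new LoginController(authenticator);
// var admin = new AdminController(authenticator);
// authenticator.Authenticate("setup-user", "password"); // calling it directly is fine too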
I typically find that my dependencies are immutable types which are naturally thread-safe. That's not always the case, of course - and in some cases (with some IoC containers, at least) you may have dependencies automatically constructed to live for a particular thread or session - but "service-like" dependencies should generally be okay to call from multiple places and threads.

What is Castle Windsor, and why should I care?

I'm a long-time Windows developer, having cut my teeth on win32 and early COM. I've been working with .NET since 2001, so I'm pretty fluent in C# and the CLR. I'd never heard of Castle Windsor until I started participating in Stack Overflow. I've read the Castle Windsor "Getting Started" guide, but it's not clicking.
Teach this old dog new tricks, and tell me why I should be integrating Castle Windsor into my enterprise apps.
Castle Windsor is an inversion of control tool. There are others like it.
It can give you objects with pre-built and pre-wired dependencies right in there. An entire object graph created via reflection and configuration rather than the "new" operator.
Start here: http://tech.groups.yahoo.com/group/altdotnet/message/10434
Imagine you have an email sending class. EmailSender. Imagine you have another class WorkflowStepper. Inside WorkflowStepper you need to use EmailSender.
You could always say new EmailSender().Send(emailMessage);
but that - the use of new - creates a TIGHT COUPLING that is hard to change. (this is a tiny contrived example after all)
So what if, instead of newing this bad boy up inside WorkflowStepper, you just passed it into the constructor?
So then whoever called it had to new up the EmailSender.
new WorkflowStepper(emailSender).Step()
Imagine you have hundreds of these little classes that only have one responsibility (google SRP).. and you use a few of them in WorkflowStepper:
new WorkflowStepper(emailSender, alertRegistry, databaseConnection).Step()
Imagine not worrying about the details of EmailSender when you are writing WorkflowStepper or AlertRegistry
You just worry about the concern you are working with.
Imagine this whole graph (tree) of objects and dependencies gets wired up at RUN TIME, so that when you do this:
WorkflowStepper stepper = Container.Get<WorkflowStepper>();
you get a real deal WorkflowStepper with all the dependencies automatically filled in where you need them.
There is no new
It just happens - because it knows what needs what.
And you can write fewer defects with better designed, DRY code in a testable and repeatable way.
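To make that concrete, here is a small sketch of the classes from this example, first wired by hand and then resolved from Castle Windsor (Container.Get<WorkflowStepper>() above is just shorthand; Windsor's actual calls are Register and Resolve):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public class EmailSender
{
    public void Send(string emailMessage) { /* send the mail */ }
}

public class WorkflowStepper
{
    private readonly EmailSender _emailSender;

    // The dependency arrives through the constructor instead of being new'ed up inside.
    public WorkflowStepper(EmailSender emailSender)
    {
        _emailSender = emailSender;
    }

    public void Step()
    {
        _emailSender.Send("step completed");
    }
}

// Hand wiring:
// var stepper = new WorkflowStepper(new EmailSender());

// Container wiring: the container works out that WorkflowStepper needs an EmailSender.
// var container = new WindsorContainer();
// container.Register(Component.For<EmailSender>(), Component.For<WorkflowStepper>());
// var stepper = container.Resolve<WorkflowStepper>();
// stepper.Step();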
Mark Seemann wrote an excellent book on DI (Dependency Injection), which is a subset of IoC. He also compares a number of containers. I cannot recommend this book enough. The book's name is "Dependency Injection in .NET": https://www.manning.com/books/dependency-injection-in-dot-net
I think IoC is a stepping stone in the right direction on the path towards greater productivity and enjoyment for the development team (including the PM, BAs and BOs). It helps to establish a separation of concerns between developers and for testing. It gives peace of mind when architecting, which allows for flexibility as frameworks come and go.
The best way to accomplish the goal that IoC (Castle Windsor, Ninject, etc.) takes a stab at is to (1) eliminate politics and (2) remove the need for developers to put on a facade of false understanding when developing. Do these two goals not seem related to IoC? They are :)
Castle Windsor is a Dependency Injection container. It means that with its help you can inject your dependencies and use them without creating them with the new keyword.
e.g. Say you have written a repository or a service and you wish to use it in many places; you first register your service/repository, and then you can start using it after injecting it in the required place.
You can take a look at the tutorial below, which I followed to learn Castle Windsor.
link.
Hope it will help you.
Put simply: imagine you have some class buried in your code that needs a few simple config values to do its job. That means everything that creates an instance of that class needs to get those dependencies, so you usually end up having to refactor loads of classes along the way just to pass a bit of config down to where the instance gets created.
So either lots of classes are needlessly altered, or you bunch the config values into one big config class, which is also bad... or, worst of all, you go Service Locator!
IoC allows your class to get all its dependencies without that hassle, and it also manages the lifetimes of instances more explicitly.
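As a rough sketch of that idea (the IReportSettings type and its members are made up for illustration): the class buried deep in the code declares the config it needs as a constructor dependency, and nothing in between has to carry those values around.

public interface IReportSettings
{
    string OutputDirectory { get; }
    int PageSize { get; }
}

public class ReportWriter
{
    private readonly IReportSettings _settings;

    // The config arrives here directly; the classes that create ReportWriter
    // (via the container) never need to know about OutputDirectory or PageSize.
    public ReportWriter(IReportSettings settings)
    {
        _settings = settings;
    }

    public void Write(string reportName)
    {
        // ...write the report to _settings.OutputDirectory using _settings.PageSize...
    }
}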
