I have a project where Ninject is used as the IoC container. My concern is that a lot of classes have constructors like this:
[Inject]
public HomeController(
    UserManager userManager, RoleManager roleManager, BlahblahManager blahblahManager)
{
    _userManager = userManager;
    _roleManager = roleManager;
    _blahblahManager = blahblahManager;
}
What if I don't want to have all instances of these classes at once?
Wrapping all of these classes in Lazy<T> and passing those to the constructor is not exactly what I need. The T instances are not created yet, but the Lazy<T> instances are already stored in memory.
My colleague suggests using the Factory pattern to gain control over all instantiations, but I'm not convinced that IoC has such a glaring design flaw.
Is there a workaround for this situation, or does IoC really have such a big defect in its design? Maybe I should use another IoC container?
Any suggestions?
Seems to me that you are doing premature optimization: don't do it.
The constructors of your services should do nothing more than store the dependencies they take in private fields. In that case the creation of such an object is really lightweight. Don't forget that object creation in .NET is really fast. In most cases, from a performance perspective, it just doesn't matter whether those dependencies get injected or not, especially compared to the number of objects the rest of your application (and the frameworks you use) are spitting out. The real cost comes when you start using web services, databases, or the file system (I/O in general), because they cause a much bigger delay.
If the creation is really expensive, you should normally hide the creation behind a Virtual Proxy instead of injecting a Lazy<T> into every consumer, since this allows common application code to stay oblivious to the fact that there is a mechanism that delays creation (both your application code and test code become more complex when you inject Lazy<T> directly).
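For illustration, here is a minimal sketch of such a Virtual Proxy; IHeavyService and its expensive implementation are hypothetical names, not from the original post:

using System;

public interface IHeavyService
{
    void DoWork();
}

// The proxy implements the same abstraction as the expensive service, so
// consumers keep a plain constructor dependency on IHeavyService and never
// learn that creation is deferred.
public class HeavyServiceProxy : IHeavyService
{
    private readonly Lazy<IHeavyService> _heavy;

    public HeavyServiceProxy(Func<IHeavyService> factory)
    {
        _heavy = new Lazy<IHeavyService>(factory);
    }

    public void DoWork()
    {
        // The expensive instance is created here, on the first call only.
        _heavy.Value.DoWork();
    }
}

The proxy would be registered against IHeavyService in the container, so consuming classes stay completely unaware of the deferred creation.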
Chapter 8 of Dependency Injection Principles, Practices, and Patterns contains a more detailed discussion of Lazy<T> and Virtual Proxies.
However, a Lazy<T> just consumes 20 bytes of memory (and another 24 bytes for its wrapped Func<T>, assuming a 32bit process), and the creation of a Lazy<T> instance is practically free. So there is no need to worry about this, except when you’re in an environment with really tight memory constraints.
And if memory consumption is a problem, try registering services with a lifetime longer than transient: per request, per web request, or singleton. I would even say that when you're in an environment where creating new objects is a problem, you should probably use only singleton services (but it's unlikely that you're working in such an environment, since you're building a web app).
Do note that Ninject is one of the slower DI libraries for .NET. If that's troubling you, switch to a faster container. Some containers have performance that comes close to newing up object graphs by hand.
But by all means, do profile this; many developers switch DI libraries for the wrong reasons.
Do note that the use of Lazy<T> as a dependency is a leaky abstraction (a violation of the Dependency Inversion Principle). Please read this answer for more information.
Steven is correct in saying that this looks like premature optimization. The construction of these objects is very fast and is rarely the bottleneck.
However, using Lazy<T> to express a dependency you don't need right away is a common pattern in Dependency Injection frameworks. Autofac is one such container that has built-in support for various wrapping types. I'm sure there is an extension for Ninject as well; maybe take a look at this one, Ninject Lazy.
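As a hedged illustration of that built-in support in Autofac (the service types below are made up): a constructor can take Lazy<IExpensiveService> directly, and the underlying instance is not created until .Value is first touched.

using System;
using Autofac;

public interface IExpensiveService { void Run(); }
public class ExpensiveService : IExpensiveService { public void Run() { } }

public class Consumer
{
    private readonly Lazy<IExpensiveService> _service;

    // Autofac resolves Lazy<T> automatically for any registered T,
    // deferring construction of the wrapped instance.
    public Consumer(Lazy<IExpensiveService> service)
    {
        _service = service;
    }

    public void UseIt()
    {
        _service.Value.Run(); // ExpensiveService is created here, on first use
    }
}

public static class Program
{
    public static void Main()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<ExpensiveService>().As<IExpensiveService>();
        builder.RegisterType<Consumer>();

        using (var container = builder.Build())
        {
            container.Resolve<Consumer>().UseIt();
        }
    }
}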
You can also inject into an action method with the syntax below. (I'm not sure exactly in which version this was introduced.)
Constructor injection is best practice, but I had to do this once deliberately, when I had a service that was doing some expensive initialization - accidentally, in fact - but it wasn't discovered for a while, and it was easiest to move the dependency to the one method that needed it.
This can make for cleaner code if you only need to access a service from one action method - but bear in mind that if you inject it into the method, you'll have to pass it around everywhere, because it will no longer be on this. And definitely don't go assigning to this.service in an action method - that's horrible.
public IActionResult About([FromServices] IDateTime dateTime)
{
ViewData["Message"] = "Currently on the server the time is " + dateTime.Now;
return View();
}
https://learn.microsoft.com/en-us/aspnet/core/mvc/controllers/dependency-injection?view=aspnetcore-2.2#action-injection-with-fromservices
In my mid-size project I used static classes for repositories, services, etc., and it actually worked very well, even if most programmers would expect the opposite. My codebase was very compact, clean, and easy to understand. Now I have tried to rewrite everything to use IoC (Inversion of Control) and I am absolutely disappointed. I have to manually initialize dozens of dependencies in every class, controller, etc., add more projects for interfaces, and so on. I really don't see any benefits in my project; it seems to cause more problems than it solves. I found the following drawbacks in IoC/DI:
much bigger codesize
ravioli-code instead of spaghetti-code
slower performance, need to initialize all dependencies in constructor even if the method I want to call has only one dependency
harder to understand when no IDE is used
some errors are pushed to run-time
adding additional dependency (DI framework itself)
new staff have to learn DI first in order to work with it
a lot of boilerplate code, which is bad for creative people (for example copy instances from constructor to properties...)
We do not test the entire codebase, but only certain methods and use real database. So, should Dependency Injection be avoided when no mocking is required for testing?
The majority of your concerns seem to boil down to either misuse or misunderstanding.
much bigger codesize
This is usually a result of properly respecting both the Single Responsibility Principle and the Interface Segregation Principle. Is it drastically bigger? I suspect not as large as you claim. However, what it is doing is most likely boiling down classes to specific functionality, rather than having "catch-all" classes that do anything and everything. In most cases this is a sign of healthy separation of concerns, not an issue.
ravioli-code instead of spaghetti-code
Once again, this is most likely causing you to think in stacks instead of hard-to-see dependencies. I think this is a great benefit since it leads to proper abstraction and encapsulation.
slower performance
Just use a fast container. My favorites are SimpleInjector and LightInject.
need to initialize all dependencies in constructor even
if the method I want to call has only one dependency
Once again, this is a sign that you are violating the Single Responsibility Principle. This is a good thing because it is forcing you to logically think through your architecture rather than adding willy-nilly.
harder to understand when no IDE is used
some errors are pushed to run-time
If you are STILL not using an IDE, shame on you. There's no good argument for it with modern machines. In addition, some containers (SimpleInjector) will validate on first run if you so choose. You can easily detect this with a simple unit test.
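For example, Simple Injector can verify the whole configuration up front; a minimal sketch, with hypothetical registrations:

using SimpleInjector;

public static class Bootstrapper
{
    public static Container BuildContainer()
    {
        var container = new Container();
        container.Register<IUserService, UserService>();   // hypothetical types
        container.Register<IOrderService, OrderService>();

        // Throws immediately if anything is mis-wired, turning would-be
        // runtime errors into startup (or unit test) failures.
        container.Verify();
        return container;
    }
}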
adding additional dependency (DI framework itself)
You have to pick and choose your battles. If the cost of learning a new framework is less than the cost of maintaining spaghetti code (and I suspect it will be), then the cost is justified.
new staff have to learn DI first in order to work with it
If we shy away from new patterns, we never grow. I think of this as an opportunity to enrich and grow your team, not a way to hurt them. In addition, the tradeoff is learning the spaghetti code which might be far more difficult than picking up an industry-wide pattern.
a lot of boilerplate code which is bad for creative people (for example copy instances from constructor to properties...)
This is plain wrong. Mandatory dependencies should always be passed in via the constructor. Only optional dependencies should be set via properties, and that should only be done in very specific circumstances since oftentimes it is violating the Single Responsibility Principle.
We do not test the entire codebase, but only certain methods and use real database. So, should Dependency Injection be avoided when no mocking is required for testing?
I think this might be the biggest misconception of all. Dependency Injection isn't JUST for making testing easier. It is so you can glance at the signature of a class constructor and IMMEDIATELY know what is required to make that class tick. This is impossible with static classes since classes can call both up and down the stack whenever they like without rhyme or reason. Your goal should be to add consistency, clarity, and distinction to your code. This is the single biggest reason to use DI and it is why I highly recommend you revisit it.
Although IoC/DI is not some silver bullet that works in all cases, it is possible that you didn't apply it correctly. The set of principles behind Dependency Injection takes time to master, or at least it sure did for me. When applied right, it can bring (among others) the following benefits:
Improved testability
Improved flexibility
Improved maintainability
Improved parallel development
From your question, I can already extract some things that might have gone wrong in your case:
I have to manually initialize dozens of dependencies in every class
This implies that each class you create is responsible for creating the dependencies it requires. This is an anti-pattern known as Control Freak. A class should not new up its dependencies itself. You might even have applied the Service Locator anti-pattern, where your class requests its dependencies by calling the container (or an abstraction that represents the container). A class should simply define the dependencies it requires as constructor arguments.
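A quick sketch of the difference (OrderProcessor and its repository types are illustrative names, not from the original post):

// Control Freak (anti-pattern): the class creates its own dependency,
// hard-wiring itself to one concrete implementation.
public class ControlFreakOrderProcessor
{
    private readonly SqlOrderRepository _repository = new SqlOrderRepository();
}

// Constructor injection: the dependency is declared as an abstraction
// and supplied from the outside.
public class OrderProcessor
{
    private readonly IOrderRepository _repository;

    public OrderProcessor(IOrderRepository repository)
    {
        _repository = repository;
    }
}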
dozens of dependencies
This statement implies that you are violating the Single Responsibility Principle. This is actually not tied to IoC/DI; your old code probably already violated the Single Responsibility Principle, making it hard for other developers to understand and maintain. It's often hard for the original author to see this, since the thing you wrote tends to fit nicely in your own head. Testing classes that violate the SRP is even harder. A class should have around half a dozen dependencies at most.
add more projects for interfaces and so on
This implies that you are violating the Reused Abstractions Principle. In general, the majority of components/classes in your application should be covered by a dozen abstractions or so. For instance, all classes that implement some use case probably deserve one single (generic) abstraction, and classes that implement queries deserve one as well. For the systems that I write, 80% to 95% of my components (classes that contain the application's behavior) are covered by 5 to 12 (mostly generic) abstractions. Most of the time you don't need to create a new project solely for the interfaces.
Most of the time I place those interfaces in the root of the same project.
much bigger codesize
The amount of code you write will initially not be very different. The practice of Dependency Injection, however, only works great when you apply SOLID as well, and SOLID promotes small, focused classes with a single responsibility. This means you will have many small classes that are easy to understand and easy to compose into flexible systems. And don't forget: we shouldn't strive to write less code, but rather more maintainable code.
However, with a good SOLID design and the right abstractions in place, I have actually experienced having to write much less code than before. For instance, certain cross-cutting concerns (such as logging, audit trailing, and authorization) can be applied by writing just a few lines of code in the infrastructure layer of the application, instead of spreading them throughout the complete application. It even led me to be able to do things that weren't feasible before, because they previously forced sweeping changes throughout the entire code base, which was so time consuming that management wouldn't allow it.
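As a sketch of what "a few lines in the infrastructure layer" can look like, here is a hypothetical generic decorator that applies logging around every command handler in one place (ICommandHandler and ILogger are illustrative abstractions, not from the original post):

public interface ILogger
{
    void Log(string message);
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// One registration of this decorator in the composition root adds logging
// to every ICommandHandler<T> in the application.
public class LoggingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _decoratee;
    private readonly ILogger _logger;

    public LoggingCommandHandlerDecorator(
        ICommandHandler<TCommand> decoratee, ILogger logger)
    {
        _decoratee = decoratee;
        _logger = logger;
    }

    public void Handle(TCommand command)
    {
        _logger.Log("Handling " + typeof(TCommand).Name);
        _decoratee.Handle(command);
    }
}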
ravioli-code instead of spaghetti-code
harder to understand when no IDE is used
This is kind of true. Dependency Injection promotes classes that are decoupled from one another, which can sometimes make it harder to browse a code base, since a class usually depends on an abstraction instead of a concrete class. In my experience, the flexibility DI gives me outweighs the cost of finding the implementation by far. With Visual Studio 2015 I can simply press CTRL + F12 to find the implementations of an interface. If there is just one implementation, Visual Studio jumps right to it.
slower performance
This is not true. Performance doesn't have to be any different than in a code base with only static method calls. You, however, chose to give your classes a transient lifestyle, which means you new up instances all over the place. In my last applications I create all my classes just once per application, which gives roughly the same performance as only having static method calls, but with the benefit of the application being very flexible and maintainable. And note that even if you decide to new up complete object graphs for each (web) request, the cost will most likely be orders of magnitude lower than any I/O (database, file system, and web service calls) performed during that request, even with the slowest DI containers.
some errors are pushed to run-time
adding additional dependency (DI framework itself)
These issues both imply the use of a DI library. DI libraries do object composition at runtime, but a DI library is not a required tool when practicing Dependency Injection. Small applications can benefit from Dependency Injection without a tool; this practice is called Pure DI. Your application might not benefit from using a DI container, but most applications benefit from Dependency Injection (when used correctly) as a practice. Again: tools are optional, writing maintainable code isn't.
But even if you use a DI library, some libraries have tools built in that allow you to verify and diagnose your configuration. They won't give you compile-time support, but they let you run this analysis either when the application starts up or in a unit test. This prevents you from having to do a full regression test of the application just to verify that your container is wired correctly. My advice is to pick a DI container that helps you detect these configuration errors.
new staff have to learn DI first in order to work with it
This is kind of true, but Dependency Injection itself isn't actually hard to learn. What is actually hard to learn is applying the SOLID principles correctly, and you need to learn that anyway if you want to write applications that will be maintained by more than one developer for a considerable period of time. I would rather invest in teaching the developers on my team to write SOLID code than just let them crank out code; that would surely cause a maintenance hell later on.
a lot of boilerplate code
There is some boilerplate code when we look at code written in C# 6, but this isn't actually that bad, especially when you consider the advantages it brings. And future versions of C# will remove the boilerplate that is mainly caused by having to define constructors that null-check their arguments and assign them to private variables. C# 7 or 8 will surely fix this once record types and non-nullable reference types are introduced.
which is bad for creative people
I'm sorry, but this argument is plain bullshit. I've seen it used over and over again as an excuse to write bad code by developers who didn't want to learn about design patterns and software principles and practices. Being creative is no excuse for writing code that nobody else can understand or code that is impossible to test. We need to apply accepted patterns and practices, and within those boundaries there is plenty of room to be creative while writing good code. Writing code is not an art; it's a craft.
Like I said, DI is not appropriate in all cases, and the practices around it take time to master. I can recommend the book Dependency Injection in .NET by Mark Seemann; it will give you many answers and a good sense of how and when to apply DI, and when not to.
Be warned: I hate IoC.
There are many great answers here which are comforting. The main benefits according to Steven (very strong answer) are:
Improved testability
Improved flexibility
Improved maintainability
Improved scalability
My experiences are very different, though; here they are for some balance:
(Bonus) Stupid Repository Pattern
Too often, this is included along with IoC. The repository pattern should only be used to access external data, and where interchangeability is a core expectation.
When you use this with Entity Framework, you disable all the power of Entity Framework; the same happens with Service Layers.
E.g. calling:
var employees = peopleService.GetPeople(false, false, true, true); //Terrible
It should be:
var employees = db.People.ActiveOnly().ToViewModel();
In this case using extension methods.
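A rough sketch of what those extension methods could look like (Person and PersonViewModel are assumed types, not from the original post):

using System.Collections.Generic;
using System.Linq;

public static class PersonQueryExtensions
{
    // Keeps the EF IQueryable intact, so the filter runs in the database.
    public static IQueryable<Person> ActiveOnly(this IQueryable<Person> people)
    {
        return people.Where(p => p.IsActive);
    }

    // Projects to a view model and materializes the query.
    public static List<PersonViewModel> ToViewModel(this IQueryable<Person> people)
    {
        return people
            .Select(p => new PersonViewModel { Name = p.Name })
            .ToList();
    }
}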
Who needs flexibility?
If you have no plans to change service implementations, you don't need it. If you think you'll have more than one implementation in the future, perhaps add IoC then, and only for that part.
But "Testability"!
Entity Framework (and probably other ORMs too) allows you to change the connection string to point to an in-memory database. Granted, that's only available starting with EF7. However, it can simply be a new (proper) test database in a staging environment.
Do you have other special test resources and service points? In this day and age, they're probably different WebService URI endpoints, which can also be configured in App.Config / Web.Config.
Automated Tests make your code maintainable
TDD - If it's a Web Application, use Jasmine or Selenium and have automated behaviour tests. This tests everything all the way to the user. It's an investment over time, starting by covering critical features and functions.
DevOps/SysOps - Maintain scripts for provisioning your whole environment (this is also best practice), spin up a staging environment, and run all the tests. You can also clone your production environment and run your tests there.
Don't make "maintainable" and "testable" your excuse for choosing IoC. Start with those requirements and find the best ways to meet them.
Scalability - in what way?
(I probably need to read the book)
For coder scalability, distributed version control is the norm (although I hate merging).
For human resource scalability, you shouldn't be wasting days designing extra abstract layers for your project.
For production concurrent user scalability, you should be building, testing, then improving.
For server throughput scalability, you need to think a lot higher-level than IoC. Are you going to run a server on the customer LAN? Can you replicate your data? Are you replicating at the database level or application level? Is offline access important while mobile? These are substantial architecture questions, where IoC is rarely the answer.
Try F12
If you're using an IDE (which you should be doing), such as Visual Studio Community Edition, then you'll know how handy F12 can be for navigating around code.
With IoC you'll be taken to the interface, and then you'll need to find all references to that interface. Only one extra step, but for a tool that's used so much, it frustrates me.
Steven is on the ball
With Visual Studio 2015 I can simply do CTRL + F12 to find the implementations of an interface.
Yes, but you then have to trawl through a list of both usages and the declaration. (Actually, I think in the latest VS the declaration is listed separately, but it's still an extra mouse click, taking your hands away from the keyboard.) And I should say this is a limitation of Visual Studio: it's not able to take you directly to a sole interface implementation.
There are many 'textbook' arguments in favor of using IoC, but in my personal experience, the gains are/were:
Possibility to test only parts of the project, mocking some other parts. For example, if you have a component returning configuration from the DB, it's easy to mock it so that your test can work without a real DB. With static classes this is not possible.
Better visibility and control of dependencies. With static classes it's very easy to add some dependencies without even noticing, which can create problems later. With IoC this is more explicit and visible.
More explicit initialization order. With static classes this can often be a black box, and there can be latent problems due to circular usage.
The only inconvenience for me was that by putting everything behind interfaces it's not possible to navigate directly from the usage to the implementation (F12).
However, it is the developers of a project who can judge best the pros and cons in the particular case.
Was there a reason why you didn't choose to use an IOC Library (StructureMap, Ninject, Autofac, etc)?
Using any of these would have made your life much easier.
Although David L has already made an excellent set of commentaries on your points, I'll add my own as well.
Much bigger codesize
I am not sure how you ended up with a larger codebase; the typical setup for an IoC library is pretty small, and since you are defining your invariants (dependencies) in the class constructors, you are also removing some code (i.e. the "new xyz()" stuff) that you no longer need.
Ravioli-code instead of spaghetti-code
I happen to quite like ravioli :)
Slower performance, need to initialize all dependencies in constructor even if the method I want to call has only one dependency
If you are doing this then you are not really using Dependency Injection at all. You should be receiving ready-made, fully loaded object graphs via the dependency arguments declared in the constructor parameters of the class itself - not creating them in the constructor!
Most modern IOC libraries are ridiculously fast, and will never, ever be a performance problem.
Here's a good video that proves the point.
Harder to understand when no IDE is used
That's true, but it also means you can take the opportunity to think in terms of abstractions. For example, you can look at a piece of code:
public class Something
{
    private readonly IFrobber _frobber;

    public Something(IFrobber frobber)
    {
        _frobber = frobber;
    }

    public void LetsFrobSomething(Thing theThing)
    {
        _frobber.Frob(theThing);
    }
}
When you are looking at this code and trying to figure out whether it works, or whether it is the root cause of a problem, you can ignore the actual IFrobber implementation; it just represents the abstract capability to Frob something, and you don't need to mentally carry along how any particular Frobber might do its work. You can focus on making sure that this class does what it's supposed to do - namely, delegating some work to a Frobber of some kind.
Note also that you don't even need to use interfaces here; you can go ahead and inject concrete implementations as well. However, that tends to violate the Dependency Inversion Principle (which is only tangentially related to the DI we are talking about here), because it forces the class to depend on a concretion as opposed to an abstraction.
Some errors are pushed to run-time
No more and no less than they would be if you manually constructed the graphs in the constructor.
Adding additional dependency (DI framework itself)
That is also true, but most IoC libraries are pretty small and unobtrusive, and at some point you have to decide whether the tradeoff of a slightly larger production artifact is worth it (it really is).
New staff have to learn DI first in order to work with it
That isn't really any different than would be the case with any new technology :) Learning to use an IOC library tends to open the mind to other possibilities like TDD, the SOLID principles and so forth, which is never a bad thing!
A lot of boilerplate code, which is bad for creative people (for example copy instances from constructor to properties...)
I don't understand this one; I don't see how you might end up with much boilerplate code. I wouldn't count storing the given dependencies in private readonly members as boilerplate worth talking about - bearing in mind that if you have more than 3 or 4 dependencies per class, you are likely in violation of the SRP and should rethink your design.
Finally, if you are not convinced by any of the arguments put forth here, I would still recommend you read Mark Seemann's "Dependency Injection in .NET" (or indeed anything else he has to say on DI, which you can find on his blog).
I promise you will learn some useful things, and I can tell you it changed the way I write software for the better.
If you have to initialise dependencies manually in the code, you're doing something wrong. The general pattern for IoC is constructor injection or, possibly, property injection. A class or controller shouldn't know about the DI container at all.
Generally, all you have to do is:
Configure the container, e.g. Interface = Class in singleton scope
Use it, like Controller(Interface interface) {}
Benefit from controlling all dependencies in one place
I don't see any boilerplate code or slower performance or anything else you described. I can't really imagine how to write a more or less complex app without it.
But generally, you need to decide what is more important: to please "creative people" or to build a maintainable and robust app.
By the way, to create a property or field from a constructor parameter you can press Alt+Enter in R# and it will do all the work for you.
I'm writing a C# ASP.NET MVC web application using SOLID principles.
I've written a ViewModelService, which depends on an AccountService and a RepositoryService, so I've injected those two services into the ViewModelService.
The PermissionService depends on the HttpContextBase in order to use GetOwinContext() to get an instance of the UserManager. The controller has an instance of HttpContextBase that needs to be used - so it seems like I have to inject the HttpContextBase instance into the ViewModelService which then injects it into the PermissionService.
So, in terms of code I have:
public ViewModelService
public CategoryRepository(ApplicationDbContext context, IPermissionService permissionservice)
public AccountService(HttpContextBase httpcontext, IPrincipal securityprincipal)
To instantiate the ViewModelService, I then do this:
new ViewModelService(
new CategoryRepository(
new ApplicationDbContext(),
new PermissionService(
new AccountService(HttpContext, Thread.CurrentPrincipal),
new UserPasswordRepository(new ApplicationDbContext()),
new ApplicationSettingsService())),
new PasswordRepository(
new ApplicationDbContext(),
new PermissionService(
new AccountService(HttpContext, Thread.CurrentPrincipal),
new UserPasswordRepository(new ApplicationDbContext()),
new ApplicationSettingsService())),
new ModelValidatorService());
Should a dependency be injected from that many "levels" up, or is there a better way?
There's a balance to be struck.
On the one hand, you have the school of thought which would insist that all dependencies must be exposed by the class to be "properly" injected. (This is the school of thought which considers something like a Service Locator to be an anti-pattern.) There's merit to this, but taken to an extreme you find yourself where you are now. Just the right kind of complexity in some composite models, which themselves have composite models, results in aggregate roots which need tons of dependencies injected solely to satisfy dependencies of deeper models.
Personally I find that this creates coupling in situations like this. Which is what DI is intended to resolve, not to create.
On the other hand, you have the school of thought which allows for a Service Locator approach, where models can internally invoke some common domain service to resolve a dependency for it. There's merit to this, but taken to an extreme you find that your dependencies are less known and there's a potential for runtime errors if any given dependency can't be resolved. (Basically, you can get errors at a higher level because consuming objects never knew that consumed objects needed something which wasn't provided.)
Personally I've used a service locator approach a lot (mostly because it's a very handy pattern for introducing DI to a legacy domain as part of a larger refactoring exercise, which is a lot of what I do professionally) and have never run into such issues.
There's yin and yang either way. And I think each solution space has its own balance. If you're finding that direct injection is making the system difficult to maintain, it may be worth investigating service location. Conversely, it may also be worth investigating if the overall domain model itself is inherently coupled and this DI issue is simply a symptom of that coupling and not the cause of it.
Yes, the entire intent of Dependency Injection is that you compose big object graphs up-front. You compose object graphs from the Composition Root, which is a place in your application that has the Single Responsibility of composing object graphs. That's not any particular Controller, but a separate class that composes Controllers with their dependencies.
The Composition Root must have access to all types it needs to compose, unless you want to get into late-binding strategies (which I'll generally advise against, unless there's a specific need).
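For instance, a minimal sketch of such a Composition Root using the types from the question (it shares one PermissionService between both repositories, a simplification of the original snippet, and whether to share or duplicate the DbContext instances is a design decision glossed over here):

using System.Threading;
using System.Web;

public static class CompositionRoot
{
    // All wiring lives in this one place; controllers and services
    // never construct their own dependencies.
    public static ViewModelService CreateViewModelService(HttpContextBase httpContext)
    {
        var permissionService = new PermissionService(
            new AccountService(httpContext, Thread.CurrentPrincipal),
            new UserPasswordRepository(new ApplicationDbContext()),
            new ApplicationSettingsService());

        return new ViewModelService(
            new CategoryRepository(new ApplicationDbContext(), permissionService),
            new PasswordRepository(new ApplicationDbContext(), permissionService),
            new ModelValidatorService());
    }
}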
I am firmly of the opinion that Service Locators are worse than Dependency Injection. They can be a useful legacy technique, and a useful stepping stone on to something better, but if you are designing something new, then steer clear.
The main reason for this is that Service Locators lead to code that has implicit dependencies, and this makes the code less clear and breaks encapsulation. It can also lead to run time errors instead of compile time errors, and Interacting Tests.
Your example uses Constructor Injection, which is usually the most appropriate form of Dependency Injection:
public ViewModelService(ICategoryRepository categoryRepository, IPasswordRepository passwordRepository, IModelValidatorService modelValidator) { ... }
This has explicit dependencies, which is good. It means that you cannot create the object without passing in its dependencies, and if you try to you will get a compile time error rather than a run time one. It also is good for encapsulation, as just by looking at the interface of the class you know what dependencies it needs.
You could do this using service locators as below:
public ViewModelService()
{
var categoryRepository = CategoryRepositoryServiceLocator.Instance;
var passwordRepository = PasswordRepositoryServiceLocator.Instance;
var modelValidator = ModelValidatorServiceLocator.Instance;
...
}
This has implicit dependencies that you cannot tell just by looking at the interface; you must also look at the implementation (this breaks encapsulation). You can also forget to set up one of the Service Locators, which will lead to a run-time exception.
In your example I think your ViewModelService is good. It references abstractions (ICategoryRepository etc.) and doesn't care how those abstractions are created. The code you use to create the ViewModelService is a bit ugly, though, and I would recommend using an Inversion of Control container (such as Castle Windsor, StructureMap, etc.) to help here.
In Castle Windsor, you could do something like the following:
container.Register(Classes.FromAssemblyNamed("Repositories").Pick().WithServiceAllInterfaces());
container.Register(Component.For<IAccountService>().ImplementedBy<AccountService>());
container.Register(Component.For<IApplicationDBContext>().ImplementedBy<ApplicationDbContext>());
container.Register(Component.For<IApplicationSettingsService>().ImplementedBy<ApplicationSettingsService>());
var viewModelService = container.Resolve<ViewModelService>();
Make sure to read and understand the "Register, Resolve, Release" and "Composition Root" patterns before you start.
Good luck!
I've read the book Dependency Injection in .NET by Mark Seemann and it opened my eyes to many things. But a few questions still remain. Here is one of them:
Let's say we have a WCF service exposing an API for working with some database:
public class MyService : IMyService
{
    private ITableARepository _reposA;
    private ITableBRepository _reposB;
    //....

    public IEnumerable<EntityA> GetAEntities()
    {
        return _reposA.GetAll().Select(x => x.ToDTO());
    }

    public IEnumerable<EntityB> GetBEntities()
    {
        return _reposB.GetAll().Select(x => x.ToDTO());
    }
    //...
}
There may be dozens of repositories the service depends on. Some methods use one, some use another, and some use several repositories.
And my question is how to correctly organize injection of repository dependencies into service?
Options I see:
Constructor injection. Create a huge constructor with dozens of arguments. Easy to use, but hard to manage the parameter list. It also seems extremely bad for performance, as each unused repository is a waste of resources, even if it doesn't use a separate DB connection.
Property injection. Optimizes performance, but usage becomes non-obvious. How should the creator of the service know which properties to initialize for a specific method call? Moreover, this creator should be universal for every method call and be located in the composition root, so the logic there becomes very complicated and error-prone.
A somewhat non-standard (not described in the book) approach: create a repository factory and depend on it instead of on concrete repositories. But the book says factories are very often used incorrectly, as a workaround for problems that can be solved much better with proper DI usage. So this approach looks suspicious to me (while achieving both the performance and transparency objectives).
Or is there a conceptual problem with this one-to-many dependency relation?
I assume the answer should differ depending on the service instance context mode (probably when it's Single instance, constructor injection is just fine; for PerCall, option 3 looks best, ignoring the above warning; for PerSession, everything depends on the session lifetime: whether it's closer to Single instance or to PerCall).
If it really depends on the instance context mode, then it becomes hard to change, because the change requires large changes to the code (to move from constructor injection to property injection or to a repository factory). But the whole concept of a WCF service ensures it is simple to change the instance context mode (and it's not so unlikely that I will need to change it). That makes me even more confused about the DI and WCF combination.
Could anyone explain how this case should be resolved correctly?
Create a huge constructor with dozens of arguments
You should not create classes with a huge number of constructor arguments; this is the Constructor Over-injection code smell. Having constructors with a huge number of arguments is an indication that such a class does too much: it violates the Single Responsibility Principle. This leads to code that is hard to maintain and extend.
Also it's extremely bad for performance as each unused repository is a waste of resources
Have you measured this? The number of constructor arguments should be mostly irrelevant for the performance of the application; it should not cause any noticeable difference. And if it does, it's time to look at the amount of work your constructors do (injection constructors should be simple), or time to switch to a faster DI container if your constructors already are simple. Creating a bunch of service classes should normally be blazingly fast.
even if it doesn't use separate DB connection.
The constructors should not open connections in the first place. Again: they should be simple.
Property injection. Optimizes performance
How should creator of the service know which properties to initialize for specific method call
The caller can't reliably determine which dependencies are required, since typically only constructor arguments are required. Requiring properties results in temporal coupling, and you lose compile-time support.
Since the caller can't determine which properties are needed, all properties need to be injected, which makes the performance equivalent to constructor injection, which - as I said - should not be a problem at all.
Somewhat non-standard (not described in a book) approach: create a repository factory and depend on it instead of concrete repositories.
Instead of injecting a repository factory, you could inject a repository provider, a pattern which is better known as the Unit of Work pattern. The unit of work may give access to repositories.
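A hedged sketch of that idea, reusing the repository types from the question: the service takes one dependency, and each repository is only created when first accessed.

using System;

public interface IUnitOfWork
{
    ITableARepository TableA { get; }
    ITableBRepository TableB { get; }
}

public class UnitOfWork : IUnitOfWork
{
    private readonly Lazy<ITableARepository> _tableA;
    private readonly Lazy<ITableBRepository> _tableB;

    public UnitOfWork(
        Func<ITableARepository> tableAFactory,
        Func<ITableBRepository> tableBFactory)
    {
        _tableA = new Lazy<ITableARepository>(tableAFactory);
        _tableB = new Lazy<ITableBRepository>(tableBFactory);
    }

    // A repository is only constructed the first time a method asks for it.
    public ITableARepository TableA => _tableA.Value;
    public ITableBRepository TableB => _tableB.Value;
}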
I assume the answer should differ depending on service instance context mode
No, since you should never use the WCF 'Single' mode. In most cases the dependencies you inject into your WCF services are not thread-safe and should not outlive a single request. Injecting them into a singleton WCF service causes Captive Dependencies and this is bad because it leads to all kinds of concurrency bugs.
The core problem here seems to be that your WCF service classes are big and violate the Single Responsibility Principle, causing them to be hard to create, maintain, and test. Fix this violation by either:
Splitting them up into multiple smaller classes, or
Moving functionality out of them into aggregate services, applying patterns such as command/handler and query/handler (see the sketch below).
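For example, a minimal sketch of the query/handler pattern applied to GetAEntities (the interface and query names are illustrative, not from the original post):

using System.Collections.Generic;
using System.Linq;

public interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}

public class GetAEntitiesQuery { }

// The handler depends only on the one repository it actually needs,
// so nothing unused is ever constructed.
public class GetAEntitiesHandler
    : IQueryHandler<GetAEntitiesQuery, IEnumerable<EntityA>>
{
    private readonly ITableARepository _reposA;

    public GetAEntitiesHandler(ITableARepository reposA)
    {
        _reposA = reposA;
    }

    public IEnumerable<EntityA> Handle(GetAEntitiesQuery query)
    {
        return _reposA.GetAll().Select(x => x.ToDTO());
    }
}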
I'm currently using Ninject to handle DI in a C#/.NET/MVC application. When I trace the creation of instances of my services, I find that services are constructed quite a lot during the life cycle, so I'm having to instantiate services, cache them, and then check for cached services before instantiating another. (The constructors are sometimes quite heavy.)
To me this seems ridiculous, as the services do not need unique constructor arguments, so instantiating them once is enough for the entire application scope.
What I've done as a quick alternative (just for proof-of-concept for now to see if it even works) is...
Created a static class (called AppServices) with all my service interfaces as its properties.
Given this class an Init() method that instantiates a direct implementation of each service interface from my service library. This mimics binding them to a kernel as if I were using Ninject (or another DI handler).
E.g.
public static class AppServices
{
    public static IMyService MyService;
    public static IMyOtherService MyOtherService;

    public static void Init()
    {
        MyService = new MyLib.MyService();
        MyOtherService = new MyLib.MyOtherService();
    }
}
On App_Start I call the Init() method to create a list of globally accessible services that are only instantiated once.
From then on, every time I need an instance of a service, I get it from AppServices. This way I don't have to keep constructing new instances that I don't need.
E.g.
IMyService _myService = AppServices.MyService;
This works fine and I haven't had ANY issues arise yet. My problem is that this seems way too simple: it's only a few lines of code creating a static class in application scope. It does exactly what I would need Ninject to do, but in (what seems to me, for my purposes) a much cleaner and more performance-friendly way. So why do I need Ninject? I mean, these complicated dependency injection handlers were created for a reason, right? There must be something wrong with my "simple" interpretation of DI; I just can't see it.
Can anyone tell me why creating a global static container for my service instances is a bad idea, and maybe explain exactly what makes Ninject (or any other DI handler) so necessary? I understand the concepts of DI, so please don't try to explain what makes it so great. I know. I want to know exactly what it does under the hood that is so different from my App_Start method.
Thanks
Your question needs to be divided into two questions:
Is it really wrong to use the singleton pattern instead of injecting dependencies?
Why do I need an IoC container?
1)
There are many reasons why you should not use the singleton pattern. Here are some of the major ones:
Testability
Yes, you can test with static instances, but you can't test in isolation (FIRST). I have seen projects that searched a long time for why tests started failing for no obvious reason, until they realized it was due to tests being run in a different order. Once you've had that problem, you will always want your tests to be as isolated as possible. Static values couple tests together.
This gets even worse when you also do integration/spec testing in addition to unit testing.
Reusability
You can't simply reuse your components in other projects. Other projects will have to use that concept as well even if they might decide to use an IoC container.
Nor can you create another instance of your component with different dependencies. The component's dependencies will be hard-wired to the instances in your AppServices. You will have to change the component's implementation to use different dependencies.
2) Doing DI does not mean that you have to use an IoC container. You can implement your own IDependencyResolver that creates your controllers manually and injects the same instance of your services wherever they are required. IoC containers cost some performance, but they simplify the creation of your object trees. You will have to decide for yourself which matters more: performance or simpler creation of your controllers.
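A hedged sketch of that Pure DI approach for ASP.NET MVC (HomeController and its constructor are stand-ins here; the MyLib service types are borrowed from the question): the singletons are created once, and the resolver hands them to each controller.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

public class PureDiResolver : IDependencyResolver
{
    // Created once for the lifetime of the application.
    private readonly IMyService _myService = new MyLib.MyService();
    private readonly IMyOtherService _myOtherService = new MyLib.MyOtherService();

    public object GetService(Type serviceType)
    {
        if (serviceType == typeof(HomeController))
            return new HomeController(_myService, _myOtherService);

        return null; // fall back to MVC's default behavior
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return Enumerable.Empty<object>();
    }
}

// In Application_Start:
// DependencyResolver.SetResolver(new PureDiResolver());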
I've got to the point in my design, where I am seriously considering a singleton.
As we all know, the "common" argument is "Never do it! It's terrible!", as if we'd littered our code with a bunch of goto statements.
ServiceStack is a wonderful framework. Myself and my team are sold on it, and we have a complicated web-service based infrastructure to implement. I have been encouraging an asynchronous design, and where possible - using SendAsync on the service-stack clients.
Given that we have all these different systems doing different things, it occurred to me that I'd like to have a common logger (a web service in itself, actually, with a fallback to a local text file if the web service is not available - e.g. some demons are stalking the building). While I am a big fan of Dependency Injection, it doesn't seem clean (at least to me) to pass a "use this logger client" reference along with every single asynchronous request.
Given that ServiceStack's failure signature is a Func<TRESPONSE, Exception> (and I have no fault with this), I am not even sure that the enclosing method that made the call in the first place would have a valid handle.
However, if we had a singleton logger at this point, it doesn't matter where we are in the world, what thread we are on, and what part of a myriad of anonymous functions we are in.
Is this an accepted valid case, or is it a non-argument - down with singletons?
Logging is one of the areas where it makes sense to use a singleton: it should never have any side effects on your code, and you will almost always want the same logger to be used globally. The primary thing you should be concerned with when using singletons is thread safety, and most loggers are thread-safe by default.
ServiceStack's Logging API allows you to provide a substitutable logging implementation by configuring it globally in App_Start with:
LogManager.LogFactory = new Log4NetFactory(configureLog4Net:true);
After this point, every class has access to the Log4Net logger defined by the factory above:
class Any
{
static ILog log = LogManager.GetLogger(typeof(Any));
}
In all Test projects I prefer everything to be logged to the Console, so I just need to set it once with:
LogManager.LogFactory = new ConsoleLogFactory();
By default ServiceStack.Logging, logs to a benign NullLogger which ignores each log entry.
There's only one problem with the classic implementation of a singleton: it is easily accessible, which provokes direct use, and that leads to strong coupling, god objects, etc. By "classic implementation" I mean this:
class Singleton
{
public static readonly Singleton Instance = new Singleton();
private Singleton(){}
public void Foo(){}
public void Bar(){}
}
If you use a singleton only as an object lifecycle strategy, and let the IoC framework manage it for you while maintaining loose coupling, there is nothing wrong with having 'just one' instance of a class for the entire lifetime of the application, as long as you make sure it is thread-safe.
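For instance, with Ninject that lifecycle choice is a single binding in the composition root (ILogger and WebServiceLogger are hypothetical stand-ins for your logging abstraction and its web-service-backed implementation):

using Ninject;

public static class LoggingBootstrapper
{
    public static IKernel CreateKernel()
    {
        var kernel = new StandardKernel();

        // One instance for the application's lifetime; consumers still take
        // ILogger as a constructor argument, so nothing couples itself to a
        // Singleton.Instance property.
        kernel.Bind<ILogger>().To<WebServiceLogger>().InSingletonScope();
        return kernel;
    }
}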
If you are placing that common logging behind a static facade that application code calls, ask yourself how you would unit test that code. This is a problem that Dependency Injection tries to solve, but you are reintroducing it by letting application logic depend on a static class.
There are two other problems you might be having. The question I have for you is: are you sure you don't log too much, and are you sure you aren't violating the SOLID principles?
I wrote an SO answer a year back that discusses those two questions. I advise you to read it.
As always, I prefer to have a factory. That way I can change the implementation in the future and maintain the client contract.
You could say that a singleton's implementation could also change, but factories are simply more general. For example, a factory can implement an arbitrary lifetime policy and change that policy over time or according to your needs. On the other hand, while it is technically possible to implement different lifetime policies for a singleton, what you get then should probably not be considered a "singleton" but rather a "singleton with a specific lifetime policy". And that is probably just as bad as it sounds.
Whenever I am about to use a singleton, I first consider a factory, and most of the time the factory wins over the singleton. If you really don't like factories, create a static class - a stateless class with static methods only. Chances are you just don't need an object, just a set of methods.