In my mid-size project I used static classes for repositories, services, etc., and it actually worked very well, even though most programmers would expect the opposite. My codebase was very compact, clean and easy to understand. Now I have tried to rewrite everything to use IoC (Inversion of Control), and I was absolutely disappointed. I have to manually initialize dozens of dependencies in every class, controller, etc., add more projects for interfaces, and so on. I really don't see any benefits in my project, and it seems to cause more problems than it solves. I found the following drawbacks in IoC/DI:
much bigger codesize
ravioli-code instead of spaghetti-code
slower performance, need to initialize all dependencies in constructor even if the method I want to call has only one dependency
harder to understand when no IDE is used
some errors are pushed to run-time
adding additional dependency (DI framework itself)
new staff have to learn DI first in order to work with it
a lot of boilerplate code, which is bad for creative people (for example copy instances from constructor to properties...)
We do not test the entire codebase, but only certain methods, and we use a real database. So, should Dependency Injection be avoided when no mocking is required for testing?
The majority of your concerns seem to boil down to either misuse or misunderstanding.
much bigger codesize
This is usually a result of properly respecting both the Single Responsibility Principle and the Interface Segregation Principle. Is it drastically bigger? I suspect not as large as you claim. However, what it is doing is most likely boiling down classes to specific functionality, rather than having "catch-all" classes that do anything and everything. In most cases this is a sign of healthy separation of concerns, not an issue.
ravioli-code instead of spaghetti-code
Once again, this is most likely causing you to think in stacks instead of hard-to-see dependencies. I think this is a great benefit since it leads to proper abstraction and encapsulation.
slower performance
Just use a fast container. My favorites are SimpleInjector and LightInject.
need to initialize all dependencies in constructor even if the method I want to call has only one dependency
Once again, this is a sign that you are violating the Single Responsibility Principle. This is a good thing because it is forcing you to logically think through your architecture rather than adding willy-nilly.
harder to understand when no IDE is used
some errors are pushed to run-time
If you are STILL not using an IDE, shame on you. There's no good argument for skipping one on modern machines. In addition, some containers (SimpleInjector) will validate on first run if you so choose. You can easily detect this with a simple unit test.
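For example, a single unit test along these lines catches most wiring mistakes long before production. This is only a sketch assuming SimpleInjector and xUnit; CompositionRoot.Build() is a hypothetical helper that performs all of your registrations:

using SimpleInjector;
using Xunit;

public class ContainerConfigurationTests
{
    [Fact]
    public void Container_configuration_is_valid()
    {
        // Hypothetical helper that performs all registrations for the app.
        Container container = CompositionRoot.Build();

        // Verify() builds every registered component once and throws a
        // descriptive exception if anything cannot be resolved.
        container.Verify();
    }
}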
adding additional dependency (DI framework itself)
You have to pick and choose your battles. If the cost of learning a new framework is less than the cost of maintaining spaghetti code (and I suspect it will be), then the cost is justified.
new staff have to learn DI first in order to work with it
If we shy away from new patterns, we never grow. I think of this as an opportunity to enrich and grow your team, not a way to hurt them. In addition, the tradeoff is learning the spaghetti code which might be far more difficult than picking up an industry-wide pattern.
a lot of boilerplate code which is bad for creative people (for example copy instances from constructor to properties...)
This is plain wrong. Mandatory dependencies should always be passed in via the constructor. Only optional dependencies should be set via properties, and that should only be done in very specific circumstances since oftentimes it is violating the Single Responsibility Principle.
We do not test the entire codebase, but only certain methods and use real database. So, should Dependency Injection be avoided when no mocking is required for testing?
I think this might be the biggest misconception of all. Dependency Injection isn't JUST for making testing easier. It is so you can glance at the signature of a class constructor and IMMEDIATELY know what is required to make that class tick. This is impossible with static classes since classes can call both up and down the stack whenever they like without rhyme or reason. Your goal should be to add consistency, clarity, and distinction to your code. This is the single biggest reason to use DI and it is why I highly recommend you revisit it.
Although IoC/DI is not some silver bullet that works in all cases, it is possible that you didn't apply it correctly. The set of principles behind Dependency Injection takes time to master, or at least it sure did for me. When applied right, it can bring (among others) the following benefits:
Improved testability
Improved flexibility
Improved maintainability
Improved parallel development
From your question, I can already extract some things that might have gone wrong in your case:
I have to manually initialize dozen of dependencies in every class
This implies that each class you create is responsible for creating the dependencies it requires. This is an anti-pattern known as Control Freak. A class should not new up its dependencies itself. You might even have applied the Service Locator anti-pattern, where your class requests its dependencies by calling the container (or an abstraction that represents the container) to get a particular dependency. A class should just declare the dependencies it requires as constructor arguments.
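To illustrate the difference, here is a minimal sketch; all type names are made up for the example:

public interface IOrderRepository { void Save(Order order); }
public class Order { }
public class SqlOrderRepository : IOrderRepository { public void Save(Order order) { /* ... */ } }

// Control Freak anti-pattern: the class news up its own dependency,
// so it can't be composed, replaced or intercepted from the outside.
public class OrderServiceControlFreak
{
    private readonly IOrderRepository repository = new SqlOrderRepository();

    public void PlaceOrder(Order order) { repository.Save(order); }
}

// Constructor injection: the class only declares what it needs;
// the Composition Root decides which implementation it gets.
public class OrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public void PlaceOrder(Order order) { repository.Save(order); }
}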
dozen of dependencies
This statement implies that you are violating the Single Responsibility Principle. This is not actually tied to IoC/DI; your old code probably already violated the Single Responsibility Principle, which made it hard for other developers to understand and maintain. It's often hard for the original author to see why others struggle to maintain the code, since what you wrote yourself usually fits nicely in your own head. And testing classes that violate the SRP is often even harder. As a rule of thumb, a class should have half a dozen dependencies at most.
add more projects for interfaces and so on
This implies that you are violating the Reused Abstractions Principle. In general, the majority of components/classes in your application should be covered by about a dozen abstractions. For instance, all classes that implement some use case probably deserve one single (generic) abstraction. Classes that implement queries also deserve one abstraction. For the systems that I write, 80% to 95% of my components (the classes that contain the application's behavior) are covered by 5 to 12 (mostly generic) abstractions. Most of the time you don't need to create a new project solely for the interfaces.
Most of the time I place those interfaces in the root of the same project.
much bigger codesize
The amount of code you write will initially not be very different. The practice of Dependency Injection, however, only works great when applying SOLID as well, and SOLID promotes small, focused classes with a single responsibility. This means that you will have many small classes that are easy to understand and easy to compose into flexible systems. And don't forget: we shouldn't strive to write less code, but rather more maintainable code.
However, with a good SOLID design and the right abstractions in place, I actually experienced having to write much less code than before. For instance, certain cross-cutting concerns (like logging, audit trailing, and authorization) can be applied by writing just a few lines of code in the infrastructure layer of the application, instead of having them spread out throughout the complete application. It even led me to be able to do things that weren't feasible before, because previously they forced me to make sweeping changes throughout the entire code base, which was so time consuming that management didn't allow it.
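For example, with a single generic abstraction over use cases, a cross-cutting concern can be written once as a decorator. The ICommandHandler abstraction and the decorator below are an illustrative sketch, not code taken from the answer:

using System;

// One generic abstraction that covers every use case in the application.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// A single decorator adds logging to all handlers at once; none of the
// business classes need to change.
public class LoggingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;

    public LoggingCommandHandlerDecorator(ICommandHandler<TCommand> decoratee)
    {
        this.decoratee = decoratee;
    }

    public void Handle(TCommand command)
    {
        Console.WriteLine("Handling " + typeof(TCommand).Name);
        decoratee.Handle(command);
        Console.WriteLine("Handled " + typeof(TCommand).Name);
    }
}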
ravioli-code instead of spaghetti-code
harder to understand when no IDE is used
This is kind of true. Dependency Injection promotes classes becoming decoupled from one another. This can sometimes make it harder to browse through a code base, since a class usually depends on an abstraction instead of a concrete class. In the past I found that the flexibility DI gives me outweighs the cost of finding the implementation by far. With Visual Studio 2015 I can simply press CTRL + F12 to find the implementations of an interface. If there is just one implementation, Visual Studio will jump right to it.
slower performance
This is not true. Performance doesn't have to be any different than working with a code base of only static method calls. You, however, chose to give your classes a Transient lifestyle, which means you new up instances all over the place. In my last applications I created all my classes just once per application, which gives roughly the same performance as only having static method calls, but with the benefit of the application being very flexible and maintainable. Note that even if you decide to new up complete graphs of objects for each (web) request, the performance cost will most likely be orders of magnitude lower than that of any I/O (database, file system, or web service calls) you perform during that request, even with the slowest DI containers.
some errors are pushed to run-time
adding additional dependency (DI framework itself)
These issues both imply the usage of a DI library. DI libraries do object composition at runtime. A DI library, however, is not a required tool when practicing Dependency Injection. Small applications can benefit from using Dependency Injection without a tool; a practice called Pure DI. Your application might not benefit from using a DI container, but most applications do benefit from Dependency Injection as a practice (when used correctly). Again: tools are optional, writing maintainable code isn't.
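Pure DI simply means that the object graph is wired up by hand in the Composition Root, with full compile-time checking and no container involved. A minimal sketch, with all type names illustrative:

public interface ICustomerRepository { void Save(string customer); }

public class SqlCustomerRepository : ICustomerRepository
{
    public void Save(string customer) { /* talk to the database here */ }
}

public class CustomerController
{
    private readonly ICustomerRepository repository;

    public CustomerController(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public void Post(string customer) { repository.Save(customer); }
}

// The Composition Root: the single place that knows which concrete
// classes are used. No container involved; this is Pure DI.
public static class CompositionRoot
{
    public static CustomerController CreateCustomerController()
    {
        return new CustomerController(new SqlCustomerRepository());
    }
}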
But even if you use a DI library, there are libraries with tools built in that allow you to verify and diagnose your configuration. They won't give you compile-time support, but they allow you to run this analysis either when the application starts up or inside a unit test. This prevents you from having to run a full regression test of the complete application just to verify whether your container is wired correctly. My advice is to pick a DI container that helps you detect these configuration errors.
new staff have to learn DI first in order to work with it
This is kind of true, but Dependency Injection itself isn't actually hard to learn. What is hard to learn is applying the SOLID principles correctly, and you need to learn this anyway if you want to write applications that must be maintained by more than one developer over a considerable period of time. I would rather invest in teaching the developers on my team to write SOLID code than just let them crank out code; that will surely cause maintenance hell later on.
a lot of boilerplate code
There is some boilerplate code when we look at code written in C# 6, but it isn't actually that bad, especially when you consider the advantages it brings. And future versions of C# will remove the boilerplate that is mainly caused by having to define constructors that take arguments, null-check them, and assign them to private fields; C# 7 or 8 will surely fix this once record types and non-nullable reference types are introduced.
which is bad for creative people
I'm sorry, but this argument is plain bullshit. I've seen it used over and over again as an excuse to write bad code by developers who didn't want to learn about design patterns and software principles and practices. Being creative is no excuse for writing code that no one else can understand, or code that is impossible to test. We need to apply accepted patterns and practices, and within that boundary there is plenty of room to be creative while writing good code. Writing code is not an art; it's a craft.
Like I said, DI is not appropriate in all cases, and the practices around it take time to master. I can advise you to read the book Dependency Injection in .NET by Mark Seemann; it answers many of these questions and will give you a good sense of how and when to apply it, and when not.
Be warned: I hate IoC.
There are many great answers here which are comforting. The main benefits according to Steven (very strong answer) are:
Improved testability
Improved flexibility
Improved maintainability
Improved scalability
My experiences are very different, though; here they are, for some balance:
(Bonus) Stupid Repository Pattern
Too often, this is included along with IoC. The repository pattern should only be used to access external data, and where interchangeability is a core expectation.
When you use this with Entity Framework, you disable all the power of Entity Framework; the same thing happens with service layers.
Eg. Calling:
var employees = peopleService.GetPeople(false, false, true, true); //Terrible
It should be:
var employees = db.People.ActiveOnly().ToViewModel();
In this case using extension methods.
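The extension methods used in that line could look roughly like this. This is only a sketch; the Person entity and the view model are assumed:

using System.Collections.Generic;
using System.Linq;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

public class PersonViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class PersonQueryExtensions
{
    // Composable filter; works against IQueryable so EF can translate it to SQL.
    public static IQueryable<Person> ActiveOnly(this IQueryable<Person> people)
    {
        return people.Where(p => p.IsActive);
    }

    // Projects the entities into the shape the view needs.
    public static List<PersonViewModel> ToViewModel(this IQueryable<Person> people)
    {
        return people
            .Select(p => new PersonViewModel { Id = p.Id, Name = p.Name })
            .ToList();
    }
}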
Who needs flexibility?
If you have no plans to change service implementations, you don't need it. If you think you'll have more than one implementation in the future, perhaps add IoC then, and only for that part.
But "Testability"!
Entity Framework (and probably other ORMs too) allows you to change the connection string to point to an in-memory database. Granted, that's only available starting with EF7. However, it can simply be a new (proper) test database in a staging environment.
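In EF Core (EF7 and later) this is configured through the options builder rather than a literal connection string. A sketch, assuming the Microsoft.EntityFrameworkCore.InMemory package and a hypothetical AppDbContext:

using Microsoft.EntityFrameworkCore;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical DbContext used by the application under test.
public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<Person> People { get; set; }
}

public static class TestDbContextFactory
{
    // Builds a context backed by the EF Core in-memory provider, so tests
    // can run without a real database server.
    public static AppDbContext CreateInMemory()
    {
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(databaseName: "TestDb")
            .Options;

        return new AppDbContext(options);
    }
}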
Do you have other special test resources and service points? In this day and age, they're probably different WebService URI endpoints, which can also be configured in App.Config / Web.Config.
Automated Tests make your code maintainable
TDD - If it's a Web Application, use Jasmine or Selenium and have automated behaviour tests. This tests everything all the way to the user. It's an investment over time, starting by covering critical features and functions.
DevOps/SysOps - Maintain scripts for provisioning your whole environment (this is also best practice), spin up a staging environment and run all the tests. You can also clone your production environment and run your tests there.
Don't make "maintainable" and "testable" your excuse for choosing IoC. Start with those requirements and find the best ways to meet those requirements.
Scalability - in what way?
(I probably need to read the book)
For coder scalability, distributed version control is the norm (although I hate merging).
For human resource scalability, you shouldn't be wasting days designing extra abstract layers for your project.
For production concurrent user scalability, you should be building, testing, then improving.
For server throughput scalability, you need to think a lot higher-level than IoC. Are you going to run a server on the customer LAN? Can you replicate your data? Are you replicating at the database level or application level? Is offline access important while mobile? These are substantial architecture questions, where IoC is rarely the answer.
Try F12
If you're using an IDE (which you should be doing), such as Visual Studio Community Edition, then you'll know how handy F12 can be, to navigate around code.
With IoC you'll be taken to the Interface, and then you'll need to find all references using a particular interface. Only one extra step, but for a tool that's used so much, it frustrates me.
Steven is on the ball
With Visual Studio 2015 I can simply do CTRL + F12 to find the implementations of an interface.
Yes, but you then have to trawl through a list of both usages and the declaration. (Actually, I think in the latest VS the declaration is listed separately, but it's still an extra mouse click, taking your hands away from the keyboard.) And I should say this is a limitation of Visual Studio: it can't take you directly to an interface's only implementation.
There are many 'textbook' arguments in favor of using IoC, but in my personal experience, the gains are/were:
Possibility to test only parts of the project, and mock some other parts. For example, if you have a component returning configuration from DB, it's easy to mock it so that your test can work without a real DB. With static classes this is not possible.
Better visibility and control of dependencies. With static classes it's very easy to add some dependencies without even noticing, which can create problems later. With IoC this is more explicit and visible.
More explicit initialization order. With static classes this can be often a black box, and there can be latent problems due to circular usage.
The only inconvenience for me was that by putting everything behind interfaces it's not possible to navigate directly from the usage to the implementation (F12).
However, it is the developers of a project who can judge best the pros and cons in the particular case.
Was there a reason why you didn't choose to use an IOC Library (StructureMap, Ninject, Autofac, etc)?
Using any of these would have made your life much easier.
Although David L has already made an excellent set of commentaries on your points, I'll add my own as well.
Much bigger codesize
I am not sure how you ended up with a larger codebase; the typical setup for an IOC library is pretty small, and since you are defining your invariants (dependencies) in the class constructors, you are also removing some code (i.e. the "new xyz()" stuff) that you don't need any more.
Ravioli-code instead of spaghetti-code
I happen to quite like ravioli :)
Slower performance, need to initialize all dependencies in constructor even if the method I want to call has only one dependency
If you are doing this then you are not really using Dependency Injection at all. You should be receiving ready-made, fully loaded object graphs via the dependency arguments declared in the constructor parameters of the class itself - not creating them in the constructor!
Most modern IOC libraries are ridiculously fast, and will never, ever be a performance problem.
Here's a good video that proves the point.
Harder to understand when no IDE is used
That's true, but it also means you can take the opportunity to think in terms of abstractions. So, for example, you can look at this piece of code:
public class Something
{
    private readonly IFrobber _frobber;

    public Something(IFrobber frobber)
    {
        _frobber = frobber;
    }

    public void LetsFrobSomething(Thing theThing)
    {
        _frobber.Frob(theThing);
    }
}
When you are looking at this code and trying to figure out if it works, or if it is the root cause of a problem, you can ignore the actual IFrobber implementation; it just represents the abstract capability to Frob something, and you don't need to mentally carry along how any particular Frobber might do its work. You can focus on making sure that this class does what it's supposed to - namely, delegating some work to a Frobber of some kind.
Note also that you don't even need to use interfaces here; you can go ahead and inject concrete implementations as well. However, that tends to violate the Dependency Inversion Principle (which is only tangentially related to the DI we are talking about here) because it forces the class to depend on a concretion as opposed to an abstraction.
Some errors are pushed to run-time
No more or less than there would be with manually constructed graphs in the constructor.
Adding additional dependency (DI framework itself)
That is also true, but most IOC libraries are pretty small and unobtrusive, and at some point you have to decide if the tradeoff of having a slightly larger production artifact is worth it (it really is)
New staff have to learn DI first in order to work with it
That isn't really any different than would be the case with any new technology :) Learning to use an IOC library tends to open the mind to other possibilities like TDD, the SOLID principles and so forth, which is never a bad thing!
A lot of boilerplate code, which is bad for creative people (for example copy instances from constructor to properties...)
I don't understand this one - how you might end up with much boilerplate code; I wouldn't count storing the given dependencies in private readonly members as boilerplate worth talking about - bearing in mind that if you have more than 3 or 4 dependencies per class, you are likely to be in violation of the SRP and should rethink your design.
Finally, if you are not convinced by any of the arguments put forth here, I would still recommend you read Mark Seemann's "Dependency Injection in .NET" (or indeed anything else he has to say on DI, which you can find on his blog).
I promise you will learn some useful things and I can tell you, it changed the way I write software for the better.
If you have to initialize dependencies manually in the code, you're doing something wrong. The general pattern for IoC is constructor injection or, possibly, property injection. A class or controller shouldn't know about the DI container at all.
Generally, all you have to do is:
Configure the container, e.g. map an interface to a class in singleton scope
Use it, e.g. Controller(Interface interface) {}
Benefit from controlling all dependencies in one place (see the sketch below)
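A sketch of those three steps, using Microsoft.Extensions.DependencyInjection purely as an example since the answer doesn't name a specific container; any container works the same way:

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IGreetingService { string Greet(string name); }

public class GreetingService : IGreetingService
{
    public string Greet(string name) { return "Hello, " + name; }
}

// The controller never touches the container; it only declares the
// abstraction it needs in its constructor.
public class GreetingController
{
    private readonly IGreetingService greetings;

    public GreetingController(IGreetingService greetings)
    {
        this.greetings = greetings;
    }

    public string Get(string name) { return greetings.Greet(name); }
}

public static class Program
{
    public static void Main()
    {
        // 1. Configure the container: interface -> class, singleton scope.
        var services = new ServiceCollection();
        services.AddSingleton<IGreetingService, GreetingService>();
        services.AddTransient<GreetingController>();

        // 2. Resolve: all dependencies are wired up in one place.
        var provider = services.BuildServiceProvider();
        var controller = provider.GetRequiredService<GreetingController>();

        Console.WriteLine(controller.Get("world"));
    }
}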
I don't see any boilerplate code or slower performance or anything else you described. I can't really imagine how to write a more or less complex app without it.
But generally, you need to decide what is more important. To please "creative people" or build maintainable and robust app.
Btw, to create a property or field from a constructor parameter you can press Alt+Enter in R# and it does all the work for you.
It seems that the new Unity version has added support for auto-wiring.
How many of you are familiar with it, and would you strongly suggest using it or not? It seems to me that using it limits my control over the DI, especially with regard to unit tests. Am I thinking about this wrong?
I'm assuming that this question is about Auto-Registration, since Unity has had Auto-Wiring for years.
Since I wrote my When to use a DI Container article a couple of years ago, I've only become slightly more radical in my attitude towards DI Containers. In that article, I describe the benefits and trade-offs of using DI Containers, as opposed to Poor Man's DI (manually composing code).
My order of preference is now:
Manually write the code of the Composition Root (Poor Man's DI). This may seem like a lot of trouble, but it gives you the best possible feedback, and it's easier to understand than using a DI Container.
Use Auto-Registration (AKA Convention over Configuration). While you lose compile-time feedback, the mechanism might actually pull your code towards a greater degree of consistency, because as long as you follow the conventions, things 'just work' (see the sketch after this list). However, this requires that the team is comfortable with the Auto-Registration API of the chosen DI Container, which, in my experience, isn't likely to be the case.
Only use Explicit Registration if you have a very compelling reason to do so (and no: not thoroughly understanding DI is not a good reason). These days I almost never do this, so it's difficult for me to come up with good cases, but advanced lifetime management may be one motivation.
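To make Auto-Registration concrete, here is a minimal convention-based sketch. It uses Microsoft.Extensions.DependencyInjection rather than Unity, purely as an illustration of the idea:

using System.Linq;
using System.Reflection;
using Microsoft.Extensions.DependencyInjection;

public static class ConventionRegistrationExtensions
{
    // Convention: every non-abstract class in the assembly that implements
    // exactly one interface is registered against that interface. As long
    // as the team follows the convention, new classes "just work" without
    // touching the Composition Root.
    public static void RegisterByConvention(this IServiceCollection services, Assembly assembly)
    {
        var registrations =
            from type in assembly.GetTypes()
            where type.IsClass && !type.IsAbstract
            let interfaces = type.GetInterfaces()
            where interfaces.Length == 1
            select new { Service = interfaces[0], Implementation = type };

        foreach (var registration in registrations)
        {
            services.AddTransient(registration.Service, registration.Implementation);
        }
    }
}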
It's been 1½ years since I last used a DI Container in any production code.
In summary, and in an effort to answer the specific question about Unity:
Seriously consider not using Unity at all (or any other DI Container).
If you must use Unity, use the Auto-Registration feature. Otherwise, you're likely to get more trouble than benefits from it.
Caveat: I'm writing this as a general response, based on my experience with DI and various DI Containers, including Explicit Registration and Auto-Registration. While I have some knowledge about previous versions of Unity, I don't know anything about the Auto-Registration features of the new version of Unity.
I've built a container which automatically registers your services. All you need to do is tag them with an attribute.
This is not auto-wiring per se, but that's part of my point. Unity has from the start been able to build classes that have not been registered in the container. And that's, imho, a big weakness, as the class might be used with dependencies that it shouldn't use, or it might get a different lifetime than intended.
My choice to use an attribute was to make sure that all services can be resolved and built. When you call the builder.Build() method, my container will throw an exception if something can't be resolved.
Hence you will see directly at startup if something is missing, rather than later at runtime.
So auto-wiring might seem good, but as you say: you lose control, only to discover later, at runtime, that something is missing.
Coming from a .NET/C# Background and having solid exposure to PRISM, I really like the idea of having a CompositionContainer to get just this one instance of a class whenever it is needed.
As this instance is also globally accessible through the ServiceLocator, this pretty much amounts to the Singleton pattern.
Now, my current project is in C++, and I'm at the point of deciding how to manage plugins (external DLL loading and the like) for the program.
In C# I'd create a PluginService, export it as shared, and channel everything through that one instance (its members would basically amount to one list holding the plugins and a bunch of methods). In C++, obviously, I don't have a CompositionContainer or a ServiceLocator.
I could probably implement a basic version of this, but whatever I imagine involves using singletons or global variables. The general advice about this, though, seems to be: DON'T EVER DO GLOBALS, AND MUCH LESS SINGLETONS.
What am I to do?
(And what I'm also interested in: is Microsoft here giving us a bad example of how to code, or is this an actual case where singletons are the right choice?)
There's really no difference between C# and C++ in terms of whether globals and singletons are "good" or "bad".
The solution you outline is equally bad (or good) in both C# and C++.
What you seem to have discovered is simply that different people have different opinions. Some C# developers like to use singletons for something like this. And some C++ programmers feel the same way.
Some C++ programmers think a singleton is a terrible idea, and... some C# programmers feel the same way. :)
Microsoft has given many bad examples of how to code. Never ever accept their sample code as "good practices" just because it says Microsoft on the box. What matters is the code, not the name behind it.
Now, my main beef with singletons is not the global aspect of them.
Like most people, I generally dislike and distrust globals, but I won't say they should never be used. There are situations where it's just more convenient to make something globally accessible. They're not common (and I think most people still overuse globals), but they exist.
But the real problem with singletons is that they enforce an unnecessary and often harmful constraint on your code: they prevent you from creating multiple instances of an object, as though you, when you write the class, know how it's going to be used better than the actual user does.
When you write a class, say a PluginService as you mentioned in a comment, you certainly have some idea of how you plan for it to be used. You probably think "an instance of it should be globally accessible" (which is debatable, because many classes should not access the PluginService, but let's assume that we do want it to be global for now). And you probably think "I can't imagine why I'd want to have two instances".
But the problem is when you take this assumption and actively prevent the creation of two instances.
What if, two months from now, you find a need for creating two PluginServices? If you'd taken the easy route when you wrote the class, and had not built unnecessary constraints into it, then you could also take the easy route now, and simply create two instances.
But if you took the difficult path of writing extra code to prevent multiple instances from being created, then you now again have to take the difficult path: now you have to go back and change your class.
Don't build limitations into your code unless you have a reason: if it makes your job easier, go ahead and do it. And if it prevents harmful misuse of the class, go ahead and do it.
But in the singleton case it does neither of those: you create extra work for yourself, in order to prevent uses that might be perfectly legitimate.
You may be interested in reading this blog post I wrote to answer the question of singletons.
But to answer the specific question of how to handle your specific situation, I would recommend one of two approaches:
the "purist" approach would be to create a ServiceLocator which is not global. Pass it to those who need to locate services. In my experience, you'll probably find that this is much easier than it sounds. You tend to find out that it's not actually needed in as many different places as you thought it'd be. And it gives you a motivation to decouple the code, to minimize dependencies, to ensure that only those who really have a genuine need for the ServiceLocator get access to it. That's healthy.
or there's the pragmatic approach: create a single global instance of the ServiceLocator. Anyone who needs it can use it, and there's never any doubt about how to find it -- it's global, after all. But don't make it a singleton. Let it be possible to create other instances. If you never need to create another instance, then simply don't do it. But this leaves the door open so that if you do end up needing another instance, you can create it.
There are many situations where you end up needing multiple instances of a class that you thought would only ever need one instance. Configuration/settings objects, loggers or wrappers around some piece of hardware are all things people often call out as "this should obviously be a singleton, it makes no sense to have multiple instances", and in each of these cases, they're wrong. There are many cases where you want multiple instances of just such classes.
But the most universally applicable scenario is simply: testing.
You want to ensure that your ServiceLocator works. So you want to test it.
If it's a singleton, that's really hard to do. A good test should run in a pristine, isolated environment, unaffected by previous tests. But a singleton lives for the duration of the application, so if you have multiple tests of the ServiceLocator, they'll all run on the same "dirty" instance, and each test might affect the state seen by the next test.
Instead, the tests should each create a new, clean ServiceLocator, so they can control exactly which state it is in. And to do that, you need to be able to create instances of the class.
So don't make it a singleton. :)
There's absolutely nothing wrong with singletons when they're appropriate. I have my doubts concerning CompositionContainer (but I'm not sure I understand what it is actually supposed to do), but ServiceLocator is the sort of thing that will generally be a singleton in any well designed application. Having two or more ServiceLocators will result in the program not functioning as it should (because a service will be registered in one of them, and you'll be looking it up in another); enforcing this programmatically is positive, at least if you favor robust programming. In addition, in C++, the singleton idiom is used to control the order of initialization; unless you make ServiceLocator a singleton, you can't use it in the constructor of any object with static lifetime.
While there is a small group of very vocal anti-singleton fanatics, within the larger C++ community you'll find that the consensus favors singletons, in certain very restricted cases. They're easily abused (but then, so are templates, dynamic allocation and polymorphism), but they do solve one particular problem very nicely, and it would be silly to forgo them for some arbitrary dogmatic reason when they're the best solution for the problem.
Ninject, Spring.NET, Unity, Autofac, and Castle.Windsor are all examples of IoC frameworks that are available. However, I like the learning curve and control of writing my own. It is definitely common practice not to "re-invent the wheel" and just use pre-existing structures. If your comment is along those lines, please be gentle.
Can IoC be implemented without the use of XML? It seems to me that most, if not all, of the aforementioned frameworks use XML, but I would much rather just write mine in C# instead of using XML to load a .dll. The C# is all compiled into one .dll eventually anyway.
From my understanding, if wrong please correct, IoC can be used with DI to make the functionality of classes be based off of their definition and implementation while allowing for a separation of concerns.
This is accomplished in C# using Microsoft's System.ComponentModel.IContainer by having a class which inherits it. A class, such as Product, would have an interface IProduct. A generic constructor would then inherit from IContainer and, in the constructor, allow a repository to be passed in, an instantiated object to be passed in, and a function to be passed in. This would allow a controller action to instantiate the interface (IProduct), instantiate the generic constructor with the current repository instance, and then pass it the interface and function.
Is this setup accurate?
I am still trying to learn more about this topic, and have read the wiki articles on IoC and DI, read about Castle.Windsor, Ninject, and Unity, and looked over multiple definitions from MSDN regarding the C# libraries that are used. Any assistance, corrections, or suggestions are greatly appreciated. Thanks.
Can IoC be implemented without the use of XML?
Yes; Ninject, Unity, Castle Windsor, and Autofac can be configured without using any XML at all. (I'm not sure about Spring.NET; the last time I used it, version 1.3, it was impossible.)
From my understanding, if wrong please correct, IoC can be used with DI to make the functionality of classes be based off of their definition and implementation while allowing for a separation of concerns.
If under "IoC" you mean "IoC container" then yes, it can be used with DI, but since DI is a particular case of Inversion Of Control your IoC container will be just a container for you dependencies. By just having it your will not magically get any DI-friendly types. It's just a support for managing your inverted dependencies.
Edit
As Mystere Man pointed out in his answer, you need to improve your understanding of IoC containers. So I would recommend reading this wonderful book (by Mark Seemann) about all that stuff.
I think it is a great exercise to start without a DI container. Before focusing on using a DI framework, focus on best patterns and practices. In particular, design all classes around Dependency Injection and make sure your code follows the SOLID principles. Both sound pretty easy, but this takes a shift in mindset and a lot of practice before you get it right (and it is well worth it).
When you do this, and do it well, you will quickly notice that your application evolves in amazing ways. Your code will be testable and extendable in ways that you never imagined before, without your code rotting over time (although it takes constant focus to prevent code from rotting).
Still, when you do all this right (which, again, takes a lot of practice), you will still have one part of your application that, despite your best efforts, gets more complex and harder to maintain as the application grows. This is the part of the application where you wire all dependencies together: the Composition Root.
And this is where DI containers come in. They have fancy names and compete with each other over features, but their goal can be stated in a single sentence:
The goal of a DI container is to keep the Composition Root maintainable.
Although you can write your own simple DI container to wire up your dependencies, to prevent your Composition Root from becoming a big, fragile, ever-changing ball of mud, the container must have at least one crucial feature: Automatic Constructor Injection (a.k.a. auto-wiring). With auto-wiring, the container will look at the constructor arguments of a type that it needs to create, and it will inject the dependencies into it based on the types of those arguments. This feature will make the difference between a maintenance nightmare and a healthy Composition Root. Although creating your own container that supports auto-wiring isn't that hard (with expression trees it takes about 20 lines of code), the moment you start needing auto-wiring is the moment to start using one of the existing DI frameworks.
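To make auto-wiring concrete, here is a deliberately tiny reflection-based sketch. Expression trees would make it faster, and this is nothing like a production container, but it shows the core idea:

using System;
using System.Collections.Generic;
using System.Linq;

public class TinyContainer
{
    private readonly Dictionary<Type, Type> registrations = new Dictionary<Type, Type>();

    public void Register<TService, TImplementation>() where TImplementation : TService
    {
        registrations[typeof(TService)] = typeof(TImplementation);
    }

    public TService Resolve<TService>()
    {
        return (TService)Resolve(typeof(TService));
    }

    private object Resolve(Type serviceType)
    {
        Type implementation;
        if (!registrations.TryGetValue(serviceType, out implementation))
        {
            // Allow unregistered concrete types to be resolved directly.
            implementation = serviceType;
        }

        // Auto-wiring: pick the greediest public constructor and recursively
        // resolve each of its parameters by type.
        var constructor = implementation.GetConstructors()
            .OrderByDescending(c => c.GetParameters().Length)
            .First();

        var arguments = constructor.GetParameters()
            .Select(p => Resolve(p.ParameterType))
            .ToArray();

        return constructor.Invoke(arguments);
    }
}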
So in conclusion, if you feel it helps you in the learning experience by doing this by hand, please do, as long as you stick to SOLID, DI, DRY, and TDD. When the burden of changing your Composition Root for each change in the application gets too big (which will be sooner than you might expect), switch to an established framework.
I would suggest using an existing DI container first, to understand how it works from the end user perspective. Then you can go about re-designing the wheel. My favorite saying is "You have to know the rules before you can break them".
Some of what you've said doesn't make a lot of sense. You don't have to use System.ComponentModel.IContainer in any framework I know of. Maybe Unity requires that (Microsoft's container), but none of the others do. I'm not familiar with Unity, though.
Other than testability, what's the big advantage of utilizing D.I. (and I'm not talking about a D.I. framework or IoC) over static classes? Particularly for an application where you know a service won't be swapped out.
In one of our C# applications, our team is utilizing Dependency Injection in the web GUI, the service layer, and the repository layer rather than using static methods. In the past, we'd have POCOs (business entity objects) that were created, modified, passed around, and saved by static classes.
For example, in the past we might have written:
CreditEntity creditObj = CreditEntityManager.GetCredit(customerId);
Decimal creditScore = CreditEntityManager.CalculateScore(creditObj);
return creditScore;
Now, with D.I., the same code would be:
//not shown, _creditService instantiation/injection in c-tors
CreditEntity creditObj = _creditService.GetCredit(customerId);
Decimal creditScore = _creditService.CalculateScore(creditObj);
return creditScore;
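The part the comment leaves out ("not shown") would typically look something like this; ICreditService and the surrounding controller are illustrative names, not code from the question:

public class CreditEntity { }

public interface ICreditService
{
    CreditEntity GetCredit(int customerId);
    decimal CalculateScore(CreditEntity credit);
}

public class CreditController
{
    private readonly ICreditService _creditService;

    // The service is supplied from the outside; the controller never
    // news it up itself.
    public CreditController(ICreditService creditService)
    {
        _creditService = creditService;
    }
}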
Not much different, but now we have dozens of service classes that have much broader scope, which means we should treat them just as if they were static (i.e. no private member variables unless they are used to define further dependencies). Plus, if any of those methods utilize a resource (database/web service/etc) we find it harder to manage concurrency issues unless we remove the dependency and utilize the old static or using(...) methods.
The question for D.I. might be: is CreditEntityManager in fact the natural place to centralize knowledge about how to find a CreditEntity and where to go to CalculateScore?
I think the theory of D.I. is that a modular application involved in thing X doesn't necessarily know how to hook up with thing Y even though X needs Y.
In your example, you are showing the code flow after the service providers have been located and incorporated in data objects. At that point, sure, with and without D.I. it looks about the same, even potentially exactly the same depending on programming language, style, etc.
The key is how those different services are hooked up together. In D.I., potentially a third party object essentially does configuration management, but after that is done the code should be about the same. The point of D.I. isn't to improve later code but to try and match the modular nature of the problem with the modular nature of the program, in order to avoid having to edit modules and program logic that are logically correct, but are hooking up with the wrong service providers.
It allows you to swap out implementations without cracking open the code. For example, in one of my applications, we created an interface called IDataService that defined methods for querying a data source. For the first few production releases, we used an implementation for Oracle using NHibernate. Later, we wanted to switch to an object database, so we wrote an implementation for db4o, added its assembly to the execution directory, and changed a line in the config file. Presto! We were using db4o without having to crack open the code.
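A hedged sketch of how that kind of config-driven swap can work; IDataService comes from the answer, while the factory, the config key, and the type names are illustrative:

using System;
using System.Configuration;

public interface IDataService
{
    object Query(string criteria);
}

public static class DataServiceFactory
{
    // The assembly-qualified type name of the implementation lives in
    // App.config / Web.config, for example:
    //   <add key="DataServiceType" value="MyApp.Data.Db4oDataService, MyApp.Data.Db4o" />
    public static IDataService Create()
    {
        var typeName = ConfigurationManager.AppSettings["DataServiceType"];
        var implementationType = Type.GetType(typeName, throwOnError: true);

        return (IDataService)Activator.CreateInstance(implementationType);
    }
}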
This has been discussed exactly 1002 times. Here's one such discussion that I remember (read in order):
http://scruffylookingcatherder.com/archive/2007/08/07/dependency-injection.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-More-than-a-testing-seam.aspx
http://kohari.org/2007/08/15/defending-dependency-injection
http://scruffylookingcatherder.com/archive/2007/08/16/tilting-at-windmills.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-IAmDonQuixote.aspx
http://scruffylookingcatherder.com/archive/2007/08/20/poking-bears.aspx
http://ayende.com/Blog/archive/2007/08/21/Dependency-Injection-Applicability-Benefits-and-Mocking.aspx
About your particular problems: it seems that you're not managing your service lifestyles correctly. For example, if one of your services is stateful (which should be quite rare), it probably has to be transient. I recommend that you create as many SO questions about this as you need in order to clear up all your doubts.
There is a Guice video which gives a nice sample case for using D.I. If you are using a lot of third-party services which need to be hooked up dynamically, D.I. will be a great help.
I apologize for the subjectiveness of this question, but I am a little stuck and I would appreciate some guidance and advice from anyone who's had to deal with this issue before:
I have (what's become) a very large RESTful API project written in C# 2.0, and some of my classes have become monstrous. My main API class is an example of this - with several dozen members and methods (probably approaching hundreds). As you can imagine, it's becoming a small nightmare: not only is the code hard to maintain, but even just navigating it has become a chore.
I am reasonably new to the SOLID principles, and I am a massive fan of design patterns (but I am still at that stage where I can implement them, but don't quite know when to use them in situations where it's not so obvious).
I need to break my classes down in size, but I am at a loss of how best to go about doing it. Can my fellow StackOverflow'ers please suggest ways that they have taken existing code monoliths and cut them down to size?
Single Responsibility Principle - A class should have only one reason to change. If you have a monolithic class, then it probably has more than one reason to change. Simply define your one reason to change, and be as granular as reasonable. I would suggest to start "large". Refactor one third of the code out into another class. Once you have that, then start over with your new class. Going straight from one class to 20 is too daunting.
Open/Closed Principle - A class should be open for extension, but closed for change. Where reasonable, mark your members and methods as virtual or abstract. Each item should be relatively small in nature, and give you some base functionality or definition of behavior. However, if you need to change the functionality later, you will be able to add code, rather than change code to introduce new/different functionality.
Liskov Substitution Principle - A class should be substitutable for its base class. The key here, in my opinion, is to do inheritance correctly. If you have a huge case statement, or two pages of if statements that check the derived type of the object, then you're violating this principle and need to rethink your approach.
Interface Segregation Principle - In my mind, this principle closely resembles the Single Responsibility principle. It just applies specifically to a high level (or mature) class/interface. One way to use this principle in a large class is to make your class implement an empty interface. Next, change all of the types that use your class to be the type of the interface. This will break your code. However, it will point out exactly how you are consuming your class. If you have three instances that each use their own subset of methods and properties, then you now know that you need three different interfaces. Each interface represents a collective set of functionality, and one reason to change.
Dependency Inversion Principle - The parent/child allegory made me understand this. Think of a parent class. It defines behavior, but isn't concerned with the dirty details; it's dependable. A child class, however, is all about the details, and can't be depended upon because it changes often. You always want to depend upon the parent, responsible classes, and never the other way around. If you have a parent class depending upon a child class, you'll get unexpected behavior when you change something. In my mind, this is the same mindset as SOA. A service contract defines inputs, outputs, and behavior, with no details.
Of course, my opinions and understandings may be incomplete or wrong. I would suggest learning from people who have mastered these principles, like Uncle Bob. A good starting point for me was his book, Agile Principles, Patterns, and Practices in C#. Another good resource was Uncle Bob on Hanselminutes.
Of course, as Joel and Jeff pointed out, these are principles, not rules. They are to be tools to help guide you, not the law of the land.
EDIT:
I just found these SOLID screencasts which look really interesting. Each one is approximately 10-15 minutes long.
There's a classic book by Martin Fowler - Refactoring: Improving the Design of Existing Code.
There he provides a set of design techniques and examples of decisions to make your existing codebase more manageable and maintainable (and that's what the SOLID principles are all about). Even though there are some standard routines in refactoring, it is a very custom process, and one solution can't be applied to all projects.
Unit testing is one of the cornerstones of this process. You need enough test coverage over your existing codebase so that you can be sure you don't break things while changing it. Actually, using a modern unit testing framework with mocking support will encourage you towards better design.
There are tools like ReSharper (my favorite) and CodeRush to assist with tedious code changes. But those usually handle trivial mechanical stuff; making design decisions is a much more complex process with far less tool support. Using class diagrams and UML helps. That's what I would start from, actually. Try to make sense of what is already there and bring some structure to it. Then from there you can make decisions about decomposition and relations between different components, and change your code accordingly.
Hope this helps and happy refactoring!
It will be a time consuming process. You need to read the code and identify parts that do not meet the SOLID principles and refactor into new classes. Using a VS add-in like Resharper (http://www.jetbrains.com) will assist with the refactoring process.
Ideally you will have good coverage of automated unit tests so that you can ensure your changes do not introduce problems with the code.
More Information
In the main API class, you need to identify methods that relate to each other and create a class that more specifically represents what actions the method performs.
e.g.
Let's say I had an Address class with separate variables containing street number, name, etc. This class is responsible for inserting, updating, deleting, etc. If I also needed to format an address a specific way for a postal address, I could have a method called GetFormattedPostalAddress() that returned the formatted address.
Alternatively, I could refactor this method into a class called AddressFormatter that takes an Address in its constructor and has a Get property called PostalAddress that returns the formatted address.
The idea is to separate different responsibilities into separate classes.
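A rough sketch of the two classes described above; the property names are assumed:

using System;

public class Address
{
    public string StreetNumber { get; set; }
    public string StreetName { get; set; }
    public string City { get; set; }
    public string PostalCode { get; set; }

    // Insert/Update/Delete members would live here.
}

// Formatting is a separate responsibility, so it gets its own class
// instead of becoming yet another method on Address.
public class AddressFormatter
{
    private readonly Address address;

    public AddressFormatter(Address address)
    {
        this.address = address;
    }

    public string PostalAddress
    {
        get
        {
            return address.StreetNumber + " " + address.StreetName +
                   Environment.NewLine + address.City + " " + address.PostalCode;
        }
    }
}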
What I've done when presented with this type of thing (and I'll readily admit that I haven't used SOLID principles before, but from what little I know of them, they sound good) is to look at the existing codebase from a connectivity point of view. Essentially, by looking at the system, you should be able to find some subset of functionality that is internally highly coupled (many frequent interactions) but externally loosely coupled (few infrequent interactions). Usually, there are a few of these pieces in any large codebase; they are candidates for excision.
Essentially, once you've identified your candidates, you have to enumerate the points at which they are externally coupled to the system as a whole. This should give you a good idea of the level of interdependency involved. There usually is a fair bit of interdependency involved. Evaluate the subsets and their connection points for refactoring; frequently (but not always) there ends up being a couple of clear structural refactorings that can increase the decoupling.
With an eye on those refactorings, use the existing couplings to define the minimal interface required to allow the subsystem to work with the rest of the system. Look for commonalities in those interfaces (frequently, you find more than you'd expect!). And finally, implement these changes that you've identified.
The process sounds terrible, but in practice, it's actually pretty straightforward. Mind you, this is not a roadmap towards getting to a completely perfectly designed system (for that, you'd need to start from scratch), but it very certainly will decrease the complexity of the system as a whole and increase the code comprehensibility.
OOD - Object Oriented Design
SOLID - class design
Single Responsibility Principle - SRP - introduced by Uncle Bob. A method, class, or module is responsible for doing only a single thing (one single task).
Open/Closed Principle - OCP - introduced by Bertrand Meyer. A method, class, or module is open for extension and closed for modification. Use the power of inheritance, abstraction, polymorphism, extension, and wrappers.
Liskov Substitution Principle - LSP - introduced by Barbara Liskov and Jeannette Wing. A subtype can replace its supertype without side effects.
Interface Segregation Principle - ISP - introduced by Uncle Bob. Your interfaces should be as small as possible.
Dependency Inversion Principle - DIP - introduced by Uncle Bob. An inner (high-level) class or layer should not depend on an outer (low-level) class or layer. For example, when you have an aggregation dependency, you should rather depend on an abstraction/interface.
6 principles about packages/modules (.jar, .aar, .framework):
What to put inside a package:
The Release Reuse Equivalency Principle
The Common Closure Principle
The Common Reuse Principle
Couplings between packages:
The Acyclic Dependencies Principle
The Stable Dependencies Principle
The Stable Abstractions Principle