I'm in the process of creating unit tests for an existing series of classes that use the Spring.NET IApplicationContext to resolve types, and I want to mock the dependencies that Spring.NET resolves.
I'm having trouble getting Spring.NET to use my mocked objects, basically getting ContextRegistry.GetContext() to return them in the actual application.
The best solution I've found so far is something similar to this, but it is not very clean to manage in the actual unit test code. Defining my own IApplicationContext on the fly and registering it in the registry has the same problem.
My question is whether I'm missing some framework that ties these things together or some pattern I can use that would allow me to define things easily.
Your classes are using IApplicationContext as a service locator. Many consider Service Locator an anti-pattern. I suggest you form your own opinion on this; if you agree that it is an anti-pattern, then you can take this opportunity to factor out the dependency on IApplicationContext and replace it with explicit dependencies on the objects your class needs.
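For illustration, here is a rough before/after sketch of that refactoring; the order-processing types are invented and only the shape matters.

// Before: the class pulls its collaborator out of the Spring.NET registry itself,
// which is what makes it hard to substitute a mock in a unit test.
public class LocatorOrderProcessor
{
    public void Process(int orderId)
    {
        var context = ContextRegistry.GetContext();
        var sender = (IEmailSender)context.GetObject("EmailSender");
        sender.SendConfirmation(orderId);
    }
}

// After: the dependency is explicit, so a test can simply pass in a mock IEmailSender.
public class OrderProcessor
{
    private readonly IEmailSender _sender;

    public OrderProcessor(IEmailSender sender)
    {
        _sender = sender;
    }

    public void Process(int orderId)
    {
        _sender.SendConfirmation(orderId);
    }
}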
If you (have to) stick to the current approach, then I'm afraid it's really difficult to get a clean solution. In your situation, I'd configure my containers specifically for testing (perhaps including mocks, as described in the blog post you linked) and use Spring.NET's unit test support for easy wiring of my test objects. But I'd feel really uncomfortable - like you ...
Related
Managing Dependency Injection in C# with Autofac explains this very concisely and comes with downloadable source code.
Dependency injection by hand:
var di = new EmployeeObserver(employees, new Printer(Console.Out));
di.FindExperts();
With Autofac:
// Each component is registered on its own; the container supplies the constructor arguments.
ContainerBuilder autofac = new ContainerBuilder();
autofac.Register(o => new EmployeeObserver(o.Resolve<IList<Employee>>(), o.Resolve<IPrinter>()));
autofac.RegisterType<Printer>().As<IPrinter>();
autofac.RegisterInstance(employees).As<IList<Employee>>();
autofac.RegisterInstance(Console.Out).As<TextWriter>();

// Build the container and resolve the fully wired object graph.
using (var container = autofac.Build())
{
    container.Resolve<EmployeeObserver>().FindExperts();
}
Some other Q&As say that the advantage of Autofac becomes apparent when writing unit tests.
Apart from that, could someone give more detailed reasons why I should use the more complicated Autofac code instead of manual dependency injection?
It says:
Maybe on this particular example it's hard to see why this approach is better than configuring dependency injection by hand, but you should notice one important thing - with Autofac each component is configured independently of all the others, and this is what will make a big difference when your application becomes more complex.
Can you point to a more complex version of this case that shows the advantage of Autofac over dependency injection by hand, where the manual approach would leave me stuck?
Using or not using a DI container has no effect on unit testing. When you unit test, you don't use a DI container, because unit tests usually deal with a single class or a few classes that you can wire together easily.
Please note that whether to use or not use a DI container to compose your objects is still a highly opinionated question. Here I provide a perspective that I have based on my experience using dependency injection in my projects.
In an article of mine, Why DI containers fail with “complex” object graphs, I define the concept of a simple object graph like this:
An object graph of any size and any depth that has the following two attributes:
a) For any interface (or abstraction), at most one class implementing that interface is used in the object graph.
b) For any class, at most one instance is used in the object graph (or multiple instances constructed with exactly the same constructor arguments). This single instance can be used multiple times in the graph.
When you have a simple object graph, use a DI container.
For an example of a simple object graph, consider that you have 20 service interfaces, each implemented by a single class. E.g. IEmailService is only implemented by EmailService, and ISMSService is only implemented by SMSService, etc., and you have 30 ASP.NET controllers, each depending on any number of these service interfaces. Also, some of the services depend on other service interfaces, e.g. OrderService depends on IEmailService.
When you don't have a simple object graph, i.e., you have a complex object graph (which is the case for most large applications that apply the SOLID principles), don't use a DI container and instead use Pure DI.
If you use a DI container with a complex object graph, you will end up using complex features of the container like named registrations to distinguish between different implementations of the same interface or between objects that take different construction parameters. This will make your composition root hard to maintain.
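As a rough sketch of the difference (all of these type names are invented): once two implementations of the same interface are needed in different places, a container pushes you toward named registrations, while Pure DI stays plain constructor calls.

// With a container (Autofac here), distinguishing the two ILogger implementations needs names:
var builder = new ContainerBuilder();
builder.RegisterType<FileLogger>().Named<ILogger>("file");
builder.RegisterType<ConsoleLogger>().Named<ILogger>("console");
builder.Register(c => new OrderService(c.ResolveNamed<ILogger>("file")));
builder.Register(c => new ReportService(c.ResolveNamed<ILogger>("console")));

// With Pure DI, the same graph is just code, with nothing extra to configure or maintain:
var orderService = new OrderService(new FileLogger());
var reportService = new ReportService(new ConsoleLogger());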
You might want to take a look at this article of mine about how Pure DI can make your composition root clean.
Some other Q&As say that the advantage of Autofac becomes apparent when writing unit tests
You've missed the point.
The ability to mock a dependency, and hence to write a unit test, is an advantage of dependency injection as a pattern in itself.
The advantage of any DI container (not only Autofac) is the ability to configure dependencies in one place and use that configuration in complex scenarios.
Imagine that you have some class that depends on some service, which in turn depends on some other service, and so on.
Wiring this up by hand with poor man's DI is tedious, but a DI container can deal with it.
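For illustration, a rough sketch of such a chain (all of these type names are invented), first wired by hand and then through Autofac, which the earlier example already uses:

// By hand, every place that needs a ReportController must repeat the whole chain:
var connectionString = "...";
var byHand = new ReportController(
    new ReportService(
        new ReportRepository(
            new SqlConnectionFactory(connectionString))));

// With a container, each type is registered once and the chain is resolved for you:
var builder = new ContainerBuilder();
builder.RegisterInstance(new SqlConnectionFactory(connectionString));
builder.RegisterType<ReportRepository>();
builder.RegisterType<ReportService>();
builder.RegisterType<ReportController>();

using (var container = builder.Build())
{
    var resolved = container.Resolve<ReportController>();
}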
I'm writing a C# ASP.NET MVC web application using SOLID principles.
I've written a ViewModelService, which depends on an AccountService and a RepositoryService, so I've injected those two services into the ViewModelService.
The PermissionService depends on the HttpContextBase in order to use GetOwinContext() to get an instance of the UserManager. The controller has an instance of HttpContextBase that needs to be used - so it seems like I have to inject the HttpContextBase instance into the ViewModelService which then injects it into the PermissionService.
So, in terms of code I have:
public ViewModelService(ICategoryRepository categoryRepository, IPasswordRepository passwordRepository, IModelValidatorService modelValidator)
public CategoryRepository(ApplicationDbContext context, IPermissionService permissionservice)
public AccountService(HttpContextBase httpcontext, IPrincipal securityprincipal)
To instantiate the ViewModelService, I then do this:
new ViewModelService(
    new CategoryRepository(
        new ApplicationDbContext(),
        new PermissionService(
            new AccountService(HttpContext, Thread.CurrentPrincipal),
            new UserPasswordRepository(new ApplicationDbContext()),
            new ApplicationSettingsService())),
    new PasswordRepository(
        new ApplicationDbContext(),
        new PermissionService(
            new AccountService(HttpContext, Thread.CurrentPrincipal),
            new UserPasswordRepository(new ApplicationDbContext()),
            new ApplicationSettingsService())),
    new ModelValidatorService());
Should a dependency be injected from that many "levels" up, or is there a better way?
There's a balance to be struck.
On the one hand, you have the school of thought which would insist that all dependencies must be exposed by the class to be "properly" injected. (This is the school of thought which considers something like a Service Locator to be an anti-pattern.) There's merit to this, but taken to an extreme you find yourself where you are now. Just the right kind of complexity in some composite models, which themselves have composite models, results in aggregate roots which need tons of dependencies injected solely to satisfy dependencies of deeper models.
Personally, I find that this creates coupling in situations like this, which is what DI is intended to resolve, not create.
On the other hand, you have the school of thought which allows for a Service Locator approach, where models can internally invoke some common domain service to resolve a dependency for it. There's merit to this, but taken to an extreme you find that your dependencies are less known and there's a potential for runtime errors if any given dependency can't be resolved. (Basically, you can get errors at a higher level because consuming objects never knew that consumed objects needed something which wasn't provided.)
Personally I've used a service locator approach a lot (mostly because it's a very handy pattern for introducing DI to a legacy domain as part of a larger refactoring exercise, which is a lot of what I do professionally) and have never run into such issues.
There's yin and yang either way. And I think each solution space has its own balance. If you're finding that direct injection is making the system difficult to maintain, it may be worth investigating service location. Conversely, it may also be worth investigating if the overall domain model itself is inherently coupled and this DI issue is simply a symptom of that coupling and not the cause of it.
Yes, the entire intent of Dependency Injection is that you compose big object graphs up-front. You compose object graphs from the Composition Root, which is a place in your application that has the Single Responsibility of composing object graphs. That's not any particular Controller, but a separate class that composes Controllers with their dependencies.
The Composition Root must have access to all types it needs to compose, unless you want to get into late-binding strategies (which I'll generally advise against, unless there's a specific need).
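One possible shape for such a Composition Root in ASP.NET MVC is a custom controller factory; the sketch below is only an illustration, and CategoryController is a made-up controller name.

// Registered once at application start-up, e.g. in Global.asax:
// ControllerBuilder.Current.SetControllerFactory(new CompositionRoot());
public class CompositionRoot : DefaultControllerFactory
{
    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        if (controllerType == typeof(CategoryController))
        {
            var dbContext = new ApplicationDbContext();
            var permissions = new PermissionService(
                new AccountService(requestContext.HttpContext, Thread.CurrentPrincipal),
                new UserPasswordRepository(new ApplicationDbContext()),
                new ApplicationSettingsService());

            // The controller and its whole dependency graph are composed in this one place.
            return new CategoryController(new ViewModelService(
                new CategoryRepository(dbContext, permissions),
                new PasswordRepository(dbContext, permissions),
                new ModelValidatorService()));
        }

        return base.GetControllerInstance(requestContext, controllerType);
    }
}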
I am firmly of the opinion that Service Locators are worse than Dependency Injection. They can be a useful legacy technique, and a useful stepping stone on to something better, but if you are designing something new, then steer clear.
The main reason for this is that Service Locators lead to code that has implicit dependencies, and this makes the code less clear and breaks encapsulation. It can also lead to run time errors instead of compile time errors, and Interacting Tests.
Your example uses Constructor Injection, which is usually the most appropriate form of Dependency Injection:
public ViewModelService(ICategoryRepository categoryRepository, IPasswordRepository passwordRepository, IModelValidatorService modelValidator) { ... }
This has explicit dependencies, which is good. It means that you cannot create the object without passing in its dependencies, and if you try to you will get a compile time error rather than a run time one. It also is good for encapsulation, as just by looking at the interface of the class you know what dependencies it needs.
You could do this using service locators as below:
public ViewModelService()
{
    var categoryRepository = CategoryRepositoryServiceLocator.Instance;
    var passwordRepository = PasswordRepositoryServiceLocator.Instance;
    var modelValidator = ModelValidatorServiceLocator.Instance;
    ...
}
This has implicit dependencies that you cannot tell just by looking at the interface; you must also look at the implementation (this breaks encapsulation). You can also forget to set up one of the Service Locators, which will lead to a run time exception.
In your example I think your ViewModelService is good. It references abstractions (ICategoryRepository etc.) and doesn't care about how these abstractions are created. The code you use to create the ViewModelService is a bit ugly, and I would recommend using an Inversion of Control container (such as Castle Windsor, StructureMap etc.) to help here.
In Castle Windsor, you could do something like the following:
container.Register(Classes.FromAssemblyNamed("Repositories").Pick().WithServiceAllInterfaces());
container.Register(Component.For<IAccountService>().ImplementedBy<AccountService>());
container.Register(Component.For<IApplicationDbContext>().ImplementedBy<ApplicationDbContext>());
container.Register(Component.For<IApplicationSettingsService>().ImplementedBy<ApplicationSettingsService>());
// Windsor only resolves registered components, so register the service itself as well.
container.Register(Component.For<ViewModelService>());
var viewModelService = container.Resolve<ViewModelService>();
Make sure to read and understand the "Register, Resolve, Release" and "Composition Root" patterns before you start.
Good luck!
I have a fairly significant dependency graph for an object I want to test. What is the easiest way to resolve my dependencies without having to register mocks everywhere?
For example, I have a dependency graph like this:
PublicApi
ApiService
AccountingFacade
BillingService
BillingValidation
BillingRepository
UserService
UserRepository
I want to test PublicApi.CreateUser(), and I want it to run through all the code, but I want to mock the repositories so I don't have to write anything to the database. Should I just use a DI container and register all my services, replacing the repositories with mocks, then resolve PublicApi and run the method?
I was looking into AutoFixture, and it looks like it might be able to handle something like this, but I can't quite wrap my head around the whole 'Freeze' vs 'Register' distinction and its integration with Moq.
For unit tests, you should only mock the direct dependencies. In your case, you create PublicApi, inject a mock for ApiService, and verify that PublicApi calls the appropriate methods with the correct values on the ApiService mock.
The same way you test all the other components isolated from the deeper dependencies.
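For example, a rough sketch with Moq (assuming PublicApi takes an IApiService in its constructor and that IApiService exposes CreateUser; your actual types may differ):

// Unit test of PublicApi in isolation: only its direct dependency is mocked.
var apiService = new Mock<IApiService>();

var publicApi = new PublicApi(apiService.Object);
publicApi.CreateUser("alice");

// Verify that PublicApi forwarded the call to its collaborator with the expected value.
apiService.Verify(s => s.CreateUser("alice"), Times.Once());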
If you want to test the combination of several components, that isn't unit testing but rather integration testing. How you do it then depends on how you put your classes together: e.g. if you are using an IoC container, it probably supports replacing the configuration for the repositories in some way. In this case you can take the configuration of the application, replace the repositories, and potentially also the views, with mocks.
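As a sketch of that container-override idea, using Autofac and Moq purely for illustration (ProductionModule and the repository interface names are assumptions): register the production wiring first, then override just the repositories, since in Autofac the last registration for a service wins.

var builder = new ContainerBuilder();

// Production registrations (ProductionModule stands in for however the app wires itself up).
builder.RegisterModule(new ProductionModule());

// Later registrations override earlier ones, so only the edges of the graph are replaced.
builder.RegisterInstance(new Mock<IBillingRepository>().Object).As<IBillingRepository>();
builder.RegisterInstance(new Mock<IUserRepository>().Object).As<IUserRepository>();

using (var container = builder.Build())
{
    // Exercises the real services end to end, but never touches the database.
    container.Resolve<PublicApi>().CreateUser("alice");
}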
This may not be helpful in the least but I will say it anyway.
It seems you are trying to test too much at once; why not just test BillingService -> BillingValidation, then BillingService -> BillingRepository, etc.? That way you would have a suite of tests proving that each one works, and when you are up at the PublicApi layer you only need to mock ApiService, because you have already tested everything beneath it, so there is no value in testing it again.
Generally I will only test one layer at a time, but I don't know your full scenario, so you may have something I am not accounting for. If that is the case and you REALLY need to test all of this together, I would just bring in a simple and lightweight DI framework like Ninject.
That way you can bind all your types to mocks, then instantiate your PublicApi from it.
With ninject it would look something like:
Kernel.Bind<UserRepository>().ToConstant(YourMockUserRepositoryInstance);
Kernel.Bind<UserService>().ToConstant(YourMockUserServiceInstance);
Kernel.Bind<BillingRepository>().ToConstant(YourMockBillingRepositoryInstance);
Kernel.Bind<BillingValidation>().ToConstant(YourMockBillingValidationInstance);
Kernel.Bind<BillingService>().ToConstant(YourMockBillingServiceInstance);
Kernel.Bind<AccountingFacade>().ToConstant(YourMockAccountingFacadeInstance);
Kernel.Bind<ApiService>().ToConstant(YourMockApiServiceInstance);
Kernel.Bind<PublicApi>().ToSelf();

var publicApi = Kernel.Get<PublicApi>();
Although you have to ask yourself: what are you testing here? If it's just interactions, I would do as I first mentioned; if it's more, then maybe think about the latter option. Either way, I hope it gives you some options.
(C#, WCF service, Rhino Mocks, MbUnit)
I have been writing tests for code already in place (yes, I know it's the wrong way around, but that's how it's worked out on my current contract). I've done quite a bit of refactoring to support mocking - injecting dependencies, adding additional interfaces, etc. - all of which has improved the design. Generally my testing experience has been going well (exposing fragility and improving decoupling). For any object I have been creating the dependent mocks, and this sits well with me and makes sense.
The app essentially has 4 physical layers: the database, a repository layer for data access, and a WCF service that is connected to the repository via a management (or business logic) layer. Top down it looks like this:
WCF
Managers
Repository
Database
Testing the managers and repository layer has been simple enough, mocking the dependencies with Rhino Mocks and injecting them into the layer under test as such.
My problem is in testing the top WCF layer. As my service doesn't have a constructor that allows me to inject the dependencies, I'm not sure how to go about mocking a dependency when testing the public methods (ServiceContracts) on the service.
I hope that made sense, and any help is greatly appreciated. I am aware of TypeMock Isolator etc., but I really don't want to go down that route, both for budget and other reasons I won't go into here. Besides, I'm sure there are plenty of clever 'Stackers' who have the info I need.
Thanks in advance.
Is there a specific reason you can't have a constructor on your service?
If not, you can have overloaded constructors: a default constructor that wires up the defaults and a parameterized constructor that takes your dependencies. You can then test through the parameterized constructor and rely on the default one for creating the instance in production.
public MyService() : this(new DefaultDep1(), new DefaultDep2())
{
}
public MyService(IDep1 d1, IDep2 d2)
{
}
A better solution if you use dependency injection would be to use the WCF IInstanceProvider interface to create your service instance and provide the needed dependencies via that injection point. An example using Structure Map can be found here.
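For reference, a minimal sketch of an IInstanceProvider that composes the service from the example above; it still has to be attached to the service via a contract or service behavior, which is omitted here.

public class MyServiceInstanceProvider : IInstanceProvider
{
    public object GetInstance(InstanceContext instanceContext, Message message)
    {
        return GetInstance(instanceContext);
    }

    public object GetInstance(InstanceContext instanceContext)
    {
        // The single place where the service's dependencies are composed.
        return new MyService(new DefaultDep1(), new DefaultDep2());
    }

    public void ReleaseInstance(InstanceContext instanceContext, object instance)
    {
        // Clean up the instance here if it holds disposable dependencies.
    }
}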
You could make the WCF services a thin layer over the 'managers', so they have little or no logic in them which needs testing. Then don't test them and just test the managers. Alternatively, you could achieve a similar effect by having another 'service' layer which contains the logic from the services, which can be tested, and make the actual WCF code just pass through to that.
Our WCF service gets all its dependencies from a Factory object, and we give the service a constructor which takes IFactory. So if we want to write a test which mocks one of the dependencies, say an IDependency, we only need to inject a mock factory, which is programmed to give mocked IDependency object back to the service.
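A rough sketch of that arrangement (the interface and member names here are invented):

public interface IFactory
{
    IDependency CreateDependency();
}

public class MyWcfService : IMyWcfService
{
    private readonly IFactory _factory;

    public MyWcfService(IFactory factory)
    {
        _factory = factory;
    }

    public void DoWork()
    {
        // All dependencies come through the single injected factory, so a test
        // only needs to supply a mock IFactory that hands back mocked dependencies.
        var dependency = _factory.CreateDependency();
        dependency.Execute();
    }
}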
If you're using an inversion of control (IoC) container, such as Unity, Autofac or Castle Windsor, and a mocking framework (e.g. Moq, NMock, Rhino Mocks), this is simple enough as long as you have the right design.
For a good tutorial on how to do it with RhinoMock and Windsor, take a look at this blog article - http://ayende.com/Blog/archive/2007/02/10/WCF-Mocking-and-IoC-Oh-MY.aspx
If you're using Castle Windsor, take a look at the WCF facility, it lets you use non-default constructor and inject dependencies into your services, among other things.
I am currently writing an open source SDK for a program that I use, and I'm using an IoC container internally (Ninject) to wire up all my internal dependencies.
I have some objects that are marked as internal so that I don't crowd the public API; they are only used internally and shouldn't be seen by the user - things like factories and other helpers. The problem I'm having is that Ninject can't create internal objects, which means that I have to mark all my internal objects public, which crowds the public API.
My question is: Is there someway to get around this problem or am I doing it all wrong?
PS. I have thought about using the InternalsVisibleTo attribute, but I feel like that is a bit of a smell.
Quick look at the other answers: it doesn't seem like you are doing something so different that there is something fundamentally wrong with Ninject that you would need to modify it or replace it. In many cases, you can't "go straight for [the] internals" because they rely upon unresolved dependency injection; hence the usage of Ninject in the first place. Also it sounds like you already do have an internal set of interfaces which is why the question was posed.
Thoughts: one problem with using Ninject directly in your SDK or library is that your users will then have to use Ninject in their code. This probably isn't an issue for you, because it is your IoC choice and you were going to use it anyway. But what if they want to use another IoC container? Then they effectively have two containers running, duplicating effort. Worse yet, what if they want to use Ninject v2 and you've used v1.5? That really complicates the situation.
Best case: if you can refactor your classes such that they get everything they need through Dependency Injection then this is the cleanest because the library code doesn't need any IoC container. The app can wire up the dependencies and it just flows. This isn't always possible though, as sometimes the library classes need to create instances which have dependencies that you can't resolve through injection.
Suggestion: the CommonServiceLocator (and the Ninject adapter for it) were specifically designed for this situation (libraries with dependencies). You code against the CommonServiceLocator, and then the application specifies which DI/IoC container actually backs the interface.
It is a bit of a pain in that now you have to have Ninject and the CommonServiceLocator in your app, but the CommonServiceLocator is quite lightweight. Your SDK/library code only uses the CommonServiceLocator which is fairly clean.
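Roughly, the split looks like this (IWidgetFactory and SdkModule are made-up names, and the adapter type comes from the Ninject adapter package, so its exact name may vary by version):

// Inside the SDK: only the abstraction is referenced.
var factory = ServiceLocator.Current.GetInstance<IWidgetFactory>();

// In the consuming application: back the abstraction with whatever container it prefers.
var kernel = new StandardKernel(new SdkModule());
ServiceLocator.SetLocatorProvider(() => new NinjectServiceLocator(kernel));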
I guess you don't even need that. IoC is for public stuff. Go straight for internals.
But - that's just my intuition...
Create a secondary, internal API which is different from the external API. You may need to do the split manually...
I'm going to vote for the InternalsVisibleTo solution. Totally not a smell, really. The point of the attribute is to enable the sort of behavior you are wanting, so rather than jumping through all sorts of elaborate hoops to make things work without it, just use the functionality provided by the framework for solving this particular problem.
I would also suggest, if you want to hide your choice of container from the user, using ILMerge to combine the Ninject assemblies with your SDK assembly, and apply the /internalize argument to change the visibility of the Ninject assemblies to internal, so the Ninject namespaces don't leak out of your library (sorry, couldn't find a link to the ILMerge docs online, but there is a doc file in the download). There is also this nice blog post about integrating ILMerge into your build process.
You can
modify Ninject
pick a different container