I have a fairly significant dependency graph for an object I want to test. What is the easiest way to resolve my dependencies without having to register mocks everywhere?
For example, I have a dependency graph like this:
PublicApi
ApiService
AccountingFacade
BillingService
BillingValidation
BillingRepository
UserService
UserRepository
I want to test PublicApi.CreateUser(), and I want it to run through all the code, but I want to mock the repositories so I don't have to write anything to the database. Should I just use a DI container and register all my services, replacing the repositories with mocks, then resolve PublicApi and run the method?
I was looking into AutoFixture, and it looks like it might be able to handle something like this, but I can't quite wrap my head around the whole 'Freeze' vs 'Register' distinction and its integration with Moq.
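From the examples I've seen, I think the idea is roughly something like this, but I'm not sure (untested sketch, and it assumes my repositories sit behind interfaces such as IUserRepository and IBillingRepository):

var fixture = new Fixture().Customize(new AutoMoqCustomization());

// Freeze: create one mock and have AutoFixture reuse that same instance
// anywhere the interface is requested while building the graph.
var userRepo = fixture.Freeze<Mock<IUserRepository>>();
var billingRepo = fixture.Freeze<Mock<IBillingRepository>>();

// AutoFixture builds PublicApi and the concrete services in between,
// injecting the frozen mocks wherever the repository interfaces appear.
var publicApi = fixture.Create<PublicApi>();

publicApi.CreateUser();

// Save/User are placeholders for whatever my repository actually exposes.
userRepo.Verify(r => r.Save(It.IsAny<User>()));

Is that the right way to use Freeze, or should I be using Register here?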
For unit tests you should only mock the direct dependencies. In your case you create PublicApi, inject a mock for ApiService, and verify that PublicApi calls the appropriate methods with the correct values on the ApiService mock.
You test all the other components the same way, isolated from their deeper dependencies.
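A rough sketch of that with Moq (assuming PublicApi takes an IApiService in its constructor and forwards CreateUser to it; adjust to your real signatures):

// Only the direct dependency is mocked; nothing below ApiService is involved.
var apiService = new Mock<IApiService>();

var publicApi = new PublicApi(apiService.Object);
publicApi.CreateUser("alice");

// Verify PublicApi forwarded the call with the expected value.
apiService.Verify(s => s.CreateUser("alice"), Times.Once());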
If you want to test the combination of several components, that isn't unit testing but rather integration testing. How to do that depends on how you are putting your classes together. For example, if you are using an IoC container, it probably supports replacing the configuration for the repositories in some way. In that case you can use the application's configuration and replace the repositories, and potentially also the views, with mocks.
This may not be helpful in the least but I will say it anyway.
It seems you are trying to test too much at once. Why not just test BillingService -> BillingValidation, then BillingService -> BillingRepository, and so on? That way you build up a suite of tests proving that each piece works, so by the time you are up at the PublicApi layer you only need to mock ApiService: everything beneath it is already tested, so there is no value in testing it again.
Generally I will only test one layer at a time, but I don't know your full scenario, so you may have something I am not accounting for. If that is the case and you really need to test all of this together, I would just bring in a simple and lightweight DI framework like Ninject.
That way you can bind all your types to mocks and then instantiate your PublicApi from the container.
With Ninject it would look something like:
Kernel.Bind<UserRepository>().ToConstant(YourMockUserRepositoryInstance);
Kernel.Bind<UserService>().ToConstant(YourMockUserServiceInstance);
Kernel.Bind<BillingRepository>().ToConstant(YourMockBillingRepositoryInstance);
Kernel.Bind<BillingValidation>().ToConstant(YourMockBillingValidationInstance);
Kernel.Bind<BillingService>().ToConstant(YourMockBillingServiceInstance);
Kernel.Bind<AccountingFacade>().ToConstant(YourMockAccountingFacadeInstance);
Kernel.Bind<ApiService>().ToConstant(YourMockApiServiceInstance);
Kernel.Bind<PublicApi>().ToSelf();

var publicApi = Kernel.Get<PublicApi>();
Although you have to ask yourself: what are you testing here? If it's just interactions, I would do as I first mentioned; if it's more, then maybe think about the latter option. Either way, I hope this gives you some options.
We've been using Simple Injector with good success, in a fairly substantial application. We've been using constructor injection for all of our production classes, and configuring Simple Injector to populate everything, and everything's peachy.
We've not, though, used Simple Injector to manage the dependency trees for our unit tests. Instead, we've been new'ing up everything manually.
I just spent a couple of days working through a major refactoring, and nearly all of my time was in fixing these manually-constructed dependency trees in our unit tests.
This has me wondering - does anyone have any patterns they use to configure the dependency trees in their unit tests? For us, at least, the dependency trees in our tests tend to be fairly simple, but there are a lot of them.
Anyone have a method they use to manage these?
For true unit tests (i.e. those which only test one class, and mock all of its dependencies), it doesn't make any sense to use a DI framework. In these tests:
if you find you have a lot of repetitive code newing up an instance of your class with all the mocks you've created, one useful strategy is to create all of your mocks and the subject-under-test instance in your Setup method (these can all be private instance fields); each individual test's "arrange" section then only has to call the appropriate Setup() code on the methods it needs to mock. This way, you end up with only one new PersonController(...) statement per test class (see the sketch after this list).
if you need to create a lot of domain/data objects, it's useful to create Builder objects that start with sane values for testing. Instead of invoking a huge constructor all over your code with a bunch of fake values, you're mostly just calling, e.g., var person = new PersonBuilder().Build(), possibly with a few chained method calls for the pieces of data you specifically care about in that test. You may also be interested in AutoFixture, but I've never used it so I can't vouch for it.
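Here's a minimal sketch of that Setup pattern (PersonController, IPersonRepository, and the NUnit-style attributes are assumptions; substitute whatever your classes and test framework actually use):

public class PersonControllerTests
{
    // Mocks and the subject under test live in private fields...
    private Mock<IPersonRepository> _repository;
    private PersonController _controller;

    [SetUp]
    public void Setup()
    {
        // ...and are rebuilt before each test here, so this is the only
        // place in the class that calls the PersonController constructor.
        _repository = new Mock<IPersonRepository>();
        _controller = new PersonController(_repository.Object);
    }

    [Test]
    public void GetPerson_returns_person_from_repository()
    {
        // Arrange: only configure the members this particular test cares about.
        _repository.Setup(r => r.FindById(42)).Returns(new Person { Id = 42 });

        // Act
        var result = _controller.GetPerson(42);

        // Assert
        Assert.AreEqual(42, result.Id);
    }
}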
If you're writing integration tests, where you need to test the interaction between several parts of the system but you still need to be able to mock specific pieces, consider creating Builder classes for your services, so you can say, e.g., var personController = new PersonControllerBuilder().WithRealDatabase(connection).WithAuthorization(new AllowAllAuthorizationService()).Build().
If you're writing end-to-end, or "scenario" tests, where you need to test the whole system, then it makes sense to set up your DI framework, leveraging the same configuration code that your real product uses. You can alter the configuration slightly to give yourself better programmatic control over things like which user is logged in and such. You can still leverage the other builder classes you've created for constructing data, too.
var user = new PersonBuilder().Build();
using (Login.As(user))
{
    var controller = Container.Get<PersonController>();
    var result = controller.GetCurrentUser();
    Assert.AreEqual(user.Username, result.Username);
}
Refrain from using your DI container within your unit tests. In unit tests, you try to test one class or module in isolation, and there is little use for a DI container in that area.
Things are different with integration testing, since there you want to test how the components in your system integrate and work together. In that case you often use your production DI configuration and swap out some of your services for fakes (e.g. an EmailService), but stick as close to the real thing as you can. In this case you would typically use your container to resolve the whole object graph.
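With Simple Injector, which the question uses, that can look roughly like the sketch below. The Bootstrapper entry point, IEmailService/FakeEmailService, and OrderProcessor are assumptions standing in for your own composition root and types:

// Build the container exactly as production does (assumed entry point).
var container = Bootstrapper.BuildContainer();

// Allow the test to override individual registrations.
container.Options.AllowOverridingRegistrations = true;

// Swap the real mail service for a fake; everything else stays real.
container.Register<IEmailService, FakeEmailService>(Lifestyle.Singleton);

container.Verify();

// Resolve the whole object graph and exercise it.
var processor = container.GetInstance<OrderProcessor>();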
The desire to use a DI container in the unit tests as well often stems from ineffective patterns. For instance, if you create the class under test with all its dependencies in each test, you get lots of duplicated initialization code, and a small change to the class under test can then ripple through the test suite and require you to change dozens of unit tests. This obviously causes maintainability problems.
One pattern that has helped me out a lot in the past is the use of a simple SUT-specific factory method. This method centralizes the creation of the class under test and minimizes the number of changes that need to be made when the dependencies of the class under test change. This is what such a factory method could look like:
private ClassUnderTest CreateClassUnderTest(
    ILogger logger = null,
    IMailSender mailSender = null,
    IEventPublisher publisher = null)
{
    return new ClassUnderTest(
        logger ?? new FakeLogger(),
        mailSender ?? new FakeMailer(),
        publisher ?? new FakePublisher());
}
The factory method's arguments mirror the class's constructor arguments, but it makes them all optional. For any dependency that is not supplied by the caller, a default fake implementation is injected.
This typically works very well, because in most tests you are just interested in one or two dependencies. The other dependencies might be required for the class to function, but might not be interesting for that specific test. The factory method, therefore, allows you to only supply the dependencies that are interesting for the test at hand, while removing the noise of unused dependencies from the test method. As an example using the factory method, here's a test method:
public void Doing_something_will_always_log_a_message()
{
    // Arrange
    var logger = new ListLogger();
    ClassUnderTest sut = CreateClassUnderTest(logger: logger);

    // Act
    sut.DoSomething();

    // Assert
    Assert.IsTrue(logger.Count > 0);
}
If you are interested in learning how to write Readable, Trustworthy and Maintainable (RTM) tests, Roy Osherove's book The Art of Unit Testing (second edition) is an excellent read. This has helped me tremendously in my understanding of writing great unit tests. If you’re interested in a deep-dive into Dependency Injection and its related patterns, consider reading Dependency Injection Principles, Practices, and Patterns (which I co-authored).
I'm in the process of creating unit tests for an existing series of classes that use Spring.NET's IApplicationContext to resolve types, and I want to mock the dependencies that Spring.NET resolves.
I'm having problems getting Spring.NET to use my mocked objects, basically getting ContextRegistry.GetContext() to return them instead of what the actual application uses.
The best solution I've found so far is something similar to this, but it is not very clean to manage in the actual unit test code; defining my own IApplicationContext on the fly and registering it in the registry has the same problem.
My question is whether I'm missing some framework that ties these things together or some pattern I can use that would allow me to define things easily.
Your classes are using IApplicationContext as a service locator. Many consider Service Locator an anti-pattern. I suggest you form your own opinion on this; if you agree that it is an anti-pattern, you can take this opportunity to factor out the dependency on IApplicationContext and replace it with explicit dependencies on the objects your class needs.
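A rough before/after sketch of that refactoring (UserImporter, IUserRepository, and the object name passed to GetObject are made-up examples, not your actual code):

// Before: the class pulls its dependency out of the Spring.NET context (service locator),
// so a test cannot substitute a mock without going through ContextRegistry.
public class UserImporter
{
    public void Import()
    {
        var repository = (IUserRepository)ContextRegistry.GetContext().GetObject("UserRepository");
        // ... use repository ...
    }
}

// After: the dependency is explicit, so a test can hand in a mock directly,
// without involving Spring.NET at all.
public class UserImporter
{
    private readonly IUserRepository repository;

    public UserImporter(IUserRepository repository)
    {
        this.repository = repository;
    }

    public void Import()
    {
        // ... use this.repository ...
    }
}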
If you (have to) stick to the current approach, then I'm afraid it's really difficult to get a clean solution. In your situation, I'd configure my containers specifically for testing (perhaps including mocks like those described in your linked blog post) and use Spring.NET's unit test support for easy wiring of my test objects. But I'd feel really uncomfortable, like you...
(C#, WCF service, Rhino Mocks, MbUnit)
I have been writing tests for code already in place (yes, I know that's the wrong way around, but that's how it has worked out on my current contract). I've done quite a bit of refactoring to support mocking - injecting dependencies, adding additional interfaces, etc. - all of which has improved the design. Generally my testing experience has been going well (exposing fragility and improving decoupling). For any object I have been creating the dependent mocks, and this sits well with me and makes sense.
The app essentially has 4 physical layers: the database, a repository layer for data access, and a WCF service connected to the repository via a management (or business logic) layer. Top down it looks like this:
WCF
Managers
Repository
Database
Testing the managers and the repository layer has been simple enough: mock the dependencies with Rhino Mocks and inject them into the layer under test.
My problem is in testing the top WCF layer. As my service doesn't have a constructor that allows me to inject the dependencies, I'm not sure how to go about mocking a dependency when testing the public methods (service contracts) on the service.
I hope that made sense, and any help is greatly appreciated. I am aware of TypeMock Isolator etc., but I really don't want to go down that route, both for budget and other reasons I won't go into here. Besides, I'm sure there are plenty of clever 'Stackers' who have the info I need.
Thanks in advance.
Is there a specific reason you can't have a constructor on your service?
If not, you can have overloaded constructors: a default constructor that wires up the defaults, and a parameterized constructor that takes your dependencies. You can then test through the parameterized constructor and rely on the default constructor for creating the instance in production.
// Default constructor used in production: wires up the real dependencies.
public MyService() : this(new DefaultDep1(), new DefaultDep2())
{
}

// Parameterized constructor used by tests: inject mocks here.
public MyService(IDep1 d1, IDep2 d2)
{
}
A better solution, if you use dependency injection, is to use the WCF IInstanceProvider interface to create your service instance and supply the needed dependencies through that injection point. An example using StructureMap can be found here.
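For orientation, an IInstanceProvider backed by a container looks roughly like this sketch (the ContainerInstanceProvider name and the IContainer field are illustrative; wiring it into WCF additionally requires a service behavior, as the linked example shows):

public class ContainerInstanceProvider : IInstanceProvider
{
    private readonly IContainer container;
    private readonly Type serviceType;

    public ContainerInstanceProvider(IContainer container, Type serviceType)
    {
        this.container = container;
        this.serviceType = serviceType;
    }

    // WCF asks for the service instance; the container builds it with all
    // constructor dependencies resolved (mocks in tests, real ones in production).
    public object GetInstance(InstanceContext instanceContext, Message message)
    {
        return container.GetInstance(serviceType);
    }

    public object GetInstance(InstanceContext instanceContext)
    {
        return GetInstance(instanceContext, null);
    }

    public void ReleaseInstance(InstanceContext instanceContext, object instance)
    {
        // Nothing to clean up in this sketch.
    }
}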
You could make the WCF services a thin layer over the 'managers', so they have little or no logic in them that needs testing. Then don't test them, and just test the managers. Alternatively, you could achieve a similar effect by adding another 'service' layer that contains the logic from the services and can be tested, and make the actual WCF code just pass through to it.
Our WCF service gets all its dependencies from a factory object, and we give the service a constructor which takes an IFactory. So if we want to write a test which mocks one of the dependencies, say an IDependency, we only need to inject a mock factory which is programmed to give the mocked IDependency object back to the service.
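A rough sketch of that arrangement (the interface and member names here are illustrative, not our actual code), using Rhino Mocks as in the question:

public interface IDependency
{
    void Execute();
}

public interface IFactory
{
    IDependency CreateDependency();
}

public class MyService
{
    private readonly IFactory factory;

    public MyService(IFactory factory)
    {
        this.factory = factory;
    }

    public void DoWork()
    {
        // The service asks the factory for whatever it needs, when it needs it.
        factory.CreateDependency().Execute();
    }
}

[Test]
public void DoWork_executes_the_dependency()
{
    // The mock factory is programmed to hand back a mock dependency.
    var dependency = MockRepository.GenerateMock<IDependency>();
    var factory = MockRepository.GenerateMock<IFactory>();
    factory.Stub(f => f.CreateDependency()).Return(dependency);

    var service = new MyService(factory);
    service.DoWork();

    dependency.AssertWasCalled(d => d.Execute());
}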
If you're using an inversion of control (IoC) container, such as Unity, Autofac or Castle Windsor, and a mocking framework (e.g. Moq, NMock, Rhino Mocks), this is simple enough as long as you have the right design.
For a good tutorial on how to do it with Rhino Mocks and Windsor, take a look at this blog article - http://ayende.com/Blog/archive/2007/02/10/WCF-Mocking-and-IoC-Oh-MY.aspx
If you're using Castle Windsor, take a look at the WCF facility, it lets you use non-default constructor and inject dependencies into your services, among other things.
I'm just wondering what the best practice is for rewiring the bindings in a kernel.
I have a class with a kernel and a private class module with the default production bindings.
For tests I want to override these bindings so I can swap in my test doubles / mock objects.
does
MyClass.Kernel.Load(new InlineModule(m=> m.Bind<IDepend>().To<TestDoubleDepend>()))
override any existing bindings for IDepend?
I try to use the DI kernel directly in my code as little as possible, relying instead on constructor injection (or property injection in select cases, such as Attribute classes). Where I must use it, I go through an abstraction layer, so that I can set the DI kernel object and make it mockable in unit tests.
For example:
public interface IDependencyResolver : IDisposable
{
    T GetImplementationOf<T>();
}

public static class DependencyResolver
{
    private static IDependencyResolver s_resolver;

    public static T GetImplementationOf<T>()
    {
        return s_resolver.GetImplementationOf<T>();
    }

    public static void RegisterResolver(IDependencyResolver resolver)
    {
        s_resolver = resolver;
    }

    public static void DisposeResolver()
    {
        s_resolver.Dispose();
    }
}
Using a pattern like this, you can set the IDependencyResolver from unit tests by calling RegisterResolver with a mock or fake implementation that returns whatever objects you want without having to wire up full modules. It also has a secondary benefit of abstracting your code from a particular IoC container, should you choose to switch to a different one in the future.
Naturally, you'd also want to add additional methods to IDependencyResolver as your needs dictate; I'm just including the basics here as an example. Yes, this would then require that you write a super simple wrapper around the Ninject kernel which implements IDependencyResolver as well.
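That wrapper can be almost trivial; a sketch (the NinjectDependencyResolver name is made up, and it assumes the standard Ninject kernel API):

// Thin adapter from the Ninject kernel to the IDependencyResolver abstraction above.
public class NinjectDependencyResolver : IDependencyResolver
{
    private readonly IKernel kernel;

    public NinjectDependencyResolver(IKernel kernel)
    {
        this.kernel = kernel;
    }

    public T GetImplementationOf<T>()
    {
        return kernel.Get<T>();
    }

    public void Dispose()
    {
        kernel.Dispose();
    }
}

Production code registers it once at startup, e.g. DependencyResolver.RegisterResolver(new NinjectDependencyResolver(kernel)), while unit tests call RegisterResolver with a mock or fake IDependencyResolver instead.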
The reason you want to do this is that your unit tests should really only be testing one thing and by using your actual IoC container, you're really exercising more than the one class under test, which can lead to false negatives that make your tests brittle and (more importantly) shake developer faith in their accuracy over time. This can lead to test apathy and abandonment since it becomes possible for tests to fail but the software to still work correctly ("don't worry, that one always fails, it's not a big deal").
I am just hoping something like this works
var kernel = new StandardKernel(new ProductionModule(config));
kernel.Rebind<ITimer>().To<TimerImpl>().InSingletonScope();
where ProductionModule contains my production bindings, and I override by calling Rebind in the specific test case, on just the few items I need to replace.
ADVANTAGE: If anyone adds new bindings to the production module, I inherit them, so tests won't break in that fashion, which can be nice. This all works in Guice in Java... hoping it works here too.
What I tend to do is have a separate test project complete with its own bindings - I'm of course assuming that we're talking about unit tests of some sort. The test project uses its own kernel and loads the module in the test project into that kernel. The tests in the project are executed during CI builds and by full builds executed from a build script, though the tests are never deployed into production.
I realize your project/solution setup may not allow this sort of organization, but it seems to be pretty typical from what I've seen.
Peter Mayer's approach should be useful for unit tests, but IMHO, isn't it easier to just manually inject a mock using constructor/property injection?
It seems to me that using specific bindings for a test project is more useful for other kinds of tests (integration, functional), but even in those cases you will surely need to change the bindings depending on the test.
My approach is some kind of a mix of kronhrbaugh's and Hamish Smith's: creating a "dependency resolver" where you can register and unregister the modules to be used.
I would add a constructor to MyClass that accepts a Module.
This wouldn't be used in production but would be used in test.
In the test code I would pass a Module that defined the test doubles required.
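Something along these lines (a sketch; ProductionModule and the IDepend/TestDoubleDepend bindings are made-up names, and the exact module type depends on your Ninject version):

public class MyClass
{
    // Production path: build the kernel from the default bindings.
    public MyClass() : this(new ProductionModule())
    {
    }

    // Test path: hand in a module that binds the test doubles instead.
    public MyClass(IModule module)
    {
        Kernel = new StandardKernel(module);
    }

    public IKernel Kernel { get; private set; }
}

// In a test:
var myClass = new MyClass(new InlineModule(m => m.Bind<IDepend>().To<TestDoubleDepend>()));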
For a project I am working on, I created separate modules for each environment (test, development, stage, production, etc.). Those modules define the bindings.
Because dev, stage and production all use many of the same bindings, I created a common module they all derive from. Each one then adds its environment-specific bindings.
I also have a KernelFactory that, when handed an environment token, will spin up an IKernel with the appropriate modules.
This allows me to switch my environment token which will in turn change all of my bindings automatically.
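A rough sketch of that factory (the DeploymentEnvironment enum and module names are illustrative):

public enum DeploymentEnvironment
{
    Test,
    Development,
    Stage,
    Production
}

public static class KernelFactory
{
    public static IKernel Create(DeploymentEnvironment environment)
    {
        // Dev/stage/production modules each derive from a common module and
        // add their own environment-specific bindings; test stands alone.
        switch (environment)
        {
            case DeploymentEnvironment.Test:
                return new StandardKernel(new TestModule());
            case DeploymentEnvironment.Development:
                return new StandardKernel(new DevelopmentModule());
            case DeploymentEnvironment.Stage:
                return new StandardKernel(new StageModule());
            default:
                return new StandardKernel(new ProductionModule());
        }
    }
}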
But if this is for unit testing, I agree with the above comments that a simple constructor allowing manual binding is the way to go as it keeps Ninject out of your tests.