I'm using the Simple Injector Dependency Injection framework and it looks cool and nice. But after building a configuration and using it, I now want to know how to switch from one configuration to another.
Scenario: Let's imagine I've set up a configuration in Global.asax, and I have the public and global Container instance there. Now I want to write some tests, and I want them to use mock classes, so I want to change the configuration.
I can, of course, build another configuration and assign it to the global Container created by default, so that every time I run a test the alternative configuration is used. But in doing that, even though I'm in a development context, the Container is changed for everyone, even for normal requests. I know I'm testing in this context and that shouldn't matter, but I have the feeling this is not the right way to do it... so I wonder what the correct way is to switch from one configuration to another.
When doing unit tests, you shouldn't use the container at all. Just create the class under test by calling its constructor and supplying it with the proper mock objects.
One pattern that helped me out here a lot in the past is the use of a simple test class-specific factory method. This method centralizes the creation of the class under test and minimizes the number of changes that need to be made when the dependencies of the class under test change. This is how such a factory method could look:
private ClassUnderTest CreateValidClassUnderTest(params object[] dependencies)
{
    return new ClassUnderTest(
        dependencies.OfType<ILogger>().SingleOrDefault() ?? new FakeLogger(),
        dependencies.OfType<IMailSender>().SingleOrDefault() ?? new FakeMailer(),
        dependencies.OfType<IEventPublisher>().SingleOrDefault() ?? new FakePublisher());
}
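As an illustration of how the factory method is used, a test can pass just the dependency it cares about; everything else falls back to the fakes. The `ListLogger` test double, the `DoSomething` method, and the test name are assumptions for the sake of the sketch, not part of the original question:

```csharp
[TestMethod]
public void DoSomething_Always_LogsAMessage()
{
    // Arrange: only the logger is supplied; the mailer and publisher
    // fall back to the fakes inside the factory method.
    var logger = new ListLogger();
    ClassUnderTest sut = CreateValidClassUnderTest(logger);

    // Act
    sut.DoSomething();

    // Assert
    Assert.IsTrue(logger.Count > 0);
}
```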
For integration tests it's much more common to use the container, and to swap a few dependencies of the container. Still, those integration tests will not use the container that you created in your Application_Start; each integration test will most likely get its own new container instance, since each test should run in isolation. And even if you did use a single container from Application_Start, your integration tests are run from a separate project and won't interfere with your running application.
Although each integration test should get its own container instance (if any), you still want to reuse as much of the container configuration code as possible. This can be done by extracting this code to a method that either returns a new configured container instance when called, or configures a supplied container instance (and returns nothing). This method should typically do an incomplete configuration, and the caller (either your tests or Global.asax) should add the missing registrations.
Extracting this code: allows you to have multiple end applications that partly share the same configuration; allows you to verify the container in an integration test; and allows you to add services that need to be mocked by your integration tests.
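Such an extracted method could look like the following sketch. The `Bootstrapper` class name and all service/implementation names here are illustrative, not taken from the original question; the point is that the shared method deliberately leaves a 'hole' for each caller to fill:

```csharp
public static class Bootstrapper
{
    // Registers everything the application and the tests have in common.
    // Deliberately leaves ILogger unregistered, so each caller must
    // fill that gap before calling container.Verify().
    public static void BuildUp(Container container)
    {
        container.Register<IUserRepository, SqlUserRepository>();
        container.Register<IMailSender, SmtpMailSender>();
        // note: no ILogger registration here
    }
}

// In Global.asax:
//   var container = new Container();
//   Bootstrapper.BuildUp(container);
//   container.Register<ILogger, FileLogger>();
//   container.Verify();

// In an integration test:
//   var container = new Container();
//   Bootstrapper.BuildUp(container);
//   container.Register<ILogger, FakeLogger>();
```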
To make life easier, Simple Injector allows you to replace existing registrations with new ones (for instance mocked ones). You can enable this as follows:
container.Options.AllowOverridingRegistrations = true;
But be careful with this! This option can hide the fact that you accidentally override a registration. In my experience it is in most cases much better to build up an incomplete container and add the missing registrations afterwards instead of overriding them. Or if you decide to override, enable the feature at the last possible moment to prevent any accidental misconfigurations.
Related
We've been using Simple Injector with good success, in a fairly substantial application. We've been using constructor injection for all of our production classes, and configuring Simple Injector to populate everything, and everything's peachy.
We've not, though, used Simple Injector to manage the dependency trees for our unit tests. Instead, we've been new'ing up everything manually.
I just spent a couple of days working through a major refactoring, and nearly all of my time was in fixing these manually-constructed dependency trees in our unit tests.
This has me wondering - does anyone have any patterns they use to configure the dependency trees they use in unit tests? For us, at least, in our tests our dependency trees tend to be fairly simple, but there are a lot of them.
Anyone have a method they use to manage these?
For true unit tests (i.e. those which only test one class, and mock all of its dependencies), it doesn't make any sense to use a DI framework. In these tests:
if you find that you have a lot of repetitive code for newing up an instance of your class with all the mocks you've created, one useful strategy is to create all of your mocks and create the instance for the subject-under-test in your Setup method (these can all be private instance fields), and then each individual test's "arrange" area just has to call the appropriate Setup() code on the methods it needs to mock. This way, you end up with only one new PersonController(...) statement per test class.
if you're needing to create a lot of domain/data objects, it's useful to create Builder objects that start with sane values for testing. So instead of invoking a huge constructor all over your code, with a bunch of fake values, you're mostly just calling, e.g., var person = new PersonBuilder().Build(), possibly with just a few chained method calls for pieces of data that you specifically care about in that test. You may also be interested in AutoFixture, but I've never used it so I can't vouch for it.
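A minimal sketch of such a builder follows. The `Person` constructor and its parameters are assumptions for illustration; the idea is simply that `Build()` works with sane defaults and each `With...` method overrides one piece of data:

```csharp
public class PersonBuilder
{
    // Sane defaults, so most tests don't need to specify anything.
    private string _name = "Test Person";
    private int _age = 30;

    public PersonBuilder WithName(string name) { _name = name; return this; }
    public PersonBuilder WithAge(int age) { _age = age; return this; }

    public Person Build()
    {
        return new Person(_name, _age);
    }
}

// Usage: only override what the test actually cares about.
// var minor = new PersonBuilder().WithAge(17).Build();
```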
If you're writing integration tests, where you need to test the interaction between several parts of the system, but you still need to be able to mock specific pieces, consider creating Builder classes for your services, so you can say, e.g. var personController = new PersonControllerBuilder().WithRealDatabase(connection).WithAuthorization(new AllowAllAuthorizationService()).Build().
If you're writing end-to-end, or "scenario" tests, where you need to test the whole system, then it makes sense to set up your DI framework, leveraging the same configuration code that your real product uses. You can alter the configuration slightly to give yourself better programmatic control over things like which user is logged in and such. You can still leverage the other builder classes you've created for constructing data, too.
var user = new PersonBuilder().Build();
using (Login.As(user))
{
    var controller = Container.Get<PersonController>();
    var result = controller.GetCurrentUser();
    Assert.AreEqual(user.Username, result.Username);
}
Refrain from using your DI container within your unit tests. In unit tests, you try to test one class or module in isolation, and there is little use for a DI container in that area.
Things are different with integration testing, since you want to test how the components in your system integrate and work together. In that case you often use your production DI configuration and swap out some of your services for fake services (e.g. an EmailService), but stick as close to the real thing as you can. In this case you would typically use your Container to resolve the whole object graph.
The desire to use a DI container in the unit tests as well, often stems from ineffective patterns. For instance, in case you try to create the class under test with all its dependencies in each test, you get lots of duplicated initialization code, and a little change in your class under test can in that case ripple through the system and require you to change dozens of unit tests. This obviously causes maintainability problems.
One pattern that helped me out here a lot in the past is the use of a simple SUT-specific factory method. This method centralizes the creation of the class under test and minimizes the number of changes that need to be made when the dependencies of the class under test change. This is how such a factory method could look:
private ClassUnderTest CreateClassUnderTest(
    ILogger logger = null,
    IMailSender mailSender = null,
    IEventPublisher publisher = null)
{
    return new ClassUnderTest(
        logger ?? new FakeLogger(),
        mailSender ?? new FakeMailer(),
        publisher ?? new FakePublisher());
}
The factory method's arguments duplicate the class's constructor arguments, but it makes them all optional. For any particular dependency that is not supplied by the caller, a default fake implementation is injected.
This typically works very well, because in most tests you are just interested in one or two dependencies. The other dependencies might be required for the class to function, but might not be interesting for that specific test. The factory method, therefore, allows you to only supply the dependencies that are interesting for the test at hand, while removing the noise of unused dependencies from the test method. As an example using the factory method, here's a test method:
public void Doing_something_will_always_log_a_message()
{
    // Arrange
    var logger = new ListLogger();
    ClassUnderTest sut = CreateClassUnderTest(logger: logger);

    // Act
    sut.DoSomething();

    // Assert
    Assert.IsTrue(logger.Count > 0);
}
If you are interested in learning how to write Readable, Trustworthy and Maintainable (RTM) tests, Roy Osherove's book The Art of Unit Testing (second edition) is an excellent read. This has helped me tremendously in my understanding of writing great unit tests. If you’re interested in a deep-dive into Dependency Injection and its related patterns, consider reading Dependency Injection Principles, Practices, and Patterns (which I co-authored).
Okay, I know this sounds like a weird request but here is my problem. I want to write some integration tests for my WCF service; I have a few key paths that I want to ensure behave properly. One test is to make sure that the correct exceptions are thrown in key places and that they propagate up the pipeline correctly without being intercepted in the wrong place.
So to do this I am overriding an existing registration with a mock object that will throw the exception I want to test for at the location I want it thrown. That part works fine.
Next, I want to resolve my command handler (the system under test), invoke the handle method, and assert that the correct exception happens.
The issue is that when I resolve my command handler I actually get back a loooong chain of decorators with my command handler all the way at the bottom. At the very top of this chain sits a decorator that is my global exception handler. It is this exception handling decorator at the top that I need to unregister because it prevents me from being able to assert that the exception was thrown. My container bootstrapper is quite complex so I have absolutely no desire to recreate a copy of it in my test project minus this one decorator.
If it were just a standard registration I could simply override the registration with a mock exception handler that rethrows the exception. As far as I can tell, though, it does not seem to be possible to override a decorator's registration. I would prefer not to go that route, anyway. It just over complicates the test with an additional mock. It would be much better if I could just unregister the decorator.
If it is not possible to unregister a decorator what would be my next best solution? Add option flags to my bootstrapper to enable/disable certain registrations?
Thanks.
It's impossible to remove a registration in Simple Injector. You can replace an existing registration, but that method does not work when dealing with decorators. Decorators are added internally by Simple Injector by adding a delegate to the ExpressionBuilt event. Since the registered delegate is not stored anywhere, it is currently technically impossible to 'deregister' a decorator registration.
The way around this is to simply not register that decorator at all. This might sound silly, but this is a practice I use all the time, even with other containers.
What you can do, for instance, is extract the common part of your registrations to a separate method; let's call it BuildUp. This method leaves out the registrations that differ between the applications that use it. In your case you have at least 2 'applications': the real application and the integration test project. Both projects can call BuildUp and add extra registrations before or after calling it. For instance:
var container = new Container();
container.Options.DefaultScopedLifestyle = new WebRequestLifestyle();

container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(InnerMostCommandHandlerDecorator<>));

CompositionRoot.BuildUp(container);

container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(OuterMostCommandHandlerDecorator<>));
This approach seems to work very well in your case, since you want to add an 'outer most' decorator. Besides, letting BuildUp leave 'holes' in the registration often makes it very easy to see when an application forgets to fill in the blanks, since you can let Simple Injector fail fast by calling container.Verify().
Another common way is to pass a configuration object to the BuildUp method. This configuration object can contain the necessary information for making the right set of registrations as the caller requires. For instance, such a configuration object can have a simple boolean flag:
public static void BuildUp(Container container, ApplicationConfig config)
{
    // Lots of registrations

    if (config.AddGlobalExceptionHandler)
    {
        container.RegisterDecorator(typeof(ICommandHandler<>),
            typeof(GlobalExceptionHandlerCommandHandlerDecorator<>));
    }
}
Passing a configuration object to the BuildUp method is also a great way to decouple the BuildUp method from the configuration system. This allows you to more easily call BuildUp during integration testing, without being forced to have a copy of the complete configuration file in the test project.
Instead of using a flag property, you can also have the complete list of decorators inside the configuration object and let the BuildUp method iterate over it and register them. This allows the caller to remove or add decorators from the list before they are registered:
var config = new ApplicationConfig();

// Remove a decorator
config.CommandHandlerDecorators.Remove(
    typeof(AuthorizationCommandHandlerDecorator<>));

// Add a decorator after another decorator
config.CommandHandlerDecorators.Insert(
    index: 1 + config.CommandHandlerDecorators.IndexOf(
        typeof(TransactionCommandHandlerDecorator<>)),
    item: typeof(DeadlockRetryCommandHandlerDecorator<>));

// Add an outer most decorator
config.CommandHandlerDecorators.Add(
    typeof(TestPerformanceProfilingCommandHandlerDecorator<>));

CompositionRoot.BuildUp(container, config);
public static void BuildUp(Container container, ApplicationConfig config)
{
    // Lots of registrations here.

    config.CommandHandlerDecorators.ForEach(type =>
        container.RegisterDecorator(typeof(ICommandHandler<>), type));
}
I've used all three methods in the past very successfully. Which option to choose depends on your needs.
As far as I know it is not possible to remove any registration.
With unit testing you normally wouldn't use the container at all. Since you're performing integration tests, using the container is indeed a must.
I can think of 2 ways of doing what you want.
The first is passing some option flag to your bootstrapper which swaps between production and testing environment.
The second is thinking about your testing approach. From your question it seems that a certain part in your ICommandHandler chain should throw an exception.
I would think this is pretty simple to test using a normal unit test instead of an integration test. In this case you wouldn't use the container but create the chain by hand.
A unit test for your command handler would be as simple as:
[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void CommandHandlerThrowsCorrectException()
{
    var handler = new Decorator1(new Decorator2(new MyHandler()));
    handler.Handle(new MyCommand());
}
You can use other integration tests to check if a command handed to the WCF service results in building and handling the correct ICommandHandler chain.
I normally use the following test setup:
Use unit tests for testing all cases you have at hand, without using a container => which means at least one unit test per component in the application
Use unit tests for testing each separate decorator, without use of the container => e.g. does the GenericExceptionCommandHandlerDecorator in your case handle an exception correctly
Use unit tests for testing whether the registrations made to the container are correct and whether decorators are applied in the correct order, of course with the use of the container. Part of this job is already done by the container, if you use the built-in verification of the registrations you made, using container.Verify()
Use as few integration tests as possible, just to test whether the application works and flows as it should. Because each component and decorator is unit tested, the need to test the behavior of the application in integration tests is far smaller. There will always be scenarios where you want to mimic user interaction with the application, but these should be rare and mostly covered by unit tests.
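The registration-checking bullet above can be covered by a small test along these lines. This is a sketch: it reuses the CompositionRoot.BuildUp method from the earlier examples, and the ILogger/FakeLogger registration stands in for whatever 'holes' your own BuildUp leaves open:

```csharp
[TestMethod]
public void Container_CompletedWithTestRegistrations_VerifiesSuccessfully()
{
    // Arrange: build the shared, deliberately incomplete configuration
    var container = new Container();
    CompositionRoot.BuildUp(container);

    // Fill in the blanks the way this test project needs
    container.Register<ILogger, FakeLogger>();

    // Act + Assert: Verify() throws when a registration is missing
    // or a decorator cannot be applied
    container.Verify();
}
```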
I have to test an IoC service implemented in MyMobileApp.MyService.Module.
At the moment my unit test project MyMobileApp.MyService.Module.Tests has a reference to the IoC module MyMobileApp.MyService.Module.
CFTestRunner is used for debugging, and the Visual Studio test runner can also be used to run tests with output to a GUI window(???)
IMHO this architecture has drawbacks:
Services must be initialized manually in [TestInitialize]:
try
{
    var module = new Module(); // Drawback: what if 2 services should be used?
    module.AddServices();
}
catch (Exception) // Services must be added only once
{
    // In TestInitialize
    // ClassInitialize method is not executed if unit test is running under CFTestRunner???
}
this.myService = RootWorkItem.Services.Get();
To test another service implementation, I would have to change the reference and recompile the test project.
Therefore I have some ideas for improvement:
Is it a good idea to use a testable version of ProfileCatalog to allow the IoC container to add testing services automatically? How can this ProfileCatalog be correctly deployed on the device/emulator?
Is it a good idea to inject services into the test class (if that is possible)?
Also, where can I find sample code/projects/articles with good testing patterns? I mean patterns for testing IoC services.
Typically I put an interface on all services. This allows creation and injection of mock services into the ServicesCollection. My test project will then usually contain several implementations of the interface, allowing me to use whichever is appropriate for a given test. You then create and inject the appropriate items in the TestInitialize method or in the test class constructor. Your Module should pull dependency services by interface, so it will get your test instances when you want.
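A sketch of what that could look like in a test class follows. This assumes a CAB/SCSF-style WorkItem Services collection (Contains/Remove/Add/Get); IMyService and FakeMyService are illustrative names for your service interface and its test double:

```csharp
[TestInitialize]
public void Setup()
{
    // Swap in the test double before anything resolves the service.
    if (RootWorkItem.Services.Contains(typeof(IMyService)))
    {
        RootWorkItem.Services.Remove(typeof(IMyService));
    }

    RootWorkItem.Services.Add<IMyService>(new FakeMyService());

    // The module under test pulls the service by interface,
    // so it receives the fake instance.
    this.myService = RootWorkItem.Services.Get<IMyService>();
}
```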
If you look at the source for CFTestRunner, you can see I didn't do a lot of ancillary support beyond the TestMethod and TestInitialize attributes. You could certainly extend it if you need, but the reason it's so thin is because I never actually needed anything else.
You can't really test the direct Module loading because CFTestRunner isn't a SmartClientApplication so the IModuleInfoStore is never going to get loaded. I suppose you could create a derivative CFTestRunner that is a SmartClientApplication, which would allow you to create an attribute for your TestClass that would let you return an IModuleInfoStore.
I'm just wondering what the best practice is for rewiring the bindings in a kernel.
I have a class with a kernel and a private class module with the default production bindings.
For tests I want to override these bindings so I can swap in my Test Doubles / Mocks objects.
does
MyClass.Kernel.Load(new InlineModule(m=> m.Bind<IDepend>().To<TestDoubleDepend>()))
override any existing bindings for IDepend?
I try to use the DI kernel directly in my code as little as possible, instead relying on constructor injection (or properties in select cases, such as Attribute classes). Where I must, however, I use an abstraction layer, so that I can set the DI kernel object, making it mockable in unit tests.
For example:
public interface IDependencyResolver : IDisposable
{
    T GetImplementationOf<T>();
}

public static class DependencyResolver
{
    private static IDependencyResolver s_resolver;

    public static T GetImplementationOf<T>()
    {
        return s_resolver.GetImplementationOf<T>();
    }

    public static void RegisterResolver(IDependencyResolver resolver)
    {
        s_resolver = resolver;
    }

    public static void DisposeResolver()
    {
        s_resolver.Dispose();
    }
}
Using a pattern like this, you can set the IDependencyResolver from unit tests by calling RegisterResolver with a mock or fake implementation that returns whatever objects you want without having to wire up full modules. It also has a secondary benefit of abstracting your code from a particular IoC container, should you choose to switch to a different one in the future.
Naturally, you'd also want to add additional methods to IDependencyResolver as your needs dictate, I'm just including the basics here as an example. Yes, this would then require that you write a super simple wrapper around the Ninject kernel which implements IDependencyResolver as well.
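That wrapper can be very small. The following sketch adapts a Ninject kernel to the IDependencyResolver abstraction above, using Ninject's IKernel.Get&lt;T&gt;() resolution API; the ProductionModule name follows the question's examples and is otherwise an assumption:

```csharp
using Ninject;

// Minimal adapter from the Ninject kernel to IDependencyResolver.
public class NinjectDependencyResolver : IDependencyResolver
{
    private readonly IKernel _kernel;

    public NinjectDependencyResolver(IKernel kernel)
    {
        _kernel = kernel;
    }

    public T GetImplementationOf<T>()
    {
        return _kernel.Get<T>();
    }

    public void Dispose()
    {
        _kernel.Dispose();
    }
}

// At application startup:
// DependencyResolver.RegisterResolver(
//     new NinjectDependencyResolver(new StandardKernel(new ProductionModule())));
```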
The reason you want to do this is that your unit tests should really only be testing one thing; by using your actual IoC container, you're exercising more than the one class under test, which can lead to false negatives that make your tests brittle and (more importantly) shake developer faith in their accuracy over time. That can lead to test apathy and abandonment, since it becomes possible for tests to fail but the software to still work correctly ("don't worry, that one always fails, it's not a big deal").
I am just hoping something like this works
var kernel = new StandardKernel(new ProductionModule(config));
kernel.Rebind<ITimer>().To<TimerImpl>().InSingletonScope();
where ProductionModule contains my production bindings, and I override them by calling Rebind in the specific test case. I call Rebind only on the few items I need to replace.
ADVANTAGE: If anyone adds new bindings to the production module, I inherit them, so the tests won't break in this fashion, which can be nice. This all works in Guice in Java... hoping it works here too.
What I tend to do is have a separate test project complete with its own bindings -- I'm of course assuming that we're talking about unit tests of some sort. The test project uses its own kernel and loads the module in the test project into that kernel. The tests in the project are executed during CI builds and by full builds executed from a build script, though the tests are never deployed into production.
I realize your project/solution setup may not allow this sort of organization, but it seems to be pretty typical from what I've seen.
Peter Mayer's approach should be useful for unit tests, but IMHO, isn't it easier to just inject a mock manually using constructor/property injection?
It seems to me that using specific bindings for a test project is more useful for other kinds of tests (integration, functional), but even in that case you will surely need to change the bindings depending on the test.
My approach is some kind of mix of kronhrbaugh's and Hamish Smith's, creating a "dependency resolver" where you can register and unregister the modules to be used.
I would add a constructor to MyClass that accepts a Module.
This wouldn't be used in production but would be used in test.
In the test code I would pass a Module that defined the test doubles required.
For a project I am working on, I created separate modules for each environment (test, development, stage, production, etc.). Those modules define the bindings.
Because dev, stage and production all use many of the same bindings, I created a common module they all derive from. Each one then adds its environment-specific bindings.
I also have a KernelFactory that, when handed an environment token, will spool up an IKernel with the appropriate modules.
This allows me to switch my environment token which will in turn change all of my bindings automatically.
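Such a factory could look roughly like the following sketch. The environment enum, the CommonModule base, and the per-environment module names are illustrative stand-ins for whatever your project defines:

```csharp
using System.Collections.Generic;
using Ninject;
using Ninject.Modules;

public enum AppEnvironment { Test, Development, Stage, Production }

public static class KernelFactory
{
    // Builds a kernel whose bindings match the requested environment.
    public static IKernel Create(AppEnvironment environment)
    {
        var modules = new List<INinjectModule> { new CommonModule() };

        switch (environment)
        {
            case AppEnvironment.Test:
                modules.Add(new TestModule());
                break;
            case AppEnvironment.Development:
                modules.Add(new DevelopmentModule());
                break;
            case AppEnvironment.Stage:
                modules.Add(new StageModule());
                break;
            default:
                modules.Add(new ProductionModule());
                break;
        }

        return new StandardKernel(modules.ToArray());
    }
}
```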
But if this is for unit testing, I agree with the above comments that a simple constructor allowing manual binding is the way to go as it keeps Ninject out of your tests.