Ninject kernel binding overrides - c#

I'm just wondering what the best practice is for rewiring the bindings in a kernel.
I have a class with a kernel and a private class module with the default production bindings.
For tests I want to override these bindings so I can swap in my Test Doubles / Mocks objects.
Does
MyClass.Kernel.Load(new InlineModule(m=> m.Bind<IDepend>().To<TestDoubleDepend>()))
override any existing bindings for IDepend?

I try to use the DI kernel directly in my code as little as possible, instead relying on constructor injection (or properties in select cases, such as Attribute classes). Where I must, however, I use an abstraction layer, so that I can set the DI kernel object, making it mockable in unit tests.
For example:
public interface IDependencyResolver : IDisposable
{
    T GetImplementationOf<T>();
}

public static class DependencyResolver
{
    private static IDependencyResolver s_resolver;

    public static T GetImplementationOf<T>()
    {
        return s_resolver.GetImplementationOf<T>();
    }

    public static void RegisterResolver( IDependencyResolver resolver )
    {
        s_resolver = resolver;
    }

    public static void DisposeResolver()
    {
        s_resolver.Dispose();
    }
}
Using a pattern like this, you can set the IDependencyResolver from unit tests by calling RegisterResolver with a mock or fake implementation that returns whatever objects you want without having to wire up full modules. It also has a secondary benefit of abstracting your code from a particular IoC container, should you choose to switch to a different one in the future.
Naturally, you'd also want to add additional methods to IDependencyResolver as your needs dictate, I'm just including the basics here as an example. Yes, this would then require that you write a super simple wrapper around the Ninject kernel which implements IDependencyResolver as well.
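As a rough sketch of that wrapper (assuming Ninject's `kernel.Get<T>()` resolution method; the class name is illustrative):

```csharp
// Hypothetical adapter: wraps a Ninject kernel behind IDependencyResolver
// so the rest of the code never references Ninject directly.
public class NinjectDependencyResolver : IDependencyResolver
{
    private readonly IKernel m_kernel;

    public NinjectDependencyResolver( IKernel kernel )
    {
        m_kernel = kernel;
    }

    public T GetImplementationOf<T>()
    {
        return m_kernel.Get<T>(); // Ninject resolves T from its bindings
    }

    public void Dispose()
    {
        m_kernel.Dispose();
    }
}
```

At application start you would call `DependencyResolver.RegisterResolver(new NinjectDependencyResolver(kernel))`; in a unit test you would register a fake `IDependencyResolver` instead.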
The reason you want to do this is that your unit tests should really only be testing one thing and by using your actual IoC container, you're really exercising more than the one class under test, which can lead to false negatives that make your tests brittle and (more importantly) shake developer faith in their accuracy over time. This can lead to test apathy and abandonment since it becomes possible for tests to fail but the software to still work correctly ("don't worry, that one always fails, it's not a big deal").

I am just hoping something like this works
var kernel = new StandardKernel(new ProductionModule(config));
kernel.Rebind<ITimer>().To<TimerImpl>().InSingletonScope();
where ProductionModule contains my production bindings, and I override the few bindings I need by calling Rebind in the specific test case.
ADVANTAGE: If anyone adds new bindings to the production module, I inherit them, so it won't break in this fashion, which can be nice. This all works in Guice in Java... hoping it works here too.

What I tend to do is have a separate test project complete with its own bindings -- I'm of course assuming that we're talking about unit tests of some sort. The test project uses its own kernel and loads the module in the test project into that kernel. The tests in the project are executed during CI builds and by full builds executed from a build script, though the tests are never deployed into production.
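As a sketch (assuming Ninject 2.x module syntax; the type names are made up), the test project might contain something like:

```csharp
// Test-only module living in the test project; binds interfaces
// to test doubles instead of production implementations.
public class TestBindingsModule : NinjectModule
{
    public override void Load()
    {
        Bind<IDepend>().To<TestDoubleDepend>();
    }
}

// Each test fixture builds its own kernel from the test module:
var kernel = new StandardKernel(new TestBindingsModule());
var subject = kernel.Get<ClassUnderTest>();
```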
I realize your project/solution setup may not allow this sort of organization, but it seems to be pretty typical from what I've seen.

Peter Mayer's approach should be useful for unit tests, but IMHO, isn't it easier to just manually inject a mock using constructor/property injection?
Seems to me that using specific bindings for a test project is more useful for other kinds of tests (integration, functional), but even in those cases you'll surely need to change the bindings depending on the test.
My approach is some kind of mix of kronhrbaugh's and Hamish Smith's: creating a "dependency resolver" where you can register and unregister the modules to be used.

I would add a constructor to MyClass that accepts a Module.
This wouldn't be used in production but would be used in test.
In the test code I would pass a Module that defined the test doubles required.
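A rough sketch of that idea (the names and module types are illustrative):

```csharp
public class MyClass
{
    private readonly IKernel m_kernel;

    // Production path: use the default bindings.
    public MyClass() : this(new ProductionModule()) { }

    // Test path: the test passes a module containing test doubles.
    public MyClass(INinjectModule module)
    {
        m_kernel = new StandardKernel(module);
    }
}
```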

For a project I am working on, I created separate modules for each environment (test, development, stage, production, etc.). Those modules define the bindings.
Because dev, stage and production all use many of the same bindings, I created a common module they all derive from. Each one then adds its environment-specific bindings.
I also have a KernelFactory, that when handed an environment token will spool up an IKernel with the appropriate modules.
This allows me to switch my environment token which will in turn change all of my bindings automatically.
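A minimal sketch of such a factory (names are illustrative; the original derives environment modules from a common base, while this sketch simply loads a shared module alongside the environment-specific one):

```csharp
public enum AppEnvironment { Test, Development, Stage, Production }

// Hypothetical factory: maps an environment token to the modules
// that get loaded into the kernel.
public static class KernelFactory
{
    public static IKernel Create(AppEnvironment environment)
    {
        switch (environment)
        {
            case AppEnvironment.Test:
                return new StandardKernel(new CommonModule(), new TestModule());
            case AppEnvironment.Development:
                return new StandardKernel(new CommonModule(), new DevelopmentModule());
            case AppEnvironment.Stage:
                return new StandardKernel(new CommonModule(), new StageModule());
            case AppEnvironment.Production:
                return new StandardKernel(new CommonModule(), new ProductionModule());
            default:
                throw new ArgumentOutOfRangeException("environment");
        }
    }
}
```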
But if this is for unit testing, I agree with the above comments that a simple constructor allowing manual binding is the way to go as it keeps Ninject out of your tests.


Should I use a dependency injection container in Unit Tests? [duplicate]

We've been using Simple Injector with good success, in a fairly substantial application. We've been using constructor injection for all of our production classes, and configuring Simple Injector to populate everything, and everything's peachy.
We've not, though, used Simple Injector to manage the dependency trees for our unit tests. Instead, we've been new'ing up everything manually.
I just spent a couple of days working through a major refactoring, and nearly all of my time was in fixing these manually-constructed dependency trees in our unit tests.
This has me wondering - does anyone have any patterns they use to configure the dependency trees they use in unit tests? For us, at least, in our tests our dependency trees tend to be fairly simple, but there are a lot of them.
Anyone have a method they use to manage these?
For true unit tests (i.e. those which only test one class, and mock all of its dependencies), it doesn't make any sense to use a DI framework. In these tests:
if you find that you have a lot of repetitive code for newing up an instance of your class with all the mocks you've created, one useful strategy is to create all of your mocks and create the instance for the subject-under-test in your Setup method (these can all be private instance fields), and then each individual test's "arrange" area just has to call the appropriate Setup() code on the methods it needs to mock. This way, you end up with only one new PersonController(...) statement per test class.
if you're needing to create a lot of domain/data objects, it's useful to create Builder objects that start with sane values for testing. So instead of invoking a huge constructor all over your code, with a bunch of fake values, you're mostly just calling, e.g., var person = new PersonBuilder().Build(), possibly with just a few chained method calls for pieces of data that you specifically care about in that test. You may also be interested in AutoFixture, but I've never used it so I can't vouch for it.
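A minimal sketch of such a Builder (the `Person` type and its fields are hypothetical):

```csharp
// Hypothetical builder: starts from sane default values and lets a
// test override only the fields it actually cares about.
public class PersonBuilder
{
    private string m_name = "Test Person";
    private int m_age = 30;

    public PersonBuilder WithName(string name) { m_name = name; return this; }
    public PersonBuilder WithAge(int age)      { m_age = age;   return this; }

    public Person Build()
    {
        return new Person(m_name, m_age);
    }
}

// Usage: only the value under test is spelled out.
var person = new PersonBuilder().WithAge(17).Build();
```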
If you're writing integration tests, where you need to test the interaction between several parts of the system, but you still need to be able to mock specific pieces, consider creating Builder classes for your services, so you can say, e.g. var personController = new PersonControllerBuilder().WithRealDatabase(connection).WithAuthorization(new AllowAllAuthorizationService()).Build().
If you're writing end-to-end, or "scenario" tests, where you need to test the whole system, then it makes sense to set up your DI framework, leveraging the same configuration code that your real product uses. You can alter the configuration slightly to give yourself better programmatic control over things like which user is logged in and such. You can still leverage the other builder classes you've created for constructing data, too.
var user = new PersonBuilder().Build();
using (Login.As(user))
{
    var controller = Container.Get<PersonController>();
    var result = controller.GetCurrentUser();
    Assert.AreEqual(result.Username, user.Username);
}
Refrain from using your DI container within your unit tests. In unit tests, you try to test one class or module in isolation, and there is little use for a DI container in that area.
Things are different with integration testing, since you want to test how the components in your system integrate and work together. In that case you often use your production DI configuration and swap out some of your services for fake services (e.g. an EmailService) but stick as close to the real thing as you can. In this case you would typically use your Container to resolve the whole object graph.
The desire to use a DI container in the unit tests as well often stems from ineffective patterns. For instance, when you create the class under test with all its dependencies in each test, you get lots of duplicated initialization code, and a little change in your class under test can ripple through the system and require you to change dozens of unit tests. This obviously causes maintainability problems.
One pattern that helped me out here a lot in the past is the use of a simple SUT-specific factory method. This method centralizes the creation of the class under test and minimizes the amount of changes that need to be made when the dependencies of the class under test change. This is how such a factory method could look:
private ClassUnderTest CreateClassUnderTest(
    ILogger logger = null,
    IMailSender mailSender = null,
    IEventPublisher publisher = null)
{
    return new ClassUnderTest(
        logger ?? new FakeLogger(),
        mailSender ?? new FakeMailer(),
        publisher ?? new FakePublisher());
}
The factory method's parameters duplicate the class's constructor arguments, but are all optional. For any dependency not supplied by the caller, a default fake implementation is injected.
This typically works very well, because in most tests you are just interested in one or two dependencies. The other dependencies might be required for the class to function, but might not be interesting for that specific test. The factory method, therefore, allows you to only supply the dependencies that are interesting for the test at hand, while removing the noise of unused dependencies from the test method. As an example using the factory method, here's a test method:
public void Doing_something_will_always_log_a_message()
{
    // Arrange
    var logger = new ListLogger();
    ClassUnderTest sut = CreateClassUnderTest(logger: logger);

    // Act
    sut.DoSomething();

    // Assert
    Assert.IsTrue(logger.Count > 0);
}
If you are interested in learning how to write Readable, Trustworthy and Maintainable (RTM) tests, Roy Osherove's book The Art of Unit Testing (second edition) is an excellent read. This has helped me tremendously in my understanding of writing great unit tests. If you’re interested in a deep-dive into Dependency Injection and its related patterns, consider reading Dependency Injection Principles, Practices, and Patterns (which I co-authored).

How should I test IoC services properly?

I have to test an IoC service implemented in MyMobileApp.MyService.Module.
At the moment my unit test project MyMobileApp.MyService.Module.Tests has a reference to the IoC module MyMobileApp.MyService.Module.
CFTestRunner is used for debugging, and the Visual Studio test runner can also be used to run tests with output to a GUI window(???)
IMHO this architecture has drawbacks:
Services must be initialized manually in [TestInitialize]:
try
{
    var module = new Module(); // Drawback: what if 2 services should be used?
    module.AddServices();
}
catch (Exception) // Services must be added only once
{
    // In TestInitialize
    // ClassInitialize method is not executed if unit test is running under CFTestRunner???
}
this.myService = RootWorkItem.Services.Get();
To test another service implementation I should change the reference and recompile the test project.
Therefore I have some improvement ideas:
Is it good to use a testable version of ProfileCatalog to allow the IoC container to add testing services automatically? How can this ProfileCatalog be correctly deployed on a device/emulator?
Is it good to inject services into the test class (if possible)?
Also, where can I find sample code/project(s)/articles with good testing patterns - I mean, testing IoC services?
Typically I interface all services. This allows creation and injection of mock services into the ServicesCollection. My Test project will then usually contain several implementations of the interface, allowing me to use whichever is appropriate for a given test. You then create and inject the appropriate items in the TestInitialize or in the TestClass constructor. Your Module should pull dependency services by interface, so it will get your test instances when you want.
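A sketch of what that registration might look like (assuming the CAB/SCSF WorkItem services API; MockMyService is a hypothetical test-project implementation of IMyService):

```csharp
[TestInitialize]
public void Setup()
{
    // Register a test double by interface so the module under test
    // resolves it instead of the real service. Guard against adding
    // the same service twice across test runs.
    if (RootWorkItem.Services.Get<IMyService>() == null)
    {
        RootWorkItem.Services.Add<IMyService>(new MockMyService());
    }
}
```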
If you look at the source for CFTestRunner, you can see I didn't do a lot of ancillary support beyond the TestMethod and TestInitialize attributes. You could certainly extend it if you need, but the reason it's so thin is because I never actually needed anything else.
You can't really test the direct Module loading because CFTestRunner isn't a SmartClientApplication so the IModuleInfoStore is never going to get loaded. I suppose you could create a derivative CFTestRunner that is a SmartClientApplication, which would allow you to create an attribute for your TestClass that would let you return an IModuleInfoStore.

What's wrong with globally instantiating services on app start, instead of Ninject dependency injection?

I'm currently using Ninject to handle DI on a C#/.NET/MVC application. When I trace the creation of instances of my services, I find that services are called and constructed quite a lot during the life cycle, so I'm having to instantiate services and cache them, and then check for cached services before instantiating another. (The constructors are sometimes quite heavy.)
To me this seems ridiculous, as the services do not need unique constructor arguments, so instantiating them once is enough for the entire application scope.
What I've done as a quick alternative (just for proof-of-concept for now to see if it even works) is...
Created a static class (called AppServices) with all my service interfaces as its properties.
Given this class an Init() method that instantiates a direct implementation of each service interface from my service library. This mimics binding them to a kernel if I was using Ninject (or other DI handler).
E.g.
public static class AppServices
{
    public static IMyService MyService;
    public static IMyOtherService MyOtherService;

    public static void Init()
    {
        MyService = new MyLib.MyService();
        MyOtherService = new MyLib.MyOtherService();
    }
}
On App_Start I call the Init() method to create a list of globally accessible services that are only instantiated once.
From then on, every time I need an instance of a service, I get it from AppServices. This way I don't have to keep constructing new instances that I don't need.
E.g.
IMyService _myService = AppServices.MyService;
This works fine and I haven't had ANY issues arise yet. My problem is that this seems way too simple. It is only a few lines of code, creating a static class in application scope. Given that it does exactly what I would need Ninject to do, but in (what seems to me, for my purposes) a much cleaner and better-performing way, why do I need Ninject? I mean, these complicated dependency injection handlers are created for a reason, right? There must be something wrong with my "simple" interpretation of DI, I just can't see it.
Can any one tell me why creating a global static container for my service instances is a bad idea, and maybe explain exactly what make Ninject (or any other DI handler) so necessary. I understand the concepts of DI so please don't try and explain what makes it so great. I know. I want to know exactly what it does under the hood that is so different to my App_Start method.
Thanks
Your question needs to be divided into two questions:
Is it really wrong to use the singleton pattern instead to inject dependencies?
Why do I need an IoC container?
1)
There are many reasons why you should not use the singleton pattern. Here are some of the major ones:
Testability
Yes, you can test with static instances. But you can't test isolated (FIRST). I have seen projects that spent a long time searching for why tests started failing for no obvious reason, until they realized it was because the tests were run in a different order. Once you have had that problem, you will always want your tests to be as isolated as possible. Static values couple tests.
This gets even worse when you also do integration/spec testing in addition to unit testing.
Reusability
You can't simply reuse your components in other projects. Other projects will have to use that concept as well even if they might decide to use an IoC container.
Or you can't create another instance of your component with different dependencies. The component's dependencies will be hard-wired to the instances in your AppServices. You will have to change the component's implementation to use different dependencies.
2) Doing DI does not mean that you have to use any IoC container. You can implement your own IDependencyResolver that creates your controllers manually and injects the same instance of your services wherever they are required. IoC containers cost some performance, but they simplify the creation of your object trees. You will have to decide for yourself which matters more: performance, or simpler creation of your controllers.

Does composition root needs unit testing?

I was trying to find an answer but it seems it's not directly discussed a lot. I have a composition root in my application where I create a DI container, register everything there, and then resolve the needed top-level classes, which get all their dependencies. As this all happens internally, it becomes hard to unit test the composition root. You can use virtual methods, protected fields and so on, but I am not a big fan of introducing such things just to be able to unit test. There are no big problems with the other classes, as they all use constructor injection. So the question is: does it make much sense to test the composition root at all? It does have some additional logic, but not much, and in most cases any failures there would pop up during application start.
Some code that I have:
public void Initialize(/* Some configuration parameters here */)
{
    m_Container = new UnityContainer();
    /* Registering dependencies */
    m_Distributor = m_Container.Resolve<ISimpleFeedMessageDistributor>();
}

public void Start()
{
    if (m_Distributor == null)
    {
        throw new ApplicationException("Initialize should be called before Start");
    }
    m_Distributor.Start();
}

public void Close()
{
    if (m_Distributor != null)
    {
        m_Distributor.Close();
    }
}
does it make much sense to test the composition root at all?
Would you like to know whether your application is written correctly? You probably do and that's why you write tests. For this same reason you should test your composition root.
These tests however are specifically targeted at the correctness of the wiring of the system. You don't want to test whether a single class functions correctly, since that's already covered by some unit test. Neither do you want to test whether classes call other classes in the right order, because that's what you want to test in your normal integration tests (call an MVC controller and see whether the call ends up in the database is an example of such integration test).
Here are some things you probably should test:
That all top-level classes can be resolved. This prevents you from having to click through all screens in the application to find out whether everything is wired correctly.
That components only depend on equally or longer lived services. When components depend on another component that is configured with a shorter lifetime, that component will 'promote' the lifetime of that dependency, which will often lead to bugs that are hard to reproduce and fix. Checking for this kind of issues is important. This type of error is also known as a lifestyle mismatch or captive dependency.
That decorators and other interception mechanisms that are crucial for the correctness of the application are applied correctly. Decorators could for instance add cross cutting concerns such as transaction handling, security and caching and it is important that these concerns are executed in the right order (for instance a security check must be performed before querying the cache), but it can be hard to test this using a normal integration test.
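For instance, the first check - that all top-level classes can be resolved - might be sketched as an automated test like the following (assuming Unity, as in the question, MVC controllers as the top-level classes, and a hypothetical BuildContainer() helper that runs the production registrations):

```csharp
[TestMethod]
public void Container_can_resolve_all_controllers()
{
    var container = BuildContainer(); // production registrations

    // Find every concrete MVC controller in the web assembly.
    var controllerTypes = typeof(HomeController).Assembly
        .GetTypes()
        .Where(t => typeof(IController).IsAssignableFrom(t) && !t.IsAbstract);

    foreach (var type in controllerTypes)
    {
        // Resolve throws (and fails the test) if any dependency
        // in the controller's object graph is missing.
        container.Resolve(type);
    }
}
```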
To be able to do this however, you will need to have a verifiable DI configuration.
Do note that not everybody shares this opinion though. My experience however is that verifying the correctness of your configuration is highly valuable.
So testing these things can be challenging with some IoC containers, while other IoC containers have facilities to help you with this (but Unity unfortunately lacks most of those features).
Some containers even have some sort of verification method that can be called to verify the configuration. What 'verify' means differs for each library. Simple Injector for instance (I'm the lead dev for Simple Injector) has a Verify method that simply iterates all registrations and calls GetInstance on each of them to ensure every instance can be created. I always advise users to call Verify in their composition root whenever possible. This is not always possible, for instance because when the configuration gets big, a call to Verify can cause the application to start too slowly. But still, it's a good starting point and can remove a lot of pain. If it takes too long, you can always move the call to an automated test.
And for Simple Injector, this is just the beginning. Simple Injector contains Diagnostic Services that checks the container on common misconfigurations, such as the earlier stated 'lifestyle mismatch'.
So you should absolutely want to test, but I'm not sure whether to call those tests "unit tests", although I manage to run those tests in isolation (without having to hit a database or web service).

Use different configurations with Simple Injector

I'm using the Simple Injector Dependency Injection framework and it looks cool and nice. But after building a configuration and using it, now I want to know how to change from one configuration to another.
Scenario: Let's imagine I've set up a configuration in the Global Asax and I have the public and global Container instance there. Now I want to make some tests and I want them to use mock classes so I want to change the configuration.
I can, of course, build another configuration and assign it to the global Container created by default, so that every time I run a test the alternative configuration will be set. But in doing that, even though I'm in a development context, the Container is changed for everyone, even for normal requests. I know I'm testing in this context and that shouldn't matter, but I have the feeling that this is not the way to do this... and I wonder how to change from one configuration to another in the correct way.
When doing unit tests, you shouldn't use the container at all. Just create the class under test by calling its constructor and supplying it with the proper mock objects.
One pattern that helped me out here a lot in the past is the use of a simple test class-specific factory method. This method centralizes the creation of the class under test and minimizes the amount of changes that need to be made when the dependencies of the class under test change. This is how such a factory method could look:
private ClassUnderTest CreateValidClassUnderTest(params object[] dependencies)
{
    return new ClassUnderTest(
        dependencies.OfType<ILogger>().SingleOrDefault() ?? new FakeLogger(),
        dependencies.OfType<IMailSender>().SingleOrDefault() ?? new FakeMailer(),
        dependencies.OfType<IEventPublisher>().SingleOrDefault() ?? new FakePublisher());
}
For integration tests it's much more common to use the container, and swap a few dependencies of the container. Still, those integration tests will not use the container that you created in your application_start, but each integration test will in that case most likely have its own new container instance, since each test should run in isolation. And even if you did use a single container from application_start, your integration tests are run from a separate project and won't interfere with your running application.
Although each integration test should get its own container instance (if any), you still want to reuse as much of the container configuration code as possible. This can be done by extracting this code to a method that either returns a new configured container instance when called, or configures a supplied container instance (and returns nothing). This method should typically do an incomplete configuration, and the caller (either your tests or Global.asax) should add the missing configurations.
Extracting this code: allows you to have multiple end applications that partly share the same configuration; allows you to verify the container in an integration test; and allows you to add services that need to be mocked by your integration tests.
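A sketch of that extraction (the service and implementation names are made up; the registration and Verify calls follow Simple Injector's API):

```csharp
// Shared bootstrap method reused by both the application and the
// integration tests. Deliberately incomplete: the caller adds the
// environment-specific registrations.
public static Container BuildContainer()
{
    var container = new Container();

    // Registrations shared by all callers.
    container.Register<IProductRepository, SqlProductRepository>();
    container.Register<IOrderService, OrderService>();

    return container;
}

// Application_Start completes the configuration for production:
var container = BuildContainer();
container.Register<IMailSender, SmtpMailSender>();
container.Verify();

// An integration test completes it differently, with a fake:
var testContainer = BuildContainer();
testContainer.Register<IMailSender, FakeMailSender>();
testContainer.Verify();
```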
To make life easier, Simple Injector allows you to replace existing registrations with new one (for instance a mocked one). You can enable this as follows:
container.Options.AllowOverridingRegistrations = true;
But be careful with this! This option can hide the fact that you accidentally override a registration. In my experience it is in most cases much better to build up an incomplete container and add the missing registrations afterwards instead of overriding them. Or if you decide to override, enable the feature at the last possible moment to prevent any accidental misconfigurations.
