Does a composition root need unit testing? - c#

I was trying to find an answer, but it seems this isn't discussed directly very often. I have a composition root in my application where I create a DI container, register everything there, and then resolve the top-level classes that get all their dependencies. As this all happens internally, the composition root becomes hard to unit test. You could introduce virtual methods, protected fields and so on, but I am not a big fan of adding such things just to enable unit testing. There are no big problems with the other classes, as they all use constructor injection. So the question is: does it make much sense to test the composition root at all? It does have some additional logic, but not much, and in most cases any failure there would pop up during application start.
Some code that I have:
public void Initialize(/* Some configuration parameters here */)
{
    m_Container = new UnityContainer();
    /* Registering dependencies */
    m_Distributor = m_Container.Resolve<ISimpleFeedMessageDistributor>();
}

public void Start()
{
    if (m_Distributor == null)
    {
        throw new ApplicationException("Initialize should be called before Start");
    }
    m_Distributor.Start();
}

public void Close()
{
    if (m_Distributor != null)
    {
        m_Distributor.Close();
    }
}

does it make much sense to test the composition root at all?
Would you like to know whether your application is written correctly? You probably do and that's why you write tests. For this same reason you should test your composition root.
These tests however are specifically targeted at the correctness of the wiring of the system. You don't want to test whether a single class functions correctly, since that's already covered by some unit test. Neither do you want to test whether classes call other classes in the right order, because that's what you want to test in your normal integration tests (calling an MVC controller and seeing whether the call ends up in the database is an example of such an integration test).
Here are some things you probably should test:
That all top-level classes can be resolved. This prevents you from having to click through all screens in the application to find out whether everything is wired correctly.
That components only depend on equally or longer-lived services. When a component depends on another component that is configured with a shorter lifetime, the consumer will 'promote' the lifetime of that dependency, which often leads to bugs that are hard to reproduce and fix. Checking for this kind of issue is important. This type of error is also known as a lifestyle mismatch or captive dependency.
That decorators and other interception mechanisms that are crucial for the correctness of the application are applied correctly. Decorators could for instance add cross-cutting concerns such as transaction handling, security, and caching, and it is important that these concerns are executed in the right order (for instance, a security check must be performed before querying the cache), but this can be hard to test using a normal integration test.
To be able to do this however, you will need to have a verifiable DI configuration.
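For Unity (as used in the question), a minimal sketch of such a wiring test could look like the following; the Bootstrapper.RegisterAll helper is an assumption standing in for your real registration code:

[TestMethod]
public void AllRegistrationsCanBeResolved()
{
    var container = new UnityContainer();
    Bootstrapper.RegisterAll(container); // assumed: your real registration code

    // Resolving every closed registration surfaces wiring errors at test
    // time instead of at first use in the running application.
    foreach (var registration in container.Registrations
        .Where(r => !r.RegisteredType.IsGenericTypeDefinition))
    {
        container.Resolve(registration.RegisteredType, registration.Name);
    }
}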
Do note that not everybody shares this opinion though. My experience however is that verifying the correctness of your configuration is highly valuable.
So testing these things can be challenging with some IoC containers, while other IoC containers have facilities to help you with this (but Unity unfortunately lacks most of those features).
Some containers even have some sort of verification method that can be called to verify the configuration. What 'verify' means differs for each library. Simple Injector for instance (I'm the lead dev for Simple Injector) has a Verify method that simply iterates all registrations and calls GetInstance on each of them to ensure every instance can be created. I always advise users to call Verify in their composition root whenever possible. This is not always possible, for instance because when the configuration gets big, a call to Verify can cause the application to start too slowly. But still, it's a good starting point and can remove a lot of pain. If it takes too long, you can always move the call to an automated test.
And for Simple Injector, this is just the beginning. Simple Injector contains Diagnostic Services that check the container for common misconfigurations, such as the earlier stated 'lifestyle mismatch'.
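A minimal sketch of such a configuration test for Simple Injector, assuming a Bootstrapper.BuildContainer helper that performs your real registrations (Analyzer lives in the SimpleInjector.Diagnostics namespace):

[TestMethod]
public void Container_Always_PassesVerificationAndDiagnostics()
{
    var container = Bootstrapper.BuildContainer(); // assumed helper

    // Iterates all registrations and creates every instance once.
    container.Verify();

    // Reports warnings such as the lifestyle mismatches described above.
    var results = Analyzer.Analyze(container);

    Assert.IsFalse(results.Any(),
        string.Join(Environment.NewLine, results.Select(r => r.Description)));
}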
So you should absolutely test these things, but I'm not sure whether to call those tests "unit tests", although I manage to run them in isolation (without having to hit a database or web service).

Remove registered decorator in Simple Injector

Okay, I know this sounds like a weird request but here is my problem. I want to write some integration tests for my WCF service; I have a few key paths that I want to ensure behave properly. One test is to make sure that the correct exceptions are thrown in key places and that they propagate up the pipeline correctly without being intercepted in the wrong place.
So to do this I am overriding an existing registration with a mock object that will throw the exception I want to test for at the location I want it thrown. That part works fine.
Next, I want to resolve my command handler (the system under test), invoke the handle method, and assert that the correct exception happens.
The issue is that when I resolve my command handler I actually get back a loooong chain of decorators with my command handler all the way at the bottom. At the very top of this chain sits a decorator that is my global exception handler. It is this exception handling decorator at the top that I need to unregister because it prevents me from being able to assert that the exception was thrown. My container bootstrapper is quite complex so I have absolutely no desire to recreate a copy of it in my test project minus this one decorator.
If it were just a standard registration I could simply override the registration with a mock exception handler that rethrows the exception. As far as I can tell, though, it does not seem to be possible to override a decorator's registration. I would prefer not to go that route anyway; it just overcomplicates the test with an additional mock. It would be much better if I could just unregister the decorator.
If it is not possible to unregister a decorator what would be my next best solution? Add option flags to my bootstrapper to enable/disable certain registrations?
Thanks.
It's impossible to remove a registration in Simple Injector. You can replace an existing registration, but that method does not work when dealing with decorators. Decorators are added in Simple Injector internally by adding a delegate to the ExpressionBuilt event. Since the registered delegate is not stored anywhere, it is currently technically impossible to 'deregister' a decorator registration.
The way around this is to simply not register that decorator at all. This might sound silly, but this is a practice I use all the time, even with other containers.
What you can do for instance is extract the common part of your registrations to a separate method; let's call it BuildUp. This method lacks the registrations that differ between the applications that use it. In your case you have at least two 'applications': the real application and the integration test project. Both projects can call BuildUp and add extra registrations before or after calling it. For instance:
var container = new Container();
container.Options.DefaultScopedLifestyle = new WebRequestLifestyle();

container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(InnerMostCommandHandlerDecorator<>));

CompositionRoot.BuildUp(container);

container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(OuterMostCommandHandlerDecorator<>));
This method seems to work very well in your case, since you want to add an 'outer most' decorator. Besides, letting BuildUp leave 'holes' in your registration often makes it very easy to see when some application forgets to fill in the blanks, since you can let Simple Injector fail fast by calling container.Verify().
Another common way is to pass a configuration object to the BuildUp method. This configuration object can contain the necessary information for making the set of registrations the caller requires. For instance, such a configuration object can have a simple boolean flag:
public static void BuildUp(Container container, ApplicationConfig config) {
    // Lots of registrations here.

    if (config.AddGlobalExceptionHandler) {
        container.RegisterDecorator(typeof(ICommandHandler<>),
            typeof(GlobalExceptionHandlerCommandHandlerDecorator<>));
    }
}
Passing a configuration object to the BuildUp method is also a great way to decouple the BuildUp method from the configuration system. This allows you to more easily call BuildUp during integration testing, without being forced to have a copy of the complete configuration file in the test project.
Instead of using a flag property, you can also put the complete list of decorators inside the configuration object and let the BuildUp method iterate over it and register them. This allows the caller to add decorators to the list or remove them from it before they are registered:
var config = new ApplicationConfig();

// Remove a decorator
config.CommandHandlerDecorators.Remove(
    typeof(AuthorizationCommandHandlerDecorator<>));

// Add a decorator after another decorator
config.CommandHandlerDecorators.Insert(
    index: 1 + config.CommandHandlerDecorators.IndexOf(
        typeof(TransactionCommandHandlerDecorator<>)),
    item: typeof(DeadlockRetryCommandHandlerDecorator<>));

// Add an outer most decorator
config.CommandHandlerDecorators.Add(
    typeof(TestPerformanceProfilingCommandHandlerDecorator<>));

CompositionRoot.BuildUp(container, config);

public static void BuildUp(Container container, ApplicationConfig config) {
    // Lots of registrations here.

    config.CommandHandlerDecorators.ForEach(type =>
        container.RegisterDecorator(typeof(ICommandHandler<>), type));
}
I've used all three methods in the past very successfully. Which option to choose depends on your needs.
As far as I know it is not possible to remove any registration.
With unit testing you normally would not use the container at all. Since you're performing integration tests, using the container is indeed a must.
I can think of 2 ways of doing what you want.
The first is passing some option flag to your bootstrapper which swaps between production and testing environment.
The second is thinking about your testing approach. From your question it seems that a certain part in your ICommandHandler chain should throw an exception.
I would think this is pretty simple to test using a normal unit test instead of an integration test. In this case you wouldn't use the container but create the chain by hand.
A unit test for your command handler would be as simple as:
[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void CommandHandlerThrowsCorrectException()
{
    var handler = new Decorator1(new Decorator2(new MyHandler()));

    handler.Handle(new MyCommand());
}
You can use other integration tests to check if a command handed to the WCF service results in building and handling the correct ICommandHandler chain.
I normally use the following test setup:
Use unit tests for testing all cases you have at hand, without using a container => which means at least one unit test per component in the application
Use unit tests for testing each separate decorator, without use of the container => e.g. does the GenericExceptionCommandHandlerDecorator in your case handle an exception correctly
Use unit tests for testing whether the registrations made to the container are correct and whether decorators are applied in the correct order, of course with the use of the container (a sketch follows this list). Part of this job is already done by the container if you use the built-in verification of the registrations, using container.Verify()
Use as few integration tests as possible, just to test whether the application works and flows as it should. Because each component and decorator is unit tested, the need to test the behavior of the application in integration tests is far smaller. There will always be scenarios where you want to mimic user interaction with the application, but these should be rare and mostly covered by unit tests.
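A minimal sketch of such a registration test, reusing the BuildUp approach from above (MyCommand and the decorator names are placeholders):

[TestMethod]
public void ResolvingACommandHandler_Always_AppliesTheOuterMostDecorator()
{
    var container = new Container();
    CompositionRoot.BuildUp(container); // shared registrations
    container.RegisterDecorator(typeof(ICommandHandler<>),
        typeof(OuterMostCommandHandlerDecorator<>));

    var handler = container.GetInstance<ICommandHandler<MyCommand>>();

    // The instance handed out by the container is the outermost decorator.
    Assert.IsInstanceOfType(handler,
        typeof(OuterMostCommandHandlerDecorator<MyCommand>));
}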

What are the benefits of mocking the dependencies in unit testing?

I am working on unit tests for my controller and service layers (C#, MVC), and I am using the Moq dll for mocking the real/dependency objects in unit testing.
But I am a little bit confused about mocking the dependencies or real objects. Let's take as an example the unit test method below:
[TestMethod]
public void ShouldReturnDtosWhenCustomersFound_GetCustomers()
{
    // Arrange
    var name = "ricky";
    var description = "this is the test";

    // Set up mocked DAL to return a list of customers
    // when name and description are passed to the GetCustomers method
    _customerDalMock.Setup(d => d.GetCustomers(name, description)).Returns(_customerList);

    // Act
    List<CustomerDto> actual = _CustomerService.GetCustomers(name, description);

    // Assert
    Assert.IsNotNull(actual);
    Assert.IsTrue(actual.Any());

    // Verify all setups of the mocked DAL were called by the service
    _customerDalMock.VerifyAll();
}
In the above unit test method I am mocking the GetCustomers method and returning a customer list, which is already defined and looks like this:
List<Customer> _customerList = new List<Customer>
{
    new Customer { CustomerID = 1, Name = "Mariya", Description = "description" },
    new Customer { CustomerID = 2, Name = "Soniya", Description = "des" },
    new Customer { CustomerID = 3, Name = "Bill", Description = "my desc" },
    new Customer { CustomerID = 4, Name = "jay", Description = "test" },
};
And let's have a look at the assertions comparing the mocked Customer object and the actual object:
Assert.AreEqual(_customer.CustomerID, actual.CustomerID);
Assert.AreEqual(_customer.Name, actual.Name);
Assert.AreEqual(_customer.Description, actual.Description);
But here is what I don't understand: the above unit test will always work fine, because we are just asserting what we passed in or what we set the mock to return. And we know the mocked object will always return the list or object that we told it to.
So what is the meaning of doing unit testing or mocking here?
The true purpose of mocking is to achieve true isolation.
Say you have a CustomerService class, that depends on a CustomerRepository. You write a few unit tests covering the features provided by CustomerService. They all pass.
A month later, a few changes are made, and suddenly your CustomerService unit tests start failing - and you need to find where the problem is.
So you assume:
Because a unit test that tests CustomerService is failing, the problem must be in that class!
Right? Wrong! The problem could be either in CustomerService or in any of its dependencies, i.e., CustomerRepository. If any of its dependencies fail, chances are the class under test will fail too.
Now picture a huge chain of dependencies: A depends on B, B depends on C, ... Y depends on Z. If a fault is introduced in Z, all your unit tests will fail.
And that's why you need to isolate the class under test from its dependencies (may it be a domain object, a database connection, file resources, etc). You want to test a unit.
Your example is too simplistic to show off the real benefit of mocking. That's because your logic under test isn't really doing much beyond returning some data.
But imagine as an example that your logic did something based on wall clock time, say scheduled some process every hour. In a situation like that, mocking the time source lets you actually unit test such logic so that your test doesn't have to run for hours, waiting for the time to pass.
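A minimal sketch of that idea, using Moq as in the question (IClock and HourlyScheduler are hypothetical names, not from the original post):

public interface IClock
{
    DateTime UtcNow { get; }
}

public class HourlyScheduler
{
    private readonly IClock _clock;
    private DateTime _lastRun;

    public HourlyScheduler(IClock clock)
    {
        _clock = clock;
        _lastRun = clock.UtcNow;
    }

    // True when at least one hour has passed since the last run.
    public bool ShouldRunNow()
    {
        if (_clock.UtcNow - _lastRun < TimeSpan.FromHours(1))
            return false;

        _lastRun = _clock.UtcNow;
        return true;
    }
}

[TestMethod]
public void ShouldRunNow_AfterOneHour_ReturnsTrue()
{
    var clock = new Mock<IClock>();
    clock.Setup(c => c.UtcNow).Returns(new DateTime(2014, 1, 1, 12, 0, 0));
    var scheduler = new HourlyScheduler(clock.Object);

    // Advance the fake clock by one hour; the test doesn't wait at all.
    clock.Setup(c => c.UtcNow).Returns(new DateTime(2014, 1, 1, 13, 0, 0));

    Assert.IsTrue(scheduler.ShouldRunNow());
}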
In addition to what has already been said:
We can have classes without dependencies. In that case, the only thing we need is plain unit testing, without mocks and stubs.
When we have dependencies there are several kinds of them:
Services that our class uses mostly in a 'fire and forget' way, i.e. services that do not affect the control flow of the consuming code.
We can mock these (and all other kinds of) services to test whether they were called correctly (integration testing), or simply to inject them where our code requires them.
Two-way services that provide a result but do not have internal state and do not affect the state of the system. They can be dubbed complex data transformations.
By mocking these services you can test your expectations about the code's behavior for different variants of the service implementation, without needing to have all of them.
Services which affect the state of the system, or depend on real-world phenomena or something else out of your control. '#500 - Internal Server Error' gave a good example of the time service.
With mocking you can let the time flow at whatever speed (and in whatever direction) is needed. Another example is working with a DB. When unit testing, it is usually desirable not to change the DB state, which is not true of a functional test. For this kind of service, 'isolation' is the main (but not the only) motivation for mocking.
Services with internal state your code depends on.
Consider Entity Framework:
When SaveChanges() is called, many things happen behind the scenes. EF detects changes and fixes up navigation properties. Also, EF won't allow you to add several entities with the same key.
Evidently, it can be very difficult to mock the behavior and the complexity of such dependencies... but usually you don't have to if they are designed well. If you heavily rely on the functionality some component provides, you will hardly be able to substitute that dependency. What is probably needed is isolation again: you don't want to leave traces when testing, thus a better approach is to tell EF not to use the real DB. Yes, a dependency means more than a mere interface. More often than not it is not the method signatures but the contract for expected behavior. For instance, IDbConnection has Open() and Close() methods, which implies a certain sequence of calls.
Sure, this is not a strict classification. It's better to treat these as extremes.
#dcastro writes: You want to test a unit. Yet that statement doesn't answer the question of whether you should.
Let's not discount integration tests. Sometimes knowing that some composite part of the system has a failure is OK.
As for the example of the chain of dependencies given by #dcastro, we can try to find the place where the bug is likely to be:
Assume Z is a final dependency. We create unit tests without mocks for it. All boundary conditions are known. 100% coverage is a must here. After that we can say that Z works correctly, and if Z fails, our unit tests must indicate it.
The analogy comes from engineering: nobody tests each screw and bolt when building a plane. Statistical methods are used to prove, with some certainty, that the factory producing the parts works fine.
On the other hand, for very critical parts of your system it is reasonable to spend the time and mock the complex behavior of a dependency. Yes, the more complex it is, the less maintainable the tests are going to be, and here I'd rather call them specification checks.
Yes, your API and your tests can both be wrong, but code review and other forms of testing can assure the correctness of the code to some degree. And as soon as these tests fail after some change is made, you either need to change the specs and the corresponding tests, or find the bug and cover the case with a test.
I highly recommend watching Roy's videos: http://youtube.com/watch?v=fAb_OnooCsQ
In this very case, mocking allowed you to fake a database connection, so that you can run the test in place and in memory, without relying on any additional resource, i.e. the database. This test asserts that, when the service is called, a corresponding method of the DAL is called.
However, the later asserts on the list and the values in the list aren't necessary. As you correctly noticed, you are just asserting that the values you "mocked" are returned. This would be useful within the mocking framework itself, to assert that the mocking methods behave as expected. But in your code it is just excess.
In the general case, mocking allows one to:
Test behaviour (when something happens, then a particular method is executed)
Fake resources (for example, email servers, web servers, HTTP API request/response, database)
In contrast, unit tests without mocking usually allow you to test state. That is, you can detect a change in the state of an object when a particular method was called.
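For instance, a behavior test with Moq could look like this (IMailSender and RegistrationService are hypothetical names used only for illustration):

var mailer = new Mock<IMailSender>();
var service = new RegistrationService(mailer.Object);

service.Register("bob@example.com");

// The interaction itself is what's verified here,
// not the resulting state of any object.
mailer.Verify(m => m.Send("bob@example.com", It.IsAny<string>()), Times.Once());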
All previous answers assume that mocking has some value, and then they proceed to explain what that value supposedly is.
For the sake of future generations that might arrive at this question looking to satisfy their philosophical objections on the issue, here is a dissenting opinion:
Mocking, despite being a nifty trick, should be avoided at (almost) all costs.
When you mock a dependency of your code-under-test, you are by definition making two kinds of assumptions:
Assumptions about the behavior of the dependency
Assumptions about the inner workings of your code-under-test
It can be argued that the assumptions about the behavior of the dependency are innocent because they are simply a stipulation of how the real dependency should behave according to some requirements or specification document. I would be willing to accept this, with the footnote that they are still assumptions, and whenever you make assumptions you are living your life dangerously.
Now, what cannot be argued is that the assumptions you are making about the inner workings of your code-under-test are essentially turning your test into a white-box test: the mock expects the code-under-test to issue specific calls to its dependencies, with specific parameters, and as the mock returns specific results, the code-under-test is expected to behave in specific ways.
White-box testing might be suitable if you are building high criticality (aerospace grade) software, where the goal is to leave absolutely nothing to chance, and cost is not a concern. It is orders of magnitude more labor intensive than black-box testing, so it is immensely expensive, and it is a complete overkill for commercial software, where the goal is simply to meet the requirements, not to ensure that every single bit in memory has some exact expected value at any given moment.
White-box testing is labor intensive because it renders tests extremely fragile: every single time you modify the code-under-test, even if the modification is not in response to a change in requirements, you will have to go modify every single mock you have written to test that code. That is an insanely high maintenance level.
How to avoid mocks and white-box testing
Use fakes instead of mocks
For an explanation of what the difference is, you can read this article by Martin Fowler: https://martinfowler.com/bliki/TestDouble.html but to give you an example, an in-memory database can be used as a fake in place of a full-blown RDBMS. (Note how fakes are a lot less fake than mocks.)
Fakes will give you the same amount of isolation as mocks would, but without all the risky and costly assumptions, and most importantly, without all the fragility.
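As a minimal sketch of the idea (the ICustomerRepository interface and its members are assumed for illustration), a fake is a real, working implementation that simply takes a shortcut:

public class InMemoryCustomerRepository : ICustomerRepository
{
    // The backing store replaces the real database table.
    private readonly Dictionary<int, Customer> _store =
        new Dictionary<int, Customer>();

    public void Add(Customer customer)
    {
        _store[customer.CustomerID] = customer;
    }

    public Customer GetById(int id)
    {
        Customer customer;
        return _store.TryGetValue(id, out customer) ? customer : null;
    }
}

Unlike a mock, this fake encodes no assumptions about which calls the code under test will make or in what order; it just behaves like a (simplified) repository.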
Do integration testing instead of unit testing
Using the fakes whenever possible, of course.
For a longer article with my thoughts on the subject, see https://blog.michael.gr/2021/12/white-box-vs-black-box-testing.html

Use different configurations with Simple Injector

I'm using the Simple Injector Dependency Injection framework and it looks cool and nice. But after building a configuration and using it, I now want to know how to change from one configuration to another.
Scenario: Let's imagine I've set up a configuration in Global.asax and I have the public and global Container instance there. Now I want to run some tests and I want them to use mock classes, so I want to change the configuration.
I can, of course, build another configuration and assign it to the global Container created by default, so that every time I run a test the alternative configuration will be set. But by doing that, even though I'm in a development context, the Container is changed for everyone, even for normal requests. I know I'm testing in this context and that shouldn't matter, but I have the feeling that this is not the way to do this... and I wonder how to change from one configuration to another in the correct way.
When doing unit tests, you shouldn't use the container at all. Just create the class under test by calling its constructor and supplying it with the proper mock objects.
One pattern that has helped me out a lot here in the past is the use of a simple test-class-specific factory method. This method centralizes the creation of the class under test and minimizes the number of changes that need to be made when the dependencies of the class under test change. This is how such a factory method could look:
private ClassUnderTest CreateValidClassUnderTest(params object[] dependencies)
{
    return new ClassUnderTest(
        dependencies.OfType<ILogger>().SingleOrDefault() ?? new FakeLogger(),
        dependencies.OfType<IMailSender>().SingleOrDefault() ?? new FakeMailer(),
        dependencies.OfType<IEventPublisher>().SingleOrDefault() ?? new FakePublisher());
}
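A test then only supplies the dependency it cares about, and the factory fills in the rest (the DoSomething method and the FakeLogger.Messages member are assumed for illustration):

[TestMethod]
public void DoSomething_Always_WritesToTheLog()
{
    // Arrange: only the logger matters for this test.
    var logger = new FakeLogger();
    var sut = CreateValidClassUnderTest(logger);

    // Act
    sut.DoSomething();

    // Assert
    Assert.IsTrue(logger.Messages.Any());
}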
For integration tests it's much more common to use the container and swap a few dependencies in the container. Still, those integration tests will not use the container that you created in your Application_Start; each integration test will in that case most likely have its own new container instance, since each test should run in isolation. And even if you did use a single container from Application_Start, your integration tests run from a separate project and won't interfere with your running application.
Although each integration test should get its own container instance (if any), you still want to reuse as much of the container configuration code as possible. This can be done by extracting this code to a method that either returns a new configured container instance when called, or configures a supplied container instance (and returns nothing). This method should typically do an incomplete configuration, and the caller (either your tests or Global.asax) should add the missing registrations.
Extracting this code: allows you to have multiple end applications that partly share the same configuration; allows you to verify the container in an integration test; and allows you to add services that need to be mocked by your integration tests.
To make life easier, Simple Injector allows you to replace existing registrations with new one (for instance a mocked one). You can enable this as follows:
container.Options.AllowOverridingRegistrations = true;
But be careful with this! This option can hide the fact that you accidentally override a registration. In my experience it is in most cases much better to build up an incomplete container and add the missing registrations afterwards, instead of overriding them. Or, if you decide to override, enable the feature at the last possible moment to prevent any accidental misconfiguration.
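A minimal sketch of that 'last possible moment' approach (Bootstrapper.BuildContainer and FakeMailSender are assumed names):

var container = Bootstrapper.BuildContainer(); // assumed shared setup

// Enable overriding only for the one replacement, then switch it back
// off so any accidental double registration still fails fast.
container.Options.AllowOverridingRegistrations = true;
container.Register<IMailSender, FakeMailSender>();
container.Options.AllowOverridingRegistrations = false;

container.Verify();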

Should I be using mock objects when unit testing?

In my ASP.NET MVC application I am using IoC to facilitate unit testing. The structure of my application is a Controller -> Service Class -> Repository type of structure. In order to do unit testing, I have an InMemoryRepository class that inherits from my IRepository, which, instead of going out to a database, uses an internal List<T> member. When I construct my unit tests, I just pass in an instance of the in-memory repository instead of my EF repository.
My service classes retrieve objects from the repository through an AsQueryable interface that my repository classes implement, allowing me to use Linq in my service classes while still abstracting out the data access layer. In practice this seems to work well.
The problem I am seeing is that every time I see unit testing discussed, mock objects are used instead of the in-memory approach I use. On the face of it that makes sense, because if my InMemoryRepository fails, not only will my InMemoryRepository unit tests fail, but that failure will cascade down into my service classes and controllers as well. More realistically, I am concerned about failures in my service classes affecting controller unit tests.
My method also requires me to do more setup for each unit test, and as things become more complicated (e.g. I implement authorization in the service classes) the setup becomes much more complicated, because I then have to make sure each unit test authorizes correctly with the service classes so the main aspect of that unit test doesn't fail. I can clearly see how mock objects would help out in that regard.
However, I can't see how to solve this completely with mocks and still have valid tests. For example, one of my unit tests checks that if I call _service.GetDocumentById(5), it gets the correct document from the repository. The only way this is a valid unit test (as far as I understand it) is if I have 2 or 3 documents stored, and my GetDocumentById() method correctly retrieves the one with an Id of 5.
How would I have a mocked repository with an AsQueryable call, and how would I make sure I don't mask any issues in my Linq statements by hardcoding the return statements when setting up the mocked repository? Is it better to keep my service class unit tests using the InMemoryRepository but change my controller unit tests to use mocked service objects?
Edit:
After going over my structure again I remembered a complication that is preventing mocking in controller unit tests, as I forgot my structure is a bit more complicated than I originally said.
A Repository is a data store for one type of object, so if my document service class needs document entities, it creates an IRepository<Document>.
Controllers are passed an IRepositoryFactory. The IRepositoryFactory is a class which is supposed to make it easy to create repositories without having to pass repositories directly into the controller, or having the controller worry about which service classes require which repositories. I have an InMemoryRepositoryFactory, which gives the service classes InMemoryRepository<Entity> instantiations, and the same idea goes for my EFRepositoryFactory.
In the controller's constructors, private service class objects are instantiated by passing in the IRepositoryFactory object that is passed into that controller.
So for example
public class DocumentController : Controller
{
    private DocumentService _documentService;

    public DocumentController(IRepositoryFactory factory)
    {
        _documentService = new DocumentService(factory);
    }

    ...
}
I can't see how to mock my service layer with this architecture so that my controllers are unit tested rather than integration tested. I could have a bad architecture for unit testing, but I'm not sure how to better solve the issues that made me want a repository factory in the first place.
One solution to your problem is to change your controllers to demand IDocumentService instances instead of constructing the services themselves:
public class DocumentController : Controller
{
    private IDocumentService _documentService;

    // The controller doesn't construct the service itself
    public DocumentController(IDocumentService documentService)
    {
        _documentService = documentService;
    }

    ...
}
In your real application, let your IoC container inject IRepositoryFactory instances into your services. In your controller unit tests, just mock the services as needed.
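A controller test could then look something like this (the Details action, the Document type, and the Moq usage are assumptions for illustration):

var service = new Mock<IDocumentService>();
service.Setup(s => s.GetDocumentById(5))
       .Returns(new Document { Id = 5 });

var controller = new DocumentController(service.Object);

var result = controller.Details(5) as ViewResult;

// The controller is tested in isolation: no factory, no repository.
Assert.IsNotNull(result);
service.Verify(s => s.GetDocumentById(5), Times.Once());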
(And see Miško Hevery's article about constructors doing real work for an extended discussion of the benefits of restructuring your code like this.)
Personally, I would design the system around the Unit of Work pattern, which references repositories. This can make things much simpler and allows you to have more complex operations running atomically. You would typically have an IUnitOfWorkFactory that is supplied as a dependency to the service classes. A service class creates a new unit of work, and that unit of work references the repositories. You can see an example of this here.
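A rough sketch of that shape (the names below are assumptions, not taken from the linked example):

public interface IUnitOfWorkFactory
{
    IUnitOfWork CreateNew();
}

public interface IUnitOfWork : IDisposable
{
    IRepository<T> Repository<T>() where T : class;
    void Commit();
}

public class DocumentService
{
    private readonly IUnitOfWorkFactory _factory;

    public DocumentService(IUnitOfWorkFactory factory)
    {
        _factory = factory;
    }

    public void Archive(int id)
    {
        // One unit of work per operation: everything commits atomically.
        using (var unitOfWork = _factory.CreateNew())
        {
            var document = unitOfWork.Repository<Document>().GetById(id);
            document.Archived = true;
            unitOfWork.Commit();
        }
    }
}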
If I understand correctly you are concerned about errors in one piece of (low level) code failing a lot of tests, making it harder to see the actual problem. You take InMemoryRepository as a concrete example.
While your concern is valid, I personally wouldn't worry about a failing InMemoryRepository. It is a test object, and you should keep those test objects as simple as possible. This prevents you from having to write tests for your test objects. Most of the time I assume they are correct (however, I sometimes use self-checks in such a class by writing Assert statements). A test will fail when such an object misbehaves. It's not optimal, but in my experience you would normally find out quickly enough what the problem is. To be productive, you will have to draw a line somewhere.
Errors in the controller caused by the service are another cup of tea IMO. While you could mock the service, this would make testing more difficult and less trustworthy. It would be better NOT to test the service at all. Only test the controller! The controller will call into the service, and if your service doesn't behave well, your controller tests will find out. This way you only test the top-level objects in your application. Code coverage will help you spot the parts of your code you don't test. Of course this isn't possible in all scenarios, but it often works well. When the service works with a mocked repository (or unit of work) this works very well.
Your second concern was that those dependencies require a lot of test setup. I've got two things to say about this.
First of all, I try to minimize my dependency inversion to only what I need to be able to run my unit tests. Calls to the system clock, database, SMTP server and file system should be faked to make unit tests fast and reliable. Other things I try not to invert, because the more you mock, the less reliable the tests become: you are testing less. Minimizing the dependency inversion (to what you need to have good RTM unit tests) helps make test setup easier.
But (second point) you also need to write your unit tests in a way that makes them readable and maintainable (the hard part about unit testing, or in fact about making software in general). Having big test setups makes them hard to understand and makes test code hard to change when a class gets a new dependency. I found out that one of the best ways to make tests more readable and maintainable is to use simple factory methods in your test class to centralize the creation of types that you need in the test (I never use mocking frameworks). There are two patterns that I use. One is a simple factory method, such as one that creates a valid type:
FakeDocumentService CreateValidService()
{
    return CreateValidService(CreateInitializedContext());
}

FakeDocumentService CreateValidService(InMemoryUnitOfWork context)
{
    return new FakeDocumentService(context);
}
This way tests can simply call one of these factory methods whenever they need a valid object. Of course when one of these methods accidentally creates an invalid object, many tests will fail. It's hard to prevent this, but it's easily fixed, and easily fixed means that the tests are maintainable.
The other pattern I use is a container type that holds the arguments/properties of the actual object you want to create. This gets especially useful when an object has many different properties and/or constructor arguments. Mix this with a factory for the container and a builder method for the object to create, and you get very readable test code:
[TestMethod]
public void Operation_WithValidArguments_Succeeds()
{
    // Arrange
    var validArgs = CreateValidArgs();
    var service = BuildNewService(validArgs);

    // Act
    service.Operation();
}

[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void Operation_NegativeAge_ThrowsException()
{
    // Arrange
    var invalidArgs = CreateValidArgs();
    invalidArgs.Age = -1;
    var service = BuildNewService(invalidArgs);

    // Act
    service.Operation();
}
This allows the test to specify only what matters! This is very important for making tests readable! The CreateValidArgs() method could create a container with over 100 arguments that would make a valid SUT (system under test). You have now centralized the default valid configuration in one place. I hope this makes sense.
Your third concern was about not being able to test whether LINQ queries behave as expected with the given LINQ provider. This is a valid problem, because it is quite easy to write LINQ (to expression tree) queries that run perfectly over in-memory objects, but fail when querying the database. Sometimes it is impossible to translate a query (because you call a .NET method that has no counterpart in the database) or the LINQ provider has limitations (or bugs). Especially the LINQ provider of Entity Framework 3.5 sucks hard.
However, this is a problem you cannot solve with unit tests by definition, because when you call the database in your tests, it's not a unit test anymore. Unit tests, however, never totally replace manual testing :-)
Still, it's a valid concern. In addition to unit testing you can do integration testing. In this case you run your code with the real provider and a (dedicated) test database. Run each test within a database transaction and roll back the transaction at the end of the test (TransactionScope works great for this!). Note however that writing maintainable integration tests is even harder than writing maintainable unit tests. You have to make sure the schema of your test database stays in sync. Each integration test should insert the data it needs into the database, which is often a lot of work to write and maintain. It is best to keep the number of integration tests to a minimum: have just enough integration tests to make you feel confident about making changes to the system. For instance, a single test that calls a service method containing a complicated LINQ statement will often be enough to check whether your LINQ provider is able to build valid SQL out of it. Most of the time I just assume the LINQ provider will behave the same as the LINQ to Objects (.AsQueryable()) provider. Again, you will have to draw the line somewhere.
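A minimal sketch of such a rollback-based integration test (CreateServiceWithRealEfRepository is an assumed helper that wires the service to a dedicated test database; TransactionScope lives in System.Transactions):

[TestMethod]
public void GetDocuments_WithComplicatedQuery_TranslatesToValidSql()
{
    // The scope is disposed without calling Complete(), so every change
    // made during the test is rolled back automatically.
    using (new TransactionScope())
    {
        var service = CreateServiceWithRealEfRepository(); // assumed helper

        var documents = service.GetDocuments("ricky", "this is the test");

        Assert.IsNotNull(documents);
    }
}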
I hope this helps.
I think your approach is sound for testing the service layer itself, but, as you suggested, it would be better if the service layer is mocked out completely for your business logic and other high-level testing. This makes your higher-level tests easier to implement/maintain, as there's no need to exercise the service layer again if it's already been tested.

Ninject kernel binding overrides

I'm just wondering what the best practice is for rewiring the bindings in a kernel.
I have a class with a kernel and a private class module with the default production bindings.
For tests I want to override these bindings so I can swap in my Test Doubles / Mocks objects.
Does

MyClass.Kernel.Load(new InlineModule(m => m.Bind<IDepend>().To<TestDoubleDepend>()))

override any existing bindings for IDepend?
I try to use the DI kernel directly in my code as little as possible, instead relying on constructor injection (or properties in select cases, such as Attribute classes). Where I must, however, I use an abstraction layer, so that I can set the DI kernel object, making it mockable in unit tests.
For example:
public interface IDependencyResolver : IDisposable
{
    T GetImplementationOf<T>();
}

public static class DependencyResolver
{
    private static IDependencyResolver s_resolver;

    public static T GetImplementationOf<T>()
    {
        return s_resolver.GetImplementationOf<T>();
    }

    public static void RegisterResolver(IDependencyResolver resolver)
    {
        s_resolver = resolver;
    }

    public static void DisposeResolver()
    {
        s_resolver.Dispose();
    }
}
Using a pattern like this, you can set the IDependencyResolver from unit tests by calling RegisterResolver with a mock or fake implementation that returns whatever objects you want, without having to wire up full modules. It also has the secondary benefit of abstracting your code from a particular IoC container, should you choose to switch to a different one in the future.
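For example, a test fixture might swap in a fake like this (FakeDependencyResolver is an assumed test double backed by a simple dictionary):

[TestInitialize]
public void Setup()
{
    // Every DependencyResolver.GetImplementationOf<T>() call in the code
    // under test now returns canned objects from the fake.
    DependencyResolver.RegisterResolver(new FakeDependencyResolver());
}

[TestCleanup]
public void Cleanup()
{
    DependencyResolver.DisposeResolver();
}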
Naturally, you'd also want to add additional methods to IDependencyResolver as your needs dictate, I'm just including the basics here as an example. Yes, this would then require that you write a super simple wrapper around the Ninject kernel which implements IDependencyResolver as well.
The reason you want to do this is that your unit tests should really only be testing one thing and by using your actual IoC container, you're really exercising more than the one class under test, which can lead to false negatives that make your tests brittle and (more importantly) shake developer faith in their accuracy over time. This can lead to test apathy and abandonment since it becomes possible for tests to fail but the software to still work correctly ("don't worry, that one always fails, it's not a big deal").
I am just hoping something like this works:

var kernel = new StandardKernel(new ProductionModule(config));
kernel.Rebind<ITimer>().To<TimerImpl>().InSingletonScope();

where ProductionModule holds my production bindings and I override them by calling Rebind on the few items I need to rebind in the specific test case.

ADVANTAGE: If anyone adds new bindings to the production module, I inherit them, so the tests won't break in that fashion, which can be nice. This all works in Guice in Java... hoping it works here too.
What I tend to do is have a separate test project complete with its own bindings -- I'm of course assuming that we're talking about unit tests of some sort. The test project uses its own kernel and loads the module in the test project into that kernel. The tests in the project are executed during CI builds and by full builds executed from a build script, though the tests are never deployed into production.
I realize your project/solution setup may not allow this sort of organization, but it seems to be pretty typical from what I've seen.
Peter Mayer's approach should be useful for unit tests, but IMHO, isn't it easier to just manually inject a mock using constructor/property injection?
It seems to me that using specific bindings for a test project is more useful for other kinds of tests (integration, functional), but even in that case you will surely need to change the bindings depending on the test.
My approach is some kind of a mix of kronhrbaugh's and Hamish Smith's: creating a "dependency resolver" where you can register and unregister the modules to be used.
I would add a constructor to MyClass that accepts a Module.
This wouldn't be used in production but would be used in test.
In the test code I would pass a Module that defined the test doubles required.
For a project I am working on, I created separate modules for each environment (test, development, stage, production, etc.). Those modules define the bindings.
Because dev, stage and production all use many of the same bindings, I created a common module they all derive from. Each one then adds its environment-specific bindings.
I also have a KernelFactory, that when handed an environment token will spool up an IKernel with the appropriate modules.
This allows me to switch my environment token which will in turn change all of my bindings automatically.
But if this is for unit testing, I agree with the above comments that a simple constructor allowing manual binding is the way to go, as it keeps Ninject out of your tests.
