We've been using Simple Injector with good success, in a fairly substantial application. We've been using constructor injection for all of our production classes, and configuring Simple Injector to populate everything, and everything's peachy.
We've not, though, used Simple Injector to manage the dependency trees for our unit tests. Instead, we've been new'ing up everything manually.
I just spent a couple of days working through a major refactoring, and nearly all of my time was in fixing these manually-constructed dependency trees in our unit tests.
This has me wondering - does anyone have any patterns they use to configure the dependency trees in their unit tests? For us, at least, the dependency trees in our tests tend to be fairly simple, but there are a lot of them.
Anyone have a method they use to manage these?
For true unit tests (i.e. those which only test one class, and mock all of its dependencies), it doesn't make any sense to use a DI framework. In these tests:
- If you find you have a lot of repetitive code newing up an instance of your class with all the mocks you've created, one useful strategy is to create all of your mocks and the subject-under-test instance in your Setup method (these can all be private instance fields); each individual test's "arrange" section then only has to call the appropriate Setup() code on the methods it needs to mock. This way you end up with only one new PersonController(...) statement per test class.
- If you need to create a lot of domain/data objects, it's useful to create Builder objects that start with sane values for testing. Instead of invoking a huge constructor all over your code with a bunch of fake values, you mostly just call, e.g., var person = new PersonBuilder().Build(), possibly with a few chained method calls for the pieces of data you specifically care about in that test (see the sketch after this list). You may also be interested in AutoFixture, but I've never used it so I can't vouch for it.
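A minimal sketch of what such a builder might look like (PersonBuilder, and the Name/Age properties on Person, are illustrative here, not existing types):

public class PersonBuilder
{
    // Sane defaults; a test only overrides what it cares about.
    private string _name = "Test Person";
    private int _age = 30;

    public PersonBuilder Named(string name) { _name = name; return this; }
    public PersonBuilder Aged(int age) { _age = age; return this; }

    public Person Build()
    {
        return new Person { Name = _name, Age = _age };
    }
}

// Usage: only the age matters to this hypothetical test.
var person = new PersonBuilder().Aged(17).Build();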
If you're writing integration tests, where you need to test the interaction between several parts of the system, but you still need to be able to mock specific pieces, consider creating Builder classes for your services, so you can say, e.g. var personController = new PersonControllerBuilder().WithRealDatabase(connection).WithAuthorization(new AllowAllAuthorizationService()).Build().
If you're writing end-to-end, or "scenario" tests, where you need to test the whole system, then it makes sense to set up your DI framework, leveraging the same configuration code that your real product uses. You can alter the configuration slightly to give yourself better programmatic control over things like which user is logged in and such. You can still leverage the other builder classes you've created for constructing data, too.
var user = new PersonBuilder().Build();

using (Login.As(user))
{
    var controller = Container.Get<PersonController>();
    var result = controller.GetCurrentUser();

    Assert.AreEqual(result.Username, user.Username);
}
Refrain from using your DI container within your unit tests. In unit tests, you try to test one class or module in isolation, and there is little use for a DI container in that area.
Things are different with integration testing, since you want to test how the components in your system integrate and work together. In that case you often use your production DI configuration and swap out some of your services for fake ones (e.g. a fake EmailService), but stick as close to the real thing as you can. In this case you would typically use your Container to resolve the whole object graph.
The desire to use a DI container in unit tests often stems from ineffective patterns. For instance, if you create the class under test with all its dependencies in each test, you get lots of duplicated initialization code, and a small change to the class under test can ripple through the test suite and require you to change dozens of unit tests. This obviously causes maintainability problems.
One pattern that has helped me a lot here in the past is the use of a simple SUT-specific factory method. This method centralizes the creation of the class under test and minimizes the number of changes needed when the dependencies of the class under test change. This is what such a factory method could look like:
private ClassUnderTest CreateClassUnderTest(
    ILogger logger = null,
    IMailSender mailSender = null,
    IEventPublisher publisher = null)
{
    return new ClassUnderTest(
        logger ?? new FakeLogger(),
        mailSender ?? new FakeMailer(),
        publisher ?? new FakePublisher());
}
The factory method's arguments mirror the class's constructor arguments, but they are all optional. For any dependency not supplied by the caller, a default fake implementation is injected.
This typically works very well, because in most tests you are just interested in one or two dependencies. The other dependencies might be required for the class to function, but might not be interesting for that specific test. The factory method, therefore, allows you to only supply the dependencies that are interesting for the test at hand, while removing the noise of unused dependencies from the test method. As an example using the factory method, here's a test method:
public void Doing_something_will_always_log_a_message()
{
    // Arrange
    var logger = new ListLogger();

    ClassUnderTest sut = CreateClassUnderTest(logger: logger);

    // Act
    sut.DoSomething();

    // Assert
    Assert.IsTrue(logger.Count > 0);
}
If you are interested in learning how to write Readable, Trustworthy and Maintainable (RTM) tests, Roy Osherove's book The Art of Unit Testing (second edition) is an excellent read. This has helped me tremendously in my understanding of writing great unit tests. If you’re interested in a deep-dive into Dependency Injection and its related patterns, consider reading Dependency Injection Principles, Practices, and Patterns (which I co-authored).
How can an IoC Container be used for unit testing? Is it useful to manage mocks in a huge solution (50+ projects) using IoC? Any experiences? Any C# libraries that work well for using it in unit tests?
Generally speaking, a DI Container should not be necessary for unit testing because unit testing is all about separating responsibilities.
Consider a class that uses Constructor Injection
public MyClass(IMyDependency dep) { }
In your entire application, it may be that there's a huge dependency graph hidden behind IMyDependency, but in a unit test, you flatten it all down to a single Test Double.
You can use dynamic mocks like Moq or RhinoMocks to generate the Test Double, but it is not required.
var dep = new Mock<IMyDependency>().Object;
var sut = new MyClass(dep);
In some cases, an auto-mocking container can be nice to have, but you don't need to use the same DI Container that the production application uses.
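For instance, a minimal sketch using AutoFixture with its AutoMoq customization (assuming the AutoFixture and AutoFixture.AutoMoq packages; other auto-mocking containers work similarly):

// The fixture builds MyClass and supplies Moq-generated Test Doubles
// for any constructor dependencies that have no explicit registration.
var fixture = new Fixture().Customize(new AutoMoqCustomization());
var sut = fixture.Create<MyClass>();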
I often use an IoC container in my tests. Granted, they are not "unit tests" in the pure sense; IMO they are more BDD-ish and facilitate refactoring. Tests are there to give you confidence to refactor. Poorly written tests can be like pouring cement into your code.
Consider the following:
[TestFixture]
public class ImageGalleryFixture : ContainerWiredFixture
{
    [Test]
    public void Should_save_image()
    {
        container.ConfigureMockFor<IFileRepository>()
            .Setup(r => r.Create(It.IsAny<IFile>()))
            .Verifiable();

        AddToGallery(new RequestWithRealFile());

        container.VerifyMockFor<IFileRepository>();
    }

    private void AddToGallery(AddBusinessImage request)
    {
        container.Resolve<BusinessPublisher>().Consume(request);
    }
}
There are several things that happen when adding an image to the gallery. The image is resized, a thumbnail is generated, and the files are stored on Amazon S3. By using a container I can more easily isolate just the behavior I want to test, which in this case is the persisting part.
An auto-mocking container extension comes in handy when using this technique:
http://www.agileatwork.com/auto-mocking-unity-container-extension/
How can an IoC Container be used for unit testing?
IoC encourages programming practices that make unit testing in isolation (i.e. using mocks) easier: use of interfaces, no new(), no singletons, and so on.
But using the IoC container for testing is not really a requirement; it just provides some conveniences, e.g. injection of mocks, that you could also handle manually.
Is it useful to manage mocks in a huge solution (50+ projects) using IoC?
I'm not sure what you mean by managing mocks using IoC. Anyway, IoC containers can usually do more than just inject mocks when it comes to testing. And if you have decent IDE support that makes refactoring possible, why not use it?
Any experience?
Yes: on a huge solution, you need more than ever an approach that is not error-prone and does not get in the way of refactoring (i.e. either a type-safe IoC container or good IDE support).
Containers that can resolve unregistered/unknown services, such as Simple Injector or DryIoc (it's mine), can return mocks for interfaces that have not been implemented yet.
This means you can start development with a first simple implementation and its mocked dependencies, and replace them with the real thing as you progress.
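As a rough sketch of the idea (shown here with Simple Injector's ResolveUnregisteredType event and Moq; DryIoc has its own rules for unknown services, so treat the details as illustrative):

// When an interface with no registration is requested, hand back a Moq fake.
container.ResolveUnregisteredType += (sender, e) =>
{
    if (e.UnregisteredServiceType.IsInterface)
    {
        var mockType = typeof(Mock<>).MakeGenericType(e.UnregisteredServiceType);
        var mock = (Mock)Activator.CreateInstance(mockType);
        e.Register(() => mock.Object);
    }
};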
I think I have a problem understanding the way unit tests and/or dependency injection work. I'm using NUnit and Rhino Mocks for unit testing and Ninject as a Dependency Injection framework. In general, I thought those two would fit together perfectly - but somehow, it seems like things just get more complicated and harder to understand.
(I'll try to make up a good example, to keep it clean and easy. It's about me, riding a bike).
1.) Without DI / Unit Tests:
Without knowing about DI and unit tests, my code would have looked like this - and I'd be happy:
public class Person
{
    public void Travel()
    {
        Bike bike = new Bike();
        bike.Ride();
    }
}

public class Bike
{
    public void Ride()
    {
        Console.WriteLine("Riding a Bike");
    }
}
To ride my bike I would just need: new Person().Travel();
2.) With DI:
I don't want that tight coupling, so I need an interface and a NinjectModule! I'd have some overhead, but that's fine as long as the code stays easy to read and understand. I'll just show the code for the modified Person class; the Bike class is unchanged:
public class Person
{
    IKernel kernel = new StandardKernel(new TransportationModule());

    public void Travel()
    {
        ITransportation transportation = kernel.Get<ITransportation>();
        transportation.Ride();
    }
}
I could still ride my bike with just: new Person().Travel();
3.) Considering Unit-Testing (without DI):
To be able to check whether the Ride method is called properly, I'll need a mock. As far as I know, there are generally two ways to inject an interface: constructor injection and setter injection. I chose constructor injection for my example:
public class Person
{
    ITransportation transportation;

    public Person(ITransportation transportation)
    {
        this.transportation = transportation;
    }

    public void Travel()
    {
        transportation.Ride();
    }
}
This time, I would need to pass in the bike: new Person(new Bike()).Travel();
4.) With DI and preparing for Unit-Tests
The class from 3.) Considering Unit-Testing (without DI) would do the job without modification, but I would need to call new Person(kernel.Get<ITransportation>());. Through that, it feels like I'm losing the benefit of DI - in example 2 the Person class could be told to Travel without any coupling and without any need to know what kind of class the transportation is. Also, I think this form lacks a lot of the readability of example 2.
Is this how it is done? Or are there other - more elegant ways achieving Dependency Injection and the possibility to unit test (and mock)?
(Looking back, it seems the example is really bad - everyone should know what kind of transportation device he is riding at the moment...)
Generally I try to avoid using an IoC container for my unit testing - I just use mocks and stubs to pass in the dependencies.
Your problem starts in scenario 2: This is not DI - this is the service locator (anti-)pattern. For real Dependency Injection you need to pass in your dependencies, preferably via constructor injection.
Scenario 3 is looking good: this is DI, and it is generally also what enables you to test your classes in isolation - pass in the dependencies you need. I rarely find the need to use a full DI container for unit testing, since each class under test will only have a few dependencies, each of which can be stubbed or mocked to perform the test.
I would even argue that if you need an IoC container, your tests are probably not fine-grained enough or you have too many dependencies. In the latter case some refactoring might be in order, to form aggregate classes from two or more of the dependencies you are using (only if there is a semantic connection, of course). This will eventually drop the number of dependencies to a level you are comfortable with. That maximum number is different for each person; I personally strive to have four at most, so at least I can count them on one hand and mocking is not too much of a burden.
A last and crucial argument against using an IoC container in unit testing is behavioral testing: How can you be sure the class under test behaves the way you want it to if you are not in full control of your dependencies?
Arguably you could achieve this by stubbing out all dependencies with types that set flags for certain actions, but that is a big effort. It is much, much easier to use a mocking framework like RhinoMocks or Moq to verify that certain methods were called with the arguments you specify. For that you need to mock the dependencies you want to verify calls on; an IoC container cannot help you here.
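For example, a minimal sketch with Moq and the Person/ITransportation classes from the question, verifying that Travel() calls Ride() exactly once:

var transportation = new Mock<ITransportation>();
var person = new Person(transportation.Object);

person.Travel();

// Fails the test if Ride() was not called exactly once.
transportation.Verify(t => t.Ride(), Times.Once());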
You are getting some things confused.
Implementation 3 is better than number 2, because you don't need to set up the DI framework in your unit tests.
So when testing number 3 you would do:
ITransportation transportationMock = MockRepository.GenerateStrictMock<ITransportation>();

// set up expectations on your mock

var person = new Person(transportationMock);
The DI framework is something that's only needed when constructing object trees in production code. In your test code you have total control of what you want to test. When Unit Testing a class, you mock out all dependencies.
If you also want to do some integration testing you would pass a real Bike to your person class and test it.
The idea of testing classes in total isolation is that you can control each and every code path. You can make the dependency return correct or incorrect values or you can even have it throw an exception. If everything is working and you have a nice code coverage just from your unit tests, you will only need a couple of bigger tests to make sure that your DI is wired correctly.
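To illustrate controlling those code paths, a small sketch (using Moq and NUnit for brevity) that makes the dependency throw; whether Travel() should let the exception bubble up is an assumption of the sketch:

var transportation = new Mock<ITransportation>();
transportation.Setup(t => t.Ride()).Throws(new InvalidOperationException("flat tire"));

var person = new Person(transportation.Object);

// Assumes Travel() does not swallow the exception.
Assert.Throws<InvalidOperationException>(() => person.Travel());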
The key to writing testable code is to split object creation from business logic.
My 2 cents...
Although point 2 is an example of the Dependency Inversion Principle (DIP), it uses the Service Locator pattern rather than Dependency Injection.
Your point 3 illustrates Dependency Injection, where the IoC container would inject the dependency (ITransportation) into the constructor during build-up of Person.
In your real app AND the unit test, you would want to use the IoC container to build Person as well (i.e. don't new up Person directly). Either use the service locator pattern (kernel.Get<Person>();), or DI (e.g. setter injection) if your unit test framework supports this.
This would then build up Person AND its dependencies (viz. the configured concrete class for ITransportation) and inject them into Person as well (obviously, in the unit test your IoC would be configured with the mocked/stubbed ITransportation).
Finally, it is the dependencies that you would want to mock, i.e. ITransportation, so that you can test the Travel() method of Person.
Since Bike has no dependencies, it can be unit tested directly/independently (you don't need a mock to test Bike.Ride() unless a dependency is added to Bike).
In my ASP.NET MVC application I am using IoC to facilitate unit testing. The structure of my application is a Controller -> Service Class -> Repository type of structure. In order to do unit testing, I have an InMemoryRepository class that inherits my IRepository, which instead of going out to a database uses an internal List<T> member. When I construct my unit tests, I just pass an instance of the in-memory repository instead of my EF repository.
My service classes retrieve objects from the repository through an AsQueryable interface that my repository classes implement, allowing me to use LINQ in my service classes without the service classes knowing anything about the underlying data store, while still abstracting the data access layer away. In practice this seems to work well.
The problem that I am seeing is that every time I see unit testing discussed, people use mock objects instead of the in-memory approach I described. On the face of it that makes sense, because if my InMemoryRepository fails, not only will my InMemoryRepository unit tests fail, but that failure will cascade down into my service class and controller tests as well. More realistically, I am concerned about failures in my service classes affecting controller unit tests.
My method also requires more setup for each unit test, and as things become more complicated (e.g. I implement authorization in the service classes) the setup grows much more complicated, because I then have to make sure each unit test authorizes correctly with the service classes so the main aspect of that unit test doesn't fail. I can clearly see how mock objects would help in that regard.
However, I can't see how to solve this completely with mocks and still have valid tests. For example, one of my unit tests verifies that if I call _service.GetDocumentById(5), it gets the correct document from the repository. The only way this is a valid unit test (as far as I understand it) is if I have 2 or 3 documents stored, and my GetDocumentById() method correctly retrieves the one with an Id of 5.
How would I have a mocked repository with an AsQueryable call, and how would I make sure I don't mask any issues I make with my Linq statements by hardcoding the return statements when setting up the mocked repository? Is it better to keep my service class unit test using the InMemoryRepository but change my controller unit tests to use mocked service objects?
Edit:
After going over my structure again I remembered a complication that is preventing mocking in controller unit tests, as I forgot my structure is a bit more complicated than I originally said.
A Repository is a data store for one type of object, so if my document service class needs document entities, it creates an IRepository<Document>.
Controllers are passed an IRepositoryFactory. The IRepositoryFactory is a class that makes it easy to create repositories without having to pass repositories directly into the controller, and without the controller having to worry about which repositories the service classes require. I have an InMemoryRepositoryFactory, which gives the service classes InMemoryRepository<Entity> instantiations, and the same idea goes for my EFRepositoryFactory.
In the controller's constructors, private service class objects are instantiated by passing in the IRepositoryFactory object that is passed into that controller.
So for example
public class DocumentController : Controller
{
    private DocumentService _documentService;

    public DocumentController(IRepositoryFactory factory)
    {
        _documentService = new DocumentService(factory);
    }

    ...
}
I can't see how to mock my service layer with this architecture so that my controllers are unit tested and not integration tested. I could have a bad architecture for unit testing, but I'm not sure how to better solve the issues that made me want to make a repository factory in the first place.
One solution to your problem is to change your controllers to demand IDocumentService instances instead of constructing the services themselves:
public class DocumentController : Controller
{
    private IDocumentService _documentService;

    // The controller doesn't construct the service itself
    public DocumentController(IDocumentService documentService)
    {
        _documentService = documentService;
    }

    ...
}
In your real application, let your IoC container inject IRepositoryFactory instances into your services. In your controller unit tests, just mock the services as needed.
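For instance, a hedged sketch of such a controller test using Moq; the Document type, the GetDocumentById method on IDocumentService, and the Details action are illustrative, not taken from the question:

// Arrange: the service is a mock, so the controller is tested in isolation.
var documentService = new Mock<IDocumentService>();
documentService.Setup(s => s.GetDocumentById(5))
               .Returns(new Document { Id = 5 });

var controller = new DocumentController(documentService.Object);

// Act
var result = controller.Details(5) as ViewResult;

// Assert
Assert.IsNotNull(result);
Assert.AreEqual(5, ((Document)result.Model).Id);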
(And see Miško Hevery's article about constructors doing real work for an extended discussion of the benefits of restructuring your code like this.)
Personally, I would design the system around the Unit of Work pattern, where a unit of work references repositories. This can make things much simpler and allows more complex operations to run atomically. You would typically have an IUnitOfWorkFactory that is supplied as a dependency to the service classes. A service class creates a new unit of work, and that unit of work references the repositories. You can see an example of this here.
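A rough sketch of that shape; all the type names and members below (including GetById and Archived) are illustrative rather than taken from any particular library:

public interface IUnitOfWorkFactory
{
    IUnitOfWork CreateNew();
}

public interface IUnitOfWork : IDisposable
{
    IRepository<T> Repository<T>() where T : class;
    void Commit();   // persists all changes atomically
}

public class DocumentService
{
    private readonly IUnitOfWorkFactory _factory;

    public DocumentService(IUnitOfWorkFactory factory)
    {
        _factory = factory;
    }

    public void Archive(int documentId)
    {
        using (var unitOfWork = _factory.CreateNew())
        {
            var document = unitOfWork.Repository<Document>().GetById(documentId);
            document.Archived = true;
            unitOfWork.Commit();
        }
    }
}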
If I understand correctly you are concerned about errors in one piece of (low level) code failing a lot of tests, making it harder to see the actual problem. You take InMemoryRepository as a concrete example.
While your concern is valid, I personally wouldn't worry about a failing InMemoryRepository. It is a test object, and you should keep those test objects as simple as possible. This prevents you from having to write tests for your test objects. Most of the time I assume they are correct (although I sometimes add self-checks to such a class by writing Assert statements). A test will fail when such an object misbehaves. It's not optimal, but in my experience you normally find out quickly enough what the problem is. To be productive, you will have to draw a line somewhere.
Errors in the controller caused by the service are another cup of tea IMO. While you could mock the service, this would make testing more difficult and less trustworthy. It would be better NOT to test the service at all - only test the controller! The controller will call into the service, and if your service doesn't behave well, your controller tests will find out. This way you only test the top-level objects in your application. Code coverage will help you spot the parts of your code you don't test. Of course this isn't possible in all scenarios, but it often works well, and when the service works with a mocked repository (or unit of work) it works very well.
Your second concern was that those dependencies require a lot of test setup. I've got two things to say about this.
First of all, I try to limit my dependency inversion to only what I need to be able to run my unit tests. Calls to the system clock, database, SMTP server and file system should be faked to make unit tests fast and reliable. Other things I try not to invert, because the more you mock, the less reliable the tests become - you are testing less. Minimizing the dependency inversion (to what you need to have good RTM unit tests) helps make test setup easier.
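As a small example of the kind of inversion I do keep, a sketch of abstracting the system clock (the names are illustrative):

public interface ISystemClock
{
    DateTime UtcNow { get; }
}

public class RealSystemClock : ISystemClock
{
    public DateTime UtcNow { get { return DateTime.UtcNow; } }
}

public class FakeSystemClock : ISystemClock
{
    // Tests set this to any fixed moment they need.
    public DateTime UtcNow { get; set; }
}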
But (second point) you also need to write your unit tests in a way that makes them readable and maintainable (the hard part about unit testing, or in fact about making software in general). Having big test setups makes tests hard to understand and makes test code hard to change when a class gets a new dependency. I found that one of the best ways to make tests more readable and maintainable is to use simple factory methods in your test class to centralize the creation of the types you need in the test (I never use mocking frameworks). There are two patterns that I use. One is a simple factory method, such as one that creates a valid instance:
FakeDocumentService CreateValidService()
{
    return CreateValidService(CreateInitializedContext());
}

FakeDocumentService CreateValidService(InMemoryUnitOfWork context)
{
    return new FakeDocumentService(context);
}
This way tests can simply call these methods and when they need a valid object they simply call one of the factory methods. Of course when one of these methods accidentally creates an invalid object, many tests will fail. It's hard to prevent this, but easily fixed. And easily fixed means that the tests are maintainable.
The other pattern I use is the use of a container type that holds the arguments/properties of the actual object you want to create. This gets especially useful when an object has many different properties and/or constructor arguments. Mix this with a factory for the container and a builder method for the object to create and you get very readable test code:
[TestMethod]
public void Operation_WithValidArguments_Succeeds()
{
    // Arrange
    var validArgs = CreateValidArgs();
    var service = BuildNewService(validArgs);

    // Act
    service.Operation();
}

[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void Operation_NegativeAge_ThrowsException()
{
    // Arrange
    var invalidArgs = CreateValidArgs();
    invalidArgs.Age = -1;

    var service = BuildNewService(invalidArgs);

    // Act
    service.Operation();
}
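To make that concrete, a hedged sketch of what the helpers used above might look like; the ServiceArgs container, its members, and the SUT's constructor are illustrative:

// Container type holding the SUT's arguments, pre-filled with sane defaults.
class ServiceArgs
{
    public string Name = "Valid name";
    public int Age = 30;
    public InMemoryUnitOfWork Context;
}

private ServiceArgs CreateValidArgs()
{
    // One central place that knows what a valid configuration looks like.
    return new ServiceArgs { Context = CreateInitializedContext() };
}

private DocumentService BuildNewService(ServiceArgs args)
{
    return new DocumentService(args.Context, args.Name, args.Age);
}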
This allows the test to specify only what matters! That is very important for making tests readable. The CreateValidArgs() method could create a container with over 100 arguments that together make up a valid SUT (system under test). You have now centralized the default valid configuration in one place. I hope this makes sense.
Your third concern was about not being able to test whether LINQ queries behave as expected with the given LINQ provider. This is a valid problem, because it is quite easy to write LINQ (to expression tree) queries that run perfectly over in-memory objects but fail when querying the database. Sometimes it is impossible to translate a query (because you call a .NET method that has no counterpart in the database), or the LINQ provider has limitations (or bugs). Especially the LINQ provider of Entity Framework 3.5 sucks hard.
However, this is a problem you cannot solve with unit tests by definition, because once you call the database in your tests, they are not unit tests anymore. Unit tests never totally replace manual testing, though :-)
Still, it's a valid concern. In addition to unit testing you can do integration testing. In this case you run your code with the real provider and a (dedicated) test database. Run each test within a database transaction and roll the transaction back at the end of the test (TransactionScope works great for this!). Note however that writing maintainable integration tests is even harder than writing maintainable unit tests. You have to make sure that the schema of your test database stays in sync. Each integration test should insert the data it needs into the database, which is often a lot of work to write and maintain. It is best to keep the number of integration tests to a minimum: have enough integration tests to make you feel confident about making changes to the system. For instance, calling a service method with a complicated LINQ statement in a single test will often be enough to verify that your LINQ provider can build valid SQL out of it. Most of the time I just assume the LINQ provider will have the same behavior as the LINQ to Objects (.AsQueryable()) provider. Again, you will have to draw the line somewhere.
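A minimal sketch of the transaction-per-test idea with TransactionScope (MSTest attributes shown; the setup and teardown would typically live in a base class for your integration tests):

[TestClass]
public class DocumentServiceIntegrationTests
{
    private TransactionScope _scope;

    [TestInitialize]
    public void SetUp()
    {
        // Every database operation performed during the test enlists here.
        _scope = new TransactionScope();
    }

    [TestCleanup]
    public void TearDown()
    {
        // Disposing without calling Complete() rolls the transaction back.
        _scope.Dispose();
    }
}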
I hope this helps.
I think your approach is sound for testing the service layer itself, but, as you suggested, it would be better if the service layer is mocked out completely for your business logic and other high-level testing. This makes your higher-level tests easier to implement/maintain, as there's no need to exercise the service layer again if it's already been tested.