I've been trying to shift myself into a more test-driven methodology when writing my .NET MVC-based app. I'm doing all my dependency injection using constructor-based injection. Thus far it's going well, but I've found myself doing something repeatedly and I'm wondering if there is a better practice out there.
Let's say I want to test a Controller. It has a dependency on a Unit Of Work (database) object. Simple enough... I write my controller to take that interface in its constructor, and my DI framework (Ninject) can inject it at runtime. Easy. Meanwhile in my unit test, I can manually construct my controller with a mocked-up database object. I like that I can write lots of individual self-contained tests that take care of all the object construction and testing.
Now I've moved on and started adding new features & functions to my Controller object. The Controller now has one or two more dependencies. I can write more tests using these 3 dependencies, but my older tests are all broken (won't compile), as the compiler throws a whole bunch of errors like this:
'MyProject.Web.Api.Controllers.MyExampleController' does not contain a constructor that takes 3 arguments
What I've been doing (which smells bad) is going back and updating all my unit tests, changing the construction code, and adding null parameters for all the new dependencies that my old tests don't care about, like this:
From this:
var controllerToTest = new MyExampleController(mockUOW.Object);
To this:
var controllerToTest = new MyExampleController(mockUOW.Object, null, null);
This gets everything to compile and gets my tests to run again, but I don't like the prospect of going back and editing tons of old tests just to change calls to my object constructors.
Is there a better way to write your unit tests (or a better way to write your classes and do DI?) so they don't all break when you add a new dependency?
You've run into a common issue when unit testing: duplication of code causing significant refactoring overhead. The solution is to try to restrict the instantiation of the object(s) under test to a single place. If you can, try to do it in the TestInitialize method of your tests:
[TestInitialize]
public void Init()
{
this.mockUOW = new Mock<ISomeDependency>();
this.mock2 = new Mock<IAnotherDependency>();
this.mock3 = new Mock<IYetAnotherDependency>();
// Do initial set-up on your mocks
this.controllerToTest = new MyExampleController(this.mockUOW.Object, this.mock2.Object, this.mock3.Object);
}
Sometimes, this isn't practical: you need to do specific set-up in each test before you create an instance of the class under test. In this situation, move the code to create the object into a well-named method, and call that:
[TestMethod]
public void MyTestMethod()
{
// Do any required set-up on mocks, etc.
this.CreateController();
}
private void CreateController()
{
this.controllerToTest = new MyExampleController(this.mockUOW.Object, this.mock2.Object, this.mock3.Object);
}
Now you (hopefully) have only a single place in your tests to update when dependencies get added to one of your controllers.
Note also that the CreateController method isn't named CreateMyExampleController. As a best practice, I try to avoid method names in tests that are specific to certain classes or methods. Here, for example, including the class name in the method name adds another refactoring dependency which is easily overlooked.
You could just inline the mock initialization: instead of supplying null, supply new Mock&lt;IClassName&gt;().Object.
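For example, if the controller gained two new dependencies (the interface names here are placeholders for whatever was added):
var controllerToTest = new MyExampleController(
    mockUOW.Object,
    new Mock<ISecondDependency>().Object,   // inline dummy instead of null
    new Mock<IThirdDependency>().Object);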
You can try using AutoMock (on CodePlex) to automatically mock out your top-level objects; this reduces the amount of rework you need to do to fix up object construction.
Tests should not share the same context - you should be trying to test a stateless system. For the most part, without refactoring tools you're just going to have to deal with it.
ReSharper is very powerful in this regard. When you add a new dependency, you can press Ctrl+F6 on the constructor and add the new parameter. It will prompt you for how to fill in the call sites - just type your mock in there, and everywhere that matters (if you are injecting all your dependencies the right way) will be filled in automatically.
If your DI container supports multiple container instances (e.g. Unity does), you can use a local container in your tests to resolve all references instead of calling the constructor directly.
Pseudocode:
var container = new MyDependencyContainer();
container.RegisterInstance<IMyInterface>(mockedMyInterface);
var myClass = container.Resolve<MyClass>();
To simplify the code in each test, you can keep the container and mocks as fields and configure the container in a method marked with the [TestInitialize] attribute.
Note that such code will not break at compile time when you add a dependency to a constructor, but it will likely still require changes (e.g. registering the new dependencies with the container).
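For illustration, a minimal sketch of that pattern using Unity and Moq (IMyInterface stands in for whichever dependency the test cares about; everything else comes from whatever has been registered):
private IUnityContainer container;
private Mock<IMyInterface> mockedMyInterface;

[TestInitialize]
public void Init()
{
    container = new UnityContainer();
    mockedMyInterface = new Mock<IMyInterface>();
    container.RegisterInstance<IMyInterface>(mockedMyInterface.Object);
    // When a constructor gains a new dependency, only an extra RegisterInstance call is needed here.
}

[TestMethod]
public void MyTestMethod()
{
    // The container constructs the class under test, supplying the registered instances.
    var controllerToTest = container.Resolve<MyExampleController>();
    // ... exercise controllerToTest and assert ...
}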
Related
I am new to unit testing, and just getting started writing unit tests for an existing code base.
I would like to write a unit test for the following method of a class.
public int ProcessFileRowQueue()
{
var fileRowsToProcess = this.EdiEntityManager.GetFileRowEntitiesToProcess();
foreach (var fileRowEntity in fileRowsToProcess)
{
ProcessFileRow(fileRowEntity);
}
return fileRowsToProcess.Count;
}
The problem is with GetFileRowEntitiesToProcess(): the entity manager is a wrapper around the Entity Framework context. I have searched on this and found that one solution is to have a test database in a known state to test against. However, it seems to me that creating a few entities in the test code would yield more consistent test results.
But as it exists, I don't see a way to mock the Manager without some refactoring.
Is there a best practice for resolving this? I apologize for this question being a bit naïve, but I just want to make sure I go down the right road for the rest of the project.
I'm hearing two questions here:
Should I mock EdiEntityManager?
Yes. It's a dependency external to the code being tested, its creation and behavior are defined outside of that code. So for testing purposes a mock with known behavior should be injected.
How can I mock EdiEntityManager?
That we can't know from the code posted. It depends on what that type is, how it's created and supplied to that containing object, etc. To answer this part of the question, you should attempt to:
1. Create a mock with known behavior for the one method being invoked (GetFileRowEntitiesToProcess()).
2. Inject that mock into the containing object being tested.
For either of these efforts, discover what may prevent that from happening. Each such discovery is either going to involve learning a little bit more about the types and the mocks, or is going to reveal a need for refactoring to allow testability. The code posted doesn't reveal that.
As an example, suppose EdiEntityManager is created in the constructor:
public SomeObject()
{
this.EdiEntityManager = new EntityManager();
}
That would be something that prevents mocking because it gets in the way of Step 2 above. Instead, the constructor would be refactored to require rather than instantiate:
public SomeObject(EntityManager ediEntityManager)
{
this.EdiEntityManager = ediEntityManager;
}
That would allow a test to supply a mock, and conforms with the Dependency Inversion Principle.
Or perhaps EntityManager is too concrete a type and difficult to mock/inject; in that case the dependency should instead be an interface which EntityManager implements. The worst-case scenario with this problem is that you don't control the type at all and simply need to define a wrapper object (which itself has a mockable interface) to enclose the EntityManager dependency.
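Once an interface can be injected, the test itself is short. A sketch using Moq (the interface, entity, and containing class names are assumptions based on the code posted, and the test attributes are purely illustrative):
[TestMethod]
public void ProcessFileRowQueue_ReturnsNumberOfRowsProcessed()
{
    // Arrange: the mocked manager returns two known rows instead of hitting the database.
    var entityManager = new Mock<IEdiEntityManager>();
    entityManager
        .Setup(m => m.GetFileRowEntitiesToProcess())
        .Returns(new List<FileRowEntity> { new FileRowEntity(), new FileRowEntity() });

    var processor = new FileRowProcessor(entityManager.Object); // hypothetical containing class

    // Act
    int processed = processor.ProcessFileRowQueue();

    // Assert
    Assert.AreEqual(2, processed);
}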
I'm trying to create a unit test for an ASP.NET controller that has the following constructor definition (filled with Ninject when running the real application):
public OrderController(IViewModelFactory modelFactory, INewsRepository repository, ILoggedUserHelper loggedUserHelper,
IDelegateHelper delegateHelper, ICustomerContextWrapper customerContext) {
this.factory = modelFactory;
this.loggedUserHelper = loggedUserHelper;
this.delegateHelper = delegateHelper;
this.customerContext = customerContext;
}
I want to test the methods inside the OrderController class, but in order to isolate it, I have to mock each and every one of those dependencies, which becomes outright ridiculous (probably having to mock sub-dependencies as well).
In this case, which is the best practice to Unit Test this class?
Well, you have to provide test doubles for all dependencies, not necessarily mocks.
Luckily, this is the 21st century and there are tools to make the job easier for us. You can use AutoFixture to create an instance of OrderController and inject mocks as necessary.
var fixture = new Fixture().Customize(new AutoConfiguredMoqCustomization());
var orderController = fixture.Create<OrderController>();
Which, basically, is equivalent to:
var factory = new Mock<IViewModelFactory>();
var repository = new Mock<INewsRepository>();
var loggedUserHelper = new Mock<ILoggedUserHelper>();
var delegateHelper = new Mock<IDelegateHelper>();
var customerContext = new Mock<ICustomerContextWrapper>();
var orderController = new OrderController(factory.Object, repository.Object, loggedUserHelper.Object, delegateHelper.Object, customerContext.Object);
If those dependencies depend on other types, those will be setup as well. AutoFixture with the AutoConfiguredMoqCustomization customization will build an entire graph of dependencies.
If you need access to, say, the repository mock, so you can do some additional setups or assertions on it later, you can freeze it. Freezing a type will make the fixture container contain only one instance of that type, e.g.:
var fixture = new Fixture().Customize(new AutoConfiguredMoqCustomization());
var repositoryMock = fixture.Freeze<Mock<INewsRepository>>();
repositoryMock.Setup(x => x.Retrieve()).Returns(1);
//the frozen instance will be injected here
var orderController = fixture.Create<OrderController>();
repositoryMock.Verify(x => x.Retrieve(), Times.Once);
I've used Moq in these examples, but AutoFixture also integrates with NSubstitute, RhinoMock and Foq.
Disclosure: I'm one of the project's contributors
No, you don't. The different concepts of test object implementations you can use are known as Test Doubles. Mocks are just one type of Test Double as defined by Gerard Meszaros in his book:
Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an InMemoryTestDatabase is a good example).
Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
Spies are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent.
Mocks are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don't expect and are checked during verification to ensure they got all the calls they were expecting.
You only need to give as many stubs, fakes and dummies as required for your test to pass.
Dummies take very little work to generate and may be enough to cover your scenario. An example would be a constructor that takes an IEmailer and an ILogWriter. If you're only testing the Log method, you only need to provide enough of an implementation of IEmailer in order for the test to not throw Argument exceptions.
Also, regarding your point about sub-dependencies... Moq will take care of that for you, because Moq implementations of your interface won't take dependencies.
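To make the IEmailer/ILogWriter example concrete, here is a sketch with Moq; the NotificationService class and the Write method on ILogWriter are hypothetical:
[Test]
public void Log_ForwardsMessageToLogWriter()
{
    // Dummy: only exists to satisfy the constructor; this test never uses it.
    var emailer = new Mock<IEmailer>().Object;

    // The only test double this test actually inspects.
    var logWriter = new Mock<ILogWriter>();

    var sut = new NotificationService(emailer, logWriter.Object); // hypothetical class under test
    sut.Log("hello");

    logWriter.Verify(w => w.Write("hello"), Times.Once);
}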
I currently have a repository that is using Entity Framework for my CRUD operations.
This is injected into my service that needs to use this repo.
Using AutoMapper, I project the entity Model onto a Poco model and the poco gets returned by the service.
If my objects have multiple properties, what is the correct way to set up and then assert my properties?
If my service has multiple repo dependencies, what is the correct way to set up all my mocks? A class-level [SetUp] where all the mocks and objects are configured for these test fixtures?
I want to avoid having 10 tests and each test has 50 asserts on properties and dozens on mocks set-up for each test. This makes maintainability and readability difficult.
I have read Art of Unit Testing and did not discover any suggestions how to handle this case.
The tooling I am using is Rhino Mocks and NUnit.
I also found this on SO but it doesn't answer my question: Correctly Unit Test Service / Repository Interaction
Here is a sample that expresses what I am describing:
public void Save_ReturnSavedDocument()
{
//Simulate DB object
var repoResult = new EntityModel.Document()
{
DocumentId = 2,
Message = "TestMessage1",
Name = "Name1",
Email = "Email1",
Comment = "Comment1"
};
//Create mocks of Repo Methods - Might have many dependencies
var documentRepository = MockRepository.GenerateStub<IDocumentRepository>();
documentRepository.Stub(m => m.Get()).IgnoreArguments().Return(new List<EntityModel.Document>()
{
repoResult
}.AsQueryable());
documentRepository.Stub(a => a.Save(null, null)).IgnoreArguments().Return(repoResult);
//instantiate service and inject repo
var documentService = new DocumentService(documentRepository);
var savedDocument = documentService.Save(new Models.Document()
{
ID = 0,
DocumentTypeId = 1,
Message = "TestMessage1"
});
//Assert that properties are correctly mapped after save
Assert.AreEqual(repoResult.Message, savedDocument.Message);
Assert.AreEqual(repoResult.DocumentId, savedDocument.DocumentId);
Assert.AreEqual(repoResult.Name, savedDocument.Name);
Assert.AreEqual(repoResult.Email, savedDocument.Email);
Assert.AreEqual(repoResult.Comment, savedDocument.Comment);
//Many More properties here
}
First of all, each test should only have one assertion (unless the others merely validate the real one), e.g. if you want to assert that all elements of a list are distinct, you may want to assert first that the list is not empty; otherwise you may get a false positive. In other cases there should be only one assert per test. Why? If the test fails, its name tells you exactly what is wrong. If you have multiple asserts and the first one fails, you don't know whether the rest were OK; all you know then is "something went wrong".
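For example, a guard assertion supporting the real one (the helper returning the documents is hypothetical):
[Test]
public void ProcessedDocuments_AreDistinct()
{
    var documents = GetProcessedDocuments(); // hypothetical helper returning List<Models.Document>

    // Guard assertion: an empty list would make the distinctness check pass vacuously.
    Assert.IsNotEmpty(documents);

    // The real assertion.
    Assert.AreEqual(documents.Count, documents.Select(d => d.DocumentId).Distinct().Count());
}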
You say you don't want to set up all the mocks/stubs in 10 tests. This is why most frameworks offer you a set-up method which runs before each test; this is where you can put most of your mock configuration in one place and reuse it. In NUnit you just create a method and decorate it with a [SetUp] attribute.
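With NUnit and Rhino Mocks (the tools mentioned in the question), the shared set-up might look like this:
[TestFixture]
public class DocumentServiceTests
{
    private IDocumentRepository _documentRepository;
    private DocumentService _documentService;

    [SetUp]
    public void SetUp()
    {
        // Runs before every test; the common stub configuration lives in one place.
        _documentRepository = MockRepository.GenerateStub<IDocumentRepository>();
        _documentService = new DocumentService(_documentRepository);
    }
}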
If you want to test a method with different values of a parameter you can use NUnit's [TestCase] attributes. This is very elegant and you don't have to create multiple identical tests.
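Building on the fixture above, a sketch of a parameterised test (the mapping being checked is just illustrative):
[TestCase("TestMessage1")]
[TestCase("A different message")]
public void Save_MapsMessageToReturnedDocument(string message)
{
    _documentRepository
        .Stub(r => r.Save(null, null)).IgnoreArguments()
        .Return(new EntityModel.Document { Message = message });

    var saved = _documentService.Save(new Models.Document { Message = message });

    Assert.AreEqual(message, saved.Message);
}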
Now let's talk about some useful tools.
AutoFixture: this is an amazing and very powerful tool that allows you to create an instance of a class which requires multiple dependencies. It sets up the dependencies with dummy mocks automatically, and allows you to manually set up only the ones you need in a particular test. Say you need to create a mock for a UnitOfWork which takes 10 repositories as dependencies, and in your test you only need to set up one of them. AutoFixture allows you to create that UnitOfWork and set up that one particular repository mock (or more if you need to); the rest of the dependencies will be set up automatically with dummy mocks. This saves you a huge amount of boilerplate code. It is a little bit like an IoC container for your tests.
It can also generate fake objects with random data for you. So, e.g., the whole initialization of EntityModel.Document would be just one line:
var repoResult = _fixture.Create<EntityModel.Document>();
Especially take a look at:
Create
Freeze
AutoMockCustomization
Here you will find my answer explaining how to use AutoFixture.
SemanticComparison Tutorial: this is what will help you avoid multiple assertions when comparing properties of objects of different types. If the properties have the same names it will do it almost automatically; if not, you can define the mappings. It will also tell you exactly which properties do not match and show their values.
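A typical usage is a single semantic assertion instead of one Assert per property (the fluent calls below are from memory of the SemanticComparison API, so treat the exact method names as an assumption to verify):
// Compares all same-named public properties and reports the ones that differ on failure.
repoResult.AsSource().OfLikeness<Models.Document>().ShouldEqual(savedDocument);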
Fluent Assertions: this just provides you with a nicer way to assert things.
Instead of
Assert.AreEqual(repoResult.Message, savedDocument.Message);
You can do
savedDocument.Message.Should().Be(repoResult.Message);
To sum up: these tools will help you create your tests with much less code and will make them much more readable. It takes time to get to know them well, especially AutoFixture, but when you do, they become the first things you add to your test projects - believe me :). By the way, they are all available from NuGet.
One more tip: if you have problems testing a class, it usually indicates a bad architecture. The solution is usually to extract smaller classes from the problematic class (Single Responsibility Principle). Then you can easily test the small classes for business logic, and easily test the original class for its interactions with them.
Consider using anonymous types:
public void Save_ReturnSavedDocument()
{
// (unmodified code)...
//Assert that properties are correctly mapped after save
Assert.AreEqual(
new
{
repoResult.Message,
repoResult.DocumentId,
repoResult.Name,
repoResult.Email,
repoResult.Comment,
},
new
{
savedDocument.Message,
savedDocument.DocumentId,
savedDocument.Name,
savedDocument.Email,
savedDocument.Comment,
});
}
There is one thing to look out for: nullable types (e.g. int?) and properties that might have slightly different types (float vs double) - but you can work around this by casting the properties to specific types (e.g. (int?)repoResult.DocumentId).
Another option would be to create a custom assert class/method(s).
Basically, the trick is to push as much clutter as you can outside of the unit tests, so that only the behaviour that is to be tested remains.
Some ways to do that:
1. Don't declare instances of your model/POCO classes inside each test, but rather use a static TestData class that exposes these instances as properties. Usually these instances are useful for more than one test as well. For added robustness, have the properties on the TestData class create and return a new object instance every time they're accessed, so that one unit test cannot affect the next by modifying the test data.
2. On your test class, declare a helper method that accepts the (usually mocked) repositories and returns the system under test (or "SUT", i.e. your service). This is mainly useful in situations where configuring the SUT takes two or more statements, since it tidies up your test code.
3. As an alternative to 2, have your test class expose properties for each of the mocked repositories, so that you don't need to declare these in your unit tests; you can even pre-initialize them with a default behaviour to reduce the configuration per test even further. The helper method that returns the SUT then doesn't take the mocked repositories as arguments, but rather constructs the SUT using the properties. You might want to reinitialize each repository property on each [TestInitialize].
4. To reduce the clutter of comparing each property of your POCO with the corresponding property on the Model object, declare a helper method on your test class that does this for you (e.g. void AssertPocoEqualsModel(Poco p, Model m)). Again, this removes some clutter and you get the reusability for free.
5. Or, as an alternative to 4, don't compare all properties in every test, but rather test the mapping code in only one place with a separate set of unit tests. This has the added benefit that, should the mapping ever include new properties or change in any other way, you don't have to update 100-odd tests. When not testing the property mappings, you should just verify that the SUT returns the correct object instances (e.g. based on Id or Name), and that just the properties that might be changed by the business logic currently being tested contain the correct values (such as an Order total).
Personally, I prefer 5 because of its maintainability, but this isn't always possible and then 4 is usually a viable alternative.
Your test code would then look like this (unverified, just for demonstration purposes):
[TestClass]
public class DocumentServiceTest
{
private IDocumentRepository DocumentRepositoryMock { get; set; }
[TestInitialize]
public void Initialize()
{
DocumentRepositoryMock = MockRepository.GenerateStub<IDocumentRepository>();
}
[TestMethod]
public void Save_ReturnSavedDocument()
{
//Arrange
var repoResult = TestData.AcmeDocumentEntity;
DocumentRepositoryMock
.Stub(m => m.Get())
.IgnoreArguments()
.Return(new List<EntityModel.Document>() { repoResult }.AsQueryable());
DocumentRepositoryMock
.Stub(a => a.Save(null, null))
.IgnoreArguments()
.Return(repoResult);
//Act
var documentService = CreateDocumentService();
var savedDocument = documentService.Save(TestData.AcmeDocumentModel);
//Assert that properties are correctly mapped after save
AssertEntityEqualsModel(repoResult, savedDocument);
}
//Helpers
private DocumentService CreateDocumentService()
{
return new DocumentService(DocumentRepositoryMock);
}
private void AssertEntityEqualsModel(EntityModel.Document entityDoc, Models.Document modelDoc)
{
Assert.AreEqual(entityDoc.Message, modelDoc.Message);
Assert.AreEqual(entityDoc.DocumentId, modelDoc.DocumentId);
//...
}
}
public static class TestData
{
public static EntityModel.Document AcmeDocumentEntity
{
get
{
//Note that a new instance is returned on each invocation:
return new EntityModel.Document()
{
DocumentId = 2,
Message = "TestMessage1",
//...
};
}
}
public static Models.Document AcmeDocumentModel
{
get { /* etc. */ }
}
}
In general, if you're having a hard time creating a concise test, you're testing the wrong thing or the code you're testing has too many responsibilities (in my experience).
In this specific case, it looks like you're testing the wrong thing. If your repo is using Entity Framework, you're getting the same object back that you're sending in; EF just updates the Id for new objects and any timestamp fields you might have.
Also, if you can't get one of your asserts to fail without a second assert failing, then you don't need one of them. Is it really possible for "Name" to come back OK but for "Email" to fail? If so, they should be in separate tests.
Finally, trying some TDD might help. Comment out all the code in your service's Save method. Then write a test that fails. Then uncomment only enough code to make your test pass. Then write your next failing test. Can't write a test that fails? Then you're done.
I have a number of controllers that I am testing, each of which has a dependency on a repository. This is how I am supplying the mocked repository in the case of each test fixture:
[SetUp]
public void SetUp()
{
var repository = RepositoryMockHelper.MockRepository();
controller = new HomeController(repository.Object);
}
And here is the MockRepository helper method for good measure:
internal static Mock<IRepository> MockRepository()
{
var repository = new Mock<IRepository>();
var posts = new InMemoryDbSet<Post>()
{
new Post {
...
},
...
};
repository.Setup(db => db.Posts).Returns(posts);
return repository;
}
... = code removed for the sake of brevity.
My intention is to use a new instance of InMemoryDbSet for each test. I thought that using the SetUp attribute would achieve this, but clearly it does not.
When I run all of the tests, the results are inconsistent because the tests do not appear to be isolated. One test will, for example, remove an element from the data store and assert that the count has been decremented, but according to the whim of the test runner, another test might have incremented the count, causing both tests to fail.
Am I approaching these tests in the correct way? How can I resolve this issue?
The package you are using for your InMemoryDbSet uses a static backing data structure, and so the data will persist across test runs. This is why you're seeing the inconsistent behavior. You can get around this by using another package (as you mention), or by passing a new HashSet to the constructor in every test so it doesn't use the static member.
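Applied to the helper from the question, that might look roughly like this (the constructor overload taking a backing set is the one described above; verify it against the package you use):
internal static Mock<IRepository> MockRepository()
{
    var repository = new Mock<IRepository>();

    // A fresh backing HashSet per call, so state can't leak between tests via the static default.
    var posts = new InMemoryDbSet<Post>(new HashSet<Post>())
    {
        new Post { /* properties as before */ },
    };

    repository.Setup(db => db.Posts).Returns(posts);
    return repository;
}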
As to the rest of your question, I think you're approaching the testing well. The only thing is that since all of your tests share the same set-up (in your SetUp method), they could influence each other. I.e., if you have a test that relies on having no Foo objects in the set and one that needs at least one, then you're either going to add one in SetUp and remove it in the test that doesn't need it, or vice versa. It could be clearer to have specific set-up procedures, as #BillSambrone mentioned.
As #PatrickQuirk pointed out, I think your problem is due to what InMemoryDbSet does under the covers.
Regarding the "Am I approaching this right ?" part :
If, as I suspect, your Repository exposes some kind of IDbSet, it's probably a leaky abstraction. The contract of an IDbSet is far too specific for what a typical Repository client wants to do with the data. Better to return an IEnumerable or some sort of read-only collection instead.
From what you describe, it seems that consumers of Posts will manipulate it as a read-write collection. This isn't a typical Repository implementation - you'd normally have separate methods such as Get(), Add(), etc. The actual internal data collection is never exposed, which means you can easily stub or mock out just the individual Repository operations you need and not fear the kind of issues you had with your test data.
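A repository contract along those lines might look like the following sketch (the method names are illustrative, not prescriptive):
public interface IPostRepository
{
    IEnumerable<Post> GetAll();   // read-only view; no IDbSet leaks out
    Post Get(int id);
    void Add(Post post);
    void Remove(Post post);
}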
Whatever is in your [SetUp] method will be called for each test. This is probably behavior that you don't want.
You can either place the code you have in the [SetUp] method inside each individual test, or you can create a separate private method within your unit test class that will spin up a freshly mocked DbSet for you to keep things DRY.
I'd like to start using unit tests, but I'm having a hard time understanding how I can use them with my current project.
My current project is an application which collects files into a 'Catalog'. A Catalog can then extract information from the files it contains such as thumbnails and other properties. Users can also tag the files with other custom meta data such as "Author" and "Notes". It could easily be compared to a photo album application like Picasa, or Adobe Lightroom.
I've separated the code to create and manipulate a Catalog into a separate DLL which I'd now like to test. However, the majority of my classes are never meant to be instantiated on their own. Instead everything happens through my Catalog class. For example there's no way I can test my File class on its own, as a File is only accessible through a Catalog.
As an alternative to unit tests, I think it would make more sense for me to write a test program that runs through a series of actions, including creating a catalog, re-opening the catalog that was created, and manipulating the contents of the catalog. See the code below.
//NOTE: The real version would have code to log the results and any exceptions thrown
//input data
string testCatalogALocation = @"C:\TestCatalogA";
string testCatalogBLocation = @"C:\TestCatalogB";
string testFileLocation = @"C:\testfile.jpg";
string testFileName = System.IO.Path.GetFileName(testFileLocation);
//Test creating catalogs
Catalog catAtemp = new Catalog(testCatalogALocation);
Catalog catBtemp = new Catalog(testCatalogBLocation);
//test opening catalogs
Catalog catA = Catalog.OpenCatalog(testCatalogALocation);
Catalog catB = Catalog.OpenCatalog(testCatalogBLocation);
using (FileStream fs = new FileStream(testFileLocation, FileMode.Open))
{
//test importing a file
catA.ImportFile(testFileName,fs);
}
//test retrieving a file
File testFile = catA.GetFile(System.IO.Path.GetFileName(testFileLocation));
//test copying between catalogs
catB.CopyFileTo(testFile);
//Clean Up after test
System.IO.Directory.Delete(testCatalogALocation, true);
System.IO.Directory.Delete(testCatalogBLocation, true);
First, am I missing something? Is there some way to unit test a program like this? Second, is there some way to create a procedural type test like the code above but be able to take advantage of the testing tools building into Visual Studio? Will a "Generic Test" in VS2010 allow me to do this?
Update
Thanks for all the responses, everyone. My classes do in fact implement a series of interfaces (here's a class diagram for anyone who is interested). Actually, I have more interfaces than I have classes; I just left the interfaces out of my example for the sake of simplicity.
Thanks for all the suggestions to use mocking. I'd heard the term in the past, but never really understood what a "mock" was until now. I understand how I could create a mock of my IFile interface, which represents a single file in a catalog. I also understand how I could create a mock version of my ICatalog interface to test how two catalogs interact.
Yet I don't understand how I can test my concrete ICatalog implementations, as they are strongly tied to their back-end data sources. Actually, the whole purpose of my Catalog classes is to read, write, and manipulate their external data/resources.
You ought to read about the SOLID code principles. In particular, the 'D' in SOLID stands for the Dependency Inversion Principle: the class you're trying to test shouldn't depend on other concrete classes and external implementations, but instead on interfaces and abstractions. You rely on an IoC (Inversion of Control) container (such as Unity, Ninject, or Castle Windsor) to dynamically inject the concrete dependency at runtime, but during unit testing you inject a mock/stub instead.
For instance, consider the following class:
public class ComplexAlgorithm
{
protected DatabaseAccessor _data;
public ComplexAlgorithm(DatabaseAccessor dataAccessor)
{
_data = dataAccessor;
}
public int RunAlgorithm()
{
// RunAlgorithm needs to call methods from DatabaseAccessor
}
}
The RunAlgorithm() method needs to hit the database (via DatabaseAccessor), making it difficult to test. So instead we change the DatabaseAccessor dependency into an interface.
public class ComplexAlgorithm
{
protected IDatabaseAccessor _data;
public ComplexAlgorithm(IDatabaseAccessor dataAccessor)
{
_data = dataAccessor;
}
// rest of class (snip)
}
Now ComplexAlgorithm depends on an interface IDatabaseAccessor which can easily be mocked for when we need to Unit test ComplexAlgorithm in isolation. For instance:
public class MyFakeDataAccessor : IDatabaseAccessor
{
public IList<Thing> GetThings()
{
// Return a fake/pretend list of things for testing
return new List<Thing>()
{
new Thing("Thing 1"),
new Thing("Thing 2"),
new Thing("Thing 3"),
new Thing("Thing 4")
};
}
// Other methods (snip)
}
[Test]
public void Should_Return_8_With_Four_Things_In_Database()
{
// Arrange
IDatabaseAccessor fakeData = new MyFakeDataAccessor();
ComplexAlgorithm algorithm = new ComplexAlgorithm(fakeData);
int expectedValue = 8;
// Act
int actualValue = algorithm.RunAlgorithm();
// Assert
Assert.AreEqual(expectedValue, actualValue);
}
We're essentially 'decoupling' the two classes from each other. Decoupling is another important software engineering principle for writing more maintainable and robust code.
This is really just the tip of the iceberg as far as Dependency Injection, SOLID, and decoupling go, but it's what you need in order to effectively unit test your code.
Here is a simple algorithm that can help get you started. There are other techniques to decouple code, but this can often get you pretty far, particularly if your code is not too large and deeply entrenched.
Identify the locations where you depend on external data/resources and determine whether you have classes that isolate each dependency.
If necessary, refactor to achieve the necessary insulation. This is the most challenging part to do safely, so focus on the lowest-risk changes first.
Extract interfaces for the classes that isolate external data.
When you construct your classes, pass in the external dependencies as interfaces rather than having the class instantiate them itself.
Create test implementations of your interfaces that don't depend on the external resources. This is also where you can add 'sensing' code for your tests to make sure the appropriate calls are being made. Mocking frameworks can be very helpful here, but it can be a good exercise to create the stub classes manually for a simple project, as it gives you a sense of what your test classes are doing. Manual stub classes typically set public properties to indicate when/how methods are called and have public properties to indicate how particular calls should behave (see the sketch after these steps).
Write tests that call methods on your classes, using the stubbed dependencies to sense whether the class is doing the right things in different cases. An easy way to start, if you already have functional code written, is to map out the different pathways and write tests that cover the different cases, asserting the behavior that currently occurs. These are known as characterization tests and they can give you the confidence to start refactoring your code, since now you know you're at least not changing the behavior you've already established.
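As an illustration of the 'sensing' stub mentioned above, a hand-written fake for a hypothetical storage interface behind the Catalog might look like this:
// ICatalogStorage is a hypothetical interface isolating the catalog's file-system dependency.
public class FakeCatalogStorage : ICatalogStorage
{
    public bool SaveWasCalled { get; private set; }
    public string LastSavedFileName { get; private set; }
    public bool ThrowOnSave { get; set; }   // lets a test force the failure path

    public void SaveFile(string fileName, Stream contents)
    {
        if (ThrowOnSave)
            throw new IOException("Simulated storage failure");

        SaveWasCalled = true;
        LastSavedFileName = fileName;
    }
}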
Best of luck. Writing good unit tests requires a change of perspective, which will develop naturally as you work to identify dependencies and create the necessary isolation for testing. At first, the code will feel uglier, with additional layers of indirection that were previously unnecessary, but as you learn various isolation techniques and refactor (which you can now do more easily, with tests to support it), you may find that things actually become cleaner and easier to understand.
This is a pure case where Dependency Injection plays a vital role.
As Shady suggests, read about mocking and stubbing. To achieve this, you should consider using a Dependency Injection container (such as Unity in .NET).
Also read about Dependency Injection here:
http://martinfowler.com/articles/injection.html
"the majority of my classes are never meant to be instantiated on their own"
This is where the D - the Design D - comes into TDD. It's bad design to have classes that are tightly coupled. That badness manifests itself immediately when you try to unit test such a class - and if you start with unit tests, you'll never find yourself in this situation. Writing testable code compels us to better design.
I'm sorry; this isn't an answer to your question, but I see others have already mentioned mocking and DI, and those answers are fine. But you put the TDD tag on this question, and this is the TDD answer to your question: don't put yourself in the situation of tightly coupled classes.
What you have now is Legacy Code. That is: code that has been implemented without tests. For your initial tests I would definitely test through the Catalog class until you can break all those dependencies. So your first set of tests will be integration/acceptance tests.
If you don't expect any behavior to change, then leave it at that; but if you do make a change, I suggest that you TDD the change and build up unit tests along with the changes.