Dependency Injection & its relationship with automated testing via an example - C#

Through SO, I found my way to this page: http://www.blackwasp.co.uk/DependencyInjection.aspx
There they provide a snippet of C# code to use as an example of code that could benefit from dependency injection:
public class PaymentTerms
{
    PaymentCalculator _calculator = new PaymentCalculator();

    public decimal Price { get; set; }
    public decimal Deposit { get; set; }
    public int Years { get; set; }

    public decimal GetMonthlyPayment()
    {
        return _calculator.GetMonthlyPayment(Price, Deposit, Years);
    }
}
public class PaymentCalculator
{
    public decimal GetMonthlyPayment(decimal Price, decimal Deposit, int Years)
    {
        decimal total = Price * (1 + Years * 0.1M);
        decimal monthly = (total - Deposit) / (Years * 12);
        return Math.Round(monthly, 2, MidpointRounding.AwayFromZero);
    }
}
They also include this quote:
One of the key problems with the above code is the instantiation of
the PaymentCalculator object from within the PaymentTerms class. As
the dependency is initialised within the containing class, the two
classes are tightly coupled. If, in the future, several types of
payment calculator are required, it will not be possible to integrate
them without modifying the PaymentTerms class. Similarly, if you wish
to use a different object during automated testing to isolate testing
of the PaymentTerms class, this cannot be introduced.
My question is about the statement in bold:
Did the author actually mean Unit Testing or is there something about automated testing that I'm missing?
If the author DID intend to write automated testing, how would modifying this class to use dependency injection aid in the process of automated testing?
In either case, is this only applicable when there are multiple types of payment calculators?
If so, is it typically worth implementing DI right from the start, even with no knowledge of requirements changing in the future? Obviously this requires some discretion that would be learned through experience, so I'm just trying to get a baseline onto which to build.

Did the author actually mean Unit Testing or is there something about
automated testing that I'm missing?
I read this to mean unit testing. You can run unit tests by hand or in an automated fashion if you have a continuous integration/build process.
If the author DID intend to write automated testing, how would
modifying this class to use dependency injection aid in the process of
automated testing?
The modification would help all testing, automated or not.
In either case, is this only applicable when there are multiple types
of payment calculators?
It can also come in handy if your injected class is interface-based and you'd like to introduce a proxy without having to change the client code.
If so, is it typically worth implementing DI right from the start,
even with no knowledge of requirements changing in the future?
Obviously this requires some discretion that would be learned through
experience, so I'm just trying to get a baseline onto which to build.
It can help from the start, if you have some understanding of how it works and what it's good for.
There's a benefit even if requirements don't change. Your apps will be better layered and based on interfaces for everything except value objects (immutable objects like Address and Phone that are just data and don't change). Those are both best practices, regardless of whether you use a DI engine or not.
UPDATE: Here's a bit more about the benefits of interface-based design and immutable value objects.
A value object is immutable: once you create it, you don't change its value. This means it's inherently thread-safe, so you can share it anywhere in your app. Examples would be Java's primitive wrappers (e.g. java.lang.Integer), a Money class, etc.
Let's say you needed a Person for your app. You might make it an immutable value object:
package model;

public class Person {
    private final String first;
    private final String last;

    public Person(String first, String last) {
        this.first = first;
        this.last = last;
    }

    // getters, equals, hashCode, and toString follow (no setters - it's immutable)
}
You'd like to persist Person, so you'll need a data access object (DAO) to perform CRUD operations. Start with an interface, because the implementations could depend on how you choose to persist objects.
package persistence;

import java.util.List;

import model.Person;

public interface PersonDao {
    List<Person> find();
    Person find(Long id);
    Long save(Person p);
    void update(Person p);
    void delete(Person p);
}
You can ask the DI engine to inject a particular implementation for that interface into any service that needs to persist Person instances.
What if you want transactions? Easy. You can use an aspect to advise your service methods. One way to handle transactions is to use "throws advice" to open the transaction on entering the method, commit it if the method succeeds, and roll it back if it throws an exception. The client code need not know that there's an aspect handling transactions; all it knows about is the DAO interface.
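The same effect can be sketched without AOP, as a decorator. Here is a rough C# version of the idea (C# being the question's language), assuming a hypothetical IPersonDao interface analogous to the Java one above; the point is only that the client still sees nothing but the DAO interface:

public class TransactionalPersonDao : IPersonDao
{
    private readonly IPersonDao _inner;

    public TransactionalPersonDao(IPersonDao inner)
    {
        _inner = inner;
    }

    public long Save(Person p)
    {
        // Open a transaction, commit on success; an exception skips Complete() and rolls back.
        using (var scope = new System.Transactions.TransactionScope())
        {
            long id = _inner.Save(p);
            scope.Complete();
            return id;
        }
    }

    // Find, Update and Delete would be wrapped the same way...
}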

The author of the BlackWasp article means automated Unit Testing - that would have been clear if you'd followed its automated testing link, which leads to a page entitled "Creating Unit Tests" that begins "The third part of the Automated Unit Testing tutorial examines ...".
Unit Testing advocates generally love Dependency Injection because it allows them to see inside the thing they're testing. Thus, if you know that PaymentTerms.GetMonthlyPayment() should call PaymentCalculator.GetMonthlyPayment() to perform the calculation, you can replace the calculator with one of your own construction that allows you to see that it has, indeed, been called. Not because you want to change the calculation of m=((p*(1+y*.1))-d)/(y*12) to 5, but because the application that uses PaymentTerms might someday want to change how the payment is calculated, and so the tester wants to ensure that the calculator is indeed called.
This use of Dependency Injection doesn't make Functional Testing, either automated or manual, any easier or any better, because good functional tests use as much of the actual application as possible. For a functional test, you don't care that the PaymentCalculator is called, you care that the application calculates the correct payment as described by the business requirements. That entails either calculating the payment separately in the test and comparing the result, or supplying known loan terms and checking for the known payment value. Neither of those are aided by Dependency Injection.
There's a completely different discussion to be had about whether Dependency Injection is a Good or Bad Thing from a design and programming perspective. But you didn't ask for that, and I'm not going to lob any hand grenades in this q&a.
You also asked in a comment "This is the heart of what I'm trying to understand. The piece I'm still struggling with is why does it need to be a FakePaymentCalculator? Why not just create an instance of a real, legitimate PaymentCalculator and test with that?", and the answer is really very simple: There is no reason to do so for this example, because the object being faked ("mocked" is the more common term) is extremely lightweight and simple. But imagine that the PaymentCalculator object stored its calculation rules in a database somehow, and that the rules might vary depending on when the calculation was being performed, or on the length of the loan, etc. A unit test would now require standing up a database server, creating its schema, populating its rules, etc. For such a more-realistic example, having a FakePaymentCalculator() might make the difference between a test you run every time you compile the code and a test you run as rarely as possible.

If the author DID intend to write automated testing, how would modifying this class to use dependency injection aid in the process of automated testing?
One of the biggest benefits would be to be able to substitute the PaymentCalculator with a mock/fake implementation during the test.
If PaymentTerms was implemented like this:
public class PaymentTerms
{
    IPaymentCalculator _calculator;

    public PaymentTerms(IPaymentCalculator calculator)
    {
        this._calculator = calculator;
    }

    ...
}
(Where IPaymentCalculator is the interface declaring the services of the PaymentCalculator class.)
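(The article doesn't show the interface itself; a minimal version would simply mirror the calculator's public method, something like:)

public interface IPaymentCalculator
{
    decimal GetMonthlyPayment(decimal price, decimal deposit, int years);
}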
This way, in a unit test, you would be able to do this:
IPaymentCalculator fakeCalculator = new FakePaymentCalculator();
PaymentTerms paymentTerms = new PaymentTerms(fakeCalculator);

// Test the behaviour of PaymentTerms, which uses a fake in the test.
With the PaymentCalculator type hardcoded into PaymentTerms, there would be no way to do this.
UPDATE: You asked in a comment:
Hypothetically speaking, if the PaymentCalculator class had some instance properties, the person developing the unit test would probably create the FakePaymentCalculator class with a constructor that always used the same values for the instance properties, right? So how then are permutations tested? Or is the idea that the unit test for PaymentTerms populates the properties for FakePaymentCalculator and tests several permutations?
I don't think you have to test any permutations. In this specific case, the only task of PaymentTerms.GetMonthlyPayment() is to call _calculator.GetMonthlyPayment() with the specified parameters. And that is the only thing you need to unit test when you write the unit test for that method.
For example, you could do the following:
public class FakePaymentCalculator : IPaymentCalculator
{
    public decimal Price { get; set; }
    public decimal Deposit { get; set; }
    public int Years { get; set; }

    public decimal GetMonthlyPayment(decimal price, decimal deposit, int years)
    {
        // Record the arguments so the test can inspect them later.
        this.Price = price;
        this.Deposit = deposit;
        this.Years = years;
        return 0m;
    }
}
And in the unit test, you could do this:
var fakeCalculator = new FakePaymentCalculator();
PaymentTerms paymentTerms = new PaymentTerms(fakeCalculator)
{
    Price = 1,
    Deposit = 2,
    Years = 3
};

// Calling the method which we are testing.
paymentTerms.GetMonthlyPayment();

// Check that the calculator's method has been called with the correct parameters.
Assert.AreEqual(1, fakeCalculator.Price);
Assert.AreEqual(2, fakeCalculator.Deposit);
Assert.AreEqual(3, fakeCalculator.Years);
This way we test the one thing that is the responsibility of PaymentTerms.GetMonthlyPayment(): calling the calculator's GetMonthlyPayment() method.
However, for this kind of test, using a mock would be much simpler than implementing your own fake. If you're interested, I recommend trying out Moq, which is a really simple yet useful mocking library for .NET.
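For example, a rough sketch of the same test with Moq (using the IPaymentCalculator interface assumed above) could look like this:

// Arrange: a mock calculator and a PaymentTerms that uses it.
var calculatorMock = new Mock<IPaymentCalculator>();
var paymentTerms = new PaymentTerms(calculatorMock.Object)
{
    Price = 1,
    Deposit = 2,
    Years = 3
};

// Act
paymentTerms.GetMonthlyPayment();

// Assert: PaymentTerms delegated to the calculator exactly once, with its own property values.
calculatorMock.Verify(c => c.GetMonthlyPayment(1m, 2m, 3m), Times.Once());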

Related

How can I decouple system design from unit tests (as suggested by Uncle Bob)?

Uncle Bob (Bob Martin) mentioned in his blog that in order to decouple our system's design from unit tests, we should not expose our concrete classes directly to the unit tests. Rather, we should just expose an API that represents our system, and then use this API for unit testing.
A rough representation of Uncle Bob's suggestion
According to my understanding, I think that by an API, he meant an interface. So the unit tests should be interacting with interfaces instead of real classes.
My question is this: If we are exposing only interfaces to our unit tests, how do these unit tests get access to the actual implementations to verify their behavior? Should we use DI in our tests to inject the real classes at run time? Is there any way for the code below to work?
ILoanEligibility.cs

public interface ILoanEligibility
{
    bool HasCorrectType(string loanType);
}

LoanEligibility.cs

public class LoanEligibility : ILoanEligibility
{
    public bool HasCorrectType(string loanType)
    {
        if (loanType.Equals("Personal"))
        {
            return true;
        }
        return false;
    }
}
Unit Test

[TestClass]
public class LoanEligibilityTest
{
    ILoanEligibility _loanEligibility;

    [TestMethod]
    public void TestLoanTypePersonal()
    {
        //Arrange
        string loanType = "Personal";

        //Act
        bool expected = _loanEligibility.HasCorrectType(loanType);

        //Assert
        Assert.IsTrue(expected);
    }
}
The above unit test tries to see if LoanEligibility.HasCorrectType() method works properly for "Personal" type. Obviously, the test will fail as we are not using a concrete class, but rather an interface, in accordance with Uncle Bob's suggestion (if I understood it correctly).
How do I make this test pass? Any suggestions would be helpful.
Edit 1
Thank you #bleepzter for suggesting Moq. Following is the modified unit test class, testing both valid and invalid cases.
[TestClass]
public class LoanEligibilityTest
{
    private Mock<ILoanEligibility> _loanEligibility;

    [TestMethod]
    public void TestLoanTypePersonal()
    {
        SetMockLoanEligibility();

        //Arrange
        string loanType = "Personal";

        //Act
        bool expected = _loanEligibility.Object.HasCorrectType(loanType);

        //Assert
        Assert.IsTrue(expected);
    }

    [TestMethod]
    public void TestLoanTypeInvalid()
    {
        SetMockLoanEligibility();

        //Arrange
        string loanType = "House";

        //Act
        bool expected = _loanEligibility.Object.HasCorrectType(loanType);

        //Assert
        Assert.IsFalse(expected);
    }

    public void SetMockLoanEligibility()
    {
        _loanEligibility = new Mock<ILoanEligibility>();
        _loanEligibility.Setup(loanElg => loanElg.HasCorrectType("Personal"))
                        .Returns(true);
    }
}
But now I am confused. Since we are not really testing our concrete class but rather its mock, are these unit tests really telling us anything, other than probably that our mocks are working fine?
To answer your question - you would use a mocking framework such as Moq.
The overall idea is that interfaces or abstract classes provide "contracts" - a set of standardized APIs which you can code against.
The implementation of these interfaces or abstract classes can be unit tested individually. This is not a problem, and in fact - that is what you should do on a regular basis.
However, the complexity arises when those implementations are dependencies of other objects. In that regard, to unit test such a complex object you first have to construct the implementation of the dependency, then plug that dependency into the instance of whatever you are testing.
This process becomes quite burdensome because, as the dependency chain grows, the variability of how the code behaves can become quite complex. To simplify the tests, and to be able to unit test multiple conditions in complex dependency chains, we use mocking frameworks.
What the mock provides is a way to "fake" an implementation with specific parameters (input/output, whatever they may be) and plug those fakes into the dependency graph. And while yes - you can mock concrete objects - it is a lot easier to mock contracts defined by an interface or an abstract class.
A decent starting point to understand those concepts is the moq framework documentation. https://github.com/Moq/moq4/wiki/Quickstart
Edit:
I see there is a confusion about what this means so I wanted to elaborate.
Common design principles (known as S.O.L.I.D.) dictate that an object should do one thing, one thing only, and do it well. This is known as the Single Responsibility Principle.
Another core concept is that an object should depend upon abstractions and not concrete implementations. This concept is known as the Dependency Inversion Principle.
Next - the Liskov Substitution Principle dictates that objects in a program should be replaceable with instances of their sub-types without altering the correctness of the program. In other words - if your objects depend on abstractions, then you can provide different implementations (taking advantage of inheritance) for those abstractions without fundamentally altering the behavior of the application.
Which also neatly jumps into the Open/Closed principle. IE - software entities should be open for extension, but closed for modification. (Think of providing different implementations for those abstractions).
Finally - we have the Inversion of Control principle - a complex object should not be responsible for creating its own dependencies; something else should be responsible for creating them, and they should be "injected" via constructor, method, or property injection wherever they are needed.
So how does this apply in "decoupling system design" from unit tests?
The answer is very simple.
Suppose we are writing a software that models cars.
A car has a body and wheels, and all sorts of other internal components.
For simplicity we will say that an object of type Car has a constructor that takes four wheel objects as parameters:
public class Wheel {
    public double Radius { get; set; }
    public double RPM { get; set; }

    public void Spin() { ... }
    public double GetLinearVelocity() { ... }
}

public class LinearMovement {
    public double Velocity { get; set; }
}
public class Car {
    private Wheel wheelOne;
    private Wheel wheelTwo;
    private Wheel wheelThree;
    private Wheel wheelFour;

    public Car(Wheel one, Wheel two, Wheel three, Wheel four) {
        wheelOne = one;
        wheelTwo = two;
        wheelThree = three;
        wheelFour = four;
    }

    public LinearMovement Move() {
        wheelOne.Spin();
        wheelTwo.Spin();
        wheelThree.Spin();
        wheelFour.Spin();

        double speedOne = wheelOne.GetLinearVelocity();
        double speedTwo = wheelTwo.GetLinearVelocity();
        double speedThree = wheelThree.GetLinearVelocity();
        double speedFour = wheelFour.GetLinearVelocity();

        return new LinearMovement {
            Velocity = (speedOne + speedTwo + speedThree + speedFour) / 4
        };
    }
}
The ability of a car to move is governed by the kind of wheels the car has. A wheel can have soft rubber, thereby gluing the car to the road around corners, or it can be very narrow for deep snow but only very slow speeds.
Therefore, the idea of a wheel becomes an abstraction. There are all sorts of wheels out there, and a concrete implementation of a wheel cannot possibly cover all of them. Enter the Dependency Inversion Principle.
We make the wheel an abstraction, using an IWheel interface to declare the bare minimum functionality any wheel must provide in order to work with our car. (In our case it should at least spin...)
public interface IWheel {
    double Radius { get; set; }
    double RPM { get; set; }

    void Spin();
    double GetLinearVelocity();
}

public class BasicWheel : IWheel {
    public double Radius { get; set; }
    public double RPM { get; set; }

    public void Spin() { ... }
    public double GetLinearVelocity() { ... }
}

public class Car {
    ...

    public Car(IWheel one, IWheel two, IWheel three, IWheel four) {
        ...
    }

    public LinearMovement Move() {
        wheelOne.Spin();
        wheelTwo.Spin();
        wheelThree.Spin();
        wheelFour.Spin();

        double speedOne = wheelOne.GetLinearVelocity();
        double speedTwo = wheelTwo.GetLinearVelocity();
        double speedThree = wheelThree.GetLinearVelocity();
        double speedFour = wheelFour.GetLinearVelocity();

        return new LinearMovement {
            Velocity = (speedOne + speedTwo + speedThree + speedFour) / 4
        };
    }
}
So that's great: we have an abstraction that defines the basic functionality of a wheel, and we coded the car against that abstraction. Nothing changed in the code that governs how the car moves - thereby satisfying the Liskov Substitution Principle.
So now, if instead of creating a car with basic wheels we create a car with RacingPerformanceWheels, the code that governs how the car moves stays the same. This satisfies the Open/Closed Principle.
However, it poses another problem. The actual velocity of the car depends on the average linear velocity of all 4 wheels. So depending on the wheel, the car will behave differently.
How do we test the behavior of the car given that there could be a million different types of wheels out there?!?
Enter the mocking framework. Because the movement of the car depends on the abstract notion of a wheel defined by the interface IWheel, we can now mock different implementations of such a wheel, each with predefined parameters.
The concrete wheel implementations/objects themselves (BasicWheel, RacingPerformanceWheel, etc.) should be unit tested without mocks. The reason is that they do not have dependencies of their own. If a wheel had a dependency in its constructor, then mocking should be used for that dependency.
To test the car object - mocks should be used to describe each IWheel instance (dependency) that is passed to the constructor of the car. This provides a couple of advantages - decoupling the overall system design from unit tests:
1) We don't care what wheels there are in the system. There could be 1 million of them.
2) We care that for specific wheel dimensions, at a given angular velocity (RPM), the car should achieve a very specific linear velocity.
The mock of IWheel for the requirements of #2 would tell us if our vehicle is working properly, and if not - we could change our code to correct the mistake.
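As a rough sketch (using Moq here; Rhino Mocks or any other framework would work the same way), a test of Car against mocked wheels might look like this:

// Four identical mocked wheels, each reporting a linear velocity of 10.
var wheel = new Mock<IWheel>();
wheel.Setup(w => w.GetLinearVelocity()).Returns(10.0);

var car = new Car(wheel.Object, wheel.Object, wheel.Object, wheel.Object);

var movement = car.Move();

// The average of four wheels at 10 should be 10.
Assert.AreEqual(10.0, movement.Velocity);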
Rather, we should just expose an API that represents our system, and
then use this API for unit testing.
Correct
According to my understanding, I think that by an API, he meant an
interface. So the unit tests should be interacting with interfaces
instead of real classes.
Here you misunderstood the first statement.
First, in unit tests you need to test the actual implementation to verify its behaviour.
So in your unit tests you still instantiate the actual classes, but you allow yourself to use only the methods and types that consumers of your API have access to.
In your particular example
[TestClass]
public class LoanEligibilityTest
{
    [TestMethod]
    public void TestLoanTypePersonal()
    {
        //Arrange
        ILoanEligibility loanEligibility = new LoanEligibility(); // actual implementation

        string loanType = "Personal";

        //Act
        bool expected = loanEligibility.HasCorrectType(loanType);

        //Assert
        Assert.IsTrue(expected);
    }
}
Suggestion: with the Arrange-Act-Assert approach, the "Act" section should use only the methods and types provided by the API.
If we are exposing only interfaces to our unit tests, how do these unit tests get access to the actual implementations to verify their behavior?
Any way you like.
One approach I've found satisfactory is to write the checks in an abstract class, and pass in an instance of the system under test from the constructor of an otherwise empty class that extends the abstract class.
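A minimal sketch of that pattern with MSTest might look like this (names are illustrative; the abstract class holds the checks, the derived class only supplies the implementation):

public abstract class LoanEligibilityContract
{
    private readonly ILoanEligibility _sut;

    // The concrete test class passes in the system under test.
    protected LoanEligibilityContract(ILoanEligibility sut)
    {
        _sut = sut;
    }

    [TestMethod]
    public void PersonalLoanTypeIsAccepted()
    {
        Assert.IsTrue(_sut.HasCorrectType("Personal"));
    }
}

// The "otherwise empty" class that binds the checks to a real implementation.
[TestClass]
public class LoanEligibilityTests : LoanEligibilityContract
{
    public LoanEligibilityTests() : base(new LoanEligibility()) { }
}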
In a lot of ways, test frameworks are ... well... "frameworks" (obviously)... and therefore it can make sense to think about your testable components as something to be injected into the framework. See Mark Seemann for an exploration of what a DI friendly framework might look like, and decide if you think those ideas are reasonable for your test suites.
You can do test-first in this style, but I'm going to admit that some of the moves that separate the concerns are going to feel a little bit contrived -- introducing interfaces early, before you really understand what API is going to be comfortable to use, is perhaps dubious.
(One answer might be to take time out to spike the interface, before getting invested in writing checks for the implementation.)

What should you do about nested ViewModels when you are unit testing?

I was working on creating some unit tests for my ViewModels in my project. I didn't really have a problem as most of them were very simple, but ran into an issue when I had a new (unfinished) ViewModel inside of my other ViewModel.
public class OrderViewModel : ViewModelBase
{
    public OrderViewModel(IDataService dataService, int orderId)
    {
        // ...
        Payments = new ObservableCollection<PaymentViewModel>();
    }

    public ObservableCollection<PaymentViewModel> Payments { get; private set; }

    public OrderStatus Status { ... } //INPC

    public void AddPayment()
    {
        var vm = new PaymentViewModel();
        Payments.Add(vm);
        // TODO: Subscribe to PaymentViewModel.OnPropertyChanged so that
        // if the payment is valid, we update the Status to ready.
    }
}
I want to create a unit test verifying that when any PaymentViewModel's IsValid property changes and all of them are true, Status becomes OrderStatus.Ready. I can implement the class, but what worries me is that my unit test will break if the problem is in PaymentViewModel.
I'm not sure if this is OK or not, but it just feels like I should not have to worry about whether PaymentViewModel operates properly in order for my unit test for OrderViewModel to be correct.
public void GivenPaymentIsValidChangesAndAllPaymentsAreValid_ThenStatusIsReady()
{
    var vm = new OrderViewModel();
    vm.AddPayment();
    vm.AddPayment();

    foreach (var payment in vm.Payments)
    {
        Assert.AreNotEqual(vm.Status, OrderStatus.Ready);
        MakePaymentValid(payment);
    }

    // Now all payments should be valid, so the order status should be ready.
    Assert.AreEqual(vm.Status, OrderStatus.Ready);
}
The problem is, how do I write MakePaymentValid in such a way that I guarantee that the PaymentViewModel's behavior will not negatively impact my unit test? Because if it does, then my unit test will fail based on another piece of code not working, rather than my code. Or, should it fail if PaymentViewModel is wrong as well? I am just torn in that I don't think that my tests for OrderViewModel should fail if PaymentViewModel has a bug.
I realize that I could always create an interface like I do with IDataService, but it seems to me that it is a bit of overkill to have every single ViewModel have an interface and be injected in somehow?
When it comes to unit testing you absolutely should separate out your tests from any external dependencies. Keep in mind that this does not mean you must pass in some interface; you will run into situations where you have a specific class being utilized, whether that class is in or out of your control.
Imagine that instead of your example, you were relying on DateTime.Now. Some would argue to abstract it away into some sort of interface IDateTimeService, which could work. Alternatively, you could take advantage of Microsoft Fakes: Shims & Stubs.
Microsoft Fakes will allow you to create Shim* instances. There is a lot to go over on this subject, but the image Microsoft provides illustrates that the usage of Fakes goes beyond classes out of your control (it includes components within your control as well).
Notice how the component you are testing (OrderViewModel) should be isolated from System.dll (i.e. DateTime.Now), other components (PaymentViewModel), and external items as well (if you relied on a Database or Web Service). The Shim is for faking classes, whereas the Stub is for faking (mocking) interfaces.
Once you add a Fakes assembly, simply use the ShimPaymentViewModel class to provide the behavior you expect that it should. If, for whatever reason, the real PaymentViewModel class misbehaves and your application crashes you can at least be assured that the problem was not due to the OrderViewModel. Of course, to avoid that you should include some unit tests for PaymentViewModel to ensure that it behaves properly regardless of what other classes are utilizing it or how they are utilizing it.
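For illustration, a shim-based test body might look roughly like this (the DateTime shim is the canonical example from the Fakes documentation; the ShimPaymentViewModel line is hypothetical and depends on the Fakes assembly you generate for your own project):

using (ShimsContext.Create())
{
    // Detour a framework call: DateTime.Now always returns a fixed date within this context.
    System.Fakes.ShimDateTime.NowGet = () => new DateTime(2024, 1, 1);

    // Hypothetical: detour a member of your own class via its generated shim, e.g.
    // ViewModels.Fakes.ShimPaymentViewModel.AllInstances.IsValidGet = vm => true;

    // ... construct OrderViewModel and exercise it in isolation ...
}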
TL;DR;
Yes, completely isolate your component when it comes to testing by taking advantage of Microsoft Fakes. Oh, and Microsoft Fakes plays nicely with other frameworks, so don't feel that by using it you are forgoing other options; it works in conjunction with them.

Handling Multiple Mocks and Asserts in Unit Tests

I currently have a repository that is using Entity Framework for my CRUD operations.
This is injected into my service that needs to use this repo.
Using AutoMapper, I project the entity Model onto a Poco model and the poco gets returned by the service.
If my objects have multiple properties, what is a correct way to set up and then assert my properties?
If my service has multiple repo dependencies, what is the correct way to set up all my mocks? A class [SetUp] where all the mocks and objects are configured for these test fixtures?
I want to avoid having 10 tests where each test has 50 asserts on properties and dozens of mocks set up. This makes maintainability and readability difficult.
I have read Art of Unit Testing and did not discover any suggestions how to handle this case.
The tooling I am using is Rhino Mocks and NUnit.
I also found this on SO but it doesn't answer my question: Correctly Unit Test Service / Repository Interaction
Here is a sample that expresses what I am describing:
public void Save_ReturnSavedDocument()
{
    //Simulate DB object
    var repoResult = new EntityModel.Document()
    {
        DocumentId = 2,
        Message = "TestMessage1",
        Name = "Name1",
        Email = "Email1",
        Comment = "Comment1"
    };

    //Create mocks of repo methods - might have many dependencies
    var documentRepository = MockRepository.GenerateStub<IDocumentRepository>();
    documentRepository.Stub(m => m.Get()).IgnoreArguments().Return(new List<EntityModel.Document>()
    {
        repoResult
    }.AsQueryable());
    documentRepository.Stub(a => a.Save(null, null)).IgnoreArguments().Return(repoResult);

    //Instantiate service and inject repo
    var documentService = new DocumentService(documentRepository);
    var savedDocument = documentService.Save(new Models.Document()
    {
        ID = 0,
        DocumentTypeId = 1,
        Message = "TestMessage1"
    });

    //Assert that properties are correctly mapped after save
    Assert.AreEqual(repoResult.Message, savedDocument.Message);
    Assert.AreEqual(repoResult.DocumentId, savedDocument.DocumentId);
    Assert.AreEqual(repoResult.Name, savedDocument.Name);
    Assert.AreEqual(repoResult.Email, savedDocument.Email);
    Assert.AreEqual(repoResult.Comment, savedDocument.Comment);
    //Many more properties here
}
First of all, each test should only have one assertion (unless the others merely validate the real one); e.g. if you want to assert that all elements of a list are distinct, you may first want to assert that the list is not empty, otherwise you may get a false positive. In other cases there should be only one assert per test. Why? If the test fails, its name tells you exactly what is wrong. If you have multiple asserts and the first one fails, you don't know whether the rest were OK; all you know then is "something went wrong".
You say you don't want to setup all mocks/stubs in 10 tests. This is why most frameworks offer you a Setup method which runs before each test. This is where you can put most of your mocks configuration in one place and reuse it. In NUnit you just create a method and decorate it with a [SetUp] attribute.
If you want to test a method with different values of a parameter you can use NUnit's [TestCase] attributes. This is very elegant and you don't have to create multiple identical tests.
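For instance, a parameterised test might look like this (the Add method here is just a hypothetical example, not from your code):

[TestCase(1, 2, 3)]
[TestCase(-1, 1, 0)]
[TestCase(0, 0, 0)]
public void Add_ReturnsSumOfArguments(int a, int b, int expected)
{
    // One test method, three test cases reported individually by NUnit.
    Assert.AreEqual(expected, Calculator.Add(a, b));
}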
Now let's talk about the useful tools.
AutoFixture - this is an amazing and very powerful tool that allows you to create an object of a class which requires multiple dependencies. It sets up the dependencies with dummy mocks automatically, and allows you to manually set up only the ones you need in a particular test. Say you need to create a mock for a UnitOfWork which takes 10 repositories as dependencies, but in your test you only need to set up one of them. AutoFixture allows you to create that UnitOfWork and set up that one particular repository mock (or more if you need to); the rest of the dependencies will be set up automatically with dummy mocks. This saves you a huge amount of useless code. It is a little bit like an IoC container for your tests.
It can also generate fake objects with random data for you, so e.g. the whole initialization of EntityModel.Document becomes just one line:
var repoResult = _fixture.Create<EntityModel.Document>();
Especially take a look at Create, Freeze, and AutoMoqCustomization.
Here you will find my answer explaining how to use AutoFixture.
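A rough sketch of the idea (assuming the AutoFixture.AutoMoq glue package; there is an equivalent customization for Rhino Mocks):

var fixture = new Fixture();
fixture.Customize(new AutoMoqCustomization());

// Freeze the repository mock so the same instance is injected wherever IDocumentRepository is needed.
var documentRepository = fixture.Freeze<Mock<IDocumentRepository>>();

// A fully populated entity with random data - one line instead of a long initializer.
var repoResult = fixture.Create<EntityModel.Document>();

// The service is built with all of its constructor dependencies supplied automatically.
var documentService = fixture.Create<DocumentService>();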
SemanticComparison Tutorial - this is what will help you avoid multiple assertions while comparing properties of objects of different types. If the properties have the same names it will do it almost automatically; if not, you can define the mappings. It will also tell you exactly which properties do not match and show their values.
Fluent Assertions - this just provides a nicer way to assert stuff.
Instead of
Assert.AreEqual(repoResult.Message, savedDocument.Message);
you can write
savedDocument.Message.Should().Be(repoResult.Message);
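It can also compare whole object graphs, which collapses a long list of per-property asserts into one line (recent versions of the library; ExcludingMissingMembers ignores properties that exist on only one of the two types):

savedDocument.Should().BeEquivalentTo(repoResult, options => options.ExcludingMissingMembers());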
To sum up: these tools will help you create your tests with much less code and will make them much more readable. It takes time to get to know them well, especially AutoFixture, but when you do, they become the first things you add to your test projects - believe me :). By the way, they are all available from NuGet.
One more tip: if you have problems with testing a class, it usually indicates a bad architecture. The solution usually is to extract smaller classes from the problematic class (Single Responsibility Principle). Then you can easily test the small classes for business logic, and easily test the original class for its interactions with them.
Consider using anonymous types:

public void Save_ReturnSavedDocument()
{
    // (unmodified code)...

    //Assert that properties are correctly mapped after save
    Assert.AreEqual(
        new
        {
            repoResult.Message,
            repoResult.DocumentId,
            repoResult.Name,
            repoResult.Email,
            repoResult.Comment,
        },
        new
        {
            savedDocument.Message,
            savedDocument.DocumentId,
            savedDocument.Name,
            savedDocument.Email,
            savedDocument.Comment,
        });
}
There is one thing to look out for: nullable types (e.g. int?) and properties that might have slightly different types (float vs double) - but you can work around this by casting properties to specific types (e.g. (int?)repoResult.DocumentId).
Another option would be to create a custom assert class/method(s).
Basically, the trick is to push as much clutter as you can outside of the unit tests, so that only the behaviour that is to be tested remains.
Some ways to do that:

1. Don't declare instances of your model/POCO classes inside each test, but rather use a static TestData class that exposes these instances as properties. Usually these instances are useful for more than one test as well. For added robustness, have the properties on the TestData class create and return a new object instance every time they're accessed, so that one unit test cannot affect the next by modifying the test data.

2. On your test class, declare a helper method that accepts the (usually mocked) repositories and returns the system-under-test (or "SUT", i.e. your service). This is mainly useful in situations where configuring the SUT takes 2 or more statements, since it tidies up your test code.

3. As an alternative to 2, have your test class expose properties for each of the mocked repositories, so that you don't need to declare these in your unit tests; you can even pre-initialize them with a default behaviour to reduce the configuration per unit test even further. The helper method that returns the SUT then doesn't take the mocked repositories as arguments, but rather constructs the SUT using the properties. You might want to reinitialize each repository property on each [TestInitialize].

4. To reduce the clutter of comparing each property of your POCO with the corresponding property on the model object, declare a helper method on your test class that does this for you (i.e. void AssertPocoEqualsModel(Poco p, Model m)). Again, this removes some clutter and you get the reusability for free.

5. Or, as an alternative to 4, don't compare all properties in every unit test, but rather test the mapping code in only one place with a separate set of unit tests. This has the added benefit that, should the mapping ever include new properties or change in any other way, you don't have to update 100-odd unit tests. When not testing the property mappings, you should just verify that the SUT returns the correct object instances (i.e. based on Id or Name), and that just the properties that might be changed (by the business logic being currently tested) contain the correct values (such as an order total).
Personally, I prefer 5 because of its maintainability, but this isn't always possible and then 4 is usually a viable alternative.
Your test code would then look like this (unverified, just for demonstration purposes):
[TestClass]
public class DocumentServiceTest
{
    private IDocumentRepository DocumentRepositoryMock { get; set; }

    [TestInitialize]
    public void Initialize()
    {
        DocumentRepositoryMock = MockRepository.GenerateStub<IDocumentRepository>();
    }

    [TestMethod]
    public void Save_ReturnSavedDocument()
    {
        //Arrange
        var repoResult = TestData.AcmeDocumentEntity;
        DocumentRepositoryMock
            .Stub(m => m.Get())
            .IgnoreArguments()
            .Return(new List<EntityModel.Document>() { repoResult }.AsQueryable());
        DocumentRepositoryMock
            .Stub(a => a.Save(null, null))
            .IgnoreArguments()
            .Return(repoResult);

        //Act
        var documentService = CreateDocumentService();
        var savedDocument = documentService.Save(TestData.AcmeDocumentModel);

        //Assert that properties are correctly mapped after save
        AssertEntityEqualsModel(repoResult, savedDocument);
    }

    //Helpers
    private DocumentService CreateDocumentService()
    {
        return new DocumentService(DocumentRepositoryMock);
    }

    private void AssertEntityEqualsModel(EntityModel.Document entityDoc, Models.Document modelDoc)
    {
        Assert.AreEqual(entityDoc.Message, modelDoc.Message);
        Assert.AreEqual(entityDoc.DocumentId, modelDoc.DocumentId);
        //...
    }
}
public static class TestData
{
    public static EntityModel.Document AcmeDocumentEntity
    {
        get
        {
            //Note that a new instance is returned on each invocation:
            return new EntityModel.Document()
            {
                DocumentId = 2,
                Message = "TestMessage1",
                //...
            };
        }
    }

    public static Models.Document AcmeDocumentModel
    {
        get { /* etc. */ }
    }
}
In general, if you're having a hard time creating a concise test, you're testing the wrong thing or the code you're testing has too many responsibilities (in my experience).
In this specific case, it looks like you're testing the wrong thing. If your repo is using Entity Framework, you're getting the same object back that you're sending in; EF just updates the Id for new objects and any timestamp fields you might have.
Also, if you can't get one of your asserts to fail without a second assert failing, then you don't need one of them. Is it really possible for "Name" to come back OK but for "Email" to fail? If so, they should be in separate tests.
Finally, trying to do some TDD might help. Comment out all the code in your service's Save. Then write a test that fails. Then uncomment only enough code to make your test pass. Then write your next failing test. Can't write a test that fails? Then you're done.

How can I use unit testing when classes depend on one another or external data?

I'd like to start using unit tests, but I'm having a hard time understanding how I can use them with my current project.
My current project is an application which collects files into a 'Catalog'. A Catalog can then extract information from the files it contains such as thumbnails and other properties. Users can also tag the files with other custom meta data such as "Author" and "Notes". It could easily be compared to a photo album application like Picasa, or Adobe Lightroom.
I've separated the code to create and manipulate a Catalog into a separate DLL which I'd now like to test. However, the majority of my classes are never meant to be instantiated on their own. Instead everything happens through my Catalog class. For example there's no way I can test my File class on its own, as a File is only accessible through a Catalog.
As an alternative to unit tests, I think it would make more sense for me to write a test program that runs through a series of actions including creating a catalog, re-opening the catalog that was created, and manipulating the contents of the catalog. See the code below.
//NOTE: The real version would have code to log the results and any exceptions thrown

//Input data
string testCatalogALocation = @"C:\TestCatalogA";
string testCatalogBLocation = @"C:\TestCatalogB";
string testFileLocation = @"C:\testfile.jpg";
string testFileName = System.IO.Path.GetFileName(testFileLocation);

//Test creating catalogs
Catalog catAtemp = new Catalog(testCatalogALocation);
Catalog catBtemp = new Catalog(testCatalogBLocation);

//Test opening catalogs
Catalog catA = Catalog.OpenCatalog(testCatalogALocation);
Catalog catB = Catalog.OpenCatalog(testCatalogBLocation);

using (FileStream fs = new FileStream(testFileLocation, FileMode.Open))
{
    //Test importing a file
    catA.ImportFile(testFileName, fs);
}

//Test retrieving a file
File testFile = catA.GetFile(System.IO.Path.GetFileName(testFileLocation));

//Test copying between catalogs
catB.CopyFileTo(testFile);

//Clean up after the test
System.IO.Directory.Delete(testCatalogALocation);
System.IO.Directory.Delete(testCatalogBLocation);
First, am I missing something? Is there some way to unit test a program like this? Second, is there some way to create a procedural-type test like the code above but still take advantage of the testing tools built into Visual Studio? Will a "Generic Test" in VS2010 allow me to do this?
Update
Thanks for all the responses everyone. Actually my classes do in fact inherit from a series of interfaces. Here's a class diagram for anyone that is interested. Actually I have more interfaces than I have classes. I just left out the interfaces from my example for the sake of simplicity.
Thanks for all the suggestions to use mocking. I'd heard the term in the past, but never really understood what a "mock" was until now. I understand how I could create a mock of my IFile interface, which represents a single file in a catalog. I also understand how I could create a mock version of my ICatalog interface to test how two catalogs interact.
Yet I don't understand how I can test my concrete ICatalog implementations, as they are strongly tied to their back-end data sources. Actually, the whole purpose of my Catalog classes is to read, write, and manipulate their external data/resources.
You ought to read about SOLID code principles. In particular, the 'D' in SOLID stands for the Dependency Inversion Principle (closely tied to Dependency Injection): the class you're trying to test doesn't depend on other concrete classes and external implementations, but instead depends on interfaces and abstractions. You rely on an IoC (Inversion of Control) container (such as Unity, Ninject, or Castle Windsor) to dynamically inject the concrete dependency at runtime, but during unit testing you inject a mock/stub instead.
For instance, consider the following class:
public class ComplexAlgorithm
{
    protected DatabaseAccessor _data;

    public ComplexAlgorithm(DatabaseAccessor dataAccessor)
    {
        _data = dataAccessor;
    }

    public int RunAlgorithm()
    {
        // RunAlgorithm needs to call methods from DatabaseAccessor
    }
}
The RunAlgorithm() method needs to hit the database (via DatabaseAccessor), making it difficult to test. So instead we change DatabaseAccessor into an interface.
public class ComplexAlgorithm
{
    protected IDatabaseAccessor _data;

    public ComplexAlgorithm(IDatabaseAccessor dataAccessor)
    {
        _data = dataAccessor;
    }

    // rest of class (snip)
}
Now ComplexAlgorithm depends on an interface, IDatabaseAccessor, which can easily be mocked when we need to unit test ComplexAlgorithm in isolation. For instance:
public class MyFakeDataAccessor : IDatabaseAccessor
{
    public IList<Thing> GetThings()
    {
        // Return a fake/pretend list of things for testing
        return new List<Thing>()
        {
            new Thing("Thing 1"),
            new Thing("Thing 2"),
            new Thing("Thing 3"),
            new Thing("Thing 4")
        };
    }

    // Other methods (snip)
}

[Test]
public void Should_Return_8_With_Four_Things_In_Database()
{
    // Arrange
    IDatabaseAccessor fakeData = new MyFakeDataAccessor();
    ComplexAlgorithm algorithm = new ComplexAlgorithm(fakeData);
    int expectedValue = 8;

    // Act
    int actualValue = algorithm.RunAlgorithm();

    // Assert
    Assert.AreEqual(expectedValue, actualValue);
}
We're essentially 'decoupling' the two classes from each other. Decoupling is another important software engineering principle for writing more maintainable and robust code.
This is really the tip of the tip of the iceberg as far as Dependency Injection, SOLID and Decoupling go, but it's what you need in order to effectively Unit test your code.
Here is a simple algorithm that can help get you started. There are other techniques to decouple code, but this can often get you pretty far, particularly if your code is not too large and deeply entrenched.

1. Identify the locations where you depend on external data/resources and determine whether you have classes that isolate each dependency.

2. If necessary, refactor to achieve the necessary insulation. This is the most challenging part to do safely, so focus on the lowest-risk changes first.

3. Extract interfaces for the classes that isolate external data.

4. When you construct your classes, pass in the external dependencies as interfaces rather than having the class instantiate them itself.

5. Create test implementations of your interfaces that don't depend on the external resources. This is also where you can add 'sensing' code for your tests to make sure the appropriate calls are being used (see the sketch after this list). Mocking frameworks can be very helpful here, but it can be a good exercise to create the stub classes manually for a simple project, as it gives you a sense of what your test classes are doing. Manual stub classes typically set public properties to indicate when/how methods are called and have public properties to indicate how particular calls should behave.

6. Write tests that call methods on your classes, using the stubbed dependencies to sense whether the class is doing the right things in different cases. An easy way to start, if you already have functional code written, is to map out the different pathways and write tests that cover the different cases, asserting the behavior that currently occurs. These are known as characterization tests and they can give you the confidence to start refactoring your code, since now you know you're at least not changing the behavior you've already established.
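As an example of the "sensing" stubs mentioned in step 5, a hand-rolled stub for a hypothetical ICatalogStorage dependency (a name invented for illustration, not from your code) could simply record what was asked of it:

public class SensingCatalogStorage : ICatalogStorage
{
    // "Sensing" properties the test can inspect afterwards.
    public bool SaveWasCalled { get; private set; }
    public string LastSavedFileName { get; private set; }

    // Behaviour the test can configure up front.
    public bool SaveShouldSucceed { get; set; } = true;

    public bool SaveFile(string fileName, Stream contents)
    {
        SaveWasCalled = true;
        LastSavedFileName = fileName;
        return SaveShouldSucceed;
    }
}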
Best of luck. Writing good unit tests requires a change of perspective, which will develop naturally as you work to identify dependencies and create the necessary isolation for testing. At first, the code will feel uglier, with additional layers of indirection that were previously unnecessarily, but as you learn various isolation techniques and refactor (which you can now do more easily, with tests to support it), you may find that things actually become cleaner and easier to understand.
This is a pure case where Dependency Injection plays a vital role.
As Shady suggests, read about mocking and stubbing. To achieve this, you should consider using a dependency injector such as Unity in .NET.
Also read about Dependency Injection here:
http://martinfowler.com/articles/injection.html
the majority of my classes are never
meant to be instantiated on their own
This is where the D - the Design D - comes into TDD. It's bad design to have classes that are tightly coupled. That badness manifests itself immediately when you try to unit test such a class - and if you start with unit tests, you'll never find yourself in this situation. Writing testable code compels us to better design.
I'm sorry; this isn't an answer to your question, but I see others have already mentioned mocking and DI, and those answers are fine. But you put the TDD tag on this question, and this is the TDD answer to your question: don't put yourself in the situation of tightly coupled classes.
What you have now is Legacy Code. That is: code that has been implemented without tests. For your initial tests I would definitely test through the Catalog class until you can break all those dependencies. So your first set of tests will be integration/acceptance tests.
If you don't expect for any behavior to change, then leave it at that, but if you do make a change, I suggest that you TDD the change and build up unit tests with the changes.

NUnit testing the application, not the environment or database

I want to be better at using NUnit for testing the applications I write, but I often find that the unit tests I write have a direct link to the environment or underlying database on the development machine instead.
Let me make an example.
I'm writing a class which has the single responsibility of retrieving a string, which has been stored in the registry by another application. The key is stored in HKCU\Software\CustomApplication\IniPath.
The Test I end up writing looks like this;
[Test]
public void GetIniDir()
{
    RegistryReader r = new RegistryReader();
    Assert.AreEqual(@"C:\Programfiles\CustomApplication\SomeDir", r.IniDir);
}
But the problem here is that the string @"C:\Programfiles\CustomApplication\SomeDir" is really just correct right now. Tomorrow it might have changed to @"C:\Anotherdir\SomeDir", and suddenly that breaks my unit tests, even though the code hasn't changed.
This problem is also seen when I create a class which does CRUD operations against a database. The data in the database can change all the time, and this in turn makes the tests fail. So even if my class does what it is intended to do, it will fail because the database returns more customers than it had when I originally wrote the test.
[Test]
public void GetAllCustomersCount()
{
    DAL d = new DAL();
    Assert.AreEqual(249, d.GetCustomerCount());
}
Do you guys have any tips on writing Tests which do not rely on the surrounding environment as much?
The solution to this problem is well-known: mocking. Refactor your code to interfaces, then develop fake classes to implement those interfaces, or mock them with a mocking framework such as Rhino Mocks, EasyMock, Moq, et al. Using fake or mock classes allows you to define what the interface returns for your test without having to actually interact with the external entity, such as a database.
For more info on mocking via SO, try this Google search: http://www.google.com/search?q=mock+site:stackoverflow.com. You may also be interesting in the definitions at: What's the difference between faking, mocking, and stubbing?
Additionally, good development practices, such as dependency injection (as #Patrik suggests), which allows the decoupling of your classes from their dependencies, and the avoidance of static objects, which makes unit testing harder, will facilitate your testing. Using TDD practices - where the tests are developed first - will help you naturally develop applications that incorporate these design principles.
The easiest way is to make the dependencies explicit using dependency injection. For example, your first example has a dependency on the registry; make this dependency explicit by passing in an IRegistry instance (an interface that you'll define) and then only use this passed-in dependency to read from the registry. This way you can pass in an IRegistry stub when testing that always returns a known value; in production you instead use an implementation that actually reads from the registry.
public interface IRegistry
{
    string GetCurrentUserValue(string key);
}

public class RegistryReader
{
    public RegistryReader(IRegistry registry)
    {
        ...
        // make the dependency explicit in the constructor.
    }
}

[TestFixture]
public class RegistryReaderTests
{
    [Test]
    public void Foo_test()
    {
        var stub = new StubRegistry();
        stub.ReturnValue = "known value";

        RegistryReader testedReader = new RegistryReader(stub);

        // test here...
    }

    public class StubRegistry : IRegistry
    {
        public string ReturnValue;

        public string GetCurrentUserValue(string key)
        {
            return ReturnValue;
        }
    }
}
In this quick example I use manual stubbing; of course you could use any mocking framework for this.
Another way is to create a separate database for tests.
You should read up on the Inversion of Control principle and how to use the dependency injection technique - that really helps you write testable code.
In your case, you should probably have an interface - e.g. IIniDirProvider - which is implemented by RegistryBasedIniDirProvider, which is responsible for providing the ini directory based on a specific key in the registry.
Then, when some other class needs to look up the ini directory, that other class should have the following constructor:
public SomeOtherClass(IIniDirProvider iniDirProvider)
{
    this.iniDirProvider = iniDirProvider;
}
This allows you to pass in a mock IIniDirProvider when you need to unit test SomeOtherClass. That way your unit test will not depend on anything being present in the registry.
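A minimal sketch of the two pieces described above might look like this (the registry details are illustrative, based on the key mentioned in the question):

public interface IIniDirProvider
{
    string GetIniDir();
}

public class RegistryBasedIniDirProvider : IIniDirProvider
{
    public string GetIniDir()
    {
        // Reads HKCU\Software\CustomApplication\IniPath, as described in the question.
        using (var key = Microsoft.Win32.Registry.CurrentUser.OpenSubKey(@"Software\CustomApplication"))
        {
            return key == null ? null : key.GetValue("IniPath") as string;
        }
    }
}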
