Handling Multiple Mocks and Asserts in Unit Tests - c#

I currently have a repository that is using Entity Framework for my CRUD operations.
This is injected into my service that needs to use this repo.
Using AutoMapper, I project the entity Model onto a Poco model and the poco gets returned by the service.
If my objects have multiple properties, what is a correct way to set-up and then assert my properties?
If my service has multiple repo dependencies, what is the correct way to set up all my mocks? A [SetUp] method where all the mocks and objects are configured for these test fixtures?
I want to avoid having 10 tests and each test has 50 asserts on properties and dozens on mocks set-up for each test. This makes maintainability and readability difficult.
I have read Art of Unit Testing and did not discover any suggestions how to handle this case.
The tooling I am using is Rhino Mocks and NUnit.
I also found this on SO but it doesn't answer my question: Correctly Unit Test Service / Repository Interaction
Here is a sample that expresses what I am describing:
public void Save_ReturnSavedDocument()
{
    //Simulate DB object
    var repoResult = new EntityModel.Document()
    {
        DocumentId = 2,
        Message = "TestMessage1",
        Name = "Name1",
        Email = "Email1",
        Comment = "Comment1"
    };

    //Create mocks of Repo Methods - Might have many dependencies
    var documentRepository = MockRepository.GenerateStub<IDocumentRepository>();
    documentRepository.Stub(m => m.Get()).IgnoreArguments().Return(new List<EntityModel.Document>()
    {
        repoResult
    }.AsQueryable());
    documentRepository.Stub(a => a.Save(null, null)).IgnoreArguments().Return(repoResult);

    //Instantiate service and inject repo
    var documentService = new DocumentService(documentRepository);
    var savedDocument = documentService.Save(new Models.Document()
    {
        ID = 0,
        DocumentTypeId = 1,
        Message = "TestMessage1"
    });

    //Assert that properties are correctly mapped after save
    Assert.AreEqual(repoResult.Message, savedDocument.Message);
    Assert.AreEqual(repoResult.DocumentId, savedDocument.DocumentId);
    Assert.AreEqual(repoResult.Name, savedDocument.Name);
    Assert.AreEqual(repoResult.Email, savedDocument.Email);
    Assert.AreEqual(repoResult.Comment, savedDocument.Comment);
    //Many more properties here
}

First of all, each test should only have one assertion (unless the others validate the real one), e.g. if you want to assert that all elements of a list are distinct, you may want to assert first that the list is not empty, otherwise you may get a false positive. In other cases there should only be one assert per test. Why? If the test fails, its name tells you exactly what is wrong. If you have multiple asserts and the first one fails, you don't know whether the rest were OK. All you know then is "something went wrong".
You say you don't want to set up all mocks/stubs in 10 tests. This is why most frameworks offer a Setup method which runs before each test. This is where you can put most of your mock configuration in one place and reuse it. In NUnit you just create a method and decorate it with a [SetUp] attribute.
If you want to test a method with different values of a parameter, you can use NUnit's [TestCase] attributes. This is very elegant and you don't have to create multiple identical tests (see the sketch below).
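For illustration, here is a minimal NUnit sketch built around the question's types; the [SetUp] wiring and the [TestCase] values are assumptions for the example, not from the original post:

[TestFixture]
public class DocumentServiceTests
{
    private IDocumentRepository _documentRepository;
    private DocumentService _documentService;
    private EntityModel.Document _repoResult;

    [SetUp]
    public void SetUp()
    {
        // Shared arrangement: runs before every test in this fixture.
        _repoResult = new EntityModel.Document { DocumentId = 2, Message = "TestMessage1" };
        _documentRepository = MockRepository.GenerateStub<IDocumentRepository>();
        _documentRepository.Stub(r => r.Save(null, null)).IgnoreArguments().Return(_repoResult);
        _documentService = new DocumentService(_documentRepository);
    }

    // The same test body runs once per [TestCase] value.
    [TestCase(1)]
    [TestCase(2)]
    public void Save_ReturnsMappedDocument(int documentTypeId)
    {
        var saved = _documentService.Save(new Models.Document { ID = 0, DocumentTypeId = documentTypeId, Message = "TestMessage1" });

        Assert.AreEqual(_repoResult.Message, saved.Message);
    }
}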
Now let's talk about the useful tools.
AutoFixture: this is an amazing and very powerful tool that allows you to create an object of a class which requires multiple dependencies. It sets up the dependencies with dummy mocks automatically and lets you manually set up only the ones you need in a particular test. Say you need to create a mock for a UnitOfWork which takes 10 repositories as dependencies, and in your test you only need to set up one of them. AutoFixture allows you to create that UnitOfWork and set up that one particular repository mock (or more if you need); the rest of the dependencies will be set up automatically with dummy mocks. This saves you a huge amount of useless code. It is a little bit like an IoC container for your tests.
It can also generate fake objects with random data for you. So, e.g., the whole initialization of EntityModel.Document becomes just one line:
var repoResult = _fixture.Create<EntityModel.Document>();
Especially take a look at:
Create
Freeze
AutoMockCustomization
Here you will find my answer explaining how to use AutoFixture.
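As a rough sketch against the question's types (this assumes the AutoFixture package plus its Rhino Mocks integration, AutoFixture.AutoRhinoMock; the exact customization name depends on which mocking library you pair it with):

// The customization makes AutoFixture fill every interface dependency with a Rhino Mocks stub.
var fixture = new Fixture().Customize(new AutoRhinoMockCustomization());

// Freeze the one dependency this test cares about; later resolutions reuse this instance.
var documentRepository = fixture.Freeze<IDocumentRepository>();
documentRepository.Stub(r => r.Save(null, null)).IgnoreArguments()
    .Return(fixture.Create<EntityModel.Document>());   // random but fully populated test data

// All other constructor arguments (if any) are supplied automatically with dummy mocks.
var documentService = fixture.Create<DocumentService>();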
SemanticComparison Tutorial: this is what will help you avoid multiple assertions while comparing properties of objects of different types. If the properties have the same names it will do it almost automatically; if not, you can define the mappings. It will also tell you exactly which properties do not match and show their values.
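A sketch of what that can look like with the question's objects (assuming the Likeness/AsSource API from the SemanticComparison package):

// Compares every same-named property of savedDocument against repoResult
// and reports all mismatching properties by name in one failure.
repoResult.AsSource().OfLikeness<Models.Document>().ShouldEqual(savedDocument);

Differently named properties can then be mapped explicitly on the returned Likeness (the package exposes With/EqualsWhen style customizations for that).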
Fluent Assertions: this just provides a nicer way to assert things.
Instead of
Assert.AreEqual(repoResult.Message, savedDocument.Message);
You can do
repoResult.Message.Should().Be(savedDocument.Message);
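Depending on your Fluent Assertions version there is also a structural comparison that addresses the many-properties problem in a single call; a sketch (the method name varies between versions):

// Compares all same-named properties of the two objects and lists every difference at once.
savedDocument.ShouldBeEquivalentTo(repoResult);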
To sum up: these tools will help you create your tests with much less code and will make them much more readable. It takes time to get to know them well, especially AutoFixture, but once you do, they become the first things you add to your test projects - believe me :). By the way, they are all available on NuGet.
One more tip: if you have problems testing a class, it usually indicates a bad architecture. The solution usually is to extract smaller classes from the problematic class (Single Responsibility Principle). Then you can easily test the small classes for business logic, and easily test the original class for its interactions with them.

Consider using anonymous types:
public void Save_ReturnSavedDocument()
{
    // (unmodified code)...

    //Assert that properties are correctly mapped after save
    Assert.AreEqual(
        new
        {
            repoResult.Message,
            repoResult.DocumentId,
            repoResult.Name,
            repoResult.Email,
            repoResult.Comment,
        },
        new
        {
            savedDocument.Message,
            savedDocument.DocumentId,
            savedDocument.Name,
            savedDocument.Email,
            savedDocument.Comment,
        });
}
There is one thing to look out for: nullable types (e.g. int?) and properties that might have slightly different types (float vs. double). You can work around this by casting properties to specific types (e.g. (int?)repoResult.DocumentId).
Another option would be to create a custom assert class/method(s).
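A hypothetical helper along those lines, collecting every mismatch before failing, might look like this:

// Custom assert: reports all property differences in one failure message
// instead of stopping at the first failed Assert.AreEqual.
private static void AssertDocumentMapped(EntityModel.Document expected, Models.Document actual)
{
    var errors = new List<string>();
    if (expected.Message != actual.Message) errors.Add("Message differs");
    if (expected.Name != actual.Name) errors.Add("Name differs");
    if (expected.Email != actual.Email) errors.Add("Email differs");
    if (expected.Comment != actual.Comment) errors.Add("Comment differs");

    if (errors.Count > 0)
        Assert.Fail(string.Join(Environment.NewLine, errors.ToArray()));
}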

Basically, the trick is to push as much clutter as you can outside of the unit tests, so that only the behaviour being tested remains.
Some ways to do that:
1. Don't declare instances of your model/poco classes inside each test, but rather use a static TestData class that exposes these instances as properties. Usually these instances are useful for more than one test as well. For added robustness, have the properties on the TestData class create and return a new object instance every time they're accessed, so that one unit test cannot affect the next by modifying the test data.
2. On your test class, declare a helper method that accepts the (usually mocked) repositories and returns the system-under-test (or "SUT", i.e. your service). This is mainly useful in situations where configuring the SUT takes 2 or more statements, since it tidies up your test code.
3. As an alternative to 2, have your test class expose properties for each of the mocked repositories, so that you don't need to declare these in your unit tests; you can even pre-initialize them with a default behaviour to reduce the configuration per unit test even further. The helper method that returns the SUT then doesn't take the mocked repositories as arguments, but rather constructs the SUT using the properties. You might want to reinitialize each repository property on each [TestInitialize].
4. To reduce the clutter of comparing each property of your Poco with the corresponding property on the Model object, declare a helper method on your test class that does this for you (i.e. void AssertPocoEqualsModel(Poco p, Model m)). Again, this removes some clutter and you get the reusability for free.
5. Or, as an alternative to 4, don't compare all properties in every unit test, but rather test the mapping code in only one place with a separate set of unit tests. This has the added benefit that, should the mapping ever include new properties or change in any other way, you don't have to update 100-odd unit tests. When not testing the property mappings, you should just verify that the SUT returns the correct object instances (e.g. based on Id or Name), and that just the properties that might be changed by the business logic currently being tested contain the correct values (such as an Order total).
Personally, I prefer 5 because of its maintainability, but this isn't always possible and then 4 is usually a viable alternative.
Your test code would then look like this (unverified, just for demonstration purposes):
[TestClass]
public class DocumentServiceTest
{
    private IDocumentRepository DocumentRepositoryMock { get; set; }

    [TestInitialize]
    public void Initialize()
    {
        DocumentRepositoryMock = MockRepository.GenerateStub<IDocumentRepository>();
    }

    [TestMethod]
    public void Save_ReturnSavedDocument()
    {
        //Arrange
        var repoResult = TestData.AcmeDocumentEntity;
        DocumentRepositoryMock
            .Stub(m => m.Get())
            .IgnoreArguments()
            .Return(new List<EntityModel.Document>() { repoResult }.AsQueryable());
        DocumentRepositoryMock
            .Stub(a => a.Save(null, null))
            .IgnoreArguments()
            .Return(repoResult);

        //Act
        var documentService = CreateDocumentService();
        var savedDocument = documentService.Save(TestData.AcmeDocumentModel);

        //Assert that properties are correctly mapped after save
        AssertEntityEqualsModel(repoResult, savedDocument);
    }

    //Helpers
    private DocumentService CreateDocumentService()
    {
        return new DocumentService(DocumentRepositoryMock);
    }

    private void AssertEntityEqualsModel(EntityModel.Document entityDoc, Models.Document modelDoc)
    {
        Assert.AreEqual(entityDoc.Message, modelDoc.Message);
        Assert.AreEqual(entityDoc.DocumentId, modelDoc.DocumentId);
        //...
    }
}

public static class TestData
{
    public static EntityModel.Document AcmeDocumentEntity
    {
        get
        {
            //Note that a new instance is returned on each invocation:
            return new EntityModel.Document()
            {
                DocumentId = 2,
                Message = "TestMessage1",
                //...
            };
        }
    }

    public static Models.Document AcmeDocumentModel
    {
        get { /* etc. */ }
    }
}

In general, if you're having a hard time creating a concise test, you're testing the wrong thing or the code you're testing has too many responsibilities (in my experience).
In this specific case, it looks like you're testing the wrong thing. If your repo is using Entity Framework, you're getting the same object back that you're sending in. EF just updates the Id for new objects and any timestamp fields you might have.
Also, if you can't get one of your asserts to fail without a second assert failing, then you don't need one of them. Is it really possible for "Name" to come back OK but for "Email" to fail? If so, they should be in separate tests.
Finally, trying some TDD might help. Comment out all the code in your service's Save. Then write a test that fails. Then uncomment only enough code to make your test pass. Then write your next failing test. Can't write a test that fails? Then you're done.

Related

How to write unit test [duplicate]

I've read various articles about mocking vs stubbing in testing, including Martin Fowler's Mocks Aren't Stubs, but still don't understand the difference.
Foreword
There are several definitions of objects that are not real. The general term is test double. This term encompasses: dummy, fake, stub, mock.
Reference
According to Martin Fowler's article:
Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.
Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
Style
Mocks vs Stubs = Behavioral testing vs State testing
Principle
According to the principle of Test only one thing per test, there may be several stubs in one test, but generally there is only one mock.
Lifecycle
Test lifecycle with stubs:
Setup - Prepare the object that is being tested and its stub collaborators.
Exercise - Test the functionality.
Verify state - Use asserts to check the object's state.
Teardown - Clean up resources.
Test lifecycle with mocks:
Setup data - Prepare the object that is being tested.
Setup expectations - Prepare the expectations in the mock that is used by the primary object.
Exercise - Test the functionality.
Verify expectations - Verify that the correct methods have been invoked on the mock.
Verify state - Use asserts to check the object's state.
Teardown - Clean up resources.
Summary
Both mock and stub testing give an answer to the question: what is the result?
Testing with mocks is also interested in: how was the result achieved?
Stub
I believe the biggest distinction is that you have already written a stub with predetermined behavior. So you would have a class that implements the dependency (an abstract class or interface, most likely) you are faking for testing purposes, and the methods would just be stubbed out with set responses. They would not do anything fancy, and you would have already written the stubbed code for it outside of your test.
Mock
A mock is something that, as part of your test, you have to set up with your expectations. A mock is not set up in a predetermined way, so you have code that does it in your test. Mocks are in a way determined at runtime, since the code that sets the expectations has to run before they do anything.
Difference between Mocks and Stubs
Tests written with mocks usually follow an initialize -> set expectations -> exercise -> verify pattern, while a pre-written stub follows an initialize -> exercise -> verify pattern.
Similarity between Mocks and Stubs
The purpose of both is to eliminate testing all the dependencies of a class or function so your tests are more focused and simpler in what they are trying to prove.
A stub is a simple fake object. It just makes sure test runs smoothly.
A mock is a smarter stub. You verify your test passes through it.
Here's a description of each one, followed by a real-world sample.
Dummy - just bogus values to satisfy the API.
Example: If you're testing a method of a class which requires many mandatory parameters in a constructor which have no effect on your test, then you may create dummy objects for the purpose of creating new instances of a class.
Fake - create a test implementation of a class which may have a dependency on some external infrastructure. (It's good practice that your unit test does NOT actually interact with external infrastructure.)
Example: Create fake implementation for accessing a database, replace it with in-memory collection.
Stub - override methods to return hard-coded values, also referred to as state-based.
Example: Your test class depends on a method Calculate() taking 5 minutes to complete. Rather than wait for 5 minutes you can replace its real implementation with stub that returns hard-coded values; taking only a small fraction of the time.
Mock - very similar to Stub but interaction-based rather than state-based. This means you don't expect the Mock to return some value, but you verify that a specific sequence of method calls is made.
Example: You're testing a user registration class. After calling Save, it should call SendConfirmationEmail.
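A sketch of that example (the types here, IEmailSender and UserRegistration, are hypothetical; shown with Moq since it appears later in this thread):

// Interaction-based test: we only care that Save triggered the confirmation e-mail.
var emailSender = new Mock<IEmailSender>();
var registration = new UserRegistration(emailSender.Object);

registration.Save(new User { Email = "new.user@example.com" });

emailSender.Verify(e => e.SendConfirmationEmail("new.user@example.com"), Times.Once);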
Stubs and Mocks are both kinds of test doubles: both swap the real implementation for a test implementation, but for different, specific reasons.
In the codeschool.com course, Rails Testing for Zombies, they give this definition of the terms:
Stub
For replacing a method with code that returns a specified result.
Mock
A stub with an assertion that the method gets called.
So, as Sean Copenhaver described in his answer, the difference is that mocks set expectations (i.e. make assertions about whether or how they get called).
Stubs don't fail your tests; mocks can.
Reading all the explanations above, let me try to condense:
Stub: a dummy piece of code that lets the test run, but you don't care what happens to it. Substitutes for real working code.
Mock: a dummy piece of code that you verify is called correctly as part of the test. Substitutes for real working code.
Spy: a dummy piece of code that intercepts and verifies some calls to real working code, avoiding the need to substitute all the real code.
I think the simplest and clearest answer to this question is given by Roy Osherove in his book The Art of Unit Testing (page 85):
The easiest way to tell we’re dealing with a stub is to notice that the stub can never fail the test. The asserts the test uses are always against
the class under test.
On the other hand, the test will use a mock object to verify whether the
test failed or not. [...]
Again, the mock object is the object we use to see if the test failed or not.
Stub and mock are both fakes.
If you are making assertions against the fake, it means you are using the fake as a mock; if you are using the fake only to run the test, without asserting against it, you are using the fake as a stub.
A Mock is just testing behaviour, making sure certain methods are called.
A Stub is a testable version (per se) of a particular object.
If you compare it to debugging:
Stub is like making sure a method returns the correct value
Mock is like actually stepping into the method and making sure everything inside is correct before returning the correct value.
This slide explains the main differences very well.
(From CSE 403 Lecture 16, University of Washington; slide created by Marty Stepp.)
To be very clear and practical:
Stub: a class or object that implements the methods of the class/object to be faked and always returns what you want.
Example in JavaScript:
var Stub = {
    method_a: function (param_a, param_b) {
        return 'This is a static result';
    }
};
Mock: the same as a stub, but it adds some logic that "verifies" when a method is called, so you can be sure some implementation is calling that method.
As @mLevan says, imagine as an example that you're testing a user registration class. After calling Save, it should call SendConfirmationEmail.
A very stupid code example:
var Mock = {
    calls: {
        method_a: 0
    },
    method_a: function (param_a, param_b) {
        this.calls.method_a++;
        console.log('Mock.method_a has been called!');
    }
};
Let's look at the Test Doubles:
Fake: Fakes are objects that have working implementations, but not the same as production one. Such as: in-memory implementation of Data Access Object or Repository.
Stub: Stub is an object that holds predefined data and uses it to answer calls during tests. Such as: an object that needs to grab some data from the database to respond to a method call.
Mocks: Mocks are objects that register calls they receive.
In test assertion, we can verify on Mocks that all expected actions were performed. Such as: a functionality that calls e-mail sending service.
For more, just check this.
Using a mental model really helped me understand this, rather than all of the explanations and articles, that didn't quite "sink in".
Imagine your kid has a glass plate on the table and he starts playing with it. Now, you're afraid it will break. So, you give him a plastic plate instead. That would be a Mock (same behavior, same interface, "softer" implementation).
Now, say you don't have the plastic replacement, so you explain "If you continue playing with it, it will break!". That's a Stub, you provided a predefined state in advance.
A Dummy would be the fork he didn't even use... and a Spy could be something like providing the same explanation you already used that worked.
I think the most important difference between them is their intentions.
Let me try to explain it in WHY stub vs. WHY mock
Suppose I'm writing test code for my mac twitter client's public timeline controller
Here is test sample code
twitter_api.stub(:public_timeline).and_return(public_timeline_array)
client_ui.should_receive(:insert_timeline_above).with(public_timeline_array)
controller.refresh_public_timeline
STUB: The network connection to the Twitter API is very slow, which makes my test slow. I know it will return timelines, so I made a stub simulating the HTTP Twitter API, so that my test runs very fast and I can run it even when I'm offline.
MOCK: I haven't written any of my UI methods yet, and I'm not sure what methods I need to write for my UI object. I hope to learn how my controller will collaborate with my UI object by writing the test code.
By writing a mock, you discover the objects' collaboration relationships by verifying that expectations are met, while a stub only simulates the object's behavior.
I suggest to read this article if you're trying to know more about mocks: http://jmock.org/oopsla2004.pdf
I like the explanation put out by Roy Osherove [video link]:
Every class or object created is a fake. It is a mock if you verify calls against it. Otherwise it's a stub.
Stub
A stub is an object used to fake a method that has pre-programmed behavior. You may want to use this instead of an existing method in order to avoid unwanted side-effects (e.g. a stub could make a fake fetch call that returns a pre-programmed response without actually making a request to a server).
Mock
A mock is an object used to fake a method that has pre-programmed behavior as well as pre-programmed expectations. If these expectations are not met then the mock will cause the test to fail (e.g. a mock could make a fake fetch call that returns a pre-programmed response without actually making a request to a server which would expect e.g. the first argument to be "http://localhost:3008/" otherwise the test would fail.)
Difference
Unlike mocks, stubs do not have pre-programmed expectations that could fail your test.
Stubs vs. Mocks
Stubs
provide specific answers to method calls
ex: myStubbedService.getValues() just returns a String needed by the code under test
used by the code under test to isolate it
cannot fail the test
ex: myStubbedService.getValues() just returns the stubbed value
often implement abstract methods
Mocks
"superset" of stubs; can assert that certain methods are called
ex: verify that myMockedService.getValues() is called only once
used to test behaviour of code under test
can fail test
ex: verify that myMockedService.getValues() was called once; verification fails, because myMockedService.getValues() was not called by my tested code
often mock interfaces
I was reading The Art of Unit Testing, and stumbled upon the following definition:
A fake is a generic term that can be used to describe either a stub or a mock object (handwritten or otherwise), because they both look like the real object. Whether a fake is a stub or a mock depends on how it's used in the current test. if it's used to check an interaction (asserted against), it's a mock object. Otherwise, it's a stub.
The generic term he (Gerard Meszaros) uses is Test Double (think stunt double). Test Double is a generic term for any case where you replace a production object for testing purposes. There are various kinds of double that Gerard lists:
Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an InMemoryTestDatabase is a good example).
Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
Spies are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent (also called a Partial Mock).
Mocks are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don't expect and are checked during verification to ensure they got all the calls they were expecting.
Source
A fake is a generic term that can be used to describe either a stub or a mock object (handwritten or otherwise), because they both look like the real object. Whether a fake is a stub or a mock depends on how it's used in the current test. If it's used to check an interaction (asserted against), it's a mock object. Otherwise, it's a stub.
A fake makes sure the test runs smoothly. It means the reader of your future test will understand what the behavior of the fake object will be, without needing to read its source code (and without needing to depend on an external resource).
What does "run smoothly" mean?
For example, in the code below:
public void Analyze(string filename)
{
    if (filename.Length < 8)
    {
        try
        {
            errorService.LogError("long file entered named: " + filename);
        }
        catch (Exception e)
        {
            mailService.SendEMail("admin@hotmail.com", "ErrorOnWebService", "someerror");
        }
    }
}
You want to test the mailService.SendEMail() method. To do that, you need to simulate an exception in your test, so you just need to create a fake stub ErrorService class to simulate that result; then your test code will be able to exercise the mailService.SendEMail() method. As you can see, you need to simulate a result that comes from another external dependency, the ErrorService class.
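A rough hand-written sketch of those two doubles (the interface names IErrorService and IMailService are assumptions based on the snippet above):

// Stub: forces the error-logging path to throw so the catch block runs.
public class ThrowingErrorServiceStub : IErrorService
{
    public void LogError(string message)
    {
        throw new Exception("simulated logging failure");
    }
}

// Hand-rolled mock: records whether SendEMail was called so the test can assert on it.
public class RecordingMailService : IMailService
{
    public bool EmailSent { get; private set; }

    public void SendEMail(string to, string subject, string body)
    {
        EmailSent = true;
    }
}

The test would then call Analyze with a filename short enough to enter the branch and assert that RecordingMailService.EmailSent is true.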
Mocks: help to emulate and examine outcoming interactions. These interactions are calls the SUT makes to its dependencies to change their state.
Stubs: help to emulate incoming interactions. These interactions are calls the SUT makes to its dependencies to get input data.
Source: Unit Testing Principles, Practices, and Patterns (Manning).
Right from the paper Mock Roles, not Objects, by the developers of jMock :
Stubs are dummy implementations of production code that return canned results. Mock Objects act as stubs, but also include assertions to instrument the interactions of the target object with its neighbours.
So, the main differences are:
expectations set on stubs are usually generic, while expectations set on mocks can be more "clever" (e.g. return this on the first call, this on the second etc.).
stubs are mainly used to setup indirect inputs of the SUT, while mocks can be used to test both indirect inputs and indirect outputs of the SUT.
To sum up, while also trying to disperse the confusion from Fowler's article title: mocks are stubs, but they are not only stubs.
I came across this interesting article by Uncle Bob, The Little Mocker. It explains all the terminology in a very easy to understand manner, so it's useful for beginners. Martin Fowler's article is a hard read, especially for beginners like me.
There are a lot of valid answers up there, but I think it's worth mentioning this from Uncle Bob:
https://8thlight.com/blog/uncle-bob/2014/05/14/TheLittleMocker.html
The best explanation ever, with examples!
A mock is both a technical and a functional object.
The mock is technical. It is indeed created by a mocking library (EasyMock, JMockit and, more recently, Mockito are known for these) thanks to bytecode generation.
The mock implementation is generated in a way that lets us instrument it to return a specific value when a method is invoked, but also do other things such as verify that a mock method was invoked with specific parameters (strict check) or with whatever parameters (no strict check).
Instantiating a mock :
@Mock Foo fooMock;
Recording a behavior :
when(fooMock.hello()).thenReturn("hello you!");
Verifying an invocation :
verify(fooMock).hello()
These are clearly not the natural way to instantiate/override the Foo class/behavior. That's why I refer to a technical aspect.
But the mock is also functional because it is an instance of the class we need to isolate from the SUT. And with recorded behaviors on it, we can use it in the SUT in the same way as we would a stub.
The stub is just a functional object: it is an instance of the class we need to isolate from the SUT, and that's all.
That means that both the stub class and all the behavior fixtures needed during our unit tests have to be defined explicitly.
For example, to stub hello() we would need to subclass the Foo class (or implement its interface if it has one) and override hello():
public class HelloStub extends Foo {
    @Override
    public String hello() {
        return "hello you!";
    }
}
If another test scenario requires another value return, we would probably need to define a generic way to set the return :
public class HelloStub extends Foo {
    private final String helloReturn;

    public HelloStub(String helloReturn) {
        this.helloReturn = helloReturn;
    }

    @Override
    public String hello() {
        return helloReturn;
    }
}
Another scenario: if I had a side-effect method (no return value) and wanted to check that the method was invoked, I would probably have to add a boolean or a counter in the stub class to count how many times the method was invoked.
Conclusion
The stub often requires a lot of overhead/code to write for your unit test, which the mock avoids by providing recording/verifying features out of the box.
That's why nowadays the stub approach is rarely used in practice, given the advent of excellent mock libraries.
About the Martin Fowler article: I don't consider myself a "mockist" programmer just because I use mocks and avoid stubs.
But I use mocks when they are really required (annoying dependencies), and I favor test slicing and mini-integration tests when I test a class with dependencies for which mocking would be overhead.
In addition to the useful answers above, one of the most powerful points of using mocks over stubs:
If the collaborator [which the main code depends on] is not under our control (e.g. it comes from a third-party library), then a stub is more difficult to write than a mock.
Stub
A stub is an object that holds predefined data and uses it to answer calls during tests. It is used when you can’t or don’t want to involve objects that would answer with real data or have undesirable side effects.
An example can be an object that needs to grab some data from the database to respond to a method call. Instead of the real object, we introduced a stub and defined what data should be returned.
Example of a Stub:
public class GradesService {

    private final Gradebook gradebook;

    public GradesService(Gradebook gradebook) {
        this.gradebook = gradebook;
    }

    Double averageGrades(Student student) {
        return average(gradebook.gradesFor(student));
    }
}
Instead of calling the database from the Gradebook store to get real student grades, you preconfigure the stub with grades that will be returned. You define just enough data to test the average calculation algorithm.
public class GradesServiceTest {

    private Student student;
    private Gradebook gradebook;

    @Before
    public void setUp() throws Exception {
        gradebook = mock(Gradebook.class);
        student = new Student();
    }

    @Test
    public void calculates_grades_average_for_student() {
        //stubbing gradebook
        when(gradebook.gradesFor(student)).thenReturn(grades(8, 6, 10));

        double averageGrades = new GradesService(gradebook).averageGrades(student);

        assertThat(averageGrades).isEqualTo(8.0);
    }
}
Mock
Mocks are objects that register the calls they receive. In the test assertion you can verify on mocks that all expected actions were performed. You use mocks when you don't want to invoke production code, or when there is no easy way to verify that the intended code was executed. There is no return value and no easy way to check for a system state change. An example can be a functionality that calls an e-mail sending service.
You don't want to send e-mails each time you run a test. Moreover, it is not easy to verify in tests that the right e-mail was sent. The only thing you can do is to verify the outputs of the functionality that is exercised in our test. In other words, verify that the e-mail sending service was called.
Example of Mock:
public class SecurityCentral {

    private final Window window;
    private final Door door;

    public SecurityCentral(Window window, Door door) {
        this.window = window;
        this.door = door;
    }

    void securityOn() {
        window.close();
        door.close();
    }
}
You don’t want to close real doors to test that security method is working, right? Instead, you place door and window mocks objects in the test code.
public class SecurityCentralTest {

    Window windowMock = mock(Window.class);
    Door doorMock = mock(Door.class);

    @Test
    public void enabling_security_locks_windows_and_doors() {
        SecurityCentral securityCentral = new SecurityCentral(windowMock, doorMock);

        securityCentral.securityOn();

        verify(doorMock).close();
        verify(windowMock).close();
    }
}
Thanks a lot to Michał Lipski for his good article. For further reading:
Test Double – Martin Fowler https://martinfowler.com/bliki/TestDouble.html
Test Double – xUnit Patterns http://xunitpatterns.com/Test%20Double.html
Mocks Aren’t Stubs – Martin Fowler https://martinfowler.com/articles/mocksArentStubs.html
Command Query Separation – Martin Fowler https://martinfowler.com/bliki/CommandQuerySeparation.html
A stub helps us run the test. How? It provides values that help the test run. These values are not real; we create them just to run the test. For example, we create a HashMap that gives us values similar to the values in a database table, so instead of interacting directly with the database we interact with the HashMap.
A mock is a fake object that we make assertions against; it is where we put the assert.
See the example below of mocks vs. stubs using C# and the Moq framework. Moq doesn't have a special keyword for Stub, but you can use a Mock object to create stubs too.
namespace UnitTestProject2
{
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Moq;

    [TestClass]
    public class UnitTest1
    {
        /// <summary>
        /// Test using Mock to Verify that GetNameWithPrefix method calls Repository GetName method "once" when Id is greater than Zero
        /// </summary>
        [TestMethod]
        public void GetNameWithPrefix_IdIsTwelve_GetNameCalledOnce()
        {
            // Arrange
            var mockEntityRepository = new Mock<IEntityRepository>();
            mockEntityRepository.Setup(m => m.GetName(It.IsAny<int>()));
            var entity = new EntityClass(mockEntityRepository.Object);

            // Act
            var name = entity.GetNameWithPrefix(12);

            // Assert
            mockEntityRepository.Verify(m => m.GetName(It.IsAny<int>()), Times.Once);
        }

        /// <summary>
        /// Test using Mock to Verify that GetNameWithPrefix method doesn't call Repository GetName method when Id is Zero
        /// </summary>
        [TestMethod]
        public void GetNameWithPrefix_IdIsZero_GetNameNeverCalled()
        {
            // Arrange
            var mockEntityRepository = new Mock<IEntityRepository>();
            mockEntityRepository.Setup(m => m.GetName(It.IsAny<int>()));
            var entity = new EntityClass(mockEntityRepository.Object);

            // Act
            var name = entity.GetNameWithPrefix(0);

            // Assert
            mockEntityRepository.Verify(m => m.GetName(It.IsAny<int>()), Times.Never);
        }

        /// <summary>
        /// Test using Stub to Verify that GetNameWithPrefix method returns Name with a Prefix
        /// </summary>
        [TestMethod]
        public void GetNameWithPrefix_IdIsTwelve_ReturnsNameWithPrefix()
        {
            // Arrange
            var stubEntityRepository = new Mock<IEntityRepository>();
            stubEntityRepository.Setup(m => m.GetName(It.IsAny<int>()))
                .Returns("Stub");
            const string EXPECTED_NAME_WITH_PREFIX = "Mr. Stub";
            var entity = new EntityClass(stubEntityRepository.Object);

            // Act
            var name = entity.GetNameWithPrefix(12);

            // Assert
            Assert.AreEqual(EXPECTED_NAME_WITH_PREFIX, name);
        }
    }

    public class EntityClass
    {
        private IEntityRepository _entityRepository;

        public EntityClass(IEntityRepository entityRepository)
        {
            this._entityRepository = entityRepository;
        }

        public string Name { get; set; }

        public string GetNameWithPrefix(int id)
        {
            string name = string.Empty;
            if (id > 0)
            {
                name = this._entityRepository.GetName(id);
            }
            return "Mr. " + name;
        }
    }

    public interface IEntityRepository
    {
        string GetName(int id);
    }

    public class EntityRepository : IEntityRepository
    {
        public string GetName(int id)
        {
            // Code to connect to DB and get name based on Id
            return "NameFromDb";
        }
    }
}

Is it necessary to mock all and every dependency in Unit Tests?

I'm trying to create a unit test for an ASP.NET controller that has the following constructor definition (filled by Ninject when running the real application):
public OrderController(IViewModelFactory modelFactory, INewsRepository repository, ILoggedUserHelper loggedUserHelper,
    IDelegateHelper delegateHelper, ICustomerContextWrapper customerContext)
{
    this.factory = modelFactory;
    this.loggedUserHelper = loggedUserHelper;
    this.delegateHelper = delegateHelper;
    this.customerContext = customerContext;
}
I want to test the methods inside the OrderController class, but in order to isolate it, I have to mock all and every of those dependencies, which becomes outright ridiculous (having to also mock subdependencies probably).
In this case, which is the best practice to Unit Test this class?
Well, you have to provide test doubles for all the dependencies, but not necessarily mocks.
Luckily, this is the 21st century and there are tools to make the job easier for us. You can use AutoFixture to create an instance of OrderController and inject mocks as necessary.
var fixture = new Fixture().Customize(new AutoConfiguredMoqCustomization());
var orderController = fixture.Create<OrderController>();
Which, basically, is equivalent to:
var factory = new Mock<IViewModelFactory>();
var repository = new Mock<INewsRepository>();
var loggedUserHelper = new Mock<ILoggedUserHelper>();
var delegateHelper = new Mock<IDelegateHelper>();
var customerContext = new Mock<ICustomerContextWrapper>();
var orderController = new OrderController(factory.Object, repository.Object, loggedUserHelper.Object, delegateHelper.Object, customerContext.Object);
If those dependencies depend on other types, those will be setup as well. AutoFixture with the AutoConfiguredMoqCustomization customization will build an entire graph of dependencies.
If you need access to, say, the repository mock, so you can do some additional setups or assertions on it later, you can freeze it. Freezing a type will make the fixture container contain only one instance of that type, e.g.:
var fixture = new Fixture().Customize(new AutoConfiguredMoqCustomization());
var repositoryMock = fixture.Freeze<Mock<INewsRepository>>();
repositoryMock.Setup(x => x.Retrieve()).Returns(1);
//the frozen instance will be injected here
var orderController = fixture.Create<OrderController>();
repositoryMock.Verify(x => x.Retrieve(), Times.Once);
I've used Moq in these examples, but AutoFixture also integrates with NSubstitute, RhinoMock and Foq.
Disclosure: I'm one of the project's contributors
No, you don't. The different concepts of test object implementations you can use are known as Test Doubles. Mocks are just one type of Test Double as defined by Gerard Meszaros in his book:
Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an InMemoryTestDatabase is a good example).
Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
Spies are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent.
Mocks are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don't expect and are checked during verification to ensure they got all the calls they were expecting.
You only need to give as many stubs, fakes and dummies as required for your test to pass.
Dummies take very little work to generate and may be enough to cover your scenario. An example would be a constructor that takes an IEmailer and an ILogWriter. If you're only testing the Log method, you only need to provide enough of an implementation of IEmailer in order for the test to not throw Argument exceptions.
Also, regarding your point about sub-dependencies... Moq will take care of that for you, because Moq implementations of your interface won't take dependencies.
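As a sketch of that last point (IEmailer, ILogWriter and the Log method come from the answer above; the class name and the Write method in the arrangement are hypothetical):

// IEmailer is a dummy here: never configured, never verified, it only satisfies the constructor.
var emailerDummy = new Mock<IEmailer>();
var logWriterMock = new Mock<ILogWriter>();

var sut = new AuditService(emailerDummy.Object, logWriterMock.Object); // hypothetical class under test

sut.Log("something happened");

logWriterMock.Verify(w => w.Write("something happened"), Times.Once); // hypothetical Write method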

How can I isolate a data source such as DbSet?

I have a number of controllers that I am testing, each of which has a dependency on a repository. This is how I am supplying the mocked repository in the case of each test fixture:
[SetUp]
public void SetUp()
{
    var repository = RepositoryMockHelper.MockRepository();
    controller = new HomeController(repository.Object);
}
And here is the MockRepository helper method for good measure:
internal static Mock<IRepository> MockRepository()
{
    var repository = new Mock<IRepository>();
    var posts = new InMemoryDbSet<Post>()
    {
        new Post {
            ...
        },
        ...
    };
    repository.Setup(db => db.Posts).Returns(posts);
    return repository;
}
... = code removed for the sake of brevity.
My intention is to use a new instance of InMemoryDbSet for each test. I thought that using the SetUp attribute would achieve this but clearly not.
When I run all of the tests, the results are inconsistent, as the tests do not appear to be isolated. One test will, for example, remove an element from the data store and assert that the count has been decremented, but, according to the whim of the test runner, another test might have incremented the count, causing both tests to fail.
Am I approaching these tests in the correct way? How can I resolve this issue?
The package you reference for your InMemoryDbSet uses a static backing data structure, so data persists across test runs. This is why you're seeing the inconsistent behavior. You can get around this by using another package (as you mention), or by passing a new HashSet into the constructor in every test so it doesn't use the static member.
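For example, the MockRepository helper from the question might be adjusted like this (assuming, as described above, a constructor overload of InMemoryDbSet<T> that accepts the backing collection):

internal static Mock<IRepository> MockRepository()
{
    // A fresh backing collection per call, so no data survives from one test to the next.
    var posts = new InMemoryDbSet<Post>(new HashSet<Post>())
    {
        new Post { /* ... */ }
    };

    var repository = new Mock<IRepository>();
    repository.Setup(db => db.Posts).Returns(posts);
    return repository;
}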
As to the rest of your question, I think you're approaching the testing well. The only thing is that since all of your tests have the same setup (in your SetUp method), they could influence each other. I.e., if you have a test that relies on having no Foo objects in the set and one that needs at least one, then you're either going to add one in SetUp and remove it in the test that doesn't need it, or vice versa. It could be clearer to have specific set-up procedures, as @BillSambrone mentioned.
As @PatrickQuirk pointed out, I think your problem is due to what InMemoryDbSet does under the covers.
Regarding the "Am I approaching this right ?" part :
If, as I suspect, your Repository exposes some kind of IDbSet, it's probably a leaky abstraction. The contract of an IDbSet is far too specific for what a typical Repository client wants to do with the data. Better to return an IEnumerable or some sort of read-only collection instead.
From what you describe, it seems that consumers of Posts manipulate it as a read-write collection. This isn't a typical Repository implementation - you'd normally have separate methods such as Get(), Add(), etc. (see the sketch below). The actual internal data collection is never exposed, which means you can easily stub or mock out just the individual Repository operations you need and not fear the kind of issues you had with your test data.
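In other words, something closer to this shape (the member names are illustrative, not taken from the question):

// A narrower repository contract: callers never see the underlying IDbSet.
public interface IPostRepository
{
    IEnumerable<Post> Get();
    Post GetById(int id);
    void Add(Post post);
    void Remove(int id);
}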
Whatever is in your [SetUp] method will be called for each test. This is probably behavior that you don't want.
You can either place the code you have in the [SetUp] method inside each individual test, or you can create a separate private method within your unit test class that will spin up a freshly mocked DbSet for you to keep things DRY.

How should I inspect the object graph created by a factory in a unit test

I have some code similar to the following:
public interface IMyClass
{
    MyEnum Value { get; }
    IMyItemCollection Items { get; }
}

public class MyConcreteClassFactory : MyClassFactoryBase
{
    public override IMyClass Create(MyEnum value)
    {
        var itemBuilder = new MyRemoteItemBuilder();
        var itemCollection = new MyLazyItemCollection(itemBuilder);

        return new MyClass(value, itemCollection);
    }
}
The 'real' code should only care about the fact that the factory returns an instance of IMyClass - not what the concrete implementation is. Still, I would like to test that the factory class does what it is supposed to - build a concrete object graph.
Should I write some tests that call the create method and inspect the properties of the returned object? This question seems to indicate so, but does that still apply if I need to inspect several layers of classes and properties to verify the object graph created by the factory? Wouldn't that lead to test code such as:
var created = objectUnderTest.Create(MyEnum.A);
var itemBuilder = (created.Items as MyLazyItemCollection).Builder;
Assert.IsInstanceOfType(itemBuilder, typeof(MyRemoteItemBuilder));
I'm not particularly fond of the downcast of created.Items, but as I see it that would be the only way to assert that the factory created the MyLazyItemCollection correctly, since not every IMyItemCollection can be expected to have a builder property... And this is only the second layer of the graph. I might need to dig even further into the dependencies of the MyRemoteItemBuilder to see if they were created correctly:
var service = ((created.Items as MyLazyItemCollection)
.Builder as MyRemoteItemBuilder).Service;
Assert.IsInstanceOfType(service, typeof(MyService));
Should I test my factory in this manner, accepting the unsightly nested downcasts - this is test code, after all - or should I pull the IMyItemCollection construction into another factory and add this as a dependency to my MyConcreteClassFactory (so I could inject it from the test code and assert that the value of created.Items was the instance created by my mocked factory). I expect the latter would quickly lead to an explosion in factory-factories and factory-factory-factories. After all, the user of the MyConcreteClassFactory shouldn't need to be bothered with the fact that she has to supply specific sub factories, should she..?
It of course depends on your needs, but my answer is NO.
You should never design your tests in a "test the implementation (or detail)" manner. That might work really nicely at the beginning, but after a while you will be in trouble: the implementation changes really fast, and with each change you are forced to correct many test cases.
Instead, you should "test behaviour". That basically means you abstract away the details (concrete classes) and your test cases cover valuable scenarios instead of details.
My approach is to create "test the implementation" cases while I'm doing TDD, but later on they have to be refactored into "test the behaviour" cases.
This is quite important if you care not only about the number of test cases, but about the quality of the real safety net you are building.

Is it possible to Assert a method has been called in VS2005 Unit Testing?

I'm writing some unit tests and I need to be able to Assert whether a method has been called based upon the setup data.
E.g.
String testValue = "1234";
MyClass target = new MyClass();
target.Value = testValue;
target.RunConversion();
// required Assertion
Assert.MethodCalled(MyClass.RunSpecificConversion);
Then there would be a second test where testValue is null and I would want to assert that the method has NOT been called.
Update for specific test scenario:
I have a class which represents information deserialized from XML. This class contains a number of pieces of information that I need to convert into my own classes.
The XML class is information about a person, including account info and a few phone numbers in different fields.
I have a method to create my Account class from the XML class, and methods to create the correct phone classes from the XML class. I have unit tests for each of these methods, but I'd like to test that when the account conversion is called, it runs the phone conversions, as the results are actually properties of the account class.
I know I could test the properties of the account class after feeding in the correct information; however, I have other nested properties with further nesting, and testing the entire tree could become very cumbersome. I guess I could just have each level test the next level below it, but ideally I'd like to make sure the correct conversion methods are being called and the code is not being duplicated in the implementation.
Without using a Mocking framework such as Moq, TypeMock, RhinoMocks that can verify your expectations, I would look at parsing the stack trace.
The MSDN documentation here should help.
Kindness,
Dan
You want to investigate the use of mocking frameworks.
Or you can create your own fake objects to record the calls. You'll need to create an interface that the class implements.
One method I have seen of creating fakes looks like this:
interface MyInterface
{
    void Method();
}

// Real class
class MyClass : MyInterface
{
    public void Method()
    {
        // real implementation here
    }
}

// Fake class for recording calls
class FakeMyClass : MyInterface
{
    public bool MethodCalled;

    public void Method()
    {
        this.MethodCalled = true;
    }
}
You then need to use some dependency injection to get this fake class used instead of the real one whilst running the tests.
Of course the issue with this is that the fake class will only record method calls but not actually do anything real. This won't always be applicable. It works okay in a Model-View-Presenter environment.
You can use a mocking framework like Rhino Mocks or moq. You can also use Isolation framework like Isolator to do that.
Another option is to inherit the class you want to verify against and raise a flag inside it when the method is called. Instead of asserting on the call directly in your test, assert against the flag (basically it's a hand-rolled mock; see the sketch below).
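A quick sketch of that hand-rolled approach (this assumes RunSpecificConversion is virtual, which the question doesn't state):

// Subclass the class under test and record the call instead of intercepting it with a framework.
public class MyClassSpy : MyClass
{
    public bool SpecificConversionCalled { get; private set; }

    protected override void RunSpecificConversion()
    {
        SpecificConversionCalled = true;
        base.RunSpecificConversion();
    }
}

// In the test:
var target = new MyClassSpy();
target.Value = "1234";
target.RunConversion();
Assert.IsTrue(target.SpecificConversionCalled);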
Example using Isolator:
Isolate.Verify.WasCalledWithAnyArguments(() => target.RunSpecificConversion());
Disclaimer - I work at Typemock
Another perspective - you might be unit testing at too low a level, which can cause your tests to be brittle.
Typically, it is better to test the business requirement rather than implementation details. e.g. you run a conversion with "1234" as the input, and the conversion should reverse the input, so you expect "4321" as the output.
Don't test that you expect "1234" to be converted by a specific sequence of steps. In the future you might change the implementation details, then the test will fail even if the business requirements are still being met.
Of course your test in the question could be an actual business requirement in which case it would be correct.
The other case when you would want to do this is if invoking the conversion in the real MyClass is not suitable for a unit test, i.e. requires a lot of setup, or is time intensive. Then you will need to mock or stub it out.
Reply to question edit:
Based on your scenario, I would still be inclined to test by checking the output rather than checking for whether specific methods were called.
You could have tests with different XML inputs to ensure that the different conversion methods have to be called in order to pass the tests.
And I wouldn't rely on tests to check whether there was duplicate code, but rather would refactor away duplicate code when I came across it, and just rely on the unit tests to ensure that the code still performs the same function after refactoring.
