Creating mock objects - C#

I created a simple GUI in WPF. I would like to show some data there retrieved from a database. But for now I only have the GUI and a few functions that do simple calculations on the received data. I know that my goal is to create mock objects that would generate some "false" data, but I have no idea how to start. Could you tell me how to create one of them? I could then create the rest analogously. Here is my class that does the calculations:
public Statistic showUsersPostCount(Options options)
{
    // Build the query from the selected options
    Query q = (Query)this.client.DoQuery();
    q.AddAuthor(options.Login);
    q.SetSinceDate(options.DateFrom);
    q.SetUntilDate(options.DateTo);
    q.AddTitleWord(options.Discussion);

    // Run the query and count the returned entities
    List<Entity> list = (List<Entity>)q.PerformQuery();

    Statistic statistic = new Statistic();
    statistic.UsersPostCount = list.Count;
    return statistic;
}
This function returns a simple statistic, but I do not have the code for the Query class. How can I mock an object of this class?

With the code provided... you can't. In order to mock it, you need some way of providing an alternative object for the dependency (which in this case is the .client object).
As it is, that method has only one input, options, and it has relatively little influence over the code shown.
Additionally, you claim to be showing an example of a class - but you're not - you're only showing a method named showUsersPostCount.

Assuming that your code is a method within a class that you want to mock, your first step would be creating an interface for the class to implement, if you have not already done so.
You can then pass your interface (rather than your concrete class) to a mocking framework (I've used Moq, but I assume NMock works very similarly). You can then fill in the mock data that you want your properties/methods to return through the mocking framework.
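For illustration, here is a minimal sketch of that approach with Moq, assuming you extract hypothetical IQuery/IQueryClient interfaces from your Query and client types and inject the client into the class that holds showUsersPostCount (all names below are invented):

// Hypothetical interfaces extracted from the concrete Query/client types.
public interface IQuery
{
    void AddAuthor(string login);
    void SetSinceDate(DateTime from);
    void SetUntilDate(DateTime to);
    void AddTitleWord(string word);
    List<Entity> PerformQuery();
}

public interface IQueryClient
{
    IQuery DoQuery();
}

// In a test (Moq), the mocks supply the "false" data:
var queryMock = new Mock<IQuery>();
queryMock.Setup(q => q.PerformQuery())
         .Returns(new List<Entity> { new Entity(), new Entity() });

var clientMock = new Mock<IQueryClient>();
clientMock.Setup(c => c.DoQuery()).Returns(queryMock.Object);

// StatisticsCalculator stands in for whatever class contains showUsersPostCount,
// refactored to take IQueryClient through its constructor.
var calculator = new StatisticsCalculator(clientMock.Object);
Statistic statistic = calculator.showUsersPostCount(new Options());

Assert.AreEqual(2, statistic.UsersPostCount);

Note that once the method works against IQueryClient/IQuery, the casts to Query and List<Entity> in the original method are no longer needed.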

As others have mentioned, your code isn't mockable as is... at least with standard mocking tools. There is always Moles, which prides itself on allowing you to "Mock the Unmockable". Moles would allow you to mock that method, as-is.
That said, if you have to resort to Moles to mock things that you control internally (the tool was really designed for mocking external dependencies such as databases and files and whatnot), you should probably consider making your design more flexible. A testable (testable without Moles, that is) design is more likely to be a good design, on the whole.

Related

What advantages do interfaces in a closed system bring except for making testing easier?

I am at a point where I need to mock certain classes in order to test some parts of my software. But I can, of course, only mock them using an interface. For example if I have something like this (using Moq):
[TestMethod]
public void AddTestTaskTest()
{
    TestTask assignable = null;
    var contextMock = new Mock<ApplicationDatabaseContext>();
    var appDbAdaptorMock = new Mock<ApplicationDatabaseAdaptor>(contextMock.Object);
    var dbOpMock = new Mock<TestTaskDbOperator>();

    void Action(TestTask t) => assignable = t;
    dbOpMock.Setup(p => p.Add(It.IsAny<TestTask>())).Callback<TestTask>(Action);
    appDbAdaptorMock.Setup(d => d.TestTasks).Returns(dbOpMock.Object);

    var db = new ApplicationDatabaseController(appDbAdaptorMock.Object);
    var task = CreateTestTaskObject(1);

    db.AddTestTask(task);

    Assert.AreEqual(task, assignable);
}
(The test doesn't make a lot of sense yet, but it's about the principle.)
Then I'd obviously have to create an interface IApplicationDatabaseAdaptor so that I could override properties and methods with the mock. I have seen lots of suggestions to do so on SO and in other places, with the reasoning that it enables decoupling.
What do they mean by "enables decoupling"? Why do so many people encourage the use of such interfaces and why is it not considered bad practice to create such interfaces? Especially if I know I am only going to be using them for mocking and nothing else at all.
Anything which is not encapsulated should have an interface defined, for a few reasons:
Unit testing the code using that class by creating mocks.
For test driven development where you would mostly define interfaces for sub components.
Interface segregation.
If you don't want any of these, then go ahead; you can use NSubstitute, which will allow you to mock concrete classes too.
Decoupling in a real-world scenario means:
You can delay the concrete implementation of components by mocking interfaces.
You can switch implementations without touching code outside the changed implementation.
Classes are "coupled" when one class needs another class. When you declare an interface instead, each class only "knows" about the abstraction, not the concrete other class. The main goal is to keep all classes as loosely coupled as possible across the whole application. It makes your code more maintainable and makes it easier and safer to change code in your application.
But we can think of other good examples of using interfaces:
Imagine you are writing an app which uses web services to save its data. Now the customer wants to work offline, because he does not have access to the internet while working with the app, only later when he is back in the office.
If you "hide" your communication with the database behind an interface, it is now easy for you to write a new implementation of the interface that talks to a local database, and you don't have to rewrite any other code: just create another implementation of the interface and use that.
Another example is that when you use interfaces you can, as you did, mock up data. Imagine you have a DatabaseProvider which reads data from the database. Now you have a class doing something with that data and you want to test whether your class is working correctly. How will you do that?
In the test class you can mock the data coming from the database and let your class use this to do its magic. That's only one big advantage of using interfaces for data access.
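A minimal sketch of that idea, assuming a hypothetical IDatabaseProvider interface and a consumer class (all names are invented for illustration):

using System.Collections.Generic;
using System.Linq;

public interface IDatabaseProvider
{
    IList<string> GetUserNames();
}

// Class under test: works only against the interface.
public class GreetingBuilder
{
    private readonly IDatabaseProvider provider;

    public GreetingBuilder(IDatabaseProvider provider)
    {
        this.provider = provider;
    }

    public IList<string> BuildGreetings() =>
        provider.GetUserNames().Select(n => $"Hello, {n}!").ToList();
}

// In a test, hand-rolled fake data replaces the real database.
public class FakeDatabaseProvider : IDatabaseProvider
{
    public IList<string> GetUserNames() => new List<string> { "Alice", "Bob" };
}

[TestMethod]
public void BuildGreetings_GreetsEveryUser()
{
    var builder = new GreetingBuilder(new FakeDatabaseProvider());
    var greetings = builder.BuildGreetings();

    Assert.AreEqual(2, greetings.Count);
    Assert.AreEqual("Hello, Alice!", greetings[0]);
}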

How to Mock an entity manager

I'm new to unit testing, and just getting started writing unit tests for an existing code base.
I would like to write a unit test for the following method of a class.
public int ProcessFileRowQueue()
{
    var fileRowsToProcess = this.EdiEntityManager.GetFileRowEntitiesToProcess();
    foreach (var fileRowEntity in fileRowsToProcess)
    {
        ProcessFileRow(fileRowEntity);
    }
    return fileRowsToProcess.Count;
}
The problem is with GetFileRowEntitiesToProcess(). The Entity Manager is a wrapper around the Entity Framework Context. I have searched on this and found one solution is to have a test database of a known state to test with. However, it seems to me that creating a few entities in the test code would yield more consistent test results.
But as it exists, I don't see a way to mock the Manager without some refactoring.
Is there a best practice for resolving this? I apologize for this question being a bit naïve, but I just want to make sure I go down the right road for the rest of the project.
I'm hearing two questions here:
Should I mock EdiEntityManager?
Yes. It's a dependency external to the code being tested, its creation and behavior are defined outside of that code. So for testing purposes a mock with known behavior should be injected.
How can I mock EdiEntityManager?
That we can't know from the code posted. It depends on what that type is, how it's created and supplied to that containing object, etc. To answer this part of the question, you should attempt to:
Create a mock with known behavior for the one method being invoked (GetFileRowEntitiesToProcess()).
Inject that mock into this containing object being tested.
For either of these efforts, discover what may prevent that from happening. Each such discovery is either going to involve learning a little bit more about the types and the mocks, or is going to reveal a need for refactoring to allow testability. The code posted doesn't reveal that.
As an example, suppose EdiEntityManager is created in the constructor:
public SomeObject()
{
    this.EdiEntityManager = new EntityManager();
}
That would be something that prevents mocking because it gets in the way of Step 2 above. Instead, the constructor would be refactored to require rather than instantiate:
public SomeObject(EntityManager ediEntityManager)
{
    this.EdiEntityManager = ediEntityManager;
}
That would allow a test to supply a mock, and conforms with the Dependency Inversion Principle.
Or perhaps EntityManager is too concrete a type and difficult to mock/inject; then the dependency should probably be typed as an interface which EntityManager implements. The worst-case scenario is that you don't control the type at all and simply need to define a wrapper object (which itself has a mockable interface) to enclose the EntityManager dependency.
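As an illustration of that last option, here is a minimal sketch with an invented IEdiEntityManager interface, an invented FileRowEntity type, and a thin wrapper around the real manager (the constructor is assumed to have been changed to accept the interface):

// Invented interface exposing only what the calling code needs.
public interface IEdiEntityManager
{
    IList<FileRowEntity> GetFileRowEntitiesToProcess();
}

// Thin wrapper around the real (hard-to-mock) EntityManager.
public class EdiEntityManagerWrapper : IEdiEntityManager
{
    private readonly EntityManager inner;

    public EdiEntityManagerWrapper(EntityManager inner)
    {
        this.inner = inner;
    }

    public IList<FileRowEntity> GetFileRowEntitiesToProcess() =>
        inner.GetFileRowEntitiesToProcess();
}

// In the test (Moq), the interface is trivial to mock:
var managerMock = new Mock<IEdiEntityManager>();
managerMock.Setup(m => m.GetFileRowEntitiesToProcess())
           .Returns(new List<FileRowEntity> { new FileRowEntity(), new FileRowEntity() });

// Assumes the containing object now takes IEdiEntityManager in its constructor.
// Note ProcessFileRow still runs for real on each returned entity.
var sut = new SomeObject(managerMock.Object);
Assert.AreEqual(2, sut.ProcessFileRowQueue());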

Unit test a method which is on high abstraction level

A similar topic has been discussed in The value of high level unit tests and mock objects
However, I'd like to describe a specific situation and ask your opinion about how I should write a unit test.
I am developing an ordinary 3-tier application, which uses Entity Framework. Above EF, I have two layers:
Repositories: They directly access the EF ObjectContext and do all the CRUD work (actually, these classes are generated with a T4 template). Each Repository class has an appropriate interface.
Managers: They implement the higher level business logic, they do not access directly the ObjectContext, rather use an appropriate Repository. Managers do not know the concrete Repository-implementation, only the interface (I use dependency injection, and mocks in the unit test).
Without further description, here is the class I'd like to write unit tests for:
public class PersonManager
{
    private IPersonRepository personRepository; // This is injected.

    // Constructor for injection is here.

    public void ComplexMethod()
    {
        // High level business logic
        bool result = this.SimpleMethod1();
        if (result)
            this.SimpleMethod2(1);
        else
            this.SimpleMethod2(2);
    }

    public bool SimpleMethod1()
    {
        // Doing some low-level work with the repository.
    }

    public void SimpleMethod2(int param)
    {
        // Doing some low-level work with the repository.
    }
}
It is really easy to unit test SimpleMethod1 and SimpleMethod2 by instantiating the PersonManager with a mock of the PersonRepository.
But I can not find any convenient way to unit test ComplexMethod.
Do you have any recommendation about how should I do that? Or that should not be unit tested at all? Maybe I should not use the this reference for the method calls in ComplexMethod, rather access the PersonManager itself via an interface, and replace that with a mock too?
Thanks in advance for any advice.
Guillaume's answer is good (+1), but I wanted to give an additional observation. What I see in the code you've posted is the basis for a very common question from people trying to figure out (or argue against) TDD, which is:
"How/why should I test ComplexMethod() since it depends on SimpleMethod1() and SimpleMethod2(), which are already tested and have their own behavior that I'd have to account for in tests of ComplexMethod()? I'd have to basically duplicate all the tests of SimpleMethod1() and SimpleMethod2() in order to fully test ComplexMethod(), and that's just stupid."
Shortly after, they usually find out about partial mocks. Using partial mocks, you could mock SimpleMethod1() and SimpleMethod2() and then test ComplexMethod() using normal mock mechanisms. "Sounds great," they think, "This will solve my problem perfectly!". A good mock framework should strongly discourage using partial mocks in this way, though, because the reality is:
Your tests are telling you about a design problem.
Specifically, they're telling you that you've mixed concerns and/or abstraction levels in one class. They're telling you that SimpleMethod1() and SimpleMethod2() should be extracted to another class which this class depends on. No matter how many times I see this scenario, and no matter how vehemently the developer argues, the tests are proven right in the end 100% of the time.
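A hedged sketch of what that extraction could look like (the IPersonOperations interface and the Moq test at the end are invented for illustration, not taken from the question):

// Low-level repository work extracted behind an invented interface.
public interface IPersonOperations
{
    bool SimpleMethod1();
    void SimpleMethod2(int param);
}

// PersonManager keeps only the high-level business logic.
public class PersonManager
{
    private readonly IPersonOperations operations;

    public PersonManager(IPersonOperations operations)
    {
        this.operations = operations;
    }

    public void ComplexMethod()
    {
        if (operations.SimpleMethod1())
            operations.SimpleMethod2(1);
        else
            operations.SimpleMethod2(2);
    }
}

// Testing ComplexMethod now needs only a mock of IPersonOperations (Moq):
var opsMock = new Mock<IPersonOperations>();
opsMock.Setup(o => o.SimpleMethod1()).Returns(true);

new PersonManager(opsMock.Object).ComplexMethod();

opsMock.Verify(o => o.SimpleMethod2(1), Times.Once());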
I don't see what the problem is. You can test your complex method while mocking the repository, there is no problem.
You would need two unit tests, each one using the same sequence of expectations and executions that you have in your tests of SimpleMethod1 (I assume you already have two unit tests for SimpleMethod1, one for a return of "true", one for "false"), and also the same expectations that you have for your test of SimpleMethod2 with a fixed parameter of 1 or 2, respectively.
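For illustration, one of those two tests might look roughly like this with Moq, where the IPersonRepository members (GetAll, Save, Person) are invented stand-ins for whatever SimpleMethod1 and SimpleMethod2 actually call:

[TestMethod]
public void ComplexMethod_WhenSimpleMethod1IsTrue_TakesPathOne()
{
    var repoMock = new Mock<IPersonRepository>();

    // Hypothetical setup that makes SimpleMethod1 return true,
    // i.e. the same expectations as your existing SimpleMethod1 "true" test.
    repoMock.Setup(r => r.GetAll()).Returns(new List<Person> { new Person() });

    var manager = new PersonManager(repoMock.Object);

    manager.ComplexMethod();

    // The same expectation you already verify in the SimpleMethod2(1) test, e.g.:
    repoMock.Verify(r => r.Save(It.IsAny<Person>()), Times.Once());
}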
Granted, there would be some "duplication" in your testing class, but that's not a problem.
Also note that your tests for SimpleMethod2 should not make any assumption about the parameter passed: in "real life" it can only be called with 1 or 2 as a parameter (and that's what your unit test for ComplexMethod would use), but your unit tests for SimpleMethod2 should test it whatever the parameter is: any int.
And finally, if ComplexMethod is the ONLY way to call SimpleMethod1 and/or SimpleMethod2, you should consider making these private, and have only unit-tests for ComplexMethod.
Does that make sense?

How can I use unit testing when classes depend on one another or external data?

I'd like to start using unit tests, but I'm having a hard time understanding how I can use them with my current project.
My current project is an application which collects files into a 'Catalog'. A Catalog can then extract information from the files it contains such as thumbnails and other properties. Users can also tag the files with other custom meta data such as "Author" and "Notes". It could easily be compared to a photo album application like Picasa, or Adobe Lightroom.
I've separated the code to create and manipulate a Catalog into a separate DLL which I'd now like to test. However, the majority of my classes are never meant to be instantiated on their own. Instead everything happens through my Catalog class. For example there's no way I can test my File class on its own, as a File is only accessible through a Catalog.
As an alternative to unit tests, I think it would make more sense for me to write a test program that runs through a series of actions including creating a catalog, re-opening the catalog that was created, and manipulating the contents of the catalog. See the code below.
//NOTE: The real version would have code to log the results and any exceptions thrown

//input data
string testCatalogALocation = @"C:\TestCatalogA";
string testCatalogBLocation = @"C:\TestCatalogB";
string testFileLocation = @"C:\testfile.jpg";
string testFileName = System.IO.Path.GetFileName(testFileLocation);

//Test creating catalogs
Catalog catAtemp = new Catalog(testCatalogALocation);
Catalog catBtemp = new Catalog(testCatalogBLocation);

//test opening catalogs
Catalog catA = Catalog.OpenCatalog(testCatalogALocation);
Catalog catB = Catalog.OpenCatalog(testCatalogBLocation);

using (FileStream fs = new FileStream(testFileLocation, FileMode.Open))
{
    //test importing a file
    catA.ImportFile(testFileName, fs);
}

//test retrieving a file
File testFile = catA.GetFile(System.IO.Path.GetFileName(testFileLocation));

//test copying between catalogs
catB.CopyFileTo(testFile);

//Clean Up after test (recursive delete, since the catalogs contain files)
System.IO.Directory.Delete(testCatalogALocation, true);
System.IO.Directory.Delete(testCatalogBLocation, true);
First, am I missing something? Is there some way to unit test a program like this? Second, is there some way to create a procedural-type test like the code above but still take advantage of the testing tools built into Visual Studio? Will a "Generic Test" in VS2010 allow me to do this?
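For the second part of the question, one hedged option is to wrap those same procedural steps in an ordinary MSTest class so the Visual Studio test runner can execute and report them. The Catalog/File calls are taken from the snippet above; as the answers below note, this is really an integration test rather than a unit test:

[TestClass]
public class CatalogIntegrationTests
{
    private const string CatalogALocation = @"C:\TestCatalogA";
    private const string CatalogBLocation = @"C:\TestCatalogB";
    private const string TestFileLocation = @"C:\testfile.jpg";

    [TestCleanup]
    public void CleanUp()
    {
        // Remove whatever the test created so each run starts from a clean state.
        if (System.IO.Directory.Exists(CatalogALocation))
            System.IO.Directory.Delete(CatalogALocation, true);
        if (System.IO.Directory.Exists(CatalogBLocation))
            System.IO.Directory.Delete(CatalogBLocation, true);
    }

    [TestMethod]
    public void ImportedFile_CanBeRetrievedAndCopiedBetweenCatalogs()
    {
        var catA = new Catalog(CatalogALocation);
        var catB = new Catalog(CatalogBLocation);

        string fileName = System.IO.Path.GetFileName(TestFileLocation);
        using (var fs = new System.IO.FileStream(TestFileLocation, System.IO.FileMode.Open))
        {
            catA.ImportFile(fileName, fs);
        }

        File testFile = catA.GetFile(fileName);
        Assert.IsNotNull(testFile);

        catB.CopyFileTo(testFile);
    }
}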
Update
Thanks for all the responses everyone. Actually my classes do in fact inherit from a series of interfaces. Here's a class diagram for anyone that is interested. Actually I have more interfaces than I have classes. I just left out the interfaces from my example for the sake of simplicity.
Thanks for all the suggestions to use mocking. I'd heard the term in the past, but never really understood what a "mock" was until now. I understand how I could create a mock of my IFile interface, which represents a single file in a catalog. I also understand how I could create a mock version of my ICatalog interface to test how two catalogs interact.
Yet I don't understand how I can test my concrete ICatalog implementations, as they are strongly tied to their back-end data sources. Actually the whole purpose of my Catalog classes is to read, write, and manipulate their external data/resources.
You ought to read about the SOLID code principles. In particular, the 'D' in SOLID stands for the Dependency Inversion Principle: the class you're trying to test doesn't depend on other concrete classes and external implementations, but instead depends on interfaces and abstractions. You rely on an IoC (Inversion of Control) container (such as Unity, Ninject, or Castle Windsor) to dynamically inject the concrete dependency at runtime, but during unit testing you inject a mock/stub instead.
For instance consider following class:
public class ComplexAlgorithm
{
    protected DatabaseAccessor _data;

    public ComplexAlgorithm(DatabaseAccessor dataAccessor)
    {
        _data = dataAccessor;
    }

    public int RunAlgorithm()
    {
        // RunAlgorithm needs to call methods from DatabaseAccessor
    }
}
RunAlgorithm() method needs to hit the database (via DatabaseAccessor) making it difficult to test. So instead we change DatabaseAccessor into an interface.
public class ComplexAlgorithm
{
    protected IDatabaseAccessor _data;

    public ComplexAlgorithm(IDatabaseAccessor dataAccessor)
    {
        _data = dataAccessor;
    }

    // rest of class (snip)
}
Now ComplexAlgorithm depends on an interface IDatabaseAccessor which can easily be mocked for when we need to Unit test ComplexAlgorithm in isolation. For instance:
public class MyFakeDataAccessor : IDatabaseAccessor
{
    public IList<Thing> GetThings()
    {
        // Return a fake/pretend list of things for testing
        return new List<Thing>()
        {
            new Thing("Thing 1"),
            new Thing("Thing 2"),
            new Thing("Thing 3"),
            new Thing("Thing 4")
        };
    }

    // Other methods (snip)
}
[Test]
public void Should_Return_8_With_Four_Things_In_Database()
{
    // Arrange
    IDatabaseAccessor fakeData = new MyFakeDataAccessor();
    ComplexAlgorithm algorithm = new ComplexAlgorithm(fakeData);
    int expectedValue = 8;

    // Act
    int actualValue = algorithm.RunAlgorithm();

    // Assert
    Assert.AreEqual(expectedValue, actualValue);
}
We're essentially 'decoupling' the two classes from each other. Decoupling is another important software engineering principle for writing more maintainable and robust code.
This is really the tip of the tip of the iceberg as far as Dependency Injection, SOLID and Decoupling go, but it's what you need in order to effectively Unit test your code.
Here is a simple algorithm that can help get you started. There are other techniques to decouple code, but this can often get you pretty far, particularly if your code is not too large and deeply entrenched.
Identify the locations where you depend on external data/resources and determine whether you have classes that isolate each dependency.
If necessary, refactor to achieve the necessary insulation. This is the most challenging part to do safely, so focus on the lowest-risk changes first.
Extract interfaces for the classes that isolate external data.
When you construct your classes, pass in the external dependencies as interfaces rather than having the class instantiate them itself.
Create test implementations of your interfaces that don't depend on the external resources. This is also where you can add 'sensing' code for your tests to make sure the appropriate calls are being used. Mocking frameworks can be very helpful here, but it can be a good exercise to create the stub classes manually for a simple project, as it gives you a sense of what your test classes are doing. Manual stub classes typically set public properties to indicate when/how methods are called and have public properties to indicate how particular calls should behave (a sketch of such a stub follows this list).
Write tests that call methods on your classes, using the stubbed dependencies to sense whether the class is doing the right things in different cases. An easy way to start, if you already have functional code written, is to map out the different pathways and write tests that cover the different cases, asserting the behavior that currently occurs. These are known as characterization tests and they can give you the confidence to start refactoring your code, since now you know you're at least not changing the behavior you've already established.
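Here is a minimal sketch of such a hand-written sensing stub, under the assumption of an invented IThumbnailExtractor dependency that a Catalog-like class might use (all names are hypothetical):

using System.Collections.Generic;

// Invented abstraction the class under test could depend on instead of hitting the file system.
public interface IThumbnailExtractor
{
    byte[] ExtractThumbnail(string filePath);
}

// Hand-rolled sensing stub: records how it was called and lets the test control what it returns.
public class StubThumbnailExtractor : IThumbnailExtractor
{
    // 'Sensing' member the test inspects afterwards.
    public List<string> ExtractedPaths { get; } = new List<string>();

    // Behavior the test configures beforehand.
    public byte[] ThumbnailToReturn { get; set; } = new byte[] { 0x01, 0x02 };

    public byte[] ExtractThumbnail(string filePath)
    {
        ExtractedPaths.Add(filePath);
        return ThumbnailToReturn;
    }
}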
Best of luck. Writing good unit tests requires a change of perspective, which will develop naturally as you work to identify dependencies and create the necessary isolation for testing. At first, the code will feel uglier, with additional layers of indirection that were previously unnecessary, but as you learn various isolation techniques and refactor (which you can now do more easily, with tests to support it), you may find that things actually become cleaner and easier to understand.
This is a pure case where Dependency Injection plays a vital role.
As Shady suggests, read about mocking and stubbing. To achieve this, you should consider using a dependency injector (such as Unity in .NET).
Also read about Dependency Injection here:
http://martinfowler.com/articles/injection.html
the majority of my classes are never meant to be instantiated on their own
This is where the D - the Design D - comes into TDD. It's bad design to have classes that are tightly coupled. That badness manifests itself immediately when you try to unit test such a class - and if you start with unit tests, you'll never find yourself in this situation. Writing testable code compels us to better design.
I'm sorry; this isn't an answer to your question, but I see others have already mentioned mocking and DI, and those answers are fine. But you put the TDD tag on this question, and this is the TDD answer to your question: don't put yourself in the situation of tightly coupled classes.
What you have now is Legacy Code. That is: code that has been implemented without tests. For your initial tests I would definitely test through the Catalog class until you can break all those dependencies. So your first set of tests will be integration/acceptance tests.
If you don't expect for any behavior to change, then leave it at that, but if you do make a change, I suggest that you TDD the change and build up unit tests with the changes.

Is it possible to Assert a method has been called in VS2005 Unit Testing?

I'm writing some unit tests and I need to be able to Assert whether a method has been called based upon the setup data.
E.g.
String testValue = "1234";
MyClass target = new MyClass();
target.Value = testValue;
target.RunConversion();
// required Assertion
Assert.MethodCalled(MyClass.RunSpecificConversion);
Then there would be a second test where testValue is null and I would want to assert that the method has NOT been called.
Update for specific test scenario:
I have a class which represents information deserialized from XML. This class contains a number of pieces of information that I need to convert into my own classes.
The XML class is information about a person, including account info and a few phone numbers in different fields.
I have a method to create my Account class from the XML class, and methods to create the correct phone classes from the XML class. I have unit tests for each of these methods, but I'd like to test that when the account conversion is called, it runs the phone conversions, as the results are actually properties of the account class.
I know I could test the properties of the account class after feeding in the correct information, however I have other nested properties that have further nesting, and testing the entire tree could become very cumbersome. I guess I could just have each level test the next level below it, but ideally I'd like to make sure the correct conversion methods are being called and the code is not being duplicated in the implementation.
Without using a mocking framework such as Moq, TypeMock, or Rhino Mocks that can verify your expectations, I would look at parsing the stack trace.
The MSDN documentation here should help.
Kindness,
Dan
You want to investigate the use of mocking frameworks.
Or you can create your own fake objects to record the calls. You'll need to create an interface that the class implements.
One method I have seen of creating fakes looks like this:
interface MyInterface
{
    void Method();
}

// Real class
class MyClass : MyInterface
{
    public void Method()
    {
        // Real implementation goes here.
    }
}

// Fake class for recording calls
class FakeMyClass : MyInterface
{
    public bool MethodCalled;

    public void Method()
    {
        this.MethodCalled = true;
    }
}
You then need to use some dependency injection to get this fake class used instead of the real one whilst running the tests.
Of course the issue with this is that the fake class will only record method calls but not actually do anything real. This won't always be applicable. It works okay in a Model-View-Presenter environment.
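A short usage sketch, assuming a hypothetical consumer class that receives the interface through its constructor (the Consumer name and DoWork method are invented):

// Hypothetical consumer that depends on the interface rather than the concrete class.
class Consumer
{
    private readonly MyInterface dependency;

    public Consumer(MyInterface dependency)
    {
        this.dependency = dependency;
    }

    public void DoWork() => dependency.Method();
}

[TestMethod]
public void DoWork_CallsMethodOnDependency()
{
    var fake = new FakeMyClass();
    var consumer = new Consumer(fake);

    consumer.DoWork();

    Assert.IsTrue(fake.MethodCalled);
}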
You can use a mocking framework like Rhino Mocks or Moq. You can also use an isolation framework like Isolator to do that.
Another option is to inherit from the class you want to verify against and raise a flag inside it when the method is called. Instead of the assert in your test, assert against the flag. (Basically it's a hand-rolled mock.)
Example using Isolator:
Isolate.Verify.WasCalledWithAnyArguments(() => target.RunSpecificConversion());
Disclaimer - I work at Typemock
Another perspective - you might be unit testing at too low a level which can cause your tests to be brittle.
Typically, it is better to test the business requirement rather than implementation details. E.g., you run a conversion with "1234" as the input, and the conversion should reverse the input, so you expect "4321" as the output.
Don't test that you expect "1234" to be converted by a specific sequence of steps. In the future you might change the implementation details, then the test will fail even if the business requirements are still being met.
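To make that concrete, a hedged sketch of such an output-focused test, reusing the MyClass from the question and the reversal example above (where the converted result ends up is an assumption; adjust the assert to wherever your conversion actually stores it):

[TestMethod]
public void RunConversion_ReversesTheInput()
{
    var target = new MyClass();
    target.Value = "1234";

    target.RunConversion();

    // Assert on the observable result the business cares about,
    // not on which internal methods were called along the way.
    Assert.AreEqual("4321", target.Value); // assumption: result is written back to Value
}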
Of course your test in the question could be an actual business requirement in which case it would be correct.
The other case when you would want to do this is if invoking the conversion in the real MyClass is not suitable for a unit test, i.e. requires a lot of setup, or is time intensive. Then you will need to mock or stub it out.
Reply to question edit:
Based on your scenario, I would still be inclined to test by checking the output rather than checking for whether specific methods were called.
You could have tests with different XML inputs to ensure that the different conversion methods have to be called in order to pass the tests.
And I wouldn't rely on tests to check whether there was duplicate code, but rather would refactor away duplicate code when I came across it, and just rely on the unit tests to ensure that the code still performs the same function after refactoring.
