Assert that "nothing happened" when writing unit test - c#

When writing unit tests, is there a simple way to ensure that nothing unexpected happened?
Since the list of possible side effects is infinite, adding tons of Asserts to ensure that nothing changed at every step seems vain, and it obfuscates the purpose of the test.
I might have missed some framework feature or good practice.
I'm using C#7, .net 4.6, MSTest V1.
Edit:
The simplest example would be testing the setter of a viewmodel, where two things should happen: the value should change and the PropertyChanged event should be raised.
These two things are easy to check, but now I need to make sure that other properties' values didn't change, no other event was raised, the system clipboard was not touched...

You're missing the point of unit tests. They are "proofs". You cannot logically prove a negative assertion, so there's no point in even trying.
The assertions in each unit test should prove that the desired behavior was accomplished. That's all.
If we reduce the question to absurdity, every unit test would require that we assert that the function under test didn't start a thermonuclear war.
Unit tests are not the only kind of tests you'll need to perform. There are functional tests, integration tests, usability tests, etc. Each one has its own focus. For unit tests, the focus is proving the expected behavior of a single function. So if the function is supposed to accomplish 2 things, just assert that each of those 2 things happened, and move on.
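For the viewmodel example from the edit, a minimal sketch of such a test (PersonViewModel is a hypothetical INotifyPropertyChanged implementation; recording every raised event also covers the "no other event was raised" concern without an open-ended list of asserts):
[TestMethod]
public void NameSetter_ChangesValueAndRaisesPropertyChanged()
{
    // Arrange: subscribe and record every event the viewmodel raises.
    var vm = new PersonViewModel();
    var raised = new List<string>();
    vm.PropertyChanged += (sender, e) => raised.Add(e.PropertyName);

    // Act
    vm.Name = "Alice";

    // Assert: the value changed, and "Name" was the only event raised.
    Assert.AreEqual("Alice", vm.Name);
    CollectionAssert.AreEqual(new[] { "Name" }, raised);
}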

One way to ensure that nothing 'bad' or unexpected happens is to follow good practices of dependency injection and mocking:
[Test]
public void TestSomething()
{
    // Arrange
    var barMock = MockRepository.GenerateStrictMock<IBar>();
    var foo = new Foo(barMock);

    // Act
    foo.DoSomething();

    // Assert
    ...
}
In the example above, if Foo accidentally touches Bar, the strict mock will throw an exception and the test fails. Such an approach might not be applicable in all test cases, but it serves as a good addition to other practices.
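If you use Moq rather than Rhino Mocks, the same idea is available through MockBehavior.Strict; a minimal sketch, reusing the hypothetical IBar and Foo from above:
var barMock = new Mock<IBar>(MockBehavior.Strict);
var foo = new Foo(barMock.Object);

// Any member of IBar that was not explicitly set up will throw,
// so the test fails if Foo touches the dependency unexpectedly.
foo.DoSomething();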

An addition regarding your edit:
In Test Driven Development you write only the code that will pass the test, and nothing more. Furthermore, you want to choose the simplest possible solution to accomplish this goal.
That said, you will most likely start with a failing unit test. In your situation, you would not get a failing unit test at the beginning.
If you push it to the limit, checking every outcome would mean checking that format C:\ is not called in your application. You might want to have a look at design principles like the KISS principle (Keep it simple, stupid).

If the scope of "check that nothing else happened" is to ensure the state of the model didn't change, which appears to be the case from the question, write a helper function that takes the model before your event and the model after, and compares them. Let it return the properties that changed; you can then assert that only the properties you intended to update appear in the returned list. This sort of helper is portable, maintainable, and reusable; a sketch follows.
Checking model state is a valid application of a unit test.
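A minimal sketch of such a helper, assuming you keep a snapshot (copy) of the model from before the action; it uses reflection to report which public properties now differ:
// Returns the names of public, readable properties whose values differ.
public static List<string> ChangedProperties<T>(T before, T after)
{
    return typeof(T).GetProperties()
        .Where(p => p.CanRead && p.GetIndexParameters().Length == 0)
        .Where(p => !Equals(p.GetValue(before), p.GetValue(after)))
        .Select(p => p.Name)
        .ToList();
}

// In a test: assert that only the intended property changed.
// CollectionAssert.AreEqual(new[] { "Name" }, ChangedProperties(snapshot, viewModel));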

This is only possible in referentially transparent languages such as Safe Haskell.

Related

Can a Test Class have properties?

I'm creating a TestClass for unit testing an application, and one thing I'd like to do is run a test method, check that the method has run correctly, store a value in a class property within the test class based on the outcome, and then use that value in a later method.
I've tried doing this and found that as soon as the compiler moves from one method to another, all the properties I have set are wiped clean. I have checked with breakpoints: at the end of the first method the value is in the property, and then at the beginning of the second method that same property is null.
I looked this up and no one else seems to be attempting the same thing, so is it possible to share a value between methods, or am I taking the wrong approach?
Thanks in advance.
You are taking the wrong approach.
Unit tests, by definition, should be completely self-contained and deterministic. They should not depend on one another.
You should be able to refactor the repetitive portion of your first unit test out into a helper method which can be invoked by your other unit test. The work will be done twice, but unit tests should be really fast, so the overhead should be minimal.
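A sketch of what that refactoring might look like (Order and its members are hypothetical); each test rebuilds its own state through the shared helper instead of reading a property left over from another test:
private Order CreateProcessedOrder()
{
    // The work the "first" test performs, repeated cheaply per test.
    var order = new Order();
    order.Process();
    return order;
}

[TestMethod]
public void Process_SetsStatusToProcessed()
{
    Assert.AreEqual(OrderStatus.Processed, CreateProcessedOrder().Status);
}

[TestMethod]
public void Ship_AfterProcessing_MarksOrderAsShipped()
{
    var order = CreateProcessedOrder(); // independent of any other test
    order.Ship();
    Assert.IsTrue(order.IsShipped);
}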
It's not the compiler - it's the test runner which will (potentially) create a new instance for each test.
Tests should generally be independent - even if you could find a way of getting this to work, I would avoid doing so. Design your way around it as best you can.
This smells like bad practice to me, no matter what testing framework you're using. All automated tests (let alone formal unit tests) should be independent of one another. A static field/property might work, but I would recommend refactoring your tests first.

MVC - Unit testing the wrong things?

Whilst practicing some TDD at work for an ASP.NET MVC project, I ran into a number of scenarios where I was writing tests to ensure that particular actions returned the correct views or had particular attributes on them ([ChildActionOnly] etc.). (In fact, I found a number of interesting posts here on SO about useful extension methods to help achieve this.)
When I was first introduced to the concepts of unit testing and TDD on a course a few years ago, the emphasis was strongly on tests focusing on the logic behind user-desired features and functionality: the core project 'requirements', if you will.
My question is: if this is the case, are menial tests that check for the correct view file being rendered, or for an action having a particular attribute, not really encompassing what the unit testing methodology is all about? Am I writing tests for the wrong reasons (i.e. simply protecting myself and other colleagues from making a refactoring mistake), or are these valid cases of valuable unit tests?
If a handler method could return one of two (or more) views depending on some logic then a unit test that asserted the correct view would be useful. Same goes for a handler method that inserted particular attributes depending on the logic.
Am I writing tests for the wrong reasons (i.e. simply protecting other colleagues from making a refactoring mistake) or are these valid cases of valuable unit tests?
Catching regression errors is one of the benefits of unit tests, and it is especially useful when refactoring. If you inadvertently changed the view returned while doing some refactoring, it would be very useful to catch that early, rather than waiting for a test that only runs when the application is running.
If your action returns different views (or action results) depending on some logic, write a test.
If you always return View(), then don't.
If you return a view with a name different from your action's name, then you could argue it would be a good idea to write a test.
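For instance, a sketch of a test for a branch-dependent view (ProductController and FakeProductRepository are hypothetical):
[TestMethod]
public void Details_WhenProductMissing_ReturnsNotFoundView()
{
    var controller = new ProductController(new FakeProductRepository());

    var result = controller.Details(999) as ViewResult;

    Assert.IsNotNull(result);
    Assert.AreEqual("NotFound", result.ViewName); // the branch we care about
}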
It depends. To use one of your examples: checking whether an action has a particular attribute is okay, but it's better to write a test that verifies the behavior is functioning as expected, one that would fail if the attribute was missing.
That said, ultimately your tests are a safety net. If it has a reasonable chance of preventing a bug from slipping in at some point in the future, it's doing its job.
If it's a simple test that has low overhead and a low chance of breaking arbitrarily in the future, go for it. I'd rather have too many tests than too few.
Indeed, your tests should be testing your logic. That should not go away. However, ideally you should write tests for anything that can go wrong. For instance, ensuring that all methods that need to be secured have a proper Authorize attribute. This is a security test.
Ultimately, you decide what's useful for you to test.
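A sketch of such a security test, checking the attribute via reflection (AccountController.Delete is a hypothetical action):
[TestMethod]
public void Delete_RequiresAuthorization()
{
    var method = typeof(AccountController).GetMethod("Delete");
    var attributes = method.GetCustomAttributes(typeof(AuthorizeAttribute), true);

    Assert.IsTrue(attributes.Length > 0, "Delete must be protected by [Authorize].");
}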

Why this particular test could be useful?

I recently joined a company where they use TDD, but it's still not clear for me, for example, we have a test:
[TestMethod]
public void ShouldReturnGameIsRunning()
{
    var game = new Mock<IGame>();
    game.Setup(g => g.IsRunning).Returns(true);

    Assert.IsTrue(game.Object.IsRunning);
}
What's the purpose of it? As far as I understand, this is testing nothing! I say that because it's mocking an interface and telling it to return true for IsRunning; it can never return a different value...
I'm probably thinking wrong, as everybody says it's a good practice etc etc... or is this test wrong?
This test is exercising an interface and verifying that the interface exposes a Boolean getter named IsRunning.
This test was probably written before the interface existed, and probably well before any concrete implementations of IGame existed either. I would imagine, if the project is of any maturity, there are other tests that exercise the actual implementations and verify behavior at that level, and probably use mocking in the more traditional isolationist context rather than stubbing theoretical shapes.
The value of these kinds of tests is that it forces one to think about how a shape or object is going to be used. Something along the lines of "I don't know exactly what a game is going to do yet, but I do know that I'm going to want to check if it's running...". So this style of testing is more to drive design decisions, as well as validate behavior.
Not the most valuable test on the planet, but serves its purpose in my opinion.
Edit: I was on the train last night when I was typing this, but I wanted to address Andrew Whitaker comment directly in the answer.
One could argue that the signature in the interface itself is enough of an enforcement of the desired contract. However, this really depends on how you treat your interfaces and ultimately how you validate the system that you are trying to test.
There may be tangible value in explicitly stating this functionality as a test. You are explicitly verifying that this is a desired functionality for an IGame.
Let's take a use case: as a designer you know that you want to expose a public getter for IsRunning, so you add it to the interface. But let's say another developer sees the property, yet doesn't see any usages of it anywhere else in the code; they might assume it is eligible for deletion (or code-rot removal, which is normally a good thing) without harm. The existence of this test, however, explicitly states that this property should exist.
Without this test, a developer could modify the interface, drop the implementation, and the project would still compile and run seemingly correctly. It might not be until much later that someone noticed the interface had been changed.
With this test, even if it appears as if no one is using it, your test suite will fail. The self-documenting nature of the test, ShouldReturnGameIsRunning(), tells you what was intended, and at the very least it should spawn a conversation about whether IGame should really expose an IsRunning getter.
The test is invalid. Moq is a framework used to "mock" objects used by the method under test. If you were testing IsRunning, you might use Moq to provide a certain implementation for the methods that IsRunning depends on, for example to cause certain execution paths to execute.
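A sketch of that more typical usage: mock the dependency, but test a real class (Game and IClock are hypothetical here):
[TestMethod]
public void IsRunning_IsTrue_WhileClockIsActive()
{
    var clock = new Mock<IClock>();
    clock.Setup(c => c.IsActive).Returns(true);

    var game = new Game(clock.Object); // the real implementation under test

    Assert.IsTrue(game.IsRunning);     // exercises Game's actual logic
}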
People can tell you all day that testing first is better; you won't know that it actually is until you start doing it yourself and notice that you have fewer bugs in code you wrote tests for first, which will save you and your co-workers time and probably raise the quality of your code.
It could be that the test was written before the code, which is good practice.
It might equally be that this is a pointless test created by a tool.
Another possibility is that the code was written by somebody who, at the time, didn't understand how to use that particular mocking framework - and wrote one or more simple tests to learn. Kent Beck talked about that sort of "learning test" in his original TDD book.
This test could have had a point in the past. It could have been testing a concrete class. It could have been testing an abstract one, as it sometimes makes sense to mock an abstract class that you want to test. I can see how a (less informed, like myself) follower of the TDD way of doing things could have left a small mess like this. Cleanup of dead code and dead test code does not really fit into the natural workflow of red, green, refactor (since it doesn't break any tests!).
However, history is not justification for a test that does nothing but confuse the reader. So, be a good boyscout and remove it.
In my honest opinion this is just rubbish; it only proves that Moq works as expected.

Why should I use a mocking framework instead of fakes?

There are some other variations of this question here at SO, but please read the entire question.
By just using fakes, we look at the constructor to see what dependencies a class has and then create fakes for them accordingly.
Then we write a test for a method by just looking at its contract (method signature). If we can't figure out how to test the method by doing so, shouldn't we rather try to refactor the method (most likely break it up into smaller pieces) than look inside it to figure out how we should test it? In other words, this also gives us a form of quality control.
Aren't mocks a bad thing, since they require us to look inside the method that we are going to test, and therefore skip the whole "look at the signature as a critic" step?
Update to answer the comment
Say a stub then (just a dummy class providing the requested objects).
A framework like Moq makes sure that Method A gets called with the arguments X and Y. To be able to set up those checks, one needs to look inside the tested method.
Isn't the important thing (the method contract) forgotten when setting up all those checks, as the focus shifts from the method signature/contract to looking inside the method and creating the checks?
Isn't it better to try to test the method by just looking at the contract? After all, when we use the method, we'll just look at the contract. So it's quite important that its contract is easy to follow and understand.
This is a bit of a grey area and I think there is some overlap. On the whole I prefer using mock objects.
I guess some of it depends on how you go about testing code: test first or code first?
If you follow a test driven design plan with objects implementing interfaces then you effectively produce a mock object as you go.
Each test treats the tested object / method as a black box.
It focuses you onto writing simpler method code in that you know what answer you want.
But above all else it allows you to have runtime code that uses mock objects for unwritten areas of the code.
On the macro level it also allows for major areas of the code to be switched at runtime to use mock objects e.g. a mock data access layer rather than one with actual database access.
Fakes are just stupid dummy objects. Mocks enable you to verify that the control flow of the unit is correct (e.g. that it calls the correct functions with the expected arguments). Doing so is very often a good way to test things. An example: a saveProject() function probably wants to call something like saveToProject() on the objects to be saved. I consider this a lot better than saving the project to a temporary buffer and then loading it back to verify that everything is fine (that tests more than it should; it also verifies that the saveToProject() implementation(s) are correct).
As for mocks vs. stubs, I usually (not always) find that mocks provide clearer tests and (optionally) more fine-grained control over the expectations. Mocks can be too powerful though, allowing you to tie a test to the implementation under test so tightly that changing the implementation leaves the result unchanged, yet the test fails.
By just looking at a method/function signature you can test only the output, providing some input (stubs are only able to feed you the needed data). While this is okay in some cases, sometimes you do need to test what's happening inside that method; you need to test whether it behaves correctly.
string readDoc(string name, IFileManager fileManager) { return fileManager.Read(name).ToString(); }
You can directly test the returned value here, so a stub works just fine.
void saveDoc(Document doc, IFileManager fileManager) { fileManager.Save(doc); }
Here you would much rather test whether the method Save got called with the proper argument (doc). The doc content is not changing and the fileManager is not outputting anything. This is because the tested method depends on functionality provided by the interface. And the interface is the contract, so you not only want to test whether your method gives correct results; you also want to test whether it uses the provided contract correctly.
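With Moq, that interaction test might be sketched like this (IFileManager and Document are assumed shapes matching the snippets above):
var fileManager = new Mock<IFileManager>();
var doc = new Document();

saveDoc(doc, fileManager.Object);

// The contract: the document must be handed to the file manager exactly once.
fileManager.Verify(fm => fm.Save(doc), Times.Once());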
I see it a little differently. Let me explain my view:
I use a mocking framework. When I try to test a class, to ensure it will work as intended, I have to test all the situations that may happen. When my class under test uses other classes, I have to ensure in certain test situations that a specific exception is raised by a used class, or a certain value is returned, and so on... This is hard to simulate with the real implementations of those classes, so I have to write fakes of them. But I think that when I use fakes, the tests are not as easy to understand. In my tests I use the Moq framework and set up the mocks in my test method. When I have to analyse my test method, I can easily see how the mocks are configured and don't have to switch to the code of the fakes to understand the test.
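For example, forcing a dependency to misbehave is a one-line setup with Moq (IBackupService and ArchiveService are hypothetical):
var backup = new Mock<IBackupService>();
backup.Setup(b => b.Write(It.IsAny<string>()))
      .Throws(new IOException("disk full"));

var service = new ArchiveService(backup.Object);

// The class under test should handle the failure and report it.
Assert.IsFalse(service.TryArchive("report.txt"));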
Hope that helps you find your answer...

How would I go about unit testing this?

I need to develop a fairly simple algorithm, but am kind of confused as to how best to write a test for it.
General description: User needs to be able to delete a Plan. Plan has Tasks associated with it, these need to be deleted as well (as long as they're not already done).
Pseudo-code for how the algorithm should behave:
PlanController.DeletePlan(plan)
=>
PlanDbRepository.DeletePlan()
ForEach Task t in plan.Tasks
    If t.Status = Status.Open Then
        TaskDbRepository.DeleteTask(t)
    End If
End ForEach
Now as far as I understand it, unit tests are not supposed to touch the Database or generally require access to any outside systems, so I'm guessing I have two options here:
1) Mock out the Repository calls, and check whether they have been called the appropriate number of times as Asserts
2) Create stubs for both repository classes, setting their delete flag manually and then verify that the appropriate objects have been marked for deletion.
In both approaches, the big question is: What exactly am I testing here? What is the EXTRA value that such tests would give me?
Any insight into this would be highly appreciated. This is technically not linked to any specific unit testing framework, although we have Rhino Mocks available. But I'd prefer a general explanation, so that I can properly wrap my head around this.
You should mock the repository, then construct a dummy plan in your unit test containing both open and closed tasks. Call the actual method passing this plan, and at the end verify that the DeleteTask method was called with the correct arguments (only the tasks with status = Open). This way you ensure that only the open tasks associated with this plan have been deleted by your method. Also don't forget (probably in a separate unit test) to verify that the plan itself has been deleted, by asserting that the DeletePlan method was called for the plan you are passing.
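A sketch of that test with Moq (the question mentions Rhino Mocks, where the same verifications exist; the repository interfaces and exact signatures here are assumptions based on the pseudo-code):
[TestMethod]
public void DeletePlan_DeletesPlanAndOnlyOpenTasks()
{
    // Arrange: a dummy plan with one open and one closed task.
    var openTask = new Task { Status = Status.Open };
    var doneTask = new Task { Status = Status.Done };
    var plan = new Plan();
    plan.Tasks.Add(openTask);
    plan.Tasks.Add(doneTask);

    var planRepo = new Mock<IPlanDbRepository>();
    var taskRepo = new Mock<ITaskDbRepository>();
    var controller = new PlanController(planRepo.Object, taskRepo.Object);

    // Act
    controller.DeletePlan(plan);

    // Assert: the plan is deleted, and only the open task goes with it.
    planRepo.Verify(r => r.DeletePlan(plan), Times.Once());
    taskRepo.Verify(r => r.DeleteTask(openTask), Times.Once());
    taskRepo.Verify(r => r.DeleteTask(doneTask), Times.Never());
}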
To add to Darin's answer I'd like to tell you what you are actually testing. There's a bit of business logic in there, for example the check on the status.
This unit test might seem a bit dumb right now, but what about future changes to your code and model? This test is necessary to make sure this seemingly simple functionality will always keep working.
As you noted, you are testing that the logic in the algorithm behaves as expected. Your approach is correct, but consider the future: months down the road, this algorithm may need to be changed; a different developer chops it up and redoes it, missing a critical piece of logic. Your unit tests will now fail, and the developer will be alerted to their mistake. Unit testing is useful at the start, and weeks/months/years down the road as well.
If you want to add more, consider how failure is handled. Have your DB mock throw an exception on the delete command, test that your algorithm handles this correctly.
The extra value provided by your tests is to check that your code does the right things (in this case, delete the plan, delete any open tasks associated with the plan and leave any closed tasks associated with the plan).
Assuming that you have tests in place for your Repository classes (i.e. that they do the right things when delete is called on them), then all you need to do is check that the delete methods are called appropriately.
Some tests you could write are:
Does deleting an empty plan only call DeletePlan?
Does deleting a plan with two open tasks call DeleteTask for both tasks?
Does deleting a plan with two closed tasks not call DeleteTask at all?
Does deleting a plan with one open and one closed task call DeleteTask once on the right task?
Edit: I'd use Darin's answer as the way to go about it though.
Interesting, I find unit testing helps to focus the mind on the specifications.
To that end let me ask this question...
If I have a plan with 3 tasks:
Plan1 {
    Task1: completed
    Task2: todo
    Task3: todo
}
and I call delete on it, what should happen to the plan?
Plan1: ?
    Task1: not deleted
    Task2: deleted
    Task3: deleted
Is Plan1 deleted, orphaning Task1? Or is it otherwise marked deleted?
This is a big part of the value I see in unit tests, although it is only one of the four values:
1) Spec
2) Feedback
3) Regression
4) Granularity
As for how to test, I wouldn't suggest mocks at all. I would consider a two-part method.
The first part would look like:
public void DeletePlan(Plan p)
{
    var objectsToDelete = GetDeletedPlanObjects(p);
    DeleteObjects(objectsToDelete);
}
And I wouldn't test this method.
I would test the method GetDeletedPlanObjects, which wouldn't touch the database anyway and would allow you to send in scenarios like the one above... which I would then assert with www.approvaltests.com, but that's another story :-)
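A sketch of that pure first part, which can be asserted with plain unit tests and no mocks (Plan, Task, and Status are the question's own types):
public IEnumerable<object> GetDeletedPlanObjects(Plan plan)
{
    // Only tasks that are still open are eligible for deletion...
    foreach (var task in plan.Tasks.Where(t => t.Status == Status.Open))
        yield return task;

    // ...and finally the plan itself.
    yield return plan;
}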
Happy Testing,
Llewellyn
I would not write unit tests for this, because to me this is not testing behaviour but rather implementation. If at some point you want to change the behaviour, to not delete the tasks but rather set them to a state of 'disabled' or 'ignored', your unit tests will fail. If you test all controllers this way, your unit tests become very brittle and will need to be changed often.
Refactor the business logic out into a 'TaskRemovalStrategy' if you want to test the business logic for this, and leave the implementation details of the removal up to the class itself.
IMO you can write your unit tests around the abstract PlanRepository, and the same tests should be useful for testing the data integrity in the database as well.
For example you could write a test -
void DeletePlanTest()
{
    PlanRepository repo = new PlanDbRepository("connection string");
    repo.CreateNewPlan(); // create plan and populate with tasks
    Assert.IsTrue(repo.Plan.OpenTasks.Count == 2); // check tasks are in open state

    repo.DeletePlan();

    Assert.IsTrue(repo.Plan.OpenTasks.Count == 0);
}
This test will work even if your repository deletes the plan and your database deletes the related tasks via a cascaded delete trigger.
The value of such a test is that whether it is run against PlanDbRepository or a MockRepository, it still checks that the behavior is correct. So when you change any repository code, or even your database schema, you can run the tests to check that nothing is broken.
You can create such tests which cover all the possible behaviors of your repository and then use them to make sure that any of your changes do not break the implementation.
You can also parameterize this test with a concrete repository instance and reuse it to test any future repository implementations.
