How would I go about unit testing this? - c#

I need to develop a fairly simple algorithm, but am kind of confused about how best to write a test for it.
General description: User needs to be able to delete a Plan. Plan has Tasks associated with it, these need to be deleted as well (as long as they're not already done).
Pseudo-code for how the algorithm should behave:
PlanController.DeletePlan(plan)
=>
    PlanDbRepository.DeletePlan()
    ForEach Task t in plan.Tasks
        If t.Status = Status.Open Then
            TaskDbRepository.DeleteTask(t)
        End If
    End ForEach
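In C#, a minimal sketch of that method might look like the following (the repository interfaces and the constructor injection are just my assumption of how the dependencies would be wired up):

public class PlanController
{
    private readonly IPlanRepository _planRepository;   // abstraction over PlanDbRepository (assumed)
    private readonly ITaskRepository _taskRepository;   // abstraction over TaskDbRepository (assumed)

    public PlanController(IPlanRepository planRepository, ITaskRepository taskRepository)
    {
        _planRepository = planRepository;
        _taskRepository = taskRepository;
    }

    public void DeletePlan(Plan plan)
    {
        // Delete the plan itself, then every task that is still open.
        _planRepository.DeletePlan(plan);

        foreach (Task task in plan.Tasks)
        {
            if (task.Status == Status.Open)
                _taskRepository.DeleteTask(task);
        }
    }
}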
Now as far as I understand it, unit tests are not supposed to touch the Database or generally require access to any outside systems, so I'm guessing I have two options here:
1) Mock out the Repository calls, and check whether they have been called the appropriate number of times as Asserts
2) Create stubs for both repository classes, setting their delete flag manually and then verify that the appropriate objects have been marked for deletion.
In both approaches, the big question is: What exactly am I testing here? What is the EXTRA value that such tests would give me?
Any insight into this would be highly appreciated. This isn't technically tied to any specific unit testing framework, although we have RhinoMocks available. But I'd prefer a general explanation, so that I can properly wrap my head around it.

You should mock the repository and then construct a dummy plan in your unit test containing both Open and Closed tasks. Then call the actual method, passing this plan, and at the end verify that the DeleteTask method was called with the correct arguments (only tasks with status = Open). This way you ensure that only the open tasks associated with this plan have been deleted by your method. Also don't forget (probably in a separate unit test) to verify that the plan itself has been deleted, by asserting that the DeletePlan method has been called on the object you are passing.
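A rough sketch of such a test with Rhino Mocks (the interface names, domain types and controller constructor are assumptions carried over from the pseudo-code, not a definitive implementation):

// using NUnit.Framework; using Rhino.Mocks; using System.Collections.Generic;
[Test]
public void DeletePlan_DeletesPlanAndOnlyOpenTasks()
{
    // Arrange: fake repositories, no database involved
    var planRepository = MockRepository.GenerateMock<IPlanRepository>();
    var taskRepository = MockRepository.GenerateMock<ITaskRepository>();

    var openTask = new Task { Status = Status.Open };
    var closedTask = new Task { Status = Status.Done };
    var plan = new Plan { Tasks = new List<Task> { openTask, closedTask } };

    var controller = new PlanController(planRepository, taskRepository);

    // Act
    controller.DeletePlan(plan);

    // Assert: plan deleted, open task deleted, closed task left alone
    planRepository.AssertWasCalled(r => r.DeletePlan(plan));
    taskRepository.AssertWasCalled(r => r.DeleteTask(openTask));
    taskRepository.AssertWasNotCalled(r => r.DeleteTask(closedTask));
}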

To add to Darin's answer I'd like to tell you what you are actually testing. There's a bit of business logic in there, for example the check on the status.
This unit test might seem a bit dumb right now, but what about future changes to your code and model? This test is necessary to make sure this seemingly simple functionality will always keep working.

As you noted, you are testing that the logic in the algorithm behaves as expected. Your approach is correct, but consider the future: months down the road, this algorithm may need to be changed, and a different developer chops it up and redoes it, missing a critical piece of logic. Your unit tests will now fail, and the developer will be alerted to their mistake. Unit testing is useful at the start, and weeks/months/years down the road as well.
If you want to add more, consider how failure is handled: have your DB mock throw an exception on the delete command and test that your algorithm handles it correctly.
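For instance, with Rhino Mocks the failing dependency could be simulated like this (a fragment, with the repository interface assumed as in the question's pseudo-code):

// using Rhino.Mocks;
var taskRepository = MockRepository.GenerateStub<ITaskRepository>();
taskRepository.Stub(r => r.DeleteTask(Arg<Task>.Is.Anything))
              .Throw(new InvalidOperationException("simulated delete failure"));

// Act on the controller with this stub, then assert whatever your error-handling
// contract is: the exception is wrapped, logged, rethrown, or the plan is left intact.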

The extra value provided by your tests is to check that your code does the right things (in this case, delete the plan, delete any open tasks associated with the plan, and leave any closed tasks associated with the plan untouched).
Assuming that you have tests in place for your Repository classes (i.e. that they do the right things when delete is called on them), then all you need to do is check that the delete methods are called appropriately.
Some tests you could write are:
Does deleting an empty plan only call DeletePlan?
Does deleting a plan with two open tasks call DeleteTask for both tasks?
Does deleting a plan with two closed tasks not call DeleteTask at all?
Does deleting a plan with one open and one closed task call DeleteTask once on the right task?
Edit: I'd use Darin's answer as the way to go about it though.

Interesting, I find unit testing helps to focus the mind on the specifications.
To that end let me ask this question...
If I have a plan with 3 tasks:
Plan1 {
    Task1: completed
    Task2: todo
    Task3: todo
}
and I call delete on it, what should happen to the Plan?
Plan1 : ?
Task1: not deleted
Task2: deleted
Task3: deleted
Is Plan1 deleted, orphaning Task1? Or is it otherwise marked as deleted?
This is a big part of the value I see in unit tests (although it is only one of the four values):
1) Spec
2) Feedback
3) Regression
4) Granularity
As for how to test, I wouldn't suggest mocks at all. I would consider a two-part method.
The first part would look like:
public void DeletePlan(Plan p)
{
    var objectsToDelete = GetDeletedPlanObjects(p);
    DeleteObjects(objectsToDelete);
}
And I wouldn't test this method.
I would test the method GetDeletedPlanObjects, which wouldn't touch the database anyway and would allow you to send in scenarios like the one above... which I would then assert with www.approvaltests.com, but that's another story :-)
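A rough sketch of that split (the return type and names are illustrative only, not taken from the question's code):

// using System.Collections.Generic;
// Pure logic: no repositories, no database, trivially unit-testable.
public IEnumerable<object> GetDeletedPlanObjects(Plan plan)
{
    yield return plan;   // or not, depending on the answer to the spec question above

    foreach (Task task in plan.Tasks)
    {
        if (task.Status == Status.Open)
            yield return task;
    }
}

A test then just builds a plan like the one above in memory and asserts on the returned collection, with no mocks at all.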
Happy Testing,
Llewellyn

I would not write unit tests for this, because to me this is not testing behaviour but rather implementation. If at some point you want to change the behaviour to not delete the tasks but rather set them to a state of 'disabled' or 'ignored', your unit tests will fail. If you test all controllers this way, your unit tests become very brittle and will need to be changed often.
Refactor the business logic out into a 'TaskRemovalStrategy' if you want to test the business logic for this, and leave the implementation details of the removal up to the class itself.
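A minimal sketch of what such a strategy could look like (the interface name comes from this answer; everything else is assumed):

// using System.Collections.Generic; using System.Linq;
public interface ITaskRemovalStrategy
{
    // Decides which tasks should go away when their plan is deleted.
    IEnumerable<Task> SelectTasksForRemoval(Plan plan);
}

public class RemoveOpenTasksStrategy : ITaskRemovalStrategy
{
    public IEnumerable<Task> SelectTasksForRemoval(Plan plan)
    {
        return plan.Tasks.Where(t => t.Status == Status.Open);
    }
}

The business rule now lives in a small, pure class you can unit test directly, while the controller's use of the repositories stays an implementation detail.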

IMO you can write your unit tests around an abstract PlanRepository, and the same tests should also be useful for testing the data integrity in the database.
For example you could write a test like this:
void DeletePlanTest()
{
    PlanRepository repo = new PlanDbRepository("connection string");
    repo.CreateNewPlan();                          // create plan and populate with tasks
    Assert.IsTrue(repo.Plan.OpenTasks.Count == 2); // check tasks are in the open state
    repo.DeletePlan();
    Assert.IsTrue(repo.Plan.OpenTasks.Count == 0);
}
This test will work even if your repository deletes the plan and your database deletes the related tasks via a cascaded delete trigger.
The value of such a test is that, whether it is run against PlanDbRepository or a MockRepository, it still checks that the behaviour is correct. So when you change any repository code, or even your database schema, you can run the tests to check nothing is broken.
You can create such tests which cover all the possible behaviors of your repository and then use them to make sure that any of your changes do not break the implementation.
You can also parameterize this test with a concrete repository instance and reuse it to test any future repository implementations.

Related

Assert that "nothing happened" when writing unit test

When writing a unit test, is there a simple way to ensure that nothing unexpected happened?
Since the list of possible side effects is infinite, adding tons of Asserts to ensure that nothing changed at every step seems futile, and it obfuscates the purpose of the test.
I might have missed some framework feature or good practice.
I'm using C#7, .net 4.6, MSTest V1.
Edit:
The simplest example would be testing the setter of a viewmodel; two things should happen: the value should change and the PropertyChanged event should be raised.
These two things are easy to check, but now I need to make sure that other properties' values didn't change, no other event was raised, the system clipboard was not touched...
You're missing the point of unit tests. They are "proofs". You cannot logically prove a negative assertion, so there's no point in even trying.
The assertions in each unit test should prove that the desired behavior was accomplished. That's all.
If we reduce the question to absurdity, every unit test would require that we assert that the function under test didn't start a thermonuclear war.
Unit tests are not the only kind of tests you'll need to perform. There are functional tests, integration tests, usability tests, etc. Each one has its own focus. For unit tests, the focus is proving the expected behavior of a single function. So if the function is supposed to accomplish 2 things, just assert that each of those 2 things happened, and move on.
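For the viewmodel setter from the question, that boils down to exactly two asserts; a minimal MSTest sketch (PersonViewModel is a hypothetical INotifyPropertyChanged implementation):

// using System.Collections.Generic; using Microsoft.VisualStudio.TestTools.UnitTesting;
[TestMethod]
public void SettingName_ChangesValueAndRaisesPropertyChanged()
{
    var vm = new PersonViewModel();                   // hypothetical viewmodel under test
    var raised = new List<string>();
    vm.PropertyChanged += (s, e) => raised.Add(e.PropertyName);

    vm.Name = "Alice";

    Assert.AreEqual("Alice", vm.Name);                // the value changed
    CollectionAssert.Contains(raised, "Name");        // the event was raised for that property
}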
One option for ensuring that nothing 'bad' or unexpected happens is to follow good practices around dependency injection and mocking:
[Test]
public void TestSomething()
{
    // Arrange
    var barMock = MockRepository.GenerateStrictMock<IBar>();   // Rhino Mocks strict mock
    var foo = new Foo(barMock);

    // Act
    foo.DoSomething();

    // Assert
    ...
}
In the example above, if Foo accidentally touches Bar, that results in an exception (because of the strict mock) and the test fails. Such an approach might not be applicable in all test cases, but it serves as a good addition to other practices.
An addition regarding your edit:
In Test Driven Development you write only the code that will pass the test, and nothing more. Furthermore, you want to choose the simplest possible solution to accomplish this goal.
That said, you will most likely start with a failing unit test. In your situation you would not get a failing unit test at the beginning.
If you push this to the limit, you would have to check that format C:\ is not called in your application whenever you want to verify every possible outcome. You might want to have a look at design principles like the KISS principle (keep it simple, stupid).
If the scope of "check that nothing else happened" is to ensure the state of the model didn't change, which appears to be the case from the question, then write a helper function that takes the model before your event and the model after, and compares them. Let it return the properties that changed; then you can assert that only the properties you intended to update are in the returned list. This sort of helper is portable, maintainable, and reusable.
Checking model state is a valid application of a unit test.
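A simple reflection-based version of such a helper might look like this (a sketch: it compares public readable properties by Equals, and the "before" argument is expected to be a snapshot or copy taken before the action):

// using System.Collections.Generic; using System.Linq;
public static IList<string> GetChangedProperties<T>(T before, T after)
{
    var changed = new List<string>();
    foreach (var property in typeof(T).GetProperties())
    {
        if (!property.CanRead) continue;
        if (!Equals(property.GetValue(before), property.GetValue(after)))
            changed.Add(property.Name);
    }
    return changed;
}

// In a test:
// var changes = GetChangedProperties(snapshotBefore, viewModel);
// CollectionAssert.AreEquivalent(new[] { "Name" }, changes.ToArray());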
This is only possible in referentially transparent languages such as Safe Haskell.

What are the benefits of mocking dependencies in unit testing?

I am working on unit tests for my controller and service layers (C#, MVC), and I am using the Moq DLL for mocking the real/dependency objects in unit testing.
But I am a little bit confused about mocking the dependencies versus using real objects. Let's take the example of the unit test method below:
[TestMethod]
public void ShouldReturnDtosWhenCustomersFound_GetCustomers()
{
    // Arrange
    var name = "ricky";
    var description = "this is the test";

    // setup mocked dal to return list of customers
    // when name and description passed to GetCustomers method
    _customerDalMock.Setup(d => d.GetCustomers(name, description)).Returns(_customerList);

    // Act
    List<CustomerDto> actual = _CustomerService.GetCustomers(name, description);

    // Assert
    Assert.IsNotNull(actual);
    Assert.IsTrue(actual.Any());

    // verify all setups of mocked dal were called by service
    _customerDalMock.VerifyAll();
}
In the above unit test method I am mocking the GetCustomers method and returning a customer list, which is already defined and looks like this:
List<Customer> _customerList = new List<Customer>
{
    new Customer { CustomerID = 1, Name = "Mariya", Description = "description" },
    new Customer { CustomerID = 2, Name = "Soniya", Description = "des" },
    new Customer { CustomerID = 3, Name = "Bill", Description = "my desc" },
    new Customer { CustomerID = 4, Name = "jay", Description = "test" },
};
And let's have a look at the assertions comparing the mocked Customer object and the actual object:
Assert.AreEqual(_customer.CustomerID, actual.CustomerID);
Assert.AreEqual(_customer.Name, actual.Name);
Assert.AreEqual(_customer.Description, actual.Description);
But here is what I don't understand: the above unit test will always work, since we are just testing (in the assertions) what we passed in or what we set the mock up to return. And we know the mocked call will always return the list or object that we passed in.
So what is the point of unit testing or mocking here?
The true purpose of mocking is to achieve true isolation.
Say you have a CustomerService class that depends on a CustomerRepository. You write a few unit tests covering the features provided by CustomerService. They all pass.
A month later, a few changes are made, and suddenly your CustomerService unit tests start failing, and you need to find where the problem is.
So you assume:
Because a unit test that tests CustomerService is failing, the problem must be in that class!!
Right? Wrong! The problem could be either in CustomerService or in any of its dependencies, i.e., CustomerRepository. If any of its dependencies fails, chances are the class under test will fail too.
Now picture a huge chain of dependencies: A depends on B, B depends on C, ... Y depends on Z. If a fault is introduced in Z, all your unit tests will fail.
And that's why you need to isolate the class under test from its dependencies (be it a domain object, a database connection, file resources, etc.). You want to test a unit.
Your example is too simplistic to show off the real benefit of mocking. That's because your logic under test isn't really doing much beyond returning some data.
But imagine as an example that your logic did something based on wall clock time, say scheduled some process every hour. In a situation like that, mocking the time source lets you actually unit test such logic so that your test doesn't have to run for hours, waiting for the time to pass.
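A common way to make that testable is to inject the time source instead of reading the system clock directly; a rough sketch (all names are made up for illustration):

using System;

public class HourlyJob
{
    private readonly Func<DateTime> _now;   // injected clock, so tests control time

    public HourlyJob(Func<DateTime> now) { _now = now; }

    public bool IsDue(DateTime lastRun)
    {
        return _now() - lastRun >= TimeSpan.FromHours(1);
    }
}

// In a test the "clock" is just a variable you advance yourself:
// var fakeNow = new DateTime(2020, 1, 1, 12, 0, 0);
// var job = new HourlyJob(() => fakeNow);
// fakeNow = fakeNow.AddHours(2);
// Assert.IsTrue(job.IsDue(new DateTime(2020, 1, 1, 12, 0, 0)));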
In addition to what has already been said:
We can have classes without dependencies, and then the only thing we need is unit testing without mocks and stubs.
When we have dependencies there are several kinds of them:
Services that our class uses mostly in a 'fire and forget' way, i.e. services that do not affect the control flow of the consuming code.
We can mock these (and all other kinds of) services to test that they were called correctly (interaction testing), or simply to have something to inject where our code requires a dependency.
Two-way services that provide a result but do not have internal state and do not affect the state of the system. They can be dubbed complex data transformations.
By mocking these services you can test your expectations about code behaviour for different variants of the service implementation, without needing to have all of them.
Services which affect the state of the system, or depend on real-world phenomena or something else out of your control. '#500 - Internal Server Error' gave a good example with the time service.
With mocking you can let time flow at whatever speed (and direction) is needed. Another example is working with a DB: when unit testing it is usually desirable not to change the DB state, which is not true for a functional test. For this kind of service, 'isolation' is the main (but not the only) motivation for mocking.
Services with internal state your code depends on.
Consider Entity Framework:
When SaveChanges() is called, many things happen behind the scenes. EF detects changes and fixes up navigation properties. Also, EF won't allow you to add several entities with the same key.
Evidently, it can be very difficult to mock the behaviour and the complexity of such dependencies... but usually you don't have to if they are designed well. If you heavily rely on the functionality some component provides, you will hardly be able to substitute that dependency. What is probably needed is, again, isolation: you don't want to leave traces when testing, so a better approach is to tell EF not to use the real DB. Yes, a dependency means more than a mere interface. More often it is not the method signatures but the contract for expected behaviour; for instance, IDbConnection has Open() and Close() methods, which implies a certain sequence of calls.
Sure, this is not a strict classification; better to treat these as extremes on a spectrum.
#dcastro writes: "You want to test a unit." Yet that statement doesn't answer the question of whether you should.
Let's not discount integration tests. Sometimes knowing only that some composite part of the system has a failure is OK.
As for the example with the chain of dependencies given by #dcastro, we can try to find the place where the bug is likely to be:
Assume Z is the final dependency. We create unit tests without mocks for it; all boundary conditions are known, and 100% coverage is a must here. After that we can say that Z works correctly, and if Z fails our unit tests must indicate it.
The analogy comes from engineering: nobody tests each screw and bolt when building a plane. Statistical methods are used to prove, with some certainty, that the factory producing the parts works fine.
On the other hand, for very critical parts of your system it is reasonable to spend the time and mock the complex behaviour of the dependency. Yes, the more complex it is, the less maintainable the tests are going to be. Here I'd rather call them specification checks.
Yes, your API and tests can both be wrong, but code review and other forms of testing can assure the correctness of the code to some degree. And as soon as these tests fail after some changes are made, you either need to change the specs and the corresponding tests, or find the bug and cover the case with a test.
I highly recommend watching Roy's videos: http://youtube.com/watch?v=fAb_OnooCsQ
In this particular case mocking allows you to fake a database connection, so that you can run the test in place and in memory, without relying on any additional resource, i.e. the database. This test asserts that, when the service is called, the corresponding method of the DAL is called.
However, the later asserts on the list and the values in the list aren't necessary. As you correctly noticed, you are just asserting that the values you "mocked" are returned. That would be useful within the mocking framework itself, to assert that the mocking methods behave as expected, but in your code it is just excess.
In the general case, mocking allows one to:
Test behaviour (when something happens, then a particular method is executed)
Fake resources (for example, email servers, web servers, HTTP API request/response, database)
In contrast, unit tests without mocking usually let you test state. That is, you can detect a change in the state of an object after a particular method was called.
All previous answers assume that mocking has some value, and then they proceed to explain what that value supposedly is.
For the sake of future generations that might arrive at this question looking to satisfy their philosophical objections on the issue, here is a dissenting opinion:
Mocking, despite being a nifty trick, should be avoided at (almost) all costs.
When you mock a dependency of your code-under-test, you are by definition making two kinds of assumptions:
Assumptions about the behavior of the dependency
Assumptions about the inner workings of your code-under-test
It can be argued that the assumptions about the behavior of the dependency are innocent because they are simply a stipulation of how the real dependency should behave according to some requirements or specification document. I would be willing to accept this, with the footnote that they are still assumptions, and whenever you make assumptions you are living your life dangerously.
Now, what cannot be argued is that the assumptions you are making about the inner workings of your code-under-test are essentially turning your test into a white-box test: the mock expects the code-under-test to issue specific calls to its dependencies, with specific parameters, and as the mock returns specific results, the code-under-test is expected to behave in specific ways.
White-box testing might be suitable if you are building high criticality (aerospace grade) software, where the goal is to leave absolutely nothing to chance, and cost is not a concern. It is orders of magnitude more labor intensive than black-box testing, so it is immensely expensive, and it is a complete overkill for commercial software, where the goal is simply to meet the requirements, not to ensure that every single bit in memory has some exact expected value at any given moment.
White-box testing is labor intensive because it renders tests extremely fragile: every single time you modify the code-under-test, even if the modification is not in response to a change in requirements, you will have to go modify every single mock you have written to test that code. That is an insanely high maintenance level.
How to avoid mocks and white-box testing
Use fakes instead of mocks
For an explanation of what the difference is, you can read this article by Martin Fowler: https://martinfowler.com/bliki/TestDouble.html but to give you an example, an in-memory database can be used as fake in place of a full-blown RDBMS. (Note how fakes are a lot less fake than mocks.)
Fakes will give you the same amount of isolation as mocks would, but without all the risky and costly assumptions, and most importantly, without all the fragility.
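As a concrete illustration, a fake repository can be as small as a list behind whatever interface the production code already depends on (the interface shown here is assumed, not taken from the question):

// using System.Collections.Generic; using System.Linq;
public interface ICustomerRepository
{
    void Add(Customer customer);
    IEnumerable<Customer> GetCustomers(string name, string description);
}

public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _customers = new List<Customer>();

    public void Add(Customer customer) { _customers.Add(customer); }

    public IEnumerable<Customer> GetCustomers(string name, string description)
    {
        return _customers.Where(c => c.Name == name && c.Description == description);
    }
}

Tests exercise real behaviour through the same interface, without a database and without any call-by-call expectations.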
Do integration testing instead of unit testing
Using the fakes whenever possible, of course.
For a longer article with my thoughts on the subject, see https://blog.michael.gr/2021/12/white-box-vs-black-box-testing.html

Best Practice - What to do when creating unit tests with a huge number of entities

I am creating a unit test, but there are many entities. Do I have to insert all the entities into the database manually, or is there a better solution?
Are you looking for something like Moq? You can use it to create mock objects and queryable lists of objects so that you don't need to put fake data into your database to test.
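For example, with Moq you can hand the code under test an in-memory queryable list instead of seeding a database (the repository interface and service here are assumptions, just to show the shape):

// using Moq; using System.Collections.Generic; using System.Linq;
var entities = new List<Entity>
{
    new Entity { Id = 1, Name = "first" },
    new Entity { Id = 2, Name = "second" },
}.AsQueryable();

var repositoryMock = new Mock<IEntityRepository>();
repositoryMock.Setup(r => r.GetAll()).Returns(entities);

var service = new EntityService(repositoryMock.Object);
// ...exercise the service and assert; no database rows are ever needed.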
Have a look at this link on how to get going on writing unit tests. The one thing I think that may help you in regard to your question:
Mock out all external services and state
Otherwise, behaviour in those external services overlaps multiple tests, and state data means that different unit tests can influence each other’s outcome.
You’ve definitely taken a wrong turn if you have to run your tests in a specific order, or if they only work when your database or network connection is active.
(By the way, sometimes your architecture might mean your code touches static variables during unit tests. Avoid this if you can, but if you can’t, at least make sure each test resets the relevant statics to a known state before it runs.)

Unit testing database code [duplicate]

Possible Duplicate:
Unit testing on code that uses the Database
I am just starting with unit testing and wondering how to unit test methods that make actual changes to my database. Would the best way be to wrap them in transactions and then roll back, or are there better approaches to this?
If you want proper test coverage, you need two types of tests:
Unit tests which mock all your actual data access. These tests will not actually write to the database, but test the behaviour of the class that does (which methods it calls on other dependencies, etc.).
System tests (or integration tests) which check that your database can be accessed and modified. I would consider two types of tests here: simple plain CRUD tests (create / read / update / delete) for each one of your model objects, and more complex system tests for your actual methods and everything you deem interesting or valuable to test. Good practice here is to have each test start from an empty (or "ready for the test") database, do its stuff, then check the state of the database. Transactions / rollbacks are one good way to achieve this.
For unit testing you need to mock or stub the data access code. Usually you have a repository interface, and you can stub it by creating a concrete repository which stores data in memory, or you can mock it using a dynamic mocking framework.
For system or integration testing, you need to re-create the entire database before each test method in order to maintain a stable, known state for every test.
As per some of the previous answers, if you want to test your data access code then you might want to think about mocks and a system/integration test strategy.
But, if you want to unit test your SQL objects (eg sprocs, views, constraints in tables etc) - then there are a number of database unit testing frameworks out there that might be of interest (including one that I have written).
Some implement tests within SQL, others within your code and use mbUnit/NUnit etc.
I have written a number of articles with examples on how I approach this - see http://dbtestunit.wordpress.com/
Other resources that might be of use:
http://www.simple-talk.com/sql/t-sql-programming/close-those-loopholes---testing-stored-procedures--/
http://tsqlt.org/articles/
The general approach is to have a way to mock your database actions, so that your unit tests are not reliant on the database being available or in a certain state. That said, it also implies a design that facilitates the isolation required to mock away your data layer. Unit testing, and how to do it well, is a huge topic. Take a look on Google for mocking frameworks and dependency injection for a start.
If you are not developing an O/R mapper, there's no need to test database code. You don't want to test ADO.NET methods, right? Instead you want to verify that the ADO.NET methods are called with the right values.
Search Google for the repository pattern. You create an implementation of an IRepository interface with CRUD methods and test/mock that.
If you want to test against a real database, this would be more of an integration test than a unit test. Wrapping your tests in a transaction could be a way to keep your database in a consistent state.
We've done this in a base class and used the TestInitialize and TestCleanup functions to make sure this always happens.
But testing against a real database will certainly run you into performance problems, so make sure from the beginning that you can swap your database access code for something that runs in memory. I don't know which database access technology you're targeting, but design patterns like Unit of Work and Repository can help you isolate your database code and replace it with an in-memory solution.
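A sketch of that base-class approach with MSTest and TransactionScope (the scope is never completed, so everything a test does is rolled back afterwards):

// using System.Transactions; using Microsoft.VisualStudio.TestTools.UnitTesting;
[TestClass]
public abstract class TransactionalTestBase
{
    private TransactionScope _scope;

    [TestInitialize]
    public void OpenTransaction()
    {
        _scope = new TransactionScope();   // all data access in the test joins this transaction
    }

    [TestCleanup]
    public void RollbackTransaction()
    {
        _scope.Dispose();                  // Dispose without Complete() means rollback
    }
}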

Integration Testing: Am I doing it right?

Here's an integration test I wrote for a class that interacts with a database:
[Test]
public void SaveUser()
{
    // Arrange
    var user = new User();
    // Set a bunch of properties of the above User object

    // Act
    var usersCountPreSave = repository.SearchSubscribersByUsername(user.Username).Count();
    repository.Save(user);
    var usersCountPostSave = repository.SearchSubscribersByUsername(user.Username).Count();

    // Assert
    Assert.AreEqual(usersCountPreSave + 1, usersCountPostSave);
}
It seems to me that I can't test the Save function without involving the SearchSubscribersByUsername function to find out whether the user was successfully saved. I realize that integration tests aren't meant to be unit tests, which are supposed to test one unit of code at a time. But ideally it would be nice if I could test one function in my repository class per test; I just don't know how to accomplish that.
Is it fine how I've written the code so far or is there a better way?
You have a problem with your test. When you're testing that data is saved into the database, you should be testing that it's in the database, not that the repository says that it's in the database.
If you're testing the functionality of repository, then you can't verify that functionality by asking if it has done it correctly. It's the equivalent of saying to someone 'Did you do this correctly?' They are going to say yes.
Imagine that repository never commits. Your test will pass fine, but the data won't be in the database.
So, what I would do is open a connection (pure SQL) to the database and check that the data has been saved correctly. You only need to do a select count(*) before and after to ensure that the user has been saved. If you do this, you can avoid using SearchSubscribersByUsername as well.
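That back-door check can be plain ADO.NET; something along these lines (the table and column names are assumptions about your schema):

// using System.Data.SqlClient;
private static int CountUsers(string connectionString, string username)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT COUNT(*) FROM Users WHERE Username = @username", connection))
    {
        command.Parameters.AddWithValue("@username", username);
        connection.Open();
        return (int)command.ExecuteScalar();
    }
}

// In the test:
// int before = CountUsers(connectionString, user.Username);
// repository.Save(user);
// Assert.AreEqual(before + 1, CountUsers(connectionString, user.Username));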
If you're testing the functionality of repository, you can't trust repository, by definition.
To unit test something like a "Save" function, you will definitely need some trustworthy channel to check the result of the operation. If you trust SearchSubscribersByUsername (because you have already made some unit tests for that function on its own), you can use it here.
If you don't trust SearchSubscribersByUsername and you think your unit test could also break because there is an error in that function (and not in Save), you should think about a different channel; perhaps you can make a bypassing SQL access to your DB to check the Save result, which may be simpler than the implementation of SearchSubscribersByUsername. However, do not reimplement SearchSubscribersByUsername; that would be pointless. Either way, you will need at least some other function you can trust.
Unless the method you are testing returns definitive information about what you have done I don't see any way to avoid calling other methods. I think you are correct in your assumption that Integration testing needs a different way of thinking from Unit testing.
I would still build tests that focus on individual methods. So in testing Save() I may well use the capabilities of Search(), but my focus is on the edge cases of Save(). I build tests that deal with duplicate insertions or invalid input data. Then later I build a whole raft of Search() tests that deal with the edge cases of Search().
Now one possible concern is that Save and Search have some commonality: a bug in Search might mask a bug in Save. Imagine, for example, that you had a caching layer down there. So possibly an alternative approach is to use some other verification mechanism, for example a direct SQL call to the database, or alternatively introducing mocking layers at some point in your infrastructure. When building complex integrated systems this kind of "back door" verification may be essential.
Personally, I've written countless tests very similar to this and think it's fine. The alternative is to stub out the database so that SearchSubscribersByUsername never actually does anything, but that's a great deal of work for what I'd say is little gain.
