While practicing some TDD at work on an ASP.NET MVC project, I ran into a number of scenarios where I was writing tests to ensure that particular actions returned the correct views or had particular attributes on them ([ChildActionOnly] etc.). (In fact, I found a number of interesting posts here on SO about useful extension methods to help achieve this.)
When I was first introduced to the concepts of unit testing and TDD on a course a few years ago, the emphasis was strongly on tests focusing on the logic behind the user-desired features and functionality - the core project 'requirements', if you will.
My question is - if this is the case, do menial tests that check for the correct view file being rendered, or for an action having a particular attribute, really fall within what the unit testing methodology is all about? Am I writing tests for the wrong reasons (i.e. simply protecting myself and other colleagues from making a refactoring mistake), or are these valid cases of valuable unit tests?
If a handler method could return one of two (or more) views depending on some logic then a unit test that asserted the correct view would be useful. Same goes for a handler method that inserted particular attributes depending on the logic.
Am I writing tests for the wrong reasons (i.e. simply protecting other colleagues from making a refactoring mistake) or are these valid cases of valuable unit tests?
Catching regression errors is one of the benefits of unit tests, and it is especially useful when refactoring. If you inadvertently changed the view returned while doing some refactoring, it would be very useful to catch that early - rather than discovering it only once the application was running.
If your action returns different views (or action results) depending on some logic, write a test.
If you always return View() then don't.
If you return a view whose name differs from your action's name, then you could argue it's a good idea to write a test.
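For instance, a minimal sketch of such a test (MSTest; BooksController and its Details action are hypothetical stand-ins for an action that picks between views):

    [TestMethod]
    public void Details_WithUnknownId_RendersNotFoundView()
    {
        // Hypothetical controller whose Details action returns
        // View("NotFound") when the id cannot be resolved.
        var controller = new BooksController();

        var result = controller.Details(999) as ViewResult;

        Assert.IsNotNull(result);
        Assert.AreEqual("NotFound", result.ViewName);
    }

Note that an action which just returns View() leaves ViewName empty, which is exactly why such a test adds nothing in that case.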
It depends. To use one of your examples: checking that an action has a particular attribute is okay, but it's better to write a test that verifies the behaviour the attribute provides is functioning as expected - one that would fail if the attribute was missing.
That said, ultimately your tests are a safety net. If it has a reasonable chance of preventing a bug from slipping in at some point in the future, it's doing its job.
If it's a simple test that has low overhead and a low chance of breaking arbitrarily in the future, go for it. I'd rather have too many tests than too few.
Indeed, your tests should be testing your logic. That should not go away. However, ideally you should write tests for anything that can go wrong. For instance, ensuring that all methods that need to be secured have a proper Authorize attribute. This is a security test.
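A hedged sketch of such a check (MSTest plus reflection; AdminController and DeleteUser are hypothetical names):

    [TestMethod]
    public void DeleteUser_IsProtectedByAuthorize()
    {
        var method = typeof(AdminController).GetMethod("DeleteUser");

        var attributes = method.GetCustomAttributes(typeof(AuthorizeAttribute), inherit: true);

        Assert.IsTrue(attributes.Length > 0, "DeleteUser must carry [Authorize].");
    }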
Ultimately, you decide what's useful for you to test.
We have been developing REST APIs in .NET Core 3.1, using an MSTest test project in Visual Studio 2019 for unit testing.
Now we have a hard rule to follow Test-Driven Development. In theory I understand that TDD involves writing tests to implement a requirement and continuously iterating through red-green-refactor cycles.
Now my question is: if I have an endpoint which gets all books for a given author id from the DB, like the following endpoint
GET api/books/authors/1006
which returns a response, what test cases / test methods should I write in the test class of the MSTest project so that I can say I have followed Test-Driven Development?
Before TDD, I would write a TestMethod for the service method which calls the database layer and gets the author information from the SQL DB - so only one test method, with different inputs to mock, would be written.
I am very new to TDD, hence the question. Any comments are helpful.
If you're in doubt about how many test cases you need to write, you may consider using the Devil's Advocate technique to critique your test code.
If, for example, you write a single unit test (pseudocode):
Initialise System Under Test (SUT)
Call GET api/books/authors/1006
Assert that the response is, say, [Foo, Bar]
Then, according to the red-green-refactor checklist and the Devil's Advocate technique, the implementation should be the simplest thing that could possibly work. What would that be?
return [Foo, Bar];
Unconditionally returning a hard-coded value is not the implementation you have in mind. This means that you have at least one more test case to write.
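To make that concrete, here is a hedged sketch (MSTest; the in-memory lookup is a stand-in for the real service and database). The second test is the one that kills the hard-coded return:

    using System.Linq;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class BooksByAuthorTests
    {
        // In-memory stand-in for the service + SQL layer: filters the
        // "Books table" by author id, as the real query would.
        private static string[] BooksByAuthor(int authorId) =>
            new[]
            {
                (AuthorId: 1006, Title: "Foo"),
                (AuthorId: 1006, Title: "Bar"),
                (AuthorId: 2021, Title: "Baz"),
            }
            .Where(b => b.AuthorId == authorId)
            .Select(b => b.Title)
            .ToArray();

        [TestMethod]
        public void Author1006_ReturnsFooAndBar()
        {
            CollectionAssert.AreEquivalent(new[] { "Foo", "Bar" }, BooksByAuthor(1006));
        }

        [TestMethod]
        public void Author2021_ReturnsBaz()
        {
            // The devil's-advocate implementation "return [Foo, Bar];"
            // passes the first test but fails this one.
            CollectionAssert.AreEquivalent(new[] { "Baz" }, BooksByAuthor(2021));
        }
    }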
Another heuristic to keep in mind is that there should be at least as many test cases as the cyclomatic complexity of the implementation, and most likely more.
When I start to write test cases on new code for TDD (not a modification to existing code), I usually start with a glut of typical scenarios:
One simple test case (happy path with easiest implementation)
One complex test case (multiple inputs for lists, all values filled in, etc.)
Edge cases (min/max values for all inputs, within reason)
Invalid inputs (text instead of numbers in JSON inputs, missing required values, negative integers for indexes, etc.)
Purposefully forgetting required pre-initialization steps to verify they are caught and appropriate exceptions are thrown
OBOBs (off-by-one bugs)
All except the last one apply to modified code, unless you are adding a new pre-initialization step. Additionally, I go through the test cases to verify all scenarios are tested. For example, adding a new input to a method overload to perform additional functionality will often need more tests around existing code, to make sure the new functionality doesn't alter the pre-existing conditions of the objects.
Also, I always make sure to add test cases when bugs are identified, not only to verify the bug is real, but to have test cases that verify the fix corrects the issue.
When writing unit tests, is there a simple way to ensure that nothing unexpected happened?
Since the list of possible side effects is infinite, adding tons of Asserts to ensure that nothing changed at every step seems futile, and it obfuscates the purpose of the test.
I might have missed some framework feature or good practice.
I'm using C# 7, .NET 4.6, MSTest V1.
Edit:
The simplest example would be testing the setter of a viewmodel, where two things should happen: the value should change and the PropertyChanged event should be raised.
These two things are easy to check, but now I need to make sure that other properties' values didn't change, no other event was raised, the system clipboard was not touched...
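For illustration, the two easy checks look roughly like this (PersonViewModel is a placeholder INotifyPropertyChanged viewmodel):

    [TestMethod]
    public void NameSetter_ChangesValueAndRaisesPropertyChanged()
    {
        var vm = new PersonViewModel();
        var raised = new List<string>();
        vm.PropertyChanged += (sender, e) => raised.Add(e.PropertyName);

        vm.Name = "Alice";

        Assert.AreEqual("Alice", vm.Name);
        // Verifies exactly one event, for Name only - but says nothing
        // about other properties, other events, the clipboard...
        CollectionAssert.AreEqual(new[] { "Name" }, raised);
    }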
You're missing the point of unit tests. They are "proofs". You cannot logically prove a negative assertion, so there's no point in even trying.
The assertions in each unit test should prove that the desired behavior was accomplished. That's all.
If we reduce the question to absurdity, every unit test would require that we assert that the function under test didn't start a thermonuclear war.
Unit tests are not the only kind of tests you'll need to perform. There are functional tests, integration tests, usability tests, etc. Each one has its own focus. For unit tests, the focus is proving the expected behavior of a single function. So if the function is supposed to accomplish 2 things, just assert that each of those 2 things happened, and move on.
One option to help ensure that nothing 'bad' or unexpected happens is to follow good practices of dependency injection and mocking:
[Test]
public void TestSomething()
{
    // Arrange: a strict mock throws on any call that was not explicitly set up
    var barMock = Rhino.Mocks.MockRepository.GenerateStrictMock<IBar>();
    var foo = new Foo(barMock);

    // Act
    foo.DoSomething();

    // Assert
    ...
}
In the example above, if Foo accidentally touches Bar, that will result in an exception (from the strict mock) and the test fails. Such an approach might not be applicable in all test cases, but it serves as a good addition to other practices.
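If you prefer Moq, the equivalent (a sketch, same hypothetical Foo/IBar) is MockBehavior.Strict, which makes any call without a matching Setup throw:

    var barMock = new Mock<IBar>(MockBehavior.Strict);
    var foo = new Foo(barMock.Object);

    foo.DoSomething(); // throws immediately if Foo touches IBar unexpectedly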
An addition regarding your edit:
In Test-Driven Development you write only the code which will pass the test, and nothing more. Furthermore, you want to choose the simplest possible solution to accomplish this goal.
That said, you will most likely start with a failing unit test. In your situation you will not get a failing unit test at the beginning.
If you push it to the limits, you will have to check that format C:\ is not called in your application if you want to check every outcome. You might want to have a look at design principles like the KISS principle (keep it simple, stupid).
If the scope of "check that nothing else happened" is to ensure the state of the model didn't change - which appears to be the case from the question - then write a helper function that takes the model before your event and the model after, and compares them. Let it return the properties that changed; you can then assert that only the properties you intended to update are in the returned list. This sort of helper is portable, maintainable, and reusable.
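A reflection-based sketch of such a helper (it assumes the model's state is exposed through readable public properties, and that you keep a snapshot or clone of the 'before' state):

    using System.Collections.Generic;
    using System.Linq;

    public static class ModelDiff
    {
        // Returns the names of public readable properties whose values
        // differ between the 'before' and 'after' snapshots.
        public static IList<string> ChangedProperties<T>(T before, T after) =>
            typeof(T).GetProperties()
                .Where(p => p.CanRead)
                .Where(p => !Equals(p.GetValue(before), p.GetValue(after)))
                .Select(p => p.Name)
                .ToList();
    }

A test can then assert, for example, that the returned list contains only "Name".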
Checking model state is a valid application of a unit test.
This is only possible in referentially transparent languages such as Safe Haskell.
I am in the process of retrofitting unit tests for an ASP.NET solution written in VB.NET and C#.
The unit tests need to verify the current functionality and act as a check for future breaking changes.
The solution comprises:
1 MVC web project
written in VB.NET (don't ask, it's a legacy thing)
10 other supporting projects, each containing logically grouped functionality
written in C#; each project contains repositories and a DAL
All the classes are tightly coupled, as there is no inversion of control (IoC) implemented anywhere, yet.
Currently, to test a controller there is the following stack:
controller
repository
dal
logging
First question: to unit test this correctly, would I set up one test project and run all tests from it, or should I set up one test project per project, to test the functionality of that DLL only?
Second question: do I need to implement IoC to be able to use Moq?
Third question: is it even possible to refactor IoC into a huge solution like this?
Fourth question: what other options are available to get this done ASAP?
I am in the process of retrofitting unit tests for an ASP.NET solution written in VB.NET and C#. The unit tests need to verify the current functionality and act as a check for future breaking changes.
When working with a large code base that doesn't have unit tests and wasn't written with testing in mind, there is a good chance that in order to write a useful set of unit tests you will have to modify the code - hence you're going to be triggering the very event that you're planning on writing the unit tests to guard against. This is obviously risky, but may not be any riskier than what you're already doing on a day-to-day basis.
There are a number of approaches that you could take (and there's a good chance that this question will be closed as too broad). One approach is to create a good set of integration tests to ensure that the core functionality is working. These tests won't be as fast to run as unit tests, but they will be further decoupled from the legacy code base. This will give you a good safety net for any changes that you need to make as part of introducing unit testing.
If you have an appropriate version of Visual Studio, then you may also be able to use shims (or, if you have funds, Typemock may be an option) to isolate elements of your application when writing your initial tests. So you could, for example, create shims of your DAL to isolate the rest of your code from the database.
First question: to unit test this correctly, would I set up one test project and run all tests from it, or should I set up one test project per project, to test the functionality of that DLL only?
Personally, I prefer to think of each assembly as a testable unit, so I tend to create at least one test project for each assembly containing production code. Whether or not that makes sense, though, depends a bit on what's contained in each of the assemblies... I'd also tend to have at least one test project for integration tests of the top-level project.
Second question: do I need to implement IoC to be able to use Moq?
The short answer is no, but it depends on what your classes do. If you want to test using Moq, then it's certainly easier to do so if your classes support dependency injection, although you don't need to use an IoC container to achieve this. Hand-rolled injection, either through constructors as below or through properties, can form a bridge that allows testing stubs to be injected:
// "Poor man's" dependency injection: tests can supply a stub;
// production code falls back to the default implementation.
public SomeConstructor(ISomeDependency someDependency = null) {
    if (null == someDependency) {
        someDependency = new SomeDependency();
    }
    _someDependency = someDependency;
}
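In a test, a Moq stub then slots straight in (a sketch; names as above):

    var stub = new Mock<ISomeDependency>();
    var sut = new SomeConstructor(stub.Object); // test: stub injected
    var prod = new SomeConstructor();           // production: default used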
Third question: is it even possible to refactor IoC into a huge solution like this?
Yes, it's possible. The bigger question is: is it worth it? You appear to be suggesting a big-bang approach to the migration. If you have a team of developers that don't have much experience in this area, this seems awfully risky. A safer approach might be to target a specific area of the application and migrate that section. If your assemblies are discrete then they should form fairly natural split points in your application. Learn what works and what doesn't, along with what benefits and unexpected pain you're feeling. Use that to inform your decision about how and when to migrate the rest of the code.
Fourth question: what other options are available to get this done ASAP?
As I've said above, I'm not sure that ASAP is really the right approach to take. Working towards unit testing can be done as a slow migration, adding tests as you actually change the code due to business requirements. This helps to ensure that testers are also allocated to catch any errors that you introduce as part of the refactoring that might need to take place to support the testing.
There are some other variations of this question here at SO, but please read the entire question.
When just using fakes, we look at the constructor to see what kind of dependencies a class has, and then create fakes for them accordingly.
Then we write a test for a method by just looking at its contract (method signature). If we can't figure out how to test the method by doing so, shouldn't we rather try to refactor the method (most likely break it up into smaller pieces) than look inside it to figure out how we should test it? In other words, doing so also gives us quality control.
Aren't mocks a bad thing, since they require us to look inside the method that we are going to test - and therefore skip the whole "use the signature as a critique" step?
Update to answer the comment
Say a stub then (just a dummy class providing the requested objects).
A framework like Moq makes sure that method A gets called with the arguments X and Y. And to be able to set up those checks, one needs to look inside the tested method.
Isn't the important thing (the method contract) forgotten when setting up all those checks, as the focus shifts from the method signature/contract to looking inside the method and creating the checks?
Isn't it better to try to test the method by just looking at the contract? After all, when we use the method we'll just look at the contract. So it's quite important that its contract is easy to follow and understand.
This is a bit of a grey area and I think there is some overlap. On the whole, I prefer using mock objects.
I guess some of it depends on how you go about testing code - test first or code first?
If you follow a test driven design plan with objects implementing interfaces then you effectively produce a mock object as you go.
Each test treats the tested object / method as a black box.
It focuses you on writing simpler method code, in that you know what answer you want.
But above all else it allows you to have runtime code that uses mock objects for unwritten areas of the code.
On the macro level it also allows for major areas of the code to be switched at runtime to use mock objects e.g. a mock data access layer rather than one with actual database access.
Fakes are just stupid dummy objects. Mocks enable you to verify that the control flow of the unit is correct (e.g. that it calls the correct functions with the expected arguments). Doing so is very often a good way to test things. An example is that a saveProject() function probably wants to call something like saveToProject() on the objects to be saved. I consider doing this a lot better than saving the project to a temporary buffer and then loading it to verify that everything was fine (this tests more than it should - it also verifies that the saveToProject() implementation(s) are correct).
As for mocks vs. stubs, I usually (not always) find that mocks provide clearer tests and (optionally) more fine-grained control over the expectations. Mocks can be too powerful, though, tying a test so tightly to one implementation that changing the implementation under test makes the test fail even when the result is unchanged.
By just looking at a method/function signature you can test only the output, given some input (with stubs that merely feed you the needed data). While this is OK in some cases, sometimes you need to test what's happening inside that method - you need to test whether it behaves correctly.
string readDoc(string name, IFileManager fileManager) { return fileManager.Read(name).ToString(); }
You can directly test the returned value here, so a stub works just fine.
void saveDoc(Document doc, IFileManager fileManager) { fileManager.Save(doc); }
Here you would much rather test whether the Save method got called with the proper argument (doc). The doc content is not changing, and the fileManager does not output anything. This is because the method under test depends on functionality provided by the interface. And the interface is the contract, so you not only want to test whether your method gives correct results, you also want to test whether it uses the provided contract in the correct way.
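A hedged Moq sketch of that interaction test (Document and IFileManager are the hypothetical types from the snippets above):

    var fileManager = new Mock<IFileManager>();
    var doc = new Document();

    saveDoc(doc, fileManager.Object);

    // Asserts the behaviour: Save was called exactly once, with this doc.
    fileManager.Verify(fm => fm.Save(doc), Times.Once());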
I see it a little differently. Let me explain my view:
I use a mocking framework. When I try to test a class, to ensure it will work as intended, I have to test all the situations that may happen. When my class under test uses other classes, I have to ensure in certain test situations that a specific exception is raised by a used class, or a certain value is returned, and so on... This is hard to simulate with the real implementations of those classes, so I have to write fakes of them. But I think that when I use fakes, the tests are not as easy to understand. In my tests I use the Moq framework and have the setup for the mocks in my test method. When I have to analyse my test method, I can easily see how the mocks are configured and don't have to switch to the code of the fakes to understand the test.
Hope that helps you find your answer...
I plan to introduce a set of standards for writing unit tests into my team. But what to include?
These two posts (Unit test naming best practices and Best practices for file system dependencies in unit/integration tests) have given me some food for thought already.
Other domains that should be covered in my standards are how test classes are set up and how to organize them. For example, if you have a class called OrderLineProcessor there should be a test class called OrderLineProcessorTest. If there's a method called Process() on that class, then there should be a test called ProcessTest (maybe more, to test different states). A skeleton following the convention is sketched below.
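Such a skeleton might look like this (MSTest; names taken from the example above):

    [TestClass]
    public class OrderLineProcessorTest
    {
        [TestMethod]
        public void ProcessTest() { /* happy path */ }

        [TestMethod]
        public void ProcessTest_EmptyOrderLine() { /* one test per state */ }
    }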
Any other things to include?
Does your company have standards for unit testing?
EDIT: I'm using Visual Studio Team System 2008 and I develop in C#.Net
Have a look at Michael Feathers on what a unit test is (or what makes unit tests bad unit tests).
Have a look at the idea of "Arrange, Act, Assert", i.e. the idea that a test does only three things, in a fixed order (a minimal sketch follows the list):
Arrange any input data and processing classes needed for the test
Perform the action under test
Test the results with one or more asserts. Yes, it can be more than one assert, so long as they all work to test the action that was performed.
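A minimal sketch of the pattern (with a hypothetical Calculator class):

    [TestMethod]
    public void Add_TwoNumbers_ReturnsTheirSum()
    {
        // Arrange
        var calculator = new Calculator();

        // Act
        var result = calculator.Add(2, 3);

        // Assert
        Assert.AreEqual(5, result);
    }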
Have a look at Behaviour-Driven Development for a way to align test cases with requirements.
Also, my opinion of standard documents today is that you shouldn't write them unless you have to - there are lots of resources available already written. Link to them rather than rehashing their content. Provide a reading list for developers who want to know more.
You should probably take a look at the "Pragmatic Unit Testing" series. This is the C# version but there is another for Java.
With respect to your spec, I would not go overboard. You have a very good start there - the naming conventions are very important. We also require that the directory structure match the original project. Coverage also needs to extend to boundary cases and illegal values (checking for exceptions). This is obvious but your spec is the place to write it down for that argument that you'll inevitably have in the future with the guy who doesn't want to test for someone passing an illegal value. But don't make the spec more than a few pages or no one will use it for a task that is so context-dependent.
Update: I disagree with Mr. Potato Head about only one assert per Unit Test. It sounds quite fine in theory but, in practice, it leads to either loads of mostly redundant tests or people doing tons of work in setup and tear-down that itself should be tested.
I follow the BDD style of TDD. See:
http://blog.daveastels.com/files/BDD_Intro.pdf
http://dannorth.net/introducing-bdd
http://behaviour-driven.org/Introduction
In short this means that
The tests are not thought of as "tests", but as specifications of the system's behaviour (hereafter called "specs"). The intention of the specs is not to verify that the system works under every circumstance. Their intention is to specify the behaviour and to drive the design of the system.
The spec method names are written as full English sentences. For example, the specs for a ball could include "the ball is round" and "when the ball hits a floor then it bounces" (sketched in code after this list).
There is no forced 1:1 relation between the production classes and the spec classes (and generating a test method for every production method would be insane). Instead there is a 1:1 relation between the behaviour of the system and the specs.
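In C#, the sentence-style names might look like this (a sketch):

    [TestClass]
    public class BallSpec
    {
        [TestMethod]
        public void The_ball_is_round() { /* spec body */ }

        [TestMethod]
        public void When_the_ball_hits_a_floor_then_it_bounces() { /* spec body */ }
    }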
Some time ago I wrote TDD tutorial (where you begin writing a Tetris game using the provided tests) which shows this style of writing tests as specs. You can download it from http://www.orfjackal.net/tdd-tutorial/tdd-tutorial_2008-09-04.zip The instructions about how to do TDD/BDD are still missing from that tutorial, but the example code is ready, so you can see how the tests are organized and write code that passes them.
You will notice that in this tutorial the production classes are named such as Board, Block, Piece and Tetrominoe which are centered around the concepts of a Tetris game. But the test classes are centered around the behaviour of the Tetris game: FallingBlocksTest, RotatingPiecesOfBlocksTest, RotatingTetrominoesTest, FallingPiecesTest, MovingAFallingPieceTest, RotatingAFallingPieceTest etc.
Try to use as few assert statements per test method as possible. This makes sure that the purpose of the test is well-defined.
I know this will be controversial, but don't test the compiler - time spent testing Java Bean accessors and mutators is better spent writing other tests.
Try, where possible, to use TDD instead of writing your tests after your code.
I've found that most testing conventions can be enforced through the use of a standard base class for all your tests - forcing the tester to override methods so that they all have the same name.
I also advocate the Arrange-Act-Assert (AAA) style of testing as you can then generate fairly useful documentation from your tests. It also forces you to consider what behaviour you are expecting due to the naming style.
Another item you can put in your standards is to try to keep your unit tests small - that is, the actual test methods themselves. Unless you are doing a full integration test there usually is no need for large unit tests, say more than 100 lines. I'll give you that much in case you have a lot of setup to get to your one test. However, if you do, you should maybe refactor it.
People also talk about refactoring their code; make sure people realize that unit tests are code too. So refactor, refactor, refactor.
The biggest problem I have seen is that people do not tend to recognize that you want to keep your unit tests light and agile. You don't want a monolithic beast for your tests, after all. With that in mind, if you have a method you are trying to test, you should not test every possible path in one unit test. You should have multiple unit tests to account for every possible path through the method.
Yes, if you are doing your unit tests correctly you should, on average, have more lines of unit test code than application code. While this sounds like a lot of work, it will save you a lot of time in the end when the inevitable business requirement change comes.
Users of full-featured IDEs will find that some of them have quite detailed support for creating tests in a specific pattern. Given this class:
public class MyService {
    public String method1(){
        return "";
    }
    public void method2(){
    }
    public void method3HasAlongName(){
    }
}
When I press Ctrl+Shift+T in IntelliJ IDEA I get this test class after answering one dialog box:
public class MyServiceTest {
    @Test
    public void testMethod1() {
        // Add your code here
    }

    @Test
    public void testMethod2() {
        // Add your code here
    }

    @Test
    public void testMethod3HasAlongName() {
        // Add your code here
    }
}
So you may want to take a close look at tool support before writing your standards.
I use nearly plain English for my unit test function names. It helps to define exactly what they do:
TEST( TestThatVariableFooDoesNotOverflowWhenCalledRecursively )
{
/* do test */
}
I use C++ but the naming convention can be used anywhere.
Make sure to include what is not a unit test. See: What not to test when it comes to Unit Testing?
Include a guideline so integration tests are clearly identified and can be run separately from unit tests. This is important because you can end up with a set of "unit" tests that are really slow if unit tests are mixed with other types of tests.
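With MSTest, for example, a category attribute lets runners filter the slow tests out of the fast suite (the test itself is a hypothetical sketch):

    [TestMethod, TestCategory("Integration")]
    public void SavesOrderToRealDatabase() { /* talks to the actual DB */ }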
Check this for more info on it: How can I improve my junit tests ... especially the second update.
If you are using tools from the JUnit family (OCUnit, shUnit, ...), the names of tests already follow some rules.
For my tests, I use custom Doxygen tags in order to gather their documentation in a specific page.