Is there a good measure of completeness of Unit tests - c#

I have a class that I need to Unit Test.
For background, I'm developing in C# and using NUnit, but my question is more theoretical:
I don't know whether I've written enough test methods and whether I've checked all the scenarios.
Is there a known working method/best practices/collection of rules for that?
Something like
"check every method in your class ...bla bla "
"check all the inserts to DB ...bla bla "
(these are silly examples of possible rules, but if I had something less silly in mind I wouldn't be asking this question)

There are several available metrics for unit testing. Have a look into both code coverage and orthogonal testing.
However, I would say that this is not the best way of addressing the problem. While 100% code coverage is an admirable goal, it can become the sort of metric that obscures the actual quality of the tests.
Personally, I think you would get better results from investigating test-driven development: with this approach you know you have good coverage (both in terms of lines of code and in terms of the functionality of your class), because you write the tests that exercise your class before you write the class methods themselves.
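To make the test-first flow concrete, here is a minimal NUnit sketch. The PriceCalculator class and its ApplyDiscount method are hypothetical stand-ins; the point is only that the test is written first and drives the shape of the code.

    using NUnit.Framework;

    // Written first: this fails to compile (and then fails to pass) until PriceCalculator exists.
    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void ApplyDiscount_TenPercent_ReducesPrice()
        {
            var calculator = new PriceCalculator();

            decimal discounted = calculator.ApplyDiscount(price: 100m, rate: 0.10m);

            Assert.AreEqual(90m, discounted);
        }
    }

    // Written second: the simplest implementation that makes the test pass,
    // then refactored with the test acting as a safety net.
    public class PriceCalculator
    {
        public decimal ApplyDiscount(decimal price, decimal rate)
        {
            return price - (price * rate);
        }
    }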

You might want to look at your test coverage. NCover is a popular code coverage tool that works well alongside NUnit.

You can look into NCover or the Visual Studio code coverage tools, both of which support NUnit.

The measure of how much of your code is exercised by tests is called "code coverage".
As per Wikipedia:
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. In time, the use of code coverage has been extended to the field of digital hardware, the contemporary design methodology of which relies on hardware description languages (HDLs).
Code coverage measurements are given as a percentage. Different teams and projects set their own coverage goals. I don't know if there is an industry "best practice" number, but most of my projects set it at 80%.
For instance, if you are working on a project that has a lot of UI code, chances are the unit test coverage for that is low, but if you're working on a library, chances are every method has a proper unit test.
For .NET, one of the popular tools for code coverage is NCover.

As the others have mentioned, coverage provides one metric by which to measure your tests, but it does not tell you anything about how well those tests exercise your code. Just because a line is executed, it does not mean that all possible permutations of that line have been tested.
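As a tiny illustration (hypothetical Shipping class, NUnit): the single test below executes every line of Cost, so line coverage reports 100%, yet the paid-shipping branch and the boundary at exactly 50 are never checked.

    using NUnit.Framework;

    public static class Shipping
    {
        // One line, two behaviours: the ternary has a free branch and a paid branch.
        public static decimal Cost(decimal orderTotal)
        {
            return orderTotal >= 50m ? 0m : 4.99m;
        }
    }

    [TestFixture]
    public class ShippingTests
    {
        [Test]
        public void LargeOrder_ShipsFree()
        {
            // Executes the line, but exercises only one of its two branches.
            Assert.AreEqual(0m, Shipping.Cost(100m));
        }
    }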
You may find a tool such as Pex useful: it will test your code with various inputs to see what it does in those situations. This will give you good coverage of the code (as it tailors the inputs to exercise all of the possible paths through it), but also good coverage of the possible inputs (such as ensuring your methods are tested with null arguments, or that methods which take lists are tested with empty lists or lists that contain null items, etc.).
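Even without Pex, those input edge cases can be written by hand as ordinary NUnit tests. A sketch, with a hypothetical Totals.Sum helper standing in for your own code:

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    public static class Totals
    {
        // Hypothetical method under test: sums a sequence, treating null as empty.
        public static int Sum(IEnumerable<int> values)
        {
            return values == null ? 0 : values.Sum();
        }
    }

    [TestFixture]
    public class TotalsEdgeCaseTests
    {
        [Test]
        public void Sum_NullSequence_ReturnsZero()
        {
            Assert.AreEqual(0, Totals.Sum(null));
        }

        [Test]
        public void Sum_EmptySequence_ReturnsZero()
        {
            Assert.AreEqual(0, Totals.Sum(new List<int>()));
        }

        [TestCase(new int[] { 1, 2, 3 }, 6)]
        [TestCase(new int[] { -1, 1 }, 0)]
        public void Sum_TypicalSequences_ReturnsTotal(int[] values, int expected)
        {
            Assert.AreEqual(expected, Totals.Sum(values));
        }
    }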
There are other intriguing initiatives, such as mutation testing tools that remove lines of code, recompile, and re-run the tests. If no test fails in this scenario, it is assumed that you have a missing test, as something should depend on that line; otherwise, why is it there? I'll look for a link to that.

Is Pex (and its output) suitable for an enterprise environment?

I love the idea of Pex - autogenerating unit tests through static code analysis - but the tests that are actually generated by the tool are horrible, ugly, tightly coupled to Pex modules, difficult to read and understand etc.
Is a tool like that really suitable (in its current state) for use in an enterprise environment, where the emphasis must be on ease of maintenance?
Or have I misunderstood the intended use of Pex?
Indeed, you have misunderstood the intended use.
Pex is a white-box testing tool. It generates test cases based on the analysis of the code it should test. The reason for this is to detect and test edge cases. So basically, you shouldn't even change the auto-generated tests.
Your normal unit-tests can't be replaced by Pex. It's just an additional tool.
This seems a subjective question...
I'd say that yes, tests written in a given framework/API are often tightly coupled to that framework. Pex's intention is not to generate "readable" tests; it's to ensure code coverage for a given set of constraints. If that's valuable to your product, then it's suitable, and certainly I'd wager that for a given team and a given codebase, this will provide value.
Every enterprise is different, but it's the product and its code that dictates the suitability of a testing tool. I would suggest that the thing to question is the value of Pex for a given codebase, regardless of the organisation in question.
I have used Pex in a large financial institution and would recommend its use, but only for a very specific case. I think Pex is good at what it does (as described elsewhere here, white-box tests to find edge cases), but the tests do not have significant longevity, as they are very tightly coupled.
Basically, Pex is great at generating coverage. If you have no tests and want some fast, use Pex. But then, instead of using it again, I recommend enforcing standards for new code to meet an agreed coverage metric with hand-written tests.
In this way the fragile tests from Pex get replaced over time with tests that are more flexible and of higher quality.
Pex is very useful for testing complex algorithms that do not rely on anything external. For example, it will not help you find edge cases in SQL statements or file access. However, for finding edge cases and increasing code coverage, it is extremely useful in addition to your normal unit tests.
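For readers who have not seen one, a Pex parameterized unit test looks roughly like the sketch below. The attribute and helper names are recalled from Microsoft.Pex.Framework and should be treated as an approximation rather than gospel, and StringSplitter is a hypothetical class under test. Pex explores inputs for the parameterized method and saves the interesting input/output pairs as concrete generated tests.

    using Microsoft.Pex.Framework;                      // assumption: names as I recall them
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical code under test.
    public static class StringSplitter
    {
        public static string[] Split(string input, char separator)
        {
            return input.Split(separator);
        }
    }

    [TestClass]
    [PexClass(typeof(StringSplitter))]
    public partial class StringSplitterPexTests
    {
        // Pex generates concrete test cases for this parameterized method.
        [PexMethod]
        public void SplitNeverReturnsNull(string input, char separator)
        {
            PexAssume.IsNotNull(input);                 // constrain the explored input space
            string[] parts = StringSplitter.Split(input, separator);
            Assert.IsNotNull(parts);
        }
    }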

AssemblyInitialize not measured in Code Coverage

I came across some strange results when using Code Coverage for our unit tests.
In the AssemblyInitialize function we do some initialization work (like AutoMapper, AbstractFactories) and this function is correctly executed.
The strange thing is that Code Coverage shows that there is no coverage for the functions that are called from AssemblyInitialize. Is this by design or am I doing something wrong here?
I would go with "by design"; it seems too specific to be anything else. As someone who is looking into doing something similar for an open-source coverage tool, it would seem odd for this to be an accident, and it would be a very unusual bug.
The TDD purist in me would say this is because the setup/teardown of any type (assembly/class) is not actually part of the test itself and so should not be included in coverage. You should instead have separate, specific tests for that code rather than rely on the test setup/initialization failing.
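For example, the AutoMapper configuration built in AssemblyInitialize can get its own test that calls the same bootstrap code and uses AutoMapper's built-in validity check. The MappingBootstrapper class and the Customer types below are hypothetical stand-ins for whatever your initializer actually wires up:

    using AutoMapper;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical DTO/domain pair so the sketch compiles.
    public class CustomerDto { public int Id { get; set; } public string Name { get; set; } }
    public class Customer    { public int Id { get; set; } public string Name { get; set; } }

    // Hypothetical bootstrap class mirroring what AssemblyInitialize does today.
    public static class MappingBootstrapper
    {
        public static MapperConfiguration CreateConfiguration()
        {
            return new MapperConfiguration(cfg => cfg.CreateMap<CustomerDto, Customer>());
        }
    }

    [TestClass]
    public class MappingBootstrapperTests
    {
        [TestMethod]
        public void AutoMapperConfiguration_IsValid()
        {
            MapperConfiguration configuration = MappingBootstrapper.CreateConfiguration();

            // AutoMapper's own sanity check: throws if any destination member is unmapped.
            configuration.AssertConfigurationIsValid();
        }
    }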
Other .NET tools (dotCover, for one) do coverage by test and 'may' also exclude results gained whilst running such setups; this is conjecture rather than known fact, though.

Which testing method to use?

First of all, I'm new to testing software. I suppose I'm rather green. Still, for this project I need to make sure all the calculations are done right. For the moment I'm using unit tests to test each module in C# to make sure it does what it should do.
I know about some different testing methods (integration testing, unit testing), but I can't seem to find any good book or reference material on testing in general. Most of them focus on one specific area (like unit testing). This makes it really hard (for me) to get a good grasp on how best to test your software, and which method to use.
For example, I'm using GUI elements as well; should I test those? Should I just test them by using them and visually confirming that all is in order? I do know it depends on whether it's critical or not, but at which point do you say "Let's not test that, because it's not really relevant/important"?
So, in summary: how do you choose which testing method is best, and where do you draw the line when using that method to test your software?
There are several kinds of tests, but IMHO the most important are unit tests, component tests, functional tests and end-to-end tests.
Unit tests check whether your classes work as intended (NUnit, in your case). These tests are the base of your testing environment, as they tell you whether your methods work. Your goal is 100% coverage here, as these are the most important part of your tests.
Component tests check how several classes work together in your code. A "component" can be many things, but it's basically more than a unit and less than a whole function. The goal coverage is around 75%.
Functional tests test the actual functionality you implement. For example, if you want a button that saves some input data to a database, that is a piece of the program's functionality, and that is what you test. The goal coverage here is around 50%.
End-to-end tests test your whole application. These can be quite involved, and you likely can't and don't want to test everything; these tests are there to check whether the whole system works. The goal coverage is around 25%.
This is the order of importance too.
There is no such thing as "better" among these, though. Any test you can run to check whether your code works as intended is equally good.
You probably want most of your tests automated, so you can test while you're having a coffee break, or your servers can test everything while you are away from work and you check the results in the morning.
GUI testing is considered the hardest part of testing, and there are several tools that help you with it, for example Selenium for browser GUI testing.
Several architectural patterns, like Model-View-Presenter, try to separate the GUI part of the application for exactly this reason, keeping the GUI as dumb as possible to avoid errors in it. If you can successfully separate your presentation layer, you will be able to mock out the GUI part of the application and simply leave it out of most of the testing process.
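A minimal sketch of that separation, with hypothetical ICalculatorView and CalculatorPresenter types: the presenter holds the logic, the real form stays dumb, and a hand-rolled fake view lets the presenter be unit tested without any GUI framework involved.

    using NUnit.Framework;

    // The view interface the real form implements; it stays as dumb as possible.
    public interface ICalculatorView
    {
        string ResultText { get; set; }
    }

    // The presenter holds the logic that would otherwise live in a click handler.
    public class CalculatorPresenter
    {
        private readonly ICalculatorView _view;

        public CalculatorPresenter(ICalculatorView view)
        {
            _view = view;
        }

        public void Add(int a, int b)
        {
            _view.ResultText = (a + b).ToString();
        }
    }

    // A hand-rolled fake lets us test the presenter with no WinForms code at all.
    public class FakeCalculatorView : ICalculatorView
    {
        public string ResultText { get; set; }
    }

    [TestFixture]
    public class CalculatorPresenterTests
    {
        [Test]
        public void Add_WritesSumToView()
        {
            var view = new FakeCalculatorView();
            var presenter = new CalculatorPresenter(view);

            presenter.Add(2, 3);

            Assert.AreEqual("5", view.ResultText);
        }
    }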
For reference I suggest "Effective Software Testing" by Elfriede Dustin, but I'm not very familiar with the books on the subject; there could be better ones.
It really depends on the software you need to test. If you are mostly interested in the logic behind it (the calculations), then you should write some integration tests that exercise not just separate methods but a workflow from start to finish, emulating what a typical user would do. I can hardly imagine a user calling one particular method; more likely they trigger some specific sequence of methods. If the calculations are right, then the chance is low that the GUI will display them wrong. Besides, automating the GUI is a time-consuming process, and it takes a lot of skill and energy to build and maintain such tests, as every simple change might break everything. My advice is to start by writing integration tests with different values that cover the most common scenarios.
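A sketch of what such a workflow-style test might look like, using a hypothetical Order type in place of your own calculation classes:

    using NUnit.Framework;

    // Minimal hypothetical domain type so the sketch compiles; substitute your own classes.
    public class Order
    {
        private decimal _subtotal;
        private decimal _discountRate;

        public void AddItem(string name, int quantity, decimal unitPrice)
        {
            _subtotal += quantity * unitPrice;
        }

        public void ApplyDiscountCode(string code)
        {
            if (code == "SAVE10") { _discountRate = 0.10m; }
        }

        public decimal Total
        {
            get { return _subtotal * (1 - _discountRate); }
        }
    }

    [TestFixture]
    public class OrderWorkflowTests
    {
        // Exercises a whole user scenario rather than a single method:
        // add items, apply a discount code, then read the final total.
        [Test]
        public void AddItems_ApplyDiscount_TotalIsCorrect()
        {
            var order = new Order();
            order.AddItem("Widget", quantity: 2, unitPrice: 10m);
            order.AddItem("Gadget", quantity: 1, unitPrice: 5m);
            order.ApplyDiscountCode("SAVE10");   // assumed to mean 10% off

            Assert.AreEqual(22.5m, order.Total);
        }
    }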
This answer is specific to the type of application you are involved in - WinForms
For the GUI, use the MVC/MVP pattern. Avoid writing any code in the code-behind file. This will allow you to unit test your UI code. By "UI code" I mean the code that is invoked when a button is clicked, or whatever action needs to be taken when a UI event occurs.
You should be able to unit test each of your class files (except the UI code). This will mainly be state-based testing.
Also, you will be able to write interaction test cases for tests involving multiple class files. This should cover most of your flows.
So there are two things to focus on: state-based testing and interaction testing.
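For the interaction side, a mocking library helps verify that the presenter talks to its collaborators correctly. Here is a sketch using Moq (an assumption; use whichever isolation framework your team prefers) with hypothetical IOrderView, IOrderRepository and OrderPresenter types:

    using Moq;
    using NUnit.Framework;

    // Hypothetical collaborators in a WinForms MVP setup.
    public interface IOrderView
    {
        string Customer { get; }
        decimal Amount { get; }
    }

    public interface IOrderRepository
    {
        void Save(string customer, decimal amount);
    }

    public class OrderPresenter
    {
        private readonly IOrderView _view;
        private readonly IOrderRepository _repository;

        public OrderPresenter(IOrderView view, IOrderRepository repository)
        {
            _view = view;
            _repository = repository;
        }

        // Called from the form's Save button click handler.
        public void SaveClicked()
        {
            _repository.Save(_view.Customer, _view.Amount);
        }
    }

    [TestFixture]
    public class OrderPresenterInteractionTests
    {
        [Test]
        public void SaveClicked_PassesViewValuesToRepository()
        {
            var view = new Mock<IOrderView>();
            view.SetupGet(v => v.Customer).Returns("Acme");
            view.SetupGet(v => v.Amount).Returns(42m);
            var repository = new Mock<IOrderRepository>();

            var presenter = new OrderPresenter(view.Object, repository.Object);
            presenter.SaveClicked();

            // Interaction test: verify the presenter made the expected call on its collaborator.
            repository.Verify(r => r.Save("Acme", 42m), Times.Once());
        }
    }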

How to approach unit testing in a large project

We have a project that is starting to get large, and we need to start applying unit tests as we start to refactor. What is the best way to apply unit tests to a project that already exists? I'm (somewhat) used to doing it from the ground up, where I write the tests in conjunction with the first lines of code onward. I'm not sure how to start when the functionality is already in place. Should I just start writing tests for each method from the repository up? Or should I start from the controllers down?
Update:
To clarify the size of the project: I'm not really sure how to describe it other than to say there are 8 controllers and about 167 files with a .cs extension, all done over about 7 developer-months.
As you seem to be aware, retrofitting testing into an existing project is not easy. Your method of writing tests as you go is the better way. Your problem is one of both process and technology: tests must be required by everyone, or no one will use them.
The recommendation I've heard and agree with is that you should not attempt to wrap tests around an existing codebase all at once; you'll never finish. Start by working testing into your bugfix process: every fixed bug gets a test. This will start to work testing into your existing code over time. New code must always have tests, of course. Eventually you'll get the coverage up to a reasonable percentage, but it will take time.
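In practice, "every fixed bug gets a test" just means each fix ships with a test that reproduces the original report. A sketch with hypothetical names and bug number:

    using NUnit.Framework;

    // Minimal stand-in so the sketch compiles; in reality this is your existing class.
    public class InvoiceCalculator
    {
        public decimal LineTotal(decimal unitPrice, int quantity)
        {
            return unitPrice * quantity;
        }
    }

    [TestFixture]
    public class InvoiceCalculatorRegressionTests
    {
        // Added alongside the fix for (hypothetical) bug #1423:
        // a zero-quantity line used to be charged at the full unit price.
        [Test]
        public void LineTotal_ZeroQuantity_IsZero()
        {
            var calculator = new InvoiceCalculator();

            Assert.AreEqual(0m, calculator.LineTotal(unitPrice: 9.99m, quantity: 0));
        }
    }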
One good book I've had recommended to me is Working Effectively With Legacy Code by Michael C. Feathers. The title doesn't really demonstrate it, but working testing into an existing codebase is a major subject of the book.
There are lots of approaches to fitting tests around an existing codebase. Unit tests are not necessarily the most productive way to start. If you have a large amount of code written then you might want to think about functional and integration tests before you work down to the level of unit tests. Those higher level tests will help give you broad assurance that your product continues to work while you make changes to improve the structure and retrofit unit tests.
One of the practices that non-test-first organizations use that I recommend highly in your situation is this: Have someone other than the author of the original code section write the unit tests for that section. This gets you some level of cross-training and sanity checking, and it also helps ensure that you don't preserve assumptions which will do damage to your code overall.
Other than that, I'll second the recommendation for Michael Feathers' book.
For a legacy project with a decently sized code base, unit testing everything may not be a justifiable use of effort due to budgetary constraints, etc. Based on my reading on this subject, I would suggest:
Every bug that has leaked to the QA, release or production environment is a candidate for writing unit test case(s) alongside fixing the bug.
Use source control to find out which sections/files of your code base are changing more frequently than others. Bring those sections/files under unit test coverage.
New story development should have meaningful unit test cases written against it.
Keep monitoring unit test coverage to spot any downward trend in a particular area of the code base; such an area needs you to zoom in and review whether unit test coverage is losing its effectiveness.
P.S.: I have added Michael Feathers book to my reading list, thanks for suggesting it.

Generating tests from run-time analysis

We have a large body of legacy code, several portions of which are scheduled for refactoring or replacement. We wish to optimise parts that currently impact on the user-experience, facilitate reuse in a new product being planned, and hopefully improve maintainability too.
We have quite good/comprehensive functional tests for an existing product. These are a mixture of automated and manually-driven GUI tests, but they can take a developer more than half a day to run fully. The "low-level domain logic" has a good suite of unit tests (NUnit) with good coverage. Unfortunately, the remainder of the code has no unit tests (or, at least, no worthy unit tests).
What I'd like to find is a tool that automatically generates unit tests for specific methods/classes and maybe specific interfaces based on their use and behaviour in the functional tests. These unit tests would be invaluable for refactoring, and would also be run as part of our C.I. system to detect regressions much earlier than is currently happening (and to localise regressions much better than "button X doesn't work.").
Do any such tools exist? Do you have any recommendations for me?
I've come across Parasoft .TEST, which looks like it might do what I want. Do you have any comments on that, with respect to my situation?
I don't think something that just generates test code from static analysis, à la NStub, is useful here. I suppose it is actually the generation of representative test data that is really important.
Please ignore the merits, or lack of, of automated test generation - it is not something I'd usually advocate. (Not least because you get tests that pass for broken code!)
Try Pex:
Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage. Pex performs a systematic analysis, hunting for boundary conditions, exceptions and assertion failures, which you can debug right away. Pex enables Parameterized Unit Testing, an extension of Unit Testing that reduces test maintenance costs.
Well, you could look at Pex - but I believe it invents its own data (it doesn't watch your existing tests, AFAIK).
