How to unit test opaque code? - C#

I'm using some old C# code (specifically this Fortune's Voronoi graph algorithm) in a Unity3D project and I wanted to update it to use proper generics, refactor it, and generally clean things up.
Ideally, I'd do this without breaking anything; the code works and its implementation of the algorithm is sound. Unit tests would obviously help me refactor this without screwing it up.
Unfortunately I really don't understand the math or the algorithm, and the code is dense and comment-free.
How can I write unit tests for code like this?

Unit testing is all about input and output of methods.
So you could just single out methods, execute them with several sets of parameters, and store the result.
Then in your Unit Tests, you execute the same methods, with the same sets of parameters, and you know what to expect as output. If the output changes, you broke something.
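For example, a characterization test pinning down the current Voronoi output might look roughly like this. It's only a sketch: the Fortune.ComputeVoronoiGraph call, the Vector type, and the asserted counts stand in for whatever the real entry point and its recorded output turn out to be.

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class VoronoiCharacterizationTests
{
    [Test]
    public void KnownInputStillProducesTheRecordedGraph()
    {
        // Fixed, arbitrary input points.
        var points = new List<Vector>
        {
            new Vector(0, 0), new Vector(1, 0), new Vector(0, 1), new Vector(5, 3)
        };

        // Run the legacy implementation once, write down what it produced,
        // and assert those recorded values here. Any refactoring that changes
        // the output is then flagged immediately.
        var graph = Fortune.ComputeVoronoiGraph(points);

        Assert.AreEqual(4, graph.Vertices.Count); // value recorded from the original run
        Assert.AreEqual(5, graph.Edges.Count);    // value recorded from the original run
    }
}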

Related

Can I exclude part of a method from code coverage?

I suspect the answer is no, but I'll ask anyway...
TL;DR
I know I can exclude a class or method from coverage analysis with the [ExcludeFromCodeCoverage] attribute, but is there a way to exclude only part of a method?
Concrete example
I have a method that lazily generates a sequence of int.MaxValue elements:
private static IEnumerable<TElement> GenerateIterator<TElement>(Func<int, TElement> generator)
{
    for (int i = 0; i < int.MaxValue; i++)
    {
        yield return generator(i);
    }
}
In practice, it's never fully enumerated, so the end of the method is never reached. Because of that, DotCover considers that 20% of the method is not covered, and it highlights the closing brace as uncovered (which corresponds to return false in the generated MoveNext method).
I could write a test that consumes the whole sequence, but it takes a very long time to run, especially with coverage enabled.
So I'd like to find a way to tell DotCover that the very last instruction doesn't need to be covered.
Note: I know I don't really need to have all the code covered by unit tests; some pieces of code can't or don't need to be tested, and I usually exclude those with the [ExcludeFromCodeCoverage] attribute. But I like to have 100% reported coverage for the code that I do test, because it makes it easier to spot untested parts of the code. Having a method with 80% coverage when you know there is nothing more to test in it is quite annoying...
No, there is no way to exclude "part of a method" from coverage analysis with dotCover.
In the general sense you have a couple of options:
Extract the uncovered part into its own method, so you can properly ignore that method from analysis
Ignore the problem
In this case there may be a third option. Since your test code exercises the majority of your method, perhaps you should just write a test method that makes sure the code runs to completion?
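For the first option, the split could look roughly like this for the iterator from the question; [ExcludeFromCodeCoverage] is the attribute already mentioned, while the Sequence class and Generate wrapper are made-up names, so treat this as a sketch rather than the one right shape:

using System;
using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis;

public static class Sequence
{
    // Covered: argument validation and anything else worth measuring lives here.
    public static IEnumerable<TElement> Generate<TElement>(Func<int, TElement> generator)
    {
        if (generator == null) throw new ArgumentNullException(nameof(generator));
        return GenerateIterator(generator);
    }

    // Excluded: the loop that is never enumerated to completion no longer
    // drags down the coverage figure of the method you actually care about.
    [ExcludeFromCodeCoverage]
    private static IEnumerable<TElement> GenerateIterator<TElement>(Func<int, TElement> generator)
    {
        for (int i = 0; i < int.MaxValue; i++)
        {
            yield return generator(i);
        }
    }
}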
First and foremost, while "code coverage" can be an important metric, one must realize that 100% code coverage just might not be possible. It is one of those metrics you aspire to and get as close to as you reasonably can, knowing you may never reach it.
OTOH, don't go crazy trying to get 100% code coverage. More importantly: is your code readable? Is it testable (I presume so, since you're looking at code coverage)? Is it maintainable? Is it SOLID? Do you have passing unit, integration, and end-to-end tests? These things are more important than achieving 100% code coverage. What code coverage will tell you is how extensive your testing is (I'm not sure if the built-in code coverage analysis engine counts only unit tests or all types of tests when calculating its statistics), which gives you an indication of whether or not you have enough tests. It will tell you how extensive your tests are (i.e. how many lines of code are executed by your tests), but it won't tell you whether your tests are any good (i.e. whether they really test what needs to be tested to ensure your application is working correctly).
Anyway, this may not be an answer, but food for thought.

Writing unit tests in my compiler (which generates IL)

I'm writing a Tiger compiler in C# and I'm going to translate the Tiger code into IL.
While implementing the semantic check of every node in my AST, I created lots of unit tests for this. That is pretty simple, because my CheckSemantic method looks like this:
public override void CheckSemantics(Scope scope, IList<Error> errors) {
    ...
}
so, if I want to write some unit test for the semantic check of some node, all I have to do is build an AST, and call that method. Then I can do something like:
Assert.That(errors.Count == 0);
or
Assert.That(errors.Count == 1);
Assert.That(errors[0] is UnexpectedTypeError);
Assert.That(scope.ExistsType("some_declared_type"));
but I'm starting the code generation phase now, and I don't know what good practice looks like when writing unit tests for that phase.
I'm using the ILGenerator class. I've thought about the following:
1. Generate the code for the sample program I want to test
2. Save the generated code as test.exe
3. Execute test.exe and store the output in results
4. Assert against results
but I'm wondering if there is a better way of doing it?
That's exactly what we do on the C# compiler team to test our IL generator.
We also run the generated executable through ILDASM and verify that the IL is produced as expected, and run it through PEVERIFY to ensure that we're generating verifiable code. (Except of course in those cases where we are deliberately generating unverifiable code.)
I've created a post-compiler in C# and I used this approach to test the mutated CIL:
Save the assembly in a temp file, that will be deleted after I'm done with it.
Use PEVerify to check the assembly; if there's a problem I copy it to a known place for further error analysis.
Test the assembly contents. In my case I'm mostly loading the assembly dynamically in a separate AppDomain (so I can tear it down later) and exercising a class in there (so it's like a self-checking assembly: here's a sample implementation).
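A condensed sketch of that flow, assuming a .NET Framework test host (AppDomains can't be unloaded on .NET Core/5+), that peverify.exe is reachable on the PATH, and that the failed-assembly folder is just an example location:

using System;
using System.Diagnostics;
using System.IO;

public static class MutatedAssemblyChecks
{
    public static void Check(string assemblyPath)
    {
        // 1. Run PEVerify against the temp assembly; keep a copy for analysis if it fails.
        var peverify = Process.Start(new ProcessStartInfo("peverify.exe", "\"" + assemblyPath + "\"")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        });
        string verifierOutput = peverify.StandardOutput.ReadToEnd();
        peverify.WaitForExit();
        if (peverify.ExitCode != 0)
        {
            File.Copy(assemblyPath, Path.Combine(@"C:\FailedAssemblies", Path.GetFileName(assemblyPath)), true);
            throw new Exception("PEVerify failed:" + Environment.NewLine + verifierOutput);
        }

        // 2. Run the self-checking assembly in a separate AppDomain so it can be torn down afterwards.
        var domain = AppDomain.CreateDomain("MutatedAssemblyTest");
        try
        {
            int exitCode = domain.ExecuteAssembly(assemblyPath);
            if (exitCode != 0)
            {
                throw new Exception("Self-checking assembly reported failure: " + exitCode);
            }
        }
        finally
        {
            AppDomain.Unload(domain);
        }
    }
}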
I've also given some ideas on how to scale integration tests in this answer.
You can think of testing as doing two things:
letting you know if the output has changed
letting you know if the output is incorrect
Determining if something has changed is often considerably faster than determining if something is incorrect, so it can be a good strategy to run change-detecting tests more frequently than incorrectness-detecting tests.
In your case you don't need to run the executables produced by your compiler every time if you can quickly determine that the executable has not changed since a known good (or assumed good) copy of the same executable was produced.
You typically need to do a small amount of manipulation on the output that you're testing to eliminate differences that are expected (for example setting embedded dates to a fixed value), but once you have that done, change-detecting tests are easy to write because the validation is basically a file comparison: Is the output the same as the last known good output? Yes: Pass, No: Fail.
So the point is that if you see performance problems with running the executables produced by your compiler and detecting changes in the output of those programs, you can choose to run tests that detect changes a stage earlier by comparing the executables themselves.
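One possible shape for such a change-detecting test: disassemble the freshly built executable, strip out the lines that legitimately differ between builds, and compare the rest against a stored known-good dump. This is only a sketch; the ildasm invocation and the particular lines being masked are assumptions to adapt to your own toolchain.

using System.Diagnostics;
using System.IO;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class ChangeDetectionTests
{
    [Test]
    public void GeneratedExecutableMatchesKnownGoodIl()
    {
        string actual = NormalizedIlDump("test.exe");
        string expected = File.ReadAllText("test.exe.il.approved"); // known-good dump kept in source control

        Assert.AreEqual(expected, actual,
            "IL output changed. If the change is intentional, update the approved file.");
    }

    private static string NormalizedIlDump(string exePath)
    {
        var startInfo = new ProcessStartInfo("ildasm.exe", "/text \"" + exePath + "\"")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var process = Process.Start(startInfo))
        {
            string raw = process.StandardOutput.ReadToEnd();
            process.WaitForExit();

            // Drop lines that change on every build, e.g. the module version id.
            var stableLines = raw.Split('\n')
                                 .Where(line => !line.Contains("// MVID:") &&
                                                !line.Contains("// Image base:"));
            return string.Join("\n", stableLines);
        }
    }
}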

How does TDD work when there can be millions of test cases for a piece of production functionality?

In TDD, you pick a test case and implement that test case then you write enough production code so that the test passes, refactor the codes and again you pick a new test case and the cycle continues.
The problem I have with this process is that TDD says to write only enough code to pass the test you just wrote. What I mean is: if a method can have, say, a million possible test cases, what do you do?! Obviously you're not going to write a million test cases?!
Let me explain what I mean more clearly with the example below:
internal static List<ulong> GetPrimeFactors(ulong number)
{
    var result = new List<ulong>();
    while (number % 2 == 0)
    {
        result.Add(2);
        number = number / 2;
    }
    ulong divisor = 3;
    while (divisor <= number)
    {
        if (number % divisor == 0)
        {
            result.Add(divisor);
            number = number / divisor;
        }
        else
        {
            divisor += 2;
        }
    }
    return result;
}
The above code returns all the prime factors of a given number. ulong has 64 bits, which means it can accept values between 0 and 18,446,744,073,709,551,615!
So, how does TDD work when there can be millions of test cases for a piece of production functionality?!
I mean, how many test cases are enough for me to say I used TDD to arrive at this production code?
The TDD idea that you should only write enough code to pass your tests seems wrong to me, as the example above shows.
When is enough enough?
My own thought is to pick only some test cases, e.g. the upper bound, the lower bound, and a few more, say 5 test cases in total, but that's not TDD, is it?
Many thanks for your thoughts on TDD for this example.
It's an interesting question, related to the idea of falsifiability in epistemology. With unit tests, you are not really trying to prove that the system works; you are constructing experiments which, if they fail, will prove that the system doesn't work in a way consistent with your expectations/beliefs. If your tests pass, you do not know that your system works, because you may have forgotten some edge case which is untested; what you know is that as of now, you have no reason to believe that your system is faulty.
The classical example in history of sciences is the question "are all swans white?". No matter how many different white swans you find, you can't say that the hypothesis "all swans are white" is correct. On the other hand, bring me one black swan, and I know the hypothesis is not correct.
A good TDD unit test is along these lines; if it passes, it won't tell you that everything is right, but if it fails, it tells you where your hypothesis is incorrect. In that frame, testing for every number isn't that valuable: one case should be sufficient, because if it doesn't work for that case, you know something is wrong.
Where the question is interesting though is that unlike for swans, where you can't really enumerate over every swan in the world, and all their future children and their parents, you could enumerate every single integer, which is a finite set, and verify every possible situation. Also, a program is in lots of ways closer to mathematics than to physics, and in some cases you can also truly verify whether a statement is true - but that type of verification is, in my opinion, not what TDD is going after. TDD is going after good experiments which aim at capturing possible failure cases, not at proving that something is true.
You're forgetting step three:
Red
Green
Refactor
Writing your test cases gets you to red.
Writing enough code to make those test cases pass gets you to green.
Generalizing your code to work for more than just the test cases you wrote, while still not breaking any of them, is the refactoring.
You appear to be treating TDD as if it is black-box testing. It's not. If it were black-box testing, only a complete (millions of test cases) set of tests would satisfy you, because any given case might be untested, and therefore the demons in the black box would be able to get away with a cheat.
But it isn't demons in the black box in your code. It's you, in a white box. You know whether you're cheating or not. The practice of Fake It Til You Make It is closely associated with TDD, and sometimes confused with it. Yes, you write fake implementations to satisfy early test cases - but you know you're faking it. And you also know when you have stopped faking it. You know when you have a real implementation, and you've gotten there by progressive iteration and test-driving.
So your question is really misplaced. For TDD, you need to write enough test cases to drive your solution to completion and correctness; you don't need test cases for every conceivable set of inputs.
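As a sketch of that progression for the factoring example (my illustration, with a made-up PrimeMath class holding the method from the question): the first test can be passed by an obvious fake, and it's the next test that forces the generalization.

using NUnit.Framework;

[TestFixture]
public class PrimeFactorsTddSteps
{
    // Test 1 (red, then green): at this point a deliberate fake such as
    //     return new List<ulong> { 2, 2, 2 };
    // is enough to pass, and you know it's a fake.
    [Test]
    public void EightIsTwoTimesTwoTimesTwo()
    {
        CollectionAssert.AreEqual(new ulong[] { 2, 2, 2 }, PrimeMath.GetPrimeFactors(8));
    }

    // Test 2 (red again): the fake can no longer pass, which drives the real
    // trial-division loop; then you refactor while both tests stay green.
    [Test]
    public void TwentyOneIsThreeTimesSeven()
    {
        CollectionAssert.AreEqual(new ulong[] { 3, 7 }, PrimeMath.GetPrimeFactors(21));
    }
}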
From my POV the refactoring step doesn't seem to have taken place on this piece of code...
In my book TDD does NOT mean to write testcases for every possible permutation of every possible input/output parameter...
BUT to write all the test cases needed to ensure that it does what it is specified to do, i.e. for such a method all boundary cases plus a test which randomly picks a number from a list of numbers with known correct results. If need be you can always extend this list to make the test more thorough...
TDD only works in real world if you don't throw common sense out the window...
As to
Only write enough code to pass your test
in TDD this refers to "non-cheating programmers"... IF you have one or more "cheating programmer" who for example just hardcode the "correct result" of the testcases into the method I suspect you have a much bigger problem on your hands than TDD...
BTW "Testcase construction" is something you get better at the more you practice it - there is no book/guide that can tell you which testcases are best for any given situation upfront... experience pays off big when it comes to constructing testcases...
TDD does permit you to use common sense if you want to. There's no point defining your version of TDD to be stupid, just so that you can say "we're not doing TDD, we're doing something less stupid".
You can write a single test case that calls the function under test more than once, passing in different arguments. This prevents "write code to factorize 1", "write code to factorize 2", "write code to factorize 3" being separate development tasks.
How many distinct values to test really depends on how much time you have to run the tests. You want to test anything that might be a corner case (so in the case of factorization at least 0, 1, 2, 3, LONG_MAX+1 since it has the most factors, whichever value has the most distinct factors, a Carmichael number, and a few perfect squares with various numbers of prime factors), plus as big a range of values as you can in the hope of covering something that you didn't realise was a corner case, but is. This may well mean writing the test, then writing the function, then adjusting the size of the range based on its observed performance.
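A sketch of what that case list could look like for the method from the question (again assuming it is reachable as a made-up PrimeMath.GetPrimeFactors); the dictionary is the part you keep extending. Note that 0 is deliberately left out: as written, the implementation in the question appears never to terminate for it, which is exactly the kind of thing a boundary test flushes out.

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class PrimeFactorsBoundaryTests
{
    [Test]
    public void KnownValuesFactorCorrectly()
    {
        var cases = new Dictionary<ulong, ulong[]>
        {
            { 1, new ulong[] { } },                                                 // lower boundary: no prime factors
            { 2, new ulong[] { 2 } },                                               // smallest prime
            { 3, new ulong[] { 3 } },
            { 9223372036854775808, Repeat(2, 63) },                                 // 2^63: the most prime factors a ulong can have
            { ulong.MaxValue, new ulong[] { 3, 5, 17, 257, 641, 65537, 6700417 } }, // upper boundary
            { 600851475143, new ulong[] { 71, 839, 1471, 6857 } },                  // an arbitrary known composite
        };

        foreach (var pair in cases)
        {
            CollectionAssert.AreEqual(pair.Value, PrimeMath.GetPrimeFactors(pair.Key),
                "Wrong factorization for " + pair.Key);
        }
    }

    private static ulong[] Repeat(ulong value, int count)
    {
        var factors = new ulong[count];
        for (int i = 0; i < count; i++)
        {
            factors[i] = value;
        }
        return factors;
    }
}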
You're also allowed to read the function specification, and implement the function as if more values are tested than actually will be. This doesn't really contradict "only implement what's tested", it just acknowledges that there isn't enough time before ship date to run all 2^64 possible inputs, and so the actual test is a representative sample of the "logical" test that you'd run if you had time. You can still code to what you want to test, rather than what you actually have time to test.
You could even test randomly-selected inputs (common as part of "fuzzing" by security analysts), if you find that your programmers (i.e. yourself) are determined to be perverse, and keep writing code that only solves the inputs tested, and no others. Obviously there are issues around the repeatability of random tests, so use a PRNG and log the seed. You see a similar thing with competition programming, online judge programs, and the like, to prevent cheating. The programmer doesn't know exactly which inputs will be tested, so must attempt to write code that solves all possible inputs. Since you can't keep secrets from yourself, random input does the same job. In real life programmers using TDD don't cheat on purpose, but might cheat accidentally because the same person writes the test and the code. Funnily enough, the tests then miss the same difficult corner cases that the code does.
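A repeatable randomized test along those lines might look like this sketch (again against the made-up PrimeMath.GetPrimeFactors): the property checked is that the factors are prime and multiply back to the input, and the logged seed makes any failure reproducible.

using System;
using NUnit.Framework;

[TestFixture]
public class PrimeFactorsRandomizedTests
{
    [Test]
    public void RandomInputsFactorIntoPrimesThatMultiplyBackToTheInput()
    {
        int seed = Environment.TickCount;
        TestContext.WriteLine("Random seed: " + seed); // log the seed so a failure can be replayed
        var rng = new Random(seed);

        for (int i = 0; i < 100; i++)
        {
            // Keep inputs small enough that trial division stays fast.
            ulong number = (ulong)rng.Next(1, 1000000);

            var factors = PrimeMath.GetPrimeFactors(number);

            ulong product = 1;
            foreach (ulong factor in factors)
            {
                Assert.IsTrue(IsPrime(factor), factor + " is not prime (seed " + seed + ")");
                product *= factor;
            }
            Assert.AreEqual(number, product, "Factors don't multiply back to the input (seed " + seed + ")");
        }
    }

    private static bool IsPrime(ulong n)
    {
        if (n < 2) return false;
        for (ulong d = 2; d * d <= n; d++)
        {
            if (n % d == 0) return false;
        }
        return true;
    }
}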
The problem is even more obvious with a function that takes a string input, there are far more than 2^64 possible test values. Choosing the best ones, that is to say ones the programmer is most likely to get wrong, is at best an inexact science.
You can also let the tester cheat, moving beyond TDD. First write the test, then write the code to pass the test, then go back and write more white box tests, that (a) include values that look like they might be edge cases in the implementation actually written; and (b) include enough values to get 100% code coverage, for whatever code coverage metric you have the time and willpower to work to. The TDD part of the process is still useful, it helps write the code, but then you iterate. If any of these new tests fail you could call it "adding new requirements", in which case I suppose what you're doing is still pure TDD. But it's solely a question of what you call it, really you aren't adding new requirements, you're testing the original requirements more thoroughly than was possible before the code was written.
When you write a test you should take meaningful cases, not every case. Meaningful cases include general cases, corner cases...
You just CAN'T write a test for every single case (otherwise you could just put every value and its answer in a table, and then you'd be 100% sure your program works :P).
Hope that helps.
That's sort of the first question you've got for any testing. TDD is of no importance here.
Yes, there are lots and lots of cases; moreover, there are combinations and combinations of cases if you start building the system. It will indeed lead you to a combinatorial explosion.
What to do about that is a good question. Usually, you choose equivalence classes for which your algorithm will probably work the same, and test one value from each class.
The next step would be to test boundary conditions (remember, the two most frequent errors in CS are off-by-one errors).
Next... Well, for all practical reasons, it's ok to stop here. Still, take a look at these lecture notes: http://www.scs.stanford.edu/11au-cs240h/notes/testing.html
PS. By the way, using TDD "by the book" for math problems is not a very good idea. Kent Beck, in his TDD book, demonstrates that by arriving at just about the worst possible implementation of a function calculating Fibonacci numbers. If you know a closed form, or have an article describing a proven algorithm, just do the sanity checks described above and skip the whole TDD refactoring cycle; it'll save you time.
PPS. Actually, there's a nice article which (surprise!) mentions both the Fibonacci problem and the problem you have with TDD.
There aren't millions of test cases. Only a few. You might like to try Pex, which will let you discover the distinct real test cases in your algorithm. Of course, you only need to test those.
I've never done any TDD, but what you're asking about isn't about TDD: It is about how to write a good test suite.
I like to design models (on paper or in my head) of all the states each piece of code can be in. I consider each line as if it were a part of a state machine. For each of those lines, I determine all the transitions that can be made (execute the next line, branch or not branch, throw an exception, overflow any of the sub calculations in the expression, etc).
From there I've got a basic matrix for my test cases. Then I determine each boundary condition for each of those state transitions, and any interesting mid-points between each of those boundaries. Then I've got the variations for my test cases.
From here I try to come up with interesting and different combinations of flow or logic - "This if statement, plus that one - with multiple items in the list", etc.
Since code is a flow, you often can't interrupt it in the middle unless it makes sense to insert a mock for an unrelated class. In those cases I've often reduced my matrix quite a bit, because there are conditions you just can't hit, or because the variation becomes less interesting by being masked out by another piece of logic.
After that, I'm about tired for the day, and go home :) And I probably have about 10-20 test cases per well-factored and reasonably short method, or 50-100 per algorithm/class. Not 10,000,000.
I probably come up with too many uninteresting test cases, but at least I usually overtest rather than undertest. I mitigate this by trying to factor my test cases well to avoid code duplication.
Key pieces here:
Model your algorithms/objects/code, at least in your head. Your code is more of a tree than a script
Exhaustively determine all the state transitions within that model (each operation that can be executed independently, and each part of each expression that gets evaluated)
Utilize boundary testing so you don't have to come up with infinite variations
Mock when you can
And no, you don't have to write up FSM drawings, unless you have fun doing that sort of thing. I don't :)
What you usually do is test against boundary conditions, and a few random conditions.
For example: ulong.MinValue, ulong.MaxValue, and some values in between. Why are you even writing a GetPrimeFactors? Do you want to calculate prime factors in general, or are you writing it to do something specific? Test for the reason you're writing it.
What you could also do is assert on result.Count instead of all the individual items. If you know how many items you're supposed to get, and some specific cases, you can still refactor your code, and if those cases and the total count stay the same, you can assume the function still works.
If you really want to test that much, you could also look into white-box testing. For example, Pex and Moles are pretty good.
TDD is not a way to check that a function/program works correctly on every permutation of inputs possible. My take on it is that the probability that I write a particular test-case is proportional to how uncertain I am that my code is correct in that case.
This basically means I write tests in two scenarios: 1) some code I've written is complicated or complex and/or has too many assumptions and 2) a bug happens in production.
Once you understand what causes a bug it is generally very easy to codify in a test case. In the long term, doing this produces a robust test suite.

How can I ignore some unit-tests without lowering the code-coverage percentage?

I have one class which talks to the database.
I have integration tests which talk to the DB and assert the relevant changes. But I want those tests to be ignored when I commit my code, because I do not want them to be run automatically later on.
(I only use them at development time for now.)
When I put the [Ignore] attribute on them they are not run, but code coverage drops dramatically.
Is there a way to keep those tests but not have them run automatically
on the build machine in a way that the fact that they are ignored does
not influence code-coverage percentage?
Whatever code coverage tool you use most likely has some kind of CoverageIgnoreAttribute or something along those lines (at least the ones I've used do) so you just place that on the method block that gets called from those unit tests and you should be fine.
What you request seems not to make sense. Code coverage is measured by executing your tests and logging which statements/conditions etc. are executed. If you disable your tests, nothing gets executed and your code coverage goes down.
TestNG has groups, so you can specify that only some groups run automatically and keep the others for use outside of that. You didn't specify your unit-testing framework, but it might have something similar.
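The question doesn't name the framework, but in NUnit, for example, categories give a comparable split between what runs automatically and what you only run by hand (what happens to the coverage number then still depends on what the coverage run itself executes). A sketch:

using NUnit.Framework;

[TestFixture]
public class CustomerRepositoryIntegrationTests
{
    // Tagged instead of [Ignore]d: the test still exists and still runs locally,
    // but the build machine filters the category out.
    [Test, Category("Integration")]
    public void SavingACustomerWritesARow()
    {
        // ... talks to the real database ...
    }
}

// On the build server, run everything except that category, e.g. with the
// NUnit 3 console runner:
//   nunit3-console MyTests.dll --where "cat != Integration"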
I do not know if this is applicable to your situation. But spontaneously I am thinking of a setup where you have two solution files (.sln), one with unit/integration tests and one without. The two solutions share the same code and project files, with the exception that your development/testing solution includes your unit tests (which are built and run at build time) and the other solution doesn't. Both solutions should be under source control, but only the one without unit tests is built by the build server.
This kind of setup should not need you to change existing code (too much). Which I would prefer over rewriting code to fit your test setup.

Does Unit Testing make Debug.Assert() unnecessary?

It's been a while since I read McConnell's "Code Complete". Now I'm reading it again in Hunt & Thomas' "The Pragmatic Programmer": use assertions! Note: not unit-test assertions, I mean Debug.Assert().
Following the SO questions When should I use Debug.Assert()? and When to use assertion over exceptions in domain classes, assertions are useful for development, because "impossible" situations can be found quite fast. And it seems that they are commonly used. As far as I understand assertions, in C# they are often used for checking input variables for "impossible" values.
To keep unit tests as concise and isolated as possible, I feed classes and methods nulls and "impossible" dummy input (like an empty string).
Such tests are explicitly documenting, that they don't rely on some specific input. Note: I am practicing what Meszaros' "xUnit Test Patterns" is describing as Minimal Fixture.
And that's the point: If I would have an assertions guarding these inputs, they would blow up my unit tests.
I like the idea of assertive programming, but on the other hand I don't need to force it. Currently I can't think of any use for Debug.Assert(). Maybe there is something I'm missing? Do you have any suggestions for where they could be really useful? Maybe I just overestimate the usefulness of assertions? Or maybe my way of testing needs to be revisited?
Edit: Best practice for debug Asserts during Unit testing is very similar, but it does not answer the question which bothers me: should I care about Debug.Assert() in C# if I test like I have described? If yes, in which situations is it really useful? From my current point of view, such unit tests would make Debug.Assert() unnecessary.
Another point: if you really think this is a duplicate question, just post a comment.
In theory, you're right - exhaustive testing makes asserts redundant. In theory. In practice, they're still useful for debugging your tests, and for catching attempts by future developers to use interfaces in ways that don't match their intended semantics.
In short, they just serve a different purpose from unit tests. They're there to catch mistakes that by their very nature aren't going to be made when writing unit tests.
I would recommend keeping them, since they offer another level of protection from programmer mistakes.
They're also a local error protection mechanism, whereas unit tests are external to the code being tested. It's far easier to "inadvertently" disable unit tests when under pressure than it is to disable all the assertions and runtime checks in a piece of code.
I generally see asserts being used for sanity checks on internal state rather than things like argument checking.
IMO the inputs to a solid API should be guarded by checks that remain in place regardless of the build type. For example, if a public method expects an argument that is a number in between 5 and 500, it should be guarded with an ArgumentOutOfRangeException. Fail fast and fail often using exceptions as far as I'm concerned, particularly when an argument is pushed somewhere and is used much later.
However, in places where internal, transient state is being sanity-checked (e.g. checking that some intermediate state is within reasonable bounds during a loop), it seems the Debug.Assert is more at home. What else are you meant to do when your algorithm has gone wrong despite having valid arguments passed to it? Throw an EpicFailException? :) I think this is where Debug.Assert is still useful.
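A small sketch of that split, with made-up names: the public boundary gets a real exception that survives a release build, while the mid-loop sanity check is a Debug.Assert that compiles away when DEBUG is not defined.

using System;
using System.Diagnostics;

public class SlotScheduler
{
    // Public API boundary: guarded by exceptions in every build configuration.
    public int[] BuildSlots(int slotCount)
    {
        if (slotCount < 5 || slotCount > 500)
        {
            throw new ArgumentOutOfRangeException(nameof(slotCount), "Expected a value between 5 and 500.");
        }

        var slots = new int[slotCount];
        int cursor = 0;
        for (int i = 0; i < slotCount; i++)
        {
            cursor = NextCursor(cursor, slotCount);

            // Internal, transient state: if this ever fires, the algorithm itself is wrong
            // even though the caller passed perfectly valid arguments.
            Debug.Assert(cursor >= 0 && cursor < slotCount, "cursor left the valid range");

            slots[i] = cursor;
        }
        return slots;
    }

    private static int NextCursor(int cursor, int slotCount)
    {
        return (cursor * 7 + 3) % slotCount; // placeholder for the real stepping logic
    }
}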
I'm still undecided on the best balance between the two. I've stopped using Debug.Assert so much in C# since I started unit testing, but there's still a place for them IMO. I certainly wouldn't use them to check correctness of API use, but sanity checking in hard-to-get-to places? Sure.
The only downside is that they can pop up and halt NUnit, but you can write an NUnit plugin to detect them and fail any test that triggers an assert.
I'm using both unit testing and assertions, for different purposes.
Unit testing is automated experimentation showing that your program (and its parts) function as specified. Just like in mathematics, experimentation is not a proof as long as you cannot try every possible combination of input. Nothing demonstrates that better than the fact that even with unit testing, your code will have bugs. Not many, but it will have them.
Assertions are for catching dodgy situations at runtime that normally shouldn't happen. Maybe you've heard about preconditions, postconditions, loop invariants and things like that. In the real world, we don't often go through the formal process of actually proving (by formal logic) that a piece of code yields the specified postconditions if the preconditions are satisfied. That would be a real mathematical proof, but we often don't have time to do that for each method. However, by checking whether the preconditions and postconditions are satisfied, we can spot problems at a much earlier stage.
If you're doing exhaustive unit testing which covers all the odd edge cases you might encounter, then I don't think you are going to find assertions very useful. Most people who don't unit test place assertions to establish similar constraints as you catch with your tests.
I think the idea behind unit testing in this case is to move those assertions over to the test cases to ensure that, instead of having Debug.Assert(...), your code under test handles the situation without throwing up (or ensures that it throws up correctly).
