I've just finished reading "Professional Test-Driven Development with C#" and have been trying to find a way to achieve 100% coverage in my code. It's all good until I hit a repository class that is filled with methods implemented something like this:
public IEnumerable<MyDataContract> LoadConditional(bool isCondition)
{
    const string QUERY = @"SELECT ...fields... FROM MyData WHERE [IsCondition] = @IsCondition";
    return DataAccessor.ReadMyContracts(QUERY, isCondition); // something, something...
}
I've been thinking about this for some time, and have not been able to find anything online that answers this question directly.
I've read things suggesting that SQL-related code should live in another assembly. I don't require that, though, and don't believe I should have to go there; and from a code-coverage perspective it's a pretty superficial change anyway.
I've read that you can hook up databases for your unit tests (which I've done before). But that just... well, I dunno, it doesn't feel right. The tests are slow and significantly increase the maintenance burden.
My gut feeling is that without the last bit I mentioned, this method can't be unit tested. How should I be viewing this problem?
First let me say that I believe that achieving 100% coverage makes no sense at all and doesn't prove anything.
That being said, I typically use some layer between DB and business logic - some simple mapper (PetaPoco, Dapper, OrmLite) or, rarely, a full blown ORM (NHibernate).
In cases where I need integration tests against a DB, these tools allow me to run the same queries against a test DB (e.g. an in-memory SQLite DB) instead of 'real' DB server.
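For example, here is a minimal sketch of such a test, assuming the Dapper and Microsoft.Data.Sqlite packages; the table, columns and test values are illustrative, not taken from the question:
using Dapper;
using Microsoft.Data.Sqlite;
using NUnit.Framework;

[TestFixture]
public class MyDataQueryTests
{
    [Test]
    public void Query_FiltersOnIsCondition()
    {
        using (var connection = new SqliteConnection("Data Source=:memory:"))
        {
            connection.Open();
            connection.Execute("CREATE TABLE MyData (Id INTEGER, IsCondition INTEGER)");
            connection.Execute("INSERT INTO MyData (Id, IsCondition) VALUES (1, 1), (2, 0)");

            // Run the same kind of SQL the repository would run, against the in-memory DB.
            long matches = connection.ExecuteScalar<long>(
                "SELECT COUNT(*) FROM MyData WHERE IsCondition = @IsCondition",
                new { IsCondition = true });

            Assert.AreEqual(1, matches);
        }
    }
}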
With regard to your concern that "the tests are slow and significantly increase the maintenance burden", bear in mind that these are not unit tests anymore - they are integration tests, and they can't be as fast as unit tests.
The way I see it, if you do actual data access, you are going into integration testing and leaving the realm of unit testing. I personally prefer to keep SQL inside the data access layer only, i.e. a single layer before the DB itself, and then I test everything up to that point. In my view a method named ReadMyContracts should already have the correct SQL for accessing the data and should only receive (and pass on) the isCondition parameter.
That's just my humble opinion.
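A minimal sketch of that idea, using hypothetical names (IMyDataAccessor, MyRepository are illustrative, not from the question): keep the SQL in a thin data-access interface and unit test the repository against a fake or mock of it.
using System.Collections.Generic;

public class MyDataContract { /* fields elided */ }

public interface IMyDataAccessor
{
    IEnumerable<MyDataContract> ReadMyContracts(string query, bool isCondition);
}

public class MyRepository
{
    private readonly IMyDataAccessor accessor;

    public MyRepository(IMyDataAccessor accessor)
    {
        this.accessor = accessor;
    }

    public IEnumerable<MyDataContract> LoadConditional(bool isCondition)
    {
        const string QUERY =
            @"SELECT ...fields... FROM MyData WHERE [IsCondition] = @IsCondition";
        return accessor.ReadMyContracts(QUERY, isCondition);
    }
}

// In a unit test, a hand-rolled fake (or a Rhino.Mocks/Moq mock) of
// IMyDataAccessor records the query and flag it received, so the repository
// is covered without touching a database.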
In TDD, you pick a test case and implement it, then you write just enough production code to make that test pass, refactor the code, pick a new test case, and the cycle continues.
The problem I have with this process is that TDD says you should write only enough code to pass the test you just wrote. But what if a method can have, say, a million test cases? What can you do then? Obviously you can't write a million test cases!
Let me explain what I mean with the example below:
internal static List<ulong> GetPrimeFactors(ulong number)
{
    var result = new List<ulong>();

    // Pull out all factors of 2 first.
    while (number % 2 == 0)
    {
        result.Add(2);
        number = number / 2;
    }

    // Then try the odd divisors.
    ulong divisor = 3;
    while (divisor <= number)
    {
        if (number % divisor == 0)
        {
            result.Add(divisor);
            number = number / divisor;
        }
        else
        {
            divisor += 2;
        }
    }
    return result;
}
The above code returns all the prime factors of a given number. ulong has 64 bits, which means it can accept values between 0 and 18,446,744,073,709,551,615!
So how does TDD work when there can be millions of test cases for a piece of production functionality?
I mean, how many test cases are enough for me to say that I used TDD to arrive at this production code?
The TDD rule that you should only write enough code to pass your test seems wrong to me, as the example above shows.
When is enough enough?
My own thought is to pick only a handful of test cases, e.g. the upper bound, the lower bound, and a few more, say five in total. But that's not TDD, is it?
Many thanks for your thoughts on TDD for this example.
It's an interesting question, related to the idea of falsifiability in epistemology. With unit tests, you are not really trying to prove that the system works; you are constructing experiments which, if they fail, will prove that the system doesn't work in a way consistent with your expectations/beliefs. If your tests pass, you do not know that your system works, because you may have forgotten some edge case which is untested; what you know is that as of now, you have no reason to believe that your system is faulty.
The classic example in the history of science is the question "are all swans white?". No matter how many different white swans you find, you can't say that the hypothesis "all swans are white" is correct. On the other hand, bring me one black swan, and I know the hypothesis is not correct.
A good TDD unit test is along these lines; if it passes, it won't tell you that everything is right, but if it fails, it tells you where your hypothesis is incorrect. In that frame, testing for every number isn't that valuable: one case should be sufficient, because if it doesn't work for that case, you know something is wrong.
Where the question is interesting, though, is that unlike for swans, where you can't really enumerate over every swan in the world, and all their future children and their parents, you could enumerate every single 64-bit input, which is a finite set, and verify every possible situation. Also, a program is in lots of ways closer to mathematics than to physics, and in some cases you can also truly verify whether a statement is true - but that type of verification is, in my opinion, not what TDD is going after. TDD is going after good experiments which aim at capturing possible failure cases, not at proving that something is true.
You're forgetting step three:
Red
Green
Refactor
Writing your test cases gets you to red.
Writing enough code to make those test cases pass gets you to green.
Generalizing your code to work for more than just the test cases you wrote, while still not breaking any of them, is the refactoring.
You appear to be treating TDD as if it is black-box testing. It's not. If it were black-box testing, only a complete (millions of test cases) set of tests would satisfy you, because any given case might be untested, and therefore the demons in the black box would be able to get away with a cheat.
But it isn't demons in the black box in your code. It's you, in a white box. You know whether you're cheating or not. The practice of Fake It Til You Make It is closely associated with TDD, and sometimes confused with it. Yes, you write fake implementations to satisfy early test cases - but you know you're faking it. And you also know when you have stopped faking it. You know when you have a real implementation, and you've gotten there by progressive iteration and test-driving.
So your question is really misplaced. For TDD, you need to write enough test cases to drive your solution to completion and correctness; you don't need test cases for every conceivable set of inputs.
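A tiny illustration of that progression using the question's method (PrimeMath is a hypothetical containing class, not from the original code):
using System.Collections.Generic;

internal static class PrimeMath
{
    // Red: a test asserts GetPrimeFactors(12) returns { 2, 2, 3 }.
    // Green: the quickest code that passes may be a blatant fake...
    internal static List<ulong> GetPrimeFactors(ulong number)
    {
        return new List<ulong> { 2, 2, 3 };
    }
    // Refactor/generalize: replace the fake with the real trial-division
    // loop from the question, while keeping the existing tests green.
}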
From my POV the refactoring step doesn't seem to have taken place on this piece of code...
In my book TDD does NOT mean writing test cases for every possible permutation of every possible input/output parameter...
BUT writing all the test cases needed to ensure that it does what it is specified to do, i.e. for such a method, all the boundary cases plus a test which picks a number at random from a list of numbers with known correct results. If need be you can always extend this list to make the test more thorough...
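For instance, a sketch of such a test list in NUnit, assuming the question's method lives in a hypothetical static class PrimeMath and returns List<ulong>:
using NUnit.Framework;

[TestFixture]
public class GetPrimeFactorsTests
{
    [Test]
    public void One_HasNoPrimeFactors()
    {
        CollectionAssert.IsEmpty(PrimeMath.GetPrimeFactors(1));
        // 0 and ulong.MaxValue are further boundary candidates worth covering.
    }

    [TestCase(2ul, new ulong[] { 2 })]
    [TestCase(12ul, new ulong[] { 2, 2, 3 })]
    [TestCase(97ul, new ulong[] { 97 })]                 // a prime
    [TestCase(1024ul, new ulong[] { 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 })]
    public void KnownValues(ulong number, ulong[] expectedFactors)
    {
        CollectionAssert.AreEqual(expectedFactors, PrimeMath.GetPrimeFactors(number));
    }
}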
TDD only works in real world if you don't throw common sense out the window...
As to
Only write enough code to pass your test
in TDD this refers to "non-cheating programmers"... IF you have one or more "cheating programmers" who, for example, just hardcode the "correct result" of the test cases into the method, I suspect you have a much bigger problem on your hands than TDD...
BTW, "test-case construction" is something you get better at the more you practice it - there is no book/guide that can tell you upfront which test cases are best for any given situation... experience pays off big when it comes to constructing test cases...
TDD does permit you to use common sense if you want to. There's no point defining your version of TDD to be stupid, just so that you can say "we're not doing TDD, we're doing something less stupid".
You can write a single test case that calls the function under test more than once, passing in different arguments. This prevents "write code to factorize 1", "write code to factorize 2", "write code to factorize 3" being separate development tasks.
How many distinct values to test really depends how much time you have to run the tests. You want to test anything that might be a corner case (so in the case of factorization at least 0, 1, 2, 3, LONG_MAX+1 since it has the most factors, whichever value has the most distinct factors, a Carmichael number, and a few perfect squares with various numbers of prime factors) plus as big a range of values as you can in the hope of covering something that you didn't realise was a corner case, but is. This may well mean writing the test, then writing the function, then adjusting the size of the range based on its observed performance.
You're also allowed to read the function specification, and implement the function as if more values are tested than actually will be. This doesn't really contradict "only implement what's tested", it just acknowledges that there isn't enough time before ship date to run all 2^64 possible inputs, and so the actual test is a representative sample of the "logical" test that you'd run if you had time. You can still code to what you want to test, rather than what you actually have time to test.
You could even test randomly-selected inputs (common as part of "fuzzing" by security analysts), if you find that your programmers (i.e. yourself) are determined to be perverse, and keep writing code that only solves the inputs tested, and no others. Obviously there are issues around the repeatability of random tests, so use a PRNG and log the seed. You see a similar thing with competition programming, online judge programs, and the like, to prevent cheating. The programmer doesn't know exactly which inputs will be tested, so must attempt to write code that solves all possible inputs. Since you can't keep secrets from yourself, random input does the same job. In real life programmers using TDD don't cheat on purpose, but might cheat accidentally because the same person writes the test and the code. Funnily enough, the tests then miss the same difficult corner cases that the code does.
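A hedged sketch of that idea applied to the factorization example, again assuming the hypothetical PrimeMath class; the property checked is simply that the returned factors multiply back to the input:
using System;
using NUnit.Framework;

[TestFixture]
public class GetPrimeFactorsRandomInputTests
{
    [Test]
    public void FactorsMultiplyBackToTheInput()
    {
        const int seed = 12345;            // fixed, logged seed keeps the run repeatable
        Random rng = new Random(seed);

        for (int i = 0; i < 1000; i++)
        {
            ulong number = (ulong)rng.Next(2, 1000000);
            ulong product = 1;
            foreach (ulong factor in PrimeMath.GetPrimeFactors(number))
            {
                product *= factor;
            }
            Assert.AreEqual(number, product, "Failed for input " + number + " (seed " + seed + ")");
        }
    }
}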
The problem is even more obvious with a function that takes a string input, there are far more than 2^64 possible test values. Choosing the best ones, that is to say ones the programmer is most likely to get wrong, is at best an inexact science.
You can also let the tester cheat, moving beyond TDD. First write the test, then write the code to pass the test, then go back and write more white box tests, that (a) include values that look like they might be edge cases in the implementation actually written; and (b) include enough values to get 100% code coverage, for whatever code coverage metric you have the time and willpower to work to. The TDD part of the process is still useful, it helps write the code, but then you iterate. If any of these new tests fail you could call it "adding new requirements", in which case I suppose what you're doing is still pure TDD. But it's solely a question of what you call it, really you aren't adding new requirements, you're testing the original requirements more thoroughly than was possible before the code was written.
When you write a test you should take meaningful cases, not every case. Meaningful cases include general cases, corner cases...
You just CAN'T write a test for every single case (otherwise you could just put all the values in a table and look the answers up, and then you'd be 100% sure your program works :P).
Hope that helps.
That's sort of the first question you've got for any testing. TDD is of no importance here.
Yes, there are lots and lots of cases; moreover, there are combinations and combinations of cases if you start building the system. It will indeed lead you to a combinatorial explosion.
What to do about that is a good question. Usually, you choose equivalence classes for which your algorithm will probably work the same, and test one value from each class.
The next step would be to test boundary conditions (remember, the two most frequent errors in CS are off-by-one errors).
Next... Well, for all practical purposes, it's OK to stop here. Still, take a look at these lecture notes: http://www.scs.stanford.edu/11au-cs240h/notes/testing.html
PS. By the way, using TDD "by the book" for math problems is not a very good idea. Kent Beck demonstrates this in his TDD book by ending up with just about the worst possible implementation of a function calculating Fibonacci numbers. If you know a closed form, or have an article describing a proven algorithm, just do the sanity checks described above instead of the whole TDD refactoring cycle; it will save you time.
PPS. Actually, there's a nice article which (surprise!) mentions both the Fibonacci problem and the problem you are having with TDD.
There aren't millions of test cases. Only a few. You might like to try Pex, which will explore your algorithm and find the distinct execution paths, i.e. the real test cases. Of course, you only need to test those.
I've never done any TDD, but what you're asking about isn't about TDD: It is about how to write a good test suite.
I like to design models (on paper or in my head) of all the states each piece of code can be in. I consider each line as if it were a part of a state machine. For each of those lines, I determine all the transitions that can be made (execute the next line, branch or not branch, throw an exception, overflow any of the sub calculations in the expression, etc).
From there I've got a basic matrix for my test cases. Then I determine each boundary condition for each of those state transitions, and any interesting mid-points between each of those boundaries. Then I've got the variations for my test cases.
From here I try to come up with interesting and different combinations of flow or logic - "This if statement, plus that one - with multiple items in the list", etc.
Since code is a flow, you often can't interrupt it in the middle unless it makes sense to insert a mock for an unrelated class. In those cases I've often reduced my matrix quite a bit, because there are conditions you just can't hit, or because the variation becomes less interesting by being masked out by another piece of logic.
After that, I'm about tired for the day, and go home :) And I probably have about 10-20 test cases per well-factored and reasonably short method, or 50-100 per algorithm/class. Not 10,000,000.
I probably come up with too many uninteresting test cases, but at least I usually overtest rather than undertest. I mitigate this by trying to factor my test cases well to avoid code duplication.
Key pieces here:
Model your algorithms/objects/code, at least in your head. Your code is more of a tree than a script
Exhaustively determine all the state transitions within that model (each operation that can be executed independently, and each part of each expression that gets evaluated)
Utilize boundary testing so you don't have to come up with infinite variations
Mock when you can
And no, you don't have to write up FSM drawings, unless you have fun doing that sort of thing. I don't :)
What you usually do is test against boundary conditions, plus a few random conditions.
For example: ulong.MinValue, ulong.MaxValue, and some values in between. Why are you even writing GetPrimeFactors? Do you want to calculate prime factors in general, or are you writing it for something specific? Test for the reason you're writing it.
What you could also do is assert on result.Count instead of on all the individual items. If you know how many items you're supposed to get, plus some specific cases, you can still refactor your code; if those cases and the total count stay the same, you can assume the function still works.
If you really want to test that much, you could also look into white-box testing. For example, Pex and Moles are pretty good.
TDD is not a way to check that a function/program works correctly on every permutation of inputs possible. My take on it is that the probability that I write a particular test-case is proportional to how uncertain I am that my code is correct in that case.
This basically means I write tests in two scenarios: 1) some code I've written is complicated or complex and/or has too many assumptions and 2) a bug happens in production.
Once you understand what causes a bug it is generally very easy to codify in a test case. In the long term, doing this produces a robust test suite.
Omar Al Zabir is looking for "a simpler way to do AOP style coding".
He created a framework called AspectF, which is "a fluent and simple way to add Aspects to your code".
It is not true AOP, because it doesn't do any compile time or runtime weaving, but does it accomplish the same goals as AOP?
Here's an example of AspectF usage:
public void InsertCustomerTheEasyWay(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    AspectF.Define
        .Log(Logger.Writer, "Inserting customer the easy way")
        .HowLong(Logger.Writer, "Starting customer insert", "Inserted customer in {1} seconds")
        .Retry()
        .Do(() =>
        {
            CustomerData data = new CustomerData();
            data.Insert(firstName, lastName, age, attributes);
        });
}
Here are some posts by the author that further clarify the aim of AspectF:
AspectF fluent way to put Aspects into your code for separation of concern (Blog)
AspectF (google code)
AspectF Fluent Way to Add Aspects for Cleaner Maintainable Code (CodeProject)
According to the author, I gather that AspectF is not designed so much as an AOP replacement, but a way to achieve "separation of concern and keep code nice and clean".
Some thoughts/questions:
Any thoughts on using this style of coding as project grows?
Is it a scalable architecture?
performance concerns?
How does maintainability compare against a true AOP solution?
I don't mean to bash the project, but
IMHO this is abusing AOP. Aspects are not suitable for everything, and used like this it only hampers readability.
Moreover, I think this misses one of the main points of AOP, which is being able to add/remove/redefine aspects without touching the underlying code.
Aspects should be defined outside of the affected code in order to make them truly cross-cutting concerns. In AspectF's case, the aspects are embedded in the affected code, which violates SoC/SRP.
Performance-wise there is no penalty (or it's negligible) because there is no runtime IL manipulation, just as explained in the CodeProject article. That said, I've never had any perf problems with Castle DynamicProxy either.
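For contrast, here is a minimal sketch of an aspect defined outside the affected code using Castle DynamicProxy; the ICustomerService and CustomerService names are hypothetical:
using System;
using Castle.DynamicProxy;

public interface ICustomerService
{
    void InsertCustomer(string firstName, string lastName, int age);
}

public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Calling " + invocation.Method.Name);
        invocation.Proceed();                      // invoke the real implementation
        Console.WriteLine("Finished " + invocation.Method.Name);
    }
}

// Wiring it up; CustomerService itself never mentions logging at all:
// var generator = new ProxyGenerator();
// ICustomerService service = generator.CreateInterfaceProxyWithTarget<ICustomerService>(
//     new CustomerService(), new LoggingInterceptor());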
On a recent project, it was recommended to me that I give AspectF a try.
I took to heart the idea of laying out all the concerns up front, and having the code that does the real work blissfully unaware of all the checks and balances that happen around it.
I actually took it a little further and added a security "concern" that required credentials received as part of a WCF request. It went off to the database and did what it had to. I did the obvious validations and the security check before running the actual code that would return the required data.
I found this approach quite a refreshing change, and I certainly liked that I had the source of AspectF to walk through as I was debugging and testing the service calls.
In the office, some argued that these concerns should be implemented as a decoration on a class / method. But it doesn't really matter where you decorate it, at some point somewhere, you need to say you wish to perform certain actions / checks. I like the fact that it's all laid out in-place, not as another code file, not as some kind of configuration file, and for once, not adding yet another decoration to a class / method.
I'm not saying it's true AOP - and I certainly think there are solutions and situations where it really isn't the best way of implementing your objectives, but given that it's just a couple of K of source files, that makes for a very light-weight implementation.
AspectF is basically a very clever way of chaining delegates together.
I don't think every developer is going to look at the code and say how wonderful it is to look at, indeed in our office it confused some of us! But once you understand what's going on, it's an inexpensive way of achieving much of what can be done by other approaches too.
It's been a while since I read McConnell's "Code Complete". Now I've read it again in Hunt & Thomas's "The Pragmatic Programmer": use assertions! Note: not unit-testing assertions; I mean Debug.Assert().
According to the SO questions "When should I use Debug.Assert()?" and "When to use assertion over exceptions in domain classes", assertions are useful for development, because "impossible" situations can be found quite fast, and they seem to be commonly used. As far as I understand assertions, in C# they are often used for checking input variables for "impossible" values.
To keep unit tests concise and isolated as much as possible, I do feed classes and methods with nulls and "impossible" dummy input (like an empty string).
Such tests are explicitly documenting, that they don't rely on some specific input. Note: I am practicing what Meszaros' "xUnit Test Patterns" is describing as Minimal Fixture.
And that's the point: if I had assertions guarding these inputs, they would blow up my unit tests.
I like the idea of assertive programming, but on the other hand I don't feel the need to force it. Currently I can't think of any use for Debug.Assert(). Maybe there is something I'm missing? Do you have any suggestions for where they could be really useful? Maybe I just overestimate the usefulness of assertions? Or maybe my way of testing needs to be revisited?
Edit: Best practice for debug Asserts during Unit testing is very similar, but it does not answer the question that bothers me: should I care about Debug.Assert() in C# if I test the way I've described? If yes, in which situations are they really useful? From my current point of view, such unit tests would make Debug.Assert() unnecessary.
Another point: if you really think this is a duplicate question, just post a comment.
In theory, you're right - exhaustive testing makes asserts redundant. In theory. In practice, they're still useful for debugging your tests, and for catching future developers who might try to use interfaces contrary to their intended semantics.
In short, they just serve a different purpose from unit tests. They're there to catch mistakes that by their very nature aren't going to be made when writing unit tests.
I would recommend keeping them, since they offer another level of protection from programmer mistakes.
They're also a local error protection mechanism, whereas unit tests are external to the code being tested. It's far easier to "inadvertently" disable unit tests when under pressure than it is to disable all the assertions and runtime checks in a piece of code.
I generally see asserts being used for sanity checks on internal state rather than things like argument checking.
IMO the inputs to a solid API should be guarded by checks that remain in place regardless of the build type. For example, if a public method expects an argument that is a number in between 5 and 500, it should be guarded with an ArgumentOutOfRangeException. Fail fast and fail often using exceptions as far as I'm concerned, particularly when an argument is pushed somewhere and is used much later.
However, in places where internal, transient state is being sanity-checked (e.g. checking that some intermediate state is within reasonable bounds during a loop), it seems the Debug.Assert is more at home. What else are you meant to do when your algorithm has gone wrong despite having valid arguments passed to it? Throw an EpicFailException? :) I think this is where Debug.Assert is still useful.
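A small sketch of that split; the class, ranges and numbers here are only illustrative:
using System;
using System.Diagnostics;

public class PercentageScaler
{
    public int Scale(int value)
    {
        // Public API guard: stays in place regardless of build type.
        if (value < 5 || value > 500)
            throw new ArgumentOutOfRangeException("value", "Expected a value between 5 and 500.");

        int intermediate = ComputeIntermediate(value);

        // Internal sanity check: compiled out of release builds, but catches
        // a broken algorithm early during development.
        Debug.Assert(intermediate >= 0, "Intermediate result went negative");

        return intermediate;
    }

    private static int ComputeIntermediate(int value)
    {
        return value * 2; // stand-in for the real algorithm
    }
}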
I'm still undecided on the best balance between the two. I've stopped using Debug.Assert so much in C# since I started unit testing, but there's still a place for them IMO. I certainly wouldn't use them to check correctness of API use, but sanity checking in hard-to-get-to places? Sure.
The only downside is that they can pop up and halt NUnit, but you can write an NUnit plugin to detect them and fail any test that triggers an assert.
I'm using both unit testing and assertions, for different purposes.
Unit testing is automated experimentation showing that your program (and its parts) function as specified. Just like in mathematics, experimentation is not a proof as long as you cannot try every possible combination of input. Nothing demonstrates that better than the fact that even with unit testing, your code will have bugs. Not many, but it will have them.
Assertions are for catching dodgy situations at runtime that normally shouldn't happen. Maybe you've heard about preconditions, postconditions, loop invariants and things like that. In the real world, we don't often go through the formal process of actually proving (by formal logic) that a piece of code yields the specified postconditions if the preconditions are satisfied. That would be a real mathematical proof, but we often don't have time to do it for each method. However, by checking whether the preconditions and postconditions are satisfied, we can spot problems at a much earlier stage.
If you're doing exhaustive unit testing which covers all the odd edge cases you might encounter, then I don't think you are going to find assertions very useful. Most people who don't unit test place assertions to establish similar constraints as you catch with your tests.
I think the idea behind unit testing in this case is to move those assertions over to the test cases to ensure that, instead of having Debug.Assert(...), your code under test handles the situation without throwing up (or that it throws up correctly).
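For example, with NUnit the guard from the hypothetical PercentageScaler sketch earlier becomes an explicit expectation in the test suite:
using System;
using NUnit.Framework;

[TestFixture]
public class PercentageScalerTests
{
    [Test]
    public void Scale_RejectsAnOutOfRangeValue()
    {
        PercentageScaler scaler = new PercentageScaler();
        Assert.Throws<ArgumentOutOfRangeException>(delegate { scaler.Scale(1000); });
    }
}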
So I'm working on some legacy code that's heavy on the manual database operations. I'm trying to maintain some semblance of quality here, so I'm going TDD as much as possible.
The code I'm working on needs to populate, let's say, a List<Foo> from a DataReader that returns all the fields required for a functioning Foo. However, if I want to verify that the code in fact returns one list item per database row, I end up writing test code that looks something like this:
Expect.Call(reader.Read()).Return(true);
Expect.Call(reader["foo_id"]).Return((long) 1);
// ....
Expect.Call(reader.Read()).Return(true);
Expect.Call(reader["foo_id"]).Return((long) 2);
// ....
Expect.Call(reader.Read()).Return(false);
Which is rather tedious and rather easily broken, too.
How should I be approaching this issue so that the result won't be a huge mess of brittle tests?
Btw I'm currently using Rhino.Mocks for this, but I can change it if the result is convincing enough. Just as long as the alternative isn't TypeMock, because their EULA was a bit too scary for my tastes last I checked.
Edit: I'm also currently limited to C# 2.
To make this less tedious, you will need to encapsulate/refactor the mapping between the DataReader and the object you hold in the list. There are quite a few steps to encapsulating that logic. If that is the road you want to take, I can post code for you. I am just not sure how practical it would be to post the code here on Stack Overflow, but I can give it a shot and keep it concise and to the point. Otherwise, you are stuck with the tedious task of repeating each expectation on the index accessor for the reader. The encapsulation process will also pull the column-name strings out of the tests and make them reusable across your tests.
Also, I am not sure at this point how much you want to make the existing code more testable, since this is legacy code that wasn't built with testing in mind.
I thought about posting some code, and then I remembered JP Boodhoo's Nothin But .NET course. He has a sample project that he shares, created during one of his classes. The project is hosted on Google Code and is a nice resource. I am sure it has some nice tips for you and will give you ideas on how to refactor the mapping. The whole project was built with TDD.
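As a rough, C# 2-friendly illustration of the encapsulation step (the names here are hypothetical and not code from that course project):
using System.Collections.Generic;
using System.Data;

public class Foo
{
    private long id;
    public long Id
    {
        get { return id; }
        set { id = value; }
    }
    // ... remaining properties ...
}

public static class FooMapper
{
    // Maps a single row; IDataRecord is small enough to stub by hand in a test.
    public static Foo Map(IDataRecord record)
    {
        Foo foo = new Foo();
        foo.Id = (long)record["foo_id"];
        // ... map the remaining columns ...
        return foo;
    }

    // Walks the reader; the only behaviour left to test here is "one Foo per row".
    public static List<Foo> MapAll(IDataReader reader)
    {
        List<Foo> result = new List<Foo>();
        while (reader.Read())
        {
            result.Add(Map(reader));
        }
        return result;
    }
}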
You can put the Foo instances in a list and compare the objects with what you read:
var arrFoos = new Foo[] { ... };                    // what you expect
var expectedFoos = new List<Foo>(arrFoos);          // make a list from the hardcoded array of expected Foos
var readerResult = ReadEntireList(reader);          // read everything from reader and put in List<Foo>
Expect.ContainSameFoos(expectedFoos, readerResult); // compare the two lists
Kokos,
Couple of things wrong there. First, doing it that way means I have to construct the Foos first, then feed their values to the mock reader which does nothing to reduce the amount of code I'm writing. Second, if the values pass through the reader, the Foos won't be the same Foos (reference equality). They might be equal, but even that's assuming too much of the Foo class that I don't dare touch at this point.
Just to clarify: do you want to be able to test that your call into SQL Server returned some data, or that, given some data, you could map it back into the model?
If you want to test that your call into SQL returned some data, check out my answer found here.
@Toran: What I'm testing is the programmatic mapping from data returned from the database to the quote-unquote domain model. Hence I want to mock out the database connection. For the other kind of test, I'd go for all-out integration testing.
@Dale: I guess you nailed it pretty well there, and I was afraid that might be the case. If you've got pointers to any articles or suchlike where someone has done the dirty job and decomposed it into more easily digestible steps, I'd appreciate it. Code samples wouldn't hurt either. I do have a clue on how to approach the problem, but before I actually dare do that, I'm going to need to get other things done, and if testing that will require tedious mocking, then that's what I'll do.