Is Pex (and its output) suitable for an enterprise environment?

I love the idea of Pex - automatically generating unit tests through code analysis - but the tests the tool actually generates are horrible: ugly, tightly coupled to Pex modules, difficult to read and understand, and so on.
Is a tool like that really suitable (in its current state) for use in an enterprise environment, where the emphasis must be on ease of maintenance?
Or have I misunderstood the intended use of Pex?

Indeed, you have misunderstood the intended use.
Pex is a white-box testing tool. It generates test cases by analysing the code under test, which is how it detects and exercises edge cases. So, basically, you shouldn't even edit the auto-generated tests.
Pex can't replace your normal unit tests. It's just an additional tool.
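To make the intended workflow concrete: you hand-write a single parameterized unit test, and Pex explores it to generate concrete test cases into a partial class alongside it. A minimal sketch of such a parameterized test, assuming a hypothetical Parser class:

    using Microsoft.Pex.Framework;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [PexClass(typeof(Parser))]
    [TestClass]
    public partial class ParserTest
    {
        // Pex explores this method and emits concrete [TestMethod]
        // cases for the interesting inputs it finds.
        [PexMethod]
        public void ParseAnyString(string input)
        {
            PexAssume.IsNotNull(input);   // constrain the explored inputs
            var result = Parser.Parse(input);
            Assert.IsNotNull(result);     // property that must hold for all inputs
        }
    }

You maintain the parameterized test; the generated cases are disposable output, which is why editing them by hand defeats the purpose.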

This seems a subjective question...
I'd say that yes, tests written in a given framework/API are often tightly coupled to that framework. Pex's intention is not to generate "readable" tests, it's to ensure code coverage for a given set of constraints. If that's valuable to your product, then it's suitable - and certainly I'd wager that for a given team and a given codebase, this will provide value.
Every enterprise is different, but it's the product and its code that dictates the suitability of a testing tool. I would suggest that the thing to question is the value of Pex for a given codebase, regardless of the organisation in question.

I have used Pex in a large financial institution and would recommend its use, but only for a very specific case. I think Pex is good at what it does (as described elsewhere here, white-box tests to find edge cases), but the tests do not have much longevity, as they are very tightly coupled.
Basically, Pex is great at generating coverage. If you have no tests and want some fast, use Pex. BUT THEN I recommend that instead of using it again, you enforce standards requiring new code to meet an agreed coverage metric with hand-written tests.
In this way the fragile Pex tests get replaced over time with tests that are more flexible and of higher quality.

Pex is very useful for testing complex algorithms that do not depend on anything external. It will not, for example, help you find edge cases in SQL statements or file access. But for finding edge cases and increasing code coverage, it is an extremely useful complement to your normal unit tests.
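To illustrate the kind of self-contained code Pex explores well, here is a pure method with a classic hidden edge case (both the class and the bug are invented for the example):

    public static class Search
    {
        // Pure and deterministic, no external dependencies: ideal Pex territory.
        public static int Midpoint(int lo, int hi)
        {
            // Classic bug: lo + hi overflows when both are large, yielding
            // a negative "midpoint". Input-exploring tools tend to find such
            // values; hand-written tests rarely try them.
            return (lo + hi) / 2;   // safe form: lo + (hi - lo) / 2
        }
    }

A method that instead opened a file or ran a SQL query would give the explorer nothing to analyse, which is the limitation described above.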

Related

Measuring test generation time in Pex

I would like to measure the time that Pex takes to generate unit tests for a specific C# function. How can I get such information?
We do not expose any means (such as an API) of reporting the time IntelliTest takes to generate tests; generation time can depend on many factors.
Incidentally, folks at the Budapest University of Technology and Economics have some interesting data that compares test generators (including IntelliTest) that you might find relevant. Please see here: https://github.com/SETTE-Testing/sette-tool/wiki

Is there a good measure of completeness of Unit tests

I have a class that I need to unit test.
For background, I'm developing in C# and using NUnit, but my question is more theoretical:
I don't know whether I've written enough test methods and checked all the scenarios.
Is there a known working method/best practice/collection of rules for that?
Something like
"check every method in your class ... bla bla"
"check all the inserts to the DB ... bla bla"
(These are silly examples of possible rules, but if I had a non-silly one in mind I wouldn't be asking this question.)
There are several available metrics for unit testing. Have a look into both code coverage and orthogonal testing.
However, I would say that this is not the best way of addressing the problem. While 100% code coverage is an admirable goal, it can become the sort of metric that obscures the actual quality of the tests.
Personally I think you would get better results from investigating test-driven development - using this approach you know you have good coverage (both in terms of lines of code and in terms of the functionality of your class) because you have been writing the tests to exercise your class before you wrote the class methods themselves.
You might want to look at your test coverage. NCover is a popular code coverage tool that works well with NUnit.
You can look into NCover or the Visual Studio code coverage tools, which support NUnit.
The measure used for test coverage of code is called "code coverage".
As per Wikipedia:
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. In time, the use of code coverage has been extended to the field of digital hardware, the contemporary design methodology of which relies on hardware description languages (HDLs).
Code coverage measurements are given as a percentage. Different teams and projects set their own coverage goals. I don't know whether there is an industry "best practice" number, but most of my projects set it at 80%.
For instance, if you are working on a project that has a lot of UI code, chances are the unit test coverage for that is low, but if you're working on a library, chances are every method has a proper unit test.
For .NET, one of the popular tools for code coverage is NCover.
As others have mentioned, coverage provides one metric by which to measure your tests, but it does not tell you how well the tests actually test your code. Just because a line is executed does not mean that all possible permutations of that line have been exercised.
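A minimal illustration of that gap, using an invented SafeDivide method: a single test executes every line (100% line coverage) while checking only one of the two branches.

    using NUnit.Framework;

    public static class MathUtil
    {
        // One line, two branches.
        public static int SafeDivide(int a, int b)
        {
            return b == 0 ? 0 : a / b;
        }
    }

    [TestFixture]
    public class MathUtilTests
    {
        [Test]
        public void Divide_NonZeroDenominator()
        {
            // This single test reports 100% line coverage of SafeDivide,
            // yet the b == 0 branch is never verified.
            Assert.AreEqual(2, MathUtil.SafeDivide(10, 5));
        }
    }

Branch (or decision) coverage would flag the gap that line coverage hides here.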
You may find a tool such as Pex useful here: it tests your code with various inputs to see what it does in those situations. This gives you good coverage (it tailors the inputs to drive execution down all of the possible paths through your code), but it also covers the input space well (for example, ensuring your methods are tested with null inputs, or that methods which take lists are tested with empty lists or lists that contain null items, etc.).
There are other intriguing initiatives, such as a tool that removes lines of code, recompiles, and re-runs the tests. If no test fails, it assumes you have a missing test, since something should depend on that line - or else why is it there? This technique is known as mutation testing.
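A sketch of the idea, with an invented Account class: a mutation tool changes an operator and re-runs the suite, and a mutant that survives points at a missing boundary test.

    using NUnit.Framework;

    public class Account
    {
        public decimal Balance { get; private set; }
        public Account(decimal opening) { Balance = opening; }

        public bool TryWithdraw(decimal amount)
        {
            // A mutation tool might change >= to > and re-run the tests;
            // if everything still passes, the amount == Balance boundary
            // is untested.
            if (Balance >= amount)
            {
                Balance -= amount;
                return true;
            }
            return false;
        }
    }

    [TestFixture]
    public class AccountTests
    {
        [Test]
        public void Withdraw_ExactBalance_Succeeds()
        {
            // This is the test that "kills" the >= to > mutant.
            Assert.IsTrue(new Account(100m).TryWithdraw(100m));
        }
    }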

How to approach unit testing in a large project

We have a project that is starting to get large, and we need to start applying unit tests as we refactor. What is the best way to apply unit tests to a project that already exists? I'm (somewhat) used to doing it from the ground up, where I write the tests in conjunction with the first lines of code onward. I'm not sure how to start when the functionality is already in place. Should I just start writing tests for each method from the repository up? Or should I start from the controllers down?
Update:
To clarify the size of the project: I'm not really sure how to describe it other than to say there are 8 controllers and about 167 .cs files, all written over about 7 developer-months.
As you seem to be aware, retrofitting testing onto an existing project is not easy. Your method of writing tests as you go is the better way. Your problem is one of both process and technology: tests must be required of everyone, or no one will write them.
The recommendation I've heard and agree with is that you should not attempt to wrap tests around an existing codebase all at once; you'll never finish. Start by working testing into your bug-fix process: every fixed bug gets a test. This works testing into your existing code over time. New code must always have tests, of course. Eventually you'll get the coverage up to a reasonable percentage, but it will take time.
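As a concrete pattern for "every fixed bug gets a test" (the bug number and ShoppingCart class are invented for illustration):

    using NUnit.Framework;

    [TestFixture]
    public class ShoppingCartRegressionTests
    {
        // Added while fixing bug #1234: Total() used to throw on an
        // empty cart instead of returning zero. Naming the test after
        // the bug preserves traceability.
        [Test]
        public void Bug1234_Total_OnEmptyCart_ReturnsZero()
        {
            var cart = new ShoppingCart();
            Assert.AreEqual(0m, cart.Total());
        }
    }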
One good book I've had recommended to me is Working Effectively With Legacy Code by Michael C. Feathers. The title doesn't really demonstrate it, but working testing into an existing codebase is a major subject of the book.
There are lots of approaches to fitting tests around an existing codebase. Unit tests are not necessarily the most productive way to start. If you have a large amount of code written then you might want to think about functional and integration tests before you work down to the level of unit tests. Those higher level tests will help give you broad assurance that your product continues to work while you make changes to improve the structure and retrofit unit tests.
One of the practices that non-test-first organizations use that I recommend highly in your situation is this: Have someone other than the author of the original code section write the unit tests for that section. This gets you some level of cross-training and sanity checking, and it also helps ensure that you don't preserve assumptions which will do damage to your code overall.
Other than that, I'll second the recommendation for Michael Feathers' book.
For a legacy project with a decently sized code base, unit testing everything may not be a justifiable effort given budgetary constraints, etc. Based on my reading on this subject, I would suggest:
Every bug that has leaked to the QA, release, or production environment is a candidate for writing unit test case(s) alongside the fix.
Use source control to find out which sections/files of your code base change more frequently than others, and bring those under unit test coverage.
New story development should have meaningful unit test cases written against it.
Keep monitoring unit test coverage to spot any downward trend in a particular area of the code base; such an area needs you to zoom in and review whether the unit tests are losing their effectiveness.
P.S.: I have added Michael Feathers book to my reading list, thanks for suggesting it.

Generating tests from run-time analysis

We have a large body of legacy code, several portions of which are scheduled for refactoring or replacement. We wish to optimise parts that currently impact on the user-experience, facilitate reuse in a new product being planned, and hopefully improve maintainability too.
We have quite good/comprehensive functional tests for an existing product. These are a mixture of automated and manually-driven GUI tests, but they can take a developer more than half a day to run fully. The "low-level domain logic" has a good suite of unit tests (NUnit) with good coverage. Unfortunately, the remainder of the code has no unit tests (or, at least, no worthy unit tests).
What I'd like to find is a tool that automatically generates unit tests for specific methods/classes and maybe specific interfaces based on their use and behaviour in the functional tests. These unit tests would be invaluable for refactoring, and would also be run as part of our C.I. system to detect regressions much earlier than is currently happening (and to localise regressions much better than "button X doesn't work.").
Do any such tools exist? Do you have any recommendations for me?
I've come across Parasoft .TEST, which looks like it might do what I want. Do you have any comments on it, with respect to my situation?
I don't think something that just generates test code from a static analysis, à la NStub, is useful here. I suppose it is really the generation of representative test data that matters.
Please ignore the merits, or lack of, of automated test generation - it is not something I'd usually advocate. (Not least because you get tests that pass for broken code!)
Try Pex:
Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage. Pex performs a systematic analysis, hunting for boundary conditions, exceptions and assertion failures, which you can debug right away. Pex enables Parameterized Unit Testing, an extension of Unit Testing that reduces test maintenance costs.
Well, you could look at Pex - but I believe it invents its own data (it doesn't watch your existing tests, AFAIK).

TDD, What are your techniques for finding good tests?

I am writing a simple web app using LINQ to SQL as my data layer, as I like LINQ to SQL very much. I have been reading a bit about DDD and TDD lately and wanted to give them a shot.
First and foremost, it strikes me that LINQ to SQL and DDD don't go along too well. My other problem is finding tests: I actually find it very hard to define good tests, so I wanted to ask, what are your best techniques for discovering good test cases?
Test Case discovery is more of an art than a science. However simple guidelines include:
Code that you know to be frail / weak / likely to break
Follow the user scenario (what your user will be doing) and see how it touches your code - often this means debugging it, other times profiling, and other times simply thinking the scenario through. Whatever points in your code get touched by the user are the highest priority to write tests against.
Write tests for the bugs you found during your own development, to keep the code from regressing to the same behavior.
There are several books on how to write test cases out there, but unless you are working in a large organization that requires documented test cases, your best bet is to think of all the parts in your code that you don't like (that aren't "pure") and make sure you can test those modules thoroughly.
Well, by the standard interpretation of TDD, the tests drive your development. So, in essence, you start with the test: it will fail, and you write code until it passes. It's driven by your requirements, however you go about gathering those. You decide what your app/feature needs to do, write the test, then code until it passes. Of course there are many other techniques, but this is a brief statement of what is typically taught in the TDD world.
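A minimal red-green sketch of that loop, assuming a hypothetical User class that does not exist yet:

    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class UserTests
    {
        // Red: written first, before User exists, so it fails
        // (it won't even compile until User is created).
        [Test]
        public void NewUser_HasEmptyOrderHistory()
        {
            var user = new User("alice");
            Assert.IsEmpty(user.Orders);
        }
    }

    // Green: the simplest implementation that makes the test pass;
    // refactor afterwards with the test as a safety net.
    public class User
    {
        public string Name { get; private set; }
        public List<Order> Orders { get; private set; }

        public User(string name)
        {
            Name = name;
            Orders = new List<Order>();
        }
    }

    public class Order { }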
Think. Read the code. Question yourself: e.g., can this pointer ever be NULL here? What happens if this method is called before initialization?
Don't make assumptions such as "this file will always be there". Test.
Think about edge cases, boundaries, negative values, overflows...
Bugs often come in clusters. Look around when you find one, and look for the same kind of bug in other locations.
Set your mind on the actual goal of testing: finding bugs.
Be creative in imagining what could make your program fail.
Your tests must find bugs, not confirm that your program is OK.
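In that spirit, a short NUnit sketch of boundary-value probing, with an invented naive Abs implementation:

    using NUnit.Framework;

    public static class MyMath
    {
        // Looks obviously correct...
        public static int Abs(int x)
        {
            return x < 0 ? -x : x;
        }
    }

    [TestFixture]
    public class AbsTests
    {
        [TestCase(0)]
        [TestCase(1)]
        [TestCase(-1)]
        [TestCase(int.MaxValue)]
        [TestCase(int.MinValue)]  // ...but -int.MinValue wraps back to int.MinValue
        public void Abs_IsNeverNegative(int value)
        {
            Assert.That(MyMath.Abs(value), Is.GreaterThanOrEqualTo(0));
        }
    }

Only the int.MinValue boundary case fails, which is exactly the kind of bug the advice above is aimed at.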
I regularly write tests for third-party APIs. That way, when the API updates, I know whether I'm going to break or not.
I think this is a useful technique:
Using contracts and boolean queries to improve quality of automatic test generation
Reference: Lisa (Ling) Liu, Bertrand Meyer and Bernd Schoeller, "Using contracts and boolean queries to improve the quality of automatic test generation," in Proceedings of TAP: Tests and Proofs, ETH Zurich, 5-6 February 2007, eds. Yuri Gurevich and Bertrand Meyer, Lecture Notes in Computer Science, Springer-Verlag, 2007.
