I would like to measure the time that Pex takes to generate unit tests for a specific C# function. How can I get this information?
We do not expose any means (such as an API) that reports the time taken by IntelliTest to generate tests. As I mentioned earlier, the time taken to generate tests can depend on various factors.
Incidentally, folks at the Budapest University of Technology and Economics have some interesting data that compares test generators (including IntelliTest) that you might find relevant. Please see here: https://github.com/SETTE-Testing/sette-tool/wiki
I love the idea of Pex - autogenerating unit tests through static code analysis - but the tests that are actually generated by the tool are horrible: ugly, tightly coupled to Pex modules, difficult to read and understand, etc.
Is a tool like that really suitable (in its current state) for use in an enterprise environment, where the emphasis must be on ease of maintenance?
Or have I misunderstood the intended use of Pex?
Indeed, you have misunderstood the intended use.
Pex is a white-box testing tool. It generates test cases based on the analysis of the code it should test. The reason for this is to detect and test edge cases. So basically, you shouldn't even change the auto-generated tests.
Your normal unit-tests can't be replaced by Pex. It's just an additional tool.
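To make that concrete, here is a rough sketch of a Pex parameterized unit test; the attribute and helper names are the Pex ones as I recall them, and Calculator.Divide is purely illustrative:

using Microsoft.Pex.Framework; // Pex attributes and assumptions
using NUnit.Framework;

[PexClass(typeof(Calculator))]
[TestFixture]
public partial class CalculatorPexTests
{
    // Pex generates (x, y) pairs that drive execution down different paths
    // and saves the interesting ones as ordinary, concrete test methods.
    [PexMethod]
    public void DivideRoundTrips(int x, int y)
    {
        PexAssume.IsTrue(y > 0);                  // rule out divide-by-zero (and overflow) paths
        int quotient = Calculator.Divide(x, y);   // hypothetical method under test
        Assert.AreEqual(x, quotient * y + x % y); // invariant of integer division
    }
}

The thing you maintain is the parameterized test (its assumptions and assertions), not the concrete tests Pex saves from it.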
This seems like a subjective question...
I'd say that yes, tests written in a given framework/API are often tightly coupled to that framework. Pex's intention is not to generate "readable" tests, it's to ensure code coverage for a given set of constraints. If that's valuable to your product, then it's suitable - and certainly I'd wager that for a given team and a given codebase, this will provide value.
Every enterprise is different, but it's the product and its code that dictates the suitability of a testing tool. I would suggest that the thing to question is the value of Pex for a given codebase, regardless of the organisation in question.
I have used Pex in a large financial institution and would recommend its use, but only for a very specific case. I think Pex is good at what it does (as described elsewhere here, white-box tests to find edge cases), but the tests do not have significant longevity as they are very tightly coupled.
Basically, Pex is great at generating coverage. If you have no tests and want some fast, use Pex. BUT THEN I recommend that instead of using it again you enforce standards for new code to meet an agreed coverage metric with hand-written tests.
In this way the fragile tests from Pex get replaced over time with tests that are more flexible and of a higher quality.
Pex is very useful for testing complex algorithms that do not rely on anything external. For example, it will not help you find edge cases in SQL statements or file access. However, for finding edge cases and increasing code coverage, it is extremely useful in addition to your normal unit tests.
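As a made-up example of the sort of pure method where this pays off: Pex analyses the branches below and constructs the specific inputs - a null array, an empty array - that reach the failing paths, which a hand-written happy-path test easily misses.

// Hypothetical pure method with edge cases of the kind Pex tends to surface:
// a null array (NullReferenceException) and an empty array (DivideByZeroException).
public static int Average(int[] values)
{
    int sum = 0;
    foreach (int v in values)
    {
        sum += v;
    }
    return sum / values.Length;
}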
I have a class that I need to unit test.
For background, I'm developing in C# and using NUnit, but my question is more theoretical:
I don't know if I've written enough test methods and if I checked all the scenarios.
Is there a known working method/best practices/collection of rules for that?
Something like
"check every method in your class ...bla bla "
"check all the inserts to DB ...bla bla "
(this is a silly example of possible rules but if I had something not silly on my mind I wouldn't ask this question)
There are several available metrics for unit testing. Have a look into both code coverage and orthogonal testing.
However, I would say that this is not the best way of addressing the problem. While 100% code coverage is an admirable goal, it can become the sort of metric that obscures the actual quality of the tests.
Personally I think you would get better results from investigating test driven development - using this approach you know you have good coverage (both in terms of lines of code and in terms of functionality of your class) because you have been writing the tests to exercise your class before you wrote the class methods themselves.
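For example (the names here are purely illustrative), in a test-first flow the NUnit test below is written before PriceCalculator.ApplyDiscount exists, so the behaviour is specified and covered from the outset:

using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    // Written before PriceCalculator.ApplyDiscount exists; the implementation
    // is then written to make this (initially failing) test pass.
    [Test]
    public void ApplyDiscount_TenPercent_ReducesPriceAccordingly()
    {
        var calculator = new PriceCalculator();
        decimal discounted = calculator.ApplyDiscount(price: 200m, percent: 10m);
        Assert.AreEqual(180m, discounted);
    }
}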
You might want to look at your test coverage. NCover is the code coverage solution from the developers of NUnit.
You can look into NCover or the Visual Studio code coverage tools that support NUnit.
The measure used for test coverage of code is called "code coverage".
As per Wikipedia:
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. In time, the use of code coverage has been extended to the field of digital hardware, the contemporary design methodology of which relies on hardware description languages (HDLs).
Code coverage measurements are given as a percentage. Different teams and projects set their own test coverage goals. I don't know if there is an industry "best practice" number, but most of my projects set this number at 80%.
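As a rough illustration of where those percentages come from (exact numbers depend on the tool), consider the trivial method below: a single test that calls Classify(5) executes the first return but never the second, so branch coverage is only 50% and the uncovered line shows up immediately in a coverage report.

public static string Classify(int n)
{
    if (n >= 0)
        return "non-negative"; // executed by a test calling Classify(5)
    return "negative";         // never executed by that test
}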
For instance, if you are working on a project that has a lot of UI code, chances are the unit test coverage for that is low, but if you're working on a library, chances are every method has a proper unit test.
For .NET, one of the popular tools for code coverage is NCover.
As the others have mentioned, coverage provides one metric by which to measure the quality of your tests, but it does not tell you anything about how well the tests test your code. Just because a line is executed, it does not mean that all possible permutations of that line have been executed.
You may find some usefulness in a tool such as Pex, which will test your code with various inputs to see what it does in those situations. This will give you good coverage (as it will tailor the inputs to exercise all of the possible paths through your code), but it will also give you good coverage of the possible inputs (like ensuring your methods are tested with null inputs, or that methods which take lists are tested with empty lists or lists that contain null items, etc.).
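For instance, these are the kinds of edge-case tests I mean (Joiner.Join is a made-up method); a single happy-path test might already touch every line of Join, but these inputs exercise behaviour that line coverage alone would never flag:

using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class JoinerTests
{
    [Test]
    public void Join_NullList_ThrowsArgumentNullException()
    {
        Assert.Throws<ArgumentNullException>(() => Joiner.Join(null, ","));
    }

    [Test]
    public void Join_EmptyList_ReturnsEmptyString()
    {
        Assert.AreEqual(string.Empty, Joiner.Join(new List<string>(), ","));
    }

    [Test]
    public void Join_ListContainingNull_TreatsNullItemAsEmpty()
    {
        Assert.AreEqual("a,,b", Joiner.Join(new List<string> { "a", null, "b" }, ","));
    }
}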
There are other intriguing initiatives, like a tool which removes lines of code, recompiles, and re-runs the tests (essentially a form of mutation testing). If no tests fail in this scenario, then it assumes that you have a missing test, as there should be something which depends on that line, or else why is it there? I'll look for a link to that.
We have a project that is starting to get large, and we need to start applying unit tests as we start to refactor. What is the best way to apply unit tests to a project that already exists? I'm (somewhat) used to doing it from the ground up, where I write the tests in conjunction with the first lines of code onward. I'm not sure how to start when the functionality is already in place. Should I just start writing tests for each method from the repository up? Or should I start from the Controllers down?
Update:
To clarify the size of the project: I'm not really sure how to describe it other than to say there are 8 Controllers and about 167 files with a .cs extension, all done over about 7 developer-months.
As you seem to be aware, retrofitting testing into an existing project is not easy. Your method of writing tests as you go is the better way. Your problem is one of both process and technology: tests must be required by everyone or no one will use them.
The recommendation I've heard and agree with is that you should not attempt to wrap tests around an existing codebase all at once. You'll never finish. Start by working testing into your bugfix process: every fixed bug gets a test. This will start to work testing into your existing code over time. New code must always have tests, of course. Eventually, you'll get the coverage up to a reasonable percentage, but it will take time.
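A sketch of what "every fixed bug gets a test" looks like in practice (the bug number and OrderParser are invented): each fix ships with a test that pins the corrected behaviour down so the bug cannot quietly come back.

using NUnit.Framework;

[TestFixture]
public class OrderParserRegressionTests
{
    // Regression test for (hypothetical) bug #1234: trailing whitespace broke parsing.
    [Test]
    public void Parse_InputWithTrailingWhitespace_IsAccepted()
    {
        var order = OrderParser.Parse("42;WIDGET;3 ");
        Assert.AreEqual(3, order.Quantity);
    }
}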
One good book I've had recommended to me is Working Effectively With Legacy Code by Michael C. Feathers. The title doesn't really demonstrate it, but working testing into an existing codebase is a major subject of the book.
There are lots of approaches to fitting tests around an existing codebase. Unit tests are not necessarily the most productive way to start. If you have a large amount of code written then you might want to think about functional and integration tests before you work down to the level of unit tests. Those higher level tests will help give you broad assurance that your product continues to work while you make changes to improve the structure and retrofit unit tests.
One of the practices that non-test-first organizations use that I recommend highly in your situation is this: Have someone other than the author of the original code section write the unit tests for that section. This gets you some level of cross-training and sanity checking, and it also helps ensure that you don't preserve assumptions which will do damage to your code overall.
Other than that, I'll second the recommendation for Michael Feathers' book.
For a legacy project with a decently sized code base, unit testing everything may not be a justifiable spend of effort due to budgetary constraints, etc. Based on my reading on this subject, I would suggest:
Every bug which has been leaked to QA, Release or Production environment is a candidate for writing unit test case(s) along with fixing the bug.
Use source control to find out which sections/files of your code base are changing more frequently than others. Bring those sections/files under unit test coverage.
New story development should have meaningful unit test cases written against it.
Keep monitoring the unit test coverage to observe any downward trend in a particular area of the code base. Such an area needs you to zoom in and review whether unit test coverage is losing its effectiveness.
P.S.: I have added Michael Feathers book to my reading list, thanks for suggesting it.
The code was migrated using a third-party tool. Whatever the tool couldn't do was done by the .NET developers, so that all compile issues were fixed. My question is: for such migration activities, do we not bother running unit tests for the functions?
Secondly, could anyone suggest whether we should use some tool in VSTS 10 to create a UML model of this code to minimize the risk of issues that the client might find? How cumbersome is that?
Are there any other suggestions for how quality migrated code can be delivered, in light of the fact that the functionality of the original VB6 application is unknown to us.
for such migration activities, do we not bother running unit tests for the functions.
I wouldn't trust freshly translated code (mechanical or otherwise) at all. Absolutely it needs testing.
the functionality of the original VB6 application is unknown to us.
That will make regression testing quite... challenging. If you don't know how it is meant to behave, how do you know when you've finished it?
Of course, you could decide not to unit test the translated code, then you won't know how the new code works either - not sure that "unknown = unknown" counts as a "pass", though.
In my experience, the vast majority of applications provide a great deal of "unknown" functionality. After all, the reason we write software is to help us manage information in ways that immeasurably exceed our abilities as mere mortals. Over time, the size and complexity of our software grows, and grows, and grows until it contains a vast amount of "unknown" functionality. The unknown functionality was probably known and verified as "correct" at one time, and it was captured in detail by the source code. However, as time passes no one fully remembers/knows what all the functionality is or even why it is "correct". The full functionality is only "remembered/known" by the source code; teams "test what they change" and the rest is assumed correct unless a problem shows up. This is particularly true of systems that have been extended and changed by many people over many years. Of course this creates risk, and we can do better; processes like TDD and tools to automate unit testing are helping, but for many older systems lack of system understanding and incomplete testing are facts of life. The technical idealist in me does not like this, but the business realist in me accepts it.
All that said, this presents a major problem for migration teams. In theory these teams are "changing everything". In a VB6-to-.NET migration, "test what we changed" means test it all. Ouch. Also, the functional requirements for a migration are often "just make it do what it does now, but on the new platform." Not very useful when people do not know/remember everything the system does, let alone how to verify that it does it correctly. I am working with several customers that have huge VB6 apps containing hundreds of thousands of LOC organized into hundreds of forms and classes and several thousand methods, properties, and event handlers. I am sure these apps contain tens of thousands of function points. I like to ask migration teams how long it would take them to find the error if I went into the VB6 and "broke" one little thing somewhere. I rarely get an answer...
This is why I advocate using a tool-assisted rewrite methodology. One of the most critical inputs to this process is the production-tested source code. We assume this code is "correct" since you or your customers are running their business on it. The source code is an extremely detailed, formal, and complete answer to the question: what does the system do? In our approach, the migration team iteratively customizes, calibrates, and verifies the automatic, systematic translation and re-engineering of the VB6 source to a complete .NET source. We translate, test, tune, and repeat; each time improving the quality of the translation in terms of functional correctness and conformance to .NET coding standards. Verifying and refining what the tool does is central to the methodology.
In order to verify code quality, we use code reviews and "side-by-side" testing. Code reviews are done by inspecting the .NET code by eye and with other tools such as the .NET compiler, FxCop, NDepend, etc. We also do a lot of comparing of successive generations of the translated code using a product like Beyond Compare to verify that each translation tuning change has the desired effect and no undesired side effects. Side-by-side testing is just what it sounds like: the general idea is to run the legacy and .NET apps in side-by-side test environments and make sure their results and behaviors match. There are at least a couple of challenges here:
what do you do when you "run the app"; and
how do you make sure the results and behaviors match?
The first question is typically answered in terms of test data, use cases and automated unit tests; the second question is answered in terms of looking at the application UI, and the results (data, web pages, reports) from both systems and comparing (aka approval-based testing). Of course testing tools can go a long way to increase the efficiency. A large-scale migration is a very good time to have a discussion about starting to use testing tools.
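To sketch what an automated side-by-side check can look like (LegacyTaxEngine and MigratedTaxEngine are placeholders, and this assumes the legacy component can be driven from .NET, e.g. via COM interop):

using NUnit.Framework;

[TestFixture]
public class TaxEngineSideBySideTests
{
    [TestCase(0.0)]
    [TestCase(99.99)]
    [TestCase(100000.0)]
    public void MigratedEngineMatchesLegacyEngine(double amount)
    {
        double expected = new LegacyTaxEngine().Calculate(amount);  // legacy VB6 behaviour (via interop)
        double actual = new MigratedTaxEngine().Calculate(amount);  // translated .NET behaviour
        Assert.AreEqual(expected, actual, 1e-9);
    }
}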
If you are planning to migrate a large complex codebase, you need to plan to be very smart about testing. If done properly, the tool-assisted approach delivers production ready code very efficiently, and this will free up resources to produce QC artifacts and improve QC processes that will endure long after the migration.
Disclaimer: I work for Great Migrations.
From the tone of your question it sounds like you know the answer! I would say anything other than a complete set of regression tests would be a recipe for disaster! Ideally, you would want to run the same set of tests against both the old and new versions, although it sounds like you might not be able to do that...
My honest answer - make sure you've got plenty of support/maintenance developers ready to work round the clock fixing support issues!
We have a large body of legacy code, several portions of which are scheduled for refactoring or replacement. We wish to optimise parts that currently impact the user experience, facilitate reuse in a new product being planned, and hopefully improve maintainability too.
We have quite good/comprehensive functional tests for an existing product. These are a mixture of automated and manually-driven GUI tests, but they can take a developer more than half a day to run fully. The "low-level domain logic" has a good suite of unit tests (NUnit) with good coverage. Unfortunately, the remainder of the code has no unit tests (or, at least, no worthy unit tests).
What I'd like to find is a tool that automatically generates unit tests for specific methods/classes and maybe specific interfaces based on their use and behaviour in the functional tests. These unit tests would be invaluable for refactoring, and would also be run as part of our C.I. system to detect regressions much earlier than is currently happening (and to localise regressions much better than "button X doesn't work.").
Do any such tools exist? Do you have any recommendations for me?
I've come across Parasoft .TEST, which looks like it might do what I want. Do you have any comments on that, with respect to my situation?
I don't think something that just generates test code from a static analysis, ala NStub, is useful here. I suppose it is actually the generation of representative test data that is really important.
Please ignore the merits, or lack of, of automated test generation - it is not something I'd usually advocate. (Not least because you get tests that pass for broken code!)
Try Pex:
Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage. Pex performs a systematic analysis, hunting for boundary conditions, exceptions and assertion failures, which you can debug right away. Pex enables Parameterized Unit Testing, an extension of Unit Testing that reduces test maintenance costs.
Well, you could look at Pex - but I believe it invents its own data (it doesn't watch your existing tests, AFAIK).