I came across some strange results when using Code Coverage for our unit tests.
In the AssemblyInitialize function we do some initialization work (like AutoMapper, AbstractFactories) and this function is correctly executed.
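To give a concrete picture, the arrangement is roughly like this (the mapping types below are placeholders rather than our actual code, and this assumes the older static AutoMapper API):

```csharp
using AutoMapper;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TestAssemblySetup
{
    // MSTest runs this once, before any test in the assembly.
    [AssemblyInitialize]
    public static void AssemblyInit(TestContext context)
    {
        // Placeholder bootstrapping; the real code configures our
        // actual AutoMapper profiles and abstract factories.
        Mapper.Initialize(cfg => cfg.CreateMap<OrderDto, Order>());
    }
}

// Minimal placeholder types so the sketch compiles.
public class OrderDto { public int Id { get; set; } }
public class Order { public int Id { get; set; } }
```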
The strange thing is that Code Coverage shows that there is no coverage for the functions that are called from AssemblyInitialize. Is this by design or am I doing something wrong here?
I would go with by-design; it seems too specific to be anything else. As someone who is looking into doing something similar for an open-source coverage tool, it would seem odd for this to be an accident, and it would be a very unusual bug.
The TDD purist in me would say this is because the setup/teardown of any type (assembly/class) is not actually part of the test itself and so should not be included in coverage. You should instead have separate, specific tests for that code rather than rely on the test setup/initialization failing.
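For example, a dedicated test along these lines (sketched against the older static AutoMapper API; adapt it to whatever the setup actually configures) makes the mapping configuration itself the subject of a real test instead of a side effect of AssemblyInitialize:

```csharp
using AutoMapper;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MappingConfigurationTests
{
    [TestMethod]
    public void AutoMapper_configuration_is_valid()
    {
        // Throws if any destination member is unmapped or misconfigured,
        // so a broken configuration fails an explicit, attributable test.
        Mapper.AssertConfigurationIsValid();
    }
}
```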
Other .NET tools (dotCover, for one) do coverage by test and 'may' also exclude results gained while running such setups; this is conjecture rather than known fact, though.
Related
I am working on unit testing a code generator. A unit test's basic flow is:
The Unit Test calls the appropriate method and code is generated. Easy enough.
The Unit Test compiles the generated C# code (of step #1). If the code compiles, proceed to step 3, else stop everything.
If step #2 succeeded, the Unit Test then runs other, pre-written unit tests on the generated, compiled code of step 2. For this I will utilize the solution described here: Running individual NUnit tests programmatically and NUnit API And Running Tests Programmatically.
The approach for step #2 is what this question is about: I am thinking I have two options: (1) run the Visual Studio command line to compile the solution, or (2) use CSharpCodeProvider with CompilerParameters. Any recommendations would be greatly appreciated.
I personally use Roslyn on a daily basis, and so I was tempted to go with Kenneth and recommend it, but in your case, if the only information you want is whether the code compiled, I would lean more towards the CSharpCodeProvider class, especially if each method that is being unit tested generates a single file of code. If you had to do any kind of analysis on the generated code, it might be worth using Roslyn, but I doubt this is your case. The only other pro Roslyn might bring you is that you can open a whole project/solution directly instead of trying to compile every separate file, which might appeal to you (it's a lot simpler to use than you might think).
Besides this advice, all I can say is that if you just have to choose between CSharpCodeProvider and the command-line option, I would definitely go with CSharpCodeProvider, since it already wraps and exposes the data and operations relevant to your analysis (Are there any errors? => let's check whether compiledResults.Errors.Count == 0). You don't need to call an external process (the C# compiler) in order to get what you want, which makes it a much simpler option in my opinion, while also giving you a lot of flexibility (e.g. the CompilerParameters).
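A minimal sketch of that check with CSharpCodeProvider could look like the following (the generated source string is a stand-in, and you would add whatever assembly references your generated code actually needs):

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class GeneratedCodeCompiler
{
    // Compiles a string of generated C# in memory and reports success.
    static bool TryCompile(string source, out CompilerResults results)
    {
        using (var provider = new CSharpCodeProvider())
        {
            var parameters = new CompilerParameters
            {
                GenerateInMemory = true,    // nothing written to disk for step #2
                GenerateExecutable = false  // build a library, not an .exe
            };
            parameters.ReferencedAssemblies.Add("System.dll");

            results = provider.CompileAssemblyFromSource(parameters, source);
            return !results.Errors.HasErrors; // errors only; warnings still pass
        }
    }

    static void Main()
    {
        const string generated = "public class Foo { public int Bar() { return 42; } }";
        CompilerResults results;
        Console.WriteLine(TryCompile(generated, out results)
            ? "Compiled OK"
            : "Failed: " + results.Errors[0]);
    }
}
```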
I don't know if you have started playing around with it, but you shouldn't have too many problems with this method. Hope this helps.
I love the idea of Pex - autogenerating unit tests through static code analysis - but the tests that are actually generated by the tool are horrible, ugly, tightly coupled to Pex modules, difficult to read and understand, etc.
Is a tool like that really suitable (in its current state) for use in an enterprise environment, where the emphasis must be on ease of maintenance?
Or have I misunderstood the intended use of Pex?
Indeed, you have misunderstood the intended use.
Pex is a white-box testing tool. It generates test cases based on the analysis of the code it should test. The reason for this is to detect and test edge cases. So basically, you shouldn't even change the auto-generated tests.
Your normal unit-tests can't be replaced by Pex. It's just an additional tool.
This seems a subjective question...
I'd say that yes, tests written in a given framework/API are often tightly coupled to that framework. Pex's intention is not to generate "readable" tests; it's to ensure code coverage for a given set of constraints. If that's valuable to your product, then it's suitable - and certainly I'd wager that for a given team and a given codebase, this will provide value.
Every enterprise is different, but it's the product and its code that dictates the suitability of a testing tool. I would suggest that the thing to question is the value of Pex for a given codebase, regardless of the organisation in question.
I have used Pex in a large financial institution and would recommend its use, but only for a very specific case. I think Pex is good at what it does (as described elsewhere here, white-box tests to find edge cases), but the tests do not have significant longevity as they are very tightly coupled.
Basically, Pex is great at generating coverage. If you have no tests and want some fast, use Pex. BUT THEN I recommend that instead of using it again you enforce standards for new code to meet an agreed coverage metric with hand-written tests.
In this way the fragile tests from pex get replaced over time with tests that are more flexible and of a higher quality.
Pex is very useful for testing complex algorithms that do not rely on anything external. For example, it will not help you find edge cases in SQL statements or file access. However, for finding edge cases and increasing code coverage, it is extremely useful in addition to your normal unit tests.
I've spent the last few weeks trying to figure out a way to implement (or find someone who has implemented) regression testing for our build process, but so far I haven't found anything that works. We use TFS2008 and VS2010, and upgrading to TFS2010 is not an option for us. I've tried to use NDepend to give us the list of changed methods and type dependencies, but running it through our build script has proven supremely unreliable (if I run the same build twice without changing anything, I would not be surprised to get one perfect NDepend report and one exception saying NDepend can't run for one reason or another).
Unfortunately, I'm pretty much stuck with the tools I have (TFS2008, VS2010, MSBuild, and MSTest). I could probably get another tool, but changing the tools I already have (such as moving from MSTest to NUnit, or TFS2008 to TFS2010) will not be possible.
Has anyone done this already? Or can someone point me in the right direction to find which methods and types changed between two builds programmatically?
If you have unit tests and a coverage report, then you could diff the coverage reports before and after. Any changes to the coverage would show up there. You could then regression test off that (which I assume is manual).
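As a rough sketch of that idea (the XML element and attribute names here are invented; substitute whatever schema your coverage tool actually emits), you could load the two reports and print the methods whose coverage changed:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class CoverageDiff
{
    static void Main(string[] args)
    {
        // args[0] = coverage report from the previous build,
        // args[1] = coverage report from the current build.
        // Assumes hypothetical <method name="..." coverage="0.85"/> entries.
        Func<string, ILookup<string, double>> load = path =>
            XDocument.Load(path)
                     .Descendants("method")
                     .ToLookup(m => (string)m.Attribute("name"),
                               m => (double)m.Attribute("coverage"));

        var before = load(args[0]);
        var after = load(args[1]);

        foreach (var method in after)
        {
            double previous = before[method.Key].DefaultIfEmpty(0).First();
            double current = method.First();
            if (Math.Abs(previous - current) > 0.001)
                Console.WriteLine("{0}: {1:P0} -> {2:P0}", method.Key, previous, current);
        }
    }
}
```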
I have a class that I need to Unit Test.
For background I'm developing in c# and using NUnit, but my question is more theoretical:
I don't know if I've written enough test methods and if I checked all the scenarios.
Is there a known working method/best practices/collection of rules for that?
Something like
"check every method in your class ...bla bla "
"check all the inserts to DB ...bla bla "
(this is a silly example of possible rules but if I had something not silly on my mind I wouldn't ask this question)
There are several available metrics for unit testing. Have a look into both code coverage and orthogonal testing.
However, I would say that this is not the best way of addressing the problem. While 100% code coverage is an admirable goal, it can become the sort of metric which obscures the actual quality of the tests.
Personally, I think you would get better results from investigating test-driven development - using this approach you know you have good coverage (both in terms of lines of code and in terms of the functionality of your class) because you have been writing the tests to exercise your class before you wrote the class methods themselves.
You might want to look at your test coverage. NCover is a widely used code coverage tool for .NET and works well with NUnit.
You can look into NCover or the Visual Studio code coverage tool, both of which support NUnit.
The metric used to measure test coverage of code is called "code coverage".
As per Wikipedia:
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. In time, the use of code coverage has been extended to the field of digital hardware, the contemporary design methodology of which relies on hardware description languages (HDLs).
Code coverage measurements are given as a percentage. Different teams and projects set their own test coverage goals. I don't know if there is an industry "best practice" number, but most of my projects set this number at 80%.
For instance, if you are working on a project that has a lot of UI code, chances are the unit test coverage for that is low, but if you're working on a library, chances are every method has a proper unit test.
For .NET, one of the popular tools for code coverage is NCover.
As the others have mentioned, coverage provides one metric by which to measure the quality of your tests, but this does not tell you anything about how well the tests test your code. Just because a line is executed, it does not mean that all possible permutations of that line have been executed.
You may find some usefulness in a tool such as Pex, which will test your code with various inputs to see what it does in those situations. This will give you good coverage (as it will tailor the inputs to exercise all of the possible paths through your code) but will also give you good coverage of the possible inputs (like ensuring your methods are tested with null inputs, or that methods which take lists are tested with empty lists or lists that contain null items, etc.).
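To illustrate the point about coverage of lines versus coverage of inputs, here is a contrived NUnit example (all names invented): the single happy-path test covers every line of Average, yet the empty-list and null inputs that a tool like Pex would try are still broken.

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public static class Stats
{
    // Every line below is "covered" by the single test underneath,
    // but null and empty-list inputs are never exercised.
    public static double Average(IList<int> values)
    {
        return values.Sum() / (double)values.Count; // NaN for empty, throws for null
    }
}

[TestFixture]
public class StatsTests
{
    [Test]
    public void Average_of_two_and_four_is_three() // gives 100% line coverage of Average
    {
        Assert.AreEqual(3.0, Stats.Average(new[] { 2, 4 }));
    }

    // A tool like Pex would also generate inputs such as null or an empty
    // list, exposing the exception and the NaN result noted above.
}
```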
There are other intriguing initiatives, like a tool which removes lines of code, recompiles, and re-runs the tests (essentially mutation testing). If no tests fail in this scenario, then it assumes that you have a missing test, as there should be something which depends on that line, or else why is it there? I'll look for a link to that.
We have a large body of legacy code, several portions of which are scheduled for refactoring or replacement. We wish to optimise parts that currently impact the user experience, facilitate reuse in a new product being planned, and hopefully improve maintainability too.
We have quite good/comprehensive functional tests for an existing product. These are a mixture of automated and manually-driven GUI tests, but they can take a developer more than half a day to run fully. The "low-level domain logic" has a good suite of unit tests (NUnit) with good coverage. Unfortunately, the remainder of the code has no unit tests (or, at least, no worthy unit tests).
What I'd like to find is a tool that automatically generates unit tests for specific methods/classes and maybe specific interfaces based on their use and behaviour in the functional tests. These unit tests would be invaluable for refactoring, and would also be run as part of our C.I. system to detect regressions much earlier than is currently happening (and to localise regressions much better than "button X doesn't work.").
Do any such tools exist? Do you have any recommendations for me?
I've come across Parasoft .TEST, which looks like it might do what I want. Do you have any comments on that, with respect to my situation?
I don't think something that just generates test code from a static analysis, à la NStub, is useful here. I suppose it is actually the generation of representative test data that is really important.
Please ignore the merits, or lack of, of automated test generation - it is not something I'd usually advocate. (Not least because you get tests that pass for broken code!)
Try Pex:
Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage. Pex performs a systematic analysis, hunting for boundary conditions, exceptions and assertion failures, which you can debug right away. Pex enables Parameterized Unit Testing, an extension of Unit Testing that reduces test maintenance costs.
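For context, a parameterized unit test for Pex looks roughly like this sketch (the class under test is hypothetical, and the attribute names are from Microsoft.Pex.Framework as best I recall); Pex explores the method and saves concrete test cases for the interesting inputs it finds:

```csharp
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[PexClass(typeof(StringSplitter))] // hypothetical class under test
public partial class StringSplitterTests
{
    // Pex picks values for 'input' and 'separator' by analysing the code,
    // then emits concrete test cases for the edge cases it hits.
    [PexMethod]
    public void Split_never_returns_null(string input, char separator)
    {
        string[] parts = StringSplitter.Split(input, separator);
        Assert.IsNotNull(parts);
    }
}

// Hypothetical implementation, only so the sketch is self-contained.
public static class StringSplitter
{
    public static string[] Split(string input, char separator)
    {
        return input == null ? new string[0] : input.Split(separator);
    }
}
```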
Well, you could look at Pex - but I believe it invents its own data (it doesn't watch your existing tests, AFAIK).