On the OpenCover GitHub page I can see that OpenCover supports coverage by test ("Release 3 (coverage by test support, debug symbols)"). The issue is, I don't know how to run OpenCover with this option. My workflow is to run unit tests with OpenCover and NUnit, then use ReportGenerator to generate a full HTML report and view it - and I can't see "coverage by test" anywhere.
Or maybe I've misunderstood the "coverage by test" feature? The way I imagine it, I could get an answer to a question such as "which lines of code does my TestXYZ() cover?".
Can anyone give me some tips on how to use the feature?
I submitted this as an issue to Daniel Palme, who is responsible for ReportGenerator, and he actually agreed to add support for this capability! What's more, he has already put it into the repository (http://reportgenerator.codeplex.com/SourceControl/changeset/70732).
What a great guy!
You will need to use the -coverbytest switch, which should be detailed in the Usage.rtf guide - it uses the same sort of filters as coverage inclusion/exclusion.
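For example, an invocation could look roughly like this (the runner path, assembly names and filters are placeholders - check Usage.rtf for the exact syntax of your version):

```
OpenCover.Console.exe -register:user ^
  -target:"nunit-console.exe" ^
  -targetargs:"MyProject.Tests.dll /noshadow" ^
  -filter:"+[MyProject]*" ^
  -coverbytest:"+[MyProject.Tests]*" ^
  -output:coverage.xml
```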
However, ReportGenerator does not support OpenCover's coverage-by-test feature - you will need to write your own reporting for this - the XML from OpenCover is easy to understand, though.
Choose a test method and then locate which lines of code it is recorded against.
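For instance, here is a minimal sketch of such a report, assuming OpenCover's usual layout (TrackedMethod declarations per module, TrackedMethodRef elements under each SequencePoint); the input file name is a placeholder:

```csharp
// Minimal sketch: list which source lines each tracked test hits.
using System;
using System.Linq;
using System.Xml.Linq;

class CoverageByTest
{
    static void Main(string[] args)
    {
        // Path is a placeholder; pass your OpenCover output file instead.
        var doc = XDocument.Load(args.Length > 0 ? args[0] : "coverage.xml");

        foreach (var module in doc.Descendants("Module"))
        {
            // Map file uids to paths and tracked-method uids to test names.
            var files = module.Descendants("File").ToDictionary(
                f => (string)f.Attribute("uid"), f => (string)f.Attribute("fullPath"));
            var tests = module.Descendants("TrackedMethod").ToDictionary(
                t => (string)t.Attribute("uid"), t => (string)t.Attribute("name"));

            // Each sequence point carries refs to the tests that visited it.
            foreach (var sp in module.Descendants("SequencePoint"))
            {
                string file;
                files.TryGetValue((string)sp.Attribute("fileid") ?? "", out file);
                foreach (var tmRef in sp.Descendants("TrackedMethodRef"))
                {
                    string test;
                    tests.TryGetValue((string)tmRef.Attribute("uid") ?? "", out test);
                    Console.WriteLine("{0} covers {1}:{2}",
                        test ?? "?", file ?? "?", (string)sp.Attribute("sl"));
                }
            }
        }
    }
}
```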
Related
I would like to do some analytics on .NET unit test coverage, and for that I would like to get access to the raw data from a unit test run, of the form [("SerializationTest.DeserializeComposedObject", ["Serializator.cs:89", "Serializator.cs:90", "Serializator.cs:91"])], i.e., I would like to see the list of lines affected by each test separately.
I noticed there are questions about how to get such data in graphical form (NCrunch), but I would like to process the data further. Is such functionality available anywhere?
There is an option called coverbytest in OpenCover that does exactly what I need here. It adds nodes like <TrackedMethodRef uid="10" vc="7" /> to the tool's XML output, marking which code points were visited by which tests.
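Illustratively, the relevant parts of the output hang together like this (uids, attributes shown and names are invented; nesting per OpenCover's usual schema):

```xml
<!-- Each test is declared once per module... -->
<TrackedMethods>
  <TrackedMethod uid="10" strategy="NUnitTestMethod"
                 name="System.Void Tests.SerializationTest::DeserializeComposedObject()" />
</TrackedMethods>
<!-- ...and each visited sequence point references it by uid. -->
<SequencePoint vc="7" sl="89" el="91" fileid="1">
  <TrackedMethodRefs>
    <TrackedMethodRef uid="10" vc="7" />
  </TrackedMethodRefs>
</SequencePoint>
```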
I'm trying to create a test suite in which I do not want to include all the feature files. Is that possible? This is along the lines of a TestNG.xml test suite. Does SpecFlow provide any such feature? Is there any documentation for the tags in Default.srprofile that I can use to include/exclude the feature files I want to run?
You can configure filtering for Tags/Features/Scenarios in the Default.srProfile.
See the documentation here: http://www.specflow.org/plus/documentation/SpecFlowPlus-Runner-Profiles/#Filter
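As a rough sketch (the exact schema URL and available elements depend on your SpecFlow+ Runner version, and the project name is a placeholder - treat this as illustrative), a tag filter in Default.srprofile could look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<TestProfile xmlns="http://www.specflow.org/schemas/plus/TestProfile/1.5">
  <Settings projectName="MyProject" />
  <!-- Run only the scenarios tagged @smoke or @new. -->
  <Filter>@smoke | @new</Filter>
</TestProfile>
```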
For example, tag the scenarios (Gherkin tags use @; a leading # would make the line a comment):
@smoke
Scenario: B
.....
@new
Scenario: A
and then select both with:
--filter "@smoke | @new"
The ordered way of running tests is not happening, though - the scenarios run in alphabetical order, regardless of the order in which they are defined.
I'm using OpenCover to generate functional test coverage for a web application. These tests are fairly long-running (3+ hours), so we've chopped them up into multiple test runs that execute in parallel. So instead of a single coverage report, there are six.
In order to import these coverage reports into SonarQube, I need to figure out a way to combine them into one uber report. ReportGenerator supports merging multiple reports into one, but creates HTML output, which is not something SonarQube can consume.
At this point my options are:
Hand-roll an OpenCover report merger (blech!)
Run my functional tests serially, substantially increasing failure feedback times
Any other options I'm missing?
I have created the following ticket on the SonarQube .NET side to allow multiple coverage reports to be specified, and to aggregate them: http://jira.codehaus.org/browse/SONARPLUGINS-3666.
In the meantime, though, I cannot think of other options besides the two you already listed.
Newer versions of ReportGenerator support wildcards.
You can pass all the XML reports as "*.xml" and ReportGenerator will generate one consolidated report from them.
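For example (report and target directory names are placeholders):

```
ReportGenerator.exe "-reports:coverage\*.xml" "-targetdir:coverage-report"
```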
OpenCover has a -mergeoutput argument that makes it treat the -output file in an append fashion, preserving the previous measurements found there. That should allow you to call the individual test runs separately - as long as your SUT stays the same.
My experience with trying to run tests with different -filter arguments is that OpenCover refuses to reopen a module that was filtered out in a previous test run. Still, worth a try in my opinion.
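A sketch of what sequential merged runs could look like (runner and test DLL names are placeholders; the runs append to the same -output file one after another, not in parallel):

```
OpenCover.Console.exe -register:user -target:"nunit-console.exe" -targetargs:"FuncTests.Part1.dll" -output:coverage.xml
OpenCover.Console.exe -register:user -mergeoutput -target:"nunit-console.exe" -targetargs:"FuncTests.Part2.dll" -output:coverage.xml
```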
One of the most useful code analysis features of ReSharper is marking symbols as unused if no usage is found in the solution.
Unfortunately, any symbol that is covered by unit tests is regarded as used.
=> I am looking for a way to make this usage analysis ignore the unit tests.
Scanning through the ReSharper options I found a button labeled "Edit Items to Skip". It has a long description that says, amongst other things, "...if a certain symbol in solution is only used in files that you skip, this symbol will be highlighted as never used."
This sounded like exactly what I wanted. But putting the unit test project on the skip list not only reveals the effectively unused symbols, it also disables the whole code analysis for the test project. Of course I would still want to write good unit test code and thus make use of all of ReSharper's code analysis features. I just don't want usages from the test project to count for symbols outside it.
Any ideas?
Discovered a very simple answer:
Unload the test project and refresh the code analysis.
So I'm trying to find a way to insert new generic test cases into TFS through C#. These are the same ones that you can create in Visual Studio, so I was hoping there was some way to do this with the TFS API. Any hints or suggestions are greatly appreciated.
Thanks!!
The key difference Ewald is pointing to is that there are Test Case work items (logical sets of tests you need to execute as a piece of recorded work) and physical tests that can verify behavior. A Generic Test is an artifact that executes another tool and verifies the results of that other tool. It really has little direct relationship to TFS. These are things you add to a Visual Studio test project and can, but are not required to, place in source control (TFS or otherwise).
You can likely use an existing Generic Test file as your template for automating this process if you have the need.
Follow these instructions http://www.ewaldhofman.nl/post/2009/12/11/TFS-SDK-2010-e28093-Part-5-e28093-Create-a-new-Test-Case-work-item.aspx to create a new Test Case work item in TFS.
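As a minimal sketch using the TFS 2010 work item tracking API (the collection URL, project name and title are placeholders; note this creates only the work item - the Generic Test file itself would still live in your test project/source control):

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class CreateTestCase
{
    static void Main()
    {
        // Connect to the team project collection (URL is a placeholder).
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tfs.GetService<WorkItemStore>();

        // Create a new Test Case work item in the given team project.
        var type = store.Projects["MyProject"].WorkItemTypes["Test Case"];
        var testCase = new WorkItem(type)
        {
            Title = "Generic test: run external tool and check its results"
        };
        testCase.Save();
        Console.WriteLine("Created work item {0}", testCase.Id);
    }
}
```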