I'm using OpenCover to generate functional test coverage for a web application. These tests are fairly long running (3+ hours), so we've chopped them up into multiple tests that run in parallel. So instead of a single coverage report, there are six.
In order to import these coverage reports into SonarQube, I need to figure out a way to combine them into one uber report. ReportGenerator supports merging multiple reports into one, but creates HTML output, which is not something SonarQube can consume.
At this point my options are:
Hand-roll an OpenCover report merger (blech!)
Run my functional tests serially, substantially increasing failure feedback times
Any other options I'm missing?
I have created the following ticket on the SonarQube .NET side to allow multiple coverage reports to be specified, and to aggregate them: http://jira.codehaus.org/browse/SONARPLUGINS-3666.
In the meantime, though, I cannot think of other options besides the two you already have.
Newer versions of ReportGenerator support wildcards.
You can provide all the XML reports as "*.xml" and ReportGenerator will generate one consolidated report from them.
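For example (a hedged sketch; the report file names and the output directory are placeholders):
ReportGenerator.exe "-reports:coverage-part*.xml" "-targetdir:coveragereport"
The -reports argument accepts a semicolon-separated list of files or wildcard patterns, so all six partial reports can be fed in at once.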
OpenCover has a -mergeoutput argument that makes it work with the -output file in an append fashion, preserving the previous measurements found there. That should allow you to run the individual test runs separately, as long as your SUT stays the same.
My experience with trying to run tests with different -filter arguments is that OpenCover refuses to reopen a module that has been filtered out in a previous test run. Still, worth a try in my opinion.
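For reference, a hedged command-line sketch (the runner, test assemblies and paths are made up; the relevant part is that both runs share one -output file and pass -mergeoutput):
OpenCover.Console.exe -register:user -target:nunit3-console.exe -targetargs:FuncTests.Part1.dll -output:coverage.xml -mergeoutput
OpenCover.Console.exe -register:user -target:nunit3-console.exe -targetargs:FuncTests.Part2.dll -output:coverage.xml -mergeoutput
After the last run, coverage.xml should contain the merged measurements and can be handed to SonarQube as a single report.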
While working on unit tests for a project which generates html using the Razor engine, I discovered a really weird scenario.
In order to get the unit test to be correct, I hard-coded the model, called the function, and saved the generated html code. Our business users viewed the generated html and gave the seal of approval, and our designers examined the html code and said everything looks good.
I now had an html file which I could use to compare against in the unit test to ensure that any changes to the code would not produce a different html file given the exact same model data.
On my local development machine, the unit test passes when comparing the byte arrays (File.ReadAllBytes(path)). However, on our build agent the unit test fails due to extra ASCII 13 bytes; here is a snippet of a section of the byte arrays:
Build Agent: 111-100-121-62-13-10-32-32-32
Local Machine: 111-100-121-62-10-32-32-32
I'm not sure what is going on here or how to resolve this. Is this normal? How would I rewrite the test to fix this?
Additional Information:
The build agent is running Windows Server 2016, Visual Studio 2017 15.7.6
My local development box is running Windows 10 Enterprise 10.0.14393, Visual Studio 2017 15.8.1
"different html file given the exact same model data": There are infinitely many different HTML files that are semantically equivalent when parsed and even more when rendered.
To get over your hurdle though—if you don't need to consider newlines in <pre> and related elements—you could just
// Note: Ignoring newlines because they are believed to be insignificant. No <pre> etc. expected.
// This allows newlines to vary across systems and across time.
// Requires System.Linq and System.Text.RegularExpressions (and MSTest's CollectionAssert).
Func<String, List<String>> splitByLineAndRemoveEmpty =
    input => Regex.Split(input, @"\r|\n").Where(line => !String.IsNullOrEmpty(line)).ToList();
CollectionAssert.AreEqual(
    splitByLineAndRemoveEmpty(expected),
    splitByLineAndRemoveEmpty(File.ReadAllText(path)));
That could kick the can far enough down the testing road that it won't bother anyone again for a while.
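A smaller alternative, if you would rather keep a single string comparison, is to normalize the line endings on both sides before asserting; a minimal sketch, assuming the same expected and path variables as above:
// Normalize CRLF to LF on both the expected and the actual text so that
// line-ending differences between machines do not fail the comparison.
string Normalize(string s) => s.Replace("\r\n", "\n");
Assert.AreEqual(
    Normalize(expected),
    Normalize(File.ReadAllText(path)));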
I would like to do some analytics on .NET unit test coverage, and I would like to get access to the raw data from a unit test run in the form [("SerializationTest.DeserializeComposedObject", ["Serializator.cs:89", "Serializator.cs:90", "Serializator.cs:91"])], i.e., I would like to see the list of lines affected by each test separately.
I noticed there are questions about how to get such data in graphical form (NCrunch), but I would like to process the data further. Is there such functionality available anywhere?
There is an option called coverbytest in OpenCover that does exactly what I need here. It adds nodes like <TrackedMethodRef uid="10" vc="7" /> to the XML output of the tool, marking which code points were visited by which tests.
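To turn that XML into the per-test line lists described above, something along these lines should work as a starting point (a sketch only: it assumes the usual OpenCover layout of TrackedMethods per Module, a FileRef per Method and TrackedMethodRef entries under each SequencePoint; attribute names can vary slightly between OpenCover versions):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

class CoverageByTest
{
    static void Main(string[] args)
    {
        // Load the OpenCover report (the default path here is a placeholder).
        var doc = XDocument.Load(args.Length > 0 ? args[0] : "results.xml");
        var linesByTest = new Dictionary<string, SortedSet<string>>();

        foreach (var module in doc.Descendants("Module"))
        {
            // uid -> source file path and uid -> tracked test name lookups for this module.
            var files = module.Descendants("File")
                .ToDictionary(f => (string)f.Attribute("uid"), f => (string)f.Attribute("fullPath"));
            var tests = module.Descendants("TrackedMethod")
                .ToDictionary(t => (string)t.Attribute("uid"), t => (string)t.Attribute("name"));

            foreach (var method in module.Descendants("Method"))
            {
                var fileUid = (string)method.Element("FileRef")?.Attribute("uid");
                files.TryGetValue(fileUid ?? "", out var path);

                foreach (var sp in method.Descendants("SequencePoint"))
                foreach (var reference in sp.Descendants("TrackedMethodRef"))
                {
                    if (!tests.TryGetValue((string)reference.Attribute("uid"), out var testName))
                        continue;
                    if (!linesByTest.TryGetValue(testName, out var lines))
                        linesByTest[testName] = lines = new SortedSet<string>();
                    // sl is the start line of the sequence point.
                    lines.Add($"{path ?? "?"}:{(string)sp.Attribute("sl")}");
                }
            }
        }

        foreach (var entry in linesByTest)
            Console.WriteLine($"{entry.Key}: {string.Join(", ", entry.Value)}");
    }
}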
First of all, I'm new here and new to SpecFlow. I'll try to be as clear as possible because I'm still exploring ways to solve my problems, so please bear with me :)
Alright, here I go. I have a solution (let's call it DBHelper) that does a few operations on a database, and I want to provide a tool in BDD (using SpecFlow) to determine and set up a test suite using TestRail that will run automatically. These tests could be a set composed of a single scenario run several times but with different values. I'm still very early in the development of this tool, so the version I have right now is connected to DBHelper and does a single operation when I run either SpecRun or NUnit.
Here is my scenario:
Scenario: InsertBuildCommand
Given The build name is AmazingTest
And The build type is Test
And The platform is PC
And The number of files in the build is 13
And Each file is 8 MB
And The project code name is UPS
And The studio code name is MTL
And The environment is TEST
When The command executes
Then The build should be inserted in the DB with the correct files in it
Now I am looking for a way to make the scenario dynamic. I ultimately want the user to be able to run the scenario with their own choice of values (e.g. the name of the build would be MoreAmazingTest) without being in VS. I know you can run SpecRun from the command line, but I am clueless as to how to close the gap between the originally hardcoded scenario values and the user input. The steps contain regular expressions where useful, so it really is just about the scenario values.
Someone told me about coding a custom plugin or reverse engineering SpecRun and making a modified version of it, but I have no idea how that would help me. Pardon me if it doesn't all make sense; I'm not an expert :x
Thanks a lot!
If I understand your question properly, you can use a Scenario Outline rather than a Scenario (see the Scenario Outline help).
You would then have something like this:
Scenario Outline: test using multiple examples
Given I do something
When I enter <numbers>
And I click a button
Then I will have an answer
Examples:
|numbers|
|1 |
|2 |
|3 |
It will then run the same scenario for each example given.
One way is to define some kind of configuration file which the step definitions will read and perform the tests against. After you change the file you can run the tests however you want, from the command line or from VS, and they will read the file and get the values from there.
I use environment variables for that.
But if you really need arguments, you could also create an .exe (console app) which uses SpecFlow/NUnit/etc. to pass the command-line arguments to your classes.
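As an illustration of the environment-variable approach (a sketch only: the variable name BUILD_NAME, the class and the step wording are assumptions, not taken from the question):
using System;
using TechTalk.SpecFlow;

[Binding]
public class BuildSteps
{
    private string _buildName;

    [Given(@"The build name is (.*)")]
    public void GivenTheBuildNameIs(string buildName)
    {
        // If BUILD_NAME is set (e.g. from a build script or the command line),
        // it overrides the value written in the feature file.
        _buildName = Environment.GetEnvironmentVariable("BUILD_NAME") ?? buildName;
    }
}
The feature file then keeps a sensible default (AmazingTest), while a caller outside VS can set BUILD_NAME before launching the runner.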
On the OpenCover GitHub page I can see that OpenCover supports coverage by test ("Release 3 (coverage by test support, debug symbols"). The issue is, I don't know how to run OpenCover with this option. My workflow is to run unit tests with OpenCover and NUnit, then use ReportGenerator to generate a full HTML report and view it, and I can't see the "coverage by test" anywhere.
Or maybe I got the "coverage by test" feature wrong? How I imagine this feature is that I can get an answer to a question such as "which lines of code does my TestXYZ() cover?".
Can anyone give me some tips on how to use the feature?
I submitted this as an issue to Daniel Palme, who is responsible for ReportGenerator, and he actually agreed to add support for this capability! What's more, he has already put it into the repository (http://reportgenerator.codeplex.com/SourceControl/changeset/70732).
What a great guy!
You will need to use the -coverbytest switch; it should be detailed in the Usage.rtf guide. It uses the same sort of filters as used for coverage inclusion/exclusion.
However, ReportGenerator does not support OpenCover's coverage-by-test feature, so you will need to write your own reporting for this; the XML from OpenCover is easy to understand, though.
Choose the test method and then locate which lines of code it is recorded against.
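For reference, a hedged example of how the switch is typically passed (the runner, assembly name and filter pattern below are placeholders; check the Usage guide for the exact filter syntax):
OpenCover.Console.exe -register:user -target:nunit3-console.exe -targetargs:MyTests.dll -coverbytest:*\MyTests.dll -output:results.xml
The resulting results.xml then contains TrackedMethod and TrackedMethodRef elements that tie sequence points back to individual tests.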
I have one class which talks to a database.
I have integration tests which talk to the DB and assert the relevant changes. But I want those tests to be ignored when I commit my code, because I do not want them to be called automatically later on.
(I only use them during development for now.)
When I put the [Ignore] attribute on them they are not called, but code coverage drops dramatically.
Is there a way to keep those tests but not have them run automatically on the build machine, in a way that the fact that they are ignored does not influence the code coverage percentage?
Whatever code coverage tool you use most likely has some kind of CoverageIgnoreAttribute or something along those lines (at least the ones I've used do), so you just place that on the methods that get called from those tests and you should be fine.
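As a concrete but hedged example: in .NET there is System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverage; with OpenCover an attribute like this usually has to be named explicitly via the -excludebyattribute filter, so check your tool's documentation for the exact mechanism. The class name below is made up.
using System.Diagnostics.CodeAnalysis;

// Excludes this class from the coverage statistics entirely.
// With OpenCover, pair it with something like
//   -excludebyattribute:*.ExcludeFromCodeCoverage*
// on the command line (filter syntax per the OpenCover usage docs).
[ExcludeFromCodeCoverage]
public class DatabaseGateway
{
    // ...code exercised only by the ignored integration tests...
}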
What you request does not seem to make sense. Code coverage is measured by executing your tests and logging which statements/conditions etc. are executed. If you disable your tests, nothing gets executed and your code coverage goes down.
TestNG has groups, so you can specify that only some groups run automatically and keep the others for use outside of that. You didn't specify your unit testing framework, but it might have something similar.
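If you happen to be on NUnit, the closest equivalent is the Category attribute (a sketch under that assumption; the class and method names are made up):
using NUnit.Framework;

[TestFixture]
public class DatabaseIntegrationTests
{
    // Categorised tests can be excluded from the automatic run on the build
    // machine, e.g. nunit3-console ... --where "cat != Integration",
    // and still be run on demand during development.
    [Test]
    [Category("Integration")]
    public void InsertsRelevantChanges()
    {
        // ...talks to the real database...
    }
}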
I do not know if this is applicable to your situation, but spontaneously I am thinking of a setup where you have two solution files (.sln), one with unit/integration tests and one without. The two solutions share the same code and project files, with the exception that your development/testing solution includes your tests (which are built and run as part of that solution) and the other solution doesn't. Both solutions should be under source control, but only the one without the tests is built by the build server.
This kind of setup should not require you to change existing code (much), which I would prefer over rewriting code to fit your test setup.