Current:
We have a hybrid test framework using C# and NUnit.
At the end of all test execution, it creates TestResult.xml.
Planning to implement:
Push TestResult.xml to Jira-Xray via its API from C# code.
Challenging part:
If I wait inside the test run to check whether the TestResult.xml file has been created, the check always returns false.
TestResult.xml is only created once 100% of the code has executed, including the [TearDown] methods.
Even if I use a wait, Thread, or Task to poll for the file, NUnit considers that code part of the run, so it won't create TestResult.xml until everything has executed.
I want to send TestResult.xml to Jira-Xray, read all the test case IDs from the response, and send a mail with the list.
Note:
The test framework is configured via a testResult.runsettings file:
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <NUnit>
    <TestOutputXml>C:\ProgramData</TestOutputXml>
  </NUnit>
</RunSettings>
Could somebody please help me fix this?
Is it possible to have TestResult.xml created just after a test finishes and have it keep updating with the ongoing test results?
Is it possible to create TestResult.xml before TearDown runs in NUnit?
OR
Any other suggestions?
Note:
I am able to push TestResult.xml to the Jira-Xray API via Postman and it works fine, but I want to do the same thing from code, and that can only be achieved if TestResult.xml is created before NUnit reaches the TearDown attribute.
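For reference, this is roughly what I am aiming for from code. It is only a sketch: the URL, project key, and token are placeholders, and the real request should mirror whatever already works in Postman (Xray server/DC and cloud expose different import endpoints).

// Minimal sketch: push the NUnit XML to Xray and return the response body (C#).
// The endpoint, project key, and token below are placeholders, not real values.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class XrayPusher
{
    public static async Task<string> PushResultsAsync(string xmlPath)
    {
        using (var client = new HttpClient())
        {
            // Placeholder URL: replace with the import endpoint you already use in Postman.
            var url = "https://your-jira/rest/raven/1.0/import/execution/nunit?projectKey=ABC";
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<your-token>");

            var xml = new StringContent(System.IO.File.ReadAllText(xmlPath));
            xml.Headers.ContentType = new MediaTypeHeaderValue("application/xml");

            var response = await client.PostAsync(url, xml);
            response.EnsureSuccessStatusCode();

            // The response contains the created test execution / test keys;
            // parse it to build the list for the e-mail.
            return await response.Content.ReadAsStringAsync();
        }
    }
}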
TestResult.xml is created by the runner after all tests are run. Since you are using a .runsettings file, I assume your runner is the NUnit Visual Studio test adapter.
The file is created only at the end of the run because it needs to be a complete, valid XML file. Runners receive events as each test is run, so it is possible to send each result somewhere as it happens, but for that you would have to create your own runner.
However, I think this is something of an XY problem: you are focused on using a file, which is not yet available when you want it. If you explain more exactly what you want to accomplish using the information in that file, we can probably offer alternatives.
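To illustrate the "own runner" route: if the run is driven through the NUnit 3 engine, an engine extension can receive an XML fragment for every test as it finishes. This is only a sketch of that idea (TestResultSink is a made-up helper for the example), not something the VS adapter will pick up out of the box.

// Sketch of an NUnit 3 engine extension that streams per-test results as they happen.
// Requires the NUnit.Engine package; TestResultSink is a hypothetical helper.
using NUnit.Engine;
using NUnit.Engine.Extensibility;

[Extension(Description = "Streams each test result as it completes")]
public class LiveResultListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        // 'report' is an XML fragment such as <test-case ...> or <test-suite ...>.
        if (report.StartsWith("<test-case"))
        {
            // Push the single result wherever it is needed (queue, DB, HTTP call, ...).
            TestResultSink.Send(report);
        }
    }
}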
For those who run into the same issue I posted above, here is the solution.
If you are working with C# and NUnit and want to push the NUnit-generated XML to Jira-Xray, here is the workaround.
Problem:
NUnit does not generate the XML file until the whole test run ends, so from within the same solution you cannot push the XML to the Jira-Xray API.
Solution:
If you are capturing the number of test cases passed, failed, skipped, or in any other status, store those results in a lightweight database (SQLite); a sketch follows after the notes below.
When your NUnit test run ends, the database is expected to have all the required information and TestResult.xml will also have been generated.
Now write a separate solution that reads the database values and pushes the XML to the Jira-Xray API.
You can then send the report, including the TestPlan ID you receive when you push the XML to Jira-Xray, via SMTP or any other communication channel.
Note: Running any lengthy process in TearDown or at the end of the NUnit solution won't help, because NUnit still thinks something is in progress and will hold off creating the XML no matter how long you wait for it from within the same solution, so it is advised to use a separate solution.
Once your solution is done, you can add it to Jenkins as a post-build step.
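As a rough illustration of the first step (recording each outcome in SQLite from the test project itself), something along these lines could live in a base class for the fixtures. It assumes the Microsoft.Data.Sqlite package and a database path of your choosing.

// Sketch: record every test outcome in SQLite from [TearDown] (assumes Microsoft.Data.Sqlite).
using Microsoft.Data.Sqlite;
using NUnit.Framework;

public class ResultRecordingBase
{
    // Placeholder path; point it wherever the separate pusher solution will look.
    private const string ConnectionString = "Data Source=C:\\ProgramData\\results.db";

    [OneTimeSetUp]
    public void CreateTable()
    {
        using (var conn = new SqliteConnection(ConnectionString))
        {
            conn.Open();
            var cmd = conn.CreateCommand();
            cmd.CommandText = "CREATE TABLE IF NOT EXISTS Results (TestName TEXT, Status TEXT)";
            cmd.ExecuteNonQuery();
        }
    }

    [TearDown]
    public void RecordOutcome()
    {
        using (var conn = new SqliteConnection(ConnectionString))
        {
            conn.Open();
            var cmd = conn.CreateCommand();
            cmd.CommandText = "INSERT INTO Results (TestName, Status) VALUES ($name, $status)";
            cmd.Parameters.AddWithValue("$name", TestContext.CurrentContext.Test.FullName);
            cmd.Parameters.AddWithValue("$status", TestContext.CurrentContext.Result.Outcome.Status.ToString());
            cmd.ExecuteNonQuery();
        }
    }
}

The separate console solution can then read this table, push TestResult.xml to Jira-Xray (for example along the lines of the HttpClient sketch shown earlier), and send the mail with the returned TestPlan ID.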
Related
I would like to do some analytics on .NET unit test coverage, and I would like to get access to the raw data from a unit test run of the type [("SerializationTest.DeserializeComposedObject", ["Serializator.cs:89", "Serializator.cs:90", "Serializator.cs:91"])], i.e., I would like to see the list of lines affected by each test separately.
I noticed there are questions about how to get such data in graphical form (NCrunch), but I would like to process it further. Is such functionality available anywhere?
There is an option called coverbytest in OpenCover that does exactly what I need here. It adds nodes like <TrackedMethodRef uid="10" vc="7" /> to the tool's XML output, marking which code points were visited by which tests.
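If it helps, the per-test line lists can be pulled out of that output with plain LINQ to XML. The sketch below assumes the usual OpenCover layout (TrackedMethod elements defining the tracked tests, TrackedMethodRef elements under each SequencePoint), so double-check it against your own report.

// Sketch: group visited source lines by tracked test, from an OpenCover -coverbytest report.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public static class CoverageByTest
{
    public static void Main()
    {
        var doc = XDocument.Load("results.xml");

        // uid -> tracked test name, taken from the TrackedMethods section.
        var tests = doc.Descendants("TrackedMethod")
                       .ToDictionary(t => (string)t.Attribute("uid"),
                                     t => (string)t.Attribute("name"));

        var linesByTest = new Dictionary<string, List<string>>();

        foreach (var sp in doc.Descendants("SequencePoint"))
        {
            var line = (string)sp.Attribute("sl");   // start line of the sequence point
            foreach (var uid in sp.Descendants("TrackedMethodRef")
                                  .Select(r => (string)r.Attribute("uid")))
            {
                if (!tests.TryGetValue(uid, out var testName)) continue;
                if (!linesByTest.TryGetValue(testName, out var lines))
                    linesByTest[testName] = lines = new List<string>();
                lines.Add(line);   // resolve file names via the Files/FileRef elements if needed
            }
        }

        foreach (var pair in linesByTest)
            Console.WriteLine($"{pair.Key}: {string.Join(", ", pair.Value)}");
    }
}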
First of all, I'm new here and new to SpecFlow. I'll try to be as clear as possible because I'm still exploring ways to solve my problems, so please bear with me :)
Alright, here I go. I have a solution (let's call it DBHelper) that does a few operations on a database, and I want to provide a BDD tool (using SpecFlow) to define and set up a test suite in TestRail that will run automatically. These tests could be a set composed of a single scenario run several times with different values. I'm still very early in the development of this tool, so the version I have right now is connected to DBHelper and does a single operation when I run either SpecRun or NUnit.
Here is my scenario:
Scenario: InsertBuildCommand
Given The build name is AmazingTest
And The build type is Test
And The platform is PC
And The number of files in the build is 13
And Each file is 8 MB
And The project code name is UPS
And The studio code name is MTL
And The environment is TEST
When The command executes
Then The build should be inserted in the DB with the correct files in it
Now I am looking for a way to make the scenario dynamic. Ultimately I want the user to be able to run the scenario with his own choice of values (e.g. the name of the build would be MoreAmazingTest) without being in VS. I know you can run SpecRun from the command line, but I am clueless as to how to close the gap between the originally hardcoded scenario values and the user input. The steps contain regular expressions where useful, so it really is just about the scenario values.
Someone told me about coding a custom plugin or reverse engineering SpecRun and making a modified version of it, but I have no idea how that would help me. Pardon me if it doesn't all make sense; I'm not an expert :x
Thanks a lot!
If I understand your question properly, you can use Scenario Outline rather than Scenario. Scenario Outline help
You would then have something like this:
Scenario Outline: test using multiple examples
Given I do something
When I enter <numbers>
And I click a button
Then I will have an answer
Examples:
|numbers|
|1 |
|2 |
|3 |
It will then run the same scenario for each example given.
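For completeness, each row of the Examples table is passed into the matching step binding as an ordinary argument, so a hypothetical binding like this would be called with 1, 2 and 3 in turn:

// Sketch of the step binding behind "When I enter <numbers>" (SpecFlow).
using TechTalk.SpecFlow;

[Binding]
public class CalculatorSteps
{
    [When(@"I enter (\d+)")]
    public void WhenIEnter(int number)
    {
        // Runs once per Examples row, with 'number' bound to that row's value.
    }
}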
One way is to define some kind of configuration file which the step definitions read and perform the tests against. After you change the file you can run the tests however you want, from the command line or VS, and they will read the file and get the values from there.
I use environment variables for that.
But if you really need arguments, you could also create an .exe (console app) that uses SpecFlow/NUnit/etc. and passes the command-line arguments to your classes.
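As a sketch of the environment-variable approach against the scenario above (the variable name BUILD_NAME is just an example), a step can fall back to the value written in the feature file when nothing is set:

// Sketch: let an environment variable override the value hardcoded in the feature file.
using System;
using TechTalk.SpecFlow;

[Binding]
public class BuildSteps
{
    private string _buildName;

    [Given(@"The build name is (.*)")]
    public void GivenTheBuildNameIs(string buildNameFromFeature)
    {
        // e.g. set BUILD_NAME=MoreAmazingTest before launching SpecRun/NUnit from the command line.
        _buildName = Environment.GetEnvironmentVariable("BUILD_NAME") ?? buildNameFromFeature;
    }
}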
Is there any way to output more information from a Windows Store App unit test?
I'd like to write information that is not critical but good to know, like the execution time of a specific function call, to the console.
Unfortunately I can't find anything on this topic.
There is no Console class, so I can't use Console.WriteLine(...). Debug.WriteLine(...) doesn't seem to produce any output.
Writing the data to a file would probably work but I'd prefer not to do that.
I'm using OpenCover to generate functional test coverage for a web application. These tests are fairly long running (3+ hours), so we've chopped them up into multiple tests that run in parallel. So instead of a single coverage report, there are six.
In order to import these coverage reports into SonarQube, I need to figure out a way to combine them into one uber report. ReportGenerator supports merging multiple reports into one, but creates HTML output, which is not something SonarQube can consume.
At this point my options are:
Hand-roll an OpenCover report merger (blech!)
Run my functional tests serially, substantially increasing failure feedback times
Any other options I'm missing?
I have created the following ticket on the SonarQube .NET side to allow multiple coverage reports to be specified, and to aggregate them: http://jira.codehaus.org/browse/SONARPLUGINS-3666.
In the meantime though, I cannot think of other options besides the 2 you already had.
Newer versions of ReportGenerator have support for wildcards.
You can provide all the XML reports as "*.xml" and ReportGenerator will generate one consolidated report from them.
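For example, an invocation along these lines (paths are placeholders; check the options of the version you have installed):

ReportGenerator.exe "-reports:C:\coverage\*.xml" "-targetdir:C:\coverage\merged"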
OpenCover has a -mergeoutput argument that makes it work with the -output file in an append fashion, preserving previous measurements found there. That should allow you to call the individual test runs separately -- as long as your SUT is still the same.
My experience with running tests with different -filter arguments is that OpenCover refuses to reopen a module that was filtered out in a previous test run. Still, worth a try in my opinion.
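Roughly, each chunk would then be run against the same output file, something like this (tool and test assembly names are placeholders):

OpenCover.Console.exe -target:nunit3-console.exe -targetargs:"Chunk1.Tests.dll" -output:coverage.xml
OpenCover.Console.exe -target:nunit3-console.exe -targetargs:"Chunk2.Tests.dll" -output:coverage.xml -mergeoutput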
I'm working on a project that writes ADO.NET code for a database. Source code located here: GenTools. It reads the stored procedures and tables from a database and outputs C# code. I added unit testing to the project using NUnit, and hit a stumbling block on testing the generated code.
Right now, I'm following these steps to test the generated code:
Generate code
Compile the generated code into an assembly
Load assembly
Use reflection to test generated code
The problem with this approach is that the tests have to be run in order. The next step will never succeed if the previous one fails, and none of the steps can be left out. An example is here.
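Roughly, steps 2-4 boil down to something like this simplified sketch (here using Roslyn for illustration; the real project code differs):

// Sketch: compile the generated source in memory, load it, then inspect it via reflection.
using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

public static class GeneratedCodeLoader
{
    public static Assembly CompileAndLoad(string generatedSource)
    {
        var compilation = CSharpCompilation.Create(
            "GeneratedCode",
            new[] { CSharpSyntaxTree.ParseText(generatedSource) },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using (var ms = new MemoryStream())
        {
            var emitResult = compilation.Emit(ms);
            if (!emitResult.Success)
                throw new InvalidOperationException("Generated code did not compile.");   // step 2 failed
            return Assembly.Load(ms.ToArray());   // step 3; the reflection tests (step 4) run against this
        }
    }
}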
I don't like this setup because once step #4 is reached, a failed test on the generated code will prevent the rest from running.
Is there a way to make sure the first 3 steps run sequentially, and then have all the tests in step #4 separated out? I don't mind switching testing frameworks.
The TestNG test framework allows you to establish dependencies such that later tests depend on earlier tests, and you have good control over the details. I have more detail on ordering of tests here:
http://ancient-marinator.blogspot.ca/2013/05/on-testing-scheduling-your-tests.html
Naturally, TestNG has its own web site: http://testng.org/doc/index.html