Testing dynamically generated code - C#

I'm working on a project that writes ADO.NET code for a database; the source code is located here: GenTools. It reads the stored procedures and tables from a database and outputs C# code. I added unit testing to the project using NUnit and hit a stumbling block on testing the generated code.
Right now, I'm following these steps to test the generated code:
1. Generate code
2. Compile the generated code into an assembly
3. Load assembly
4. Use reflection to test generated code
The problem with this approach is that the tests have to be run in order. The next step will never succeed if the previous one fails, and none of the steps can be left out. An example is here.
I don't like this setup because once step #4 is reached, a failed test on the generated code will prevent the rest from running.
Is there a way to make sure the first 3 steps run sequentially, and then have all the tests in step #4 separated out? I don't mind switching testing frameworks.

The TestNG test framework allows you to establish dependencies such that later tests depend on earlier tests, and you have good control over the details. I have more detail on ordering of tests here:
http://ancient-marinator.blogspot.ca/2013/05/on-testing-scheduling-your-tests.html
Naturally, TestNG has its own web site: http://testng.org/doc/index.html
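Alternatively, if you stay with NUnit, one way to keep the first three steps sequential while keeping every step #4 check independent is to do the generate/compile/load work once in a [OneTimeSetUp]; if any of those steps fails, NUnit marks every test in the fixture as failed rather than silently skipping part of the run. Here is a minimal sketch, assuming a hypothetical GenTools entry point (CodeGenerator.Generate) and Roslyn for the in-memory compile; a real run would also need references to the ADO.NET assemblies.

using System;
using System.IO;
using System.Linq;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using NUnit.Framework;

// Sketch: steps 1-3 (generate, compile, load) happen once in OneTimeSetUp,
// so every reflection check in step #4 is its own independent [Test].
// CodeGenerator.Generate is a hypothetical stand-in for the GenTools entry point.
[TestFixture]
public class GeneratedCodeTests
{
    private Assembly _generated;

    [OneTimeSetUp]
    public void GenerateCompileAndLoad()
    {
        string source = CodeGenerator.Generate("Server=...;Database=...");   // hypothetical generator call

        var compilation = CSharpCompilation.Create(
            "Generated",
            new[] { CSharpSyntaxTree.ParseText(source) },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using (var ms = new MemoryStream())
        {
            var emitResult = compilation.Emit(ms);
            Assert.That(emitResult.Success, Is.True, string.Join(Environment.NewLine,
                emitResult.Diagnostics.Where(d => d.Severity == DiagnosticSeverity.Error)));
            _generated = Assembly.Load(ms.ToArray());
        }
    }

    // Example step #4 checks; the type and method names are illustrative.
    [Test]
    public void CustomersTableClassIsGenerated()
    {
        Assert.That(_generated.GetType("Generated.Customers"), Is.Not.Null);
    }

    [Test]
    public void GetCustomerProcedureIsGenerated()
    {
        var type = _generated.GetType("Generated.StoredProcedures");
        Assert.That(type?.GetMethod("GetCustomer"), Is.Not.Null);
    }
}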

NUnit TestResult.xml test report

Current:
We have a hybrid test framework using C# and NUnit.
At the end of the full test execution, it creates TestResult.xml.
Planning to implement:
The TestResult.xml is later to be pushed to Jira-Xray via its API through C# code.
Challenging part:
The challenge is that when I wait and check whether the TestResult.xml file has been created, the check always returns false.
TestResult.xml only gets created when 100% of the code has executed, including the [TearDown] methods.
Even if I use a wait, Thread, or Task to check whether the TestResult.xml file has been created, NUnit considers that some code is still executing, so it won't create TestResult.xml until everything has run.
I want to send TestResult.xml to Jira-Xray, get all the test case IDs from the response, and send a mail with the list.
Note:
The test framework is configured with a testResult.runsettings file:
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
<NUnit>
<TestOutputXml>C:\ProgramData</TestOutputXml>
</NUnit>
</RunSettings>
Can somebody please help me fix this?
Is it possible to have TestResult.xml created just after each test is executed and keep it updated with the ongoing test results?
Is it possible to create TestResult.xml before TearDown in NUnit?
Or
Is there any other suggestion?
Note:
I am able to push TestResult.xml to the Jira-Xray API via Postman and it works fine, but I want to do the same thing via code, and that can only be achieved if TestResult.xml is created before NUnit reaches the [TearDown] attribute.
TestResult.xml is created by the runner after all tests are run. Since you are using a .runsettings file, I assume your runner is the NUnit Visual Studio test adapter.
The file is created only at the end of the run because it needs to be a complete, valid XML file. Runners receive events as each test is run, so it is possible to send each result somewhere, but to do that you would have to create your own runner.
However, I think this is something of an XY problem: you are focused on using a file which is not yet available when you want it. If you explain more exactly what you want to accomplish using the information in that file, we can probably offer alternatives.
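If what you ultimately need is the per-test status rather than the file itself, a hedged alternative is to record each result from inside NUnit as the tests finish, using TestContext in a [TearDown]; this is a minimal sketch, and the class name and storage target are placeholders.

using System.Collections.Concurrent;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

// Sketch: collect each test's outcome as it finishes, without waiting for TestResult.xml.
// Where the results go afterwards (SQLite, an HTTP call, a flat file) is up to you.
public abstract class ResultCapturingTestBase
{
    protected static readonly ConcurrentBag<(string TestName, TestStatus Status)> Results =
        new ConcurrentBag<(string TestName, TestStatus Status)>();

    [TearDown]
    public void CaptureResult()
    {
        var ctx = TestContext.CurrentContext;
        Results.Add((ctx.Test.FullName, ctx.Result.Outcome.Status));
        // e.g., insert a row into SQLite here, so a separate process can read it later.
    }
}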
For those who run into the same issue I posted above, here is the solution.
If you are working with C# and NUnit and want to push the NUnit-generated XML to Jira-Xray, here is the workaround.
Problem:
NUnit will not generate the XML file until the test run has ended, so from within the same solution you cannot push the XML to the Jira-Xray API.
Solution:
If you are capturing the number of test cases passed, failed, skipped, or in any other status:
Store those results in a lightweight database (a SQLite db).
When your NUnit test run ends, the database is expected to have all the required information, and TestResult.xml will also have been generated.
Now write a separate solution that reads the database values and pushes the XML to the Jira API.
Then you can send the report and share the TestPlan ID, which you receive when you push the XML to Jira-Xray, via SMTP or any other communication channel.
Note: Running any lengthy process in TearDown or at the end of the NUnit solution won't help, because NUnit still thinks something is in progress and will hold back the XML creation, no matter how long you wait for the XML within the same solution. So it is advisable to use a separate solution.
Once that separate solution is done, you can add it in Jenkins as a post-build step.
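As a rough illustration of that separate solution, here is a minimal sketch that posts TestResult.xml with HttpClient; the Xray import URL, the bearer-token authentication, and the file paths are assumptions to replace with the values from your own Xray setup and documentation.

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Sketch of the separate console app that runs after the NUnit process has exited
// (for example, from a Jenkins post-build step). The Xray endpoint, the auth scheme,
// and the file path are placeholders; check your own Xray documentation.
class PushResultsToXray
{
    static async Task Main()
    {
        var resultPath = @"C:\ProgramData\TestResult.xml";   // matches the TestOutputXml setting above
        var importUrl = "https://your-jira/rest/raven/1.0/import/execution/nunit";  // assumed Xray server endpoint
        var token = Environment.GetEnvironmentVariable("XRAY_TOKEN");   // hypothetical credential

        if (!File.Exists(resultPath))
            throw new FileNotFoundException("TestResult.xml not found", resultPath);

        using (var client = new HttpClient())
        using (var content = new StreamContent(File.OpenRead(resultPath)))
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
            content.Headers.ContentType = new MediaTypeHeaderValue("application/xml");

            var response = await client.PostAsync(importUrl, content);
            response.EnsureSuccessStatusCode();

            // The response contains the created test execution / test plan details,
            // which you can parse and mail out.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}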

Visual Studio unit test fails calling function in NuGet package even though same call from application works C#

Let me start with what I don't believe the problems are. The problem is not with the NuGet package (directly, at least), it isn't with the SignNow API (that's where I'm failing), and it probably isn't with any of the connection settings.
What's happening is I have a Unit Test that now exactly mirrors my application code in connecting to SignNow to pull my token. When I make the call:
JObject OAuthRes = SignNow.OAuth2.RequestToken( UserId, Password, "*" );
from my application, it's great. When I make the same call from my unit test, it returns null. The funny thing is, SignNow tells me their function never returns null. If I attempt to step into the RequestToken function from the unit test, it just steps over it. I'm using the same NuGet package for SignNow in both projects (these are projects within the same solution, using shared code), and the same user ID/password is passed in. My best guess is that the build of the unit test is somehow failing to link to the SignNow library correctly. I've seen this sort of thing on older C++ compilers before. To fix it, I attempted to roll back my Git repo to code that used to work, and it still failed.
I'm looking for some clues in how a function that works correctly in one build could fail in the unit test when all the code run is the same.

Get list of covered lines per unit test to process programmatically

I would like to do some analytics on .NET unit test coverage, and I would like to get access to the raw data from a unit test run, of the form [("SerializationTest.DeserializeComposedObject", ["Serializator.cs:89", "Serializator.cs:90", "Serializator.cs:91"])], i.e., I would like to see the list of lines affected by each test separately.
I noticed there are questions about how to get such data in graphical form (NCrunch), but I would like to process it further. Is such functionality available anywhere?
There is an option called coverbytest in OpenCover that does exactly what I need here. It adds notes like <TrackedMethodRef uid="10" vc="7" /> to the XML output of the tool, marking which code points were visited by which tests.
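As a sketch of processing that output programmatically, the following reads an OpenCover report produced with -coverbytest and prints, for each test, the list of file:line pairs it covered. The element and attribute names (File, TrackedMethod, SequencePoint, TrackedMethodRef, fileid, sl, uid) reflect my reading of OpenCover's XML format, so verify them against your own results.xml.

using System;
using System.Linq;
using System.Xml.Linq;

// Sketch: build a per-test list of covered lines from OpenCover's -coverbytest output.
// Verify the element/attribute names against your own report before relying on this.
class CoverageByTest
{
    static void Main()
    {
        var doc = XDocument.Load("results.xml");   // OpenCover output file (assumed name)

        // file uid -> source path, tracked-method uid -> test name
        var files = doc.Descendants("File")
            .ToDictionary(f => (string)f.Attribute("uid"), f => (string)f.Attribute("fullPath"));
        var tests = doc.Descendants("TrackedMethod")
            .ToDictionary(t => (string)t.Attribute("uid"), t => (string)t.Attribute("name"));

        // Each sequence point lists the tracked methods (tests) that visited it.
        var pairs =
            from sp in doc.Descendants("SequencePoint")
            from tmr in sp.Descendants("TrackedMethodRef")
            select new
            {
                Test = tests[(string)tmr.Attribute("uid")],
                Line = files[(string)sp.Attribute("fileid")] + ":" + (string)sp.Attribute("sl")
            };

        foreach (var g in pairs.GroupBy(p => p.Test))
            Console.WriteLine(g.Key + ": [" + string.Join(", ", g.Select(p => p.Line).Distinct()) + "]");
    }
}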

Writing unit tests in my compiler (which generates IL)

I'm writing a Tiger compiler in C# and I'm going to translate the Tiger code into IL.
While implementing the semantic check of every node in my AST, I created lots of unit tests for it. That is pretty simple, because my CheckSemantics method looks like this:
public override void CheckSemantics(Scope scope, IList<Error> errors) {
...
}
so, if I want to write some unit test for the semantic check of some node, all I have to do is build an AST, and call that method. Then I can do something like:
Assert.That(errors.Count == 0);
or
Assert.That(errors.Count == 1);
Assert.That(errors[0] is UnexpectedTypeError);
Assert.That(scope.ExistsType("some_declared_type"));
but I'm starting on the code generation at this point, and I don't know what a good practice for writing unit tests for that phase would be.
I'm using the ILGenerator class. I've thought about the following:
Generate the code of the sample program I want to test
Save generated code as test.exe
Execute test.exe and store the output in results
Assert against results
but I'm wondering if there is a better way of doing it?
That's exactly what we do on the C# compiler team to test our IL generator.
We also run the generated executable through ILDASM and verify that the IL is produced as expected, and run it through PEVERIFY to ensure that we're generating verifiable code. (Except of course in those cases where we are deliberately generating unverifiable code.)
I've created a post-compiler in C# and I used this approach to test the mutated CIL:
Save the assembly in a temp file, that will be deleted after I'm done with it.
Use PEVerify to check the assembly; if there's a problem I copy it to a known place for further error analysis.
Test the assembly contents. In my case I'm mostly loading the assembly dynamically in a separate AppDomain (so I can tear it down later) and exercising a class in there (so it's like a self-checking assembly: here's a sample implementation).
I've also given some ideas on how to scale integration tests in this answer.
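To make the "generate, verify, execute, assert" loop from the question concrete, here is a minimal NUnit-style sketch; TigerCompiler.CompileToExe is a hypothetical stand-in for your own compiler entry point, and the PEVerify path depends on the SDK installed on the machine.

using System.Diagnostics;
using System.IO;
using NUnit.Framework;

// Sketch: compile a sample program, verify the emitted IL with PEVerify, run the exe,
// and assert on its console output. TigerCompiler.CompileToExe is a hypothetical API.
[TestFixture]
public class CodeGenTests
{
    private static string RunProcess(string fileName, string arguments)
    {
        var psi = new ProcessStartInfo(fileName, arguments)
        {
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var process = Process.Start(psi))
        {
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            Assert.That(process.ExitCode, Is.EqualTo(0), output);
            return output;
        }
    }

    [Test]
    public void PrintHello_EmitsVerifiableIlThatPrintsHello()
    {
        string exePath = Path.Combine(Path.GetTempPath(), "test.exe");
        TigerCompiler.CompileToExe("print(\"hello\")", exePath);   // hypothetical compiler call

        // PEVerify location varies with the installed SDK; adjust the path for your machine.
        RunProcess(@"C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools\PEVerify.exe",
                   "\"" + exePath + "\"");

        string output = RunProcess(exePath, string.Empty);
        Assert.That(output.Trim(), Is.EqualTo("hello"));
    }
}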
You can think of testing as doing two things:
letting you know if the output has changed
letting you know if the output is incorrect
Determining if something has changed is often considerably faster than determining if something is incorrect, so it can be a good strategy to run change-detecting tests more frequently than incorrectness-detecting tests.
In your case you don't need to run the executables produced by your compiler every time if you can quickly determine that the executable has not changed since a known good (or assumed good) copy of the same executable was produced.
You typically need to do a small amount of manipulation on the output that you're testing to eliminate differences that are expected (for example setting embedded dates to a fixed value), but once you have that done, change-detecting tests are easy to write because the validation is basically a file comparison: Is the output the same as the last known good output? Yes: Pass, No: Fail.
So the point is that if you see performance problems with running the executables produced by your compiler and detecting changes in the output of those programs, you can choose to run tests that detect changes a stage earlier by comparing the executables themselves.
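A sketch of such a change-detecting test follows, under the assumption that disassembling with ildasm and dropping comment lines (which carry run-specific values such as the MVID) is enough normalization; the ildasm path and the baseline location are placeholders.

using System.Diagnostics;
using System.IO;
using System.Linq;
using NUnit.Framework;

// Sketch: detect changes in generated IL by comparing normalized ildasm text
// against a known-good baseline. Paths are placeholders.
[TestFixture]
public class ChangeDetectionTests
{
    private const string IldasmPath =
        @"C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools\ildasm.exe";

    private static string[] NormalizedIl(string exePath)
    {
        var psi = new ProcessStartInfo(IldasmPath, "/text \"" + exePath + "\"")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var ildasm = Process.Start(psi))
        {
            string il = ildasm.StandardOutput.ReadToEnd();
            ildasm.WaitForExit();
            return il.Split('\n')
                     .Select(line => line.TrimEnd())
                     .Where(line => !line.TrimStart().StartsWith("//"))   // drop MVID, image base, etc.
                     .ToArray();
        }
    }

    [Test]
    public void GeneratedIl_MatchesKnownGoodBaseline()
    {
        string[] actual = NormalizedIl(@"output\test.exe");            // freshly generated exe (placeholder)
        string[] expected = File.ReadAllLines(@"baselines\test.il");   // known-good normalized IL (placeholder)
        Assert.That(actual, Is.EqualTo(expected));
    }
}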

How can I ignore some unit-tests without lowering the code-coverage percentage?

I have one class which talks to the database.
I have integration tests which talk to the DB and assert the relevant changes. But I want those tests to be ignored when I commit my code, because I do not want them to be called automatically later on.
(For now, I only use them during development.)
When I put the [Ignore] attribute on them, they are not called, but the code coverage drops dramatically.
Is there a way to keep those tests but not have them run automatically on the build machine, in a way that the fact that they are ignored does not influence the code-coverage percentage?
Whatever code coverage tool you use most likely has some kind of CoverageIgnoreAttribute or something along those lines (at least the ones I've used do), so you just place it on the methods that get called from those unit tests and you should be fine.
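For example, many .NET coverage tools honor the built-in [ExcludeFromCodeCoverage] attribute, which you could place on the database-facing class so its lines don't count toward the percentage at all; the class below is only an illustration, and whether your particular tool respects the attribute is worth verifying.

using System.Diagnostics.CodeAnalysis;

// Excluded from the coverage statistics by tools that honor this attribute;
// verify support in the coverage tool you actually use.
[ExcludeFromCodeCoverage]
public class CustomerRepository   // illustrative name for the class that talks to the database
{
    public void Save(string customerName)
    {
        // direct database access, exercised only by the (ignored) integration tests
    }
}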
What you request seems not to make sense. Code coverage is measured by executing your tests and logging which statements/conditions etc. are executed. If you disable your tests, nothing gets executed and your code coverage goes down.
TestNG has groups, so you can specify that only some groups run automatically and keep the others for use outside of that. You didn't specify your unit testing framework, but it might have something similar.
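In NUnit the rough equivalent is the [Category] attribute, which the build machine can filter out at run time; the category name and the exact filter syntax depend on your runner, so treat the commands in the comments below as examples to adapt.

using NUnit.Framework;

public class CustomerDatabaseTests
{
    // Runs locally on demand, but the CI run can exclude the category, e.g.:
    //   dotnet test --filter TestCategory!=Integration
    // or with the NUnit console runner:
    //   nunit3-console MyTests.dll --where "cat != Integration"
    [Test, Category("Integration")]
    public void SavesCustomerToRealDatabase()
    {
        // talks to the real database; only run when explicitly requested
        Assert.Pass();
    }
}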
I do not know if this is applicable to your situation. But spontaneously I am thinking of a setup where you have two solution files (.sln), one with unit/integration tests and one without. The two solutions share the same code and project files with the exception that your development/testing solution includes your unit tests (which are built and run at compile time), and the other solution doesn't. Both solutions should be under source control but only the one without unit tests are built by the build server.
This kind of setup should not require you to change existing code (too much), which I would prefer over rewriting code to fit your test setup.
