Unit test additional output - C#

Is there any way to output more information in a Windows Store App unit test?
I'd like to write some information that is not critical but good to know, like the execution time of a specific function call, to the console.
Unfortunately I can't find anything on this topic.
There is no Console class, so I can't use Console.WriteLine(...);. Debug.WriteLine(...); doesn't seem to produce output.
Writing the data to a file would probably work, but I'd prefer not to do that.
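For illustration, here is the kind of timing output in question. This is a minimal sketch assuming the MSTest framework for Windows Store apps (Microsoft.VisualStudio.TestPlatform.UnitTestFramework); in that environment, Debug output is typically only visible when the test is run under the debugger ("Debug Selected Tests"), which may be why a normal run appears to produce nothing:

```csharp
using System.Diagnostics;
using Microsoft.VisualStudio.TestPlatform.UnitTestFramework;

[TestClass]
public class TimingTests
{
    [TestMethod]
    public void MeasureFunctionCall()
    {
        var sw = Stopwatch.StartNew();
        DoWork();   // the call under measurement
        sw.Stop();

        // Appears in the Output window when the test runs under the debugger.
        Debug.WriteLine("DoWork took {0} ms", sw.ElapsedMilliseconds);
    }

    private static void DoWork() { /* work being timed */ }
}
```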

Nunit TestResult.xml test report

Current:
We have a hybrid test framework using C# and NUnit.
At the end of the test run it creates TestResult.xml.
Planning to implement:
This file is later to be pushed to Jira-Xray via its API through C# code.
Challenging part:
The challenge is that if I wait for the TestResult.xml file to be created, the check always returns false.
TestResult.xml only gets created when 100% of the code has executed, including methods marked with the [TearDown] attribute.
Even if I use waits, threads, or tasks to check whether the TestResult.xml file has been created, NUnit considers some code to still be executing, so it won't create TestResult.xml until everything has run.
I want to send TestResult.xml to Jira-Xray, get all the test case IDs from the response, and send a mail with the list.
Note:
The test framework is configured via a testResult.runsettings file:
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <NUnit>
    <TestOutputXml>C:\ProgramData</TestOutputXml>
  </NUnit>
</RunSettings>
Somebody please help me fix this.
Is it possible to have TestResult.xml created just after each test finishes and then keep it updated with the ongoing test results?
Is it possible to create TestResult.xml before [TearDown] in NUnit?
OR
Any other suggestion?
Note:
I am able to push TestResult.xml to the Jira-Xray API via Postman and it works fine, but I want to do the same thing from code, and that can only be achieved if TestResult.xml gets created before NUnit reaches the [TearDown] attribute.
TestResult.xml is created by the runner after all tests are run. Since you are using a .runsettings file, I assume your runner is the NUnit Visual Studio test adapter.
The file is created only at the end of the run because it needs to be a complete, valid XML file. Runners receive events as each test is run, so it's possible to send each result somewhere as it happens, but for that you would have to create your own runner.
However, I think this is something of an XY problem: you are focused on using a file, which is not yet available when you want it. If you explain more exactly what you want to accomplish using the information in that file, we can probably offer alternatives.
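To make the "events as each test is run" point concrete: the NUnit engine exposes an ITestEventListener extension point that hands you each result as an XML fragment the moment the test finishes, long before TestResult.xml is written. A minimal sketch, assuming the NUnit.Engine package (how you forward the fragment is up to you):

```csharp
using NUnit.Engine;   // from the NUnit.Engine NuGet package

// Receives a per-test XML fragment (e.g. <test-case ...>) as soon as
// each test completes, without waiting for TestResult.xml.
public class LiveResultListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        if (report.StartsWith("<test-case"))
        {
            // Forward the fragment wherever you need it: a DB, an API, etc.
            System.Console.WriteLine(report);
        }
    }
}
```

An instance of this listener is passed to the engine's test runner when starting the run (e.g. runner.Run(listener, TestFilter.Empty)).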
For those who run into the same issue I posted above, here is the solution.
If you are working with C# and NUnit and want to push the NUnit-generated XML to Jira-Xray, here is the workaround.
Problem:
Until the NUnit test run ends, the XML file is not generated, and from within the same solution you won't be able to push the XML to the Jira-Xray API.
Solution:
If you are capturing the number of passed, failed, skipped, or other-status test cases:
Store those results in a lightweight DB (e.g. SQLite).
When your NUnit test run ends, the DB is expected to have all the required information, and TestResult.xml will also get generated.
Now write a different solution which reads the database values and pushes the XML to the Jira API.
Then you can send the report and share the TestPlan ID (which you receive when you push the XML to Jira-Xray) via SMTP or any other communication channel.
Note: If you try any lengthy process in [TearDown] or at the end of the NUnit solution, it won't help: NUnit still thinks something is in progress, so it holds back the XML creation. You can wait an eternity in the same solution and the XML will not appear, so it is advised to do this from a separate solution.
Once you are done with that solution, you can add it in Jenkins as a post-build step.
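The "push the XML from a separate solution" step might look like the following sketch. The endpoint, project key, and token handling here are assumptions modeled on Xray's server REST import API; adjust them to your Jira instance:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class XrayUploader
{
    // Uploads TestResult.xml to the (assumed) Xray NUnit import endpoint
    // and returns the response body, which contains the created test
    // execution key.
    public static async Task<string> PushAsync(
        string xmlPath, string baseUrl, string bearerToken)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add(
                "Authorization", "Bearer " + bearerToken);

            var content = new MultipartFormDataContent();
            var fileBytes = System.IO.File.ReadAllBytes(xmlPath);
            content.Add(new ByteArrayContent(fileBytes), "file", "TestResult.xml");

            // "PROJ" is a placeholder project key.
            var response = await client.PostAsync(
                baseUrl + "/rest/raven/1.0/import/execution/nunit?projectKey=PROJ",
                content);
            response.EnsureSuccessStatusCode();

            return await response.Content.ReadAsStringAsync();
        }
    }
}
```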

Get list of covered lines per unit test to process programmatically

I would like to do some analytics on .NET unit test coverage, and I would like access to the raw data from a unit test run, of the form [("SerializationTest.DeserializeComposedObject", ["Serializator.cs:89", "Serializator.cs:90", "Serializator.cs:91"])], i.e., I would like to see the list of lines covered by each test separately.
I noticed there are questions about how to get such data in graphical form (NCrunch), but I would like to process it further. Is such functionality available anywhere?
There is an option called coverbytest in OpenCover that does exactly what I need here. It adds notes like <TrackedMethodRef uid="10" vc="7" /> to the XML output of the tool, marking which code points were visited by which tests.
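For reference, an OpenCover invocation using that option might look like the sketch below. The runner and assembly names are placeholders for your own setup:

```shell
# Sketch: OpenCover run that records which test visited which code point.
# The -coverbytest filter selects the assemblies whose tests are tracked;
# the per-test TrackedMethodRef entries then appear in coverage.xml.
OpenCover.Console.exe ^
  -target:"nunit3-console.exe" ^
  -targetargs:"MyTests.dll" ^
  -coverbytest:"*\MyTests.dll" ^
  -output:coverage.xml
```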

Check that program logic is deterministic

Don't know if this is the right title for what I need. I need to run a program with the same input data a few times and ensure that every time the program takes exactly the same path and produces exactly the same output. I even need to make sure that a certain iterator processes elements in the same order.
Maybe there are tools for that purpose? Or maybe there is some standard approach for checking this? I put C# in the tags because I need a solution specifically for that language (and I'm coding in VS2012, if that helps).
Edit:
The input of my program is a list of integers and the output is a simple boolean. Even if I write tests, the calculations could differ greatly and still produce the same result. I especially need to check that the program takes the same code path every time.
You can use a test framework and use mocks with expectations, and assert the output.
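One concrete way to check path determinism, beyond asserting the output, is to instrument the code with trace points and assert that repeated runs over the same input produce an identical trace. A sketch, where Compute is a stand-in for the real program logic:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DeterminismCheck
{
    // Stand-in for the real logic; every decision point appends to the trace,
    // so iterator order and branching are captured, not just the result.
    public static bool Compute(IList<int> input, List<string> trace)
    {
        trace.Add("start");
        foreach (var x in input)
            trace.Add("visit:" + x);
        trace.Add("end");
        return input.Sum() % 2 == 0;
    }

    public static void Main()
    {
        var input = new[] { 3, 1, 4, 1, 5 };
        var firstTrace = new List<string>();
        var firstResult = Compute(input, firstTrace);

        for (int run = 0; run < 10; run++)
        {
            var trace = new List<string>();
            var result = Compute(input, trace);
            if (result != firstResult || !trace.SequenceEqual(firstTrace))
                throw new Exception("non-deterministic behavior detected");
        }
        Console.WriteLine("all runs identical");
    }
}
```

The same trace can be persisted between process launches (e.g. hashed to a file) to catch nondeterminism that only shows up across runs, such as iteration order over a hash-based collection.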

How to test whether a given functional code unit (C#) does NOT create/write any files to disk?

Imagine there's a mission-critical process that'll be used in a business which handles sensitive information (think of Credit Card, social security, patient records...etc). I would think this unit ideally should do whatever it has to do on-the-fly, meaning it won't intentionally write files to disk containing sensitive information. The idea here is that if the computer that runs this process is compromised, no sensitive information can be leaked, at least not by means of files.
What approaches could be taken to, say, come up with a unit test that will fail if the unit under test tries to write any file to disk?
There is the FileSystemWatcher (http://www.c-sharpcorner.com/uploadfile/puranindia/filesystemwatcher-in-C-Sharp/), however this requires you to know a specific directory. In your case this probably isn't very helpful, since the program could write anything to disk anywhere, which introduces a unique problem. However, I have also found something called Detours from Microsoft. This appears to intercept all native Win32 API calls. http://research.microsoft.com/en-us/projects/detours/ The issue with this is that it's kind of hard to test, and integrating it into unit testing will be a challenge.
When you have to treat your software as "untrusted" in the sense that you need to prove it doesn't do something, testing becomes a complex task that requires you to run it on very controlled environments. When hooking into the Win32 API, you will be deluged with API calls that need to be processed quickly. This can result in unintentional side effects because the application is not running in a truly native environment.
My suggestion to you (having worked several years doing software testing for pharma automation to the exacting standards of the FDA) is to create a controlled environment, e.g. a virtual machine, that has a known starting state. This can be accomplished by never actually saving vmdk changes to disk. You have to take a snapshot of the file system. You can do this by writing a C# app to enumerate all files on the virtual drive, getting their size, some timestamps, and maybe even a hash of each file. This can be time consuming, so you may want (or be able) to skip the hashing. Create some sort of report; the easiest would be dropping the results into a CSV or XML export. You then run your software under normal circumstances for a set period of time. Once this is complete, you run the file system analysis again and compare the results. There are some good apps out there for comparing file contents (like WinMerge). When taking these snapshots, the best way to do it is to mount the vmdk as a drive in the host OS. This bypasses any file locks the guest OS might have.
This method is time intensive but quite thorough. If you don't need something of this depth, you can use something like Process Monitor, write its output to a file, and run a report against that. However, in my work I would have to prove that Process Monitor shows all IO before I could use it, which can be just as hard as the method I described above.
Just my 2 cents.
UPDATE:
I've been thinking about it, and you might be able to achieve fairly reliable results if you remove all references to System.IO from your code. Write a library to wrap around System.IO that either does not implement a write method, or only implements one that also writes to a log file. In this case, you simply have to validate that every time a write occurs using your library, it gets logged. Then validate using reflection that you don't reference System.IO outside of this new wrapper library. Your tests can then simply look at this log file to make sure only approved writes are occurring. You could make use of a SQL Database instead of a flat log file to help avoid cases of tampering or contaminated results. This should be much easier to validate than trying to script a virtual machine setup like I described above. This, of course, all requires you to access to the source code of the "untrusted" application, although since you are unit testing it, I assume you do.
1st option:
Maybe you could use Code Access Security, but the "Deny" action is obsolete in .NET 4 (it should still work in previous versions):
[FileIOPermission(SecurityAction.Deny)]
public class MyClass
{
...
}
You may reactivate this behavior in .NET 4 using NetFx40_LegacySecurityPolicy.
2nd option:
Reducing the privilege level may also work; as far as I know, downloaded apps can't write to the disk and must use a special storage area.
3rd option:
Remove any reference to System.IO and replace it with an interface that your code must use to write data to disk.
Then write an implementation that uses System.IO (in a separate project).
In the NUnit test, mock this interface and throw an exception when a method is called.
The problem is ensuring that developers no longer call System.IO directly. You can try to enforce this with coding rules using FxCop (or other similar tools).
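A minimal sketch of this third option, with hypothetical names (the real interface would mirror whichever System.IO operations your code actually needs):

```csharp
using System.Collections.Generic;

// All production code depends on this interface instead of System.IO.
public interface IFileStore
{
    void Write(string path, byte[] data);
}

// Test double: any attempted write is recorded and fails the test.
public class ForbiddenFileStore : IFileStore
{
    public List<string> AttemptedWrites { get; } = new List<string>();

    public void Write(string path, byte[] data)
    {
        AttemptedWrites.Add(path);
        throw new System.InvalidOperationException(
            "Unit under test attempted to write to disk: " + path);
    }
}
```

A unit test then wires ForbiddenFileStore into the unit under test and asserts that AttemptedWrites stays empty (or simply lets the exception fail the test).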

Reading quickfix log file

I want to test my trading system by playing execution reports back into my application. Then I could verify that my order/position state is correct.
I found this somewhat related question: how to replay a quickfix log
The difference is that in that question the person was looking for a whole testing tool that would play back a log file. What I was wondering is whether there exists a utility that will take a string representing a FIX message and just generate a FIX object (e.g. an ExecutionReport) from it.
Does anything like this exist out there? Has everyone just been writing their own?
It sounds like you simply want a different kind of test tool.
If you've written your app in unit-test-friendly fashion, then you could simply write unit tests to create ExecReport objects and pass them as parameters into some ExecReport-processor component. (I'm guessing you're not designing for UTs, else you probably wouldn't need this suggestion.)
If not, then I think the best thing to do is write another app that your first app can connect to. You could create a simple Acceptor app that can use command-line commands to trigger ExecReports to be sent. If you're using QuickFIX/n (the C# port), you could steal code from QuickFIX/n's example apps "TradeClient" and "Executor".
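For the narrower "string to FIX object" part of the question, QuickFIX/n can populate a typed message from a raw FIX string. A sketch, assuming the QuickFIX/n package and a FIX 4.4 dictionary file (the dictionary path is a placeholder for the spec file that ships with QuickFIX/n):

```csharp
using QuickFix;                  // QuickFIX/n NuGet package
using QuickFix.DataDictionary;

public static class FixReplay
{
    // Parses one raw FIX message (e.g. a line pulled from a message log)
    // into a typed ExecutionReport.
    public static QuickFix.FIX44.ExecutionReport Parse(string rawFix)
    {
        var dd = new DataDictionary("FIX44.xml");
        var report = new QuickFix.FIX44.ExecutionReport();

        // validate:false, since replayed log lines may have stale checksums.
        report.FromString(rawFix, false, dd, dd);
        return report;
    }
}
```

The resulting object can then be fed straight into the order/position-state component under test, without any network session at all.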
