Unit Testing a Customer's Installation (with NUnit and MSTest) - C#

I've developed a large base of unit tests for my company's application, and dev would like to hand my unit tests over to our support department to help them debug customer installation problems. I wrote my unit tests using MSTest, so support would have to install Visual Studio on a customer's machine if they wanted to use my tests out of the box, which is obviously the wrong thing to do. I have looked into running MSTest without VS at the command prompt, but hacking the registry on a customer's system to make it think VS is installed isn't workable either.
To get around this I planned on compiling my MSTest tests for NUnit using the information in this post. However, after compiling with NUnit enabled and adding my test assembly DLL to the NUnit runner, I get the error message "This assembly was not built with any known framework."
Has anyone done this and has tips/tricks to get this running? Or is this the entirely wrong way to go about solving this problem? Thanks.

I'm going to go with your second thought on this: "Or is this the entirely wrong way to go about solving this problem?"
To solve this problem easily and not confuse your support department I would recommend creating a little command-line wrapper around the test class. You can write the command-line tool yourself, or if you prefer you can do the following:
using CSharpTest.Net.Commands;

static void Main(string[] args)
{
    MyTest testClass = new MyTest();
    // optional: testClass.MySetupMethod();
    new CommandInterpreter(testClass).Run(args);
}
Just build the above code as a command-line exe in a new project referencing your test assembly and CSharpTest.Net.Library.dll. The namespace CSharpTest.Net.Commands is defined in the CSharpTest.Net.Library.dll assembly from this download.
Essentially the above code will crawl your test class (named MyTest in the example above) and expose all the public methods as commands that can be executed via the command line. By default it provides help output and sets the Environment.ExitCode on failure. If you want to get fancy you can decorate your tests with any of the following:
public class MyTest
{
    [System.ComponentModel.DisplayName("rename-this-function")]
    [System.ComponentModel.Description("Some description for tech-support")]
    [System.ComponentModel.Browsable(true | false)]
    public void TestSomeFunction()
    { ... }
}
(And yes, I acknowledge I am shamelessly plugging my own code a wee bit :)

My concern would be someone adding a unit test (or an integration test disguised as a unit test) that you then inadvertently run against your customer's database. Of course I'm making assumptions, such as you having a database, but it seems a risky approach unless you can be sure that your tests are non-destructive, now and in the future.
Do you have logging in your application? That's the traditional way to help support, and there are many tools out there to help you.
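For example, a minimal sketch of such logging using the built-in System.Diagnostics tracing (the source name and helper class here are illustrative, not from the original application):

using System.Diagnostics;

public static class SupportLog
{
    // A named trace source; support can attach listeners to it via
    // the app.config on the customer's machine without code changes.
    private static readonly TraceSource Source =
        new TraceSource("MyApp", SourceLevels.Information);

    public static void Info(string message)
    {
        Source.TraceEvent(TraceEventType.Information, 0, message);
        Source.Flush();
    }
}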


Map Testcase ID with NUnit

I'm currently building out my automation framework using NUnit. I've got everything working just fine, and the last enhancement I'd like to make is to be able to map my automated test scripts to test cases in my testing software.
I'm using TestRail for all my testcases.
My ideal situation is to be able to decorate each test with the corresponding test case ID in TestRail, and when it comes time to report the test result in TestRail, I can just use the case ID. Currently I'm doing this by matching test name/script name.
Example -
[Test]
[TestCaseId("001")]
public void NavigateToSite()
{
    LoginPage login = new LoginPage(Driver);
    login.NavigateToLogInPage();
    login.AssertLoginPageLoaded();
}
And then in my teardown method, it would be something like -
[TearDown]
public static void TestTearDown(IWebDriver Driver)
{
    var testcaseId = TestContext.CurrentContext.TestCaseid;
    var result = TestContext.CurrentContext.Result.Outcome;
    // Static method to report to TestRail using the API
    Report.ReportTestResult(testcaseId, result);
}
I've just made up the [TestCaseId] attribute, but this is what I'm looking for.
[TestCaseId("001")]
I may have missed this if it already exists, or how do I go about possibly extending NUnit to do this?
You can use PropertyAttribute supplied by NUnit.
Example:
[Property("TestCaseId", "001")]
[Test]
public void NavigateToSite()
{
...
}
[TearDown]
public void TearDown()
{
var testCaseId = TestContext.CurrentContext.Test.Properties["TestCaseId"];
}
In addition, you can create a custom property attribute - see the NUnit link.
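For instance, a minimal sketch of such an attribute (assuming NUnit 3.x, where PropertyAttribute takes the property name from the attribute class name minus the "Attribute" suffix):

using NUnit.Framework;

public class TestCaseIdAttribute : PropertyAttribute
{
    // Stored on the test as the property "TestCaseId".
    public TestCaseIdAttribute(string id) : base(id) { }
}

With that in place, [TestCaseId("001")] works like the attribute made up in the question, and the value can be read back via TestContext.CurrentContext.Test.Properties.Get("TestCaseId").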
For many years I recommended that people not do this: mix test management code into the tests themselves. It's an obvious violation of the single responsibility principle and it creates difficulties in maintaining the tests.
In addition, there's the problem that the result presented in TearDown may not be final. For example, if you have used [MaxTime] on the test and it exceeds the time specified, your successful test will change to a failure. Several other built-in attributes work this way and of course there is always the possibility of a user-created attribute. The purpose of TearDown is to clean up after your code, not as a springboard for creating a reporting or test management system.
That said, with older versions of NUnit, folks got into the habit of doing this. This was in part because NUnit addins (the approach we designed) were fairly complicated to write. There were also fewer problems because NUnit V2 was significantly less extensible on the test side of things. With 3.0, we provided a means for creating test management functions such as this as extensions to the NUnit engine, and I suggest you consider using that facility instead of mixing them in with the test code.
The general approach would be to create a property, as suggested in Sulo's answer, but to replace your TearDown code with an EventListener extension that reports the result to TestRail. The EventListener has access to all the result information - not just the limited fields available in TestContext - in XML format. You can readily extract whatever needs to go to TestRail.
Details of writing TestEngine extensions are found here: https://github.com/nunit/docs/wiki/Writing-Engine-Extensions
Note that there are some outstanding issues if you want to use extensions under the Visual Studio adapter, which we are working on. Right now, you'll get the best experience using the console runner.
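A rough sketch of such a listener (assuming the NUnit 3 engine API; the XML handling and the call to the question's Report helper are placeholders):

using System.Xml;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

[Extension(Description = "Reports test results to TestRail")]
public class TestRailEventListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        // Each engine event arrives as an XML fragment; finished tests
        // are <test-case> elements carrying the result and properties.
        var doc = new XmlDocument();
        doc.LoadXml(report);
        if (doc.DocumentElement == null || doc.DocumentElement.Name != "test-case")
            return;

        var id = doc.DocumentElement.SelectSingleNode(
            "properties/property[@name='TestCaseId']/@value");
        var result = doc.DocumentElement.GetAttribute("result");
        if (id != null)
            Report.ReportTestResult(id.Value, result); // the question's TestRail helper
    }
}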

Is it possible to write the code to be tested and the testing code in the same source file?

In the Visual Studio IDE, I can create a unit test file with a unit test class for the code in a source file, by right-clicking inside the code to be tested and selecting the option to create unit tests. The testing code and the code to be tested end up not only in different files, but also in different namespaces.
Is it possible to write the code to be tested and the testing code in the same source file?
If yes, how? Should I put them in the same or different namespaces?
Can you give some examples?
Thanks.
It is possible, but that also means that you deploy your tests with your code, as well as any mocks, dummy data, etc. All of this is unnecessary and may confuse anyone trying to use the library.
However, to answer the question, just use different namespace blocks to separate the test classes in a separate namespace.
namespace MyCompany.MyLibrary
{
// classes
}
namespace MyCompany.MyLibrary.Test
{
// tests, mocks, etc.
}
Yes, there are no restrictions on where the "code under test" comes from.
While it is somewhat unusual, you can have just a unit test project and put the code you are testing next to your tests. If you want, you can even put them in the same files, using the same or different namespaces of your choice (C# is not Java, and there is no connection between file name/location and namespace).
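For illustration, a minimal sketch with both in one file (the class names are made up):

using NUnit.Framework;

namespace MyCompany.MyLibrary
{
    // The code under test.
    public class Calculator
    {
        public int Add(int a, int b) => a + b;
    }
}

namespace MyCompany.MyLibrary.Test
{
    using MyCompany.MyLibrary;

    // The tests, in the same source file but a different namespace.
    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void Add_ReturnsSum()
        {
            Assert.AreEqual(5, new Calculator().Add(2, 3));
        }
    }
}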
Yes, but
If you put them in the same code base as the system under test, then you will be unable to deploy the system without also deploying the tests. Do you want the tests sitting on your production servers?
Also, the same app.config (or web.config, depending on your solution) will apply to both your tests and the system under test. That means you can't set up alternate configurations for things like AutoFac, which normally is handy for unit/isolation testing.

How to specify approved.txt files location for running approvals tests on TeamCity

I am trying to run my approvals tests from NUnit under TeamCity:
[assembly: FrontLoadedReporter(typeof(TeamCityReporter))]

[Test]
[UseReporter(typeof(WinMergeReporter))]
public void Test()
{
}
Unfortunately the test fails because Approvals is trying to pick up the approved file from the C drive:
Test(s) failed. ApprovalTests.Core.Exceptions.ApprovalMissingException : Failed Approval: Approval File "C:\...approved.txt" Not Found.
Is there any way I can specify the right location for my approval files?
It turned out that the TeamCityReporter was hiding the real reason for this issue.
Here is the result of a local run and the output of the approvals test, with the solutions listed:
System.Exception : Could Not Detect Test Framework
Either: 1) Optimizer Inlined Test Methods
Solutions: a) Add [MethodImpl(MethodImplOptions.NoInlining)] b) Set
Build->Opitmize Code to False & Build->Advanced->DebugInfo to Full
or 2) Approvals is not set up to use your test framework. It currently
supports [NUnit, MsTest, MbUnit, xUnit.net, xUnit.extensions,
Machine.Specifications (MSpec)]
Solution: To add one use
ApprovalTests.Namers.StackTraceParsers.StackTraceParser.AddParser()
method to add implementation of
ApprovalTests.Namers.StackTraceParsers.IStackTraceParser with support
for your testing framework. To learn how to implement one see
http://blog.approvaltests.com/2012/01/creating-namers.html
It was tricky to catch because usually a local run is done under Debug, while deployment and tests run under Release. Nevertheless, I hope the question and answer will be helpful for somebody else.
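For reference, applying solution 1a from the message above is a small change to the test (a sketch; the verified text is a placeholder):

using System.Runtime.CompilerServices;
using ApprovalTests;
using ApprovalTests.Reporters;
using NUnit.Framework;

[TestFixture]
public class MyApprovalTests
{
    [Test]
    [UseReporter(typeof(WinMergeReporter))]
    [MethodImpl(MethodImplOptions.NoInlining)] // keep the optimizer from inlining the test
    public void Test()
    {
        Approvals.Verify("some output");
    }
}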

How to organize unit tests and not make refactoring a nightmare?

My current way of organizing unit tests boils down to the following:
Each project has its own dedicated project with unit tests. For a project BusinessLayer, there is a BusinessLayer.UnitTests test project.
For each class I want to test, there is a separate test class in the test project placed within exactly the same folder structure and in exactly the same namespace as the class under test. For a class CustomerRepository from a namespace BusinessLayer.Repositories, there is a test class CustomerRepositoryTests in a namespace BusinessLayerUnitTests.Repositories.
Methods within each test class follow simple naming convention MethodName_Condition_ExpectedOutcome. So the class CustomerRepositoryTests that contains tests for a class CustomerRepository with a Get method defined looks like the following:
[TestFixture]
public class CustomerRepositoryTests
{
    [Test]
    public void Get_WhenX_ThenRecordIsReturned()
    {
        // ...
    }

    [Test]
    public void Get_WhenY_ThenExceptionIsThrown()
    {
        // ...
    }
}
This approach has served me quite well, because it makes locating tests for a piece of code really simple. On the flip side, it makes code refactoring harder than it should be:
When I decide to split one project into multiple smaller ones, I also need to split my test project.
When I want to change namespace of a class, I have to remember to change a namespace (and folder structure) of a test class as well.
When I change the name of a method, I have to go through all the tests and change the name there as well. Sure, I can use Search & Replace, but that is not very reliable. In the end, I still need to check the changes manually.
Is there some clever way of organizing unit tests that would still allow me to locate tests for a specific code quickly and at the same time lend itself more towards refactoring?
Alternatively, is there some, uh, perhaps Visual Studio extension, that would allow me to somehow say "hey, these tests are for that method, so when the name of the method changes, please be so kind and change the tests as well"? To be honest, I am seriously considering writing something like that myself :)
After working a lot with tests, I've come to realize that (at least for me) having all those restrictions brings a lot of problems in the long run, rather than benefits. So instead of using names and conventions, we've started using code. Each project and each class can have any number of test projects and test classes. All the test code is organized based on what is being tested from a functionality perspective (or which requirement it implements, or which bug it reproduces, etc...). Then, to find the tests for a piece of code, we do this:
[TestFixture]
public class MyFunctionalityTests
{
    public IEnumerable<Type> TestedClasses()
    {
        // We can find the tests for a class because each fixture lists
        // the classes it tests in this special method.
        return new[] { typeof(SomeTestedType), typeof(OtherTestedType) };
    }

    [Test]
    public void TestRequirement23423432()
    {
        // ... test code.
        // We do something similar for methods if we want to track which
        // methods are being tested (we usually don't).
        this.TestingMethod(someObject.methodBeingTested);
        // ...
    }
}
We can use tools like ReSharper's "Find Usages" to locate the test cases, etc... And when that's not enough, we do some magic with reflection and LINQ, loading all the test classes and running something like allTestClasses.Where(testClass => testClass.TestedClasses().FindSomeTestClasses()).
You can also use the TearDown to gather information about which methods are tested by each method/class and do the same.
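To make that reflection-and-LINQ lookup concrete, here is a rough sketch (names are illustrative, assuming fixtures expose the TestedClasses() convention shown above):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class TestFinder
{
    // Returns every fixture in the test assembly whose TestedClasses()
    // method mentions the class we are interested in.
    public static IEnumerable<Type> FixturesTesting(Assembly testAssembly, Type testedType)
    {
        return from fixture in testAssembly.GetTypes()
               where !fixture.IsAbstract
               let method = fixture.GetMethod("TestedClasses")
               where method != null
               let tested = (IEnumerable<Type>)method.Invoke(
                   Activator.CreateInstance(fixture), null)
               where tested.Contains(testedType)
               select fixture;
    }
}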
One way to keep class and test locations in sync when moving code:
1. Move the code to a uniquely named temporary namespace.
2. Search for references to that namespace in your tests to identify the tests that need to be moved.
3. Move the tests to their proper new location.
4. Once all references to the temporary namespace from the tests are in the right place, move the original code to its intended target.
One strength of end-to-end or behavioral tests is that they are grouped by requirement rather than by code, so you avoid the problem of keeping test locations in sync with the corresponding code.
Regarding VS extensions that associate code to tests, take a look at Visual Studio's Test Impact. It runs the tests under a profiler and creates a compact database that maps IL sequence points to unit tests. So in other words, when you change the code Visual Studio knows which tests need to be run.
One unit test project per project is the way to go. We tried a single mega unit test project, but it increased the compile time.
To help you refactor, use a product like ReSharper or CodeRush.
"Is there some clever way of organizing unit tests that would still allow me to locate tests for a specific code quickly"
ReSharper has some good shortcuts that allow you to search for a file or piece of code.
As you said, for the class CustomerRepository there is a test class CustomerRepositoryTests.
The R# shortcut shows an input box for what you want to find; in your case you can just type CRT and it will show all the files whose names contain a capital C, then R, then T (CamelCase matching).
It also allows you to search with wildcards: CR* will show the list of files CustomerRepository and CustomerRepositoryTests.

C#: Using Conditional-attribute for tests

Is there any good way to use the Conditional attribute in the context of testing?
My thoughts were that if you can do this:
[Conditional("Debug")]
public void DebugMethod()
{
//...
}
Maybe you could have some use for: (?)
[Conditional("Test")]
public void TestableMethod()
{
//...
}
I don't see a use when there's a better alternative: Test Projects.
Use NUnit or MSTest to achieve this functionality in a more graceful way.
I'd accept Mehrdad's answer - I just wanted to give more context on when you might use these attributes:
Things like [Conditional] are more generally used to control things like logging/tracing or interactions with an executing debugger, where it makes sense for the calls to sit in the middle of your regular code but you might not want them in certain builds (and #if... etc. is just so ugly and easy to forget).
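For example, a typical tracing helper (a sketch; the class name is made up) whose call sites are removed entirely from builds where the DEBUG symbol is not defined:

using System.Diagnostics;

public static class DebugLog
{
    // When DEBUG is not defined, the compiler strips every call to this
    // method, including the evaluation of its arguments.
    [Conditional("DEBUG")]
    public static void Write(string message)
    {
        Trace.WriteLine(message);
    }
}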
If the test code is not part of the product, it should not be in the product code base. I have seen projects trying to include unit tests in the same project as the tested objects, using #if statements to include them only in debug builds, only to regret it later.
One obvious problem would be that the application project gets a reference to the unit testing framework. Even though you may not need to ship that framework as part of your product (if you can guarantee that no code in the release build will reference it), it still smells a bit funny to me.
Let test code be test code and production code be production code, and let the production code have no clue about it.
The other problem with this is that you may want to run your unit tests on your release builds. (We certainly do).
