Map Testcase ID with NUnit - c#

I'm currently building out my automation framework using NUnit. I've got everything working just fine, and the last enhancement I'd like to make is to be able to map my automated test scripts to test cases in my testing software.
I'm using TestRail for all my test cases.
My ideal situation is to be able to decorate each test with the corresponding test case ID in TestRail, so that when it comes time to report the test result to TestRail, I can just use the case ID. Currently I'm doing this by matching test name/script name.
Example -
[Test]
[TestCaseId("001")]
public void NavigateToSite()
{
    LoginPage login = new LoginPage(Driver);
    login.NavigateToLogInPage();
    login.AssertLoginPageLoaded();
}
And then in my teardown method, it would be something like -
[TearDown]
public static void TestTearDown(IWebDriver Driver)
{
    var testcaseId = TestContext.CurrentContext.TestCaseid;
    var result = TestContext.CurrentContext.Result.Outcome;
    // Static method to report to TestRail using the API
    Report.ReportTestResult(testcaseId, result);
}
I've just made up the TestCaseId attribute, but this is what I'm looking for:
[TestCaseId("001")]
I may have missed this if it already exists - otherwise, how do I go about extending NUnit to do this?

You can use the PropertyAttribute supplied by NUnit.
Example:
[Property("TestCaseId", "001")]
[Test]
public void NavigateToSite()
{
    ...
}
[TearDown]
public void TearDown()
{
    // Properties.Get returns the property's value, or null if it was never set
    var testCaseId = TestContext.CurrentContext.Test.Properties.Get("TestCaseId");
}
In addition, you can create a custom property attribute - see the NUnit docs.
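For reference, here's a minimal sketch of what that custom attribute could look like (assuming NUnit 3; deriving from PropertyAttribute means [TestCaseId("001")] simply records a "TestCaseId" property on the test):
using System;
using NUnit.Framework;

// Sugar for [Property("TestCaseId", "...")]
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = false)]
public class TestCaseIdAttribute : PropertyAttribute
{
    public TestCaseIdAttribute(string id) : base("TestCaseId", id) { }
}
The value is then read back in TearDown exactly as above, via TestContext.CurrentContext.Test.Properties.Get("TestCaseId").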

For many years I recommended that people not do this: mix test management code into the tests themselves. It's an obvious violation of the single responsibility principle and it creates difficulties in maintaining the tests.
In addition, there's the problem that the result presented in TearDown may not be final. For example, if you have used [MaxTime] on the test and it exceeds the time specified, your successful test will change to a failure. Several other built-in attributes work this way, and of course there is always the possibility of a user-created attribute. The purpose of TearDown is to clean up after your code, not to serve as a springboard for creating a reporting or test management system.
That said, with older versions of NUnit, folks got into the habit of doing this. This was partly because NUnit addins (the approach we designed) were fairly complicated to write. There were also fewer problems because NUnit V2 was significantly less extensible on the test side of things. With 3.0, we provided a means for creating test management functions such as this as extensions to the NUnit engine, and I suggest you consider using that facility instead of mixing them in with the test code.
The general approach would be to create a property, as suggested in Sulo's answer, but to replace your TearDown code with an EventListener extension that reports the result to TestRail. The EventListener has access to all the result information - not just the limited fields available in TestContext - in XML format. You can readily extract whatever needs to go to TestRail.
Details of writing TestEngine extensions are found here: https://github.com/nunit/docs/wiki/Writing-Engine-Extensions
Note that there are some outstanding issues if you want to use extensions under the Visual Studio adapter, which we are working on. Right now, you'll get the best experience using the console runner.
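To make that concrete, here is a hedged sketch of such an engine extension; ITestEventListener and the Extension attribute are the engine's extension API, but the Report class is the hypothetical one from the question, and the docs linked above cover the packaging details:
using System.Xml;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

[Extension(Description = "Reports test results to TestRail")]
public class TestRailEventListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        // Each event arrives as an XML fragment; we only care about finished test cases.
        var doc = new XmlDocument();
        doc.LoadXml(report);
        if (doc.DocumentElement?.Name != "test-case") return;

        // Properties attached via [Property("TestCaseId", ...)] appear in the result XML.
        var id = doc.SelectSingleNode("//properties/property[@name='TestCaseId']/@value")?.Value;
        var result = doc.DocumentElement.GetAttribute("result"); // e.g. "Passed", "Failed"
        if (id != null)
            Report.ReportTestResult(id, result); // hypothetical API from the question
    }
}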

Related

How to pass Test data to NUnit hooks like [TearDown]

I have a testing framework that has been converted to make heavy use of NUnit's [Parallelizable]. I used to store contextual test data in the base class of the [TestFixture], from which NUnit orchestrates hooks like [OneTimeSetUp], [TearDown], etc.
For example:
[Test]
public void GoToGoogle()
{
    var driver = new ChromeDriver();
    // do some stuff
    // Would like to pass data outside of test scope
    TestContext.CurrentContext.Test.Properties.Set("DriverUrl", driver.Url); // Obviously does not work
    Assert.Fail("This test should fail");
}
In the [TearDown] hook, I would like to get certain contextual information about the test, because not everything can be handled nicely in asserts.
[TearDown]
public void TearDown()
{
    var url = TestContext.CurrentContext.Test.Properties["DriverUrl"].ToString();
    var msg = $"Test encountered an error at URL: {url}";
    TestAPI.PushResult(Result.Fail, msg);
}
The code above involving the TestContext does not work for obvious reasons, but I am wondering if there is a best practice that allows me to pass data in this manner, bearing in mind [Parallelizable] and that I cannot scope test data or dependencies to the [TestFixture].
You say "for obvious reasons" but I'll first spell out the reasons why you cannot effectively set a property on the current test through TestContext. After all, other people just might be reading this. :-)
The Obvious Part
TestContext.CurrentContext.Test does not return the internal representation of a test from inside NUnit. Doing so would allow users to break NUnit in a variety of ways. In particular, TestContext.CurrentContext.Test.Properties returns a copy of the properties used within NUnit.
That copy of the properties is not readonly, so you are able to set properties on it. For that reason, one might expect to be able to set it in the [Test] method and access the value in the [Teardown].
Unfortunately, because of a minor implementation detail, that's not the case. In fact, each time you use TestContext.CurrentContext, an entirely new copy of the context is created. The only reason for this, I'm afraid, is that it was originally implemented that way and is a bit difficult to change in a non-breaking way.
As a result of this implementation detail, we lost an easy way for the three parts (SetUp, Test method, TearDown) of a test to communicate. Prior to the availability of parallel execution, it was possible to pass such information using members of the fixture class. That no longer works once tests are run in parallel.
Workarounds
Use thread-local storage to hold the retained information: SetUp, Test and TearDown all run on the same thread (see the sketch after this list). Note that OneTimeSetUp and OneTimeTearDown will not generally use the same thread in a parallel execution environment.
If you are willing to run fixtures in parallel but not individual test cases, then you can still use class members to retain information. As a further step, apply the SingleThreadedAttribute to your fixture, forcing all the code associated with it (including one-time setup and teardown) to run on the same thread.
If you have many fixtures that can run in parallel, the second approach may actually give you a better performance trade-off than other approaches. Unfortunately, not everyone can use it - at least not without a major reorganization of their tests. You have to look at what your own tests are doing.
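A minimal sketch of the first workaround, using [ThreadStatic] (ThreadLocal<T> works just as well); ChromeDriver and TestAPI.PushResult are carried over from the question:
public class DriverUrlTests
{
    // One slot per thread; safe because SetUp, Test and TearDown share a thread.
    [ThreadStatic]
    private static string _driverUrl;

    [Test]
    public void GoToGoogle()
    {
        var driver = new ChromeDriver();
        // do some stuff
        _driverUrl = driver.Url; // retained for TearDown, which runs on this same thread
        Assert.Fail("This test should fail");
    }

    [TearDown]
    public void TearDown()
    {
        var msg = $"Test encountered an error at URL: {_driverUrl}";
        TestAPI.PushResult(Result.Fail, msg);
    }
}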
Permanent Solution
That would be to modify NUnit so that properties are both writable and shareable, at least within a single fixture instance. There have already been a few feature requests out there to do that on the NUnit GitHub project. I'm no longer active on the framework project, so I don't know what the plans are. However, I think I can say that it's not likely to happen before a major version change, i.e. NUnit 4.0.

How do I distinguish between Unit Tests and Integration Tests inside a test class?

My question is similar to this one: Junit: splitting integration test and Unit tests. However, my question regards NUnit instead of JUnit. What is the best way to distinguish between unit tests and integration tests inside a test class? I was hoping to be able to do something like this:
[TestFixture]
public class MyFixture
{
    [IntegrationTest]
    [Test]
    public void MyTest1()
    {
    }

    [UnitTest]
    [Test]
    public void MyTest2()
    {
    }
}
Is there a way to do this with NUnit? Is there a better way to do this?
Personally I've found it better to keep them in separate assemblies. You can use a convention, such as name.Integration.Tests and name.Tests (or whatever your team prefers).
Either assemblies or attributes work fine for CI servers like TeamCity. The pain with the attribute approach tends to show up in IDE test runners. I want to be able to quickly run only my unit tests. With separate assemblies, it's easy - select the appropriate test project and run tests.
The Category Attribute might help you do this.
https://github.com/nunit/docs/wiki/Category-Attribute
namespace NUnit.Tests
{
    using System;
    using NUnit.Framework;

    [TestFixture]
    public class SuccessTests
    {
        [Test]
        [Category("Unit")]
        public void VeryLongTest()
        { /* ... */ }
    }
}
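If you go the category route, the NUnit 3 console runner can then select tests by category, e.g. nunit3-console MyTests.dll --where "cat == Unit" to run only unit tests, or --where "cat != Integration" to exclude integration tests; most CI runners expose a similar filter.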
This answer shares some details with a few other answers, but I'd like to put the question in a slightly different perspective.
The design of TestFixtures is such that every test gets the same setup. To use TestFixtures correctly, you should divide your tests in such a way that all the tests with the same setup end up in the same test class. This is how almost every xunit framework is designed to be used and you always get better results when you use software as it is designed to be used.
Since Integration and Unit tests are not likely to share the same setup, this would naturally lead to putting them in a separate class. By doing that, you can group all integration tests under a namespace that makes them easy to run independently.
Even better, as another answer suggests, put them in a separate assembly. This works much better with most CI builds, since failure of an integration test can then be more easily distinguished from failure of a unit test. Also, use of a separate assembly eliminates all the complication of using categories or special attributes.
Do not have them in the same class; either split them into folders within your test assembly or split them into two separate test assemblies.
In the long run this will be far easier to manage especially if you use tools like NCrunch.

How to organize unit tests and do not make refactoring a nightmare?

My current way of organizing unit tests boils down to the following:
Each project has its own dedicated project with unit tests. For a project BusinessLayer, there is a BusinessLayer.UnitTests test project.
For each class I want to test, there is a separate test class in the test project placed within exactly the same folder structure and in exactly the same namespace as the class under test. For a class CustomerRepository from a namespace BusinessLayer.Repositories, there is a test class CustomerRepositoryTests in a namespace BusinessLayerUnitTests.Repositories.
Methods within each test class follow simple naming convention MethodName_Condition_ExpectedOutcome. So the class CustomerRepositoryTests that contains tests for a class CustomerRepository with a Get method defined looks like the following:
[TestFixture]
public class CustomerRepositoryTests
{
    [Test]
    public void Get_WhenX_ThenRecordIsReturned()
    {
        // ...
    }

    [Test]
    public void Get_WhenY_ThenExceptionIsThrown()
    {
        // ...
    }
}
This approach has served me quite well, because it makes locating tests for a piece of code really simple. On the other hand, it makes refactoring more difficult than it should be:
When I decide to split one project into multiple smaller ones, I also need to split my test project.
When I want to change namespace of a class, I have to remember to change a namespace (and folder structure) of a test class as well.
When I change name of a method, I have to go through all tests and change the name there, as well. Sure, I can use Search & Replace, but that is not very reliable. In the end, I still need to check the changes manually.
Is there some clever way of organizing unit tests that would still allow me to locate tests for a specific piece of code quickly, and at the same time lend itself more towards refactoring?
Alternatively, is there some, uh, perhaps Visual Studio extension, that would allow me to somehow say that "hey, these tests are for that method, so when name of the method changes, please be so kind and change the tests as well"? To be honest, I am seriously considering to write something like that myself :)
After working a lot with tests, I've come to realize that (at least for me) all those restrictions bring a lot of problems in the long run rather than benefits. So instead of using names and conventions to link tests to code, we've started using code itself. Each project and each class can have any number of test projects and test classes. All the test code is organized based on what is being tested from a functionality perspective (or which requirement it implements, or which bug it reproduces, etc.). Then, to find the tests for a piece of code, we do this:
[TestFixture]
public class MyFunctionalityTests
{
    public IEnumerable<Type> TestedClasses()
    {
        // We can find the tests for a class, because the test case references it in this special method.
        return new[] { typeof(SomeTestedType), typeof(OtherTestedType) };
    }

    [Test]
    public void TestRequirement23423432()
    {
        // ... test code.
        this.TestingMethod(someObject.methodBeingTested); // We do something similar for methods if we want to track which methods are being tested (we usually don't)
        // ...
    }
}
We can use tools like ReSharper's "Find Usages" to locate the test cases, and when that's not enough, we do some magic with reflection and LINQ, loading all the test classes and running something like allTestClasses.Where(testClass => testClass.TestedClasses().Contains(classUnderTest)).
You can also use the TearDown to gather information about which methods are tested by each method/class and do the same.
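A hedged sketch of that reflection-and-LINQ lookup (the TestedClasses convention is the one from the example above; everything else here is made up for illustration):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class TestFinder
{
    // Returns the test fixtures in an assembly whose TestedClasses() list
    // includes the given class under test.
    public static IEnumerable<Type> FixturesTesting(Assembly testAssembly, Type testedType)
    {
        return from type in testAssembly.GetTypes()
               let method = type.GetMethod("TestedClasses")
               where method != null && !type.IsAbstract
               let fixture = Activator.CreateInstance(type) // assumes a parameterless constructor
               let tested = (IEnumerable<Type>)method.Invoke(fixture, null)
               where tested.Contains(testedType)
               select type;
    }
}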
One way to keep class and test locations in sync when moving the code:
Move the code to a uniquely named temporary namespace
Search for references to that namespace in your tests to identify the tests that need to be moved
Move the tests to the proper new location
Once all references to the temporary namespace from tests are in the right place, then move the original code to its intended target
One strength of end-to-end or behavioral tests is the tests are grouped by requirement and not code, so you avoid the problem of keeping test locations in sync with the corresponding code.
Regarding VS extensions that associate code to tests, take a look at Visual Studio's Test Impact. It runs the tests under a profiler and creates a compact database that maps IL sequence points to unit tests. So in other words, when you change the code Visual Studio knows which tests need to be run.
One unit test project per project is the way to go. We tried one mega unit test project, but that increased the compile time.
To help you refactor, use a product like ReSharper or CodeRush.
Is there some clever way of organizing unit tests that would still allow me to locate tests for a specific piece of code quickly
ReSharper has some good shortcuts that let you search for files or code.
As you said, for the class CustomerRepository there is a test class CustomerRepositoryTests.
The R# shortcut brings up an input box for what you want to find; in your case you can just type CRT and it will show all files whose names have C, R and T as their capitals (CamelHumps matching).
It also allows wildcard searches: CR* will show the list of files including CustomerRepository and CustomerRepositoryTests.

NUnit vs. xUnit

What are the differences between NUnit and xUnit.net?
What's the point of developing two of them, not only one?
I've read that xUnit is being developed by the inventor of NUnit:
xUnit.net is a unit testing tool for the .NET Framework. Written by
the original inventor of NUnit
On the other hand:
NUnit is a unit-testing framework for all .Net languages .. the
current production release, version 2.6, is the seventh major release
of this xUnit based unit testing tool
So where is the truth?
At the time of writing this answer, the latest NUnit version is v3.5 and xUnit.net is v2.1.
Both frameworks are awesome, and they both support parallel test running (in a different way though). NUnit has been around since 2002, it's widely used, well documented and has a large community, whereas xUnit.net is more modern, more TDD adherent, more extensible, and also trending in .NET Core development. It's also well documented.
In addition to that, the main difference I noticed is the way that xUnit.net runs the test methods. So, in NUnit, we've got a test class and a set of test methods in it.
NUnit creates a new instance of the test class and then runs all of the test methods from the same instance.
Whereas, xUnit.net creates a new instance of the test class for each of the test methods.
Therefore, you cannot use fields or properties to share data among test methods - which is a bad practice anyway, as test methods would become dependent on each other, and that is not acceptable in TDD. If you use xUnit.net, you can be sure that your test methods are completely isolated.
If you're willing to share some data among your test methods, though, xUnit will let you do so. So by default all test methods are completely isolated, but you can break this isolation in specific cases intentionally. I fancy this attitude; that's why I like it better.
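To illustrate the per-test lifecycle (a sketch, not from the original answer): in xUnit.net the constructor and IDisposable play the roles of setup and teardown, and each [Fact] gets its own instance, so instance state never leaks between tests.
using System;
using System.Collections.Generic;
using Xunit;

public class LifecycleTests : IDisposable
{
    private readonly List<int> _items = new List<int>();

    public LifecycleTests()
    {
        _items.Add(1); // per-test setup: runs before every [Fact], on a fresh instance
    }

    [Fact]
    public void FirstTest() => Assert.Single(_items);

    [Fact]
    public void SecondTest() => Assert.Single(_items); // still one item; FirstTest ran on a different instance

    public void Dispose()
    {
        // per-test teardown: runs after every [Fact]
    }
}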
You're confusing the name of a single tool (xUnit.net) with the name of a whole class of unit testing frameworks (xUnit, the x referring to a language/environment, e.g. JUnit, NUnit, ...).
xUnit Pros:
xUnit follows a new concept by avoiding the old "SetUp" and "TearDown" methods. It forces us to use IDisposable and a constructor as we should do as .NET developers. Also xUnit has a clear context sharing concept.
xUnit Cons:
Availability to get the test context is not implemented yet.
One benefit of xUnit is that it finds tests in different classes and runs them in parallel. This can save a lot of time if you have many test cases.
You can of course turn this off, or control its operation (number of threads, threads per class, tests per assembly, etc).
Check out this sample solution with two test projects, one using xUnit, the other NUnit.
You can read more about parallel tests in xUnit here.
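For example (a sketch; the same switches exist in xunit.runner.json), xUnit's parallelism can be tuned with an assembly-level attribute:
using Xunit;

// Keep the default collection-per-class behavior but cap the worker threads at 4;
// set DisableTestParallelization = true to turn parallel runs off entirely.
[assembly: CollectionBehavior(
    CollectionBehavior.CollectionPerClass,
    DisableTestParallelization = false,
    MaxParallelThreads = 4)]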
While this was asked 10 years ago, the situation has really changed.
Here is a good, full video which compares xUnit, NUnit and MSTest with code examples: https://www.youtube.com/watch?time_continue=3654&v=rLbF8u46tfE&feature=emb_logo
Once you watch it, you'll see that again and again NUnit has something while the others don't, or have less.
I will not say anything tolerant about xUnit (or about MSTest) - a badly documented green sandbox with a broken understanding of TDD and a lack of cool features. Also, TDD is a concept, not a framework or something to be limited by an IDE or framework. If 'you' need borders to follow TDD and not write bad code, then you need to read books, not design a new framework or use xUnit.
what is xUnit?
almost no documentation
no setup and teardown; manually written wrappers instead
no one-time setup (imagine you write integration or e2e tests and need to set up default DB data)
no writeline
no test context
no other cool attributes which helps to control and decorate tests better
poor assertions
poor parallelism settings
MSTest is the only real competitor to it. Does anyone in the enterprise replace their framework with MSTest? I think not.
NUnit is a modern, full-power, easy-to-use and easy-to-learn framework which has been top 1 for 20+ years. It has full documentation, good architecture (xUnit doesn't have SetUp and OneTimeSetUp? Seriously, replace them with a constructor? That's like calling a bug a feature instead of fixing it), a good community and no childhood problems.
There's one feature that made me switch from xUnit (2.x) to NUnit (3.x), and that is:
XUnit doesn't work with Console.WriteLine(), while NUnit does.
I can't describe how frustrated I was when I found that there's no easy way to get Console.WriteLine working in xUnit, especially when I'm trying to get a short piece of code to work.
I think it is a standard benchmark use case that standard output should just work with your testing framework. I know it's not good practice, and I know there are alternatives like output helpers and the like, but users trying out Console.WriteLine are very often new users, and failing to print anything to the screen is very, very disappointing and frustrating.
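For completeness, the alternative the answer alludes to (a sketch): xUnit routes output through an injected ITestOutputHelper rather than Console.
using Xunit;
using Xunit.Abstractions;

public class OutputTests
{
    private readonly ITestOutputHelper _output;

    // xUnit injects the helper into the test class constructor.
    public OutputTests(ITestOutputHelper output)
    {
        _output = output;
    }

    [Fact]
    public void PrintsSomething()
    {
        _output.WriteLine("visible in the runner's output pane"); // Console.WriteLine would be swallowed
    }
}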
NUnit and xUnit are both popular unit testing frameworks for .NET. Both frameworks provide similar functionality for creating and running unit tests, but there are some key differences between the two.
One of the main differences is the syntax used to write test cases. NUnit is based on the older JUnit framework for Java and follows a similar structure for organizing tests, including attributes for defining test fixtures, setup, and teardown methods.
For example, in NUnit, you would write a test case like this:
public class MyTests {
    [Test]
    public void TestMethod() {
        // test code here
    }
}
In xUnit, you would write a test case like this:
public class MyTests {
    [Fact]
    public void TestMethod() {
        // test code here
    }
}
Another difference is data-driven tests, which let you run the same test case with different input data. Both frameworks support them out of the box: xUnit via [Theory] with [InlineData] or [MemberData], and NUnit via [TestCase] and [TestCaseSource].
Both frameworks also support async testing, allowing you to write asynchronous test cases using the async and await keywords.
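A minimal side-by-side sketch of data-driven tests (the example values are made up, and in a real solution the two halves would live in separate projects):
using NUnit.Framework;

public class NUnitDataTests
{
    [TestCase(2, 3, 5)]
    [TestCase(-1, 1, 0)]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }
}

public class XUnitDataTests
{
    [Xunit.Theory]
    [Xunit.InlineData(2, 3, 5)]
    [Xunit.InlineData(-1, 1, 0)]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Xunit.Assert.Equal(expected, a + b);
    }
}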
Both frameworks are actively maintained and supported, so it largely comes down to personal preference. Some developers prefer the more concise and expressive syntax of xUnit, while others prefer the more traditional attribute-based syntax of NUnit.

Writing standards for unit testing

I plan to introduce a set of standards for writing unit tests into my team. But what to include?
These two posts (Unit test naming best practices and Best practices for file system dependencies in unit/integration tests) have given me some food for thought already.
Other domains that should be covered in my standards are how test classes are set up and how to organize them. For example, if you have a class called OrderLineProcessor, there should be a test class called OrderLineProcessorTest. If there's a method called Process() on that class, then there should be a test called ProcessTest (maybe more, to test different states).
Any other things to include?
Does your company have standards for unit testing?
EDIT: I'm using Visual Studio Team System 2008 and I develop in C#.Net
Have a look at Michael Feathers on what a unit test is (or what makes unit tests bad unit tests).
Have a look at the idea of "Arrange, Act, Assert", i.e. the idea that a test does only three things, in a fixed order:
Arrange any input data and processing classes needed for the test
Perform the action under test
Test the results with one or more asserts. Yes, it can be more than one assert, as long as they all work to test the action that was performed.
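A sketch of a test in that shape, borrowing the OrderLineProcessor example from the question (the types, signatures and values are hypothetical):
[Test]
public void Process_ValidOrderLine_UpdatesTotal()
{
    // Arrange: input data and the class under test
    var line = new OrderLine(quantity: 2, unitPrice: 10m);
    var processor = new OrderLineProcessor();

    // Act: perform the single action under test
    var total = processor.Process(line);

    // Assert: one or more asserts, all about that action
    Assert.AreEqual(20m, total);
}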
Have a look at Behaviour Driven Development for a way to align test cases with requirements.
Also, my opinion of standard documents today is that you shouldn't write them unless you have to - there are lots of resources available already written. Link to them rather than rehashing their content. Provide a reading list for developers who want to know more.
You should probably take a look at the "Pragmatic Unit Testing" series. This is the C# version but there is another for Java.
With respect to your spec, I would not go overboard. You have a very good start there - the naming conventions are very important. We also require that the directory structure match the original project. Coverage also needs to extend to boundary cases and illegal values (checking for exceptions). This is obvious but your spec is the place to write it down for that argument that you'll inevitably have in the future with the guy who doesn't want to test for someone passing an illegal value. But don't make the spec more than a few pages or no one will use it for a task that is so context-dependent.
Update: I disagree with Mr. Potato Head about only one assert per Unit Test. It sounds quite fine in theory but, in practice, it leads to either loads of mostly redundant tests or people doing tons of work in setup and tear-down that itself should be tested.
I follow the BDD style of TDD. See:
http://blog.daveastels.com/files/BDD_Intro.pdf
http://dannorth.net/introducing-bdd
http://behaviour-driven.org/Introduction
In short this means that
The tests are not thought of as "tests", but as specifications of the system's behaviour (hereafter called "specs"). The intention of the specs is not to verify that the system works under every circumstance. Their intention is to specify the behaviour and to drive the design of the system.
The spec method names are written as full English sentences. For example the specs for a ball could include "the ball is round" and "when the ball hits a floor then it bounces".
There is no forced 1:1 relation between the production classes and the spec classes (and generating a test method for every production method would be insane). Instead there is a 1:1 relation between the behaviour of the system and the specs.
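For instance, the ball specs above might read like this in NUnit (a sketch; the answer's own examples are Java):
[TestFixture]
public class BallSpec
{
    [Test]
    public void The_ball_is_round()
    {
        // ...
    }

    [Test]
    public void When_the_ball_hits_a_floor_then_it_bounces()
    {
        // ...
    }
}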
Some time ago I wrote TDD tutorial (where you begin writing a Tetris game using the provided tests) which shows this style of writing tests as specs. You can download it from http://www.orfjackal.net/tdd-tutorial/tdd-tutorial_2008-09-04.zip The instructions about how to do TDD/BDD are still missing from that tutorial, but the example code is ready, so you can see how the tests are organized and write code that passes them.
You will notice that in this tutorial the production classes are named such as Board, Block, Piece and Tetrominoe which are centered around the concepts of a Tetris game. But the test classes are centered around the behaviour of the Tetris game: FallingBlocksTest, RotatingPiecesOfBlocksTest, RotatingTetrominoesTest, FallingPiecesTest, MovingAFallingPieceTest, RotatingAFallingPieceTest etc.
Try to use as few assert statements per test method as possible. This makes sure that the purpose of the test is well-defined.
I know this will be controversial, but don't test the compiler - time spent testing Java Bean accessors and mutators is better spent writing other tests.
Try, where possible, to use TDD instead of writing your tests after your code.
I've found that most testing conventions can be enforced through the use of a standard base class for all your tests, forcing the tester to override methods so that they all have the same name.
I also advocate the Arrange-Act-Assert (AAA) style of testing as you can then generate fairly useful documentation from your tests. It also forces you to consider what behaviour you are expecting due to the naming style.
Another item you can put in your standards is to try to keep your unit tests small - that is, the actual test methods themselves. Unless you are doing a full integration test, there usually is no need for large unit tests, say more than 100 lines. I'll grant you that much in case you have a lot of setup to get to your one test - though if you do, you should probably refactor.
People also talk about refactoring their code; make sure people realize that unit tests are code too. So refactor, refactor, refactor.
I find the biggest problem in the uses I have seen is that people do not recognize that you want to keep your unit tests light and agile; you don't want a monolithic beast for your tests, after all. With that in mind, if you have a method you are trying to test, you should not test every possible path in one unit test - you should have multiple unit tests to account for every possible path through the method.
Yes, if you are doing your unit tests correctly, you should on average have more lines of unit test code than application code. While this sounds like a lot of work, it will save you a lot of time in the end when the inevitable business requirement change comes.
Users of full-featured IDE's will find that "some of them" have quite detailed support for creating tests in a specific pattern. Given this class:
public class MyService {
    public String method1(){
        return "";
    }
    public void method2(){
    }
    public void method3HasAlongName(){
    }
}
When I press Ctrl-Shift-T in IntelliJ IDEA, I get this test class after answering one dialog box:
public class MyServiceTest {
    @Test
    public void testMethod1() {
        // Add your code here
    }

    @Test
    public void testMethod2() {
        // Add your code here
    }

    @Test
    public void testMethod3HasAlongName() {
        // Add your code here
    }
}
So you may want to take a close look at tool support before writing your standards.
I use nearly plain English for my unit test function names. Helps to define what they do exactly:
TEST( TestThatVariableFooDoesNotOverflowWhenCalledRecursively )
{
    /* do test */
}
I use C++ but the naming convention can be used anywhere.
Make sure to include what is not a unit test. See: What not to test when it comes to Unit Testing?
Include a guideline so integration tests are clearly identified and can be run separately from unit tests. This is important, because you can end up with a set of "unit" tests that are really slow if the unit tests are mixed with other types of tests.
Check this for more info on it: How can I improve my junit tests - especially the second update.
If you are using tools from the JUnit family (OCUnit, SHUnit, ...), names of tests already follow some rules.
For my tests, I use custom doxygen tags in order to gather their documentation in a specific page.
