I have an NUnit test project containing a bunch of Test classes/fixtures each of which inherits from an abstract base class hierarchy. These sealed test classes all have a TestFixtureSource attribute attached at the class level:
[TestFixtureSource(typeof(ExecutionBrowsers))]
public sealed class MyTestClass : TestBase
Where ExecutionBrowsers is defined:
internal sealed class ExecutionBrowsers : IEnumerable
{
    public IEnumerator GetEnumerator()
    {
        yield return Browser.Chrome;
        yield return Browser.Edge;
        yield return Browser.Firefox;
    }
}
So essentially each test class will be instantiated 3 times, once for each browser. I want to run these tests in parallel in such a way that a browser does not have more than one test using it at the same time (I have a hard limitation on this - let's not get into that). So what I did was to add a .cs file at the root of the project and stick the following attributes in it:
[assembly: NUnit.Framework.FixtureLifeCycle(NUnit.Framework.LifeCycle.InstancePerTestCase)]
[assembly: NUnit.Framework.Parallelizable(NUnit.Framework.ParallelScope.Fixtures)]
[assembly: NUnit.Framework.LevelOfParallelism(3)]
This doesn't quite work, though: it does not restrict tests to one per browser at any given time. It will start off with the first test in the first test class (some classes have more than one test), running it on each of the three browsers. However, if one browser takes longer than the others, it will get out of sync and begin executing two tests on one browser.
How can I achieve the behaviour that I want?
Well, you see, we have a situation here: unit test frameworks do not run tests of the same collection asynchronously, so you must rethink the structure of your unit tests so that they are separate from each other. I didn't see enough of your structure to be able to assist in this restructuring.
You're trying to achieve fine control by putting attributes at the assembly level. There are lots of ways that can go wrong and you have discovered one of them. I recommend avoiding use of assembly-level ParallelizableAttribute unless you are absolutely sure that the specified parallel behavior will work for all your test fixtures as well as any you or others may add in the future. ;-)
Instead, add [Parallelizable] to the class. It will apply to each of your fixture instances and will allow them to run in parallel with one another. The individual test cases will be non-parallelizable by default with respect to one another.
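A minimal sketch of what that looks like on the fixture from the question (the constructor receiving the Browser value from the fixture source is assumed, since it isn't shown):

[TestFixtureSource(typeof(ExecutionBrowsers))]
[Parallelizable] // fixture instances may run in parallel; test cases inside each stay sequential
public sealed class MyTestClass : TestBase
{
    // Assumed: the Browser yielded by ExecutionBrowsers arrives via the constructor.
    public MyTestClass(Browser browser) : base(browser) { }
}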
For the other attributes, you should eliminate [FixtureLifeCycle] unless you have a specific reason why you need it, i.e. unless your tests are running in parallel and changing the state of the fixture. You should only use [LevelOfParallelism] if it is needed for performance and should not count on it to keep any particular set of tests from running with one another.
You have not said how you run the tests. The above will work if you are running straight nunit console plus framework from the command line. If you are using Visual Studio, there are some other considerations because Test Explorer can change what NUnit thinks you are doing based on how it runs the tests.
Related
I have an NUnit test project which has two [TestFixture]s.
I want one of them to run before the other as it deals with file creation. They're currently running in the wrong order.
Given that I can't change the [Test]s or group them into a single unit test, is there a way in which I can have control over test fixture running order?
I have tried [Order(int num)] attribute and have also tried to create a new playlist.
Both of them aren't working.
C#, .NET Framework, NUnit Testing Framework, Windows.
The documentation for [OrderAttribute] states that ordering for fixtures applies within the containing namespace.
Make sure that your fixtures are within the same namespace & that you've applied [OrderAttribute] at the test fixture level:
namespace SameNamespace
{
    [TestFixture, Order(1)]
    public class MyFirstFixture
    {
        /* ... */
    }

    [TestFixture, Order(2)]
    public class MySecondFixture
    {
        /* ... */
    }
}
Also, it's important to remember that while MyFirstFixture will run before MySecondFixture, the ordering of the tests inside is local to the test fixture.
A test with [Order(1)] in MySecondFixture will run after all the tests in MyFirstFixture have completed.
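For illustration, a minimal sketch (the test method name is made up):

[TestFixture, Order(2)]
public class MySecondFixture
{
    [Test, Order(1)]
    public void RunsFirstWithinThisFixture()
    {
        // Ordered first inside MySecondFixture, but it still runs after every test
        // in MyFirstFixture, because fixture ordering is applied before test ordering.
    }
}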
Important note: the documentation also does not guarantee ordering.
Tests do not wait for prior tests to finish. If multiple threads are in use, a test may be started while some earlier tests are still being run.
Regardless, tests should follow the F.I.R.S.T. principles of testing, introduced by Robert C. Martin in his book "Clean Code".
The I in F.I.R.S.T. stands for Isolated, meaning that tests should not be dependent on one another, and each test should be responsible for the setup it requires to be executed correctly.
Try your best to eventually combine the tests into one if they are testing one thing, or rewrite your logic in a way where the piece of code being tested by test 1, can be tested isolated from the piece of code being tested by test 2.
This will also have the side effect of cleaner code adhering to SRP.
Win-win situation.
I have a testing framework that has been converted to heavily utilize NUnit [Parallelizable]. I used to store contextual test data in the base class of the [TestFixture], where NUnit orchestrates hooks like [OneTimeSetUp], [TearDown], etc.
For example:
[Test]
public void GoToGoogle()
{
    var driver = new ChromeDriver();
    // do some stuff
    // Would like to pass data outside of test scope
    TestContext.CurrentContext.Test.Properties.Set("DriverUrl", driver.Url); // Obviously does not work
    Assert.Fail("This test should fail");
}
In the [TearDown] hook, I would like to get certain information about the test contextually. Because not everything is able to be handled nicely in asserts.
[TearDown]
public void TearDown()
{
    var url = TestContext.CurrentContext.Test.Properties["DriverUrl"].ToString();
    var msg = $"Test encountered an error at URL: {url}";
    TestAPI.PushResult(Result.Fail, msg);
}
The code above involving the TestContext does not work for obvious reasons, but I am wondering if there is a best practice that allows for me to pass data in this manner, keeping in mind respect to [Parallelizable] and that I cannot scope test data or dependencies to the [TestFixture]
You say "for obvious reasons" but I'll first spell out the reasons why you cannot effectively set a property on the current test through TestContext. After all, other people just might be reading this. :-)
The Obvious Part
TestContext.CurrentContext.Test does not return the internal representation of a test from inside NUnit. Doing so would allow users to break NUnit in a variety of ways. In particular, TestContext.CurrentContext.Test.Properties returns a copy of the properties used within NUnit.
That copy of the properties is not readonly, so you are able to set properties on it. For that reason, one might expect to be able to set it in the [Test] method and access the value in the [TearDown].
Unfortunately, because of a minor implementation detail, that's not the case. In fact, each time you use TestContext.CurrentContext, an entirely new copy of the context is created. The only reason for this, I'm afraid, is that it was originally implemented that way and is a bit difficult to change in a non-breaking way.
As a result of this implementation detail, we lost an easy way for the three parts (SetUp, Test method, TearDown) of a test to communicate. Prior to the availability of parallel execution, it was possible to pass such information using members of the fixture class. That no longer works once tests are run in parallel.
Workarounds
Use Thread Local Storage to hold the retained information. SetUp, Test and Teardown all run on the same thread. Note that OneTimeSetUp and OneTimeTearDown will not generally use the same thread in a parallel execution environment.
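A minimal sketch of that first workaround, reusing the example from the question (TestAPI and Result are the asker's own reporting types; the fixture name and the static ThreadLocal<string> field are the assumed additions, and the field needs using System.Threading):

public class GoogleTests
{
    // Per-thread storage; SetUp, the test body and TearDown share one worker thread.
    private static readonly ThreadLocal<string> DriverUrl = new ThreadLocal<string>();

    [Test]
    public void GoToGoogle()
    {
        var driver = new ChromeDriver();
        // do some stuff
        DriverUrl.Value = driver.Url; // instead of TestContext.CurrentContext.Test.Properties
        Assert.Fail("This test should fail");
    }

    [TearDown]
    public void TearDown()
    {
        // Same thread as the test body, so the stored value is visible here.
        var msg = $"Test encountered an error at URL: {DriverUrl.Value}";
        TestAPI.PushResult(Result.Fail, msg);
    }
}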
If you are willing to run fixtures in parallel but not individual test cases, then you can still use class members to retain information. As a further step, apply the SingleThreadedAttribute to your fixture, forcing all the code associated with it (including one-time setup and teardown) to run on the same thread.
If you have many fixtures, which can run in parallel, the second approach may actually give you a better performance trade-off than other approaches. Unfortunately, not everyone can use it - at least not without a major reorganization of their tests. You have to look at what your own tests are doing.
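A sketch of that second approach (the fixture, field and URL here are illustrative only):

[SingleThreaded]                     // everything in this fixture runs on one thread
[Parallelizable(ParallelScope.Self)] // the fixture as a whole may still run alongside other fixtures
public class CheckoutPageTests
{
    private string lastUrl; // a plain field is safe again: only one thread touches this instance

    [Test]
    public void VisitsCheckout()
    {
        // ... drive the browser ...
        lastUrl = "https://example.test/checkout";
    }

    [TearDown]
    public void TearDown()
    {
        TestContext.WriteLine($"Finished at {lastUrl}");
    }
}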
Permanent Solution
That would be to modify NUnit so that properties are both writable and shareable, at least within a single fixture instance. There have already been a few feature requests out there to do that on the NUnit GitHub project. I'm no longer active on the framework project, so I don't know what the plans are. However, I think I can say that it's not likely to happen before a major version change, i.e. NUnit 4.0.
I am using Selenium WebDriver to run a number of tests.
I have a base class which includes a lot of tests.
In my second class, called 'People', I have another set of tests. The People class inherits the Base class.
I initialise some of the tests in the base class, which run when I run the tests for the People class. My problem is that it also runs all the tests in the base class, whether or not I initialise them. This leaves me running 100 tests, which takes forever, when I only really wanted to run about 50.
Is there any setting to stop selenium web driver from doing this?
As far as I understand your problem, this is normal. Your test framework will load your class 'People' and search for all tests. Since the tests defined in your base class also belong to your class 'People' (for the framework it makes no difference), the framework will execute them as well.
You shouldn't put any tests in your base class, nor initialise things for your class 'People' there. The base class should contain only utility/convenience methods and the startup/shutdown methods that are common to ALL your tests (for People and the rest); see the sketch below.
In sub class 'People', you put all the tests and the startup/shutdown methods related to 'People'.
In sub class 'Toto', you put all the tests and the startup/shutdown methods related to 'Toto'.
etc.
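Roughly, the shape might look like this (the names and the Selenium setup are illustrative, not your actual code):

public abstract class BaseTest              // no [Test] methods in here
{
    protected IWebDriver Driver;

    [SetUp]
    public void StartBrowser() => Driver = new ChromeDriver();

    [TearDown]
    public void StopBrowser() => Driver?.Quit();

    protected void LogIn(string userName) { /* shared helper, not a test */ }
}

[TestFixture]
public class PeopleTests : BaseTest         // only the People tests live here
{
    [Test]
    public void CanCreatePerson() { /* ... */ }
}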
Hope this helps.
My current way of organizing unit tests boils down to the following:
Each project has its own dedicated project with unit tests. For a project BusinessLayer, there is a BusinessLayer.UnitTests test project.
For each class I want to test, there is a separate test class in the test project placed within exactly the same folder structure and in exactly the same namespace as the class under test. For a class CustomerRepository from a namespace BusinessLayer.Repositories, there is a test class CustomerRepositoryTests in a namespace BusinessLayerUnitTests.Repositories.
Methods within each test class follow simple naming convention MethodName_Condition_ExpectedOutcome. So the class CustomerRepositoryTests that contains tests for a class CustomerRepository with a Get method defined looks like the following:
[TestFixture]
public class CustomerRepositoryTests
{
    [Test]
    public void Get_WhenX_ThenRecordIsReturned()
    {
        // ...
    }

    [Test]
    public void Get_WhenY_ThenExceptionIsThrown()
    {
        // ...
    }
}
This approach has served me quite well, because it makes locating tests for some piece of code really simple. On the other hand, it makes code refactoring more difficult than it should be:
When I decide to split one project into multiple smaller ones, I also need to split my test project.
When I want to change namespace of a class, I have to remember to change a namespace (and folder structure) of a test class as well.
When I change name of a method, I have to go through all tests and change the name there, as well. Sure, I can use Search & Replace, but that is not very reliable. In the end, I still need to check the changes manually.
Is there some clever way of organizing unit tests that would still allow me to locate tests for a specific code quickly and at the same time lend itself more towards refactoring?
Alternatively, is there some, uh, perhaps Visual Studio extension, that would allow me to somehow say that "hey, these tests are for that method, so when name of the method changes, please be so kind and change the tests as well"? To be honest, I am seriously considering to write something like that myself :)
After working a lot with tests, I've come to realize that (at least for me) having all those restrictions brings a lot of problems in the long run, rather than benefits. So instead of using names and conventions to determine that, we've started using code. Each project and each class can have any number of test projects and test classes. All the test code is organized based on what is being tested from a functionality perspective (or which requirement it implements, or which bug it reproduced, etc.). Then, to find the tests for a piece of code, we do this:
[TestFixture]
public class MyFunctionalityTests
{
    public IEnumerable<Type> TestedClasses()
    {
        // We can find the tests for a class because each test class declares the types it covers in this special method.
        return new[] { typeof(SomeTestedType), typeof(OtherTestedType) };
    }

    [Test]
    public void TestRequirement23423432()
    {
        // ... test code.
        this.TestingMethod(someObject.methodBeingTested); // We do something similar for methods if we want to track which methods are being tested (we usually don't)
        // ...
    }
}
We can use tools like ReSharper's "find usages" to find the test cases, etc. And when that's not enough, we do some magic with reflection and LINQ by loading all the test classes and running something like allTestClasses.Where(testClass => testClass.TestedClasses().Contains(someTestedType)).
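When we need that reverse lookup, the reflection/LINQ part is roughly this (a sketch assuming the TestedClasses convention above, the usual System.Linq/System.Reflection usings, and fixtures with parameterless constructors):

var testAssembly = typeof(MyFunctionalityTests).Assembly;

var fixturesCoveringCustomerRepository = testAssembly.GetTypes()
    .Where(t => t.IsDefined(typeof(TestFixtureAttribute), inherit: true))
    .Where(t => t.GetMethod("TestedClasses") != null)
    .Where(t =>
    {
        // Instantiate the fixture and ask it which production classes it covers.
        var fixture = Activator.CreateInstance(t);
        var tested = (IEnumerable<Type>)t.GetMethod("TestedClasses").Invoke(fixture, null);
        return tested.Contains(typeof(CustomerRepository));
    })
    .ToList();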
You can also use the TearDown to gather information about which methods are tested by each method/class and do the same.
One way to keep class and test locations in sync when moving the code:
Move the code to a uniquely named temporary namespace
Search for references to that namespace in your tests to identify the tests that need to be moved
Move the tests to the proper new location
Once all references to the temporary namespace from tests are in the right place, then move the original code to its intended target
One strength of end-to-end or behavioral tests is the tests are grouped by requirement and not code, so you avoid the problem of keeping test locations in sync with the corresponding code.
Regarding VS extensions that associate code to tests, take a look at Visual Studio's Test Impact. It runs the tests under a profiler and creates a compact database that maps IL sequence points to unit tests. So in other words, when you change the code Visual Studio knows which tests need to be run.
One unit test project per project is the way to go. We have tried with a mega unit test project but this increased the compile time.
To help you refactor, use a product like ReSharper or CodeRush.
Is there some clever way of organizing unit tests that would still
allow me to locate tests for a specific code quickly
ReSharper has some good shortcuts that allow you to search for a file or a piece of code.
As you said, for the class CustomerRepository there is a test class CustomerRepositoryTests.
The R# shortcut shows an input box for what you want to find; in your case you can just type CRT and it will show you all the files whose names contain the capitals C, then R, then T.
It also allows you to search with wildcards, so CR* will show you the list of files CustomerRepository and CustomerRepositoryTests.
I'm writing a data structure in C# (a priority queue using a fibonacci heap) and I'm trying to use it as a learning experience for TDD which I'm quite new to.
I understand that each test should only test one piece of the class so that a failure in one unit doesn't confuse me with multiple test failures, but I'm not sure how to do this when the state of the data structure is important for a test.
For example,
private PriorityQueue<int> queue;

[SetUp]
public void Initialize()
{
    this.queue = new PriorityQueue<int>();
}

[Test]
public void PeekShouldReturnMinimumItem()
{
    this.queue.Enqueue(2);
    this.queue.Enqueue(1);
    Assert.That(this.queue.Peek(), Is.EqualTo(1));
}
This test would break if either Enqueue or Peek broke.
I was thinking that I could somehow have the test manually set up the underlying data structure's heap, but I'm not sure how to do that without exposing the implementation to the world.
Is there a better way to do this? Is relying on other parts ok?
I have a SetUp in place, just left it out for simplicity.
Add a private accessor for the class to your test project. Use the accessor to set up the private properties of the class in some known way instead of using the class's methods to do so.
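One way to do that without exposing the implementation is reflection; a sketch (the field name "heap" and the BuildKnownHeap helper are pure assumptions about the PriorityQueue<int> internals, and System.Reflection is needed):

[Test]
public void PeekShouldReturnMinimumItem_FromPreparedHeap()
{
    var queue = new PriorityQueue<int>();

    // Install a hand-built heap directly into the private state,
    // so Peek is exercised without going through Enqueue.
    var heapField = typeof(PriorityQueue<int>)
        .GetField("heap", BindingFlags.Instance | BindingFlags.NonPublic);
    heapField.SetValue(queue, BuildKnownHeap()); // hypothetical helper returning a heap whose minimum is 1

    Assert.That(queue.Peek(), Is.EqualTo(1));
}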
You also need to use SetUp and TearDown methods on your test class to perform any initializations needed between tests. I would actually prefer recreating the queue in each test rather than reusing it between tests to reduce coupling between test cases.
Theoretically, you only want to test a single feature at a time. However, if your queue only has a couple methods (Enqueue, Peek, Dequeue, Count) then you're quite limited in the kinds of tests you can do while using one method only.
It's best you don't over-engineer the problem and simply create a few simple test cases (such as the one above) and build on top of that to ensure an appropriate coverage of various features.
I feel it is appropriate to write tests that cover multiple features, as long as you have something underneath that will also break if one of the used features is broken. Therefore, if you have a test suite and you break your Enqueue, obviously all of your tests (or most of them) will fail, but you'll know Enqueue broke because of your simplest tests. The relationship of a test to its test suite should not be neglected.
I think this is ok, but clear the queue at the start of your test method.