xUnit - disable parallelism in a few tests of the full set - C#

I have about 100 Selenium tests to run, but 2 of them cannot be run in parallel.
Is it possible to disable parallelism only for those 2 tests, using xUnit?
(Those 2 tests cannot run in parallel because they need to simulate keyboard input, so they would lose input focus under parallel execution.)
The best scenario I am looking for:
Add some attribute to the 2 tests that disables parallelism for them. Then, during test execution, the 98 other tests run on 16 threads, and the two remaining tests are executed at the end on 1 thread.
I know that one possible solution is something like this (example xunit.console commands are sketched after the list):
Add 'Parallel' and 'NonParallel' categories to the tests.
With xunit.console, run only the 'Parallel' category with maxthreads=16.
After that, run the 'NonParallel' category with maxthreads=1.
And after all of that, merge the xUnit reports into one.
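For reference, that two-pass workaround would look roughly like the commands below (the assembly name, trait name, and output file names are illustrative, not from the question):
xunit.console.exe MyTests.dll -trait "Category=Parallel" -maxthreads 16 -xml parallel-results.xml
xunit.console.exe MyTests.dll -trait "Category=NonParallel" -maxthreads 1 -xml nonparallel-results.xml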
But that doesn't fit my needs, and I wonder if I can run tests in the scenario I described above as the "best scenario".
P.S. If there is no solution for this in xUnit, can I find something like it in NUnit?

While Diver's answer is correct, it doesn't show exactly how to achieve this.
Create a new class to serve as the test collection definition, something like this:
[CollectionDefinition(nameof(SystemTestCollectionDefinition), DisableParallelization = true)]
public class SystemTestCollectionDefinition { }
Now you can assign the same collection name to all tests that need parallelization disabled. In my case, I just added the attribute to the class that is the base for all system tests:
[Collection(nameof(SystemTestCollectionDefinition))]
public class BaseSystemTest { ... }
Now all tests within the collection will be executed in sequence.
Source: https://github.com/xunit/xunit/issues/1999

If you have xUnit >= 2.3, try [CollectionDefinition(DisableParallelization = true)].
It adds the ability to disable cross-collection parallelization for individual test collections, via the test collection definition. Parallel-capable test collections are run first (in parallel), followed by parallel-disabled test collections (run sequentially).
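Combining the two answers, a minimal end-to-end sketch might look like this (collection, class, and test names are invented for illustration):
using Xunit;

[CollectionDefinition("NonParallelKeyboardTests", DisableParallelization = true)]
public class NonParallelKeyboardTestsDefinition { }

// Both classes share the collection, so their tests run one at a time,
// and the whole collection runs after the parallel-capable collections.
[Collection("NonParallelKeyboardTests")]
public class KeyboardFocusTests1
{
    [Fact]
    public void SimulatesTyping()
    {
        // Selenium steps that need exclusive keyboard focus go here.
    }
}

[Collection("NonParallelKeyboardTests")]
public class KeyboardFocusTests2
{
    [Fact]
    public void SimulatesShortcuts()
    {
        // Selenium steps that need exclusive keyboard focus go here.
    }
}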

Related

Refactoring large methods in NUnit tests

How do I manage large tests? I'm testing a web application, and one of its features is making a new order, where the user has to go through a couple of forms before the order is created.
I can write a selenium test in C# that tests the entire flow of making a new order. But that test would rather turn out quite large.
The simplified flow looks like this:
Select 1 or more customers for the order
Select 1 or more products associated with the selected customers
Add some metadata about the order, such as name, who has to complete it, date, comments, etc.
There are a few subforms where the user has to search for customers and for products.
Now I can write one (large) test that walks through the entire primary flow. But that test could easily result in a method with 100+ lines.
And I also want to test certain alternative flows, which would result in a method that could easily be 80% the same as the normal flow method.
However, I know you shouldn't write tests that depend on each other. So there's my dilemma. My code will look something like this:
[Test]
public void NormalFlow()
{
    // Execute the first two steps
    // Around 100 lines
    // Execute the third step normally
    // Around 50 lines
}

[Test]
public void AlternativeFlow()
{
    // Execute the first two steps
    // Around 100 lines
    // Execute the third step, but follow the alternative flow
    // Around 50 lines
}
There's a lot of duplicate code, but I can't just start at the third step, so I've got to walk through the first two steps. I can't separate those first two steps out into their own test, because that would make my tests dependent on each other.
What should I do? How do I avoid duplicating all of my code without creating dependent tests?
Now I can write one (large) test that walks through the entire primary flow. But that test could easily result in a method with 100+ lines. And I also want to test certain alternative flows, which would result in a method that could easily be 80% the same as the normal flow method.
Just because you're writing a test that does several things doesn't mean you have to put it all in a single method. Refactoring your test code to make sure it is of an appropriate quality is an important part of the development process.
I know you shouldn't write tests that depend on each other.
Whilst this is true, I think you may be taking it a bit literally. Based on your description of the system, this would be an example of two tests that depend on each other:
Test One: Create a new order for customer XXX.
Test Two: Add product YYY to an open order for customer XXX.
Test Two is dependent on Test One, because if Test One hasn't executed or has failed, Test Two will also fail, and it may not be obvious why.
This is different from two related tests that aren't dependent on each other. So, the alternative to the above would be:
Test One: Create a new order for customer XXX.
Test Two: Create a new order for customer ZZZ and add product YYY to the order.
Each test case is self-contained and can be run in isolation. As you've said, this is essentially because Test Two performs a lot of the same processing as Test One. This is OK, but it doesn't mean that all of the code for Test Two has to be in a single method. If you were writing production code, you would probably look at your code, identify duplication, and refactor it out into different methods or classes that could be reused. If this makes your test code easier to read, then it's absolutely the right thing to do.
So from your example code you might have something like this:
[Test]
public void NormalFlow()
{
    var sessionDetails = Logon(customerCredentials);
    var openOrder = CreateOrder(sessionDetails);
    AddProductToOrder(openOrder, productDetails);
    AddMetaDataToOrder(openOrder, metaData);
}

[Test]
public void AlternativeFlow()
{
    var sessionDetails = Logon(customerCredentials);
    var openOrder = CreateOrder(sessionDetails);
    AddProductToOrder(openOrder, productDetails);
    AddMetaDataToOrder(openOrder, alternateFlowMetaData);
}
The shared/duplicate code is pushed into the shared methods. Just because the code is shared doesn't mean the tests are dependent.
As @Sriram Sakthivel has said in the comments, another thing you can do if you have the same code at the start of every test (for example, logging on) is to put it into a method marked with the [SetUp] attribute. Remember, the goal is to make your test code easy to write, understand, and maintain.
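A minimal sketch of that, reusing the hypothetical helpers and fields from the example above (SessionDetails is an invented type for the value Logon returns):
[TestFixture]
public class OrderFlowTests
{
    private SessionDetails sessionDetails; // hypothetical type returned by Logon

    [SetUp]
    public void LogOnBeforeEachTest()
    {
        // Runs before every [Test] in this fixture.
        sessionDetails = Logon(customerCredentials);
    }

    [Test]
    public void NormalFlow()
    {
        var openOrder = CreateOrder(sessionDetails);
        AddProductToOrder(openOrder, productDetails);
        AddMetaDataToOrder(openOrder, metaData);
    }
}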

Listing test results in VS2010 that DON'T include a keyword

Is there any way I can filter test results by specifying a keyword that should NOT appear?
Context:
I've written some C# classes and methods, but have not implemented those methods yet (I made them throw a NotImplementedException so that they clearly indicate this). I have also written some test cases for those functions, but they currently fail because the methods throw the NotImplementedException. This is OK and I expect it for now.
I want to ignore these tests for now and look at other test results that are more meaningful, so I was trying to figure out how I can list results that do not contain "NotImplementedException". However, I can only list the results that do contain that keyword, not those that don't. Is there any way I can list the results that don't? Using some wildcards or something?
I see a lot of information about the new Test Explorer in VS2012, but that's not a feature in 2010, which is what I'm using.
You can sort of cheat to pass these tests, if you want to, by marking that the test expects an exception to be thrown, thereby passing the test.
[TestMethod]
[ExpectedException(typeof(NotImplementedException))]
public void NotYetImplementedMethod()
{
    // ...
}
Alternatively, you can create categories for your tests. This way you can choose which tests to run in the Test Explorer, if you assign a category to most of your tests.
[TestMethod]
[TestCategory("NotImplementedNotTested")]
public void NotYetImplementedMethod()
{
    // ...
}
Last but not least, the simplest solution: [Ignore]. This will skip the tests altogether.
[TestMethod]
[Ignore]
public void NotYetImplementedMethod()
{
    // ...
}
Reference:
http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Getting-Started-with-Unit-Testing-Part-1
http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Getting-Started-with-Unit-Testing-Part-2
I have also written some test cases for those functions
If your tests are linked to Test Case work items on TFS, you could simply set the Test Case's State to Design. Then, in your Test Plans, exclude all test cases that are in the Design state.
If they are not linked to actual Test Case work items (say, a batch of unit tests), I believe the best solution is the [Ignore] attribute (as @Serv already mentioned), because I don't think you want to run tests that are not implemented yet, or waste time finding out how to exclude them from test results.

How to run NUnit test fixtures serially?

I have several suites of integration tests implemented in C#/NUnit. Each test suite is a separate class, and each fixture setup creates and populates a SQL Server database from scripts. This all used to work just fine prior to ReSharper 5.1.
Unfortunately, ReSharper 5.1 begins to run several fixtures at the same time. This is a breaking change: they all attempt to create and populate the same database, which obviously ends in a mess. Is there any way I could have ReSharper run my test fixtures serially?
If not, what would you suggest to run my NUnit test fixtures serially, one fixture at a time?
The order in which individual tests run does not matter.
I don't know whether it is possible to prevent ReSharper from running tests in parallel; if not, the following hack might work: create a static class with a static readonly lock object. Then, in [TestFixtureSetUp], call Monitor.Enter() on that object, and call Monitor.Exit() on it in [TestFixtureTearDown]. That way, only one test fixture will be allowed to run at a time. Not pretty, though...
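A sketch of that hack, with invented class names. Note that Monitor is thread-affine, so Enter and Exit must happen on the same thread, which a fixture's setup and teardown normally share:
using System.Threading;
using NUnit.Framework;

// Shared gate: every fixture that must not overlap references the same lock object.
public static class FixtureGate
{
    public static readonly object Lock = new object();
}

[TestFixture]
public class DatabaseTestsA
{
    [TestFixtureSetUp] // [OneTimeSetUp] in NUnit 3
    public void FixtureSetUp()
    {
        Monitor.Enter(FixtureGate.Lock); // blocks until no other fixture holds the gate
        // Create and populate the database here.
    }

    [TestFixtureTearDown] // [OneTimeTearDown] in NUnit 3
    public void FixtureTearDown()
    {
        Monitor.Exit(FixtureGate.Lock); // lets the next fixture proceed
    }
}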
Are you sure about this? I just tried it out by putting traces of the following form in tests in 3 different NUnit fixtures, followed by a "Run all". They don't seem to be running in parallel.
Trace.WriteLine(DateTime.Now.ToString("hh:mm:ss.ffff") + "VC:Start");
Trace.WriteLine(DateTime.Now.ToString("hh:mm:ss.ffff") + "VC:Done");
The output I see is (R# build 5.1.1753.1):
01:06:41.6639IOC
01:06:41.6649Done - IOC
01:06:41.6679VC:Start
01:06:41.6729VC:Done
01:06:41.2439Mef
01:06:41.6589Mef-Done
Unless the code being tested does its own transaction management, you can run each test inside a transaction. That way, the different tests won't disturb each other. Also, you don't have to worry about cleaning up after each test, since each test's transaction can simply abort after the test is completed.
In our projects, we usually let our integration tests derive from a class that looks more or less like this:
using System.Transactions;
using NUnit.Framework;

public class TransactionalTestFixture
{
    private TransactionScope TxScope;

    [SetUp]
    public void TransactionalTestFixtureSetUp()
    {
        // Each test runs in its own serializable transaction that is never committed.
        TxScope = new TransactionScope(TransactionScopeOption.RequiresNew,
            new TransactionOptions { IsolationLevel = IsolationLevel.Serializable });
    }

    [TearDown]
    public void TransactionalTestFixtureTearDown()
    {
        // Disposing without calling Complete() rolls the transaction back.
        TxScope.Dispose();
    }
}
Give them alphabetical names, i.e. prefix them with a letter that signifies their running order (as sketched below). If you need this to happen, you should be able to accept this nasty naming convention too.
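For illustration, the convention might look like this (fixture names invented); note this relies on the runner happening to execute fixtures in alphabetical order rather than on any documented guarantee:
[TestFixture] public class A01_CreateDatabaseTests { /* intended to run first */ }
[TestFixture] public class A02_IntegrationTests { /* intended to run after A01 */ }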

Is there a way to run unit tests sequentially with MSTests?

I am working on an application that is mostly single-threaded and single-user.
There are a few worker threads here and there, and they use only thread-safe objects and classes. The unit tests actually test those with multiple threads (explicitly created for the tests), and they test fine.
The VSTS unit tests fail when testing business objects and subsystems that are not thread-safe. It is okay for them not to be thread-safe; that's the way the application uses them.
But the 'one thread per TestMethod' approach of MS tests kills us. I've had to implement object locks in many unit test classes just to ensure that the tests run one after the other (I don't really care about the order, but I can't have two test methods hitting the same object at the same time).
The code looks like this:
[TestClass]
public class TestSomeObject
{
    static object turnStile = new object();
    ...
    [TestMethod]
    public void T01_TestThis()
    {
        lock (turnStile)
        {
            // actual test code
        }
    }

    [TestMethod]
    public void T02_TestThat()
    {
        lock (turnStile)
        {
            // actual test code
        }
    }
}
Is there a better/more elegant way to make the tests run sequentially?
Use an Ordered Test.
Test > New Test > Ordered Test
You can use a playlist:
Right-click on the test method -> Add to playlist -> New playlist.
You can then specify the execution order.
There is the notion of an "Ordered Test" in which you can list tests in sequence. It is more geared towards ensuring a certain sequential order, but I can't see how that would be possible if B doesn't wait for A to complete.
Apart from that, it is unfortunate that your tests interfere with each other. There are Setup/TearDown methods that can be used per test, so it may after all be possible to isolate the tests from each other.
You can specifically require a mutex for each test execution, either in the specific tests you want to serialize, or for all the tests in a class (whatever shares the same mutex string).
For an entire test class, you can use the TestInitialize and TestCleanup attributes like so:
// Named system-wide mutex: every test class that uses the same name string
// serializes against all others using it. initiallyOwned must be false so
// that the WaitOne call below performs the actual acquisition.
private readonly Mutex testMutex = new Mutex(false, "MySpecificTestScenarioUniqueMutexString");

[TestInitialize]
public void Initialize()
{
    // Block until no other test holds the mutex.
    testMutex.WaitOne();
}

[TestCleanup]
public void Cleanup()
{
    testMutex.ReleaseMutex();
}
To be clear, this isn't a feature of the test framework; any locking structure should work. I'm using the system-provided Mutex in this case:
https://msdn.microsoft.com/en-us/library/system.threading.mutex(v=vs.110).aspx
I finally used the ordered test method. It works well.
However, I had a hell of a time making it work with the NAnt build.
Running only the ordered test list in the build requires using the /testmetadata and /testlist switches in the MSTest invocation block.
The documentation on these is sketchy, to use a kind description. I googled all over for examples of "MSTest /testmetadata /testlist" to no effect.
The trick is simple, however, and I feel compelled to give it back to the community, in case someone else bumps into the same issue.
Edit the test metadata file (with a .vsmdi extension) and add a new list to the list of tests (the first node in the tree on the left pane). Give it the name you want, for example 'SequentialTests'.
If you used a /testcontainer switch for the MSTest invocation, remove it.
Add a /testmetadata: switch for MSTest.
Add a /testlist:SequentialTests switch (or whatever name you used).
Then MSTest runs only the tests listed in the test list you created.
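For illustration, the resulting invocation might look like this (the metadata file name here is hypothetical):
MSTest /testmetadata:MySolution.vsmdi /testlist:SequentialTests /resultsfile:testresults.trx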
If someone has a better method, I'd like to hear about it!
I used ordered tests and also configured them easily on Jenkins; just use the command:
MSTest /testcontainer:"orderedtestfilename.orderedtest" /resultsfile:"testresults.trx"

Is it a good practice to use RowTest in a unit test

NUnit and MbUnit have a RowTest attribute that allows you to send different sets of parameters into a single test.
[RowTest]
[Row(5, 10, 15)]
[Row(3.5, 2.7, 6.2)]
[Row(-5, 6, 1)]
public void AddTest(double firstNumber, double secondNumber, double result)
{
    Assert.AreEqual(result, firstNumber + secondNumber);
}
I used to be a huge fan of this feature; I used it everywhere. However, lately I'm not sure it's a very good idea to use RowTest in unit tests. Here are my reasons:
A unit test must be very simple. If there's a bug, you don't want to spend a lot of time figuring out what your test tests. When you use multiple rows, each row has a different set of parameters and tests something different.
Also, I'm using TestDriven.NET, which allows me to run my unit tests from my IDE, Visual Studio. With TestDriven.NET I cannot instruct it to run a specific row; it will execute all the rows. Therefore, when I debug, I have to comment out all the other rows and leave only the one I'm working with.
Here's an example of how I would write my tests today:
[Test]
public void Add_with_positive_whole_numbers()
{
    Assert.AreEqual(15, 5 + 10);
}

[Test]
public void Add_with_one_decimal_number()
{
    Assert.AreEqual(6.2, 3.5 + 2.7);
}

[Test]
public void Add_with_negative_number()
{
    Assert.AreEqual(1, -5 + 6);
}
That said, I still occasionally use the RowTest attribute, but only when I believe it's not going to slow me down when I need to work on the test later.
Do you think it's a good idea to use this feature in a unit test?
Yes. It's basically executing the same test over and over again with different inputs... saving you the trouble of repeating yourself for each distinct input combination.
Thus it upholds the 'once and only once' or DRY principle: if you need to update this test, you just update one test instead of multiple tests.
Each Row should be a representative input from a distinct set - i.e. this input is different from all others w.r.t. this function's behavior.
RowTest was actually a much-asked-for feature for NUnit, having originated in MbUnit. I think Schlapsi wrote it as an NUnit extension, which then got promoted to standard distribution status. The NUnit GUI also groups all RowTests under one node and shows which input failed/passed, which is cool.
The minor disadvantage of the 'need to debug' is something I personally can live with. After all, it just means temporarily commenting out a number of Row attributes (most of the time I can eyeball the function once I find that Scenario X failed, and solve it without needing a step-through), or conversely copying the test out and temporarily passing it the fixed (problematic) inputs.
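For reference, NUnit later shipped this idea built in as the [TestCase] attribute; a minimal sketch of the same test in that style (the fixture name is invented, and the rows mirror the example above):
using NUnit.Framework;

[TestFixture]
public class AdditionTests
{
    // Each TestCase row runs as its own test, and runners report each row individually.
    [TestCase(5, 10, 15)]
    [TestCase(3.5, 2.7, 6.2)]
    [TestCase(-5, 6, 1)]
    public void AddTest(double firstNumber, double secondNumber, double result)
    {
        Assert.AreEqual(result, firstNumber + secondNumber);
    }
}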
