I'm doing something like this (below) with multiple Test Classes but [ClassCleanup] is not called until ALL tests run.
Because of this, each class starts up a browser and ALL of the browsers remain open until ALL of the tests are completed, and then they ALL close down.
Is this expected?
Could someone comment if their tests do the same thing?
Is there a way to force a browser to close when the test class is done?
[ClassCleanup]
public static void ClassCleanup()
{
    driver.Quit();
}
I don't know if this is just how Selenium is supposed to work, or if I have some sort of hanging resource that doesn't allow the browser to close until the end?
I cannot use [TestCleanup] because then I cannot have multiple tests in the same test class (as it would close the browser before the 2nd test runs).
I finally found an example that works :)
Simply replaced
[ClassCleanup]
with
[ClassCleanup(ClassCleanupBehavior.EndOfClass)]
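For reference, a minimal sketch of the fix in context; the class name, driver field, and ChromeDriver choice are assumptions, and ClassCleanupBehavior requires a recent MSTest V2 package:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class LoginTests
{
    private static IWebDriver driver;

    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        driver = new ChromeDriver();   // the browser choice is just an example
    }

    [ClassCleanup(ClassCleanupBehavior.EndOfClass)]
    public static void ClassCleanup()
    {
        // Now runs as soon as the last test in this class finishes,
        // instead of at the end of the whole test run.
        driver.Quit();
    }
}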
You can use [TestInitialize] as the setup and [TestCleanup] as the teardown for each test case.
In your code, [ClassCleanup] runs after the whole class (i.e. after all test cases).
So full code:
Setup:
[TestInitialize()]
public void InitializeTest() {
    // your driver initialization code
}
and, Teardown:
[TestCleanup]
public void Cleanup() {
driver.Quit();
}
Here is what I want to achieve with xUnit:
Run initialization code.
Run tests in parallel.
Perform teardown.
I have tried the [CollectionDefinition]/[Collection]/ICollectionFixture<T>
approach described here, but it disables parallel execution, which is critical for me.
Is there any way to run tests in parallel and still be able to write global setup/tear-down code in xUnit?
If it is not possible with xUnit, does NUnit or MSTest support this scenario?
NUnit supports this scenario. For global setup, create a class in one of your root namespaces and add the [SetUpFixture] attribute to it. Then add a [OneTimeSetUp] method to that class. This method will be run once for all tests in that namespace and in child namespaces. This also allows you to have additional namespace-specific one-time setups.
[SetUpFixture]
public class MySetUpClass
{
[OneTimeSetUp]
public void RunBeforeAnyTests()
{
// ...
}
[OneTimeTearDown]
public void RunAfterAnyTests()
{
// ...
}
}
Then, to run your tests in parallel, add the [Parallelizable] attribute at the assembly level with ParallelScope.All. If you have tests that should not be run in parallel with others, you can use the [NonParallelizable] attribute at lower levels.
[assembly: Parallelizable(ParallelScope.All)]
Running test methods in parallel in NUnit is supported in NUnit 3.7 and later. Prior to that, it only supported running test classes in parallel. I would recommend starting any project with the most recent version of NUnit to take advantage of bug fixes, new features and improvements.
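As a short sketch of mixing the two levels (the fixture and method names here are placeholders):

using NUnit.Framework;

[assembly: Parallelizable(ParallelScope.All)]

[TestFixture]
[NonParallelizable]   // this fixture will not run in parallel with other tests
public class DatabaseTests
{
    [Test]
    public void WritesToSharedDatabase()
    {
        // ...
    }
}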
A somewhat basic solution would be a static class with a static constructor, subscribing to the AppDomain.CurrentDomain.ProcessExit event.
public static class StaticFixture
{
static StaticFixture()
{
AppDomain.CurrentDomain.ProcessExit += (o, e) => Dispose();
// Initialization code here
}
private static void Dispose()
{
// Teardown code here
}
}
There's no guarantee when the static constructor gets called though, other than at or before first use.
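One way to make "first use" deterministic (a sketch; the EnsureInitialized method and TestBase class are assumptions, not part of the original answer) is to expose a no-op member and touch it from a common base class:

public static class StaticFixture
{
    // ... static constructor and Dispose as above ...

    // Hypothetical no-op member: calling it forces the static constructor
    // to run before the calling test executes.
    public static void EnsureInitialized() { }
}

public abstract class TestBase
{
    protected TestBase()
    {
        StaticFixture.EnsureInitialized();
    }
}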
I was wondering if it was possible to have a start-up script before running any load tests? For example, perhaps to seed some data or clear anything down prior to the tests executing.
In my instance I have a mixed bag of designer and coded tests. To put it simply, I have:
Two coded tests
A designer created web test which points to these coded tests
A load test which runs the designer
I have tried adding a class and decorating it with the attributes [TestInitialize()] and [ClassInitialize()], but this code doesn't seem to get run.
Some basic code to show this in practice (see below). Is there a way of doing this whereby I can have something run only once before the test run?
[TestClass]
public class Setup : WebTest
{
[TestInitialize()]
public static void Hello()
{
// Run some code
}
public override IEnumerator<WebTestRequest> GetRequestEnumerator()
{
return null;
}
}
I should probably also mention that I have added these attributes on my coded tests and they get ignored. I have come across a workaround, which is to create a Plugin.
EDIT
Having done a little more browsing around I found this article on SO which shows how to implement a LoadTestPlugin.
Visual Studio provides a way of running a script before and after a test run. These scripts are intended for deploying data for a test and cleaning up after a test. They are specified on the "Setup and cleanup" page of the ".testsettings" file.
A load test plugin can contain code that runs before and after any test cases are executed, as well as at various stages during test execution. The plugin works by subscribing to events that are raised at various points during the execution of a load test; user code is called when these events occur. The LoadTestStarting event is raised before any test cases run. See here for more info.
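As a rough sketch, assuming the standard Microsoft.VisualStudio.TestTools.LoadTesting types (the plugin class name and the seeding/cleanup method bodies are placeholders):

using System;
using Microsoft.VisualStudio.TestTools.LoadTesting;

public class SetupTeardownPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        // Runs once, before any test cases execute.
        loadTest.LoadTestStarting += (sender, e) => SeedData();

        // Runs once, after the load test has finished.
        loadTest.LoadTestFinished += (sender, e) => CleanUp();
    }

    private static void SeedData() { /* seed or clear down data here */ }
    private static void CleanUp() { /* clean up here */ }
}

The plugin then needs to be attached to the load test (via the Load Test Editor) so that it is invoked during the run.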
If you are willing to use NUnit, you have [SetUp]/[TearDown] for a per-test scope and [TestFixtureSetUp]/[TestFixtureTearDown] to do something similar for a class ([TestFixture]).
Maybe a bit of a hack, but you can place your code inside the static constructor of your test class as it will automatically run exactly once before the first instance is created or any static members are referenced:
[TestClass]
public class Setup : WebTest
{
static Setup()
{
// prepare data for test
}
public override IEnumerator<WebTestRequest> GetRequestEnumerator()
{
return null;
}
}
GOAL:
I want to use the TestContext.TestName property to extract the name of the test being run so that my [TestCleanup] function can log the outcome to our bespoke results repository automatically whenever a test completes.
PROBLEM:
Even in my basic 'sanity check' test project that contains 5 tests that are similar to the structure below:
[TestMethod]
public void TestMethodX()
{
Console.WriteLine(String.Format("In Test '{0}'",_ctx.TestName));
Assert.IsTrue(true);
}
With a Class 'initializer' like below which sets _ctx for me:
[ClassInitialize]
public static void ClassInit(TestContext Context)
{
_ctx = Context;
Console.WriteLine("In ClassInit()");
}
[[NOTE: the Console.WriteLines are purely there for me to hover the mouse over and inspect value/properties, etc.]]
The _ctx.TestName NEVER changes from the name of the first test in the run of tests, i.e. if I run all five tests ('TestMethod1', 'TestMethod2', 'TestMethod3', etc.) they all log 'TestMethod1' as their test name in my results repository.
Running the tests individually works fine, but that is of no use to me as I need to be able to run tens/hundreds/thousands of tests against my application and have the TestContext handle the test name for me.
I have tried this several times now, searched the internet loads, and haven't found anyone else with this problem, so I'm either unique with this problem, have poor 'Google-Fu' skills, or am doing something REALLY stupid. Hopefully this makes sense and someone has the answer.
Thanks in advance,
Andy
This is happening because [ClassInitialize] is executed only once per test class and you initialize _ctx there, so it keeps the context captured for the first test. Use [TestInitialize] instead, which is executed before each test method, and declare a public TestContext property, which MSTest sets automatically for each test:
[TestClass]
public class TestClass
{
public TestContext TestContext { get; set; }
[TestInitialize]
public void Initialize()
{
// Runs once before each test method and logs the method's name
Console.WriteLine(TestContext.TestName);
}
[TestMethod]
public void TestMethod1()
{
// Logs the method name inside the method
Console.WriteLine(String.Format("In Test '{0}'", TestContext.TestName));
}
// ... your remaining test methods here
}
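Since the original goal was to log each test's result from [TestCleanup], here is a sketch of what could be added to the same class; LogToRepository is a placeholder for the bespoke results repository call:

[TestCleanup]
public void Cleanup()
{
    // TestContext.TestName and TestContext.CurrentTestOutcome refer to the
    // test that has just finished running.
    LogToRepository(TestContext.TestName, TestContext.CurrentTestOutcome.ToString());
}

private void LogToRepository(string testName, string outcome)
{
    // placeholder: write the result to the bespoke results repository
}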
MSTest.exe can be configured to output a .trx (XML) file with the complete results of your tests: names, passed or failed, and the output from each of those tests. There is also a tool to convert the TRX file to HTML: http://trxtohtml.codeplex.com/
Hope this helps
I am attempting to unit test a WCF host management engine that I have written. The engine basically creates ServiceHost instances on the fly based on configuration. This allows us to dynamically reconfigure which services are available without having to bring all of them down and restart them whenever a new service is added or an old one is removed.
I have run into a difficulty in unit testing this host management engine, however, due to the way ServiceHost works. If a ServiceHost has already been created, opened, and not yet closed for a particular endpoint, another ServiceHost for the same endpoint cannot be created, resulting in an exception. Because modern unit testing platforms parallelize their test execution, I have no effective way to unit test this piece of code.
I have used xUnit.NET, hoping that because of its extensibility, I could find a way to force it to run the tests serially. However, I have not had any luck. I am hoping that someone here on SO has encountered a similar issue and knows how to get unit tests to run serially.
NOTE: ServiceHost is a WCF class, written by Microsoft. I don't have the ability to change its behavior. Hosting each service endpoint only once is also the proper behavior; however, it is not particularly conducive to unit testing.
By default, each test class is a unique test collection and the tests within it run in sequence, so if you put all of your test classes into the same collection they will run sequentially.
In xUnit you can make following changes to achieve this:
Following will run in parallel:
namespace IntegrationTests
{
public class Class1
{
[Fact]
public void Test1()
{
Console.WriteLine("Test1 called");
}
[Fact]
public void Test2()
{
Console.WriteLine("Test2 called");
}
}
public class Class2
{
[Fact]
public void Test3()
{
Console.WriteLine("Test3 called");
}
[Fact]
public void Test4()
{
Console.WriteLine("Test4 called");
}
}
}
To make it sequential, you just need to put both test classes under the same collection:
namespace IntegrationTests
{
[Collection("Sequential")]
public class Class1
{
[Fact]
public void Test1()
{
Console.WriteLine("Test1 called");
}
[Fact]
public void Test2()
{
Console.WriteLine("Test2 called");
}
}
[Collection("Sequential")]
public class Class2
{
[Fact]
public void Test3()
{
Console.WriteLine("Test3 called");
}
[Fact]
public void Test4()
{
Console.WriteLine("Test4 called");
}
}
}
For more info you can refer to this link
Important: This answer applies to .NET Framework. For dotnet core, see Dimitry's answer regarding xunit.runner.json.
All good unit tests should be 100% isolated. Using shared state (e.g. depending on a static property that is modified by each test) is regarded as bad practice.
Having said that, your question about running xUnit tests in sequence does have an answer! I encountered exactly the same issue because my system uses a static service locator (which is less than ideal).
By default xUnit 2.x runs all tests in parallel. This can be modified per-assembly by defining the CollectionBehavior in your AssemblyInfo.cs in your test project.
For per-assembly separation use:
using Xunit;
[assembly: CollectionBehavior(CollectionBehavior.CollectionPerAssembly)]
or for no parallelization at all use:
[assembly: CollectionBehavior(DisableTestParallelization = true)]
The latter is probably the one you want. More information about parallelisation and configuration can be found on the xUnit documentation.
For .NET Core projects, create xunit.runner.json with:
{
"parallelizeAssembly": false,
"parallelizeTestCollections": false
}
Also, your csproj should contain
<ItemGroup>
<None Update="xunit.runner.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
</ItemGroup>
For old .Net Core projects, your project.json should contain
"buildOptions": {
"copyToOutput": {
"include": [ "xunit.runner.json" ]
}
}
For .NET Core projects, you can configure xUnit with an xunit.runner.json file, as documented at https://xunit.net/docs/configuration-files.
The setting you need to change to stop parallel test execution is parallelizeTestCollections, which defaults to true:
Set this to true if the assembly is willing to run tests inside this assembly in parallel against each other. ... Set this to false to disable all parallelization within this test assembly.
JSON schema type: boolean
Default value: true
So a minimal xunit.runner.json for this purpose looks like
{
"parallelizeTestCollections": false
}
As noted in the docs, remember to include this file in your build, either by:
Setting Copy to Output Directory to Copy if newer in the file's Properties in Visual Studio, or
Adding
<Content Include=".\xunit.runner.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</Content>
to your .csproj file, or
Adding
"buildOptions": {
"copyToOutput": {
"include": [ "xunit.runner.json" ]
}
}
to your project.json file
depending upon your project type.
Finally, in addition to the above, if you're using Visual Studio then make sure that you haven't accidentally clicked the Run Tests In Parallel button, which will cause tests to run in parallel even if you've turned off parallelisation in xunit.runner.json. Microsoft's UI designers have cunningly made this button unlabelled, hard to notice, and about a centimetre away from the "Run All" button in Test Explorer, just to maximise the chance that you'll hit it by mistake and have no idea why your tests are suddenly failing.
This is an old question, but I wanted to write down a solution for people who, like me, are searching for one now :)
Note: I use this method in .NET Core WebUI integration tests with xUnit version 2.4.1.
Create an empty class named NonParallelCollectionDefinitionClass, then apply the CollectionDefinition attribute to it as below. (The important part is the DisableParallelization = true setting.)
using Xunit;
namespace WebUI.IntegrationTests.Common
{
[CollectionDefinition("Non-Parallel Collection", DisableParallelization = true)]
public class NonParallelCollectionDefinitionClass
{
}
}
Then add the Collection attribute to the class that you don't want to run in parallel, as below. (The important part is the name of the collection; it must be the same as the name used in the CollectionDefinition.)
namespace WebUI.IntegrationTests.Controllers.Users
{
[Collection("Non-Parallel Collection")]
public class ChangePassword : IClassFixture<CustomWebApplicationFactory<Startup>>
...
When we do this, the other (parallel) tests run first. After that, the tests that have the Collection("Non-Parallel Collection") attribute run.
You can use a playlist:
Right-click on the test method -> Add to playlist -> New playlist.
You can then specify the execution order; the default is the order in which you add them to the playlist, but you can change the playlist file as you want.
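For reference, a playlist is a small XML file along these lines (the test names are placeholders), so it can be edited outside Visual Studio as well:

<Playlist Version="1.0">
  <Add Test="MyProject.Tests.MyTestClass.Test1" />
  <Add Test="MyProject.Tests.MyTestClass.Test2" />
</Playlist>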
I don't know the details, but it sounds like you might be trying to do integration testing rather than unit testing. If you could isolate the dependency on ServiceHost, that would likely make your testing easier (and faster). So (for instance) you might test the following independently:
Configuration reading class
ServiceHost factory (possibly as an integration test)
Engine class that takes an IServiceHostFactory and an IConfiguration (sketched after the links below)
Tools that would help include isolation (mocking) frameworks and (optionally) IoC container frameworks. See:
http://www.mockobjects.com/
http://www.hanselman.com/blog/ListOfNETDependencyInjectionContainersIOC.aspx
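A rough sketch of those seams; every type name here is an assumption suggested by the list above, not an existing API:

using System.Collections.Generic;
using System.ServiceModel;

// Hypothetical abstraction over ServiceHost creation.
public interface IServiceHostFactory
{
    ICommunicationObject CreateHost(string endpointAddress);
}

// Hypothetical configuration abstraction.
public interface IConfiguration
{
    IEnumerable<string> GetEndpointAddresses();
}

// The engine depends only on the abstractions, so its logic can be unit
// tested with mocks, without opening a real ServiceHost and without the
// duplicate-endpoint exceptions that break parallel test runs.
public class HostManagementEngine
{
    private readonly IServiceHostFactory _factory;
    private readonly IConfiguration _configuration;

    public HostManagementEngine(IServiceHostFactory factory, IConfiguration configuration)
    {
        _factory = factory;
        _configuration = configuration;
    }
}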
Maybe you can use Advanced Unit Testing. It allows you to define the sequence in which you run the test. So you may have to create a new cs file to host those tests.
Here's how you can bend the test methods to work in the sequence you want.
[Test]
[Sequence(16)]
[Requires("POConstructor")]
[Requires("WorkOrderConstructor")]
public void ClosePO()
{
po.Close();
// one charge slip should be added to both work orders
Assertion.Assert(wo1.ChargeSlipCount==1,
"First work order: ChargeSlipCount not 1.");
Assertion.Assert(wo2.ChargeSlipCount==1,
"Second work order: ChargeSlipCount not 1.");
...
}
Do let me know whether it works.
None of the suggested answers so far worked for me. I have a .NET Core app with xUnit 2.4.1.
I achieved the desired behavior with a workaround by putting a lock in each unit test instead. In my case, I didn't care about running order, just that tests were sequential.
public class TestClass
{
    // Lock on a shared static object rather than on "this": xUnit creates a
    // new instance of the test class for every test, so locking on "this"
    // would not actually serialize anything.
    private static readonly object TestLock = new object();

    [Fact]
    public void Test1()
    {
        lock (TestLock)
        {
            //Test Code
        }
    }

    [Fact]
    public void Test2()
    {
        lock (TestLock)
        {
            //Test Code
        }
    }
}
For me, in a .NET Core console application, when I wanted to run test methods (not classes) sequentially, the only solution that worked was the one described in this blog:
xUnit: Control the Test Execution Order
I've added the attribute [Collection("Sequential")] in a base class:
namespace IntegrationTests
{
[Collection("Sequential")]
public class SequentialTest : IDisposable
...
public class TestClass1 : SequentialTest
{
...
}
public class TestClass2 : SequentialTest
{
...
}
}