Run a single unit test from a fixture of many in MbUnit - c#

Is there any way to add an attribute to a [Test] method in a [TestFixture] so that only that method runs? This would be similar to the way the [CurrentFixture] attribute can be used to run only a single fixture. I ask because sometimes when I test the model I want to profile the SQL being executed, and I only want to focus on a single test. Currently I have to comment out all the other tests in the fixture.
Updated:
The code I'm using to initiate the test follows, I'm really looking for a solution I can weave into this code.
public static void Run(bool currentFixturesOnly) {
    using(AutoRunner auto = new AutoRunner()) {
        if(currentFixturesOnly) { // for processing [CurrentFixture]s only
            auto.Domain.Filter = FixtureFilters.Current;
        }
        auto.Verbose = true;
        auto.Run();
        auto.ReportToHtml();
    }
}

If you use a test runner like TestDriven.Net, ReSharper or Icarus, you can select the specific test and run just that one. If you're using the command-line tools, consider using a filter, e.g.
Gallio.Echo MyTestAssembly.dll /f:Name:TheNameOfTheParticularTestIWantToRun

Related

Load testing Visual Studio, start up script / setup

I was wondering if it is possible to have a start-up script run before any load tests? For example, to seed some data or clear things down before the tests execute.
In my instance I have a mixed bag of designer and coded tests. To put it simply, I have:
Two coded tests
A designer created web test which points to these coded tests
A load test which runs the designer
I have tried adding a class decorated with the [TestInitialize()] and [ClassInitialize()] attributes, but this code doesn't seem to get run.
Some basic code to show this in practice (see below). Is there a way of doing this whereby I can have something run only once before the test run?
[TestClass]
public class Setup : WebTest
{
    [TestInitialize()]
    public static void Hello()
    {
        // Run some code
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        return null;
    }
}
I should probably also mention that I have added these attributes to my coded tests and they get ignored there too. I have come across a workaround, which is to create a plugin.
EDIT
Having done a little more browsing around I found this article on SO which shows how to implement a LoadTestPlugin.
Visual Studio provides a way of running a script before and after a test run. These scripts are intended for deploying data for a test and cleaning up afterwards, and are specified on the "Setup and cleanup" page of the ".testsettings" file.
A load test plugin can contain code that runs before and after any test cases execute, as well as at various stages during test execution. The interface works by raising events at various points during the execution of a load test, and user code can be hooked up to those events. The LoadTestStarting event is raised before any test cases run. See here for more info.
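As a rough sketch, a minimal plugin along these lines (the class name and event-handler body are illustrative) might look like:

using System;
using Microsoft.VisualStudio.TestTools.LoadTesting;

public class SeedDataPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        // LoadTestStarting fires once, before any test cases execute
        loadTest.LoadTestStarting += (sender, e) =>
        {
            // seed or clear down data here
        };
    }
}

The plugin assembly is referenced from the test project and the plugin is then attached via the load test's properties in the designer.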
If you are willing to use NUnit, you have SetUp/TearDown for per-test scope and TestFixtureSetUp/TestFixtureTearDown to do something similar for a whole class (TestFixture).
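For example, a minimal NUnit sketch (the fixture and method names are illustrative):

using NUnit.Framework;

[TestFixture]
public class MyTests
{
    [TestFixtureSetUp] // runs once, before any test in this fixture
    public void FixtureSetUp() { /* seed data */ }

    [TestFixtureTearDown] // runs once, after all tests in this fixture
    public void FixtureTearDown() { /* clean up */ }

    [SetUp] // runs before each test
    public void PerTestSetUp() { }
}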
Maybe a bit of a hack, but you can place your code inside the static constructor of your test class, as it will automatically run exactly once before the first instance is created or any static members are referenced:
[TestClass]
public class Setup : WebTest
{
    static Setup()
    {
        // prepare data for test
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        return null;
    }
}

How do you disable PostSharp when running unit tests?

I want my NUnit tests not to apply any of my PostSharp aspects so I can test my methods in isolation. Can this be done somehow in the test fixture setup, or can it only be done at the project level?
You could set the 'SkipPostSharp' flag on the test build, so that the aspects are not compiled into your binaries in the first place.
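For example, a sketch of how that might look in the test project's .csproj (assuming your PostSharp version honours the SkipPostSharp MSBuild property):

<!-- disables PostSharp weaving for this configuration -->
<PropertyGroup Condition="'$(Configuration)' == 'Debug'">
  <SkipPostSharp>True</SkipPostSharp>
</PropertyGroup>

It can also be passed on the command line, e.g. msbuild /p:SkipPostSharp=True.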
You can have a static flag on your aspect to toggle it on/off and check the status of the flag in your aspect implementation.
Then in your unit test setup just turn the static flag off.
e.g.
public static bool On = true;
...
public override void OnInvoke(MethodInterceptionArgs args)
{
    if (!CacheAttribute.On)
    {
        // aspect disabled: just call through to the original method
        args.ReturnValue = args.Invoke(args.Arguments);
        return;
    }
    // normal aspect behaviour goes here
}
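Then in a test fixture setup (an NUnit sketch, assuming the aspect is named CacheAttribute as above) the flag can be switched off:

[TestFixtureSetUp]
public void DisableAspects()
{
    CacheAttribute.On = false;
}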
If you are using Typemock in your unit tests you can use something like
MyAspect myAspectMock = Isolate.Fake.Instance<MyAspect>(Members.MustSpecifyReturnValues);
Isolate.Swap.AllInstances<MyAspect>().With(myAspectMock);
This allows you to control which tests the aspects are used on and which they are not, letting you test the method both on its own and with the advice applied.
Presumably there would be a similar mechanism with other mocking frameworks.

Why do my tests fail when run together, but pass individually?

When I write a test in Visual Studio, I check that it works by saving, building and then running it in NUnit (right-click on the test, then Run).
The test works, yay...
so I move on...
Now I have written another test, and it works as I have saved and tested it like above. But the tests don't work when they are run together.
Here are my two tests, which pass when run individually but fail when run together:
using System;
using NUnit.Framework;
using OpenQA.Selenium.Support.UI;
using OpenQA.Selenium;

namespace Fixtures.Users.Page1
{
    [TestFixture]
    public class AdminNavigateToPage1Tests : SeleniumTestBase
    {
        [Test]
        public void AdminNavigateToPage1()
        {
            NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
            NavigateTo<Page1>();
            var headerelement = Driver.FindElement(By.ClassName("header"));
            Assert.That(headerelement.Text, Is.EqualTo("Page Title"));
            Assert.That(Driver.Url, Is.EqualTo("http://localhost/Page Title"));
        }

        [Test]
        public void AdminNavigateToPage1ViaMenu()
        {
            NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
            Driver.FindElement(By.Id("menuitem1")).Click();
            Driver.FindElement(By.Id("submenuitem4")).Click();
            var headerelement = Driver.FindElement(By.ClassName("header"));
            Assert.That(headerelement.Text, Is.EqualTo("Page Title"));
            Assert.That(Driver.Url, Is.EqualTo("http://localhost/Page Title"));
        }
    }
}
When the second test fails because they have been run together, NUnit presents this:
Sse.Bec.Web.Tests.Fixtures.ManageSitesAndUsers.ChangeOfPremises.AdminNavigateToChangeOfPremises.AdminNavigateToPageChangeOfPremisesViaMenu:
OpenQA.Selenium.NoSuchElementException : The element could not be found
And this line is highlighted:
var headerelement = Driver.FindElement(By.ClassName("header"));
Does anyone know why my code fails when run together, but passes when run alone?
Any answer would be greatly appreciated!
Such a situation normally occurs when the unit tests are using shared resources/data in some way.
It can also happen if your system under test has static fields/properties which are used to compute the output you are asserting on.
It can likewise happen if the system under test has shared (static) dependencies.
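As a minimal illustration of how shared static state makes tests order-dependent (the names here are invented for the example):

using NUnit.Framework;

public static class Counter
{
    public static int Value; // shared static state, survives across tests
}

[TestFixture]
public class CounterTests
{
    [Test]
    public void FirstIncrement()
    {
        Counter.Value++;
        Assert.That(Counter.Value, Is.EqualTo(1)); // passes when run alone
    }

    [Test]
    public void SecondIncrement()
    {
        Counter.Value++;
        Assert.That(Counter.Value, Is.EqualTo(1)); // fails if FirstIncrement already ran
    }
}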
Two things you can try:
Put a breakpoint between the following two lines and see which page you are on when the second line is hit.
Introduce a slight delay between these two lines via Thread.Sleep.

Driver.FindElement(By.Id("submenuitem4")).Click();
var headerelement = Driver.FindElement(By.ClassName("header"));
If none of the answers above worked for you, I solved this issue by adding Thread.Sleep(1) before the assertion in the failing test...
It looks like test synchronization was missing somewhere. Please note that my tests were not order-dependent, and I had no static members or external dependencies.
Look into the TestFixtureSetUp, SetUp, TestFixtureTearDown and TearDown attributes.
They allow you to set up the test environment once, instead of once per test.
Without knowing how Selenium works, my bet is on Driver, which seems to be a static class, so the two tests are sharing state. One example of shared state is Driver.Url. Because the tests are run in parallel, there is a race condition on the state of this object.
That said, I do not have a solution for you :)
Are you sure that after running one of the tests the method
NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
is taking you back to where you should be? It would seem that the failure is due to an improper navigation handler (supposing that the header element is present and found in both tests).
I think you need to ensure that you can log on for the second test; it might fail because you are already logged on.
-> Put the logon in a SetUp method or (because it seems you are using the same user for both tests) even up in the fixture setup.
-> The logoff (if needed) can go in the TearDown method.
[SetUp]
public void LaunchTest()
{
    NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
}

[TearDown]
public void StopTest()
{
    // logoff
}

[Test]
public void Test1()
{...}

[Test]
public void Test2()
{...}
If there are delays in the DOM, then instead of a Thread.Sleep I recommend using WebDriverWait in combination with expected conditions. A sleep might work 80% of the time and not the rest; the wait polls until the condition is met or a timeout is reached, which is more reliable and also more readable. Here is an example of how I usually approach this:
var webDriverWait = new WebDriverWait(webDriver, ..);
webDriverWait.Until(d => d.FindElement(By.CssSelector("..")).Displayed);
I realize this is an extremely old question, but I just ran into it today and none of the answers addressed my particular case.
I am using Selenium with NUnit for front-end automation tests.
In my case I was using [OneTimeSetUp] and [OneTimeTearDown] in my startup, trying to be more efficient.
This, however, has the problem of using shared resources, in my case the driver itself and the helper I use to validate/get elements.
Maybe a strange edge case, but it took me a few hours to figure it out.

Make MSTest respect [Conditional()] attribute?

I'm using VS2010 and I have the following method:
[Conditional("DEBUG")]
public void VerboseLogging() { }

public void DoSomething() {
    VerboseLogging();
    Foo();
    Bar();
}
Then I have a unit test for the DoSomething method which checks that it emits proper logging.
[Conditional("DEBUG"), TestMethod()]
public void EnsureVerboseLog() {
    DoSomething();
    VerifyVerboseLoggingCalled(); // <-- fails in release builds since VerboseLogging() calls get eliminated
}
It seems that MSTest only sees TestMethod and executes the test (generating a failed test) even though I've marked it with Conditional("DEBUG") and compiled in release mode.
So, is there a way to exclude certain tests depending on a compilation constant, other than #if?
The ConditionalAttribute does not affect whether or not a method is compiled into an application; it controls whether or not calls to the method are compiled into the application.
There is no call to EnsureVerboseLog in this sample: MSTest just sees a method in the assembly with the TestMethod attribute and correctly executes it. In order to prevent MSTest from running the method you will need to do one of the following:
Not compile it into your application (possible through #if's)
Not annotate it with the TestMethod attribute
So, is there a way to exclude certain tests depending on compilation constant other than #if?
I'd use the #if - it's readable.
[TestMethod]
#if !DEBUG
[Ignore]
#endif
public void AnyTest()
{
    // Will run for developers and not on the test server!
}
HTH..
If you want to be able to conditionally run different tests, you will probably want to use the [TestCategory] attribute:
[TestMethod]
[TestCategory("SomeCategory")]
public void SomethingWorks()
{
    ...
}
And then use the filter parameter to include or exclude the categories:
dotnet test --filter "TestCategory=SomeCategory"
dotnet test --filter "TestCategory!=SomeCategory"
A workaround is to set the Priority attribute on your method to -1, then run MSTest with "minpriority:0" as an argument.
[TestMethod()]
[Priority(-1)]
public void Compute_Foo()
{
    // This method will not be executed
}
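For example (a sketch, assuming your MSTest version supports the /minpriority switch; the container name is a placeholder):

mstest /testcontainer:MyTests.dll /minpriority:0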

Execute unit tests serially (rather than in parallel)

I am attempting to unit test a WCF host management engine that I have written. The engine basically creates ServiceHost instances on the fly based on configuration. This allows us to dynamically reconfigure which services are available without having to bring all of them down and restart them whenever a new service is added or an old one is removed.
I have run into a difficulty in unit testing this host management engine, however, due to the way ServiceHost works. If a ServiceHost has already been created, opened, and not yet closed for a particular endpoint, another ServiceHost for the same endpoint cannot be created; attempting to do so results in an exception. Because modern unit testing platforms parallelize their test execution, I have no effective way to unit test this piece of code.
I have used xUnit.net, hoping that because of its extensibility I could find a way to force it to run the tests serially. However, I have not had any luck. I am hoping that someone here on SO has encountered a similar issue and knows how to get unit tests to run serially.
NOTE: ServiceHost is a WCF class, written by Microsoft; I don't have the ability to change its behavior. Hosting each service endpoint only once is also the proper behavior... however, it is not particularly conducive to unit testing.
Each test class is a unique test collection, and the tests within a collection run in sequence, so if you put all of your tests in the same collection they will run sequentially.
In xUnit you can make the following changes to achieve this.
The following will run in parallel:
namespace IntegrationTests
{
    public class Class1
    {
        [Fact]
        public void Test1()
        {
            Console.WriteLine("Test1 called");
        }

        [Fact]
        public void Test2()
        {
            Console.WriteLine("Test2 called");
        }
    }

    public class Class2
    {
        [Fact]
        public void Test3()
        {
            Console.WriteLine("Test3 called");
        }

        [Fact]
        public void Test4()
        {
            Console.WriteLine("Test4 called");
        }
    }
}
To make them sequential you just need to put both test classes in the same collection:
namespace IntegrationTests
{
    [Collection("Sequential")]
    public class Class1
    {
        [Fact]
        public void Test1()
        {
            Console.WriteLine("Test1 called");
        }

        [Fact]
        public void Test2()
        {
            Console.WriteLine("Test2 called");
        }
    }

    [Collection("Sequential")]
    public class Class2
    {
        [Fact]
        public void Test3()
        {
            Console.WriteLine("Test3 called");
        }

        [Fact]
        public void Test4()
        {
            Console.WriteLine("Test4 called");
        }
    }
}
For more info you can refer to the xUnit documentation on running tests in parallel.
Important: This answer applies to .NET Framework. For .NET Core, see Dimitry's answer regarding xunit.runner.json.
All good unit tests should be 100% isolated. Using shared state (e.g. depending on a static property that is modified by each test) is regarded as bad practice.
Having said that, your question about running xUnit tests in sequence does have an answer! I encountered exactly the same issue because my system uses a static service locator (which is less than ideal).
By default xUnit 2.x runs all tests in parallel. This can be modified per assembly by defining the CollectionBehavior attribute in your test project's AssemblyInfo.cs.
To treat the whole assembly as a single test collection (so its tests run sequentially) use:
using Xunit;
[assembly: CollectionBehavior(CollectionBehavior.CollectionPerAssembly)]
or for no parallelization at all use:
[assembly: CollectionBehavior(DisableTestParallelization = true)]
The latter is probably the one you want. More information about parallelisation and configuration can be found in the xUnit documentation.
For .NET Core projects, create xunit.runner.json with:
{
  "parallelizeAssembly": false,
  "parallelizeTestCollections": false
}
Also, your csproj should contain
<ItemGroup>
  <None Update="xunit.runner.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
For older .NET Core projects, your project.json should contain

"buildOptions": {
  "copyToOutput": {
    "include": [ "xunit.runner.json" ]
  }
}
For .NET Core projects, you can configure xUnit with an xunit.runner.json file, as documented at https://xunit.net/docs/configuration-files.
The setting you need to change to stop parallel test execution is parallelizeTestCollections, which defaults to true:
Set this to true if the assembly is willing to run tests inside this assembly in parallel against each other. ... Set this to false to disable all parallelization within this test assembly.
JSON schema type: boolean
Default value: true
So a minimal xunit.runner.json for this purpose looks like
{
  "parallelizeTestCollections": false
}
As noted in the docs, remember to include this file in your build, either by:
Setting Copy to Output Directory to Copy if newer in the file's Properties in Visual Studio, or
Adding
<Content Include=".\xunit.runner.json">
  <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</Content>
to your .csproj file, or
Adding
"buildOptions": {
"copyToOutput": {
"include": [ "xunit.runner.json" ]
}
}
to your project.json file
depending upon your project type.
Finally, in addition to the above, if you're using Visual Studio then make sure that you haven't accidentally clicked the Run Tests In Parallel button, which will cause tests to run in parallel even if you've turned off parallelisation in xunit.runner.json. Microsoft's UI designers have cunningly made this button unlabelled, hard to notice, and about a centimetre away from the "Run All" button in Test Explorer, just to maximise the chance that you'll hit it by mistake and have no idea why your tests are suddenly failing.
This is an old question, but I wanted to write down a solution for people searching for this now, like me :)
Note: I use this method in .NET Core Web UI integration tests with xUnit version 2.4.1.
Create an empty class named NonParallelCollectionDefinitionClass and give it the CollectionDefinition attribute as below. (The important part is the DisableParallelization = true setting.)
using Xunit;

namespace WebUI.IntegrationTests.Common
{
    [CollectionDefinition("Non-Parallel Collection", DisableParallelization = true)]
    public class NonParallelCollectionDefinitionClass
    {
    }
}
Then add the Collection attribute to any class which you don't want to run in parallel, as below. (The important part is the name of the collection: it must be the same as the name used in the CollectionDefinition.)
namespace WebUI.IntegrationTests.Controllers.Users
{
    [Collection("Non-Parallel Collection")]
    public class ChangePassword : IClassFixture<CustomWebApplicationFactory<Startup>>
    ...
When we do this, the parallel tests run first. After that, the tests marked with the Collection("Non-Parallel Collection") attribute run.
You can use a playlist.
Right-click on the test method -> Add to playlist -> New playlist.
You can then specify the execution order. The default is the order in which you add tests to the playlist, but you can edit the playlist file as you want.
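A playlist is just an XML file; a minimal sketch (the test names below are placeholders) looks like this:

<Playlist Version="1.0">
  <Add Test="MyNamespace.MyTestClass.Test1" />
  <Add Test="MyNamespace.MyTestClass.Test2" />
</Playlist>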
I don't know the details, but it sounds like you might be trying to do integration testing rather than unit testing. If you could isolate the dependency on ServiceHost, that would likely make your testing easier (and faster). So (for instance) you might test the following independently:
Configuration reading class
ServiceHost factory (possibly as an integration test)
Engine class that takes an IServiceHostFactory and an IConfiguration
Tools that would help include isolation (mocking) frameworks and (optionally) IoC container frameworks. See:
http://www.mockobjects.com/
http://www.hanselman.com/blog/ListOfNETDependencyInjectionContainersIOC.aspx
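As a rough sketch of those seams (the interface and class names here are invented for illustration):

using System.ServiceModel;

// The engine depends on an abstraction instead of newing up ServiceHost
// directly, so unit tests can substitute a fake factory.
public interface IServiceHostFactory
{
    ICommunicationObject Create(string endpointAddress);
}

public class HostingEngine
{
    private readonly IServiceHostFactory _factory;

    public HostingEngine(IServiceHostFactory factory)
    {
        _factory = factory;
    }

    public ICommunicationObject StartHost(string endpointAddress)
    {
        // ServiceHost implements ICommunicationObject, so the real factory
        // can return one, while tests can return a stub
        var host = _factory.Create(endpointAddress);
        host.Open();
        return host;
    }
}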
Maybe you can use Advanced Unit Testing. It allows you to define the sequence in which the tests run, so you may have to create a new .cs file to host those tests.
Here's how you can bend the test methods to run in the sequence you want:
[Test]
[Sequence(16)]
[Requires("POConstructor")]
[Requires("WorkOrderConstructor")]
public void ClosePO()
{
    po.Close();

    // one charge slip should be added to both work orders
    Assertion.Assert(wo1.ChargeSlipCount==1,
        "First work order: ChargeSlipCount not 1.");
    Assertion.Assert(wo2.ChargeSlipCount==1,
        "Second work order: ChargeSlipCount not 1.");
    ...
}
Do let me know whether it works.
None of the suggested answers so far worked for me. I have a .NET Core app with xUnit 2.4.1.
I achieved the desired behaviour with a workaround, by taking a lock in each unit test instead. In my case I didn't care about running order, just that the tests ran sequentially.
public class TestClass
{
    // xUnit creates a new instance of the class for every test, so locking
    // on "this" would lock a different object each time; a static lock
    // object is shared across all instances.
    private static readonly object Gate = new object();

    [Fact]
    void Test1()
    {
        lock (Gate)
        {
            //Test Code
        }
    }

    [Fact]
    void Test2()
    {
        lock (Gate)
        {
            //Test Code
        }
    }
}
For me, in a .NET Core console application, when I wanted to run test methods (not classes) synchronously, the only solution that worked was the one described in this blog post:
xUnit: Control the Test Execution Order
I added the attribute [Collection("Sequential")] in a base class:
namespace IntegrationTests
{
    [Collection("Sequential")]
    public class SequentialTest : IDisposable
    ...

    public class TestClass1 : SequentialTest
    {
        ...
    }

    public class TestClass2 : SequentialTest
    {
        ...
    }
}
