How do you disable PostSharp when running unit tests?

I want my nunit tests not to apply any of my PostSharp aspects so I can test my methods in isolation. Can this be done somehow in the Test Fixture Setup, or can it only be done on a per project level?

You could set the 'SkipPostSharp' flag on the test build, so that the aspects are never woven into your test binaries in the first place.
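For example, a minimal sketch of what that could look like in the unit-test project's .csproj, assuming the standard SkipPostSharp MSBuild property and a hypothetical 'UnitTest' build configuration:
<!-- Sketch: in the test project's .csproj; 'UnitTest' is a hypothetical configuration name. -->
<PropertyGroup Condition=" '$(Configuration)' == 'UnitTest' ">
  <!-- Tells the PostSharp build targets not to weave aspects into this build. -->
  <SkipPostSharp>True</SkipPostSharp>
</PropertyGroup>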

You can have a static flag on your aspect to toggle it on/off and check the status of the flag in your aspect implementation.
Then in your unit test setup just turn the static flag off.
e.g.
public static bool On = true;

// ...
public override void OnInvoke(MethodInterceptionArgs args)
{
    if (!CacheAttribute.On)
    {
        // Aspect disabled: just invoke the intercepted method directly.
        args.ReturnValue = args.Invoke(args.Arguments);
        return;
    }

    // Normal aspect behaviour (e.g. caching) goes here.
    args.Proceed();
}
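The test fixture can then flip the flag off before the tests run and restore it afterwards; a sketch using NUnit 3 attribute names and the CacheAttribute.On flag from the example above:
[TestFixture]
public class MyServiceTests
{
    [OneTimeSetUp]
    public void DisableAspect()
    {
        // Methods under test now run without the caching aspect.
        CacheAttribute.On = false;
    }

    [OneTimeTearDown]
    public void RestoreAspect()
    {
        CacheAttribute.On = true;
    }
}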

If you are using Typemock in your unit tests you can use something like
MyAspect myAspectMock = Isolate.Fake.Instance<MyAspect>(Members.MustSpecifyReturnValues);
Isolate.Swap.AllInstances<MyAspect>().With(myAspectMock);
This lets you control which tests use the aspects and which do not, so you can test the method both on its own and with the advice applied.
Presumably there is a similar mechanism in other mocking frameworks.

Related

Skipping Feature - SpecFlow C#

I'm looking to intercept a test using the [BeforeFeature] SpecFlow Hook and ignore the entire feature file.
private static string FeatureName = FeatureContext.Current.FeatureInfo.Title;

[BeforeFeature]
public static void BeforeFeature()
{
    Console.WriteLine("Before feature");
    if (TestFilter.ShouldBeIgnored(FeatureName))
    {
        // Ignore Feature if it matches TestFilter Requirements
    }
}
If you are using SpecFlow + NUnit, you can call
Assert.Ignore("ignore message here");
This will cause the individual tests to be ignored when their feature is run.
However, this may require you to use a BeforeScenario hook instead of a BeforeFeature hook.
Because BeforeScenario still has access to the feature info, this should not be an issue.
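A sketch of such a hook, reusing the TestFilter helper from the question (it assumes SpecFlow with the NUnit runner, so Assert.Ignore is available):
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class IgnoreFilteredFeaturesHooks
{
    [BeforeScenario]
    public static void SkipScenariosOfFilteredFeatures()
    {
        // Ignores every scenario belonging to a feature that matches the filter.
        if (TestFilter.ShouldBeIgnored(FeatureContext.Current.FeatureInfo.Title))
        {
            Assert.Ignore("Feature ignored by TestFilter");
        }
    }
}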
Did you look into the @ignore tag? You can skip features or scenarios with it.

How to perform global setup/teardown in xUnit and run tests in parallel?

Here is what I want to achieve with xUnit:
Run initialization code.
Run tests in parallel.
Perform teardown.
I have tried the [CollectionDefinition]/[Collection]/ICollectionFixture
approach described here, but it disabled parallel execution, which is critical for me.
Is there any way to run tests in parallel and still be able to write global setup/teardown code in xUnit?
If it is not possible with xUnit, do NUnit or MSTest support this scenario?
NUnit supports this scenario. For global setup, create a class in one of your root namespaces and add the [SetUpFixture] attribute to it. Then add a [OneTimeSetUp] method to that class; it will run once before all tests in that namespace and its child namespaces. This also lets you have additional namespace-specific one-time setups.
[SetUpFixture]
public class MySetUpClass
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        // ...
    }

    [OneTimeTearDown]
    public void RunAfterAnyTests()
    {
        // ...
    }
}
Then, to run your tests in parallel, add the [Parallelizable] attribute at the assembly level with ParallelScope.All. If you have tests that should not run in parallel with others, you can use the [NonParallelizable] attribute at lower levels.
[assembly: Parallelizable(ParallelScope.All)]
Running test methods in parallel is supported in NUnit 3.7 and later. Prior to that, NUnit only supported running test classes in parallel. I would recommend starting any project with the most recent version of NUnit to take advantage of bug fixes, new features and improvements.
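For example, a sketch of a fixture that opts out of parallel execution while the rest of the assembly keeps running with ParallelScope.All (the fixture name is just a placeholder):
[TestFixture]
[NonParallelizable]
public class DatabaseMigrationTests
{
    // Tests in this fixture run one at a time, not alongside other tests.
}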
A somewhat basic solution would be a static class with a static constructor that subscribes to the AppDomain.CurrentDomain.ProcessExit event.
public static class StaticFixture
{
    static StaticFixture()
    {
        AppDomain.CurrentDomain.ProcessExit += (o, e) => Dispose();

        // Initialization code here
    }

    private static void Dispose()
    {
        // Teardown code here
    }
}
There's no guarantee when the static constructor gets called though, other than at or before first use.

Load testing Visual Studio, start up script / setup

I was wondering if it is possible to have a start-up script run before any load tests, for example to seed some data or clear things down prior to the tests executing.
In my case I have a mixed bag of designer and coded tests. To put it simply, I have:
Two coded tests
A designer created web test which points to these coded tests
A load test which runs the designer
I have tried adding a class and decorating it with the [TestInitialize()] and [ClassInitialize()] attributes, but this code doesn't seem to get run.
Some basic code to show this in practice is below. Is there a way I can have something run only once before the test run?
[TestClass]
public class Setup : WebTest
{
    [TestInitialize()]
    public static void Hello()
    {
        // Run some code
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        return null;
    }
}
I should probably also mention that I have added these attributes to my coded tests and they get ignored. I have come across a workaround, which is to create a plugin.
EDIT
Having done a little more browsing around, I found this article on SO which shows how to implement a LoadTestPlugin.
Visual Studio provides a way of running a script before and after a test run. These scripts are intended for deploying data for a test and cleaning up afterwards, and are specified on the "Setup and cleanup" page of the ".testsettings" file.
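The relevant fragment of a .testsettings file looks roughly like this (a sketch; the script paths and the settings name are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="LoadTestSettings" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- setupScript runs once before the test run, cleanupScript once after it. -->
  <Scripts setupScript="Scripts\seed-data.cmd" cleanupScript="Scripts\clean-up.cmd" />
</TestSettings>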
A load test plugin can contain code that runs before and after any test cases are executed, and also at various stages during test execution. The interface works by raising events at various points during the execution of a load test, and user code can be hooked up to run when these events occur. The LoadTestStarting event is raised before any test cases run. See here for more info.
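A sketch of such a plugin, assuming the Microsoft.VisualStudio.QualityTools.LoadTestFramework assembly is referenced (the plugin still has to be attached to the load test via its Plug-ins property in the load test editor):
using System;
using Microsoft.VisualStudio.TestTools.LoadTesting;

public class SeedDataLoadTestPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        // Raised once, before any test cases in the load test are executed.
        loadTest.LoadTestStarting += (sender, e) =>
        {
            // Seed or clear down data here.
        };
    }
}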
If you are willing to use NUnit, you have SetUp/TearDown for per-test scope and TestFixtureSetUp/TestFixtureTearDown to do something similar for a whole class (TestFixture).
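A sketch using those NUnit 2.x attribute names (in NUnit 3 the fixture-level pair became OneTimeSetUp/OneTimeTearDown):
[TestFixture]
public class LoadTestPreparation
{
    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        // Runs once, before any test in this fixture.
    }

    [SetUp]
    public void PerTestSetUp()
    {
        // Runs before each individual test.
    }

    [TestFixtureTearDown]
    public void FixtureTearDown()
    {
        // Runs once, after all tests in this fixture have finished.
    }
}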
Maybe a bit of a hack, but you can place your code inside the static constructor of your test class as it will automatically run exactly once before the first instance is created or any static members are referenced:
[TestClass]
public class Setup : WebTest
{
    static Setup()
    {
        // prepare data for test
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        return null;
    }
}

Execute unit tests serially (rather than in parallel)

I am attempting to unit test a WCF host management engine that I have written. The engine basically creates ServiceHost instances on the fly based on configuration. This allows us to dynamically reconfigure which services are available without having to bring all of them down and restart them whenever a new service is added or an old one is removed.
I have run into a difficulty in unit testing this host management engine, however, due to the way ServiceHost works. If a ServiceHost has already been created, opened, and not yet closed for a particular endpoint, another ServiceHost for the same endpoint can not be created, resulting in an exception. Because of the fact that modern unit testing platforms parallelize their test execution, I have no effective way to unit test this piece of code.
I have used xUnit.NET, hoping that because of its extensibility, I could find a way to force it to run the tests serially. However, I have not had any luck. I am hoping that someone here on SO has encountered a similar issue and knows how to get unit tests to run serially.
NOTE: ServiceHost is a WCF class, written by Microsoft. I don't have the ability to change its behavior. Hosting each service endpoint only once is also the proper behavior; however, it is not particularly conducive to unit testing.
By default, each test class is its own test collection, and the tests within it run in sequence, so if you put all of your tests in the same collection they will run sequentially.
In xUnit you can make the following changes to achieve this.
The following will run in parallel:
namespace IntegrationTests
{
    public class Class1
    {
        [Fact]
        public void Test1()
        {
            Console.WriteLine("Test1 called");
        }

        [Fact]
        public void Test2()
        {
            Console.WriteLine("Test2 called");
        }
    }

    public class Class2
    {
        [Fact]
        public void Test3()
        {
            Console.WriteLine("Test3 called");
        }

        [Fact]
        public void Test4()
        {
            Console.WriteLine("Test4 called");
        }
    }
}
To make it sequential, you just need to put both test classes under the same collection:
namespace IntegrationTests
{
    [Collection("Sequential")]
    public class Class1
    {
        [Fact]
        public void Test1()
        {
            Console.WriteLine("Test1 called");
        }

        [Fact]
        public void Test2()
        {
            Console.WriteLine("Test2 called");
        }
    }

    [Collection("Sequential")]
    public class Class2
    {
        [Fact]
        public void Test3()
        {
            Console.WriteLine("Test3 called");
        }

        [Fact]
        public void Test4()
        {
            Console.WriteLine("Test4 called");
        }
    }
}
For more info you can refer to this link
Important: This answer applies to .NET Framework. For dotnet core, see Dimitry's answer regarding xunit.runner.json.
All good unit tests should be 100% isolated. Using shared state (e.g. depending on a static property that is modified by each test) is regarded as bad practice.
Having said that, your question about running xUnit tests in sequence does have an answer! I encountered exactly the same issue because my system uses a static service locator (which is less than ideal).
By default xUnit 2.x runs all tests in parallel. This can be modified per-assembly by defining the CollectionBehavior in your AssemblyInfo.cs in your test project.
For per-assembly separation use:
using Xunit;
[assembly: CollectionBehavior(CollectionBehavior.CollectionPerAssembly)]
or for no parallelization at all use:
[assembly: CollectionBehavior(DisableTestParallelization = true)]
The latter is probably the one you want. More information about parallelisation and configuration can be found on the xUnit documentation.
For .NET Core projects, create xunit.runner.json with:
{
  "parallelizeAssembly": false,
  "parallelizeTestCollections": false
}
Also, your csproj should contain
<ItemGroup>
  <None Update="xunit.runner.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
For old .Net Core projects, your project.json should contain
"buildOptions": {
"copyToOutput": {
"include": [ "xunit.runner.json" ]
}
}
For .NET Core projects, you can configure xUnit with an xunit.runner.json file, as documented at https://xunit.net/docs/configuration-files.
The setting you need to change to stop parallel test execution is parallelizeTestCollections, which defaults to true:
Set this to true if the assembly is willing to run tests inside this assembly in parallel against each other. ... Set this to false to disable all parallelization within this test assembly.
JSON schema type: boolean
Default value: true
So a minimal xunit.runner.json for this purpose looks like
{
  "parallelizeTestCollections": false
}
As noted in the docs, remember to include this file in your build, either by:
Setting Copy to Output Directory to Copy if newer in the file's Properties in Visual Studio, or
Adding
<Content Include=".\xunit.runner.json">
  <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</Content>
to your .csproj file, or
Adding
"buildOptions": {
"copyToOutput": {
"include": [ "xunit.runner.json" ]
}
}
to your project.json file
depending upon your project type.
Finally, in addition to the above, if you're using Visual Studio then make sure that you haven't accidentally clicked the Run Tests In Parallel button, which will cause tests to run in parallel even if you've turned off parallelisation in xunit.runner.json. Microsoft's UI designers have cunningly made this button unlabelled, hard to notice, and about a centimetre away from the "Run All" button in Test Explorer, just to maximise the chance that you'll hit it by mistake and have no idea why your tests are suddenly failing.
This is an old question, but I wanted to write up a solution for people who, like me, are only finding it now. :)
Note: I use this method in .NET Core WebUI integration tests with xUnit version 2.4.1.
Create an empty class named NonParallelCollectionDefinitionClass and apply the CollectionDefinition attribute to it as below. (The important part is the DisableParallelization = true setting.)
using Xunit;

namespace WebUI.IntegrationTests.Common
{
    [CollectionDefinition("Non-Parallel Collection", DisableParallelization = true)]
    public class NonParallelCollectionDefinitionClass
    {
    }
}
Then add the Collection attribute to any class that you don't want to run in parallel, as below. (The important part is the name of the collection; it must match the name used in the CollectionDefinition.)
namespace WebUI.IntegrationTests.Controllers.Users
{
    [Collection("Non-Parallel Collection")]
    public class ChangePassword : IClassFixture<CustomWebApplicationFactory<Startup>>
    ...
When we do this, the other, parallel tests run first. After that, the tests carrying the Collection("Non-Parallel Collection") attribute run.
You can use a playlist:
right-click on the test method -> Add to Playlist -> New Playlist.
You can then specify the execution order. The default is the order in which you add the tests to the playlist, but you can edit the playlist file as you want, as in the sketch below.
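A playlist is just a small XML file; a sketch of its contents (the fully qualified test names are placeholders):
<Playlist Version="1.0">
  <Add Test="MyProject.Tests.EngineTests.CreatesHost" />
  <Add Test="MyProject.Tests.EngineTests.ReusesExistingHost" />
</Playlist>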
I don't know the details, but it sounds like you might be trying to do integration testing rather than unit testing. If you could isolate the dependency on ServiceHost, that would likely make your testing easier (and faster). So (for instance) you might test the following independently:
Configuration reading class
ServiceHost factory (possibly as an integration test)
Engine class that takes an IServiceHostFactory and an IConfiguration
Tools that would help include isolation (mocking) frameworks and (optionally) IoC container frameworks. See:
http://www.mockobjects.com/
http://www.hanselman.com/blog/ListOfNETDependencyInjectionContainersIOC.aspx
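As a sketch of that seam, IServiceHostFactory below is a hypothetical interface (not a WCF type) that the engine would depend on, so tests can substitute a fake and never open a real endpoint:
using System;
using System.ServiceModel;

// Hypothetical abstraction over ServiceHost creation.
public interface IServiceHostFactory
{
    ICommunicationObject CreateHost(Type serviceType, params Uri[] baseAddresses);
}

// Thin production wrapper; the engine itself only ever sees the interface.
public class WcfServiceHostFactory : IServiceHostFactory
{
    public ICommunicationObject CreateHost(Type serviceType, params Uri[] baseAddresses)
    {
        return new ServiceHost(serviceType, baseAddresses);
    }
}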
Maybe you can use Advanced Unit Testing. It allows you to define the sequence in which the tests run, although you may have to create a new .cs file to host those tests.
Here's how you can bend the test methods to work in the sequence you want.
[Test]
[Sequence(16)]
[Requires("POConstructor")]
[Requires("WorkOrderConstructor")]
public void ClosePO()
{
    po.Close();

    // one charge slip should be added to both work orders
    Assertion.Assert(wo1.ChargeSlipCount == 1,
        "First work order: ChargeSlipCount not 1.");
    Assertion.Assert(wo2.ChargeSlipCount == 1,
        "Second work order: ChargeSlipCount not 1.");
    ...
}
Do let me know whether it works.
None of the suggested answers so far worked for me. I have a .NET Core app with xUnit 2.4.1.
I achieved the desired behavior with a workaround: taking a shared lock in each unit test instead. In my case, I didn't care about the running order, just that the tests were sequential.
public class TestClass
{
    // A shared lock object: xUnit creates a new instance of the class per test,
    // so locking on "this" would not serialize anything. Share this field (e.g. via
    // a common base class) if tests in other classes must also be serialized.
    private static readonly object TestLock = new object();

    [Fact]
    public void Test1()
    {
        lock (TestLock)
        {
            //Test Code
        }
    }

    [Fact]
    public void Test2()
    {
        lock (TestLock)
        {
            //Test Code
        }
    }
}
For me, in a .NET Core console application, when I wanted to run test methods (not classes) sequentially, the only solution that worked was the one described in this blog post:
xUnit: Control the Test Execution Order
I've added the attribute [Collection("Sequential")] in a base class:
namespace IntegrationTests
{
    [Collection("Sequential")]
    public class SequentialTest : IDisposable
    ...

    public class TestClass1 : SequentialTest
    {
        ...
    }

    public class TestClass2 : SequentialTest
    {
        ...
    }
}

Run a single unit test from a fixture of many in MbUnit

Is there any way to add an attribute to a [Test] method in a [TestFixture] so that only that method runs? This would be similar to the way the [CurrentFixture] attribute can be used to run only a single fixture. I ask because sometimes when I test the model I want to profile the SQL being executed and I only want to focus on a single test. Currently I have to comment out all the other tests in the fixture.
Updated:
The code I'm using to initiate the tests follows; I'm really looking for a solution I can weave into this code.
public static void Run(bool currentFixturesOnly) {
    using(AutoRunner auto = new AutoRunner()) {
        if(currentFixturesOnly) { // for processing [CurrentFixture]s only
            auto.Domain.Filter = FixtureFilters.Current;
        }

        auto.Verbose = true;
        auto.Run();
        auto.ReportToHtml();
    }
}
If you use a test runner like TestDriven.Net, ReSharper or Icarus then you can select the specific test to run and just run that. If you're using the command-line tools, consider using a filter.
eg.
Gallio.Echo MyTestAssembly.dll /f:Name:TheNameOfTheParticularIWantToRun
