Skipping Feature - SpecFlow C#

I'm looking to intercept a test using the [BeforeFeature] SpecFlow Hook and ignore the entire feature file.
[BeforeFeature]
public static void BeforeFeature()
{
    Console.WriteLine("Before feature");
    // Read the feature title inside the hook rather than caching it in a static field.
    string featureName = FeatureContext.Current.FeatureInfo.Title;
    if (TestFilter.ShouldBeIgnored(featureName))
    {
        // Ignore Feature if it matches TestFilter Requirements
    }
}

If you are using SpecFlow + NUnit, you can call
Assert.Ignore("ignore message here");
This will cause the individual tests to be ignored when their feature is run.
However, this may require you to use a BeforeScenario hook instead of a BeforeFeature hook.
Because BeforeScenario has access to the feature info, this should not be an issue.
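A minimal sketch of that approach, assuming the NUnit runner and the TestFilter helper from the question:
[Binding]
public class IgnoreHooks
{
    [BeforeScenario]
    public static void SkipFilteredFeatures()
    {
        // Skip every scenario of a feature that matches the filter.
        if (TestFilter.ShouldBeIgnored(FeatureContext.Current.FeatureInfo.Title))
        {
            // NUnit-specific: marks the current test as ignored rather than failed.
            NUnit.Framework.Assert.Ignore("Feature excluded by TestFilter.");
        }
    }
}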

Did you look into the @ignore tag? You can skip features or scenarios.

Related

How to perform global setup/teardown in xUnit and run tests in parallel?

Here is what I want to achieve with xUnit:
Run initialization code.
Run tests in parallel.
Perform teardown.
I have tried the [CollectionDefinition]/[Collection]/ICollectionFixture
approach described here, but it disabled parallel execution, which is critical for me.
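For reference, a rough sketch of that xUnit fixture pattern (the class and collection names are illustrative):
using System;
using Xunit;

// Shared fixture: constructed once before the first test in the collection,
// disposed after the last one.
public class GlobalFixture : IDisposable
{
    public GlobalFixture() { /* run global setup */ }
    public void Dispose()  { /* run global teardown */ }
}

[CollectionDefinition("Global collection")]
public class GlobalCollection : ICollectionFixture<GlobalFixture> { }

// Every test class placed in this collection shares the fixture,
// but xUnit then runs those classes sequentially.
[Collection("Global collection")]
public class SomeTests { }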
Is there any way to run tests in parallel and still be able to write global setup/teardown code in xUnit?
If it is not possible with xUnit, does NUnit or MSTest support this scenario?
NUnit supports this scenario. For global setup, create a class in one of your root namespaces and add the [SetUpFixture] attribute to it. Then add a [OneTimeSetUp] method to that class. This method will be run once for all tests in that namespace and in child namespaces, which allows you to have additional namespace-specific one-time setups.
[SetUpFixture]
public class MySetUpClass
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        // ...
    }

    [OneTimeTearDown]
    public void RunAfterAnyTests()
    {
        // ...
    }
}
Then to run your tests in parallel, add the [Parallelizable] attribute at the assembly level with ParallelScope.All. If you have tests that should not be run in parallel with others, you can use the [NonParallelizable] attribute at lower levels, as in the sketch after the attribute line.
[assembly: Parallelizable(ParallelScope.All)]
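For example, a small sketch of opting a single fixture out of parallel execution (the class name is illustrative):
// Everything else in the assembly runs in parallel (ParallelScope.All above),
// but the tests in this fixture run one at a time.
[TestFixture]
[NonParallelizable]
public class DatabaseMigrationTests
{
    [Test]
    public void MigrationRunsCleanly()
    {
        // ...
    }
}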
Running test methods in parallel is supported in NUnit 3.7 and later. Prior to that, NUnit only supported running test classes in parallel. I would recommend starting any project with the most recent version of NUnit to take advantage of bug fixes, new features and improvements.
A somewhat basic solution would be a static class with a static constructor that subscribes to the AppDomain.CurrentDomain.ProcessExit event.
public static class StaticFixture
{
    static StaticFixture()
    {
        AppDomain.CurrentDomain.ProcessExit += (o, e) => Dispose();

        // Initialization code here
    }

    private static void Dispose()
    {
        // Teardown code here
    }
}
There's no guarantee when the static constructor gets called though, other than at or before first use.

Load testing Visual Studio, start up script / setup

I was wondering if it was possible to have a start-up script before running any load tests? For example, perhaps to seed some data or clear anything down prior to the tests executing.
In my instance I have a mixed bag of designer and coded tests. Put simply, I have:
Two coded tests
A designer-created web test which points to these coded tests
A load test which runs the designer web test
I have tried adding a class and decorating with the attributes [TestInitialize()], [ClassInitialize()] but this code doesn't seem to get run.
Some basic code to show this in practice (see below). Is there a way of doing this whereby I can have something run only the once before test run?
[TestClass]
public class Setup : WebTest
{
    [TestInitialize()]
    public static void Hello()
    {
        // Run some code
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        return null;
    }
}
Probably should also mention that on my coded tests I have added these attributes and they get ignored. I have come across a workaround which is to create a Plugin.
EDIT
Having done a little more browsing around I found this article on SO which shows how to implement a LoadTestPlugin.
Visual Studio provides a way of running a script before and also after a test run. They are intended for use in deploying data for a test and cleaning up after a test. The scripts are specified on the "Setup and cleanup" page in the ".testsettings" file.
A load test plugin can contain code to run before and after any test cases are executed, and also at various stages during test execution. The plugin works by handling events that are raised at various points during the execution of a load test; user code runs when those events occur. The LoadTestStarting event is raised before any test cases run. See here for more info.
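A minimal sketch of such a plugin, assuming a reference to the Visual Studio load testing assemblies; the class name is illustrative, and the plugin still has to be attached to the load test in the load test editor:
using Microsoft.VisualStudio.TestTools.LoadTesting;

public class SeedDataPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        loadTest.LoadTestStarting += (sender, e) =>
        {
            // Seed or clear data once, before any test cases run.
        };

        loadTest.LoadTestFinished += (sender, e) =>
        {
            // Clean up once, after the whole run completes.
        };
    }
}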
If you are willing to use NUnit, you have SetUp/TearDown for a per-test scope, and TestFixtureSetUp/TestFixtureTearDown to do something similar for a class (TestFixture).
Maybe a bit of a hack, but you can place your code inside the static constructor of your test class as it will automatically run exactly once before the first instance is created or any static members are referenced:
[TestClass]
public class Setup : WebTest
{
    static Setup()
    {
        // prepare data for test
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        return null;
    }
}

How do you disable PostSharp when running unit tests?

I want my NUnit tests not to apply any of my PostSharp aspects so I can test my methods in isolation. Can this be done somehow in the test fixture setup, or can it only be done at a per-project level?
You could set the 'SkipPostSharp' flag on the test build configuration, so that the aspects are not compiled into your binaries in the first place.
You can have a static flag on your aspect to toggle it on/off and check the status of the flag in your aspect implementation.
Then in your unit test setup just turn the static flag off.
e.g.
[Serializable]
public class CacheAttribute : MethodInterceptionAspect
{
    public static bool On = true;

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        if (!CacheAttribute.On)
        {
            // Aspect switched off: just invoke the original method.
            args.ReturnValue = args.Invoke(args.Arguments);
            return;
        }
        // Normal aspect behaviour (e.g. caching) goes here.
    }
}
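And a minimal sketch of switching it off from a test fixture, assuming the CacheAttribute above and NUnit:
[TestFixture]
public class MyIsolatedTests
{
    [SetUp]
    public void DisableAspect()
    {
        // Turn the aspect off so the methods under test run without it.
        CacheAttribute.On = false;
    }
}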
If you are using Typemock in your unit tests you can use something like
MyAspect myAspectMock = Isolate.Fake.Instance<MyAspect>(Members.MustSpecifyReturnValues);
Isolate.Swap.AllInstances<MyAspect>().With(myAspectMock);
This allows you to control which tests the aspects are used on and which they are not, so you can test the method both on its own and with the advice applied.
Presumably there would be a similar mechanism with other mocking frameworks.

How to use ApprovalTests on Teamcity?

I am using Approval Tests. On my dev machine I am happy with DiffReporter that starts TortoiseDiff when my test results differ from approved:
[UseReporter(typeof (DiffReporter))]
public class MyApprovalTests
{ ... }
However, when the same tests run on TeamCity and the results differ, the tests fail with the following error:
System.Exception : Unable to launch: tortoisemerge.exe with arguments ...
Error Message: The system cannot find the file specified
---- System.ComponentModel.Win32Exception : The system cannot find the file
specified
Obviously it cannot find tortoisemerge.exe, and that is fine because it is not installed on the build agent. But what if it gets installed? Then for each failure another instance of tortoisemerge.exe will start and nobody will close it. Eventually tons of tortoisemerge.exe instances will kill our servers :)
So the question is: how should tests be decorated to run TortoiseMerge on a local machine
and just report errors on the build server? I am aware of wrapping [UseReporter(typeof (DiffReporter))] in #if DEBUG, but would prefer another solution if possible.
There are a couple of solutions to the question of Reporters and CI. I will list them all, then point to a better solution, which is not quite enabled yet.
Use the AppConfigReporter. This allows you to set the reporter in your AppConfig, and you can use the QuietReporter for CI.
There is a video here, along with many other reporters. The AppConfigReporter appears at 6:00.
This has the advantage of separate configs, and you can decorate at the assembly level, but has the disadvantage that if you override at the class/method level, you still have the issue.
Create your own (2) reporters. It is worth noting that if you use a reporter, it will get called regardless of whether it works in the environment. IEnvironmentAwareReporter allows for composite reporters, but will not prevent a direct call to the reporter.
Most likely you will need two reporters: one which does nothing (like a quiet reporter) but reports itself as working only on your CI server, when called by TeamCity (call it the TeamCity reporter), and a second, composite multi-reporter which calls the TeamCity reporter if it is working and otherwise defers to your usual reporter.
Use a FrontLoadedReporter (not quite ready). This is how ApprovalTests currently uses NCrunch. It does the above method in front of whatever is loaded in your UseReporter attribute. I have been meaning to add an assembly-level attribute for configuring this, but haven't yet (sorry). I will try to add this very soon.
Hope this helps.
Llewellyn
I recently ran into this problem myself.
Borrowing from xUnit and how it deals with TeamCity logging, I came up with a TeamCityReporter based on the NCrunchReporter.
public class TeamCityReporter : IEnvironmentAwareReporter, IApprovalFailureReporter
{
    public static readonly TeamCityReporter INSTANCE = new TeamCityReporter();

    public void Report(string approved, string received) { }

    public bool IsWorkingInThisEnvironment(string forFile)
    {
        return Environment.GetEnvironmentVariable("TEAMCITY_PROJECT_NAME") != null;
    }
}
And so I could combine it with the NCrunch reporter:
public class TeamCityOrNCrunchReporter : FirstWorkingReporter
{
    public static readonly TeamCityOrNCrunchReporter INSTANCE =
        new TeamCityOrNCrunchReporter();

    public TeamCityOrNCrunchReporter()
        : base(NCrunchReporter.INSTANCE,
               TeamCityReporter.INSTANCE) { }
}
[assembly: FrontLoadedReporter(typeof(TeamCityOrNCrunchReporter))]
I just came up with one small idea.
You can implement your own reporter, let's call it DebugReporter
public class DebugReporter<T> : IEnvironmentAwareReporter where T : IApprovalFailureReporter, new()
{
    private readonly T _reporter;

    public static readonly DebugReporter<T> INSTANCE = new DebugReporter<T>();

    public DebugReporter()
    {
        _reporter = new T();
    }

    public void Report(string approved, string received)
    {
        if (IsWorkingInThisEnvironment())
        {
            _reporter.Report(approved, received);
        }
    }

    public bool IsWorkingInThisEnvironment()
    {
#if DEBUG
        return true;
#else
        return false;
#endif
    }
}
Example of usage:
[UseReporter(typeof(DebugReporter<FileLauncherReporter>))]
public class SomeTests
{
    [Test]
    public void test()
    {
        Approvals.Verify("Hello");
    }
}
If a test is failing it will still be red, but the reporter will not come up.
The IEnvironmentAwareReporter is specifically defined for that, but unfortunately whatever I return there, it still calls the Report() method. So I put the IsWorkingInThisEnvironment() call inside, which is a little hackish, but works :)
Hope that Llewellyn can explain why it acts like that. (bug?)
I'm using CC.NET and I do have TortoiseSVN installed on the server.
I reconfigured my build server to allow the CC.NET service to interact with the desktop. When I did that, TortoiseMerge launched. So I think what's happening is that Approvals tries to launch the tool, but it can't because CC.NET is running as a service and the operating system prevents that behavior by default. If TeamCity runs as a service, you should be fine, but you might want to test.

Why do my tests fail when run together, but pass individually?

When I write a test in Visual Studio, I check that it works by saving, building and then running the test in NUnit (right-click on the test, then run).
The test works, yay...
so I move on...
Now I have written another test, and it works, as I have saved and tested it like above. But they don't work when they are run together.
Here are my two tests that work when run as individuals but fail when run together:
using System;
using NUnit.Framework;
using OpenQA.Selenium.Support.UI;
using OpenQA.Selenium;

namespace Fixtures.Users.Page1
{
    [TestFixture]
    public class AdminNavigateToPage1 : SeleniumTestBase
    {
        [Test]
        public void AdminNavigateToPage1()
        {
            NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
            NavigateTo<Page1>();
            var headerelement = Driver.FindElement(By.ClassName("header"));
            Assert.That(headerelement.Text, Is.EqualTo("Page Title"));
            Assert.That(Driver.Url, Is.EqualTo("http://localhost/Page Title"));
        }

        [Test]
        public void AdminNavigateToPage1ViaMenu()
        {
            NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
            Driver.FindElement(By.Id("menuitem1")).Click();
            Driver.FindElement(By.Id("submenuitem4")).Click();
            var headerelement = Driver.FindElement(By.ClassName("header"));
            Assert.That(headerelement.Text, Is.EqualTo("Page Title"));
            Assert.That(Driver.Url, Is.EqualTo("http://localhost/Page Title"));
        }
    }
}
When the second test fails because they have been run together,
NUnit presents this:
Sse.Bec.Web.Tests.Fixtures.ManageSitesAndUsers.ChangeOfPremises.AdminNavigateToChangeOfPremises.AdminNavigateToPageChangeOfPremisesViaMenu:
OpenQA.Selenium.NoSuchElementException : The element could not be found
And this line is highlighted:
var headerelement = Driver.FindElement(By.ClassName("header"));
Does anyone know why my code fails when run together, but passes when run alone?
Any answer would be greatly appreciated!
Such a situation normally occurs when the unit tests are using shared resources/data in some way.
It can also happen if your system under test has static fields/properties which are leveraged to compute the output you are asserting on.
It can also happen if the system under test shares (static) dependencies.
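For illustration, a minimal sketch (with purely illustrative names) of how shared static state makes tests order-dependent:
public static class Counter
{
    // Shared by every test in the run.
    public static int Value;
}

[TestFixture]
public class CounterTests
{
    [Test]
    public void StartsAtZero()
    {
        // Passes on its own, but fails if Increments has already run and left Value at 1.
        Assert.That(Counter.Value, Is.EqualTo(0));
    }

    [Test]
    public void Increments()
    {
        Counter.Value++;
        Assert.That(Counter.Value, Is.EqualTo(1));
    }
}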
Two things you can try:
Put a breakpoint between the following two lines and see which page you are on when the second line is hit.
Introduce a slight delay between these two lines via Thread.Sleep.
Driver.FindElement(By.Id("submenuitem4")).Click();
var headerelement = Driver.FindElement(By.ClassName("header"));
If none of the answers above worked for you, I solved this issue by adding Thread.Sleep(1) before the assertion in the failing test...
It looks like test synchronization is missing somewhere... Please note that my tests were not order-dependent, and that I had no static members or external dependencies.
Look into TestFixtureSetUp, SetUp, TestFixtureTearDown and TearDown.
These attributes allow you to set up the test environment once, instead of once per test.
Without knowing how Selenium works, my bet is on Driver, which seems to be a static class, so the two tests are sharing state. One example of shared state is Driver.Url. Because the tests are run in parallel, there is a race condition to set the state of this object.
That said, I do not have a solution for you :)
Are you sure that after running one of the tests the method
NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
is taking you back to where you should be? It would seem that the failure is due to an improper navigation handler (supposing that the header element is present and found in both tests).
I think you need to ensure that you can log on for the second test; this might fail because you are logged on already.
-> Put the logon in a set-up method or (because it seems you are using the same user for both tests) even up in the fixture set-up.
-> The logoff (if needed) can be put in the tear-down method.
[SetUp]
public void LaunchTest()
{
    NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
}

[TearDown]
public void StopTest()
{
    // logoff
}

[Test]
public void Test1()
{...}

[Test]
public void Test2()
{...}
If there are delays in the DOM, instead of a Thread.Sleep I recommend using WebDriverWait in combination with conditions. The sleep might work in 80% of cases and in others not. The wait polls until a timeout is reached, which is more reliable and also more readable. Here is an example of how I usually approach this:
var webDriverWait = new WebDriverWait(webDriver, ..);
webDriverWait.Until(d => d.FindElement(By.CssSelector(".."))
                          .Displayed);
I realize this is an extremely old question, but I just ran into it today and none of the answers addressed my particular case.
I am using Selenium with NUnit for front-end automation tests.
In my case I was using [OneTimeSetUp] and [OneTimeTearDown] in my startup, trying to be more efficient.
This, however, has the problem of using shared resources: in my case the driver itself and the helper I use to validate/get elements.
Maybe a strange edge case, but it took me a few hours to figure it out.
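For reference, a minimal sketch of the per-test alternative, assuming NUnit and a Selenium IWebDriver; the class and field names are illustrative:
[TestFixture]
public class FrontEndTests
{
    private IWebDriver _driver;

    [SetUp]
    public void CreateDriver()
    {
        // A fresh driver per test avoids sharing browser state between tests.
        _driver = new OpenQA.Selenium.Chrome.ChromeDriver();
    }

    [TearDown]
    public void DisposeDriver()
    {
        _driver.Quit();
    }
}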
