I was reading through this link on category expressions used with the /include and /exclude options. I want to be able to run only one of my two available tests, or run both, using something like /include:A+B or /exclude:A. However, for some reason, the console displays the wrong number of tests run and not run. Why is that?
Can anyone provide me with an example of how to use category expressions (including the source-code side) and how to run the command in the console?
Essentially what I did was:
using System;
using NUnit_Application;
using NUnit.Framework;

namespace NUnit_Application.Test
{
    [TestFixture]
    [Category("MathS")]
    public class TestClass
    {
        [Test]
        [Category("MathA")]
        public void AddTest()
        {
            MathsHelper helper = new MathsHelper();
            int result = helper.Add(20, 10);
            Assert.AreEqual(30, result);
        }

        [Test]
        [Category("MathB")]
        public void SubtractTest()
        {
            MathsHelper helper = new MathsHelper();
            int result = helper.Subtract(20, 10);
            Assert.AreEqual(10, result);
        }
    }
}
And my command-line statement was:
nunit-console /framework:net-4.0 /run:NUnit_Application.Test.TestClass.AddTest C:~\NUnit_Application\NUnit_Application\NUnit_Application.Test\bin\Debug\NUnit_Application.Test.dll /include:"MathA"
The thing is, the console understands what the command means, and it reports that it included the MathA category. However, it shows that zero tests ran and zero tests were not run.
I'm running NUnit 2.6.2, the console runner.
Here is the command I used initially:
nunit-console /framework:net-4.0 /run:NUnit_Application.Test.TestClass.AddTest C:~\NUnit_Application\NUnit_Application\NUnit_Application.Test\bin\Debug\NUnit_Application.Test.dll /include:"MathA"
I noticed that if I just specify TestClass, rather than the individual test case, it works:
nunit-console /framework:net-4.0 /run:NUnit_Application.Test.TestClass C:~\NUnit_Application\NUnit_Application\NUnit_Application.Test\bin\Debug\NUnit_Application.Test.dll /include:"MathA"
I think it's because the whole class carries the attribute:
[Category("MathS")]
So the runner skips over the test.
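If that's the case, including the fixture-level category should help. As a sketch (same dll path as above, and assuming the 2.6 category-expression syntax from the linked docs, where , means OR and + means AND):

nunit-console /framework:net-4.0 C:~\NUnit_Application\NUnit_Application\NUnit_Application.Test\bin\Debug\NUnit_Application.Test.dll /include:"MathS"

That should run both tests in the fixture, and /include:"MathS+MathA" should narrow the selection to AddTest only.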
Can I build my TestCaseData list in my SetUp? With this setup, my test is just being skipped; other regular tests run just fine.
[TestFixture]
public class DirectReader
{
    private XDocument document;
    private DirectUblReader directReader;
    private static UblReaderResult result;
    private static List<TestCaseData> rootElementsTypesData = new List<TestCaseData>();

    [SetUp]
    public void Setup()
    {
        var fileStream = ResourceReader.GetScenario("RequiredElements_2_1.xml");
        document = XDocument.Load(fileStream);
        directReader = new DirectUblReader();
        result = directReader.Read(document);

        // Is this allowed?
        rootElementsTypesData.Add(new TestCaseData(result.Invoice.Id, new IdentifierType()));
        rootElementsTypesData.Add(new TestCaseData(result.Invoice.IssueDate, new IdentifierType()));
    }

    [Test, TestCaseSource(nameof(rootElementsTypesData))]
    public void Expects_TypeOfObject_ToBeTheSameAs_InputValue(object inputValue, object expectedTypeObject)
    {
        Assert.That(inputValue, Is.TypeOf(expectedTypeObject.GetType()));
    }
}
As stated by @IMil, the answer is no... that's not possible.
TestCaseSource is used by NUnit to build a list of the tests to be run. It associates a method with a particular set of arguments. NUnit then creates an internal representation of all your tests.
OTOH, SetUp (and even OneTimeSetUp) is used when those tests are being run. By that time, the number of tests and the actual arguments to each of them are fixed; nothing can change them.
So, in order to do what you seem to want to do, your TestCaseSource has to stand on its own, fully identifying the arguments to be used for the test. That's why NUnit gives you the capability of making the source a method or property, rather than just a simple list.
In your case, I suggest something like...
private static IEnumerable<TestCaseData> RootElementsTypesData()
{
    // Everything here must be self-contained: the method is static,
    // so it can't touch instance fields of the fixture.
    var fileStream = ResourceReader.GetScenario("RequiredElements_2_1.xml");
    var document = XDocument.Load(fileStream);
    var directReader = new DirectUblReader();
    var result = directReader.Read(document);

    yield return new TestCaseData(result.Invoice.Id, new IdentifierType());
    yield return new TestCaseData(result.Invoice.IssueDate, new IdentifierType());
}
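For completeness, the test would then reference the method (rather than the field) as its source; the test body stays the same as in the question:

[Test, TestCaseSource(nameof(RootElementsTypesData))]
public void Expects_TypeOfObject_ToBeTheSameAs_InputValue(object inputValue, object expectedTypeObject)
{
    Assert.That(inputValue, Is.TypeOf(expectedTypeObject.GetType()));
}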
Obviously, this is only "forum code" and you'll have to work with it to get something that actually compiles and works for your case.
No, this is impossible.
Methods decorated with [SetUp] are run before each test case.
This means NUnit will first build the list of test cases, and only then run Setup() before each of them.
Therefore, your Setup() never gets called (there are no test cases to run it before), and the list of test cases remains empty.
I've got a set of test cases, some of which are expected to throw exceptions. Because of this, I have set the attributes for these tests to expect exceptions like so:
[ExpectedException("System.NullReferenceException")]
When I run my tests locally, all is good. However, when I move my tests over to the CI server running TeamCity, all my tests that have expected exceptions fail. This is a known bug.
I am aware that NUnit also offers the Assert.Throws<T> and Assert.Throws methods.
My question is how can I make use of these instead of the attribute I'm currently using?
I've had a look around StackOverflow and tried a few things none of which seem to work for me.
Is there a simple 1 line solution to using this?
I'm not sure what you've tried that is giving you trouble, but you can simply pass in a lambda as the first argument to Assert.Throws. Here's one from one of my tests that passes:
Assert.Throws<ArgumentException>(() => pointStore.Store(new[] { firstPoint }));
Okay, that example may have been a little verbose. Suppose I had a test
[Test]
[ExpectedException("System.NullReferenceException")]
public void TestFoo()
{
    MyObject o = null;
    o.Foo();
}
which would pass normally because o.Foo() would raise a null reference exception.
You then would drop the ExpectedException attribute and wrap your call to o.Foo() in an Assert.Throws.
[Test]
public void TestFoo()
{
    MyObject o = null;
    Assert.Throws<NullReferenceException>(() => o.Foo());
}
Assert.Throws "attempts to invoke a code snippet, represented as a delegate, in order to verify that it throws a particular exception." The () => DoSomething() syntax represents a lambda, essentially an anonymous method. So in this case, we are telling Assert.Throws to execute the snippet o.Foo().
So no, you don't just add a single line the way you do with an attribute; you need to explicitly wrap the section of your test that will throw the exception in a call to Assert.Throws. You don't necessarily have to use a lambda, but that's often the most convenient way.
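One side benefit worth knowing (standard NUnit behavior, not specific to the question): Assert.Throws returns the exception it caught, so you can make further assertions on its details afterwards. A small sketch:

var ex = Assert.Throws<NullReferenceException>(() => o.Foo());
Assert.That(ex.Message, Is.Not.Empty);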
Here's a simple example using both ways.
string test = null;
Assert.Throws( typeof( NullReferenceException ), () => test.Substring( 0, 4 ) );
Assert.Throws<NullReferenceException>( () => test.Substring( 0, 4 ) );
If you don't want to use lambdas:
[Test]
public void Test()
{
    Assert.Throws<NullReferenceException>( _TestBody );
}

private void _TestBody()
{
    string test = null;
    test.Substring( 0, 4 );
}
By default, TeamCity uses NUnit 2.2.10, which doesn't have ExpectedException. Check the TeamCity "NUnit for NAnt" docs to see how to change it to something more modern, including the specific list of releases TeamCity provides.
NUnit has added a new Record.Exception method.
If you prefer to separate the Act and Assert steps:

// Act
var ex = Record.Exception(() => { throw new Exception(); });

// Assert
Assert.NotNull(ex);
I have an existing project that runs tests sequentially, and I'm trying to implement parallel execution.
I initially added these attributes to AssemblyInfo.cs
[assembly: Parallelizable(ParallelScope.Fixtures)]
[assembly: LevelOfParallelism(2)]
In order to see that parallel execution was being attempted, I had to create two features, each with one test, then run them all in Visual Studio's Test Explorer. This tried to run the test in each feature at the same time.
I have one of the tests set to run in Chrome, and the other in Firefox. This is also the order that the webdriver invokes the browser instances.
Chrome opens first, then Firefox opens - but Chrome is orphaned and the test gets conducted only in Firefox.
This, I believe, is because my webdriver is static, so Firefox is hijacking the instance being used by Chrome. I've read that I cannot use a static webdriver for parallel testing, so I'm attempting to use a non-static one.
It seems I now have to pass the driver between methods to ensure that all operations are conducted on that particular instance.
Having implemented the webdriver non-statically, I'm first trying to ensure that a single test runs, before trying to run all the tests in parallel.
But I've hit a road-block. In the following test, the driver is reset to null upon commencement of the second (When) step:
Scenario Outline: C214 Log in
    Given I launch the site for <profile> and <environment> and <parallelEnvironment>
    When I log in to the Normal account
    Then I see that I am logged in

    Examples:
        | profile | environment | parallelEnvironment |
        | single  | Chrome75    |                     |
        #| single | Firefox67   |                     |
How do I make the non-static webdriver persist between steps?
Is ThreadLocal the answer? If so, will using it be a problem later down the line if I want to use this parallel execution in Selenium Grid over Windows Desktop, android and iOS devices?
Here's my set-up:
SetUp.cs
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Edge;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.IE;
using TechTalk.SpecFlow;

namespace OurAutomation
{
    [Binding]
    public class SetUp
    {
        public IWebDriver Driver;
        public string theEnvironment;

        public IWebDriver InitialiseDriver(string profile, string environment, string parallelEnvironment)
        {
            theEnvironment = environment;

            if (profile == "single")
            {
                if (environment.Contains("IE"))
                {
                    Driver = new InternetExplorerDriver();
                }
                else if (environment.Contains("Edge"))
                {
                    Driver = new EdgeDriver();
                }
                else if (environment.Contains("Chrome"))
                {
                    Driver = new ChromeDriver(@"C:\Automation Test Drivers\");
                }
                else if (environment.Contains("Firefox"))
                {
                    Driver = new FirefoxDriver(@"C:\Automation Test Drivers\");
                }
            }

            Driver.Manage().Window.Maximize();
            Driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(5);
            return Driver;
        }

        [AfterScenario]
        public void AfterScenario()
        {
            Driver.Quit();
        }
    }
}
BaseSteps.cs
using OpenQA.Selenium;

namespace OurAutomation.Steps
{
    public class BaseSteps : SetUp
    {
        public IWebDriver driver;

        public void DriverSetup(string profile, string environment, string parallelEnvironment)
        {
            driver = InitialiseDriver(profile, environment, parallelEnvironment);
        }
    }
}
LaunchTestSteps.cs
using OurAutomation.BaseMethods;
using OurAutomation.Pages;
using TechTalk.SpecFlow;

namespace OurAutomation.Steps
{
    [Binding]
    public class LaunchTestSteps : BaseSteps
    {
        [Given(@"I launch the site for (.*) and (.*) and (.*)")]
        public void ILaunchTheSite(string profile, string environment, string parallelEnvironment)
        {
            DriverSetup(profile, environment, parallelEnvironment);
            new Common().LaunchSite(driver);
            new Core().Wait(10, "seconds");
        }
    }
}
There's more, but I'm not sure whether the full suite is needed to figure this out. Perhaps my fatal errors are already obvious from what's here!
After much ado, I found https://github.com/minhhoangvn/AutomationFramework, which enabled me to strip out the necessary code to achieve parallel testing using ThreadLocal.
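For anyone landing here, the core of the pattern is roughly the following (a minimal sketch of the ThreadLocal idea, not the linked repo's actual code; the names are illustrative):

using System.Threading;
using OpenQA.Selenium;

public static class DriverContext
{
    // Each test thread gets its own driver instance.
    private static readonly ThreadLocal<IWebDriver> driver = new ThreadLocal<IWebDriver>();

    public static IWebDriver Driver
    {
        get { return driver.Value; }
        set { driver.Value = value; }
    }
}

Step bindings then read DriverContext.Driver instead of a shared static field, so parallel fixtures no longer steal each other's browser instance.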
I'm using SpecFlow, Visual Studio 2015, and NUnit. If a test fails, I need to run it once again. I have:
[AfterScenario]
public void AfterScenario1()
{
    if (/* the test failed and the counter is 1 */)
    {
        StartTheLastTestOnceAgain();
    }
}
How do I start the last test again?
In NUnit there is the RetryAttribute (https://github.com/nunit/docs/wiki/Retry-Attribute) for that. It looks like the SpecFlow.Retry plugin uses it (https://www.nuget.org/packages/SpecFlow.Retry/). This is a 3rd-party plugin and I have not used it yet, so no guarantee that it works as you want.
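For plain NUnit tests, usage of the attribute itself is straightforward; a minimal sketch (the count here is an example value; see the linked docs for the exact semantics of the count):

[Test]
[Retry(3)] // allows NUnit to re-run the test when it fails an assertion
public void SometimesFlakyTest()
{
    // arrange/act/assert as usual
}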
As an alternative, you could use the SpecFlow+ Runner (http://www.specflow.org/plus/). This specialized runner has an option to rerun your failed tests (http://www.specflow.org/plus/documentation/SpecFlowPlus-Runner-Profiles/#Execution - the retryFor/retryCount config values).
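In the runner's profile file, that would be configured roughly like this (a sketch based on the documented retryFor/retryCount settings; check the linked docs for the exact attribute values):

<Execution retryFor="Failing" retryCount="1" />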
Full disclosure: I am one of the developers of SpecFlow and SpecFlow+.
You could always just capture the failure during the assert step and then retry whatever it is that you're testing for. Something like:
[Given(@"I'm on the homepage")]
public void GivenImOnTheHomepage()
{
    // go to homepage...
}

[When(@"When I click some button")]
public void WhenIClickSomeButton()
{
    // click button...
}

[Then(@"Something Special Happens")]
public void ThenSomethingSpecialHappens()
{
    var theRightThingHappened = someWayToTellTheRightThingHappened();
    if (!theRightThingHappened)
    {
        // then try some steps again here, and re-check the result
        theRightThingHappened = someWayToTellTheRightThingHappened();
    }
    Assert.IsTrue(theRightThingHappened);
}
Using MbUnit, I create tests using several StaticTestFactory methods, each having corresponding test setup and teardown methods. A requirement is to log test results to an external system, especially failed ones.
However, I am unable to get the correct test outcome status using TestContext.CurrentContext.Outcome.Status. Using the code below, you will see that the test fails, but Outcome.Status is always returned as 'Passed' from FactoryAssignedTearDownMethod, even though both Gallio Icarus and Echo show the test as failed.
I'm looking for any workaround or fix to get the correct outcome in this scenario.
public class FactoryTest
{
    [StaticTestFactory]
    public static IEnumerable<Test> CreateStaticTests()
    {
        var testcase = new TestCase("simpletest", () =>
        {
            Assert.Fail("staticfactory created test failed.");
        });
        testcase.TearDown = FactoryAssignedTearDownMethod;
        yield return testcase;
    }

    public static void FactoryAssignedTearDownMethod()
    {
        // Outcome value is always 'Passed', even when the test fails
        TestLog.WriteLine("Test Outcome Status from factory assigned method: " + TestContext.CurrentContext.Outcome.Status);
    }
}
I worked around this by writing a Gallio TestRunnerExtension. By handling the TestStepFinished event, I can get the proper test result for all tests created with the StaticTestFactory.
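In case it helps anyone, the shape of that extension is roughly this (reconstructed from memory of Gallio's extension API; verify the type and event names against the Gallio docs, and note the class name and handler body here are illustrative):

using Gallio.Runner.Extensions;

public class OutcomeReportingExtension : TestRunnerExtension
{
    protected override void Initialize()
    {
        // TestStepFinished fires with the real outcome for every finished step.
        Events.TestStepFinished += (sender, e) =>
        {
            if (e.TestStepRun.Step.IsTestCase)
            {
                // Push e.TestStepRun.Result.Outcome to the external system here.
            }
        };
    }
}

The extension then gets registered with the runner, so it sees the final result of every test step, including those created by a StaticTestFactory.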