I have an existing project that runs tests sequentially, and I'm trying to implement parallel execution.
I initially added these attributes to AssemblyInfo.cs:
[assembly: Parallelizable(ParallelScope.Fixtures)]
[assembly: LevelOfParallelism(2)]
To confirm that parallel execution was being attempted, I created two features, each with one test, and then ran them all from Visual Studio's Test Explorer. This attempted to run the test in each feature at the same time.
I have one of the tests set to run in Chrome and the other in Firefox, and this is also the order in which the webdriver launches the browser instances.
Chrome opens first, then Firefox opens, but Chrome is orphaned and the test is conducted only in Firefox.
I believe this is because my webdriver is static, so Firefox is hijacking the thread used by Chrome. I've read that I cannot use a static webdriver for parallel testing, so I'm attempting to use a non-static one.
It seems I now have to pass the driver between methods to ensure that all operations are conducted on that particular instance.
Having implemented the webdriver non-statically, I'm first trying to ensure that a single test runs, before trying to run all the tests in parallel.
But I've hit a road-block. In the following test, the driver is reset to null upon commencement of the second (When) step:
Scenario Outline: C214 Log in
Given I launch the site for <profile> and <environment> and <parallelEnvironment>
When I log in to the Normal account
Then I see that I am logged in
Examples:
| profile | environment | parallelEnvironment |
| single | Chrome75 | |
#| single | Firefox67 | |
How do I make the non-static webdriver persist between steps?
Is ThreadLocal the answer? If so, will using it be a problem later down the line if I want to use this parallel execution with Selenium Grid across Windows desktop, Android and iOS devices?
Here's my set-up:
SetUp.cs
using TechTalk.SpecFlow;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Edge;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.IE;
using System;
namespace OurAutomation
{
    [Binding]
    public class SetUp
    {
        public IWebDriver Driver;
        public string theEnvironment;

        public IWebDriver InitialiseDriver(string profile, string environment, string parallelEnvironment)
        {
            theEnvironment = environment;

            if (profile == "single")
            {
                if (environment.Contains("IE"))
                {
                    Driver = new InternetExplorerDriver();
                }
                else if (environment.Contains("Edge"))
                {
                    Driver = new EdgeDriver();
                }
                else if (environment.Contains("Chrome"))
                {
                    Driver = new ChromeDriver(@"C:\Automation Test Drivers\");
                }
                else if (environment.Contains("Firefox"))
                {
                    Driver = new FirefoxDriver(@"C:\Automation Test Drivers\");
                }
            }

            Driver.Manage().Window.Maximize();
            Driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(5);
            return Driver;
        }

        [AfterScenario]
        public void AfterScenario()
        {
            Driver.Quit();
        }
    }
}
BaseSteps.cs
using OpenQA.Selenium;
namespace OurAutomation.Steps
{
    public class BaseSteps : SetUp
    {
        public IWebDriver driver;

        public void DriverSetup(string profile, string environment, string parallelEnvironment)
        {
            driver = InitialiseDriver(profile, environment, parallelEnvironment);
        }
    }
}
LaunchTestSteps.cs
using OurAutomation.BaseMethods;
using OurAutomation.Pages;
using TechTalk.SpecFlow;
namespace OurAutomation.Steps
{
    [Binding]
    public class LaunchTestSteps : BaseSteps
    {
        [Given(@"I launch the site for (.*) and (.*) and (.*)")]
        public void ILaunchTheSite(string profile, string environment, string parallelEnvironment)
        {
            DriverSetup(profile, environment, parallelEnvironment);
            new Common().LaunchSite(driver);
            new Core().Wait(10, "seconds");
        }
    }
}
There's more, but I'm not sure whether the full suite is needed to figure this out. Perhaps my fatal errors are already obvious from what I've posted so far!
After much ado, I found https://github.com/minhhoangvn/AutomationFramework, which enabled me to strip out the code necessary to achieve parallel testing using ThreadLocal.
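For anyone hitting the same wall, here is a minimal sketch of the ThreadLocal idea (not the linked framework's exact code): each NUnit worker thread lazily creates, and then reuses, its own IWebDriver instance, so parallel scenarios never share a driver. The driver path is the one from the question; everything else is illustrative.

using System.Threading;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public static class DriverFactory
{
    // One driver per thread; the factory delegate runs the first time each thread reads Value.
    private static readonly ThreadLocal<IWebDriver> _driver =
        new ThreadLocal<IWebDriver>(() => new ChromeDriver(@"C:\Automation Test Drivers\"));

    public static IWebDriver Current
    {
        get { return _driver.Value; }
    }

    // Call this from an [AfterScenario] hook so each thread quits only its own browser.
    public static void Quit()
    {
        if (_driver.IsValueCreated)
        {
            _driver.Value.Quit();
        }
    }
}

In principle the same pattern carries over to Selenium Grid, since the factory delegate can just as easily return a RemoteWebDriver pointed at a hub.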
I'm using C# and Selenium to try to automate testing of our web site. I have a link that, when clicked, opens a new window. I'm trying to figure out how to switch to this new window to continue the testing.
I've tried the following line, which I've found on many blogs, but Last is not showing up in IntelliSense and has a red squiggly under it.
driverIE.SwitchTo().Window(driverIE.WindowHandles.Last());
I'm new to working with C# in Visual Studio, so I'm not sure if I'm not including something I should be. Here is the start of the test I'm trying to run.
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Collections.ObjectModel;
using OpenQA.Selenium;
using OpenQA.Selenium.IE;
namespace SeleniumTest
{
    [TestClass]
    public class UnitTest1
    {
        static IWebDriver driverIE;

        [AssemblyInitialize]
        public static void SetUp(TestContext context)
        {
            driverIE = new InternetExplorerDriver(@"C:\Selenium");
        }

        [TestMethod]
        public void TestIEDriver()
        {
            driverIE.Navigate().GoToUrl("http://localhost/site/");
            driverIE.FindElement(By.Id("txtUserId")).SendKeys("username");
            driverIE.FindElement(By.Id("txtPassword")).SendKeys("password");
            driverIE.FindElement(By.Id("txtPassword")).SendKeys(Keys.Enter);

            // Open Quote
            driverIE.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(10);
            driverIE.FindElement(By.LinkText("Personal Auto")).Click();

            // Switch to Quote Window
            driverIE.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(10);
            ReadOnlyCollection<string> WindowList = driverIE.WindowHandles;
            driverIE.SwitchTo().Window(driverIE.WindowHandles.Last());

            driverIE.FindElement(By.Id("txtAgencyCd")).SendKeys("Code");
        }
    }
}
You can use this once you have navigated to your URL:
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10)); // you can change 10 seconds to whatever suits you best
wait.Until(ExpectedConditions.VisibilityOfAllElementsLocatedBy(By.Id("login")));
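As an aside on the compile error in the question: Last() is a LINQ extension method on IEnumerable<T>, so it only shows up in IntelliSense once System.Linq is imported. A minimal sketch of the using directives the two snippets above rely on (Support.UI is where WebDriverWait lives in the classic Selenium .NET bindings):

using System.Linq;                // brings Last() into scope for driverIE.WindowHandles.Last()
using OpenQA.Selenium.Support.UI; // WebDriverWait, from the Selenium.Support package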
I'm using SpecFlow, Visual Studio 2015 and NUnit. If a test fails, I need to run it once again. I have:
[AfterScenario]
public void AfterScenario1()
{
    if (Test Failed and the counter is 1)
    {
        StartTheLastTestOnceAgain();
    }
}
How do I start the last test again?
In NUnit there is the RetryAttribute (https://github.com/nunit/docs/wiki/Retry-Attribute) for that. It looks like the SpecFlow.Retry plugin uses it (https://www.nuget.org/packages/SpecFlow.Retry/). This is a third-party plugin and I have not used it yet, so there is no guarantee that it works the way you want.
As an alternative, you could use the SpecFlow+ Runner (http://www.specflow.org/plus/). This specialized runner has an option to rerun your failed tests (http://www.specflow.org/plus/documentation/SpecFlowPlus-Runner-Profiles/#Execution - the retryFor/retryCount config values).
Full disclosure: I am one of the developers of SpecFlow and SpecFlow+.
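For reference, a minimal sketch of the NUnit RetryAttribute on a plain (non-SpecFlow-generated) NUnit 3 test; SomethingThatIsOccasionallyFlaky is a hypothetical stand-in:

using NUnit.Framework;

[TestFixture]
public class FlakyTests
{
    [Test]
    [Retry(3)] // rerun this test if an assertion fails, up to 3 attempts in total
    public void SometimesFails()
    {
        Assert.IsTrue(SomethingThatIsOccasionallyFlaky()); // hypothetical helper
    }

    private static bool SomethingThatIsOccasionallyFlaky()
    {
        return true; // stub so the sketch compiles
    }
}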
You could always just capture the failure during the assert step and then retry whatever it is that you're testing for. Something like:
[Given(@"I'm on the homepage")]
public void GivenImOnTheHomepage()
{
    // go to homepage...
}

[When(@"When I click some button")]
public void WhenIClickSomeButton()
{
    // click button...
}

[Then(@"Something Special Happens")]
public void ThenSomethingSpecialHappens()
{
    var theRightThingHappened = someWayToTellTheRightThingHappened();
    if (!theRightThingHappened)
    {
        // try the relevant steps again here, then re-check
        theRightThingHappened = someWayToTellTheRightThingHappened();
    }
    Assert.IsTrue(theRightThingHappened);
}
I'm trying to write some tests for testing a GUI interface. I decided to choose NUnit.Forms, but the tests fail with the following error:
TearDown : System.ComponentModel.Win32Exception : The requested resource is in use
I have two versions of the test source code.
First:
using System.Windows.Forms;
using NUnit.Extensions.Forms;
using NUnit.Framework;
using YAMP;
namespace Tests.GUITests
{
    [TestFixture]
    public class GuiTest : NUnitFormTest
    {
        private FrmMain _frm;

        //[SetUp] // or it is still needed
        public override void Setup()
        {
            base.Setup();
            _frm = new FrmMain();
            _frm.Show();
        }

        [Test]
        public void TestData()
        {
            var txtInput = new TextBoxTester("txtInput") { ["Text"] = "2+2" };
            var txtOutput = new TextBoxTester("txtOutput");
            Assert.AreEqual("2+2", txtInput.Text);

            var btnRes = new ButtonTester("btnRes");
            btnRes.Click();
            Assert.AreEqual("4", txtOutput.Text);
        }
    }
}
Second:
using System.Windows.Forms;
using NUnit.Extensions.Forms;
using NUnit.Framework;
using YAMP;
namespace Tests.GUITests
{
    [TestFixture]
    public class GuiTest : NUnitFormTest
    {
        private FrmMain _frm;

        //[SetUp] // or it is still needed
        public override void Setup()
        {
            base.Setup();
            _frm = new FrmMain();
            _frm.Show();
        }

        [TearDown]
        public override void TearDown()
        {
            _frm.Close();
            _frm.Dispose();
        }

        [Test]
        public void TestData()
        {
            var txtInput = new TextBoxTester("txtInput") { ["Text"] = "2+2" };
            var txtOutput = new TextBoxTester("txtOutput");
            Assert.AreEqual("2+2", txtInput.Text);

            var btnRes = new ButtonTester("btnRes");
            btnRes.Click();
            Assert.AreEqual("4", txtOutput.Text);
        }
    }
}
And there are two different versions of the method TestNoData:
public void TestFormNoDataHandler()
{
    var messageBoxTester = new MessageBoxTester("Message");
    messageBoxTester.ClickOk();
}

[Test]
public void TestNoData()
{
    ExpectModal("Message", TestFormNoDataHandler);
    var txtInput = new TextBoxTester("txtInput") { ["Text"] = string.Empty };
    Assert.AreEqual(string.Empty, txtInput.Text);

    var btnRes = new ButtonTester("btnRes");
    btnRes.Click();
    Assert.IsFalse(_frm.DialogResult == DialogResult.OK);
}

[Test]
public void TestNoData()
{
    var txtInput = new TextBoxTester("txtInput") { ["Text"] = string.Empty };
    Assert.AreEqual(string.Empty, txtInput.Text);

    var btnRes = new ButtonTester("btnRes");
    btnRes.Click();
    Assert.IsFalse(_frm.Enabled);
}
The form under test is very simple. There are two TextBoxes, "txtInput" and "txtOutput", and a button, "btnRes". A mathematical expression is entered in "txtInput", and the answer is output to "txtOutput". The expression is evaluated when you press "btnRes". If the "txtInput" field is empty, the button is disabled and you cannot click on it.
While searching for solutions to this problem, I came across the following links:
AutomaticChainsaw: WinForms testing using NUnitForms
c# - I need to create a windows form from within a NUnit test - Stack Overflow
Unfortunately I can attach only two links, but the information I found in them differs considerably, especially the part about how to write the Setup and TearDown methods.
In any case, here are the versions I use:
Visual Studio 2015 Community
NUnit - 2.6.4.14350
NUnitForms - 1.3.1771.29165
It seems to me that the problem might be that my framework versions are too recent, as the articles I learned from are quite old.
Thank you for any suggestion.
UseHidden property: tests are run on a separate, hidden desktop. This makes them much faster, and it works for any tests that do not use the keyboard or mouse controllers. They are less disruptive, and input tests cannot interfere with other applications.
The UseHidden property controls whether a separate desktop is used at all.
Tests on the separate desktop are faster and safer (there is no danger of keyboard or mouse input going to other running applications); however, on some operating systems or environments the separate desktop does not work, and the tests throw errors like:
System.ComponentModel.Win32Exception : The requested resource is in use
--TearDown
at NUnit.Extensions.Forms.Desktop.Destroy()
at NUnit.Extensions.Forms.Desktop.Dispose()
at NUnit.Extensions.Forms.NUnitFormTest.Verify()
In that case you can override the UseHidden property in your test class and set it to return false. This will cause the tests to run on the original, standard desktop.
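A minimal sketch of that override, assuming (as the description above implies) that UseHidden is a virtual bool property on NUnitFormTest:

using NUnit.Extensions.Forms;
using NUnit.Framework;

[TestFixture]
public class GuiTest : NUnitFormTest
{
    // Run the tests on the real desktop instead of the separate hidden one.
    public override bool UseHidden
    {
        get { return false; }
    }
}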
I don't think this is particular to Selenium, but I've included that tag because I think it's a problem that's very relevant to Selenium tests.
I have a Browser class that's working as it stands:
public static class Browser
{
    private static IWebDriver webDriver;
    private static IWebDriver ieDriver;
    private static IWebDriver chromeDriver;
    private static BrowserType _browserType;

    public static BrowserType BrowserType
    {
        set
        {
            _browserType = value;
            switch (_browserType)
            {
                case BrowserType.IE:
                    if (ieDriver == null)
                    {
                        var ieOptions = new InternetExplorerOptions();
                        ieOptions.InitialBrowserUrl = "about:home";
                        ieDriver = new InternetExplorerDriver(DriverPath, ieOptions);
                    }
                    webDriver = ieDriver;
                    break;
                case BrowserType.Chrome:
                    if (chromeDriver == null)
                    {
                        chromeDriver = new ChromeDriver(DriverPath);
                    }
                    webDriver = chromeDriver;
                    break;
                default:
                    if (chromeDriver == null)
                    {
                        chromeDriver = new ChromeDriver(DriverPath);
                    }
                    webDriver = chromeDriver;
                    break;
            }
        }
        get { return _browserType; }
    }

    public static void Goto(string url)
    {
        webDriver.Navigate().GoToUrl(url);
    }
}
The problem is that each of these browsers should run in its own thread, so that each test can run on every browser simultaneously (cutting cross-browser test time down to the time it takes to run a single browser's tests). Right now the tests are called sequentially with the following method:
public void RunTest(Func<TestSettings, TestRole, bool> testToRun)
{
    foreach (var browserType in BrowserTypes)
    {
        // Assert test passes in given browser
        // browser should have its own thread
    }
}
How can multithreading be achieved in this scenario?
Multithreading is usually achieved by running multiple tests through a unit-test runner.
For PHP you have PHPUnit and some other options:
http://net.tutsplus.com/tutorials/php/parallel-testing-for-phpunit-with-paratest/
For Java you could try digging into the maven-surefire-plugin with JUnit.
http://maven.apache.org/surefire/maven-surefire-plugin/examples/junit.html
I don't know if it is achievable through any Selenium API.
If you find a way, please make sure to let me know!
Hope this helps.
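On the .NET side, the counterpart to those runners is NUnit 3's parallel-execution support, already shown at the top of this page; a minimal sketch of the assembly-level settings:

// In AssemblyInfo.cs (or any source file) of the test project:
using NUnit.Framework;

[assembly: Parallelizable(ParallelScope.Fixtures)] // run test fixtures in parallel
[assembly: LevelOfParallelism(2)]                  // cap the number of worker threads at 2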
I see that you have only one driver:
private static IWebDriver webDriver;
When you set BrowserType for the first time (for example as IE) you assign webDriver (as IE).
Then when you set BrowserType for the second time (for example as Chrome) you re-assign webDriver (now it is Chrome; the IE reference is lost). You will never get both browsers running simultaneously this way.
BrowserType should be set externally, for example as a parameter of your test project or from App.config. If you want to run tests on one machine simultaneously, create an app (a console app, for example) that launches your test project with different BrowserType values in two different threads.
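A minimal sketch of that idea (not the poster's Browser class): give each browser type its own thread and its own, non-shared driver instance, and pass the test in as a delegate, much like the existing RunTest method does. BrowserType is assumed to be the enum from the question.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.IE;

public static class ParallelBrowserRunner
{
    public static void RunForAllBrowsers(IEnumerable<BrowserType> browserTypes, Action<IWebDriver> testToRun)
    {
        // One thread per browser type; each thread owns its driver for the whole run.
        Parallel.ForEach(browserTypes, browserType =>
        {
            IWebDriver driver = CreateDriver(browserType);
            try
            {
                testToRun(driver);
            }
            finally
            {
                driver.Quit();
            }
        });
    }

    private static IWebDriver CreateDriver(BrowserType browserType)
    {
        switch (browserType)
        {
            case BrowserType.IE:
                return new InternetExplorerDriver();
            default:
                return new ChromeDriver();
        }
    }
}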
I was reading through this link on category expressions used with the /include or /exclude options. I want to be able to run only one of the two available tests, or run all of them, using /include:A+B or /exclude:A. However, for some reason, it displays the wrong number of tests run and/or not run. Why is that?
Can anyone provide an example of how to use category expressions (by manipulating the source code) and show how to run the command in the console?
Essentially what I did was:
using System;
using NUnit;
using NUnit_Application;
using NUnit.Framework;
namespace NUnit_Application.Test
{
    [TestFixture]
    [Category("MathS")]
    public class TestClass
    {
        [TestCase]
        [Category("MathA")]
        public void AddTest()
        {
            MathsHelper helper = new MathsHelper();
            int result = helper.Add(20, 10);
            Assert.AreEqual(40, result);
        }

        [TestCase]
        [Category("MathB")]
        public void SubtractTest()
        {
            MathsHelper helper = new MathsHelper();
            int result = helper.Subtract(20, 10);
            Assert.AreEqual(10, result);
        }
    }
}
And my command line statement was
nunit-console /framework:net-4.0 /run:NUnit_Application.Test.TestClass.AddTest C:~\NUnit_Application\NUnit_Application\NUnit_Application.Test\bin\Debug\NUnit_Application.Test.dll /include:"MathA"
The thing is, the console understands what the commands mean and says it included the MathA category. However, it shows that zero tests ran and zero tests did not run.
I'm running NUnit 2.6.2, the console runner.
Here is the command I used initially:
nunit-console /framework:net-4.0 /run:NUnit_Application.Test.TestClass.AddTest C:~\NUnit_Application\NUnit_Application\NUnit_Application.Test\bin\Debug\NUnit_Application.Test.dll /include:"MathA"
I noticed that if I just target TestClass, and not the individual test case, it works:
nunit-console /framework:net-4.0 /run:NUnit_Application.Test.TestClass C:~\NUnit_Application\NUnit_Application\NUnit_Application.Test\bin\Debug\NUnit_Application.Test.dll /include:"MathA"
I think it's because you have the whole class decorated with the attribute:
[Category("MathS")]
so it skips over it.
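For what it's worth, the /include filter can also be applied without a /run option at all, which sidesteps that interaction; a sketch using only the options already shown above (path shortened):
nunit-console /framework:net-4.0 NUnit_Application.Test.dll /include:MathA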