Our current C#/NUnit 2.6.3 test framework has a regression suite that takes over 35 hours to run on a single PC, with some tests in the fixtures lasting as long as 20 minutes. Setting up batches of tests to run on multiple machines is time-consuming and inefficient, so I'm trying to migrate the tests to NUnit 3 to get the benefit of parallel execution on Selenium Grid.
My aim is to have 12 nodes, each running a single instance of IE. However, it appears the NUnit 3 Test Adapter for Visual Studio is trying to run all tests simultaneously.
As I will always be executing tests from more fixtures than I have nodes, it is important that fixtures sit in a queue until a node becomes available. In practice a test fixture may have to wait a couple of hours for a free node.
For my current configuration experiment I have the following setup:
A hub with the following config: java -jar selenium-server-standalone-2.48.2.jar -role hub -newSessionWaitTimeout:-1 -browserTimeout 120 -timeout 3600
A single node in default config.
Two test fixtures, each with 10 tests. The test fixtures have the following attribute: [Parallelizable(ParallelScope.Self)]
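For reference, each of the two fixtures is shaped roughly like this (class and test names are placeholders; only the attribute comes from my real code):

using NUnit.Framework;

[TestFixture]
[Parallelizable(ParallelScope.Self)]
public class FixtureA
{
    [Test] public void Test01() { /* drives IE through the grid hub */ }
    [Test] public void Test02() { /* ... */ }
    // ...10 tests in total
}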
In this situation I would expect that, since there is only a single node supporting a single instance of IE, only a single test would be executed at a time. The hub would then send the next test in the queue to the node when it became free. However, it appears that both test fixtures are being run simultaneously. One test is pushed to the node, but tests in the other fixture are failing with the following message:
Result Message:
OpenQA.Selenium.WebDriverException : The HTTP request to the remote WebDriver server for URL http://localhost:4444/wd/hub/session timed out after 60 seconds.
----> System.Net.WebException : The operation has timed out
When I used grid on Eclipse in a Java/JUnit framework I had no problems. The hub would queue tests until a node became free without any timeout, using the default config.
Does anyone know the correct configuration or is this a problem with the NUnit 3 Test adapter? Browser choice is unfortunately fixed as IE.
I found that the number of parallel threads can be controlled by setting the LevelOfParallelism attribute in AssemblyInfo.
// Determines the number of parallel threads that run simultaneously
[assembly: LevelOfParallelism(7)]
If this attribute is set to the number of nodes available then the tests queue as I expected.
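For the 12-node setup described above, that would be, for example:

// One worker thread per grid node (12 nodes in my case)
[assembly: LevelOfParallelism(12)]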
Related
I have a long-running feature in the app (an entire test can take over 45 minutes). I test this feature and also run Selenium to do some automated data creation in the test environment. Due to the nature of the system and code, there are often error messages that can appear at random, depending on the data or just the time of day. What I want to know is whether there is a way to fail the test if any popup window is displayed. (I have three areas in the test with a 20 minute timeout function, but the error messages are displayed almost immediately, so I want to bail out as soon as any error message appears.)
You could add assertions for "popup window is not displayed". There are a couple of ways I could see doing this:
Have the test for error messages run asynchronously, if possible, making the assertion every 30 seconds (or whatever interval you like).
Add the assertions in strategic locations in your test where you might expect the errors to appear.
This approach would cause a test to be marked as failed, but it wouldn't stop the tests from running.
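As a rough C# sketch of such an assertion (the error-popup locator is an assumption about your application and would need adjusting):

using NUnit.Framework;
using OpenQA.Selenium;

public static class PopupGuard
{
    // Hypothetical helper: call this at strategic points to bail out early.
    public static void AssertNoErrorPopup(IWebDriver driver)
    {
        // The locator is illustrative; use whatever identifies your app's error dialogs.
        var popups = driver.FindElements(By.CssSelector(".error-popup, .modal-error"));
        Assert.That(popups, Is.Empty, "An error popup appeared; failing the test early.");
    }
}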
I have dealt with a similar scenario using Selenium with JavaScript, where tests run asynchronously. I'm not familiar with testing in C#.
How do you execute millions of unit tests quickly, meaning in 20 to 30 minutes?
Here is the scenario:
You are releasing certain hardware and you have, let's say, 2000 unit tests.
You are releasing new hardware and you have an additional 1000 tests for that.
Each new piece of hardware includes its own tests, but you also have to run every previous test, and the number keeps growing, as does the execution time.
During development, this is solved by categorizing tests with the TestCategory attribute and running only what you need.
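For example (MSTest, with illustrative class and category names), tests are tagged so that only the relevant subset is run locally:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class BoardTests
{
    [TestMethod]
    [TestCategory("NewHardware")]     // run locally while developing the new board
    public void PowerBudgetIsRespected() { /* ... */ }

    [TestMethod]
    [TestCategory("LegacyHardware")]  // still run on CI for regression coverage
    public void LegacyRegisterMapIsStillValid() { /* ... */ }
}

A subset can then be selected with something like vstest.console.exe Tests.dll /TestCaseFilter:"TestCategory=NewHardware".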
The CI, however, must run every single test. As the number increases, execution takes longer and sometimes times out. The .testrunconfig is already set up for parallelTestCount execution, but over time this does not solve the issue permanently.
How would you solve this?
It seems like with each update to Visual Studio 2017, execution time changes. We currently have over 6000 tests, of which 15 to 20% are unit tests; the rest are integration tests.
The bottleneck seemed to be the CI server itself, running on a single machine. 70% to 80% of the tests are asynchronous, and analysis showed that there are no blocking I/O operations. Apart from that I/O, we do not use databases or caching.
Now, we are in the process of migrating to Jenkins and using its Parallel Test Executor Plugin to parallelize the tests across multiple nodes instead of a single machine. Initial testing showed that the time to execute the 6000+ tests varies from 10 to 15 minutes, versus the old CI, which took 2 hours or sometimes stalled.
I decided to switch from our current solution (NDistribUnit, an NUnit modified by our team some years ago, which runs tests on virtual machines and then gathers the results on a hub server) to Selenium Grid 2.
I tried the option with the ParallelizableAttribute.
Unfortunately, I noticed that the IWebDriver was stored in a global variable (ugh). This caused the tests to start several browser instances, but all the tests used the single IWebDriver, so test execution happened in a single browser: the tests ran in a single process, but with several 'worker' threads. This was tried using 2 VMs as nodes and my local PC as the hub.
I know the best solution is to fix the flawed design of storing the driver in a global variable, but that will take too much time: there are 3k+ heavy UI tests to be updated, and many static methods that expect the driver to be a global variable would need updating as well.
NUnit 3.0 also provides an option to run several assemblies in parallel. That is good for running several test projects, but we currently have one assembly per application.
It would be nice to run tests for one application (one assembly) in parallel.
Are there other ways to use Grid + NUnit 3 here to make this work?
In the end, the existing solution was refactored: each test now has its own driver during execution. Because of this change a lot of code was rewritten (it turned out that far too many methods expected IWebDriver to be a global variable).
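The shape we refactored towards looks roughly like this (class name, hub address and browser capabilities are illustrative):

using System;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

public abstract class UiTestBase
{
    // Each test gets its own driver instead of sharing one static/global instance.
    protected IWebDriver Driver { get; private set; }

    [SetUp]
    public void CreateDriver()
    {
        Driver = new RemoteWebDriver(
            new Uri("http://hub-host:4444/wd/hub"),      // illustrative hub address
            DesiredCapabilities.InternetExplorer());     // illustrative capabilities
    }

    [TearDown]
    public void QuitDriver()
    {
        if (Driver != null)
        {
            Driver.Quit();
        }
    }
}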
Actually, there are 2 options to do that:
Refactoring - this has been done for one test project.
Along with removing the static variables (the initial purpose of the refactoring), other code was changed as well.
A big minus: significant effort was required.
Using TeamCity agents for a parallel run.
I forgot to mention that the tests are ultimately executed on TeamCity, but by a single agent.
For the remaining 'old' tests (where the driver instance is stored in a static variable), several TeamCity agents were configured, each running only a few classes from the test solution.
This option is very 'fast' and doesn't require big code changes.
I thought I understood how MbUnit's parallel test execution worked, but the behaviour I'm seeing differs enough from my expectations that I suspect I'm missing something!
I have a set of UI tests that I wish to run concurrently. All of the tests are in the same assembly, split across three different namespaces. All of the tests are completely independent of one another, so I'd like all of them to be eligible for parallel execution.
To that end, I put the following in the AssemblyInfo.cs:
[assembly: DegreeOfParallelism(8)]
[assembly: Parallelizable(TestScope.All)]
My understanding was that this combination of assembly attributes should cause all of the tests to be considered [Parallelizable], and that the test runner should use 8 threads during execution. My individual tests are marked with the [Test] attribute, and nothing else. None of them are data-driven.
However, what I actually see is at most 5-6 threads being used, meaning that my test runs are taking longer than they should be.
Am I missing something? Do I need to do anything else to ensure that all of my 8 threads are being used by the runner?
N.B. The behaviour is the same irrespective of which runner I use. The GUI, command line and TD.Net runners all behave the same as described above, again leading me to think I've missed something.
EDIT: As pointed out in the comments, I'm running v3.1 of MbUnit (update 2, build 397). The documentation suggests that the assembly-level [Parallelizable] attribute is available, but it also seems to reference v3.2 of the framework, despite that not yet being available.
EDIT 2: To further clarify, the structure of my assembly is as follows:
assembly
  - namespace
    - fixture
      - tests (each carrying only the [Test] attribute)
    - fixture
      - tests (each carrying only the [Test] attribute)
  - namespace
    - fixture
      - tests (each carrying only the [Test] attribute)
    - fixture
      - tests (each carrying only the [Test] attribute)
  - namespace
    - fixture
      - tests (each carrying only the [Test] attribute)
    - fixture
      - tests (each carrying only the [Test] attribute)
EDIT 3: OK, I've now noticed that if I only ever run one fixture at a time, the maximum number of tests running concurrently is always 8. As soon as I select multiple fixtures, it drops to either 5 or 6. If I take the contents of two fixtures (currently they contain 12 tests each) and drop them into the same fixture (for a total of 24 tests in that one fixture) that fixture will also always run 8 tests concurrently.
This seems to show that it isn't an issue in the individual tests, but rather in how the assembly level attributes percolate down to the fixture, or how the test runner consumes those attributes.
Additionally, I also observed (when running two fixtures) that once one of the two fixtures has been executed in its entirety, the runner starts to execute more tests concurrently once it's back down to running only one fixture. For me right now, the first fixture finishes executing when there are 7 tests left to run in the second fixture. As soon as that happens, the number of tests running concurrently jumps from the previous 5 or 6 up to the maximum available of 7.
According to the release notes of Gallio v3.0.6:
MbUnit helps you get the most out of your multi-core CPU. Mark any test [Parallelizable] and it will be permitted to run in parallel with other parallelizable tests in the same fixture.
Fixtures can also be marked parallelizable to enable them to be run in parallel with other parallelizable fixtures.
Please note that if you want all tests within a fixture to be considered parallelizable then you still need to add [Parallelizable] to each of them. (We might add a feature to set this at the fixture or assembly level later based on user feedback.)
Also note that just because a test or fixture is marked parallelizable does not mean it will run in parallel with other tests in particular. For the sake of efficiency, we limit the number of active tests threads based on the configured degree of parallelism. If you want a specific number of instances of a test to run in parallel with each other, consider using [ThreadedRepeat].
The degree of parallelism setting controls the maximum number of tests that MbUnit will attempt to run in parallel with one another. By default, the degree of parallelism equals the number of CPUs you have, or 2 at a minimum.
If you don't like the default then you can override the degree of parallelism at the assembly-level like this:
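The snippet the release notes refer to is the same assembly-level attribute shown earlier, e.g.:

// The value is just an example; pick whatever suits your hardware.
[assembly: DegreeOfParallelism(16)]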
I don't know if it helps. Maybe Jeff could give more details as he had implemented that feature.
I ran into the same problem; here are my findings:
[assembly: Parallelizable(...)] at the assembly level overrides fixture-level Parallelizable attributes and results in each fixture's tests being run one at a time, while the fixtures themselves run in parallel. There seems to be a maximum of 5-6 fixtures in parallel.
[Parallelizable(TestScope.Descendants)] at the fixture level results in the fixtures being run one at a time, but the tests within a fixture running in parallel. There seems to be no maximum on tests in parallel.
Ultimately, because of the assembly-level cap on fixture parallelism, the only way is to use fixture-level attributes and have each fixture's tests run in parallel.
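In other words, a fixture shaped like this (names are placeholders; namespaces omitted):

[TestFixture]
[Parallelizable(TestScope.Descendants)]
public class LargeUiFixture
{
    [Test] public void Scenario01() { /* ... */ }
    [Test] public void Scenario02() { /* ... */ }
    // ...keep many tests in one fixture so they run concurrently with each other
}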
I would suggest creating fewer fixtures with more tests per fixture to get around this issue. You could always launch multiple runners per assembly, perhaps.
Shame this is the case.
The override does not work for more than 5 tests running at a time.
We have 25 systems on Sauce Labs available to execute 25 scripts at a time; we override the DegreeOfParallelism to 20, yet only 5 execute at a time.
[assembly: DegreeOfParallelism(20)] - does not work for MbUnit
Has anyone found a way to run Selenium RC / Selenium Grid tests, written in C# in parallel?
I've currently got a sizable test suite written using Selenium RC's C# driver. Running the entire test suite takes a little over an hour to complete. I normally don't have to run the entire suite, so it hasn't been a concern up to now, but it's something that I'd like to be able to do more regularly (i.e., as part of an automated build).
I've been spending some time recently poking around with the Selenium Grid project, whose purpose essentially is to allow those tests to run in parallel. Unfortunately, it seems that the TestDriven.net plugin that I'm using runs the tests serially (i.e., one after another). I'm assuming that NUnit would execute the tests in a similar fashion, although I haven't actually tested this.
I've noticed that the NUnit 2.5 betas are starting to talk about running tests in parallel with pNUnit, but I haven't really familiarized myself enough with the project to know for sure whether this would work.
Another option I'm considering is separating my test suite into different libraries which would let me run a test from each library concurrently, but I'd like to avoid that if possible since I'm not convinced this is a valid reason for splitting up the test suite.
I am working on this very thing and have found that the latest Gallio can drive MbUnit tests in parallel. You can run them against a single Selenium Grid hub, which can have several remote control servers listening.
I'm using the latest nightly from Gallio to get the ParallelizableAttribute and DegreeOfParallelismAttribute.
One thing I've noticed is that I cannot rely on the test SetUp and TearDown being isolated between the parallel tests. You'll need the test to look something like this:
[Test]
public void Foo()
{
    var s = new DefaultSelenium("http://grid", 4444, "*firefox",
        "http://server-under-test");
    s.Start();
    s.Open("mypage.aspx");
    // Continue with the test steps...
    s.Stop();
}
Using the [SetUp] attribute to start the Selenium session was causing the tests to not get the remote session from s.Start().
I wrote PNUnit as an extension for NUnit almost three years ago, and I'm happy to see it has finally been integrated into NUnit.
We use it on a daily basis to test our software under different distros and combinations. Just to give an example: we have a suite of heavy (long-running) tests containing about 210 tests. Each of them sets up a server and runs a command-line client that performs several operations (up to 210 scenarios).
We use the same suite to run the tests on different Linux and Windows combinations, and also on mixed ones such as a Windows server with a Linux client, Windows XP, Vista, then on a domain controller, outside the domain, and so on. We use the same binaries and just have 'agents' launched on several boxes.
We use the same platform for balancing the test load - I mean, running the suite in chunks so it finishes faster - for running several combinations at the same time, and, what I think is more interesting, for defining multi-client scenarios: two clients wait for the server to start up, then launch operations, synchronize with each other, and so on. We also use PNUnit for load testing (hundreds of boxes against a single server).
So, if you have any questions about how to set it up (which is not simple yet, I'm afraid), don't hesitate to ask.
Also I wrote an article long ago about it at DDJ: http://www.ddj.com/architect/193104810
Hope it helps
I don't know if "no answer" counts as an answer, but I'd say you have researched everything and really have come up with the 2 possible solutions...
Test Suite runs tests in parallel
Split the test suite up
I am at a loss for anything else.