I have a fairly large Coded UI test and have set up each task in its own .cs class file. The main objective of the test is to check that objects have loaded on various pages in a browser. The test is set up to loop through an XML config file and invoke each method listed in the XML as the user sees fit.
Because I don't want every test method to run every time, I do not have the [TestMethod] attribute declared at the top of each class/method. Unfortunately, this means that each method that is invoked will not show up individually in the test results view, which is a big disadvantage.
Is there a way that I can apply the [TestMethod] attribute each time a method is invoked, but only for the methods I want?
The test runner uses reflection on the test assemblies to find methods with the [TestMethod] attribute and then calls those methods one by one to execute the tests. To do what you want you'd need to change the test runner, and even then you'd have to do something to change the IL of the test assemblies to dynamically add the attributes, reload the assemblies, and probably a whole lot of other things I'm glossing over. You'd basically be writing your own test framework if you got that far.
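For illustration, the discovery step is roughly this kind of loop (a simplified sketch, not the actual MSTest implementation; the assembly path is a placeholder):

using System;
using System.Linq;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class NaiveRunner
{
    // Simplified sketch of what a runner does: find public methods carrying
    // [TestMethod] and invoke each one on a fresh instance of its class.
    public static void RunAll(string assemblyPath)
    {
        var assembly = Assembly.LoadFrom(assemblyPath);

        foreach (var type in assembly.GetTypes())
        {
            var testMethods = type.GetMethods()
                .Where(m => m.GetCustomAttribute<TestMethodAttribute>() != null);

            foreach (var method in testMethods)
            {
                var instance = Activator.CreateInstance(type);
                method.Invoke(instance, null);
            }
        }
    }
}

Anything without the attribute is simply invisible to that loop, which is why your dynamically invoked methods never appear as individual results.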
Instead, is there a reason you don't want to use test lists? They do what you seem to be asking for.
I have an NUnit test project containing a bunch of Test classes/fixtures each of which inherits from an abstract base class hierarchy. These sealed test classes all have a TestFixtureSource attribute attached at the class level:
[TestFixtureSource(typeof(ExecutionBrowsers))]
public sealed class MyTestClass : TestBase
Where ExecutionBrowsers is defined:
internal sealed class ExecutionBrowsers : IEnumerable
{
    public IEnumerator GetEnumerator()
    {
        yield return Browser.Chrome;
        yield return Browser.Edge;
        yield return Browser.Firefox;
    }
}
So essentially each test class will be instantiated 3 times, once for each browser. I want to run these tests in parallel in such a way that a browser does not have more than one test using it at the same time (I have a hard limitation on this - let's not get into that). So what I did was to add a .cs file at the root of the project and stick the following attributes in it:
[assembly: NUnit.Framework.FixtureLifeCycle(NUnit.Framework.LifeCycle.InstancePerTestCase)]
[assembly: NUnit.Framework.Parallelizable(NUnit.Framework.ParallelScope.Fixtures)]
[assembly: NUnit.Framework.LevelOfParallelism(3)]
This doesn't quite work, though: it does not restrict tests to one per browser at any given time. It starts with the first test in the first test class (some classes have more than one test), running it on each of the three browsers. However, if one browser takes longer than the others, execution gets out of sync and two tests end up running on the same browser.
How can I achieve the behaviour that I want?
Well, see, we have a situation: unit test frameworks do not run tests of the same collection concurrently, so you must rethink the structure of your tests so that they are separate from each other. I didn't see enough of your structure to be able to assist with this restructuring.
You're trying to achieve fine control by putting attributes at the assembly level. There are lots of ways that can go wrong and you have discovered one of them. I recommend avoiding use of assembly-level ParallelizableAttribute unless you are absolutely sure that the specified parallel behavior will work for all your test fixtures as well as any you or others may add in the future. ;-)
Instead, add [Parallelizable] to the class. It will apply to each of your fixture instances and will allow them to run in parallel with one another. The individual test cases will be non-parallelizable by default with respect to one another.
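For example, applied to the fixture from the question (the constructor shape shown here is an assumption about how TestBase receives the browser):

[TestFixtureSource(typeof(ExecutionBrowsers))]
[Parallelizable]   // ParallelScope.Self: the three browser-specific instances may run in parallel
public sealed class MyTestClass : TestBase
{
    public MyTestClass(Browser browser) : base(browser) { }   // assumed constructor

    [Test]
    public void FirstTest()
    {
        // Test cases within one fixture instance stay non-parallel by default,
        // so only one test touches this instance's browser at a time.
    }
}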
For the other attributes, you should eliminate [FixtureLifeCycle] unless you have a specific reason why you need it, i.e. unless your tests are running in parallel and changing the state of the fixture. You should only use [LevelOfParallelism] if it is needed for performance and should not count on it to keep any particular set of tests from running with one another.
You have not said how you run the tests. The above will work if you are running straight nunit console plus framework from the command line. If you are using Visual Studio, there are some other considerations because Test Explorer can change what NUnit thinks you are doing based on how it runs the tests.
I'd like to have one file or class in which I can control which tests should be executed.
I know there is TestNG for Java, which can be used for that.
But I can't find anything for C# on Google or here on Stack Overflow related to this problem.
My current test framework has 17 automation tests (17 classes), and many more will be added this year.
Therefore I'd like to have one file/class/method in which I can set which tests should be executed or not executed, as I don't want every test to be triggered while I'm actively working on 2-3 automation tests.
My first idea:
In NUnit we can put an [Ignore("reason")] attribute above a class or method, which skips that test.
Is it possible to control this attribute from outside the class?
I'd be happy and thankful for any other suggestions!
NUnit is the way to go: a single class can hold however many [Test] methods you create.
You can use the [Ignore] attribute you suggested to filter out tests that are not ready, or that you just don't want to execute.
You can also use event listeners, a very useful tool for hooking into the before, after, fail, and pass stages of all your tests. You can mark the unwanted tests with a specific condition and, in the "test started" listener, assert an ignore; this will affect all tests marked with that condition.
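NUnit doesn't have TestNG-style listeners as such; the closest simple equivalent of "assert an ignore when the test starts" is a [SetUp] in a shared base class that checks one central list. A minimal sketch (the list contents and class names are placeholders):

using System;
using System.Linq;
using NUnit.Framework;

public static class TestSelection
{
    // Single place to control which test classes actually execute this run.
    public static readonly string[] ActiveFixtures =
    {
        "LoginTests",       // hypothetical class names
        "CheckoutTests",
    };
}

public abstract class SelectiveTestBase
{
    [SetUp]
    public void SkipIfNotSelected()
    {
        var className = TestContext.CurrentContext.Test.ClassName ?? string.Empty;

        // Assert.Ignore marks the test as ignored rather than failed.
        if (!TestSelection.ActiveFixtures.Any(f => className.EndsWith(f, StringComparison.Ordinal)))
            Assert.Ignore("Fixture is not in the active list for this run.");
    }
}

Every test class then inherits from SelectiveTestBase, and you edit only the ActiveFixtures list while you are working on your 2-3 tests.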
I have a testing framework that has been converted to heavily utilize NUnit [Parallelizable]. I used to store contextual test data in the base class of the [TestFixture], where NUnit orchestrates hooks like [OneTimeSetUp], [TearDown], etc.
For example:
[Test]
public void GoToGoogle()
{
    var driver = new ChromeDriver();
    // do some stuff
    // Would like to pass data outside of test scope
    TestContext.CurrentContext.Test.Properties.Set("DriverUrl", driver.Url); // Obviously does not work
    Assert.Fail("This test should fail");
}
In the [TearDown] hook, I would like to get certain information about the test contextually, because not everything can be handled nicely in asserts.
[TearDown]
public void TearDown()
{
    var url = TestContext.CurrentContext.Test.Properties["DriverUrl"].ToString();
    var msg = $"Test encountered an error at URL: {url}";
    TestAPI.PushResult(Result.Fail, msg);
}
The code above involving the TestContext does not work for obvious reasons, but I am wondering if there is a best practice that allows me to pass data in this manner, keeping in mind [Parallelizable] and the fact that I cannot scope test data or dependencies to the [TestFixture].
You say "for obvious reasons" but I'll first spell out the reasons why you cannot effectively set a property on the current test through TestContext. After all, other people just might be reading this. :-)
The Obvious Part
TestContext.CurrentContext.Test does not return the internal representation of a test from inside NUnit. Doing so would allow users to break NUnit in a variety of ways. In particular, TestContext.CurrentContext.Test.Properties returns a copy of the properties used within NUnit.
That copy of the properties is not read-only, so you are able to set properties on it. For that reason, one might expect to be able to set a value in the [Test] method and access it in the [TearDown].
Unfortunately, because of a minor implementation detail, that's not the case. In fact, each time you use TestContext.CurrentContext, an entirely new copy of the context is created. The only reason for this, I'm afraid, is that it was originally implemented that way and is a bit difficult to change in a non-breaking way.
As a result of this implementation detail, we lost an easy way for the three parts (SetUp, Test method, TearDown) of a test to communicate. Prior to the availability of parallel execution, it was possible to pass such information using members of the fixture class. That no longer works once tests are run in parallel.
Workarounds
Use Thread Local Storage to hold the retained information. SetUp, Test and Teardown all run on the same thread. Note that OneTimeSetUp and OneTimeTearDown will not generally use the same thread in a parallel execution environment.
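A minimal sketch of that first workaround, assuming all you need to carry from the test body to [TearDown] is a string such as the driver URL:

using System.Threading;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[TestFixture]
[Parallelizable(ParallelScope.All)]
public class GoogleTests
{
    // SetUp, the test body, and TearDown of a given case share one thread,
    // so a ThreadLocal slot is a safe per-test stash even when cases run in parallel.
    private static readonly ThreadLocal<string> DriverUrl = new ThreadLocal<string>();

    [Test]
    public void GoToGoogle()
    {
        // driver setup omitted; store whatever TearDown will need later
        DriverUrl.Value = "https://www.google.com/";
        Assert.Fail("This test should fail");
    }

    [TearDown]
    public void TearDown()
    {
        if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed)
            TestContext.Out.WriteLine($"Test encountered an error at URL: {DriverUrl.Value}");

        DriverUrl.Value = null;  // clear the slot for the next test on this thread
    }
}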
If you are willing to run fixtures in parallel but not individual test cases, then you can still use class members to retain information. As a further step, apply the SingleThreadedAttribute to your fixture, forcing all the code associated with it (including one-time setup and teardown) to run on the same thread.
If you have many fixtures, which can run in parallel, the second approach may actually give you a better performance trade-off than other approaches. Unfortunately, not everyone can use it - at least not without a major reorganization of their tests. You have to look at what your own tests are doing.
Permanent Solution
That would be to modify NUnit so that properties are both writable and shareable, at least within a single fixture instance. There have already been a few feature requests out there to do that on the NUnit GitHub project. I'm no longer active on the framework project, so I don't know what the plans are. However, I think I can say that it's not likely to happen before a major version change, i.e. NUnit 4.0.
The figure above shows a TestSuite/Plan in Ranorex.
[SETUP] represents launching the .exe recording, while [TEARDOWN] represents exiting the .exe.
How can I imitate this test case plan structure using only Visual Studio Coded UI?
It would be repetitive to launch and close my .exe in every test case, so if possible I would like to set it up only once.
Does a [TestMethod] in coded ui represents a test case?
We have faced the same problem and resolved it by first making an assumption.
A Microsoft TestMethod does not correspond to a Ranorex Test Case; it corresponds to a Ranorex Run Configuration (as defined in the test suite).
A Run Configuration comes with configuration. As you may already know, on the command line, it is possible to execute a Ranorex Test Case or a Ranorex Run Configuration, but it is better/easier to execute a Run Configuration since it comes with context (and also most development can be done by non-programmer from within Ranorex!).
In the end, what we did is use TestMethod to call Run Configuration(s).
The following Ranorex How To article describes how to do this:
http://www.ranorex.com/news/article/howto-test-automation-with-tfs-and-ranorex.html
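In rough outline, such a TestMethod just launches the compiled Ranorex test suite executable and checks how it exited. The path, run configuration name, command-line switch, and exit-code convention below are assumptions for illustration, not taken from the article; check your Ranorex version's documentation for the exact arguments.

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RanorexRunConfigurationTests
{
    [TestMethod]
    public void SmokeTests_RunConfiguration()
    {
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\Automation\MySuite.exe",   // hypothetical suite executable
            Arguments = "/runconfig:\"SmokeTests\"",    // hypothetical run configuration switch
            UseShellExecute = false
        };

        using (var process = Process.Start(psi))
        {
            process.WaitForExit();

            // Assumes the suite signals failed test cases via a non-zero exit code.
            Assert.AreEqual(0, process.ExitCode, "Ranorex run configuration reported failures.");
        }
    }
}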
If this method does not suit your setup, you can probably invoke Ranorex Test Cases directly in a test method (and replicate whatever sequence is shown in your test suite), but that would be more complicated and involve more maintenance IMHO (which must be done by programmers).
Hope this helps!
Hugo
You're right about [TestMethod] representing a test case.
To imitate the [Setup] and [TearDown] behavior of Ranorex, instead of using the [TestInitialize] and [TestCleanup] attributes, use the [ClassInitialize] and [ClassCleanup] attributes (or [AssemblyInitialize] and [AssemblyCleanup] if you want them to run once for all classes in the project).
Note that these methods must be static, and the initialize ones must accept a TestContext parameter.
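A rough sketch of that layout (the application path is a placeholder; swap in [AssemblyInitialize]/[AssemblyCleanup] with the same signatures to launch the .exe once for the entire run):

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class MyAppTests
{
    private static Process _app;

    // Runs once before any test in this class; must be static and take a TestContext.
    [ClassInitialize]
    public static void LaunchApp(TestContext context)
    {
        _app = Process.Start(@"C:\Path\To\MyApp.exe"); // hypothetical path to your .exe
    }

    // Runs once after the last test in this class; must be static.
    [ClassCleanup]
    public static void CloseApp()
    {
        if (_app != null && !_app.HasExited)
            _app.Kill();
    }

    [TestMethod]
    public void FirstTestCase() { /* ... */ }

    [TestMethod]
    public void SecondTestCase() { /* ... */ }
}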
I'm using the latest NUnit to run Selenium tests. The tests are compiled into a class library DLL file which is then run by NUnit.
My problem is that before the automation begins, I need to run some initialization such as creating a log file, setting up specific parameters, etc. I don't see a way to do this in NUnit - [SetUp] does this, but for every test or fixture - I just need to run this code once at the start of the application.
Any idea how I can do what I want?
Your help is very appreciated.
J.
Take a look at SetUpFixtureAttribute (more information here). It says:
This is the attribute that marks a class that contains the one-time setup or teardown methods for all the test fixtures under a given namespace. The class may contain at most one method marked with the SetUpAttribute and one method marked with the TearDownAttribute.
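With current NUnit 3.x the one-time methods inside a [SetUpFixture] are written as [OneTimeSetUp]/[OneTimeTearDown] (the quoted text uses the older 2.x naming). A minimal sketch:

using NUnit.Framework;

// Placed outside any namespace, this runs once before the first test in the
// assembly and once after the last; inside a namespace, it scopes to that namespace.
[SetUpFixture]
public class GlobalTestSetup
{
    [OneTimeSetUp]
    public void BeforeAnyTests()
    {
        // create the log file, read configuration, set up parameters, etc.
    }

    [OneTimeTearDown]
    public void AfterAllTests()
    {
        // flush and close the log file
    }
}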