The figure above shows a TestSuite/Plan in Ranorex.
[SETUP] represents a recording that launches the .exe, while [TEARDOWN] represents one that exits the .exe.
How can I imitate this test case plan structure using only Visual Studio Coded UI?
It would be repetitive to launch and close my .exe in every test case, so if possible I would like to set that up only once.
Does a [TestMethod] in Coded UI represent a test case?
We have faced the same problem and resolved it by first making an assumption.
A Microsoft TestMethod does not correspond to a Ranorex Test Case; it corresponds to a Ranorex Run Configuration (as defined in the test suite).
A Run Configuration carries its own context. As you may already know, on the command line it is possible to execute either a Ranorex Test Case or a Ranorex Run Configuration, but it is better/easier to execute a Run Configuration since it comes with that context (and most development can then be done by non-programmers from within Ranorex!).
In the end, what we did is use TestMethod to call Run Configuration(s).
The following Ranorex How To article describes how to do this:
http://www.ranorex.com/news/article/howto-test-automation-with-tfs-and-ranorex.html
If this method does not suit your setup, you can probably invoke Ranorex Test Cases directly in a test method (and replicate whatever sequence your test suite defines), but IMHO that would be more complicated and involve more maintenance, all of which must be done by programmers.
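For illustration, here is a minimal sketch of the TestMethod-calls-Run-Configuration idea; the executable path and the run configuration name ("Smoke") are placeholders of ours, not something from the article:

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RanorexRuns
{
    [TestMethod]
    public void SmokeRunConfiguration()
    {
        var info = new ProcessStartInfo
        {
            FileName = @"C:\Tests\MyRanorexSuite.exe", // placeholder path to the compiled suite
            Arguments = "/runconfig:Smoke",            // select the Ranorex Run Configuration
            UseShellExecute = false
        };
        using (var process = Process.Start(info))
        {
            process.WaitForExit();
            // Ranorex reports a failed run through a non-zero exit code.
            Assert.AreEqual(0, process.ExitCode, "Ranorex run reported failures.");
        }
    }
}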
Hope this helps!
Hugo
You're right about [TestMethod] representing a test case.
To imitate the [Setup] and [TearDown] behavior of Ranorex, instead of using the [TestInitialize] and [TestCleanup] attributes, use the [ClassInitialize] and [ClassCleanup] attributes (or [AssemblyInitialize] and [AssemblyCleanup] if you want them to run once for all classes in the project).
Note that these methods must be static, and the initialize ones should accept a TestContext parameter.
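A minimal sketch of how that looks in a Coded UI test class (the application path is a placeholder):

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class MyAppTests
{
    private static ApplicationUnderTest app;

    // Runs once before any [TestMethod] in this class; static, with a TestContext parameter.
    [ClassInitialize]
    public static void LaunchApp(TestContext context)
    {
        app = ApplicationUnderTest.Launch(@"C:\MyApp\MyApp.exe"); // placeholder path
    }

    // Runs once after the last [TestMethod] in this class has finished.
    [ClassCleanup]
    public static void CloseApp()
    {
        app.Close();
    }

    [TestMethod]
    public void FirstTestCase()
    {
        // UI actions and assertions against the already-running app go here.
    }
}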
I have a testing framework that has been converted to make heavy use of NUnit [Parallelizable]. I used to store contextual test data in the base class of the [TestFixture], where NUnit orchestrates hooks like [OneTimeSetUp], [TearDown], etc.
For example:
[Test]
public void GoToGoogle()
{
    var driver = new ChromeDriver();
    // do some stuff
    // Would like to pass data outside of test scope
    TestContext.CurrentContext.Test.Properties.Set("DriverUrl", driver.Url); // Obviously does not work
    Assert.Fail("This test should fail");
}
In the [TearDown] hook, I would like to get certain information about the test contextually, because not everything can be handled nicely in asserts.
[TearDown]
public void TearDown()
{
    var url = TestContext.CurrentContext.Test.Properties["DriverUrl"].ToString();
    var msg = $"Test encountered an error at URL: {url}";
    TestAPI.PushResult(Result.Fail, msg);
}
The code above involving the TestContext does not work for obvious reasons, but I am wondering if there is a best practice that allows me to pass data in this manner, keeping [Parallelizable] in mind and the fact that I cannot scope test data or dependencies to the [TestFixture].
You say "for obvious reasons" but I'll first spell out the reasons why you cannot effectively set a property on the current test through TestContext. After all, other people just might be reading this. :-)
The Obvious Part
TestContext.CurrentContext.Test does not return NUnit's internal representation of the test; exposing that would allow users to break NUnit in a variety of ways. In particular, TestContext.CurrentContext.Test.Properties returns a copy of the properties used within NUnit.
That copy of the properties is not read-only, so you are able to set properties on it. For that reason, one might expect to set a value in the [Test] method and access it in the [TearDown].
Unfortunately, because of a minor implementation detail, that's not the case. In fact, each time you use TestContext.CurrentContext, an entirely new copy of the context is created. The only reason for this, I'm afraid, is that it was originally implemented that way and is a bit difficult to change in a non-breaking way.
As a result of this implementation detail, we lost an easy way for the three parts (SetUp, Test method, TearDown) of a test to communicate. Prior to the availability of parallel execution, it was possible to pass such information using members of the fixture class. That no longer works once tests are run in parallel.
Workarounds
Use Thread Local Storage to hold the retained information; SetUp, Test and TearDown all run on the same thread (see the sketch after this list of workarounds). Note that OneTimeSetUp and OneTimeTearDown will not generally use the same thread in a parallel execution environment.
If you are willing to run fixtures in parallel but not individual test cases, then you can still use class members to retain information. As a further step, apply the SingleThreadedAttribute to your fixture, forcing all the code associated with it (including one-time setup and teardown) to run on the same thread.
If you have many fixtures, which can run in parallel, the second approach may actually give you a better performance trade-off than other approaches. Unfortunately, not everyone can use it - at least not without a major reorganization of their tests. You have to look at what your own tests are doing.
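Here is a minimal sketch of the thread-local approach, reusing the names from the question (ChromeDriver, TestAPI and Result are the asker's own dependencies, not NUnit types):

using System.Threading;
using NUnit.Framework;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class GoogleTests
{
    // One slot per worker thread; the test body and TearDown read the same
    // value because NUnit runs them on the same thread.
    private static readonly ThreadLocal<string> driverUrl = new ThreadLocal<string>();

    [Test]
    public void GoToGoogle()
    {
        var driver = new ChromeDriver();
        // do some stuff
        driverUrl.Value = driver.Url; // visible to TearDown on this thread
        Assert.Fail("This test should fail");
    }

    [TearDown]
    public void TearDown()
    {
        var msg = $"Test encountered an error at URL: {driverUrl.Value}";
        TestAPI.PushResult(Result.Fail, msg);
    }
}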
Permanent Solution
That would be to modify NUnit so that properties are both writable and shareable, at least within a single fixture instance. There have already been a few feature requests out there to do that on the NUnit GitHub project. I'm no longer active on the framework project, so I don't know what the plans are. However, I think I can say that it's not likely to happen before a major version change, i.e. NUnit 4.0.
I like the convenience of test runners to run a single test method. Is there a way to run any method without making it a test?
The reason being I'm tired of writing Console apps with a CLI menu, or a Windows Form with a button per method, only for the sake of running various utility code that doesn't need to be realized in a UI. A parameterized Console app is too much effort, too.
I have multiple methods. They don't rely on each other. Each completes a task that I as a developer run.
Ideally I'd like to have a Visual Studio .csproj file with classes of individually runnable methods that would not be picked up by the test framework, but that I could choose to run from a tool like a test runner.
You can just do a method invocation in the Interactive Window in Visual Studio. You'll need to type the fully qualified name out, like MyNamespace.MyType.MyMethod("some-arg"). If it's not static, then you'll need to new it up first, of course.
If you want more convenience and don't mind paying for it, ReSharper has the ability to right-click and run/debug static methods. Alive also has this feature.
Finally, if you want to use a test runner but don't want these to be run with your "regular" tests, NUnit has the Explicit attribute, which means the tests only run when you explicitly select them. Other testing frameworks have similar concepts.
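As a small sketch of that last option (the method name is invented):

using NUnit.Framework;

public class DevUtilities
{
    // Skipped during normal runs; executes only when selected directly in a runner.
    [Test, Explicit("Utility method, not a test - run by hand")]
    public void RebuildLocalCache()
    {
        // one-off developer task goes here
    }
}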
I am looking at setting up SpecFlow for various levels of tests, and as part of that I want to be able to filter which tests run.
For example, say I want to do a full GUI test run, where I build up the dependencies for GUI testing on a dev environment and run all the specs tagged #gui, with the steps executed through the GUI. From the same script I also want to be able to run only the tests tagged #smoke, setting up any dependencies needed for a deployed environment, with the steps executed through the API.
I'm aware that you can filter tags when running through the SpecFlow runner, but I also need to change the way each test works in the context of the test run. And I want this change of behaviour to be switched with a single config/command-line arg when run on a build server.
So my solution so far is to have a build configuration for each kind of test run, plus config transforms so I can inject behaviour into SpecFlow when the test run starts up. But I am not sure of the right way to filter by tag as well.
I could do something like this:
[BeforeFeature]
public static void CheckCanRun() // SpecFlow requires [BeforeFeature] hooks to be static
{
    if (TestCannotBeRunInThisContext())
    {
        ScenarioContext.Current.Pending();
    }
}
I think this would work (it would not run the feature), but the test would still come up in my test results, which would be messy if I'm filtering out most of the tests with my tag. Is there a way I can do this which removes the feature from the run entirely?
In short, no, I don't think there is any way to do what you want other than what you have outlined above.
How would you exclude the tests from being run if they were just normal unit tests?
In ReSharper's runner you would probably create a test session containing only the tests you wanted to run. On the CI server you would only run tests in a specific DLL or in particular categories.
SpecFlow is a unit test generation tool. It generates unit tests in the flavour specified in the config. The runner still has to decide which of those tests to run, so the same principles for choosing which tests to run apply to SpecFlow tests.
Placing them into categories and running only those categories is the simplest way; more fine-grained programmatic control is not really applicable. What you are asking to do is basically like saying 'run this test, but let me decide in the test if I want it to run', which doesn't really make sense.
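For example, with the NUnit flavour a SpecFlow tag such as @smoke surfaces as an NUnit category on the generated test, so a build script can select it at the runner level rather than inside a hook (a sketch assuming nunit3-console; the DLL name is a placeholder):

nunit3-console MyAcceptanceTests.dll --where "cat == smoke"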
I'm using the latest NUnit to run Selenium tests. The tests are compiled into a class library DLL file which is then run by NUnit.
My problem is that before the automation begins, I need to run some initialization such as creating a log file, setting up specific parameters, etc. I don't see a way to do this in NUnit - SetUp() does this, but for every Test or Fixture - I just need to run this code once at the start of the application.
Any idea how I can do what I want?
Your help is much appreciated.
J.
Take a look at SetUpFixtureAttribute (more information here). It says:
This is the attribute that marks a class that contains the one-time setup or teardown methods for all the test fixtures under a given namespace. The class may contain at most one method marked with the SetUpAttribute and one method marked with the TearDownAttribute.
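A minimal sketch (attribute names as in current NUnit; NUnit 2 used [SetUp]/[TearDown] inside the fixture, as the quote describes):

using NUnit.Framework;

// No namespace: a [SetUpFixture] outside any namespace applies to the whole
// assembly, so these methods run exactly once per test run.
[SetUpFixture]
public class TestRunSetup
{
    [OneTimeSetUp]
    public void BeforeAnyTests()
    {
        // create the log file, set up parameters, etc.
    }

    [OneTimeTearDown]
    public void AfterAllTests()
    {
        // flush and close the log file
    }
}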
I know Visual Studio offers some Unit Testing goodies. How do I use them, how do you use them? What should I know about unit testing (assume I know nothing).
This question is similar but it does not address what Visual Studio can do, please do not mark this as a duplicate because of that. Posted as Community Wiki because I'm not trying to be a rep whore.
Easily the most significant difference is that MSTest support is built into Visual Studio and provides unit testing, code coverage and mocking support directly. Doing the same types of things with external (third-party) unit test frameworks generally requires multiple frameworks (a unit testing framework and a mocking framework) and other tools for code coverage analysis.
The easiest way to use the MSTest unit testing tools is to open the file you want to create unit tests for, right-click in the editor window and choose "Create Unit Tests..." from the context menu. I prefer putting my unit tests in a separate project, but that's just personal preference. Doing this will create a sort of "template" test class, which will contain test methods to allow you to test each of the functions and properties of your class. At that point, you need to determine what it means for the test to pass or fail (in other words, determine what should happen given a certain set of inputs).
Generally, you end up writing tests that look similar to this:
string stringVal = "This";
Assert.IsTrue(stringVal.Length == 4);
This says that for the variable named stringVal, the Length property should be equal to 4 after the assignment.
The resources listed in the other thread should provide a good starting point for understanding what unit testing is in general.
The unit testing structure in VS is similar to NUnit in its usage. One interesting (and useful) feature, however, differs from NUnit significantly: VS unit testing can be used with code that was not written with unit testing in mind.
You can build a unit testing framework after an application is written, because the test structure allows you to externally reference method calls and use ramp-up and tear-down code to prepare the test environment. For example: if you have a method within a class that uses resources external to the method, you can create them in the ramp-up class (which VS creates for you) and then test the method in the unit test class (also created for you by VS). When the test finishes, the tear-down class (yet again, provided for you by VS) will release resources and clean up. This entire process exists outside of your application and thus does not interfere with the code base.
The VS unit testing framework is actually very well implemented and easy to use. Best of all, you can use it with an application that was not designed with unit testing in mind (something that is not easy with NUnit).
The first thing I would do is download a copy of TestDriven.Net to use as a test runner. This adds a right-click menu that lets you run individual tests by right-clicking in the test method and selecting Run Test(s). This also works for all tests in a class (right-click in the class, but outside a method), a namespace (right-click on the project or in the namespace outside a class), or an entire solution (right-click on the solution). It also adds the ability to run tests with coverage (built-in or NCover) or under the debugger from the same right-click menu.
As far as setting up tests goes, I generally stick with one test project per project and one test class per class under test. Sometimes I will create test classes for aspects that cut across many classes, but not usually. The typical way I create them is to first create the skeleton of the class -- no properties, no constructor, but with the first method that I want to test. This method simply throws a NotImplementedException.
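Such a skeleton can be as small as this (the class and method names are invented for illustration):

using System;

public class PriceCalculator
{
    // The first method under test: it only throws until the tests
    // drive out its real behaviour.
    public decimal Total(decimal price, int quantity)
    {
        throw new NotImplementedException();
    }
}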
Once the class skeleton is created, I use the right-click Create Unit Tests command on the method under test. This brings up a dialog that lets you create a new test project or select an existing one. I create, and name appropriately, a new test project and have the wizard create the classes. Once this is done, you may also want to create the private accessor functions for the class in the test project. Sometimes these need to be updated (recreated) if your class changes substantially.
Now you have a test project and your first test. Start by modifying the test to define a desired behavior of the method. Write enough code to (just barely) pass the test. Continue writing tests and writing code, specifying more behavior for the method, until all of the behavior for the method is defined. Then move on to the next method or class as appropriate, until you have enough code to complete the feature you are working on.
You can add more and different types of tests as required. You can also set up your source code control to require that some or all tests pass before check-in.