I have a testing framework that has been converted to make heavy use of NUnit's [Parallelizable]. I used to store contextual test data in the base class of the [TestFixture], whose hooks ([OneTimeSetUp], [TearDown], etc.) NUnit orchestrates.
For example:
[Test]
public void GoToGoogle()
{
var driver = new ChromeDriver();
// do some stuff
// Would like to pass data outside of test scope
TestContext.CurrentContext.Test.Properties.Set("DriverUrl", driver.Url); // Obviously does not work
Assert.Fail("This test should fail");
}
In the [TearDown] hook, I would like to get certain information about the test contextually, because not everything can be handled nicely in asserts.
[TearDown]
public void TearDown()
{
var url = TestContext.CurrentContext.Test.Properties["DriverUrl"].ToString();
var msg = $"Test encountered an error at URL: {url}"
TestAPI.PushResult(Result.Fail, msg);
}
The code above involving the TestContext does not work for obvious reasons, but I am wondering if there is a best practice that allows me to pass data in this manner, keeping in mind [Parallelizable] and the fact that I cannot scope test data or dependencies to the [TestFixture].
You say "for obvious reasons" but I'll first spell out the reasons why you cannot effectively set a property on the current test through TestContext. After all, other people just might be reading this. :-)
The Obvious Part
TestContext.CurrentContext.Test does not return the internal representation of a test from inside NUnit. Doing so would allow users to break NUnit in a variety of ways. In particular, TestContext.CurrentContext.Test.Properties returns a copy of the properties used within NUnit.
That copy of the properties is not read-only, so you are able to set properties on it. For that reason, one might expect to be able to set a property in the [Test] method and access the value in the [TearDown].
Unfortunately, because of a minor implementation detail, that's not the case. In fact, each time you use TestContext.CurrentContext, an entirely new copy of the context is created. The only reason for this, I'm afraid, is that it was originally implemented that way and is a bit difficult to change in a non-breaking way.
As a result of this implementation detail, we lost an easy way for the three parts (SetUp, Test method, TearDown) of a test to communicate. Prior to the availability of parallel execution, it was possible to pass such information using members of the fixture class. That no longer works once tests are run in parallel.
Workarounds
Use Thread Local Storage to hold the retained information. SetUp, the Test method and TearDown all run on the same thread. Note that OneTimeSetUp and OneTimeTearDown will not generally use the same thread in a parallel execution environment.
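A minimal sketch of this workaround, reusing the DriverUrl value and the TestAPI/Result reporting helpers from the question (assumed to exist in your framework); ThreadLocal<T> gives each worker thread its own slot:
using System.Threading;
using NUnit.Framework;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class GoogleTests
{
    // One slot per worker thread; SetUp, the test body and TearDown run on
    // the same thread, so they all see the same value.
    private static readonly ThreadLocal<string> DriverUrl = new ThreadLocal<string>();

    [Test]
    public void GoToGoogle()
    {
        var driver = new ChromeDriver();
        // ... do some stuff ...
        DriverUrl.Value = driver.Url; // stash the data for TearDown
        Assert.Fail("This test should fail");
    }

    [TearDown]
    public void TearDown()
    {
        var msg = $"Test encountered an error at URL: {DriverUrl.Value}";
        TestAPI.PushResult(Result.Fail, msg); // TestAPI/Result come from the question's own framework
    }
}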
If you are willing to run fixtures in parallel but not individual test cases, then you can still use class members to retain information. As a further step, apply the SingleThreadedAttribute to your fixture, forcing all the code associated with it (including one-time setup and teardown) to run on the same thread.
If you have many fixtures, which can run in parallel, the second approach may actually give you a better performance trade-off than other approaches. Unfortunately, not everyone can use it - at least not without a major reorganization of their tests. You have to look at what your own tests are doing.
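A sketch of that second workaround (the fixture name and field are illustrative): the fixture may run in parallel with other fixtures, but everything inside it stays on one thread, so plain instance fields work again.
using NUnit.Framework;

[TestFixture]
[Parallelizable(ParallelScope.Self)]
[SingleThreaded]
public class CheckoutTests
{
    private string _lastUrl; // ordinary member used to pass data between phases

    [Test]
    public void PlaceOrder()
    {
        // ... drive the browser ...
        _lastUrl = "https://example.test/checkout"; // illustrative value
    }

    [TearDown]
    public void TearDown()
    {
        TestContext.WriteLine($"Test ended at URL: {_lastUrl}");
    }
}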
Permanent Solution
That would be to modify NUnit so that properties are both writable and shareable, at least within a single fixture instance. There have already been a few feature requests out there to do that on the NUnit GitHub project. I'm no longer active on the framework project, so I don't know what the plans are. However, I think I can say that it's not likely to happen before a major version change, i.e. NUnit 4.0.
Related
I have an NUnit test project containing a bunch of Test classes/fixtures each of which inherits from an abstract base class hierarchy. These sealed test classes all have a TestFixtureSource attribute attached at the class level:
[TestFixtureSource(typeof(ExecutionBrowsers))]
public sealed class MyTestClass : TestBase
Where ExecutionBrowsers is defined:
internal sealed class ExecutionBrowsers : IEnumerable
{
public IEnumerator GetEnumerator()
{
yield return Browser.Chrome;
yield return Browser.Edge;
yield return Browser.Firefox;
}
}
So essentially each test class will be instantiated 3 times, once for each browser. I want to run these tests in parallel in such a way that a browser does not have more than one test using it at the same time (I have a hard limitation on this - let's not get into that). So what I did was to add a .cs file at the root of the project and stick the following attributes in it:
[assembly: NUnit.Framework.FixtureLifeCycle(NUnit.Framework.LifeCycle.InstancePerTestCase)]
[assembly: NUnit.Framework.Parallelizable(NUnit.Framework.ParallelScope.Fixtures)]
[assembly: NUnit.Framework.LevelOfParallelism(3)]
This doesn't quite work, though: it does not restrict tests to one per browser at any given time. It will start off with the first test in the first test class (some classes have more than one test), running it on each of the three browsers. However, if one browser takes longer than the others, it will get out of sync and begin executing two tests on one browser.
How can I achieve the behaviour that I want?
Well, we have a situation here: unit test frameworks do not run tests of the same collection asynchronously, so you must rethink the structure of your tests so that they are separate from each other. I didn't see enough of your structure to be able to assist in this restructuring.
You're trying to achieve fine control by putting attributes at the assembly level. There are lots of ways that can go wrong and you have discovered one of them. I recommend avoiding use of assembly-level ParallelizableAttribute unless you are absolutely sure that the specified parallel behavior will work for all your test fixtures as well as any you or others may add in the future. ;-)
Instead, add [Parallelizable] to the class. It will apply to each of your instances and will allow them to run in parallel with one another. The individual test cases will be non-parallelizable by default with respect to one another.
For the other attributes, you should eliminate [FixtureLifeCycle] unless you have a specific reason why you need it, i.e. unless your tests are running in parallel and changing the state of the fixture. You should only use [LevelOfParallelism] if it is needed for performance and should not count on it to keep any particular set of tests from running with one another.
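As a sketch, using the MyTestClass/ExecutionBrowsers names from the question (the Browser-taking constructor is assumed), the attribute placement would look roughly like this:
using NUnit.Framework;

// Each of the three fixture instances (one per browser) can run in parallel
// with the others; the test cases inside a single instance stay
// non-parallelizable with respect to one another by default.
[TestFixtureSource(typeof(ExecutionBrowsers))]
[Parallelizable] // defaults to ParallelScope.Self
public sealed class MyTestClass : TestBase
{
    public MyTestClass(Browser browser) : base(browser) { } // constructor shape assumed

    // ... tests ...
}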
You have not said how you run the tests. The above will work if you are running straight nunit console plus framework from the command line. If you are using Visual Studio, there are some other considerations because Test Explorer can change what NUnit thinks you are doing based on how it runs the tests.
I'm currently building out my automation framework using NUnit. I've got everything working just fine, and the last enhancement I'd like to make is to be able to map my automated test scripts to test cases in my testing software.
I'm using TestRail for all my testcases.
My ideal situation is to be able to decorate each test case with the corresponding testcase ID in test rail and when it comes to report the test result in TestRail, I can just use the Case id. Currently I'm doing this via matching test name/script name.
Example -
[Test]
[TestCaseId("001")]
public void NavigateToSite()
{
LoginPage login = new LoginPage(Driver);
login.NavigateToLogInPage();
login.AssertLoginPageLoaded();
}
And then in my teardown method, it would be something like -
[TearDown]
public static void TestTearDown(IWebDriver Driver)
{
var testcaseId = TestContext.CurrentContext.TestCaseid;
var result = TestContext.CurrentContext.Result.Outcome;
//Static method to report to Testrail using API
Report.ReportTestResult(testcaseId, result);
}
I've just made up the testcaseid attribute, but this is what I'm looking for.
[TestCaseId("001")]
I may have missed this if it already exists; otherwise, how do I go about extending NUnit to do this?
You can use the PropertyAttribute supplied by NUnit.
Example:
[Property("TestCaseId", "001")]
[Test]
public void NavigateToSite()
{
...
}
[TearDown]
public void TearDown()
{
var testCaseId = TestContext.CurrentContext.Test.Properties.Get("TestCaseId");
}
In addition, you can create a custom property attribute derived from PropertyAttribute - see the NUnit docs.
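For example, a custom attribute derived from PropertyAttribute might look roughly like this (the TestCaseId name comes from the question; the property key defaults to the attribute class name minus the "Attribute" suffix):
using System;
using NUnit.Framework;

// Lets a test be tagged [TestCaseId("001")] instead of [Property("TestCaseId", "001")].
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = false)]
public sealed class TestCaseIdAttribute : PropertyAttribute
{
    public TestCaseIdAttribute(string id) : base(id) { }
}
It would then be used like this:
[Test, TestCaseId("001")]
public void NavigateToSite()
{
    // ...
}

[TearDown]
public void TearDown()
{
    // Get() returns the first value stored under the key, here "001".
    var id = TestContext.CurrentContext.Test.Properties.Get("TestCaseId");
}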
For many years I recommended that people not do this: mix test management code into the tests themselves. It's an obvious violation of the single responsibility principle and it creates difficulties in maintaining the tests.
In addition, there's the problem that the result presented in TearDown may not be final. For example, if you have used [MaxTime] on the test and it exceeds the time specified, your successful test will change to a failure. Several other built-in attributes work this way and of course there is always the possibility of a user-created attribute. The purpose of TearDown is to clean up after your code, not as a springboard for creating a reporting or test management system.
That said, with older versions of NUnit, folks got into the habit of doing this. This was in part due to the fact that NUnit addins (the approach we designed) were fairly complicated to write. There were also fewer problems because NUnit V2 was significantly less extensible on the test side of things. With 3.0, we provided a means for creating test management functions such as this as extensions to the NUnit engine, and I suggest you consider using that facility instead of mixing them in with the test code.
The general approach would be to create a property, as suggested in Sulo's answer, but to replace your TearDown code with an EventListener extension that reports the result to TestRail. The EventListener has access to all the result information - not just the limited fields available in TestContext - in XML format. You can readily extract whatever needs to go to TestRail.
Details of writing TestEngine extensions are found here: https://github.com/nunit/docs/wiki/Writing-Engine-Extensions
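As a rough sketch (not a drop-in implementation) of what such an engine extension could look like, assuming the TestCaseId property from earlier and your own TestRail API client:
using System.Xml;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

[Extension]
public class TestRailReporter : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        // The engine sends progress reports as XML fragments; completed
        // test cases arrive as <test-case ...> elements.
        if (!report.StartsWith("<test-case"))
            return;

        var doc = new XmlDocument();
        doc.LoadXml(report);
        var testCase = doc.DocumentElement;

        var outcome = testCase.GetAttribute("result"); // e.g. "Passed" or "Failed"
        var idNode = testCase.SelectSingleNode("properties/property[@name='TestCaseId']/@value");

        if (idNode != null)
        {
            // Hypothetical call into your own TestRail client, standing in
            // for Report.ReportTestResult from the question.
            // Report.ReportTestResult(idNode.Value, outcome);
        }
    }
}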
Note that there are some outstanding issues if you want to use extensions under the Visual Studio adapter, which we are working on. Right now, you'll get the best experience using the console runner.
The figure above shows a TestSuite/Plan in Ranorex.
[SETUP] represents launching the .exe recording, while [TEARDOWN] represents exiting the .exe.
How can I imitate the test case plan structure using only Visual Studio Coded UI?
It will be repetitive to launch and close my .exe in every test case; if possible I would like to set it up only once.
Does a [TestMethod] in Coded UI represent a test case?
We have faced the same problem and resolved it by first making an assumption.
A Microsoft TestMethod does not correspond to a Ranorex Test Case; it corresponds to a Ranorex Run Configuration (as defined in the test suite).
A Run Configuration comes with configuration. As you may already know, on the command line it is possible to execute a Ranorex Test Case or a Ranorex Run Configuration, but it is better/easier to execute a Run Configuration since it comes with context (and also most development can be done by non-programmers from within Ranorex!).
In the end, what we did is use TestMethod to call Run Configuration(s).
The following Ranorex How To article describes how to do this:
http://www.ranorex.com/news/article/howto-test-automation-with-tfs-and-ranorex.html
If this method does not suit your setup, you can probably invoke Ranorex Test Cases directly in a test method (and replicate whatever sequence is shown in your test suite), but that would be more complicated and involve more maintenance IMHO (which must be done by programmers).
Hope this helps!
Hugo
You're right about [TestMethod] representing a test case.
To imitate the [Setup] and [TearDown] behavior of Ranorex, instead of using the [TestInitialize] and [TestCleanup] attributes, you should use the [ClassInitialize] and [ClassCleanup] attributes (or [AssemblyInitialize] and [AssemblyCleanup] if you want them to run once for all classes in the project).
Note that these methods must be static, and the initialize ones should accept a TestContext parameter.
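A minimal sketch with MSTest/Coded UI attributes (the ApplicationUnderTest usage and the .exe path are illustrative):
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class MyAppTests
{
    private static ApplicationUnderTest _app; // handle to the launched .exe

    // Runs once before any [TestMethod] in this class.
    [ClassInitialize]
    public static void LaunchApp(TestContext context)
    {
        _app = ApplicationUnderTest.Launch(@"C:\Path\To\YourApp.exe"); // path is illustrative
    }

    // Runs once after every [TestMethod] in this class has finished.
    [ClassCleanup]
    public static void CloseApp()
    {
        if (_app != null)
            _app.Close();
    }

    [TestMethod] // each [TestMethod] is one test case
    public void FirstTestCase()
    {
        // ... recorded or hand-coded UI actions against _app ...
    }
}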
I have some classes that implements some logic related to file system and files. For example, I am performing following tasks as part of this logic:
checking if a certain folder has a certain structure (e.g. it contains subfolders with specific names, etc.)
loading some files from those folders and checking their structure (e.g. these are some configuration files, located at a certain place within a certain folder)
loading additional files for testing/validation from the configuration file (e.g. this config file contains information about other files in the same folder, which should have a particular internal structure, etc.)
Now all this logic has some workflow, and exceptions are thrown if something is not right (e.g. the configuration file is not found at the specific folder location). In addition, the Managed Extensibility Framework (MEF) is involved in this logic, because some of the files I am checking are managed DLLs that I manually load into MEF aggregates, etc.
Now I'd like to test all this in some way. I was thinking of creating several physical test folders on the HDD that cover various test cases, and then running my code against them. For example, I could create:
folder with correct structure and all files being valid
folder with correct structure but with invalid configuration file
folder with correct structure but missing configuration file
etc...
Would this be the right approach? I am not sure, though, how exactly to run my code in this scenario... I certainly don't want to run the whole application and point it at these mocked folders. Should I use some unit testing framework to write a kind of "unit tests" that execute my code against these file system objects?
In general, is this a correct approach for this kind of testing scenario? Are there other, better approaches?
First of all, I think it is better to write unit tests that test your logic without touching any external resources. Here you have two options:
use an abstraction layer to isolate your logic from external dependencies such as the file system. You can easily stub or mock (by hand or with the help of a constrained isolation framework such as NSubstitute, FakeItEasy or Moq) these abstractions in unit tests, as sketched after this list. I prefer this option, because in this case tests push you towards a better design.
if you have to deal with legacy code (only in this case), you can use one of the unconstrained isolation frameworks (such as TypeMock Isolator, JustMock or Microsoft Fakes) that can stub/mock pretty much everything (for instance, sealed and static classes, non-virtual methods). But they cost money. The only "free" option is Microsoft Fakes, and only if you are the happy owner of Visual Studio 2012/2013 Premium/Ultimate.
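A minimal sketch of the first option, with a hypothetical IFileSystem abstraction and ConfigurationChecker class, using Moq as the mocking framework:
using System.IO;
using Moq;
using NUnit.Framework;

// Hypothetical abstraction over just the file-system calls the logic needs.
public interface IFileSystem
{
    bool DirectoryExists(string path);
    string ReadAllText(string path);
}

// Hypothetical class under test that receives the abstraction via its constructor.
public class ConfigurationChecker
{
    private readonly IFileSystem _fs;

    public ConfigurationChecker(IFileSystem fs)
    {
        _fs = fs;
    }

    public void Check(string folder)
    {
        if (!_fs.DirectoryExists(folder))
            throw new DirectoryNotFoundException(folder);
        // ... validate _fs.ReadAllText(Path.Combine(folder, "app.config")) ...
    }
}

[TestFixture]
public class ConfigurationCheckerTests
{
    [Test]
    public void Check_throws_when_folder_is_missing()
    {
        var fs = new Mock<IFileSystem>();
        fs.Setup(x => x.DirectoryExists("missing")).Returns(false);

        var checker = new ConfigurationChecker(fs.Object);

        Assert.Throws<DirectoryNotFoundException>(() => checker.Check("missing"));
    }
}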
In unit tests you don't need to test the logic of external libraries such as MEF.
Secondly, if you want to write integration tests, then you need to write a "happy path" test (when everything is OK) and some tests that exercise your logic in boundary cases (file or directory not found). Unlike #Sergey Berezovskiy, I recommend creating a separate folder for each test case. The main advantages are:
you can give your folders meaningful names that express your intentions more clearly;
you don't need to write complex (i.e. fragile) setup/teardown logic;
even if you decide later to use another folder structure, you can change it more easily, because you will already have working code and tests (refactoring under a test harness is much easier).
For both unit and integration tests, you can use ordinary unit testing frameworks (like NUnit or xUnit.net). With these frameworks it is pretty easy to launch your tests in continuous integration scenarios on your build server.
If you decide to write both kinds of tests, then you need to separate unit tests from integration tests (you can create a separate project for each kind of test). Reasons for this:
unit tests are a safety net for developers. They must provide quick feedback about the expected behavior of system units after the latest code changes (bug fixes, new features). If they are run frequently, a developer can quickly and easily identify the piece of code that broke the system. Nobody wants to run slow unit tests.
integration tests are generally slower than unit tests. But they have a different purpose. They check that units work as expected with real dependencies.
You should test as much logic as possible with unit tests, by abstracting calls to the file system behind interfaces. Using dependency injection and a testing-framework such as FakeItEasy will allow you to test that your interfaces are actually being used/called to operate on the files and folders.
At some point however, you will have to test the implementations working on the file-system too, and this is where you will need integration tests.
The things you need to test seem to be relatively isolated since all you want to test is your own files and directories, on your own file system. If you wanted to test a database, or some other external system with multiple users, etc, things might be more complicated.
I don't think you'll find any "official rules" for how best to do integration tests of this type, but I believe you are on the right track. Some ideas you should strive towards:
Clear standards: Make the rules and purpose of each test absolutely clear.
Automation: The ability to re-run tests quickly and without too much manual tweaking.
Repeatability: A test-situation that you can "reset", so you can re-run tests quickly, with only slight variations.
Create a repeatable test-scenario
In your situation, I would set up two main folders: One in which everything is as it is supposed to be (i.e. working correctly), and one in which all the rules are broken.
I would create these folders and any files in them, then zip each of the folders, and write logic in a test-class for unzipping each of them.
These are not really tests; think of them instead as "scripts" for setting up your test-scenario, enabling you to delete and recreate your folders and files easily and quickly, even if your main integration tests should change or mess them up during testing. The reason for putting them in a test-class is simply to make them easy to run from the same interface you will be working with during testing.
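A minimal sketch of such a setup "script" (archive names and folder layout are illustrative), using System.IO.Compression to restore the folders from checked-in zips:
using System.IO;
using System.IO.Compression;
using NUnit.Framework;

// Not real tests: a "script" class that resets the test folders from zip files.
[TestFixture, Explicit("Run manually to (re)create the integration-test folders")]
public class TestScenarioSetup
{
    [Test]
    public void RecreateScenarioFolders()
    {
        Recreate(@"Scenarios\valid.zip", @"Scenarios\Valid");
        Recreate(@"Scenarios\broken.zip", @"Scenarios\Broken");
    }

    private static void Recreate(string zipPath, string targetFolder)
    {
        if (Directory.Exists(targetFolder))
            Directory.Delete(targetFolder, recursive: true);

        ZipFile.ExtractToDirectory(zipPath, targetFolder);
    }
}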
Testing
Create two sets of test-classes, one set for each situation (correctly set up folder vs. folder with broken rules). Place these tests in a hierarchy of folders that feels meaningful to you (depending on the complexity of your situation).
It's not clear how familiar you are with unit-/integration-testing. In any case, I would recommend NUnit. I like to use the extensions in Should as well. You can get both of these from NuGet:
Install-Package NUnit
Install-Package Should
The should-package will let you write the test-code in a manner like the following:
someCalculatedIntValue.ShouldEqual(3);
someFoundBoolValue.ShouldBeTrue();
Note that there are several test-runners available, to run your tests with. I've personally only had any real experience with the runner built into Resharper, but I'm quite satisfied with it and I have no problems recommending it.
Below is an example of a simple test-class with two tests. Note that in the first, we check for an expected value using an extension method from Should, while we don't explicitly test anything in the second. That is because it is tagged with [ExpectedException], meaning it will fail if an Exception of the specified type is not thrown when the test is run. You can use this to verify that an appropriate exception is thrown whenever one of your rules is broken.
[TestFixture]
public class When_calculating_sums
{
private MyCalculator _calc;
private int _result;
[SetUp] // Runs before each test
public void SetUp()
{
// Create an instance of the class to test:
_calc = new MyCalculator();
// Logic to test the result of:
_result = _calc.Add(1, 1);
}
[Test] // First test
public void Should_return_correct_sum()
{
_result.ShouldEqual(2);
}
[Test] // Second test
[ExpectedException(typeof (DivideByZeroException))]
public void Should_throw_exception_for_invalid_values()
{
// Divide by 0 should throw a DivideByZeroException:
var otherResult = _calc.Divide(5, 0);
}
[TearDown] // Runs after each test (seldom needed in practice)
public void TearDown()
{
_calc.Dispose();
}
}
With all of this in place, you should be able to create and recreate test-scenarios, and run tests on them in an easy and repeatable way.
Edit: As pointed out in a comment, Assert.Throws() is another option for ensuring exceptions are thrown as required. Personally, I like the tag-variant though, and with parameters, you can check things like the error message there too. Another example (assuming a custom error message is being thrown from your calculator):
[ExpectedException(typeof(DivideByZeroException),
ExpectedMessage="Attempted to divide by zero" )]
public void When_attempting_something_silly(){
...
}
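For comparison, a rough Assert.Throws version of the same check from the earlier test class (the expected message text is just illustrative):
[Test]
public void Should_throw_exception_for_invalid_values_with_assert_throws()
{
    // Same check as the attribute version, but the assertion is explicit and
    // the caught exception is returned for further inspection.
    var ex = Assert.Throws<DivideByZeroException>(() => _calc.Divide(5, 0));
    ex.Message.ShouldEqual("Attempted to divide by zero"); // message text is illustrative
}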
I'd go with a single test folder. For various test cases you can put different valid/invalid files into that folder as part of context setup. In test teardown, just remove those files from the folder.
E.g. with Specflow:
Given configuration file does not exist
When something
Then foo
Given configuration file exists
And some dll does not exist
When something
Then bar
Define each context setup step as copying (or not copying) the appropriate file to your folder. You can also use a table to define which files should be copied to the folder:
Given some scenario
| FileName |
| a.config |
| b.invalid.config |
When something
Then foobar
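A rough sketch of the step bindings behind those scenarios (folder paths and file names are illustrative):
using System.IO;
using TechTalk.SpecFlow;

[Binding]
public class FileSetupSteps
{
    // Folder the system under test reads from, and the folder holding the
    // prepared sample files; both paths are illustrative.
    private const string TestFolder = @"TestData\Active";
    private const string SourceFolder = @"TestData\Source";

    [Given(@"configuration file exists")]
    public void GivenConfigurationFileExists()
    {
        File.Copy(Path.Combine(SourceFolder, "a.config"),
                  Path.Combine(TestFolder, "a.config"), overwrite: true);
    }

    [Given(@"configuration file does not exist")]
    public void GivenConfigurationFileDoesNotExist()
    {
        File.Delete(Path.Combine(TestFolder, "a.config"));
    }

    // Table-driven step: copies every file listed in the scenario's table.
    [Given(@"some scenario")]
    public void GivenSomeScenario(Table files)
    {
        foreach (var row in files.Rows)
        {
            var name = row["FileName"];
            File.Copy(Path.Combine(SourceFolder, name),
                      Path.Combine(TestFolder, name), overwrite: true);
        }
    }
}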
I don't know your program's architecture well enough to give good advice, but I will try.
I believe that you don't need to test the real file structure. File access services are defined by the system/framework, and they don't need to be tested. You need to mock these services in the related tests.
Also you don't need to test MEF. It is already tested.
Use SOLID principles when writing unit tests. In particular, take a look at the Single Responsibility Principle; this will allow you to create unit tests which won't be related to each other. Just don't forget about mocking to avoid dependencies.
For integration tests, you can create a set of helper classes which emulate the file structure scenarios you want to test. This will keep you independent of the machine on which you run these tests. Such an approach may be more complicated than creating a real file structure, but I like it.
I would build the framework logic and test concurrency issues and file system exceptions to ensure a well-defined test environment.
Try to list all the boundaries of the problem domain. If there are too many, then consider the possibility that your problem is too broadly defined and needs to be broken down. What is the full set of necessary and sufficient conditions required to make your system pass all tests? Then look at every condition, treat it as an individual attack point, and list all the ways you can think of to breach it. Try to prove to yourself that you have found them all. Then write a test for each.
I would go through the above process first for the environment, build and test that first to a satisfactory standard and then for the more detailed logic within the workflow. Some iteration may be required if dependencies between the environment and the detailed logic occur to you during testing.
I'm writing a data structure in C# (a priority queue using a Fibonacci heap) and I'm trying to use it as a learning experience for TDD, which I'm quite new to.
I understand that each test should only test one piece of the class so that a failure in one unit doesn't confuse me with multiple test failures, but I'm not sure how to do this when the state of the data structure is important for a test.
For example,
private PriorityQueue<int> queue;
[SetUp]
public void Initialize()
{
this.queue = new PriorityQueue<int>();
}
[Test]
public void PeekShouldReturnMinimumItem()
{
this.queue.Enqueue(2);
this.queue.Enqueue(1);
Assert.That(this.queue.Peek(), Is.EqualTo(1));
}
This test would break if either Enqueue or Peek broke.
I was thinking that I could somehow have the test manually set up the underlying data structure's heap, but I'm not sure how to do that without exposing the implementation to the world.
Is there a better way to do this? Is relying on other parts ok?
I have a SetUp in place, just left it out for simplicity.
Add a private accessor for the class to your test project. Use the accessor to set up the private properties of the class in some known way instead of using the class's methods to do so.
You also need to use SetUp and TearDown methods on your test class to perform any initializations needed between tests. I would actually prefer recreating the queue in each test rather than reusing it between tests to reduce coupling between test cases.
Theoretically, you only want to test a single feature at a time. However, if your queue only has a couple methods (Enqueue, Peek, Dequeue, Count) then you're quite limited in the kinds of tests you can do while using one method only.
It's best you don't over-engineer the problem and simply create a few simple test cases (such as the one above) and build on top of that to ensure an appropriate coverage of various features.
I feel it is appropriate to write tests that cover multiple features, as long as you have something underneath that will also break if one of the used features is broken. Therefore, if you have a test suite and you break your Enqueue, obviously all of your tests (or most of them) will fail, but you'll know Enqueue broke because of your simplest tests. The relationship of a test to its test suite should not be neglected.
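For example, a couple of even simpler tests exercise one operation at a time (assuming the queue exposes the Count mentioned above), so a failure in the combined Peek test can be traced to the right building block:
[Test]
public void NewQueueShouldBeEmpty()
{
    Assert.That(this.queue.Count, Is.EqualTo(0));
}

[Test]
public void EnqueueShouldIncrementCount()
{
    this.queue.Enqueue(1);
    Assert.That(this.queue.Count, Is.EqualTo(1));
}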
I think this is ok, but clear the queue at the start of your test method.