I have some tests that do some write operations on a database. I know that's not really unit testing, but let's leave that aside.
In order to enable every test to work on a clean workspace, I roll back all transactions done so far. However, I randomly get concurrency errors due to database locks that cannot be established.
This is my code:
Test1.dll
[TestFixture]
class MyTest1
{
    [OneTimeSetUp]
    public void SetupFixture()
    {
        myWorkspace.StartEditing(); // this will establish a lock on the underlying database
    }

    [OneTimeTearDown]
    public void TearDownFixture()
    {
        myWorkspace.Rollback();
    }
}
The same code also exists in another test assembly, let's call it Test2.dll. Now when I use the NUnit console runner with nunit3-console Test1.dll Test2.dll, I get the following error:
System.Runtime.InteropServices.COMException : Table 'GDB_DatabaseLocks' cannot be locked; it is currently used by user 'ADMIN' (this is me) on host 'MyHost'
at ESRI.ArcGIS.Geodatabase.IWorkspaceEdit.StartEditing(Boolean withUndoRedo)
myWorkspace is a COM object (the ArcObjects interface IWorkspace) that refers to an MS Access database. I assume the error occurs because NUnit creates multiple threads that enter the above code at the same time. So I added the [NonParallelizable] attribute to both assemblies, without success. I also tried adding [Apartment(ApartmentState.STA)] to my assembly in order to execute everything on a single thread, which resulted in the console runner never finishing.
What drives me nuts is that running my tests with ReSharper's test runner works perfectly. However, I have no clue how ReSharper starts NUnit; it seems ReSharper does not use nunit3-console but the NUnit API instead.
Is there another way to force all my tests to run on a single thread? I use NUnit 3.10 and ArcGIS 10.8.
By default, the NUnit Console will run multiple test assemblies in parallel. Add --agents=1 to force the two assemblies to run sequentially, under a single agent.
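For example, with the command line from the question:

nunit3-console Test1.dll Test2.dll --agents=1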
Just to clarify some of the other things you tried as well...
[NonParallelizable] is used to prevent the parallelization of different tests within a single assembly. By default, tests within an assembly do not run in parallel, so adding this attribute when you haven't specifically added [Parallelizable] at a higher level will have no effect.
[Apartment(ApartmentState.STA)] can be added as an assembly-level attribute and does not have to be added per test, as mentioned in the comments. Check out the docs here: https://docs.nunit.org/articles/nunit/writing-tests/attributes/apartment.html
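For reference, the assembly-level form looks like this (it can go in any .cs file in the test project):

[assembly: NUnit.Framework.Apartment(System.Threading.ApartmentState.STA)]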
I have an NUnit test project containing a number of test classes/fixtures, each of which inherits from an abstract base class hierarchy. These sealed test classes all have a TestFixtureSource attribute attached at the class level:
[TestFixtureSource(typeof(ExecutionBrowsers))]
public sealed class MyTestClass : TestBase
Where ExecutionBrowsers is defined:
internal sealed class ExecutionBrowsers : IEnumerable
{
    public IEnumerator GetEnumerator()
    {
        yield return Browser.Chrome;
        yield return Browser.Edge;
        yield return Browser.Firefox;
    }
}
So essentially each test class will be instantiated three times, once for each browser. I want to run these tests in parallel in such a way that a browser does not have more than one test using it at the same time (I have a hard limitation on this - let's not get into that). So what I did was to add a .cs file at the root of the project and stick the following attributes in it:
[assembly: NUnit.Framework.FixtureLifeCycle(NUnit.Framework.LifeCycle.InstancePerTestCase)]
[assembly: NUnit.Framework.Parallelizable(NUnit.Framework.ParallelScope.Fixtures)]
[assembly: NUnit.Framework.LevelOfParallelism(3)]
This doesn't quite work, though: it does not restrict tests to one per browser at any given time. It starts off with the first test in the first test class (some classes have more than one test), running it on each of the three browsers. However, if one browser takes longer than the others, execution gets out of sync and begins running two tests on one browser.
How can I achieve the behaviour that I want?
Well, see, we have a situation here: unit test frameworks do not run tests of the same collection concurrently, so you must rethink the structure of your unit tests so that they are independent of each other. I didn't see enough of your structure to be able to assist with that restructuring.
You're trying to achieve fine control by putting attributes at the assembly level. There are lots of ways that can go wrong and you have discovered one of them. I recommend avoiding use of assembly-level ParallelizableAttribute unless you are absolutely sure that the specified parallel behavior will work for all your test fixtures as well as any you or others may add in the future. ;-)
Instead, add [Parallelizable] to the class. It will apply to each of your fixture instances and will allow them to run in parallel with one another. The individual test cases will be non-parallelizable by default with respect to one another.
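A minimal sketch of that layout, based on the fixture from the question (the constructor signature is an assumption, since TestBase is not shown):

[TestFixtureSource(typeof(ExecutionBrowsers))]
[Parallelizable] // fixture instances may run in parallel with one another
public sealed class MyTestClass : TestBase
{
    // Assumed constructor: TestFixtureSource passes each Browser value to the fixture.
    public MyTestClass(Browser browser) : base(browser) { }

    [Test]
    public void SomeTest()
    {
        // Test cases within a single fixture instance remain non-parallel by default.
    }
}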
For the other attributes, you should eliminate [FixtureLifeCycle] unless you have a specific reason why you need it, i.e. unless your tests are running in parallel and changing the state of the fixture. You should only use [LevelOfParallelism] if it is needed for performance and should not count on it to keep any particular set of tests from running with one another.
You have not said how you run the tests. The above will work if you are running straight nunit console plus framework from the command line. If you are using Visual Studio, there are some other considerations because Test Explorer can change what NUnit thinks you are doing based on how it runs the tests.
I have a testing framework that has been converted to make heavy use of NUnit's [Parallelizable]. I used to store contextual test data in the base class of the [TestFixture], where NUnit orchestrates hooks like [OneTimeSetUp], [TearDown], etc.
For example:
[Test]
public void GoToGoogle()
{
    var driver = new ChromeDriver();
    // do some stuff
    // Would like to pass data outside of test scope
    TestContext.CurrentContext.Test.Properties.Set("DriverUrl", driver.Url); // Obviously does not work
    Assert.Fail("This test should fail");
}
In the [TearDown] hook, I would like to get certain information about the test contextually, because not everything can be handled nicely in asserts.
[TearDown]
public void TearDown()
{
    var url = TestContext.CurrentContext.Test.Properties["DriverUrl"].ToString();
    var msg = $"Test encountered an error at URL: {url}";
    TestAPI.PushResult(Result.Fail, msg);
}
The code above involving TestContext does not work, for obvious reasons, but I am wondering if there is a best practice that allows me to pass data in this manner, keeping in mind [Parallelizable] and the fact that I cannot scope test data or dependencies to the [TestFixture].
You say "for obvious reasons" but I'll first spell out the reasons why you cannot effectively set a property on the current test through TestContext. After all, other people just might be reading this. :-)
The Obvious Part
TestContext.CurrentContext.Test does not return the internal representation of a test from inside NUnit. Doing so would allow users to break NUnit in a variety of ways. In particular, TestContext.CurrentContext.Test.Properties returns a copy of the properties used within NUnit.
That copy of the properties is not read-only, so you are able to set properties on it. For that reason, one might expect to be able to set a property in the [Test] method and access the value in the [TearDown].
Unfortunately, because of a minor implementation detail, that's not the case. In fact, each time you use TestContext.CurrentContext, an entirely new copy of the context is created. The only reason for this, I'm afraid, is that it was originally implemented that way and is a bit difficult to change in a non-breaking way.
As a result of this implementation detail, we lost an easy way for the three parts (SetUp, Test method, TearDown) of a test to communicate. Prior to the availability of parallel execution, it was possible to pass such information using members of the fixture class. That no longer works once tests are run in parallel.
Workarounds
Use Thread Local Storage to hold the retained information. SetUp, Test and Teardown all run on the same thread. Note that OneTimeSetUp and OneTimeTearDown will not generally use the same thread in a parallel execution environment.
If you are willing to run fixtures in parallel but not individual test cases, then you can still use class members to retain information. As a further step, apply the SingleThreadedAttribute to your fixture, forcing all the code associated with it (including one-time setup and teardown) to run on the same thread.
If you have many fixtures, which can run in parallel, the second approach may actually give you a better performance trade-off than other approaches. Unfortunately, not everyone can use it - at least not without a major reorganization of their tests. You have to look at what your own tests are doing.
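As a rough sketch of the first workaround, applied to the code from the question (ChromeDriver, TestAPI and Result are the question's own types; the field name is illustrative):

// Requires: using System.Threading;
// A thread-local slot shared by SetUp, the test method and TearDown,
// which all run on the same worker thread.
private static readonly ThreadLocal<string> DriverUrl = new ThreadLocal<string>();

[Test]
public void GoToGoogle()
{
    var driver = new ChromeDriver();
    // do some stuff
    DriverUrl.Value = driver.Url; // stash data for the teardown
    Assert.Fail("This test should fail");
}

[TearDown]
public void TearDown()
{
    var msg = $"Test encountered an error at URL: {DriverUrl.Value}";
    TestAPI.PushResult(Result.Fail, msg);
}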
Permanent Solution
That would be to modify NUnit so that properties are both writable and shareable, at least within a single fixture instance. There have already been a few feature requests out there to do that on the NUnit GitHub project. I'm no longer active on the framework project, so I don't know what the plans are. However, I think I can say that it's not likely to happen before a major version change, i.e. NUnit 4.0.
I'm trying to write an NUnit 3 (3.8.1) extension that lets a test fail if it has a failed Debug.Assert(...) (instead of silently running through or even hanging because it shows the failed-assertion dialog).
In an NUnit 2 addin, I was able to do so by unregistering all debug trace listeners and adding my own that just throws an exception (as explained here, for example). However, this doesn't seem to work with NUnit 3 anymore.
I'm able to successfully deploy the extension, and its code is being executed.
[Extension(Description = "Failed Assertions Tracker", EngineVersion = "3.4")]
public class TrackerEventListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        Console.WriteLine(report); // prints -> so I know this method is being called
        Debug.Listeners.Clear();
        Debug.Listeners.Add(new UnitTestTraceListener());
    }
}
However, my unit test unfortunately shows me that the DefaultTraceListener is still installed.
[Test]
public void FailingAssertionShouldNotHang()
{
    foreach (object listener in Debug.Listeners)
    {
        Console.WriteLine(listener.GetType().FullName);
    }

    Debug.Fail("I'm sorry. I've failed.");
}
And so the test shows the dialog instead of simply failing.
What am I doing wrong? I suspect that the call to the static Listeners collection is ineffective because the actual test is run in a different context (a different AppDomain? A different process?). But if this is the case, how can I solve my problem?
It's important to keep in mind that NUnit 3 Extensions, while capable of replacing NUnit 2 Addins in a few cases, are actually entirely different technology. NUnit 3 Extensions extend the Engine, which is separate from the framework.
In this case, your extension is setting up a trace listener that will capture any Debug, Trace, or Assert output produced by the engine. If the engine contained Trace statements (it doesn't), you would get the output. Meanwhile, the framework is happily continuing to run tests on its own.
Any code that will successfully capture Trace has to be part of the actual framework execution of the tests. This gives you two options.
Create a custom attribute that will capture trace. Custom attributes allow you to take actions when a test is being created or executed. They are created by implementing various interfaces supported by the framework. In your case, you would want to supply the attribute at assembly level, in order to capture all output produced by the assembly.
Create the code as part of your tests themselves, without extending the framework at all. You would want to capture the Trace output in an assembly-level SetUpFixture using the OneTimeSetUp attribute and release it under the OneTimeTearDown attribute. Since this approach is simpler than creating a custom attribute, it's the one I would use.
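A rough sketch of the second option, reusing the UnitTestTraceListener type from the question (the fixture name is illustrative; on .NET Framework, Debug and Trace share the same listener collection):

using System.Diagnostics;
using NUnit.Framework;

// A SetUpFixture outside any namespace applies to the entire assembly.
[SetUpFixture]
public class AssemblyTraceListenerSetup
{
    [OneTimeSetUp]
    public void InstallListener()
    {
        // Replace the DefaultTraceListener (which shows the dialog) for the whole test run.
        Debug.Listeners.Clear();
        Debug.Listeners.Add(new UnitTestTraceListener());
    }

    [OneTimeTearDown]
    public void RestoreListener()
    {
        Debug.Listeners.Clear();
        Debug.Listeners.Add(new DefaultTraceListener());
    }
}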
The figure above shows a test suite/plan in Ranorex.
[SETUP] represents launching the .exe recording, while [TEARDOWN] represents exiting the .exe.
How can I imitate this test case plan structure using only Visual Studio Coded UI?
It would be repetitive to launch and close my .exe in every test case, so if possible I would like to set this up only once.
Does a [TestMethod] in Coded UI represent a test case?
We have faced the same problem and resolved it by first making an assumption.
A Microsoft TestMethod does not correspond to a Ranorex Test Case; it corresponds to a Ranorex Run Configuration (as defined in the test suite).
A Run Configuration comes with configuration. As you may already know, on the command line it is possible to execute a Ranorex Test Case or a Ranorex Run Configuration, but it is better/easier to execute a Run Configuration since it comes with context (and also most development can be done by non-programmers from within Ranorex!).
In the end, what we did is use TestMethod to call Run Configuration(s).
The following Ranorex How To article describes how to do this:
http://www.ranorex.com/news/article/howto-test-automation-with-tfs-and-ranorex.html
If this method does not suit your setup, you can probably invoke Ranorex Test Cases directly in a test method (and replicate whatever sequence is shown in your test suite), but that would be more complicated and involve more maintenance IMHO (which must be done by programmers).
Hope this helps!
Hugo
You're right about [TestMethod] representing a test case.
To imitate the [SETUP] and [TEARDOWN] behavior of Ranorex, instead of using the [TestInitialize] and [TestCleanup] attributes, you should use the [ClassInitialize] and [ClassCleanup] attributes (or [AssemblyInitialize] and [AssemblyCleanup] if you want them to run once for all classes in the project).
Note that these methods must be static, and the initialize ones should accept a TestContext parameter.
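A minimal sketch of that structure (the path is illustrative, and Process.Start is just one way to launch the application; Coded UI's ApplicationUnderTest.Launch could be used instead):

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class MyTestCases
{
    private static Process app;

    [ClassInitialize]
    public static void LaunchApp(TestContext context)
    {
        // Runs once before any [TestMethod] in this class: start the .exe.
        app = Process.Start(@"C:\Path\To\MyApp.exe"); // path is illustrative
    }

    [ClassCleanup]
    public static void CloseApp()
    {
        // Runs once after all [TestMethod]s in this class have finished: exit the .exe.
        if (app != null && !app.HasExited)
        {
            app.CloseMainWindow();
        }
    }

    [TestMethod]
    public void MyFirstTestCase()
    {
        // One Coded UI test case.
    }
}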
I'm using the latest NUnit to run Selenium tests. The tests are compiled into a class library DLL file which is then run by NUnit.
My problem is that before the automation begins, I need to run some initialization, such as creating a log file, setting up specific parameters, etc. I don't see a way to do this in NUnit - SetUp does this, but for every test or fixture - I just need to run this code once at the start of the application.
Any idea how I can do what I want?
Your help is very appreciated.
J.
Take a look at SetUpFixtureAttribute (more information here). It says:
This is the attribute that marks a class that contains the one-time setup or teardown methods for all the test fixtures under a given namespace. The class may contain at most one method marked with the SetUpAttribute and one method marked with the TearDownAttribute.
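A minimal sketch of such a class (names and the log-file path are illustrative; in NUnit 3 the methods would be marked [OneTimeSetUp] and [OneTimeTearDown] instead):

using System.IO;
using NUnit.Framework;

// Placed outside any namespace so it applies to the whole assembly.
[SetUpFixture]
public class GlobalTestSetup
{
    [SetUp]
    public void RunBeforeAnyTests()
    {
        // Runs once before any test: create the log file, set up parameters, etc.
        File.WriteAllText(@"C:\Temp\selenium-tests.log", "Test run started" + System.Environment.NewLine);
    }

    [TearDown]
    public void RunAfterAllTests()
    {
        // Runs once after all tests have finished: flush/close logs, clean up, etc.
    }
}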