I'm using the latest NUnit to run Selenium tests. The tests are compiled into a class library DLL file which is then run by NUnit.
My problem is that before the automation begins, I need to run some initialization, such as creating a log file, setting up specific parameters, etc. I don't see a way to do this in NUnit: SetUp() does this, but for every test or fixture, and I just need to run this code once at the start of the run.
Any idea how I can do what I want?
Your help is much appreciated.
J.
Take a look at SetUpFixtureAttribute (more information here). It says:
This is the attribute that marks a class that contains the one-time setup or teardown methods for all the test fixtures under a given namespace. The class may contain at most one method marked with the SetUpAttribute and one method marked with the TearDownAttribute.
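In current NUnit 3 releases the one-time methods of a SetUpFixture are marked [OneTimeSetUp] and [OneTimeTearDown]. A minimal sketch of what that could look like for your case (the class name and log path are only placeholders); placed outside any namespace, it runs once before and once after all tests in the assembly:

using NUnit.Framework;

[SetUpFixture]
public class GlobalTestSetup
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        // create the log file, set up parameters, etc.
        System.IO.File.WriteAllText("selenium-run.log", "Test run started" + System.Environment.NewLine);
    }

    [OneTimeTearDown]
    public void RunAfterAllTests()
    {
        // flush/close the log file, clean up, etc.
    }
}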
I have some tests that do write operations on a database. I know that's not really unit testing, but let's leave that aside.
In order to give every test a clean workspace, I roll back all transactions done so far. However, I randomly get concurrency errors due to database locks that cannot be established.
This is my code:
Test1.dll
[TestFixture]
class MyTest1
{
    [OneTimeSetUp]
    public void SetupFixture()
    {
        myWorkspace.StartEditing(); // this will establish a lock on the underlying database
    }

    [OneTimeTearDown]
    public void TearDownFixture()
    {
        myWorkspace.Rollback();
    }
}
The same code also exists in another test assembly; let's name it Test2.dll. Now when I use the NUnit console runner with nunit3-console Test1.dll Test2.dll, I get the following error:
System.Runtime.InteropServices.COMException : Table 'GDB_DatabaseLocks' cannot be locked; it is currently used by user 'ADMIN' (this is me) on host 'MyHost'
at ESRI.ArcGIS.Geodatabase.IWorkspaceEdit.StartEditing(Boolean withUndoRedo)
myWorkspace is a COM object (the ArcObjects interface IWorkspace) that relates to an MS Access database. I assume this happens because NUnit creates multiple threads that enter the above code at the same time. So I added the [NonParallelizable] attribute to both assemblies, without success. I also tried adding [Apartment(ApartmentState.STA)] to my assemblies in order to execute everything in a single thread, which resulted in the console runner never finishing.
What drives me nuts is that running my tests with ReSharper's test runner works perfectly. However, I have no clue how ReSharper starts NUnit; it seems ReSharper does not use nunit-console but the NUnit API instead.
Is there another way to force all my tests to run in a single thread? I use NUnit 3.10 and ArcGIS 10.8.
By default, the NUnit console will run multiple test assemblies in parallel. Add --agents=1 (for example: nunit3-console Test1.dll Test2.dll --agents=1) to force the two assemblies to run sequentially, under a single agent.
Just to clarify some of the other things you tried as well...
[NonParallelizable] is used to prevent the parallelization of different tests within a single assembly. By default, tests within an assembly do not run in parallel, so adding this attribute when you haven't specifically added [Parallelizable] at a higher level will have no effect.
[Apartment(ApartmentState.STA)] can be added as an assembly-level attribute and does not have to be added per test, as mentioned in the comments. Check out the docs here: https://docs.nunit.org/articles/nunit/writing-tests/attributes/apartment.html
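For reference, a minimal sketch of the assembly-level form (placed in AssemblyInfo.cs or any source file in the test project):

using System.Threading;
using NUnit.Framework;

// Require every test in this assembly to run in the Single-Threaded Apartment.
[assembly: Apartment(ApartmentState.STA)]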
I'm trying to write a NUnit 3 (3.8.1) extension that lets a test fail if it has a failed Debug.Assert(...) (instead of silently run through or even hang because it shows the failed assertion dialog).
In an NUnit 2 addin, I was able to do so by unregistering all debug trace listeners and adding my own that just throws an exception (as explained here, for example). However, this doesn't seem to work in NUnit 3 anymore.
I'm able to successfully deploy the extension, and its code is being executed.
[Extension(Description = "Failed Assertions Tracker", EngineVersion = "3.4")]
public class TrackerEventListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        Console.WriteLine(report); // prints -> so I know this method is being called
        Debug.Listeners.Clear();
        Debug.Listeners.Add(new UnitTestTraceListener());
    }
}
However, my unit test unfortunately shows me that the DefaultTraceListener is still installed.
[Test]
public void FailingAssertionShouldNotHang()
{
    foreach (object listener in Debug.Listeners)
    {
        Console.WriteLine(listener.GetType().FullName);
    }

    Debug.Fail("I'm sorry. I've failed.");
}
And so the test shows the dialog instead of simply failing.
What am I doing wrong? I suspect that the call to the static Listeners collection is ineffective because the actual test is run in a different context (different AppDomain, process, ?). But if this is the case, how can I solve my problem?
It's important to keep in mind that NUnit 3 Extensions, while capable of replacing NUnit 2 Addins in a few cases, are actually entirely different technology. NUnit 3 Extensions extend the Engine, which is separate from the framework.
In this case, your extension is setting up a Trace Listener that will capture any Debug Trace or Assert output produced by the engine. If the engine contained Trace statements (it doesn't) you would get the output. Meanwhile, the framework is happily continuing to run tests on its own.
Any code that will successfully capture Trace has to be part of the actual framework execution of the tests. This gives you two options.
Create a custom attribute that will capture trace. Custom attributes allow you to take actions when a test is being created or executed. They are created by implementing various interfaces supported by the framework. In your case, you would want to supply the attribute at assembly level, in order to capture all output produced by the assembly.
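One possible shape for option 1, sketched on the assumption that you use an action attribute (NUnit's ITestAction) applied at assembly level; the attribute name is invented here, and the listener is the one from your question:

using System;
using System.Diagnostics;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

// Hypothetical attribute; apply it once with [assembly: FailOnDebugAssert].
[AttributeUsage(AttributeTargets.Assembly)]
public class FailOnDebugAssertAttribute : Attribute, ITestAction
{
    public ActionTargets Targets => ActionTargets.Suite;

    public void BeforeTest(ITest test)
    {
        // Replace the DefaultTraceListener (which shows the dialog)
        // with the listener that throws instead.
        Debug.Listeners.Clear();
        Debug.Listeners.Add(new UnitTestTraceListener());
    }

    public void AfterTest(ITest test) { }
}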
Create the code as part of your tests themselves, without extending the framework at all. You would want to capture the Trace output in an assembly-level SetUpFixture, using the OneTimeSetUp attribute, and release it under the OneTimeTearDown attribute. Since this approach is simpler than creating a custom attribute, it's the one I would use.
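A minimal sketch of option 2, again reusing the UnitTestTraceListener from your question; placed outside any namespace, the fixture applies to the whole assembly (the fixture and method names are placeholders):

using System.Diagnostics;
using NUnit.Framework;

[SetUpFixture]
public class DebugListenerSetUpFixture
{
    private TraceListener[] originalListeners;

    [OneTimeSetUp]
    public void ReplaceDebugListeners()
    {
        // Remember the original listeners so they can be restored afterwards.
        originalListeners = new TraceListener[Debug.Listeners.Count];
        Debug.Listeners.CopyTo(originalListeners, 0);

        Debug.Listeners.Clear();
        Debug.Listeners.Add(new UnitTestTraceListener());
    }

    [OneTimeTearDown]
    public void RestoreDebugListeners()
    {
        Debug.Listeners.Clear();
        Debug.Listeners.AddRange(originalListeners);
    }
}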
The figure above shows a TestSuite/Plan in Ranorex.
[SETUP] represents launching the .exe recording, while [TEARDOWN] represents exiting the .exe.
How can I imitate this test case plan structure using only Visual Studio Coded UI?
It would be repetitive to launch and close my .exe in every test case; if possible, I would like to set this up only once.
Does a [TestMethod] in Coded UI represent a test case?
We have faced the same problem and resolved it by first making an assumption.
A Microsoft TestMethod does not correspond to a Ranorex Test Case; it corresponds to a Ranorex Run Configuration (as defined in the test suite).
A Run Configuration comes with its own configuration. As you may already know, on the command line it is possible to execute either a Ranorex Test Case or a Ranorex Run Configuration, but it is better/easier to execute a Run Configuration since it comes with context (and also most development can be done by non-programmers from within Ranorex!).
In the end, what we did is use TestMethod to call Run Configuration(s).
The following Ranorex How To article describes how to do this:
http://www.ranorex.com/news/article/howto-test-automation-with-tfs-and-ranorex.html
If this method does not suit your setup, you can probably invoke Ranorex Test Cases directly in test methods (and replicate whatever sequence is shown in your test suite), but that would be more complicated and involve more maintenance IMHO (which must be done by programmers).
Hope this helps!
Hugo
You're right about [TestMethod] representing a test case.
To imitate the [Setup] and [TearDown] behavior of Ranorex, instead of using the [TestInitialize] and [TestCleanup] attributes, you should use the [ClassInitialize] and [ClassCleanup] attributes (or [AssemblyInitialize] and [AssemblyCleanup] if you want them to run once for all classes in the project).
Note that these methods must be static, and the initialize ones should accept a TestContext parameter.
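A minimal sketch of that shape (the class and method names are placeholders, and in a Coded UI project the class would typically also carry the [CodedUITest] attribute):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MyUiTests
{
    [ClassInitialize]
    public static void LaunchApplication(TestContext context)
    {
        // launch the .exe once for all tests in this class,
        // e.g. via ApplicationUnderTest.Launch in Coded UI
    }

    [ClassCleanup]
    public static void CloseApplication()
    {
        // exit the .exe once after all tests in this class have run
    }

    [TestMethod]
    public void FirstTestCase()
    {
        // one test case
    }
}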
I have three unit tests that cannot pass when run from the build server—they rely on the login credentials of the user who is running the tests.
Is there any way (attribute???) I can hide these three tests from the build server, and run all the others?
Our build-server expert tells me that generating a vsmdi file that excludes those tests will do the trick, but I'm not sure how to do that.
I know I can just put those three tests into a new project, and have our build-server admin explicitly exclude it, but I'd really love to be able to just use a simple attribute on the offending tests.
You can tag the tests with a category, and then run tests based on category.
[TestMethod]
[TestCategory("RequiresLoginCredentials")]
public void TestMethod() { ... }
When you run mstest, you can specify /category:"!RequiresLoginCredentials"
There is an IgnoreAttribute. The post also lists the other approaches.
Other answers are old.
In modern Visual Studio (2012 and above), tests run with VSTest rather than MSTest.
The new command-line parameter is /TestCaseFilter:"TestCategory!=Nightly"
as explained in this article.
Open Test -> Windows -> Test List Editor.
There you can include or hide tests.
I figured out how to filter the tests by category in the build definition of VS 2012. I couldn't find this information anywhere else.
In the Process tab, under the Build process parameters, under Automated Tests, under Test Source, you need to write TestCategory=MyTestCategory in the Test Case Filter field (no quotes anywhere).
Then, in the test source file, you need to add the TestCategory attribute. I have seen a few ways to do this, but what works for me is adding it in the same attribute as the TestMethod, as follows.
[TestCategory("MyTestCategory"), TestMethod()]
Here you do need the quotes
When I run unit tests from a VS build definition (which is not exactly MSTest), in the Criteria tab of the Automated Tests property I specify:
TestCategory!=MyTestCategory
All tests with the category MyTestCategory get skipped.
My preferred way to do this is to have two kinds of test projects in my solution: one for unit tests, which can be executed from any context and should always pass, and another one with integration tests that require a particular context to run properly (user credentials, database, web services, etc.). My test projects use a naming convention (e.g. businessLogic.UnitTests vs businessLogic.IntegrationTests) and I configure my build server to only run unit tests (*.UnitTests). This way, I don't have to comment out Ignore attributes if I want to run the integration tests, and I find it easier than editing test lists.
I know Visual Studio offers some Unit Testing goodies. How do I use them, how do you use them? What should I know about unit testing (assume I know nothing).
This question is similar but it does not address what Visual Studio can do, please do not mark this as a duplicate because of that. Posted as Community Wiki because I'm not trying to be a rep whore.
Easily the most significant difference is that the MSTest support is built in to Visual Studio and provides unit testing, code coverage and mocking support directly. In order to do the same types of things in the external (third-party) unit test frameworks generally requires multiple frameworks (a unit testing framework and a mocking framework) and other tools to do code coverage analysis.
The easiest way to use the MSTest unit testing tools is to open the file you want to create unit tests for, right-click in the editor window and choose "Create Unit Tests..." from the context menu. I prefer putting my unit tests in a separate project, but that's just personal preference. Doing this will create a sort of "template" test class, which will contain test methods to allow you to test each of the functions and properties of your class. At that point, you need to determine what it means for the test to pass or fail (in other words, determine what should happen given a certain set of inputs).
Generally, you end up writing tests that look similar to this:
string stringVal = "This";
Assert.IsTrue(stringVal.Length == 4);
This says that for the variable named stringVal, the Length property should be equal to 4 after the assignment.
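Fleshed out into a complete (if contrived) MSTest test class, that assertion might look like this (the class and method names are just placeholders):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class StringLengthTests
{
    [TestMethod]
    public void AssigningThis_GivesLengthOfFour()
    {
        string stringVal = "This";

        // The test passes if the assertion holds and fails otherwise.
        Assert.IsTrue(stringVal.Length == 4);

        // Assert.AreEqual(expected, actual) gives a clearer failure message.
        Assert.AreEqual(4, stringVal.Length);
    }
}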
The resources listed in the other thread should provide a good starting point for understanding what unit testing is in general.
The unit testing structure in VS is similar to NUnit in its usage. One interesting (and useful) feature, however, does differ from NUnit significantly: VS unit testing can be used with code that was not written with unit testing in mind.
You can build a unit testing framework after an application is written because the test structure allows you to externally reference method calls and use ramp-up and tear-down code to prep the test environment. For example: if you have a method within a class that uses resources that are external to the method, you can create them in the ramp-up class (which VS creates for you) and then test it in the unit test class (also created for you by VS). When the test finishes, the tear-down class (yet again, provided for you by VS) will release resources and clean up. This entire process exists outside of your application and thus does not interfere with the code base.
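As an illustration of that ramp-up/tear-down pattern (the class, method, and resource names below are invented for the example), a per-test variant using [TestInitialize]/[TestCleanup] might look like this:

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ExternalResourceTests
{
    private string tempFile;

    [TestInitialize]
    public void RampUp()
    {
        // create the external resource the method under test depends on
        tempFile = Path.GetTempFileName();
        File.WriteAllText(tempFile, "fixture data");
    }

    [TestCleanup]
    public void TearDown()
    {
        // release the resource so the test leaves nothing behind
        if (File.Exists(tempFile))
            File.Delete(tempFile);
    }

    [TestMethod]
    public void MethodUnderTest_ReadsExternalFile()
    {
        Assert.AreEqual("fixture data", File.ReadAllText(tempFile));
    }
}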
The VS unit testing framework is actually very well implemented and easy to use. Best of all, you can use it with an application that was not designed with unit testing in mind (something that is not easy with NUnit).
First thing I would do is download a copy of TestDriven.Net to use as a test runner. This will add a right-click menu that will let you run individual tests by right-clicking in the test method and selecting Run Test(s). This also works for all tests in a class (right-click in the class, but outside a method), a namespace (right-click on the project or in the namespace outside a class), or an entire solution (right-click on the solution). It also adds the ability to run tests with coverage (built-in or NCover) or the debugger from the same right-click menu.
As far as setting up tests, generally I stick with one test project per project and one test class per class under test. Sometimes I will create test classes for aspects that run across a lot of classes, but not usually. The typical way I create them is to first create the skeleton of the class -- no properties, no constructor, but with the first method that I want to test. This method simply throws a NotImplementedException.
Once the class skeleton is created, I use the right-click Create Unit Tests in the method under test. This brings up a dialog that lets you create a new test project or select an existing one. I create, and name appropriately, a new test project and have the wizard create the classes. Once this is done you may want to also create the private accessor functions for the class in the test project as well. Sometimes these need to be updated (recreated) if your class changes substantially.
Now you have a test project and your first test. Start by modifying the test to define a desired behavior of the method. Write just enough code to (barely) pass the test. Continue writing tests and writing code, specifying more behavior for the method, until all of the behavior for the method is defined. Then move on to the next method or class as appropriate, until you have enough code to complete the feature you are working on.
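As an illustration of that loop (the class, method, and test names below are invented for the example):

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// 1. Skeleton of the class under test: the method exists but is not implemented yet.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent)
    {
        throw new NotImplementedException();
    }
}

// 2. The first test, written before the real implementation; it fails until
//    ApplyDiscount contains just enough code to satisfy it.
[TestClass]
public class PriceCalculatorTests
{
    [TestMethod]
    public void ApplyDiscount_TenPercentOff100_Returns90()
    {
        var calculator = new PriceCalculator();

        Assert.AreEqual(90m, calculator.ApplyDiscount(100m, 10m));
    }
}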
You can add more and different types of tests as required. You can also set up your source code control to require that some or all tests pass before check in.