I am working on an application that is mostly single-threaded and single-user.
There are a few worker threads here and there, and they use only thread-safe objects and classes. The unit tests actually exercise those with multiple threads (created explicitly for the tests), and they pass.
The VSTS unit tests fail when testing business objects and sub-systems that are not thread-safe. It is okay for them not to be thread-safe; that's the way the application uses them.
But the 'one thread per TestMethod' approach of MSTest kills us. I've had to add object locks in many unit test classes just to ensure the tests run one after the other (I don't really care about the order, but I can't have two test methods hitting the same object at the same time).
The code looks like this:
[TestClass]
public class TestSomeObject
{
    static object turnStile = new object();

    ...

    [TestMethod]
    public void T01_TestThis()
    {
        lock (turnStile)
        {
            // actual test code
        }
    }

    [TestMethod]
    public void T02_TestThat()
    {
        lock (turnStile)
        {
            // actual test code
        }
    }
}
Is there a better/more elegant way to make the test run sequentially?
Use an Ordered Test.
Test > New Test > Ordered Test
You can use a playlist.
Right-click on the test method -> Add to playlist -> New playlist
You can then specify the execution order.
There is the notion of an "Ordered Test", in which you can list tests in sequence. It is geared more towards ensuring a certain sequential order, but I can't see how that could be achieved unless test B waits for test A to complete, so it should also serialize your tests.
Apart from that, it is unfortunate that your tests interfere with each other. There are Setup / TearDown methods that can be applied per test, so it may after all be possible to isolate the tests from each other.
You can explicitly acquire a mutex for each test execution, either in the specific tests you want to serialize, or for all the tests in a class (whatever shares the same mutex string).
For an entire test class, you can use the TestInitialize and TestCleanup attributes like so:
// A named Mutex is machine-wide: every test that uses the same name
// string shares the same lock, even across test assemblies.
private static readonly Mutex testMutex = new Mutex(false, "MySpecificTestScenarioUniqueMutexString");

[TestInitialize]
public void Initialize()
{
    // Block until no other test holds the mutex.
    testMutex.WaitOne();
}

[TestCleanup]
public void Cleanup()
{
    testMutex.ReleaseMutex();
}
To be clear, this isn't a test-framework feature; ANY locking construct should work. I'm using the system-provided Mutex class in this case:
https://msdn.microsoft.com/en-us/library/system.threading.mutex(v=vs.110).aspx
I finally used the ordered-test method. It works well.
However, I had a hell of a time making it work with the NAnt build.
Running only the ordered test list in the build requires using the /testmetadata and /testlist switches in the MSTest invocation.
The documentation on these is sketchy, to use a kind description. I googled all over for examples of "MSTest /testmetadata /testlist" to no effect.
The trick is simple, however, and I feel compelled to give it back to the community in case someone else bumps into the same issue.
Edit the test metadata file (the one with the .vsmdi extension) and add a new list to the list of tests (the first node in the tree in the left pane). Give it the name you want, for example 'SequentialTests'.
If you used a /testcontainer switch in the MSTest invocation, remove it.
Add a /testmetadata: switch for MSTest, pointing at your .vsmdi file.
Add a /testlist:SequentialTests switch for MSTest (or whatever name you used).
Then MSTest runs only the tests listed in the test list you created.
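Put together, the invocation ends up looking something like this (MySolution.vsmdi is a placeholder for whatever your metadata file is actually called):
MSTest /testmetadata:MySolution.vsmdi /testlist:SequentialTests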
If someone has a better method, I'd like to hear about it!
I used ordered tests too, and configured them easily on Jenkins; just use the command:
MSTest /testcontainer:"orderedtestfilename.orderedtest" /resultsfile:"testresults.trx"
Starting from NUnit 3.13 (I'm trying 3.13.1), a new attribute was introduced for TestFixture isolation when running tests in parallel within a class.
Has anybody managed to use [Parallelizable(ParallelScope.All)] + [FixtureLifeCycle(LifeCycle.SingleInstance)] and run WebDriver tests in parallel within the same class?
After activating this feature, I started getting unpredictable errors, like the ones I used to get without the new attribute. It looks like the fixture is not isolated.
NOTE: everything works fine when running WebDriver test classes in parallel.
WebDriver is initialized in [SetUp]; the TestFixture base class looks like the following:
[SetUp]
protected void Initialize()
{
    //InitializeWebDriver();
    Driver = new MyDriver();
}

[TearDown]
public void TestFixtureTearDown()
{
    try
    {
        //...
    }
    finally
    {
        Driver.Quit();
        Driver = null;
    }
}
Tests look like this:
[TestFixture]
[Parallelizable(ParallelScope.All)]
[FixtureLifeCycle(LifeCycle.SingleInstance)]
public class TestClassA : TestBase
{
    [Test]
    public void TestA1()
    {
    }
}
The mistake in the code was very obvious: I used SingleInstance instead of InstancePerTestCase.
I created a template project with 2 classes of 3 tests each.
All 6 can be executed simultaneously without any failures.
https://github.com/andrewlaser/TestParallelNUnit
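For reference, the corrected fixture declaration looks like this (LifeCycle.InstancePerTestCase is the NUnit value that constructs a separate fixture instance per test case):
[TestFixture]
[Parallelizable(ParallelScope.All)]
[FixtureLifeCycle(LifeCycle.InstancePerTestCase)]
public class TestClassA : TestBase
{
    [Test]
    public void TestA1()
    {
        // Each parallel test case now gets its own TestClassA, and thus
        // its own Driver from the base class's [SetUp].
    }
}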
The general philosophy of NUnit attributes around parallel execution is that they are intended to tell NUnit that it's safe to run your class in parallel. Using them doesn't make it safe... that's up to you.
The new attribute makes it easier for you to do that but doesn't guarantee anything. It does protect you from a certain kind of error: two parallel test case instances making incompatible changes to the same member of the test class. But that's a very small part of all the ways your tests can fail when run in parallel.
Putting it another way: your fixture is now safe, but the things your fixture refers to (drivers, files, remote services) are in no way protected. If your fixtures share anything external, that's a source of failure.
Unfortunately, you haven't given enough information for me to point out what's specifically wrong here. For example, you haven't shown how or where the Driver property is declared. With more info on your part, I'll be glad to update my answer.
Going back to my initial point, your use of the attributes is no more than a promise you are making to NUnit, something like: "I have made sure that it's safe to run this test in parallel." In your case, you're making the even bigger promise that all the tests in your fixture are safe to run in parallel. That's not something I would do right out of the box. I'd start with just two tests that I think can safely run together and expand from there. Obviously, it's almost always safe to run one test in parallel. :-)
Is there any way I can filter test results by specifying a keyword that should NOT appear?
Context:
I've written some C# classes and methods, but have not implemented the methods yet (I made them throw a NotImplementedException so that they clearly indicate this). I've also written some test cases for those functions, but they currently fail because the methods throw NotImplementedException. This is OK and expected for now.
I want to ignore these tests for now and look at other test results that are more meaningful, so I was trying to figure out how to list results that do not contain "NotImplementedException". However, I can only list the results that do contain that keyword, not those that don't. Is there any way to list the results that don't, using wildcards or something?
I see a lot of information about the new Test Explorer in VS2012, but that's not a feature in 2010, which is what I'm using.
You can sort of cheat to pass these tests, if you want to, by marking that the test expects an exception to be thrown, which thereby passes the test.
[TestMethod]
[ExpectedException(typeof(NotImplementedException))]
public void NotYetImplementedMethod()
{
    // ...
}
Alternatively, you can create categories for your tests. This way you can choose which tests to run in the Test Explorer, if you assign a category to most of your tests.
[TestMethod]
[TestCategory("NotImplementedNotTested")]
public void NotYetImplementedMethod()
{
    // ...
}
Last but not least, the simplest solution: [Ignore]. This will skip the tests altogether.
[TestMethod]
[Ignore]
public void NotYetImplementedMethod()
{
    // ...
}
Reference:
http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Getting-Started-with-Unit-Testing-Part-1
http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Getting-Started-with-Unit-Testing-Part-2
I've also written some test cases for those functions
If your tests are linked to Test Case work items on TFS, you could simply set each Test Case's State to Design. Then, in your Test Plans, exclude all test cases that are in the Design state.
If they are not linked to actual Test Case work items (say, a batch of unit tests), I believe the best solution is the Ignore attribute (as @Serv already mentioned), because I don't think you want to run tests that are not implemented yet, or waste time finding out how to exclude them from test results.
I've got a codebase that makes use of static variables in a number of cases where that makes sense, for example, flagging that something has already run once since launch.
Of course, this can lead to issues with unit testing, where suddenly order matters and the outcome of a test may depend on whether other code has been hit first. My understanding of Microsoft.VisualStudio.TestTools.UnitTesting is that whenever I run a set of unit tests, all tests in the same project run within the same process, so any static state is maintained from test to test, whereas a unit test project boundary also implies a process boundary. Thus, if I run three tests from project A and then a fourth from project B, static state is maintained from 1 > 2 > 3 (in whatever order they run), but test 4 starts virgin, with all static state at its defaults.
So now I have two questions:
1) Is my assessment correct that unit test projects have a 1:1 relationship with processes when tests are run as a group (Run All or Run Selected), or is there more nuance that I'm missing?
2) Regardless, if I have a test that definitely needs fresh, default static state for the custom objects it uses and tests, do I have a more elegant option for getting it than giving the test its own project?
Statics are not actually per process but per application domain, represented by the AppDomain class. A single process can host several AppDomains. AppDomains have their own statics, can sandbox partially trusted code, and can be unloaded, allowing newer versions of the same assembly to be hot-swapped without restarting the application.
Your test runner is likely creating a new AppDomain per test assembly, so each assembly gets its own static variables. You can create an AppDomain yourself to do the same on the fly. This is not typically great for pure unit tests, but I've had to work with "rude" libraries that do all kinds of static initialization and caching that cannot be cleaned out or reset. In those sorts of integration scenarios it is very useful.
You can use this helper to run a simple delegate:
using System;

public static class AppDomainHelper
{
    // Runs the action in a throw-away AppDomain so that any static
    // state it touches is discarded when the domain unloads.
    public static void Run(Action action)
    {
        var domain = AppDomain.CreateDomain("test domain");
        try
        {
            domain.DoCallBack(new CrossAppDomainDelegate(action));
        }
        finally
        {
            AppDomain.Unload(domain);
        }
    }
}
One caution: the delegate passed to Run cannot have any captured variables (as from a lambda). That doesn't work because the compiler generates a hidden closure class that is not serializable, so it cannot pass through the AppDomain boundary.
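For illustration, usage looks something like this (SomeClass.HasRunOnce is a hypothetical static flag standing in for your real state):
[TestMethod]
public void Test_WithFreshStaticState()
{
    // TestBody is a static method, so there is no closure to serialize.
    AppDomainHelper.Run(TestBody);
}

private static void TestBody()
{
    // Runs inside the new AppDomain: statics start at their defaults
    // here and evaporate when the domain is unloaded.
    Assert.IsFalse(SomeClass.HasRunOnce);
}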
Your assessment is correct as far as I know: the assembly is loaded at the start of the test process, and any static state is maintained throughout the tests.
You should always start with a "fresh" state. Unit tests should be able to run in any order, with no dependencies whatsoever. The reason is that your tests need to be reliable: a test should only ever fail for one reason, namely that the code it's testing changed. If you have tests that depend on other tests, you can easily end up with one test failing and "breaking the chain" so that a dozen other tests fail.
You can use the TestInitialize attribute to define a method that runs before every test and resets the state to the baseline, as sketched below.
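A minimal sketch, assuming your code exposes some way to reset its statics (SomeClass.ResetForTesting is hypothetical):
[TestInitialize]
public void ResetStaticState()
{
    // Hypothetical reset hook: restores the static flags/caches this
    // test class depends on to their default values before each test.
    SomeClass.ResetForTesting();
}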
Another way to enable this is to wrap your static state in a singleton, then put a "back door" into the singleton so that you can inject an instance of the singleton class, allowing you to configure the state of the application as part of arranging your test.
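A sketch of that back door (all names here are illustrative, not an existing API):
public sealed class AppState
{
    private static AppState instance = new AppState();

    public static AppState Instance
    {
        get { return instance; }
        // Back door for tests, exposed to the test assembly
        // via [InternalsVisibleTo].
        internal set { instance = value; }
    }

    // The kind of state that used to be a bare static field.
    public bool HasRunOnce { get; set; }
}
A test can then assign AppState.Instance = new AppState(); in its arrange step to start from a known state.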
If you're not looking to test the global state itself, but rather what you do with values you get from it, you can work around it.
Consider this simple class that uses some static property.
public class Foo
{
    public int Bar(int baz)
    {
        return baz + GlobalState.StaticValue;
    }
}
You can refactor it like this:
public class Foo
{
    public virtual int GetGlobalStaticValue()
    {
        return GlobalState.StaticValue;
    }

    public virtual int Bar(int baz)
    {
        return baz + this.GetGlobalStaticValue();
    }
}
I made the members virtual because that's what Rhino Mocks requires, but you get the idea: while running live, your class pulls global state as it does now, but you now have a hook to substitute the values returned in your test scenario.
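For completeness, a test can then stub the hook. This is a sketch assuming Rhino Mocks' AAA syntax (the Rhino.Mocks namespace, version 3.5+); the values are arbitrary:
[TestMethod]
public void Bar_AddsGlobalValueToArgument()
{
    // Partial mock: Bar() keeps its real implementation, while the
    // virtual hook that reads global state is stubbed out.
    var foo = MockRepository.GeneratePartialMock<Foo>();
    foo.Stub(f => f.GetGlobalStaticValue()).Return(10);

    Assert.AreEqual(15, foo.Bar(5));
}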
I have an application that has many unit tests in many classes. Many of the tests have DeploymentItem attributes to provide required test data:
[TestMethod]
[DeploymentItem(@"UnitTesting\testdata1.xml", "mytestdata")]
public void Test1()
{
    /* test */
}

[TestMethod]
[DeploymentItem(@"UnitTesting\testdata2.xml", "mytestdata")]
public void Test2()
{
    /* test */
}
When the tests are run individually, they pass. When all are run at once (for example, with "Run all tests in the current context"), some tests fail, because the DeploymentItems left behind by other tests cause the tests to grab the wrong data (or a test incorrectly uses the files meant for another test that hasn't run yet).
I discovered the [TestCleanup] and [ClassCleanup] attributes, which seem like they would help. I added this:
[TestCleanup]
public void CleanUp()
{
    if (Directory.Exists("mytestdata"))
        Directory.Delete("mytestdata", true);
}
The trouble is, this runs after every test method, and it seems that it will delete the DeploymentItems of tests that have not run yet. [ClassCleanup] would avoid that, but it would not run often enough to prevent the original issue.
From the MSDN documentation, it seems that DeploymentItem only guarantees that the files will be there before the test executes, but it is no more specific than that. I think I am seeing the following sequence:
1. The DeploymentItem for a test is deployed.
2. (Other stuff happens?)
3. The test cleanup from a previous test executes.
4. The next test executes.
5. The test fails because its files are gone.
Does anyone know the execution order of the different test attributes? I've been searching but haven't found much.
I have thought about having each DeploymentItem use its own unique folder for its data, but this becomes difficult with hundreds of tests to go through.
The order of the test attributes is as follows:
Methods marked with the AssemblyInitializeAttribute.
Methods marked with the ClassInitializeAttribute.
Methods marked with the TestInitializeAttribute.
Methods marked with the TestMethodAttribute.
Part of the problem is that Visual Studio runs tests in a non-deterministic order (by default; this can be changed) and several at a time. This means that you cannot delete the folder after each test.
In general, it is much better if unit tests can avoid going to disk at all: you don't want anything besides the code under test to be able to break your tests.
I had a similar problem: in a few tests I needed to delete a deployed item. All tests passed when run individually but failed when run in a playlist. My solution is ugly but simple: use a different folder for every test.
For example:
[TestMethod]
[DeploymentItem("Resources\\DSC06247.JPG", "D1")]
public void TestImageUploadWithRemoval()
{
    // Arrange
    myDeployedImagePath = Path.Combine(TestContext.DeploymentDirectory, "D1", "DSC06247.JPG");

    // Act ...
}

[TestMethod]
[DeploymentItem("Resources\\DSC06247.JPG", "D2")]
public void TestImageUploadWithoutRemoval()
{
    // Arrange
    myDeployedImagePath = Path.Combine(TestContext.DeploymentDirectory, "D2", "DSC06247.JPG");

    // Act ...
}
I have several suites of integration tests implemented in C#/NUnit. Each test suite is a separate class, and each fixture setup creates and populates a SQL Server database from scripts. This all used to work just fine prior to ReSharper 5.1.
Unfortunately, ReSharper 5.1 begins to run several fixtures at the same time. This is a breaking change: they all attempt to create and populate the same database, which obviously ends in a mess. Is there any way I could have ReSharper run my test fixtures serially?
If not, what would you suggest for running my NUnit test fixtures serially, one fixture at a time?
The order in which individual tests run does not matter.
I don't know whether it is possible to prevent ReSharper from running tests in parallel; if not, the following hack might work: create a static class holding a static readonly lock object. Then, in [TestFixtureSetUp], call Monitor.Enter() on that object, and call Monitor.Exit() on it in [TestFixtureTearDown]. That way, only one test fixture is allowed to run at a time. Not pretty, though...
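A minimal sketch of that hack (the type and member names are illustrative; it assumes NUnit runs a fixture's setup and teardown on the same thread, which Monitor requires):
public static class FixtureGate
{
    public static readonly object Lock = new object();
}

// In each fixture that must not run concurrently with the others:
[TestFixtureSetUp]
public void AcquireGate()
{
    // Blocks until no other fixture holds the gate.
    Monitor.Enter(FixtureGate.Lock);
}

[TestFixtureTearDown]
public void ReleaseGate()
{
    Monitor.Exit(FixtureGate.Lock);
}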
Are you sure about this? I just tried it out by putting a trace of the following form in tests in three different NUnit fixtures, followed by a "Run all". It doesn't seem to be running in parallel.
Trace.WriteLine(DateTime.Now.ToString("hh:mm:ss.ffff") + "VC:Start");
Trace.WriteLine(DateTime.Now.ToString("hh:mm:ss.ffff") + "VC:Done");
Output I see is : (R# Build 5.1.1753.1)
01:06:41.6639IOC
01:06:41.6649Done - IOC
01:06:41.6679VC:Start
01:06:41.6729VC:Done
01:06:41.2439Mef
01:06:41.6589Mef-Done
Unless the code being tested does its own transaction management, you can run each test inside a transaction. That way, the different tests won't disturb each other. Also, you don't have to worry about cleaning up after each test, since each test's transaction can simply be rolled back after the test completes.
In our projects, we usually let our integration tests derive from a class that looks more or less like this:
public class TransactionalTestFixture
{
    private TransactionScope TxScope;

    [SetUp]
    public void TransactionalTestFixtureSetUp()
    {
        TxScope = new TransactionScope(TransactionScopeOption.RequiresNew,
            new TransactionOptions { IsolationLevel = IsolationLevel.Serializable });
    }

    [TearDown]
    public void TransactionalTestFixtureTearDown()
    {
        // Disposing the scope without calling Complete() rolls back
        // everything the test did.
        TxScope.Dispose();
    }
}
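Usage is then plain inheritance (CustomerRepository and its members are placeholders for whatever touches your database):
[TestFixture]
public class CustomerRepositoryTests : TransactionalTestFixture
{
    [Test]
    public void InsertedCustomerCanBeReadBack()
    {
        var repository = new CustomerRepository();
        repository.Insert(new Customer("Alice"));

        Assert.IsNotNull(repository.FindByName("Alice"));
        // No explicit cleanup: the ambient transaction rolls back
        // when TxScope is disposed in TearDown.
    }
}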
Give them alphabetical names, i.e. prefix them with a letter that signifies their desired running order. If you need this to happen, you should be able to accept this nasty naming convention too.