In Visual Studio Team Services (VSTS), when defining a build, I can filter which tests are included or excluded from a test run.
Question: How do I exclude complete test classes from execution? The example in the screenshot demonstrates how I filter tests based on their category.
Sample test class which I'd like to exclude:
[TestClass] // .NET 4.5
public class SampleTests
{
    [TestMethod, TestCategory("Integration")]
    public void Test1() {}

    [TestMethod, TestCategory("Integration")]
    public void Test2() {}

    ...
}
Current configuration to exclude my integration tests:
Trial: The filter criterion ClassName!=SampleTests doesn't work; it appears to be reserved for store apps only. Fairly good documentation here: MSDN Blog by Vikram Agrawal.
Reason for asking: I've got test classes that initialize lots of data before running any test and run a clean-up job at the end. When all of a class's tests are excluded via the aforesaid filter, the class initialization and clean-up still happen, which consumes a lot of time and resources. I'd like to optimize this.
You can do this with:
FullyQualifiedName!=namespace.SampleTests
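If several classes need to be excluded, filter conditions can be joined with the & and | logical operators supported by the test case filter syntax. A sketch (the MyApp.Tests namespace and second class name are placeholders for your own):

FullyQualifiedName!=MyApp.Tests.SampleTests&FullyQualifiedName!=MyApp.Tests.OtherSampleTests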
Related
I have an NUnit test project which has two [TestFixture]s.
I want one of them to run before the other as it deals with file creation. They're currently running in the wrong order.
Given that I can't change the [Test]s or group them into a single unit test, is there a way in which I can have control over test fixture running order?
I have tried the [Order(int num)] attribute and have also tried creating a new playlist.
Neither of them works.
C#, .NET Framework, NUnit Testing Framework, Windows.
The documentation for [OrderAttribute] states that ordering for fixtures applies within the containing namespace.
Make sure that your fixtures are within the same namespace and that you've applied [OrderAttribute] at the test fixture level:
using NUnit.Framework;

namespace SameNamespace
{
    [TestFixture, Order(1)]
    public class MyFirstFixture
    {
        /* ... */
    }

    [TestFixture, Order(2)]
    public class MySecondFixture
    {
        /* ... */
    }
}
Also, it's important to remember that while MyFirstFixture will run before MySecondFixture, the ordering of the tests inside is local to the test fixture.
A test with [Order(1)] in MySecondFixture will run after all the tests in MyFirstFixture have completed.
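For illustration, a minimal sketch (fixture and test names reuse the example above):

[TestFixture, Order(2)]
public class MySecondFixture
{
    [Test, Order(1)]  // runs first within this fixture...
    public void TestA() { }

    [Test, Order(2)]  // ...but both run only after MyFirstFixture finishes
    public void TestB() { }
}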
Important note: the documentation also does not guarantee ordering.
Tests do not wait for prior tests to finish. If multiple threads are in use, a test may be started while some earlier tests are still being run.
Regardless, tests should follow the F.I.R.S.T. principles of testing, introduced by Robert C. Martin in his book "Clean Code".
The I in F.I.R.S.T. stands for isolated, meaning that tests should not depend on one another and each test should be responsible for the setup it requires to run correctly.
Try your best to eventually combine the tests into one if they are testing one thing, or rewrite your logic so that the piece of code covered by test 1 can be tested in isolation from the piece of code covered by test 2.
This will also have the side effect of cleaner code that adheres to the single responsibility principle.
Win-win situation.
I'm currently building out my automation framework using NUnit. I've got everything working just fine, and the last enhancement I'd like to make is to be able to map my automated test scripts to test cases in my testing software.
I'm using TestRail for all my test cases.
My ideal situation is to be able to decorate each test with the corresponding test case ID in TestRail, and when it comes time to report the test result in TestRail, I can just use the case ID. Currently I'm doing this by matching test name/script name.
Example -
[Test]
[TestCaseId("001")]
public void NavigateToSite()
{
    LoginPage login = new LoginPage(Driver);
    login.NavigateToLogInPage();
    login.AssertLoginPageLoaded();
}
And then in my teardown method, it would be something like -
[TearDown]
public static void TestTearDown(IWebDriver Driver)
{
    var testcaseId = TestContext.CurrentContext.TestCaseid;
    var result = TestContext.CurrentContext.Result.Outcome;

    // Static method to report to TestRail using API
    Report.ReportTestResult(testcaseId, result);
}
I've just made up the testcaseid attribute, but this is what I'm looking for.
[TestCaseId("001")]
I may have missed this if it already exists - but if not, how do I go about extending NUnit to do this?
You can use PropertyAttribute supplied by NUnit.
Example:
[Property("TestCaseId", "001")]
[Test]
public void NavigateToSite()
{
...
}
[TearDown]
public void TearDown()
{
var testCaseId = TestContext.CurrentContext.Test.Properties["TestCaseId"];
}
In addition, you can create a custom property attribute by deriving from PropertyAttribute - see the NUnit documentation.
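A minimal sketch of such an attribute (the name TestCaseIdAttribute is my own invention): deriving from PropertyAttribute and passing the value to the base constructor stores it under the property name "TestCaseId", i.e. the class name minus the "Attribute" suffix.

using System;
using NUnit.Framework;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class TestCaseIdAttribute : PropertyAttribute
{
    // PropertyAttribute's one-argument constructor uses the derived
    // class name (minus "Attribute") as the property key.
    public TestCaseIdAttribute(string id) : base(id) { }
}

With this in place, [TestCaseId("001")] can be read back via TestContext.CurrentContext.Test.Properties.Get("TestCaseId").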
For many years I recommended that people not do this: mix test management code into the tests themselves. It's an obvious violation of the single responsibility principle and it creates difficulties in maintaining the tests.
In addition, there's the problem that the result presented in TearDown may not be final. For example, if you have used [MaxTime] on the test and it exceeds the time specified, your successful test will change to a failure. Several other built-in attributes work this way and of course there is always the possibility of a user-created attribute. The purpose of TearDown is to clean up after your code, not as a springboard for creating a reporting or test management system.
That said, with older versions of NUnit, folks got into the habit of doing this. This was in part due to the fact that NUnit addins (the approach we designed) were fairly complicated to write. There were also fewer problems because NUnit V2 was significantly less extensible on the test side of things. With 3.0, we provided a means for creating test management functions such as this as extensions to the NUnit engine, and I suggest you consider using that facility instead of mixing them in with the test code.
The general approach would be to create a property, as suggested in Sulo's answer, but to replace your TearDown code with an EventListener extension that reports the result to TestRail. The EventListener has access to all the result information - not just the limited fields available in TestContext - in XML format. You can readily extract whatever needs to go to TestRail.
Details of writing TestEngine extensions are found here: https://github.com/nunit/docs/wiki/Writing-Engine-Extensions
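As a rough sketch of what such an extension can look like (the TestRail reporting call itself is left hypothetical), an engine event listener receives each test result as an XML fragment:

using System.Xml;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

[Extension(Description = "Reports test results to TestRail")]
public class TestRailEventListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        var xml = new XmlDocument();
        xml.LoadXml(report);
        if (xml.DocumentElement.Name != "test-case")
            return; // only individual test results are of interest here

        // The result ("Passed", "Failed", ...) and any properties attached
        // to the test are available on the test-case element.
        var result = xml.DocumentElement.GetAttribute("result");
        var idNode = xml.DocumentElement.SelectSingleNode(
            "properties/property[@name='TestCaseId']");
        var testCaseId = idNode?.Attributes["value"]?.Value;

        // Report.ReportTestResult(testCaseId, result); // hypothetical TestRail API call
    }
}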
Note that there are some outstanding issues if you want to use extensions under the Visual Studio adapter, which we are working on. Right now, you'll get the best experience using the console runner.
I have come up with a growing library of quick one-click functions that I want to call as development tools. So far, I have written them as [TestMethod, TestCategory("ToolsManagement")] functions that I mark with [Ignore]; when I want to use a tool, I remove the [Ignore] line and run the test.
What is a better way to organize my tools library as I am tired of seeing them as test functions?
Edit 1:
I'll try to explain a bit more what I need...I hope it will be clearer.
While developing, I debug and test the application, so I often insert/update data in the database. Often I want to create a snapshot, restore a snapshot, or recreate the database so it is empty. I have also coded functions to reproduce business cases, for example inserting a lot of data at different places, instead of doing it manually. Those are examples of the development tools that I want quick and easy access to, but not from the Test View.
I'd suggest you use a new Visual Studio 2015 feature called "C# Interactive" to execute your utility functions. You can just mark them in the code and press Ctrl+E, Ctrl+E to execute the method. This way you can remove the [TestMethod] attributes altogether and thus not have them show up in the VS Test Explorer!
Another suggestion I can give you is to bundle this code so it can be executed from the CLI. This could help you automate tasks and produce nice traces on the CLI about what's happening - it could be more convenient than executing it in VS!
I strongly suggest a separate console application containing all your jobs or, in case everything is related to your database server, one or more .sql files containing those operations (then select what you need to run and press Ctrl+Alt+E to execute).
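A minimal sketch of the console-application route (the job methods are stand-ins for your own tools):

using System;

public static class DevTools
{
    public static void Main(string[] args)
    {
        // Dispatch to the requested tool based on the first CLI argument.
        switch (args.Length > 0 ? args[0] : "")
        {
            case "snapshot": CreateSnapshot(); break;
            case "restore":  RestoreSnapshot(); break;
            case "recreate": RecreateEmptyDatabase(); break;
            default:
                Console.WriteLine("usage: devtools <snapshot|restore|recreate>");
                break;
        }
    }

    static void CreateSnapshot() { /* your existing tool code */ }
    static void RestoreSnapshot() { /* ... */ }
    static void RecreateEmptyDatabase() { /* ... */ }
}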
You can sort your tests by different traits by clicking on the small icons above the run-all command. This might help you sort and collapse the ones you aren't running. You could also pull those 'tools' out into another project in your solution so they aren't part of your unit test project.
The functionality you are expecting is a conditional ignore attribute, which is not available in MS unit testing (MSTest).
Workaround 1:
As a workaround you can use the code snippet below:
[TestClass]
public class UnitTest1
{
    [TestInitialize()]
    public void Initialize()
    {
        //Assert.Inconclusive("Message");
    }

    [TestMethod]
    public void TestMethod1()
    {
    }

    [TestMethod]
    public void TestMethod2()
    {
    }
}
You can toggle the line
//Assert.Inconclusive("Message");
(comment it in or out) to achieve the expected functionality.
Note: since we use [TestInitialize()], every test method in this TestClass will run it first.
Workaround 2:
You can make use of conditional compilation as illustrated below
[TestClass]
public class UnitTest1
{
    [TestMethod
#if !Ignore   // define the "Ignore" compilation symbol to let these tests run
    , Ignore()
#endif
    ]
    public void TestMethod1()
    {
    }

    [TestMethod
#if !Ignore
    , Ignore()
#endif
    ]
    public void TestMethod2()
    {
    }
}
My current way of organizing unit tests boils down to the following:
Each project has its own dedicated project with unit tests. For a project BusinessLayer, there is a BusinessLayer.UnitTests test project.
For each class I want to test, there is a separate test class in the test project placed within exactly the same folder structure and in exactly the same namespace as the class under test. For a class CustomerRepository from a namespace BusinessLayer.Repositories, there is a test class CustomerRepositoryTests in a namespace BusinessLayerUnitTests.Repositories.
Methods within each test class follow simple naming convention MethodName_Condition_ExpectedOutcome. So the class CustomerRepositoryTests that contains tests for a class CustomerRepository with a Get method defined looks like the following:
[TestFixture]
public class CustomerRepositoryTests
{
    [Test]
    public void Get_WhenX_ThenRecordIsReturned()
    {
        // ...
    }

    [Test]
    public void Get_WhenY_ThenExceptionIsThrown()
    {
        // ...
    }
}
This approach has served me quite well, because it makes locating the tests for a given piece of code really simple. On the other hand, it makes code refactoring more difficult than it should be:
When I decide to split one project into multiple smaller ones, I also need to split my test project.
When I want to change namespace of a class, I have to remember to change a namespace (and folder structure) of a test class as well.
When I change name of a method, I have to go through all tests and change the name there, as well. Sure, I can use Search & Replace, but that is not very reliable. In the end, I still need to check the changes manually.
Is there some clever way of organizing unit tests that would still allow me to locate tests for a specific code quickly and at the same time lend itself more towards refactoring?
Alternatively, is there some, uh, perhaps Visual Studio extension, that would allow me to somehow say "hey, these tests are for that method, so when the name of the method changes, please be so kind and change the tests as well"? To be honest, I am seriously considering writing something like that myself :)
After working a lot with tests, I've come to realize that (at least for me) all those restrictions bring a lot of problems in the long run, rather than benefits. So instead of using names and conventions to make the association, we've started using code. Each project and each class can have any number of test projects and test classes. All the test code is organized based on what is being tested from a functionality perspective (or which requirement it implements, or which bug it reproduces, etc...). Then, to find the tests for a piece of code, we do this:
[TestFixture]
public class MyFunctionalityTests
{
    public IEnumerable<Type> TestedClasses()
    {
        // We can find the tests for a class because each test fixture
        // references the classes it covers in this special method.
        return new[] { typeof(SomeTestedType), typeof(OtherTestedType) };
    }

    [Test]
    public void TestRequirement23423432()
    {
        // ... test code.

        // We do something similar for methods if we want to track which
        // methods are being tested (we usually don't).
        this.TestingMethod(someObject.methodBeingTested);

        // ...
    }
}
We can use tools like ReSharper's "Find Usages" to locate the test cases, etc... And when that's not enough, we do some magic with reflection and LINQ by loading all the test classes and running something like allTestClasses.Where(testClass => testClass.TestedClasses().Contains(someTestedClass)).
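In case it helps, a sketch of that reflection/LINQ lookup under the convention above (TestedClasses is the marker method from the example; the assembly scan is simplified and assumes fixtures have a parameterless constructor):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class TestFinder
{
    // Returns all fixture types in the given test assembly whose
    // TestedClasses() method reports coverage of testedType.
    public static IEnumerable<Type> FixturesCovering(Type testedType, Assembly testAssembly)
    {
        return testAssembly.GetTypes()
            .Where(t => !t.IsAbstract && t.GetMethod("TestedClasses") != null)
            .Where(t =>
            {
                var fixture = Activator.CreateInstance(t);
                var tested = (IEnumerable<Type>)t.GetMethod("TestedClasses")
                                                 .Invoke(fixture, null);
                return tested.Contains(testedType);
            });
    }
}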
You can also use the TearDown to gather information about which methods are tested by each method/class and do the same.
One way to keep class and test locations in sync when moving the code:
1. Move the code to a uniquely named temporary namespace.
2. Search for references to that namespace in your tests to identify the tests that need to be moved.
3. Move the tests to the proper new location.
4. Once all references to the temporary namespace from tests are in the right place, move the original code to its intended target.
One strength of end-to-end or behavioral tests is the tests are grouped by requirement and not code, so you avoid the problem of keeping test locations in sync with the corresponding code.
Regarding VS extensions that associate code to tests, take a look at Visual Studio's Test Impact. It runs the tests under a profiler and creates a compact database that maps IL sequence points to unit tests. So in other words, when you change the code Visual Studio knows which tests need to be run.
One unit test project per project is the way to go. We tried a mega unit test project, but it increased the compile time.
To help you refactor, use a product like ReSharper or CodeRush.
Is there some clever way of organizing unit tests that would still
allow me to locate tests for a specific code quickly
ReSharper has some good shortcuts that allow you to search for files or code.
As you said, for the class CustomerRepository there is a test class CustomerRepositoryTests.
The R# shortcut shows an input box for what you want to find; in your case you can just input CRT and it will show you all the files whose names contain the capitals C, R and T in that order.
It also allows you to search with wildcards, e.g. CR* will show you the list of files CustomerRepository and CustomerRepositoryTests.
I am working on an MVC project and was wondering whether to use the Basic Unit Test or the Unit Test template. I have read articles/explanations about both but can't see much difference between the two. What are the main differences, and which one is preferable for a large-scale app with a DB backend?
The difference between Visual Studio's Basic Unit Test item template and Unit Test item template is that the latter includes support for ClassInitialize, ClassCleanup, TestInitialize and TestCleanup routines, allowing you to execute code before/after the test fixture and before/after each unit test. If you don't need such functionality in your unit test, you can go with the basic template, which generates the following file:
[TestClass]
public class UnitTest2
{
    [TestMethod]
    public void TestMethod1()
    {
    }
}
Of course, you can always add the corresponding routines to a basic unit test later if you want to support this functionality.
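For comparison, a sketch of what the fuller Unit Test template provides (the method bodies and comments are mine; the attribute signatures are the standard MSTest ones):

[TestClass]
public class UnitTest1
{
    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        // Runs once, before any test in this class.
    }

    [TestInitialize]
    public void TestInit()
    {
        // Runs before each test.
    }

    [TestMethod]
    public void TestMethod1()
    {
    }

    [TestCleanup]
    public void TestCleanupRoutine()
    {
        // Runs after each test.
    }

    [ClassCleanup]
    public static void ClassCleanupRoutine()
    {
        // Runs once, after all tests in this class have finished.
    }
}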