What can cause the [Ignore] attribute to be ignored? - c#

I have a test suite with a few tests that are failing because the requirements have changed out from under them, necessitating code changes that break the tests. It's not immediately obvious how to fix the tests, so for the moment I want to simply disable them.
I added the Microsoft.VisualStudio.TestTools.UnitTesting.IgnoreAttribute attribute to these tests, but they're still being run by Test Explorer. I've considered the possibility that the test runner we're using would use its own mechanism, but that seems unlikely, as it responds to the TestMethodAttribute and TestCategoryAttribute attributes from the same namespace. One of the tests looks like this:
[TestMethod]
[TestCategory("Integration")]
[Ignore]
public void TestJobIntegrationDev01()
{
    // test code goes here
}
How do I determine why Ignore is not working in this case?
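One way to answer that directly is to ask the compiled assembly which attributes the method actually carries; if [Ignore] resolved to an IgnoreAttribute from a different namespace (pulled in by a using directive), the output will show it. A minimal diagnostic sketch; MyTestClass is a hypothetical stand-in for the real test class:

using System;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// A stand-in test class so the sketch compiles; substitute your real one.
[TestClass]
public class MyTestClass
{
    [TestMethod]
    [TestCategory("Integration")]
    [Ignore]
    public void TestJobIntegrationDev01() { }
}

class AttributeDump
{
    static void Main()
    {
        MethodInfo method = typeof(MyTestClass).GetMethod("TestJobIntegrationDev01");

        // Prints the full name of each attribute type, revealing whether
        // [Ignore] resolved to the MSTest IgnoreAttribute or to one from
        // another framework brought in by a using directive.
        foreach (object attribute in method.GetCustomAttributes(inherit: true))
        {
            Console.WriteLine(attribute.GetType().FullName);
        }
    }
}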

Related

How do I order execution of NUnit test fixtures?

I have an NUnit test project which has two [TestFixture]s.
I want one of them to run before the other as it deals with file creation. They're currently running in the wrong order.
Given that I can't change the [Test]s or group them into a single unit test, is there a way in which I can have control over test fixture running order?
I have tried the [Order(int num)] attribute and have also tried to create a new playlist.
Neither of them works.
C#, .NET Framework, NUnit Testing Framework, Windows.
The documentation for [OrderAttribute] states that ordering for fixtures applies within the containing namespace.
Make sure that your fixtures are within the same namespace & that you've applied [OrderAttribute] at the test fixture level:
namespace SameNamespace
{
    [TestFixture, Order(1)]
    public class MyFirstFixture
    {
        /* ... */
    }

    [TestFixture, Order(2)]
    public class MySecondFixture
    {
        /* ... */
    }
}
Also, it's important to remember that while MyFirstFixture will run before MySecondFixture, the ordering of the tests inside is local to the test fixture.
A test with [Order(1)] in MySecondFixture will run after all the tests in MyFirstFixture have completed.
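A minimal sketch of both levels of ordering at once (the fixture names are from the example above; the method names are illustrative):

using NUnit.Framework;

namespace SameNamespace
{
    [TestFixture, Order(1)]
    public class MyFirstFixture
    {
        [Test, Order(1)]
        public void RunsFirst() { }

        [Test, Order(2)]
        public void RunsSecond() { }
    }

    [TestFixture, Order(2)]
    public class MySecondFixture
    {
        // Order(1) is local to this fixture: this test still starts only
        // after the tests in MyFirstFixture have completed.
        [Test, Order(1)]
        public void RunsAfterFirstFixture() { }
    }
}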
Important note: the documentation also warns that ordering does not guarantee sequential execution:
Tests do not wait for prior tests to finish. If multiple threads are in use, a test may be started while some earlier tests are still being run.
Regardless, tests should follow the F.I.R.S.T. principles of testing, introduced by Robert C. Martin in his book "Clean Code".
The I in F.I.R.S.T. stands for isolated, meaning that tests should not depend on one another, and each test should be responsible for the setup it requires to run correctly.
Try to combine the tests into one if they are testing a single thing, or rewrite your logic so that the code covered by test 1 can be tested in isolation from the code covered by test 2.
This will also have the side effect of cleaner code that adheres to the Single Responsibility Principle (SRP).
Win-win situation.
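For the file-creation scenario in the question, isolation usually means each test creates and cleans up the files it needs instead of relying on another fixture having run first. A minimal sketch (the path and content are illustrative):

using System.IO;
using NUnit.Framework;

[TestFixture]
public class FileConsumingTests
{
    private string _path;

    [SetUp]
    public void CreateInputFile()
    {
        // Each test creates its own input instead of depending on
        // another fixture having created it earlier.
        _path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        File.WriteAllText(_path, "test input");
    }

    [TearDown]
    public void DeleteInputFile()
    {
        File.Delete(_path);
    }

    [Test]
    public void ReadsTheFile()
    {
        Assert.That(File.ReadAllText(_path), Is.EqualTo("test input"));
    }
}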

How to Ignore in MSTests

I'm sorry this seems like such a basic question but I can't find the answer anywhere, including the MS Docs which talk about it but don't give an actual example.
I just want to ignore some tests. Here are some things that don't seem to work:
[TestMethod]
[Ignore]
public void TestStartAcquireEmpty()
{
}

[TestMethod]
[IgnoreAttribute]
public void TestStartAcquireEmpty()
{
}

[Ignore]
[TestMethod]
public void TestStartAcquireEmpty()
{
}

[IgnoreAttribute]
[TestMethod]
public void TestStartAcquireEmpty()
{
}
If I use [Ignore] without [TestMethod] the test does disappear from the test explorer. But what I want is to get the yellow triangle in the test explorer.
To see the yellow triangle for a test, it is necessary to run the test first.
The triangle then appears in Test Explorer when a test method has the [Ignore] attribute or calls Assert.Inconclusive().
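Both cases in one minimal sketch (the names are illustrative):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TriangleDemoTests
{
    // Skipped by the runner; shown with the yellow triangle after a run.
    [TestMethod]
    [Ignore]
    public void IgnoredTest() { }

    // Also shown with the yellow triangle; the body executes up to the
    // Assert.Inconclusive call.
    [TestMethod]
    public void InconclusiveTest()
    {
        Assert.Inconclusive("Not decided yet.");
    }
}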
The MSTest Test Explorer is a unit test runner with the capability to discover test methods automatically (and a few more features).
Use Test Explorer to run unit tests from Visual Studio or third-party unit test projects. You can also use Test Explorer to group tests into categories, filter the test list, and create, save, and run playlists of tests.
https://learn.microsoft.com/de-de/visualstudio/test/run-unit-tests-with-test-explorer?view=vs-2019
Test methods are discovered when they are within a public class that has the [TestClass] attribute and are themselves marked with the [TestMethod] attribute. They need to be public with return type void.
Discovered tests are shown in Test Explorer with a blue exclamation mark icon, which indicates that the test hasn't run yet. (If you deactivate real-time test discovery, you need to compile the test project first to see the test methods in Test Explorer.)
If a test method has the [Ignore] attribute, metadata is added at compile time. That metadata is examined at runtime like most other attributes (see https://learn.microsoft.com/en-us/dotnet/api/system.attribute?view=netcore-3.1#remarks).
Because the attribute is examined at runtime, it's necessary to run the test first to see the outcome in Test Explorer.
If you want to see the outcome of a test immediately, you may try Visual Studio Live Unit Testing:
https://learn.microsoft.com/de-de/visualstudio/test/live-unit-testing?view=vs-2019

How do I distinguish between Unit Tests and Integration Tests inside a test class?

My question is similar to this one: Junit: splitting integration test and Unit tests. However, my question regards NUnit instead of JUnit. What is the best way to distinguish between Unit Tests and Integration Tests inside a test class? I was hoping to be able to do something like this:
[TestFixture]
public class MyFixture
{
    [IntegrationTest]
    [Test]
    public void MyTest1()
    {
    }

    [UnitTest]
    [Test]
    public void MyTest2()
    {
    }
}
Is there a way to do this with NUnit? Is there a better way to do this?
Personally I've found it better to keep them in separate assemblies. You can use a convention, such as name.Integration.Tests and name.Tests (or whatever your team prefers).
Either assemblies or attributes work fine for CI servers like TeamCity. The pain with the attribute approach tends to show up in IDE test runners. I want to be able to quickly run only my unit tests. With separate assemblies, it's easy - select the appropriate test project and run tests.
The Category Attribute might help you do this.
https://github.com/nunit/docs/wiki/Category-Attribute
namespace NUnit.Tests
{
    using System;
    using NUnit.Framework;

    [TestFixture]
    public class SuccessTests
    {
        [Test]
        [Category("Unit")]
        public void VeryLongTest()
        { /* ... */ }
    }
}
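NUnit also supports subclassing CategoryAttribute, which gives you exactly the [IntegrationTest]/[UnitTest] syntax the question asks for; the category name defaults to the attribute class name minus the Attribute suffix. A sketch:

using NUnit.Framework;

// Tests marked [IntegrationTest] fall under the category "IntegrationTest",
// tests marked [UnitTest] under "UnitTest", and both can be filtered like
// any other category.
public class IntegrationTestAttribute : CategoryAttribute { }
public class UnitTestAttribute : CategoryAttribute { }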
This answer shares some details with a few other answers, but I'd like to put the question in a slightly different perspective.
The design of TestFixtures is such that every test gets the same setup. To use TestFixtures correctly, you should divide your tests in such a way that all the tests with the same setup end up in the same test class. This is how almost every xunit framework is designed to be used and you always get better results when you use software as it is designed to be used.
Since Integration and Unit tests are not likely to share the same setup, this would naturally lead to putting them in a separate class. By doing that, you can group all integration tests under a namespace that makes them easy to run independently.
Even better, as another answer suggests, put them in a separate assembly. This works much better with most CI builds, since failure of an integration test may be more easily distinguished from failure of a unit test. Also, use of a separate assembly eliminates all the complication of using categories or special attributes.
Do not have them in the same class, either split them down into folders within your test assembly or split them into two separate test assemblies.
In the long run this will be far easier to manage especially if you use tools like NCrunch.

Unit Tests failing when I Run All Tests but pass when I Debug

I'm using NUnit3 in Visual Studio 2017 and doing TDD. Something really strange is happening since I updated my code to make my latest test pass.
Now, 3 of my other tests are failing when I click Run All Tests.
It is telling me that the actual and expected values in my Assert method are not equal.
However, when I put a breakpoint at the line where the Assert method is and start debugging, the stack trace shows that expected and actual are the same value, and then the test passes.
Am I doing something stupid or could there be a bug in VS2017 or NUnit or something?
This ever happen to anyone else?
[Edit: I should probably add that I have written each test as a separate class]
The failing tests share a resource that affects them all when they are tested together. Recheck the affected tests and their subjects.
You should also look into static fields or properties in the subjects. They tend to cause issues if not used properly when designing your classes.
Some subtle differences might occur. For instance, if a first test changes state in a way that affects the behavior of a second test, then the outcome of that second test may not be the same when it is run alone.
When a breakpoint can't be used, adding logging can help you understand a test failure, as in the sketch below.
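A contrived sketch of the shared-state failure mode, with NUnit's TestContext used for logging (all names are illustrative):

using NUnit.Framework;

[TestFixture]
public class SharedStateTests
{
    // Static state survives between tests in the same run, so results
    // depend on which tests have already executed.
    private static int _counter;

    [Test]
    public void FirstIncrement()
    {
        _counter++;
        TestContext.WriteLine($"counter = {_counter}"); // logging instead of a breakpoint
        Assert.That(_counter, Is.EqualTo(1)); // passes when run alone or first
    }

    [Test]
    public void SecondIncrement()
    {
        _counter++;
        TestContext.WriteLine($"counter = {_counter}");
        Assert.That(_counter, Is.EqualTo(1)); // fails in Run All if FirstIncrement ran first
    }
}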
Anyway, to answer your questions:
This ever happen to anyone else?
Yes
Am I doing something stupid or could there be a bug in VS2017 or NUnit or something?
I bet it's neither: just a slightly more subtle case
I experienced a similar issue in Visual Studio 2017 using MSTest as the testing framework. Assertions in unit tests were failing when the tests were run but would pass when the unit tests were debugged. This was occurring for a handful of unit tests but not all of them. In addition to the assertion failures, many of the unit tests were also failing due to a System.TypeLoadException (Could not load type from assembly error). I ultimately did the following, which solved the problem:
Open the Local.testsettings file in the solution
Go to the "Unit Test" settings
Uncheck the "Use the Load Context for assemblies in the test directory." checkbox
After taking these steps all unit tests started passing when run.
I encountered this phenomenon myself, but found the cause quite easily. More concretely, I tested some matrix calculations, and in my test class I defined data to calculate with as a class variable and performed my calculations with it. My matrix routines, however, modified the original data, so when I used "run tests" on the test class, the first test corrupted the data, and the next test could not succeed.
The sample code below is an attempt to show what I mean.
[TestFixture]
public class MyTestClass
{
    [Test]
    public void TestMethod1()
    {
        MyMatrix m = new MyMatrix();
        // Method1() modifies the data...
        m.Method1(_data);
    }

    [Test]
    public void TestMethod2()
    {
        MyMatrix m = new MyMatrix();
        // here you test with modified data and, in general, cannot expect success
        m.Method2(_data);
    }

    // the data to test with
    private double[] _data = new double[] { 1, 2, 3, 4 };
}
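A common fix for this failure mode is to reinitialize the shared data before each test, so no test can corrupt another's input. A sketch using NUnit's [SetUp] (MyMatrix is the hypothetical class from the example above):

using NUnit.Framework;

[TestFixture]
public class MyTestClass
{
    private double[] _data;

    [SetUp]
    public void ResetData()
    {
        // a fresh copy before every test, even if a matrix routine mutated it
        _data = new double[] { 1, 2, 3, 4 };
    }

    [Test]
    public void TestMethod1()
    {
        MyMatrix m = new MyMatrix();
        m.Method1(_data); // may modify _data; the next test won't notice
    }
}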

asp.net Unit Test: Mark test as incomplete

I'd like to unit test an ASP.NET MVC web application.
We're not using TDD (well, not yet).
After touching a method I'd like to mark the appropriate unit test as incomplete or something so the other team members know they have to complete it.
Is there any possibility to do so?
We're using the built-in unit testing support in Visual Studio 2010.
Thanks in advance.
Michael
Do you want the tests to not actually be run until they've been worked on further? If so, there's an [Ignore] attribute that you can add to each test, as in (for MSTest):
[TestMethod, Ignore]
public void TestThatNeedsToBeCompleted()
{
}
If you're using NUnit, you can add a reason parameter to the Ignore attribute to explain why the test is being ignored. I don't think that's available in MSTest, but don't quote me on that :)
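In NUnit that looks like this (the reason string shows up in the test results; the method name is from the example above):

using NUnit.Framework;

[TestFixture]
public class IncompleteTests
{
    // The reason is reported next to the ignored test in the results.
    [Test]
    [Ignore("Needs to be completed after the requirements change")]
    public void TestThatNeedsToBeCompleted()
    {
    }
}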
You can simply fail the test with an assertion or throw a NotImplementedException, and you will see that these tests are not OK.
Alternatively, use the IgnoreAttribute to enable or disable the test when you need to:
[Ignore]
[TestMethod]
public void TestMethod() { }
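And the fail-loudly variant from the same answer, as a sketch:

[TestMethod]
public void TestThatStillNeedsWork()
{
    // Fails the test until someone completes it, keeping it visible
    // in every run instead of silently skipped.
    throw new System.NotImplementedException("Update this test for the new requirements.");
}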
