I'd like to unit test an ASP.NET MVC web application.
We're not using TDD (well, not yet).
After touching a method I'd like to mark the appropriate unit test as incomplete or something so the other team members know they have to complete it.
Is there any possibility to do so?
We're using the built-in unit testing support in Visual Studio 2010.
Thanks in advance.
Michael
Do you want the tests to not actually be run until they've been worked on further? If so, there's an [Ignore] attribute that you can add to each test, as in (for MSTest):
[TestMethod, Ignore]
public void TestThatNeedsToBeCompleted()
{
}
If you're using NUnit, you can add a reason parameter to the Ignore attribute to explain why the test is being ignored. I don't think that's available in MSTest, but don't quote me on that :)
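For reference, here is a sketch of what the NUnit version might look like (attribute usage per NUnit 2.x; the method name and reason string are just illustrations):
[Test]
[Ignore("Needs to be completed after the refactoring")]
public void TestThatNeedsToBeCompleted()
{
}
Later MSTest versions (MSTest V2) did add an optional message to [Ignore], but that isn't available in the Visual Studio 2010 tooling.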
You can simply fail the test with an assertion or throw a NotImplementedException, and you will see that these tests are not OK.
Alternatively, use the IgnoreAttribute to enable/disable the test as needed.
[Ignore]
[TestMethod]
public void TestMethod() { }
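For instance, a minimal MSTest sketch of the "fail until completed" approach (the method name and message are made up):
[TestMethod]
public void TestThatNeedsWork()
{
    // Fails loudly until a team member completes the test
    throw new NotImplementedException("TODO: cover the changed method");
}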
I'm sorry this seems like such a basic question but I can't find the answer anywhere, including the MS Docs which talk about it but don't give an actual example.
I just want to ignore some tests. Here are some things that don't seem to work:
[TestMethod]
[Ignore]
public void TestStartAcquireEmpty()
{
}
[TestMethod]
[IgnoreAttribute]
public void TestStartAcquireEmpty()
{
}
[Ignore]
[TestMethod]
public void TestStartAcquireEmpty()
{
}
[IgnoreAttribute]
[TestMethod]
public void TestStartAcquireEmpty()
{
}
If I use [Ignore] without [TestMethod] the test does disappear from the test explorer. But what I want is to get the yellow triangle in the test explorer.
To see the yellow triangle for a test it is necessary to run the test first.
The triangle then appears in Test Explorer when a test method has the [Ignore] attribute or calls Assert.Inconclusive().
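For example, a minimal MSTest sketch using Assert.Inconclusive (the method name and message are illustrative):
[TestMethod]
public void TestThatIsNotFinishedYet()
{
    // Reported as inconclusive (yellow triangle) rather than passed or failed
    Assert.Inconclusive("Complete this test after the API change.");
}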
The MSTest Test Explorer is a unit test runner with the capability to discover test methods automatically (and a few more features).
Use Test Explorer to run unit tests from Visual Studio or third-party unit test projects. You can also use Test Explorer to group tests into categories, filter the test list, and create, save, and run playlists of tests.
https://learn.microsoft.com/de-de/visualstudio/test/run-unit-tests-with-test-explorer?view=vs-2019
Test methods are discovered when they are inside a public class marked with the [TestClass] attribute and have the [TestMethod] attribute. They must be public and have return type void.
Discovered tests are shown in Test Explorer with a blue exclamation-mark icon, indicating that the test has not run yet. (If you deactivate real-time test discovery, you need to compile the test project first to see the test methods in Test Explorer.)
If a test method has the [Ignore] attribute, that metadata is added at compile time and examined at runtime, like most other attributes (see https://learn.microsoft.com/en-us/dotnet/api/system.attribute?view=netcore-3.1#remarks).
Because the attribute is only examined at runtime, the test has to be run first before its outcome shows up in Test Explorer.
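To illustrate the mechanism, here is a small self-contained sketch (SkipAttribute is a made-up stand-in for MSTest's IgnoreAttribute) showing how a runner discovers such metadata via reflection at runtime:
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class SkipAttribute : Attribute { }

public class Demo
{
    [Skip]
    public void PendingTest() { }

    public static void Main()
    {
        // A test runner finds the attribute the same way: reflection at runtime
        MethodInfo method = typeof(Demo).GetMethod(nameof(PendingTest));
        Console.WriteLine(method.GetCustomAttribute<SkipAttribute>() != null); // prints "True"
    }
}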
If you want to see the outcome of tests immediately, you may try Visual Studio Live Unit Testing:
https://learn.microsoft.com/de-de/visualstudio/test/live-unit-testing?view=vs-2019
I have a test suite with a few tests that are failing because the requirements have changed out from under them, necessitating code changes that break the tests. It's not immediately obvious how to fix the tests, so for the moment I want to simply disable them.
I added the Microsoft.VisualStudio.TestTools.UnitTesting.IgnoreAttribute attribute to these tests, but they're still being run by Test Explorer. I've considered the possibility that the test runner we're using would use its own mechanism, but that seems unlikely, as it responds to the TestMethodAttribute and TestCategoryAttribute attributes from the same namespace. One of the tests looks like this:
[TestMethod]
[TestCategory("Integration")]
[Ignore]
public void TestJobIntegrationDev01()
{
//test code goes here
}
How do I determine why Ignore is not working in this case?
My question is similar to this one: Junit: splitting integration test and Unit tests. However, my question is about NUnit rather than JUnit. What is the best way to distinguish between unit tests and integration tests inside a test class? I was hoping to be able to do something like this:
[TestFixture]
public class MyFixture
{
[IntegrationTest]
[Test]
public void MyTest1()
{
}
[UnitTest]
[Test]
public void MyTest2()
{
}
}
Is there a way to do this with NUnit? Is there a better way to do this?
Personally I've found it better to keep them in separate assemblies. You can use a convention, such as name.Integration.Tests and name.Tests (or whatever your team prefers).
Either assemblies or attributes work fine for CI servers like TeamCity. The pain with the attribute approach tends to show up in IDE test runners. I want to be able to quickly run only my unit tests. With separate assemblies, it's easy - select the appropriate test project and run tests.
The Category Attribute might help you do this.
https://github.com/nunit/docs/wiki/Category-Attribute
namespace NUnit.Tests
{
using System;
using NUnit.Framework;
[TestFixture]
public class SuccessTests
{
[Test]
[Category("Unit")]
public void VeryLongTest()
{ /* ... */ }
}
}
This answer shares some details with a few other answers, but I'd like to put the question in a slightly different perspective.
The design of TestFixtures is such that every test gets the same setup. To use TestFixtures correctly, you should divide your tests in such a way that all the tests with the same setup end up in the same test class. This is how almost every xunit framework is designed to be used and you always get better results when you use software as it is designed to be used.
Since Integration and Unit tests are not likely to share the same setup, this would naturally lead to putting them in a separate class. By doing that, you can group all integration tests under a namespace that makes them easy to run independently.
Even better, as another answer suggests, put them in a separate assembly. This works much better with most CI builds, since failure of an integration test can more easily be distinguished from failure of a unit test. Also, using a separate assembly eliminates all the complication of using categories or special attributes.
Do not have them in the same class, either split them down into folders within your test assembly or split them into two separate test assemblies.
In the long run this will be far easier to manage especially if you use tools like NCrunch.
I'm currently building out my automation framework using NUnit. I've got everything working just fine, and the last enhancement I'd like to make is to be able to map my automated test scripts to test cases in my testing software.
I'm using TestRail for all my testcases.
My ideal situation is to be able to decorate each test with the corresponding test case ID in TestRail, so that when it comes time to report the test result to TestRail, I can just use the case ID. Currently I'm doing this by matching test name/script name.
Example -
[Test]
[TestCaseId("001")]
public void NavigateToSite()
{
LoginPage login = new LoginPage(Driver);
login.NavigateToLogInPage();
login.AssertLoginPageLoaded();
}
And then in my teardown method, it would be something like -
[TearDown]
public static void TestTearDown(IWebDriver Driver)
{
var testcaseId = TestContext.CurrentContext.TestCaseid;
var result = TestContext.CurrentContext.Result.Outcome;
//Static method to report to Testrail using API
Report.ReportTestResult(testcaseId, result);
}
I've just made up the testcaseid attribute, but this is what I'm looking for.
[TestCaseId("001")]
I may have missed this if it already exists, or how do I go about possibly extending NUnit to do this?
You can use PropertyAttribute supplied by NUnit.
Example:
[Property("TestCaseId", "001")]
[Test]
public void NavigateToSite()
{
...
}
[TearDown]
public void TearDown()
{
var testCaseId = TestContext.CurrentContext.Test.Properties.Get("TestCaseId");
}
In addition, you can create a custom property attribute - see the NUnit link.
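A sketch of such a custom attribute, assuming NUnit 3's PropertyAttribute base class (its protected constructor uses the attribute's class name, minus the "Attribute" suffix, as the property key):
using NUnit.Framework;

public class TestCaseIdAttribute : PropertyAttribute
{
    // Stored under the key "TestCaseId", readable later via
    // TestContext.CurrentContext.Test.Properties.Get("TestCaseId")
    public TestCaseIdAttribute(string id) : base(id) { }
}
With that in place, a test can be decorated [Test, TestCaseId("001")], much like the hypothetical attribute in the question.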
For many years I recommended that people not do this: mix test management code into the tests themselves. It's an obvious violation of the single responsibility principle and it creates difficulties in maintaining the tests.
In addition, there's the problem that the result presented in TearDown may not be final. For example, if you have used [MaxTime] on the test and it exceeds the time specified, your successful test will change to a failure. Several other built-in attributes work this way and of course there is always the possibility of a user-created attribute. The purpose of TearDown is to clean up after your code, not as a springboard for creating a reporting or test management system.
That said, with older versions of NUnit, folks got into the habit of doing this. This was partly because NUnit addins (the approach we designed) were fairly complicated to write. There were also fewer problems because NUnit V2 was significantly less extensible on the test side of things. With 3.0, we provided a means for creating test management functions such as this as extensions to the NUnit engine, and I suggest you consider using that facility instead of mixing them in with the test code.
The general approach would be to create a property, as suggested in Sulo's answer, but to replace your TearDown code with an EventListener extension that reports the result to TestRail. The EventListener has access to all the result information - not just the limited fields available in TestContext - in XML format. You can readily extract whatever needs to go to TestRail.
Details of writing TestEngine extensions are found here: https://github.com/nunit/docs/wiki/Writing-Engine-Extensions
Note that there are some outstanding issues if you want to use extensions under the Visual Studio adapter, which we are working on. Right now, you'll get the best experience using the console runner.
I am working on an MVC project and was wondering whether to use the Basic Unit Test or the Unit Test template. I read articles/explanations about both but can't see much difference between the two. What are the main differences, and which one is preferable for a large-scale app with a DB backend?
The difference between Visual Studio's Basic Unit Test item template and Unit Test item template is that the latter includes support for ClassInitialize, ClassCleanup, TestInitialize and TestCleanup routines, allowing you to execute code before/after the test fixture and before/after each unit test. If you don't need such functionality in your unit tests, you could go with the basic template, which generates the following file:
[TestClass]
public class UnitTest2
{
[TestMethod]
public void TestMethod1()
{
}
}
Of course, you can always add the corresponding routines to a basic unit test later if you end up needing this functionality.
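For comparison, a sketch of what the fuller Unit Test template's lifecycle routines look like (the method names here are illustrative):
[TestClass]
public class UnitTest1
{
    // Runs once, before any test in the class; must be static and take a TestContext
    [ClassInitialize]
    public static void ClassInit(TestContext context) { }

    // Runs before each test
    [TestInitialize]
    public void TestInit() { }

    [TestMethod]
    public void TestMethod1() { }

    // Runs after each test
    [TestCleanup]
    public void CleanupAfterTest() { }

    // Runs once, after all tests in the class have finished
    [ClassCleanup]
    public static void ClassCleanupRoutine() { }
}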