xUnit - Mixing theory data mechanisms, input and expected data (C#)

When creating Theory test cases with xUnit, I would like to be able to include both the parameters and the expected outcome for each case. I have used the InlineData attribute, but for heavy configuration loading this is less than optimal and does not permit reuse.
[InlineData(1,2,3,4,5,6,7,...)]
As such, I have moved the test configurations out to a separate class and now load them with the MemberData attribute and its MemberType property.
[Theory]
[MemberData(nameof(DataClass.Data), MemberType = typeof(DataClass))]
public void TestValidConfig(Configuration config)
{
    ...
}
However, this does not allow me to specify the expected outcome as I could with a basic attribute, i.e.
[InlineData("Input1", "Input2", "Input3", "ExpectedResult")]
I don't want to include the expected outcome with the configuration data as this will be reused in multiple tests.
Has anyone got a solution to this challenge?
So the underlying challenge is having complex test data that can be used in multiple places, while keeping the expected outcome separate. With a calculator (bad example), you could have lists of numbers as the test data; these could then be passed into an add, multiply, or subtract test, and this is where I would want to separate the input data from the expected output data.
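To illustrate what I'm after (an invented sketch, not code from my project): the shared inputs would live in one class, and each test would pair them with its own expected results.

using System.Collections.Generic;
using System.Linq;
using Xunit;

// Shared input data, reusable across many tests.
public static class NumberLists
{
    public static readonly int[][] Inputs =
    {
        new[] { 1, 2, 3 },
        new[] { 10, 20 },
    };
}

public class CalculatorTests
{
    // Each theory zips the shared inputs with its own expected values.
    public static IEnumerable<object[]> AddCases() =>
        NumberLists.Inputs.Zip(new[] { 6, 30 }, (input, expected) => new object[] { input, expected });

    public static IEnumerable<object[]> MultiplyCases() =>
        NumberLists.Inputs.Zip(new[] { 6, 200 }, (input, expected) => new object[] { input, expected });

    [Theory]
    [MemberData(nameof(AddCases))]
    public void Add_ReturnsSum(int[] numbers, int expected) =>
        Assert.Equal(expected, numbers.Sum());

    [Theory]
    [MemberData(nameof(MultiplyCases))]
    public void Multiply_ReturnsProduct(int[] numbers, int expected) =>
        Assert.Equal(expected, numbers.Aggregate(1, (a, b) => a * b));
}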

Here's a suggestion:
Create a class to generate test data:
internal static class TestData
{
    public static IList<T> Get<T>(int count = 10)
    {
        // I'm using NBuilder here to generate test data quickly.
        // Use your own logic to create your test data.
        return Builder<T>.CreateListOfSize(count).Build();
    }
}
Now all your test classes can leverage this to get the same set of test data. So, in your data class, you would do something along the lines of:
public class DataClass
{
    public static IEnumerable<object[]> Data()
    {
        return new List<object[]>
        {
            // Note: Data() is static, so it cannot use "this". ExpectedResult()
            // stands for a static helper of your own, and TestData.Get needs an
            // explicit type argument since T cannot be inferred.
            new object[] { TestData.Get<Configuration>(), ExpectedResult() }
        };
    }
}
Now you can follow through with your original approach:
[Theory]
[MemberData(nameof(DataClass.Data), MemberType = typeof(DataClass))]
public void TestValidConfig(Data input, Configuration expected)
{
    ...
}
If your tests don't mutate the input data, you can collect it into a fixture and inject it through the test class constructor. This speeds up the tests, since you don't have to generate the input data per test. Check out shared context for more information.
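A minimal sketch of that fixture approach, reusing the TestData helper from above (Configuration stands in for whatever type you actually load):

using System.Collections.Generic;
using Xunit;

// Built once per test class; xUnit injects it into each test's constructor.
public class ConfigDataFixture
{
    public IList<Configuration> Configs { get; } = TestData.Get<Configuration>(count: 10);
}

public class ValidConfigTests : IClassFixture<ConfigDataFixture>
{
    private readonly ConfigDataFixture _fixture;

    public ValidConfigTests(ConfigDataFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public void AllConfigsAreValid()
    {
        // The data was generated once for the whole class, not per test.
        Assert.All(_fixture.Configs, config => Assert.NotNull(config));
    }
}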

Related

AutoFixture Tests Stop Showing Up in Test Runner

AutoFixture (or my misuse of it) seems to have caused the xUnit test runner to stop showing individual tests in the tree view for each instance of inline data. Usually, if I use [Theory] and [InlineData], I get individual tests showing up in the test runner tree view, one for each [InlineData]. That no longer happens after the following:
I created a custom AutoMoqDataAttribute class to always use the AutoMoqCustomization and disable recursion:
public sealed class AutoMoqDataAttribute : AutoDataAttribute
{
    /// <summary>
    /// Automatically adds the "Auto Moq" customization to fixtures that use the
    /// AutoDataAttribute attribute.
    /// </summary>
    public AutoMoqDataAttribute() : base(() =>
    {
        // Create a fixture we can customize.
        var fixture = new Fixture();
        // Remove the recursion checks as we have objects that require recursion.
        var throwingRecursionBehavior = fixture.Behaviors.FirstOrDefault(behavior => behavior.GetType() == typeof(ThrowingRecursionBehavior));
        if (throwingRecursionBehavior != null) fixture.Behaviors.Remove(throwingRecursionBehavior);
        // Have to also add the OmitOnRecursionBehavior to avoid a stack overflow,
        // presumably because, with the above removed and without this, AutoFixture
        // follows every object and just keeps trying to auto-fill its data. By
        // following the circular references it just goes on forever.
        fixture.Behaviors.Add(new OmitOnRecursionBehavior());
        // Use the "Auto Moq" customization so AutoFixture automatically mocks objects for us.
        fixture.Customize(new AutoMoqCustomization());
        return fixture;
    }) { }
}
Then, I want this to work when I use Theory/InlineData, so I also create this:
public class InlineAutoMoqDataAttribute : InlineAutoDataAttribute
{
    public InlineAutoMoqDataAttribute(params object[] values) : base(new AutoMoqDataAttribute(), values) { }
}
Finally, I try to run some tests:
public class TestClass
{
    public bool Val { get; set; }
}

[Theory]
[InlineData("blah 1", true)]
[InlineData("blah 2", true)]
public void Test1(string dummy, bool expected)
{
    Assert.Equal(expected, true);
}
The above works fine... but then I run this:
[Theory]
[InlineAutoMoqData("blah 1", false)]
[InlineAutoMoqData("blah 2", false)]
public void Test2(string dummy, bool expected, TestClass sut)
{
    Assert.Equal(expected, sut.Val);
}
The tests run, but I don't see two tests (one for each InlineAutoMoqData) in the test runner as I usually do. I can see them if I click the test and look in the "Test Detail Summary" panel, which would almost be good enough, except I can't re-run failed tests (meaning tests for individual InlineData cases) from there. I can from the tree view, but again, they're not showing up there, which is a big problem: it runs all the InlineAutoMoqData tests when I want to isolate and run just the one that is failing.
If I add this attribute:
[DataDiscoverer("Xunit.Sdk.InlineDataDiscoverer", "xunit.core")]
to my InlineAutoMoqDataAttribute, the individual tests, one for each InlineAutoMoqData, show up in Test Explorer, but then I get this error:
"InvalidOperationException : The test method expected 2 parameter values, but 1 parameter value was provided."
meaning AutoFixture auto mock values don't work.
Is there any way to have my custom InlineAutoMoqData attribute and still have tests for each individual InlineAutoMoqData showing up in Test Explorer? Is this a bug I should report to AutoFixture, or am I doing something wrong?
This is a known issue in AutoFixture, though I'm not sure there is a formal issue open in the GitHub repository. If there isn't one, you are welcome to create an issue and track progress on it there.

Hierarchical "OneTimeSetUp" methods

OK, I've got some NUnit tests I'm writing to test an API. Any time I need to run these tests, I first need to log in to the API to obtain a token. To start with, that's how I've written my OneTimeSetUp.
So, OneTimeSetUp is called, I log in, a shared field stores the token, and each test is called and tests a different endpoint on the API.
Now the problem. We've decided that we want individual tests for individual fields on the response, so that we can see exactly what is (and isn't) failing when something is wrong. So, we split each endpoint out into its own test.
Now, OneTimeSetUp is called, it logs in, calls the endpoint, stores the result, and all the tests fire, each testing its little bit.
The problem is, logging in takes time, and there is no logical reason why all the separate tests couldn't just use the same login details. Is there any way of further subdividing tests / adding extra levels of test? It would be great if we could get a test result that looks like this:
ApiTests <--- shared sign-in at this level
- Endpoint 1 <--- call the endpoint at this level
- Field 1 \
- Field 2 --- individual test results here
- Field 3 /
- Endpoint 2 <--- call the endpoint at this level
- Field a \
- Field b --- individual test results here
- Field c /
You can group your test classes into the same namespace and then add an additional class that is marked with the SetUpFixture attribute. This will run the initialization code only once per namespace. (Not to be confused with the TestFixtureSetUp attribute, which has been marked obsolete since NUnit v3. Thanks, Charlie, for your comment; I initially mixed them up.)
https://github.com/nunit/docs/wiki/SetUpFixture-Attribute
Code sample (as always, you are free to put each class into a separate code file):
using System.Diagnostics;
using NUnit.Framework;

namespace Test
{
    [SetUpFixture]
    public class SharedActions
    {
        [OneTimeSetUp]
        public void SharedSignIn()
        {
            Debug.WriteLine("Signed in.");
        }

        [OneTimeTearDown]
        public void SharedSignOut()
        {
            Debug.WriteLine("Signed out.");
        }
    }

    [TestFixture]
    public class FirstEndpointTests
    {
        [Test]
        public void FirstEndpointTest()
        {
            Debug.WriteLine("Test for Endpoint A");
        }
    }

    [TestFixture]
    public class SecondEndpointTests
    {
        [Test]
        public void SecondEndpointTest()
        {
            Debug.WriteLine("Test for Endpoint B");
        }
    }
}
When you "debug all" tests, the following output will appear in the debug window:
Signed in.
Test for Endpoint A
Test for Endpoint B
Signed out.
Here is one possible way of achieving this.
If you have a common base class (as it sounds like you do from your description), you can create a protected Lazy to get your token, as per the example below:
public class ApiTestsBase
{
    protected static Lazy<string> TokenLazy = new Lazy<string>(() =>
    {
        // Log in and get your API token
        Console.WriteLine("Logging into API to get token. You should only see this message on the first test that runs");
        return "DEADBEEF";
    });
}

[TestFixture]
public class EndpointATests : ApiTestsBase
{
    private string GetResultFromEndPoint()
    {
        // Call endpoint with token from TokenLazy.Value
        Console.WriteLine($"Calling EndpointA with token {TokenLazy.Value}");
        return "PayloadA";
    }

    [Test]
    public void Test1()
    {
        var payload = this.GetResultFromEndPoint();
        // Assert things about payload
    }
}

[TestFixture]
public class EndpointBTests : ApiTestsBase
{
    private string GetResultFromEndPoint()
    {
        // Call endpoint with token from TokenLazy.Value
        Console.WriteLine($"Calling EndpointB with token {TokenLazy.Value}");
        return "PayloadB";
    }

    [Test]
    public void Test1()
    {
        var payload = this.GetResultFromEndPoint();
        // Assert things about payload
    }
}
Here I am using string types, but you can use whatever request, response, and token types are relevant to your situation. I suspect that, with a bit of creativity, you could also move the GetResultFromEndPoint call to the base class and use abstract methods or properties to fill in the endpoint-specific detail, but you have not shared enough code for me to try that.
The magic is in the static keyword, which means you will only have one instance per app domain; the Lazy simply defers creation until its first reference. It gets a little more complex if your test runs take a long time, because you will need to deal with token renewal, but that can still be achieved in a similar way using a singleton class that re-authenticates when the token age exceeds some threshold. A singleton object can also be used in place of the static in the above example if you do not have a common base class for your fixtures.
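A rough sketch of that singleton variant, with a simple age check (the 50-minute lifetime and the hard-coded token are placeholders, not a real implementation):

using System;

// Hypothetical singleton for fixtures without a common base class;
// re-authenticates when the cached token is older than a chosen threshold.
public sealed class TokenProvider
{
    private static readonly Lazy<TokenProvider> Instance =
        new Lazy<TokenProvider>(() => new TokenProvider());

    public static TokenProvider Current => Instance.Value;

    private readonly object _gate = new object();
    private string _token;
    private DateTime _issuedUtc;

    private TokenProvider() { }

    public string GetToken()
    {
        lock (_gate) // tests may run in parallel
        {
            // The 50-minute lifetime is a placeholder; use your token's real TTL.
            if (_token == null || DateTime.UtcNow - _issuedUtc > TimeSpan.FromMinutes(50))
            {
                _token = "DEADBEEF"; // replace with the real login call
                _issuedUtc = DateTime.UtcNow;
            }
            return _token;
        }
    }
}

Any fixture can then call TokenProvider.Current.GetToken() without sharing a base class.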

Unit Testing a controller that uses windows authentication

Please see the updates below, as I now have this set up for dependency injection and the use of the Moq mocking framework. I'd still like to split up my repository so it doesn't directly depend on pulling the Windows user within the same function.
I have a Web API in an intranet site that populates a dropdown. The query behind the dropdown takes the Windows username as a parameter to return the list.
I realize I don't have all of this set up correctly because I'm not able to unit test it. I need to know how this "should" be set up to allow unit testing and then what the unit tests should look like.
Additional info: this is an ASP.NET MVC 5 application.
INTERFACE
public interface IExampleRepository
{
    HttpResponseMessage DropDownList();
}
REPOSITORY
public class ExampleRepository : IExampleRepository
{
    //Accessing the data through Entity Framework
    private MyDatabaseEntities db = new MyDatabaseEntities();

    public HttpResponseMessage DropDownList()
    {
        //Get the current windows user
        string windowsUser = HttpContext.Current.User.Identity.Name;
        //Pass the parameter to a procedure running a select query
        var sourceQuery = (from p in db.spDropDownList(windowsUser)
                           select p).ToList();
        string result = JsonConvert.SerializeObject(sourceQuery);
        var response = new HttpResponseMessage();
        response.Content = new StringContent(result, System.Text.Encoding.Unicode, "application/json");
        return response;
    }
}
CONTROLLER
public class ExampleController : ApiController
{
    private IExampleRepository _exampleRepository;

    public ExampleController()
    {
        _exampleRepository = new ExampleRepository();
    }

    [HttpGet]
    public HttpResponseMessage DropDownList()
    {
        try
        {
            return _exampleRepository.DropDownList();
        }
        catch
        {
            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
        }
    }
}
UPDATE 1
I have updated my Controller based on BartoszKP's suggestion to show dependency injection.
UPDATED CONTROLLER
public class ExampleController : ApiController
{
    private IExampleRepository _exampleRepository;

    //Dependency Injection
    public ExampleController(IExampleRepository exampleRepository)
    {
        _exampleRepository = exampleRepository;
    }

    [HttpGet]
    public HttpResponseMessage DropDownList()
    {
        try
        {
            return _exampleRepository.DropDownList();
        }
        catch
        {
            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
        }
    }
}
UPDATE 2
I have decided to use Moq as a mocking framework for unit testing. I'm able to test something simple, like the following, which exercises a method that doesn't take any parameters and doesn't include the windowsUser part.
[TestMethod]
public void ExampleOfAnotherTest()
{
    //Arrange
    var mockRepository = new Mock<IExampleRepository>();
    mockRepository
        .Setup(x => x.DropDownList())
        .Returns(new HttpResponseMessage(HttpStatusCode.OK));
    ExampleController controller = new ExampleController(mockRepository.Object);
    controller.Request = new HttpRequestMessage();
    controller.Configuration = new HttpConfiguration();

    //Act
    var response = controller.DropDownList();

    //Assert
    Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
}
I need help testing the DropDownList method (the one that does include code to get the windowsUser). I need advice on how to break this method apart. I know both parts shouldn't be in the same method, but I don't know how to arrange splitting out the windowsUser variable. I realize it really should be brought in as a parameter, but I can't figure out how.
You usually do not unit-test repositories (integration tests verify if they really persist the data in the database correctly) - see for example this article on MSDN:
Typically, it is difficult to unit test the repositories themselves, so it is often better to write integration tests for them.
So, let's focus on testing only the controller.
Change the controller to take IExampleRepository in its constructor as a parameter:
private IExampleRepository _exampleRepository;

public ExampleController(IExampleRepository exampleRepository)
{
    _exampleRepository = exampleRepository;
}
Then, in your unit tests, use one of the mocking frameworks (such as Rhino Mocks) to create a stub for the sole purpose of testing the controller.
[TestFixture]
public class ExampleTestFixture
{
    private IExampleRepository CreateRepositoryStub(/* fake data */)
    {
        var exampleRepositoryStub = ...; // create the stub with a mocking framework
        // make the stub return the given fake data
        return exampleRepositoryStub;
    }

    [Test]
    public void GivenX_WhenDropDownListIsRequested_ReturnsY()
    {
        // Arrange
        var exampleRepositoryStub = CreateRepositoryStub(X);
        var exampleController = new ExampleController(exampleRepositoryStub);

        // Act
        var result = exampleController.DropDownList();

        // Assert
        Assert.That(result, Is.EqualTo(Y));
    }
}
This is just a quick-and-dirty example - the CreateRepositoryStub method should of course be extracted to some test utility class. Perhaps it should return a fluent interface to make the test's Arrange section more readable about what is given. Something more like:
// Arrange
var exampleController
    = GivenAController()
        .WithFakeData(X);
(with better names that reflect your business logic of course).
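One hypothetical shape for that fluent helper, sketched here with Moq (which Update 2 of the question already uses); every name in it is invented for illustration:

using System.Net.Http;
using Moq;

// Hypothetical fluent builder for the Arrange section.
public class ControllerBuilder
{
    private readonly Mock<IExampleRepository> _repository = new Mock<IExampleRepository>();

    public static ControllerBuilder GivenAController()
    {
        return new ControllerBuilder();
    }

    public ControllerBuilder WithFakeData(HttpResponseMessage fakeResponse)
    {
        // Stub the repository to return the supplied fake response.
        _repository.Setup(r => r.DropDownList()).Returns(fakeResponse);
        return this;
    }

    public ExampleController Build()
    {
        return new ExampleController(_repository.Object);
    }
}

A test's Arrange section would then read ControllerBuilder.GivenAController().WithFakeData(fakeResponse).Build().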
In the case of ASP.NET MVC / Web API, the framework needs to know how to construct the controller. Fortunately, ASP.NET supports the dependency injection paradigm, and a parameterless constructor is not required when using a container such as Unity.
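For example, a composition root for Web API (the question uses ApiController) might look roughly like this; the sketch assumes the Unity container with the Unity.WebApi package, which supplies UnityDependencyResolver:

using System.Web.Http;
using Microsoft.Practices.Unity;
using Unity.WebApi;

// Hypothetical registration, called once at startup (e.g. from Global.asax).
public static class UnityConfig
{
    public static void Register(HttpConfiguration config)
    {
        var container = new UnityContainer();
        container.RegisterType<IExampleRepository, ExampleRepository>();
        config.DependencyResolver = new UnityDependencyResolver(container);
    }
}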
Also, note the comment by Richard Szalay:
You shouldn't use HttpContext.Current in WebApi - you can use base.User which comes from HttpRequestBase.User and is mockable. If you really want to continue using HttpContext.Current, take a look at Mock HttpContext.Current in Test Init Method
One trick that I find very useful when trying to make old code testable, when that code accesses some global static or other messy stuff that I can't easily parameterize, is to wrap access to the resource in a virtual method call. You can then subclass your system under test and use the subclass in the unit test instead.
Example, using a hard dependency on the System.Random class:
public class Untestable
{
    public int CalculateSomethingRandom()
    {
        return new Random().Next() + new Random().Next();
    }
}
Now we extract the new Random().Next() calls into a virtual method:
public class Untestable
{
    public int CalculateSomethingRandom()
    {
        return GetRandomNumber() + GetRandomNumber();
    }

    protected virtual int GetRandomNumber()
    {
        return new Random().Next();
    }
}
Now we can create a testable version of the class:
public class Testable : Untestable
{
    protected override int GetRandomNumber()
    {
        // You can return whatever you want for your test here,
        // it depends on what type of behaviour you are faking.
        // You can easily inject values here via a constructor or
        // some public field in the subclass. You can also add
        // counters for times the method was called, save the args, etc.
        return 4;
    }
}
The drawback of this method is that you can't (easily) use most isolation frameworks to stub protected methods, and for good reason: protected methods are sort of internal and shouldn't be all that important to your unit tests. It's still a really handy way of getting things covered with tests so you can refactor them, instead of having to spend ten hours without tests trying to make major architectural changes to your code before you get to safety.
Just another tool to keep in mind, I find it comes in handy from time to time!
EDIT: More concretely, in your case you might want to create a protected virtual string GetLoggedInUserName() method. Technically speaking, this keeps the actual call to HttpContext.Current.User.Identity.Name untested, but you will have isolated it in the smallest possible method, so you can test that the code calls the correct method the right number of times with the correct args; you then simply have to trust that HttpContext.Current.User.Identity.Name contains what you want. This can later be refactored into some sort of user manager or logged-in-user provider; you'll see what suits best as you go along.
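Applied to the repository from the question, that seam might look something like the sketch below (the hard-coded JSON stands in for the real query, and TestableExampleRepository is a hypothetical test double):

using System.Net.Http;
using System.Text;
using System.Web;

public class ExampleRepository : IExampleRepository
{
    public HttpResponseMessage DropDownList()
    {
        string windowsUser = GetLoggedInUserName();
        string result = "[]"; // query with windowsUser and serialize, as in the original
        var response = new HttpResponseMessage
        {
            Content = new StringContent(result, Encoding.Unicode, "application/json")
        };
        return response;
    }

    // The seam: the only code left untested is this one-line override point.
    protected virtual string GetLoggedInUserName()
    {
        return HttpContext.Current.User.Identity.Name;
    }
}

// Subclass used by unit tests instead of the real repository.
public class TestableExampleRepository : ExampleRepository
{
    protected override string GetLoggedInUserName() => @"DOMAIN\testuser";
}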

How do I mock a string response in a Unit Test?

Here's what I have in my test so far:
[TestFixture]
public class IndividualMovieTests
{
    [Test]
    public void WebClient_Should_Download_From_Correct_Endpoint()
    {
        const string correctEndpoint = "http://api.rottentomatoes.com/api/public/v1.0/movies/{movie-id}.json?apikey={your-api-key}";
        ApiEndpoints.Endpoints["IndividualMovie"].ShouldEqual(correctEndpoint);
    }

    [Test]
    public void Movie_Information_Is_Loaded_Correctly()
    {
        Tomato tomato = new Tomato("t4qpkcsek5h6vgbsy8k4etxdd");
        var movie = tomato.FindMovieById(9818);
        movie.Title.ShouldEqual("Gone With The Wind");
    }
}
My FindMovieById method goes online and fetches a JSON result, which means it sort of breaks the principle behind unit testing. I have a feeling I have to mock this string response, but I don't really know how to approach it.
How would you approach this particular unit testing?
In your second [Test], I would suggest not focusing on a specific return value from your FindMovieById method, unless you truly want to test that the given inputs should always result in "Gone With The Wind". The test you have is a very specific case in which a specific input number results in a specific output, which may or may not hold when running against your actual database. Also, since you're not going to be testing against the actual web service, this kind of validation is basically self-serving - you're not really testing anything. Instead, focus on testing how the Tomato class handles validation of the argument (if at all), and on the fact that the Tomato class actually invokes the service to get the return value. Rather than testing specific inputs and outputs, test the behavior of the class, so that if someone changes it in the future, the test will break and alert them that they may have broken working functionality.
For example, if you have input validation, you could test that your Tomato class throws an exception when an invalid input is detected.
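Such a test might look like the following sketch; it assumes Tomato rejects non-positive ids, and the exception type is only an assumption:

using System;
using NUnit.Framework;

[TestFixture]
public class TomatoValidationTests
{
    [Test]
    public void FindMovieById_WithNonPositiveId_Throws()
    {
        var tomato = new Tomato("t4qpkcsek5h6vgbsy8k4etxdd");

        // Behavior, not data: an invalid id should be rejected
        // before any web call is attempted.
        Assert.Throws<ArgumentOutOfRangeException>(() => tomato.FindMovieById(0));
    }
}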
Assuming that your Tomato class has some sort of web client functionality for requesting and retrieving the results, you could plug in some stub implementations of the actual web code, or mocked implementations to ensure that Tomato is in fact calling the appropriate web client code to request and process the response.
First off, you might not have to mock to test your code. For example, if you are just testing that you can deserialize JSON into a Movie object, you could do that by testing a public or internal ParseJSON method on the Movie class.
However, since you are asking about mocking, here's a quick overview of one way you could write this test using a mock. As written, Movie_Information_Is_Loaded_Correctly() looks like an integration test. To turn it into a unit test, you could mock out the web request the Tomato class makes. One way to do that would be to create an ITomatoWebRequester interface and pass it to the Tomato class as a constructor parameter. You could then mock the ITomatoWebRequester to return the web response you are expecting, and test that the Tomato class properly parses that response.
The code could look something like this:
public class Tomato
{
    private readonly ITomatoWebRequester _webRequester;

    public Tomato(string uniqueID, ITomatoWebRequester webRequester)
    {
        _webRequester = webRequester;
    }

    public Movie FindMovieById(int movieID)
    {
        var responseJSON = _webRequester.GetMovieJSONByID(movieID);
        //The next line is what we want to unit test
        return Movie.Parse(responseJSON);
    }
}

public interface ITomatoWebRequester
{
    string GetMovieJSONByID(int movieID);
}
To test, you could use a mocking framework like Moq to create an ITomatoWebRequester that returns a result you expect. With Moq, the following code should work:
[Test]
public void Movie_Information_Is_Loaded_Correctly()
{
    var mockWebRequester = new Moq.Mock<ITomatoWebRequester>();
    var myJson = "enter json response you want to use to test with here";
    mockWebRequester.Setup(a => a.GetMovieJSONByID(It.IsAny<int>()))
        .Returns(myJson);
    Tomato tomato = new Tomato("t4qpkcsek5h6vgbsy8k4etxdd",
        mockWebRequester.Object);
    var movie = tomato.FindMovieById(9818);
    movie.Title.ShouldEqual("Gone With The Wind");
}
The cool thing about the mock in this case is that you don't have to worry about all the hoops the actual ITomatoWebRequester has to jump through to return the JSON it is supposed to return; you can just create a mock right in your test that returns exactly what you want. Hopefully this answer serves as a decent intro to mocking. I would definitely suggest reading up on mocking frameworks to get a better feel for how the process works.
Use the Rhino.Mocks library and set up expectations wherever appropriate. The following is a sample mocking your movie object.
using System;
using NUnit.Framework;
using Rhino.Mocks;

namespace ConsoleApplication1
{
    public class Tomato
    {
        public Tomato(string apiKey)
        {
            //
        }

        public virtual Movie FindMovieById(int i)
        {
            return null;
        }
    }

    public class Movie
    {
        public string Title;

        public Movie()
        {
        }
    }

    [TestFixture]
    public class IndividualMovieTests
    {
        [Test]
        public void Movie_Information_Is_Loaded_Correctly()
        {
            //Create the mock.
            Tomato tomato = MockRepository.GenerateStub<Tomato>("t4qpkcsek5h6vgbsy8k4etxdd");

            //Set up expectations.
            tomato.Expect(t => t.FindMovieById(0)).IgnoreArguments().Return(new Movie { Title = "Gone With The Wind" });

            //Test logic.
            Movie movie = tomato.FindMovieById(9818);

            //Do assertions.
            Assert.AreEqual("Gone With The Wind", movie.Title);

            //Verify expectations.
            tomato.VerifyAllExpectations();
        }
    }
}

NUnit Test Run Order

By default, NUnit runs tests alphabetically. Does anyone know of any way to set the execution order? Does an attribute exist for this?
I just want to point out that, while most of the responders assumed these were unit tests, the question did not specify that they were.
NUnit is a great tool that can be used for a variety of testing situations, and I can see appropriate reasons for wanting to control test order.
In those situations I have had to resort to incorporating a run order into the test name. It would be great to be able to specify run order using an attribute.
NUnit 3.2.0 added an OrderAttribute, see:
https://github.com/nunit/docs/wiki/Order-Attribute
Example:
public class MyFixture
{
    [Test, Order(1)]
    public void TestA() { ... }

    [Test, Order(2)]
    public void TestB() { ... }

    [Test]
    public void TestC() { ... }
}
Your unit tests should each be able to run independently and stand alone. If they satisfy this criterion then the order does not matter.
There are occasions however where you will want to run certain tests first. A typical example is in a Continuous Integration situation where some tests are longer running than others. We use the category attribute so that we can run the tests which use mocking ahead of the tests which use the database.
i.e. put this at the start of your quick tests
[Category("QuickTests")]
Where you have tests which are dependent on certain environmental conditions, consider the TestFixtureSetUp and TestFixtureTearDown attributes (OneTimeSetUp and OneTimeTearDown in NUnit 3), which allow you to mark methods to be executed before and after your tests.
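In NUnit 3 syntax, a minimal sketch of that per-fixture setup and teardown could look like this:

using NUnit.Framework;

[TestFixture]
public class DatabaseBackedTests
{
    [OneTimeSetUp]
    public void BeforeAnyTests()
    {
        // e.g. open a connection or seed test data once for the whole fixture
    }

    [OneTimeTearDown]
    public void AfterAllTests()
    {
        // e.g. dispose the connection / clean up the seeded data
    }

    [Test]
    public void SomeTest() { /* ... */ }
}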
Wanting the tests to run in a specific order does not mean that the tests are dependent on each other. I'm working on a TDD project at the moment, and being a good TDDer I've mocked/stubbed everything, but it would be more readable if I could specify the order in which the test results are displayed - thematically instead of alphabetically. So far the only thing I can think of is to prepend a_, b_, c_ to class, namespace, and method names. (Not nice.) I think a [TestOrderAttribute] attribute would be nice - not strictly followed by the framework, but a hint so we can achieve this.
Regardless of whether or not tests are order-dependent... some of us just want to control everything, in an orderly fashion.
Unit tests are usually created in order of complexity. So, why shouldn't they also be run in order of complexity, or the order in which they were created?
Personally, I like to see the tests run in the order of which I created them. In TDD, each successive test is naturally going to be more complex, and take more time to run. I would rather see the simpler test fail first as it will be a better indicator as to the cause of the failure.
But, I can also see the benefit of running them in random order, especially if you want to test that your tests don't have any dependencies on other tests. How about adding an option to test runners to "Run Tests Randomly Until Stopped"?
I am testing with Selenium on a fairly complex web site, the whole suite of tests can run for more than half an hour, and I'm nowhere near covering the entire application yet. If I have to make sure that all previous forms are filled in correctly for each test, this adds a great deal of time, not just a small amount, to the overall run. If there's too much overhead to running the tests, people won't run them as often as they should.
So, I put them in order and depend on previous tests to have text boxes and such completed. I use Assert.Ignore() when the pre-conditions are not valid, but I need to have them running in order.
I really like the previous answer.
I changed it a little to be able to use an attribute to set the order range:
namespace SmiMobile.Web.Selenium.Tests
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;
    using NUnit.Framework;

    public class OrderedTestAttribute : Attribute
    {
        public int Order { get; set; }

        public OrderedTestAttribute(int order)
        {
            Order = order;
        }
    }

    public class TestStructure
    {
        public Action Test;
    }

    class Int
    {
        public int I;
    }

    [TestFixture]
    public class ControllingTestOrder
    {
        private static readonly Int MyInt = new Int();

        [TestFixtureSetUp]
        public void SetUp()
        {
            MyInt.I = 0;
        }

        [OrderedTest(0)]
        public void Test0()
        {
            Console.WriteLine("This is test zero");
            Assert.That(MyInt.I, Is.EqualTo(0));
        }

        [OrderedTest(2)]
        public void ATest0()
        {
            Console.WriteLine("This is test two");
            MyInt.I++; Assert.That(MyInt.I, Is.EqualTo(2));
        }

        [OrderedTest(1)]
        public void BTest0()
        {
            Console.WriteLine("This is test one");
            MyInt.I++; Assert.That(MyInt.I, Is.EqualTo(1));
        }

        [OrderedTest(3)]
        public void AAA()
        {
            Console.WriteLine("This is test three");
            MyInt.I++; Assert.That(MyInt.I, Is.EqualTo(3));
        }

        [TestCaseSource(sourceName: "TestSource")]
        public void MyTest(TestStructure test)
        {
            test.Test();
        }

        public IEnumerable<TestCaseData> TestSource
        {
            get
            {
                var assembly = Assembly.GetExecutingAssembly();
                Dictionary<int, List<MethodInfo>> methods = assembly
                    .GetTypes()
                    .SelectMany(x => x.GetMethods())
                    .Where(y => y.GetCustomAttributes().OfType<OrderedTestAttribute>().Any())
                    .GroupBy(z => z.GetCustomAttribute<OrderedTestAttribute>().Order)
                    .ToDictionary(gdc => gdc.Key, gdc => gdc.ToList());

                foreach (var order in methods.Keys.OrderBy(x => x))
                {
                    foreach (var methodInfo in methods[order])
                    {
                        MethodInfo info = methodInfo;
                        yield return new TestCaseData(
                            new TestStructure
                            {
                                Test = () =>
                                {
                                    object classInstance = Activator.CreateInstance(info.DeclaringType, null);
                                    info.Invoke(classInstance, null);
                                }
                            }).SetName(methodInfo.Name);
                    }
                }
            }
        }
    }
}
I know this is a relatively old post, but here is another way to keep your tests in order WITHOUT making the test names awkward. By using the TestCaseSource attribute and having the object you pass in carry a delegate (Action), you can not only control the order but also name each test for what it does.
This works because, according to the documentation, the items in the collection returned from the test source will always execute in the order they are listed.
Here is a demo from a presentation I'm giving tomorrow:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;

namespace NUnitTest
{
    public class TestStructure
    {
        public Action Test;
    }

    class Int
    {
        public int I;
    }

    [TestFixture]
    public class ControllingTestOrder
    {
        private static readonly Int MyInt = new Int();

        [TestFixtureSetUp]
        public void SetUp()
        {
            MyInt.I = 0;
        }

        [TestCaseSource(sourceName: "TestSource")]
        public void MyTest(TestStructure test)
        {
            test.Test();
        }

        public IEnumerable<TestCaseData> TestSource
        {
            get
            {
                yield return new TestCaseData(
                    new TestStructure
                    {
                        Test = () =>
                        {
                            Console.WriteLine("This is test one");
                            MyInt.I++; Assert.That(MyInt.I, Is.EqualTo(1));
                        }
                    }).SetName("Test One");

                yield return new TestCaseData(
                    new TestStructure
                    {
                        Test = () =>
                        {
                            Console.WriteLine("This is test two");
                            MyInt.I++; Assert.That(MyInt.I, Is.EqualTo(2));
                        }
                    }).SetName("Test Two");

                yield return new TestCaseData(
                    new TestStructure
                    {
                        Test = () =>
                        {
                            Console.WriteLine("This is test three");
                            MyInt.I++; Assert.That(MyInt.I, Is.EqualTo(3));
                        }
                    }).SetName("Test Three");
            }
        }
    }
}
I am working with Selenium WebDriver end-to-end UI test cases written in C#, which are run using the NUnit framework (not unit tests as such).
These UI tests certainly depend on order of execution, as one test needs to add some data as a precondition for another. (It is not feasible to repeat the steps in every test.)
Now, after adding the 10th test case, I see NUnit wants to run them in this order:
Test_1
Test_10
Test_2
Test_3
..
So I guess I have to alphabetize the test case names for now, but it would be good to have this small feature of controlling execution order added to NUnit.
There are very good reasons to use a test-ordering mechanism. Most of my own tests use good practice such as setup/teardown; others require huge amounts of data setup, which can then be used to test a range of features. Until now, I have used large tests to handle these (Selenium WebDriver) integration tests. However, I think the Order attribute suggested above (https://github.com/nunit/docs/wiki/Order-Attribute) has a lot of merit. Here is an example of why ordering would be extremely valuable:
Using Selenium WebDriver to run a test to download a report
The state of the report (whether it's downloadable or not) is cached for 10 minutes
That means, before every test I need to reset the report state and then wait up to 10 minutes before the state is confirmed to have changed, and then verify the report downloads correctly.
The reports cannot be generated in a practical / timely fashion through mocking or any other mechanism within the test framework due to their complexity.
This 10 minute wait time slows down the test suite. When you multiply similar caching delays across a multitude of tests, it consumes a lot of time. Ordering tests could allow data setup to be done as a "Test" right at the beginning of the test suite, with tests relying on the cache to bust being executed toward the end of the test run.
Usually unit tests should be independent, but if you must, you can name your methods in alphabetical order, e.g.:
[Test]
public void Add_Users() { }

[Test]
public void Add_UsersB() { }

[Test]
public void Process_Users() { }

or you can do:

private void Add_Users() { }

private void Add_UsersB() { }

[Test]
public void Process_Users()
{
    Add_Users();
    Add_UsersB();
    // more code
}
This question is really old now, but for people who may reach this from searching, I took the excellent answers from user3275462 and PvtVandals / Rico and added them to a GitHub repository, along with some of my own updates. I also created an associated blog post with some additional info you can look at for more.
Hope this is helpful for you all. I also often like to use the Category attribute to differentiate my integration tests or other end-to-end tests from my actual unit tests. Others have pointed out that unit tests should not have an order dependency, but other test types often do, so this provides a nice way to run only the category of tests you want and also order those end-to-end tests.
If you are using [TestCase], the TestName argument provides a name for the test.
If not specified, a name is generated based on the method name and the arguments provided.
You can control the order of test execution as shown below:
[Test]
[TestCase("value1", TestName = "ExpressionTest_1")]
[TestCase("value2", TestName = "ExpressionTest_2")]
[TestCase("value3", TestName = "ExpressionTest_3")]
public void ExpressionTest(string v)
{
    //do your stuff
}
Here I used the method name "ExpressionTest" suffixed with a number; you can use any names, ordered alphabetically.
See the TestCase attribute documentation.
I'm surprised the NUnit community hasn't come up with anything, so I went and created something like this myself.
I'm currently developing an open-source library that allows you to order your tests with NUnit. You can order test fixtures and "ordered test specifications" alike.
The library offers the following features:
Build complex test ordering hierarchies
Skip subsequent tests if a test in the order fails
Order your test methods by dependency instead of integer order
Supports usage side-by-side with unordered tests; unordered tests are executed first.
The library is actually inspired by how MSTest does test ordering with .orderedtest files. Please look at the example below.
[OrderedTestFixture]
public sealed class MyOrderedTestFixture : TestOrderingSpecification
{
    protected override void DefineTestOrdering()
    {
        TestFixture<Fixture1>();
        OrderedTestSpecification<MyOtherOrderedTestFixture>();
        TestFixture<Fixture2>();
        TestFixture<Fixture3>();
    }

    protected override bool ContinueOnError => false; // Or true, if you want to continue even if a child test fails
}
You should not depend on the order in which the test framework picks tests for execution. Tests should be isolated and independent: they should not depend on some other test setting the stage for them or cleaning up after them, and they should produce the same result irrespective of the order of test execution (for a given snapshot of the SUT).
I did a bit of googling. As usual, some people have resorted to sneaky tricks instead of solving the underlying testability/design issue:
- Naming the tests in an alphabetically ordered manner so that they appear in the order they 'need' to be executed. However, NUnit may choose to change this behavior in a later release, and then your tests would be hosed. Better check the current NUnit binaries into source control.
- VS (IMHO encouraging the wrong behavior with its 'agile tools') has something called "ordered tests" in the MS testing framework. I didn't spend time reading up on it, but it seems to be targeted at the same audience.
See Also: characteristics of a good test
When using TestCaseSource, the key is to override the ToString method. Here is how that works.
Assume you have a TestCase class:
public class TestCase
{
    public string Name { get; set; }
    public int Input { get; set; }
    public int Expected { get; set; }
}
And a list of TestCases:
private static IEnumerable<TestCase> TestSource()
{
    return new List<TestCase>
    {
        new TestCase()
        {
            Name = "Test 1",
            Input = 2,
            Expected = 4
        },
        new TestCase()
        {
            Name = "Test 2",
            Input = 4,
            Expected = 16
        },
        new TestCase()
        {
            Name = "Test 3",
            Input = 10,
            Expected = 100
        }
    };
}
Now let's use it with a test method and see what happens:
[TestCaseSource(nameof(TestSource))]
public void MethodXTest(TestCase testCase)
{
    var x = Power(testCase.Input);
    x.ShouldBe(testCase.Expected);
}
This will not run the tests in order, and the runner displays generated names rather than the names we assigned.
So if we add an override of ToString to our class, like:
public class TestCase
{
    public string Name { get; set; }
    public int Input { get; set; }
    public int Expected { get; set; }

    public override string ToString()
    {
        return Name;
    }
}
The result changes: the tests now run in order and appear under the names we gave them.
Note:
This is just an example to illustrate how to get the name and order in a test. The order is taken numerically/alphabetically, so if you have more than ten tests I suggest naming them Test 01, Test 02, ..., Test 10, Test 11, etc.; if you name them Test 1 and at some point Test 10, the order will be Test 1, Test 10, Test 2, etc.
The Input and Expected values can be any type: string, object, or a custom class.
Besides the ordering, the nice thing here is that you see the test name, which is more important.
