NUnit Test Run Order - C#

By default, NUnit runs tests alphabetically. Does anyone know of a way to set the execution order? Does an attribute exist for this?

I just want to point out that while most of the responders assumed these were unit tests, the question did not specify that they were.
NUnit is a great tool that can be used for a variety of testing situations, and I can see appropriate reasons for wanting to control test order.
In those situations I have had to resort to incorporating a run order into the test names. It would be great to be able to specify run order with an attribute.

NUnit 3.2.0 added an OrderAttribute, see:
https://github.com/nunit/docs/wiki/Order-Attribute
Example:
public class MyFixture
{
    [Test, Order(1)]
    public void TestA() { ... }

    [Test, Order(2)]
    public void TestB() { ... }

    [Test]
    public void TestC() { ... }
}

Your unit tests should each be able to run independently and stand alone. If they satisfy this criterion, the order does not matter.
There are occasions, however, where you will want to run certain tests first. A typical example is a Continuous Integration setup where some tests take longer to run than others. We use the Category attribute so that we can run the tests which use mocking ahead of the tests which use the database,
i.e. put this at the start of your quick tests:
[Category("QuickTests")]
Where you have tests which are dependent on certain environmental conditions, consider the TestFixtureSetUp and TestFixtureTearDown attributes, which allow you to mark methods to be executed before and after your tests.
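For illustration, a minimal sketch of both ideas; the fixture and method names are invented, and the attribute names are the NUnit 2.x ones used in this answer (NUnit 3 renamed them to OneTimeSetUp/OneTimeTearDown):

using NUnit.Framework;

[TestFixture]
public class DatabaseDependentTests
{
    [TestFixtureSetUp]   // NUnit 3: [OneTimeSetUp]
    public void SeedDatabase()
    {
        // Seed the test database once, before any test in this fixture runs.
    }

    [TestFixtureTearDown]   // NUnit 3: [OneTimeTearDown]
    public void CleanDatabase()
    {
        // Remove the seeded data once, after every test in this fixture has run.
    }

    [Test, Category("QuickTests")]
    public void QuickMockBasedTest()
    {
        // Runs whenever the "QuickTests" category is selected by the runner.
    }
}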

Wanting the tests to run in a specific order does not mean that the tests are dependent on each other. I'm working on a TDD project at the moment, and being a good TDDer I've mocked/stubbed everything, but it would be more readable if I could specify the order in which the test results are displayed - thematically instead of alphabetically. So far the only thing I can think of is to prepend a_, b_, c_ to classes, namespaces, and methods. (Not nice.) I think a [TestOrderAttribute] attribute would be nice - not strictly followed by the framework, but a hint so we could achieve this.

Regardless of whether or not tests are order-dependent... some of us just want to control everything, in an orderly fashion.
Unit tests are usually created in order of complexity. So, why shouldn't they also be run in order of complexity, or the order in which they were created?
Personally, I like to see the tests run in the order in which I created them. In TDD, each successive test is naturally going to be more complex and take more time to run. I would rather see the simpler test fail first, as it will be a better indicator of the cause of the failure.
But, I can also see the benefit of running them in random order, especially if you want to test that your tests don't have any dependencies on other tests. How about adding an option to test runners to "Run Tests Randomly Until Stopped"?

I am testing with Selenium on a fairly complex web site, and the whole suite of tests can run for more than half an hour; I'm not yet close to covering the entire application. If I have to make sure that all previous forms are filled in correctly for each test, this adds a great deal of time to the overall run. If there's too much overhead in running the tests, people won't run them as often as they should.
So, I put them in order and depend on previous tests to have text boxes and such completed. I use Assert.Ignore() when the preconditions are not valid, but I need to have them running in order.
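As a minimal sketch of that pattern (the page flow and the flag are invented for the example; it relies on NUnit's default alphabetical order to run Step1 before Step2):

[TestFixture]
public class CheckoutFlowTests
{
    private static bool _addressFormCompleted;

    [Test]
    public void Step1_FillAddressForm()
    {
        // ... drive Selenium to fill in the address form ...
        _addressFormCompleted = true;
    }

    [Test]
    public void Step2_SubmitOrder()
    {
        // Skip instead of failing when the earlier test didn't leave the page in the right state.
        if (!_addressFormCompleted)
            Assert.Ignore("Precondition not met: the address form was not completed.");

        // ... continue from the state left behind by Step1_FillAddressForm ...
    }
}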

I really like the previous answer.
I changed it a little to be able to use an attribute to set the order:
namespace SmiMobile.Web.Selenium.Tests
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;
    using NUnit.Framework;

    public class OrderedTestAttribute : Attribute
    {
        public int Order { get; set; }

        public OrderedTestAttribute(int order)
        {
            Order = order;
        }
    }

    public class TestStructure
    {
        public Action Test;
    }

    class Int
    {
        public int I;
    }

    [TestFixture]
    public class ControllingTestOrder
    {
        private static readonly Int MyInt = new Int();

        [TestFixtureSetUp]
        public void SetUp()
        {
            MyInt.I = 0;
        }

        [OrderedTest(0)]
        public void Test0()
        {
            Console.WriteLine("This is test zero");
            Assert.That(MyInt.I, Is.EqualTo(0));
        }

        [OrderedTest(2)]
        public void ATest0()
        {
            Console.WriteLine("This is test two");
            MyInt.I++;
            Assert.That(MyInt.I, Is.EqualTo(2));
        }

        [OrderedTest(1)]
        public void BTest0()
        {
            Console.WriteLine("This is test one");
            MyInt.I++;
            Assert.That(MyInt.I, Is.EqualTo(1));
        }

        [OrderedTest(3)]
        public void AAA()
        {
            Console.WriteLine("This is test three");
            MyInt.I++;
            Assert.That(MyInt.I, Is.EqualTo(3));
        }

        [TestCaseSource(sourceName: "TestSource")]
        public void MyTest(TestStructure test)
        {
            test.Test();
        }

        public IEnumerable<TestCaseData> TestSource
        {
            get
            {
                var assembly = Assembly.GetExecutingAssembly();

                // Group every [OrderedTest] method in the assembly by its Order value.
                Dictionary<int, List<MethodInfo>> methods = assembly
                    .GetTypes()
                    .SelectMany(x => x.GetMethods())
                    .Where(y => y.GetCustomAttributes().OfType<OrderedTestAttribute>().Any())
                    .GroupBy(z => z.GetCustomAttribute<OrderedTestAttribute>().Order)
                    .ToDictionary(gdc => gdc.Key, gdc => gdc.ToList());

                // Yield the test cases in ascending Order, so the runner executes them in order.
                foreach (var order in methods.Keys.OrderBy(x => x))
                {
                    foreach (var methodInfo in methods[order])
                    {
                        MethodInfo info = methodInfo;
                        yield return new TestCaseData(
                            new TestStructure
                            {
                                Test = () =>
                                {
                                    object classInstance = Activator.CreateInstance(info.DeclaringType, null);
                                    info.Invoke(classInstance, null);
                                }
                            }).SetName(methodInfo.Name);
                    }
                }
            }
        }
    }
}

I know this is a relatively old post, but here is another way to keep your tests in order WITHOUT making the test names awkward. By using the TestCaseSource attribute and giving the object you pass in a delegate (Action), you can not only control the order but also give each test a proper name.
This works because, according to the documentation, the items in the collection returned from the test source will always execute in the order they are listed.
Here is a demo from a presentation I'm giving tomorrow:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;

namespace NUnitTest
{
    public class TestStructure
    {
        public Action Test;
    }

    class Int
    {
        public int I;
    }

    [TestFixture]
    public class ControllingTestOrder
    {
        private static readonly Int MyInt = new Int();

        [TestFixtureSetUp]
        public void SetUp()
        {
            MyInt.I = 0;
        }

        [TestCaseSource(sourceName: "TestSource")]
        public void MyTest(TestStructure test)
        {
            test.Test();
        }

        public IEnumerable<TestCaseData> TestSource
        {
            get
            {
                yield return new TestCaseData(
                    new TestStructure
                    {
                        Test = () =>
                        {
                            Console.WriteLine("This is test one");
                            MyInt.I++;
                            Assert.That(MyInt.I, Is.EqualTo(1));
                        }
                    }).SetName("Test One");

                yield return new TestCaseData(
                    new TestStructure
                    {
                        Test = () =>
                        {
                            Console.WriteLine("This is test two");
                            MyInt.I++;
                            Assert.That(MyInt.I, Is.EqualTo(2));
                        }
                    }).SetName("Test Two");

                yield return new TestCaseData(
                    new TestStructure
                    {
                        Test = () =>
                        {
                            Console.WriteLine("This is test three");
                            MyInt.I++;
                            Assert.That(MyInt.I, Is.EqualTo(3));
                        }
                    }).SetName("Test Three");
            }
        }
    }
}

I am working with Selenium WebDriver end-to-end UI test cases written in C#, run with the NUnit framework. (Not unit tests as such.)
These UI tests certainly depend on order of execution, as earlier tests need to add some data as a precondition. (It is not feasible to repeat the setup steps in every test.)
Now, after adding the 10th test case, I see NUnit wants to run them in this order:
Test_1
Test_10
Test_2
Test_3
..
So I guess I have to alphabetize the test case names for now, but it would be good to have this small feature of controlling execution order added to NUnit.
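Zero-padding the numeric suffix restores the intended alphabetical order, e.g.:

[Test] public void Test_01() { /* ... */ }
[Test] public void Test_02() { /* ... */ }
// ...
[Test] public void Test_10() { /* now sorts after Test_02, as intended */ }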

There are very good reasons to utilise a test-ordering mechanism. Most of my own tests use good practice such as setup/teardown. Others require huge amounts of data setup, which can then be used to test a range of features. Up till now, I have used large tests to handle these (Selenium WebDriver) integration tests. However, I think the Order attribute suggested above (https://github.com/nunit/docs/wiki/Order-Attribute) has a lot of merit. Here is an example of why ordering would be extremely valuable:
Using Selenium WebDriver to run a test to download a report
The state of the report (whether it's downloadable or not) is cached for 10 minutes
That means, before every test I need to reset the report state, wait up to 10 minutes for the state change to be confirmed, and then verify that the report downloads correctly.
The reports cannot be generated in a practical / timely fashion through mocking or any other mechanism within the test framework, due to their complexity.
This 10-minute wait slows down the test suite, and when you multiply similar caching delays across a multitude of tests, it consumes a lot of time. Ordering tests would allow data setup to be done as a "test" right at the beginning of the suite, with the tests that rely on the busted cache executed toward the end of the run, as sketched below.
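With the NUnit 3.2+ Order attribute, that could look roughly like the following sketch (names invented; note that Order only orders tests relative to others in the same fixture):

[TestFixture]
public class ReportCacheTests
{
    [Test, Order(1)]
    public void SetUpReportDataAndResetCache()
    {
        // Reset the report state once, right at the start of the run.
    }

    // ... other, cache-independent tests run in between and absorb the wait ...

    [Test, Order(100)]
    public void ReportDownloadsCorrectly()
    {
        // By the time this runs, the 10-minute cache window has had time to elapse.
    }
}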

Usually unit tests should be independent, but if you must, then you can name your methods in alphabetical order, e.g.:

[Test]
public void Add_Users() {}

[Test]
public void Add_UsersB() {}

[Test]
public void Process_Users() {}

or you can do:

private void Add_Users() {}
private void Add_UsersB() {}

[Test]
public void Process_Users()
{
    Add_Users();
    Add_UsersB();
    // more code
}

This question is really old now, but for people who may reach it from searching: I took the excellent answers from user3275462 and PvtVandals / Rico and added them to a GitHub repository, along with some of my own updates. I also created an associated blog post with additional information.
Hope this is helpful for you all. Also, I often like to use the Category attribute to differentiate my integration tests or other end-to-end tests from my actual unit tests. Others have pointed out that unit tests should not have an order dependency, but other test types often do, so this provides a nice way to run only the category of tests you want and also order those end-to-end tests.
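For example, tagging the slow end-to-end fixtures with a category and then selecting or excluding that category in the runner (fixture names invented; the --where filter syntax is from the NUnit 3 console runner):

[TestFixture, Category("EndToEnd")]
public class CheckoutEndToEndTests
{
    // Run only these with: nunit3-console MyTests.dll --where "cat == EndToEnd"
    // Exclude them with:   nunit3-console MyTests.dll --where "cat != EndToEnd"

    [Test, Order(1)]
    public void LogIn() { /* ... */ }

    [Test, Order(2)]
    public void PlaceOrder() { /* ... */ }
}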

If you are using [TestCase], the TestName argument provides a name for the test.
If not specified, a name is generated based on the method name and the arguments provided.
You can control the order of test execution as shown below:

[Test]
[TestCase("value1", TestName = "ExpressionTest_1")]
[TestCase("value2", TestName = "ExpressionTest_2")]
[TestCase("value3", TestName = "ExpressionTest_3")]
public void ExpressionTest(string v)
{
    // do your stuff
}

Here I used the method name "ExpressionTest" suffixed with a number.
You can use any names, ordered alphabetically.
See the TestCase attribute documentation.

I'm surprised the NUnit community hasn't come up with anything, so I went and created something like this myself.
I'm currently developing an open-source library that allows you to order your tests with NUnit. You can order test fixtures and "ordered test specifications" alike.
The library offers the following features:
Build complex test-ordering hierarchies
Skip subsequent tests if a test in the order fails
Order your test methods by dependency instead of integer order
Supports usage side by side with unordered tests (unordered tests are executed first)
The library is actually inspired by how MSTest does test ordering with .orderedtest files. Please look at the example below.

[OrderedTestFixture]
public sealed class MyOrderedTestFixture : TestOrderingSpecification
{
    protected override void DefineTestOrdering()
    {
        TestFixture<Fixture1>();
        OrderedTestSpecification<MyOtherOrderedTestFixture>();
        TestFixture<Fixture2>();
        TestFixture<Fixture3>();
    }

    protected override bool ContinueOnError => false; // Or true, if you want to continue even if a child test fails
}

You should not depend on the order in which the test framework picks tests for execution. Tests should be isolated and independent: they should not depend on some other test setting the stage for them or cleaning up after them, and they should produce the same result irrespective of the order of execution (for a given snapshot of the SUT).
I did a bit of googling. As usual, some people have resorted to sneaky tricks (instead of solving the underlying testability/design issue):
naming the tests in an alphabetically ordered manner such that tests appear in the order they 'need' to be executed. However, NUnit may choose to change this behavior in a later release, and then your tests would be hosed. Better to check the current NUnit binaries in to source control.
VS (IMHO encouraging the wrong behavior with their 'agile tools') has something called "ordered tests" in the MS testing framework. I didn't waste any time reading up, but it seems to be targeted at the same audience.
See also: characteristics of a good test

When using TestCaseSource, the key is to override the ToString method. Here is how that works:
Assume you have a TestCase class:
public class TestCase
{
    public string Name { get; set; }
    public int Input { get; set; }
    public int Expected { get; set; }
}

And a list of test cases:

private static IEnumerable<TestCase> TestSource()
{
    return new List<TestCase>
    {
        new TestCase()
        {
            Name = "Test 1",
            Input = 2,
            Expected = 4
        },
        new TestCase()
        {
            Name = "Test 2",
            Input = 4,
            Expected = 16
        },
        new TestCase()
        {
            Name = "Test 3",
            Input = 10,
            Expected = 100
        }
    };
}
Now let's use it with a test method and see what happens:

[TestCaseSource(nameof(TestSource))]
public void MethodXTest(TestCase testCase)
{
    var x = Power(testCase.Input);
    x.ShouldBe(testCase.Expected);
}

This will not run the tests in order, and the runner names each case using the TestCase object's default ToString, so the output is hard to read.
So if we add a ToString override to our class, like this:

public class TestCase
{
    public string Name { get; set; }
    public int Input { get; set; }
    public int Expected { get; set; }

    public override string ToString()
    {
        return Name;
    }
}

the result changes: the tests are now listed in order and displayed with their names.
Note:
This is just an example to illustrate how to get the name and order in a test. The order is taken numerically/alphabetically, so if you have more than ten tests I suggest naming them Test 01, Test 02, ... Test 10, Test 11, etc., because if you use Test 1 and at some point Test 10, then the order will be Test 1, Test 10, Test 2, etc.
The Input and Expected members can be any type: string, object, or a custom class.
Besides order, the nice thing here is that you see the test name, which is more important.

Related

Unit test for void method with Interface as parameter

New to unit testing, I have the sample code below and I want to create a unit test for it. Please suggest what I should do to create a unit test for this; any link or pointers would be helpful to start.
public class UserNotification : Work
{
    public override void Execute(IWorkContext iwc)
    {
        throw new InvalidWorkException($"some message:{iwc.Name} and :{iwc.Dept}");
    }
}
Edit: using MSTest for Unit testing
First, you need a test project alongside with your regular project.
You can pick from these three:
MSTest
nUnit
xUnit
All of these should have a project template in VS2022.
xUnit is a popular one, so let's pick that. The usual naming convention for test projects is YourProject.Tests. Rename the UnitTest1.cs class to UserNotificationTests.cs.
As simple as it gets, you can now start writing your tests. In xUnit, a method with [Fact] attribute is a test method.
using Xunit;

namespace MyProject.Tests
{
    public class UserNotificationTests
    {
        [Fact]
        public void Execute_Should_Throw_InvalidWorkException_With_Message()
        {
        }
    }
}
Don't think of these names as you would method names in regular code: test names should read close to English sentences and should reveal the intent as a regular sentence would.
Classic approach to unit testing has three phases:
Arrange: Take instances of your objects, set your expected output, mock dependencies, make them ready.
Act: Call the actual action you want to test.
Assert: Check how your actual output relates to your expected output.
Let's start with arranging.
We need a new instance of UserNotification class so we can call Execute().
We need any dummy IWorkContext object so we can pass it. We'll use NSubstitute library for that.
// Don't forget to add using NSubstitute
// Arrange
var userNotification = new UserNotification();
var workContext = Substitute.For<IWorkContext>();
workContext.Name = "testName";
workContext.Dept = "testDept";
Now you act, and invoke your method:
// Act
Action act = () => userNotification.Execute(workContext);
And lastly, we assert. I highly recommend the FluentAssertions library for asserting.
// Assert
act.Should().Throw<InvalidWorkException>()
.WithMessage($"some message:{workContext.Name} and :{workContext.Dept}");
Navigate to View > Test Explorer and run your tests; you should see your new test discovered and passing.
Congratulations, you wrote your first unit test.
Here's the final version of your test code:
using FluentAssertions;
using NSubstitute;
using System;
using Xunit;

namespace MyProject.Tests
{
    public class UserNotificationTests
    {
        [Fact]
        public void Execute_Should_Throw_InvalidWorkException_With_Message()
        {
            // Arrange
            var userNotification = new UserNotification();
            var workContext = Substitute.For<IWorkContext>();
            workContext.Name = "testName";
            workContext.Dept = "testDept";

            // Act
            Action act = () => userNotification.Execute(workContext);

            // Assert
            act.Should().Throw<InvalidWorkException>()
               .WithMessage($"some message:{workContext.Name} and :{workContext.Dept}");
        }
    }

    public class UserNotification : Work
    {
        public override void Execute(IWorkContext iwc)
        {
            throw new InvalidWorkException($"some message:{iwc.Name} and :{iwc.Dept}");
        }
    }

    public abstract class Work
    {
        public virtual void Execute(IWorkContext iwc) { }
    }

    public interface IWorkContext
    {
        public string Name { get; set; }
        public string Dept { get; set; }
    }

    public class InvalidWorkException : System.Exception
    {
        public InvalidWorkException() { }
        public InvalidWorkException(string message) : base(message) { }
        public InvalidWorkException(string message, System.Exception inner) : base(message, inner) { }
        protected InvalidWorkException(
            System.Runtime.Serialization.SerializationInfo info,
            System.Runtime.Serialization.StreamingContext context) : base(info, context) { }
    }
}
Writing tests feels a lot different than writing regular code. But in time you'll get the hang of it. How to mock, how to act, how to assert, these may vary depending on what you are testing. The main point is to isolate the main thing you want to unit test, and mock the rest.
Good luck!
Because your title specifically mentions that you're trying to test a method with a void return type, I infer that you've already been testing methods with actual return values, and therefore that you already have a test project and know how to run a test once it is written. If not, the answer written by Mithgroth is a good explanation of how to get started with testing in general.
Your test is defined by the behavior that you wish to test. Your snippet has no behavior, which makes it hard to give you a concrete answer.
I've opted to rewrite your example:
public class UserNotification : Work
{
    public override void Execute(IWorkContext iwc)
    {
        var splines = iwc.GetSplines();
        iwc.Reticulate(splines);
    }
}
Now we have some behavior that we want to test. The test goal is to answer the following question:
When calling Execute, does UserNotification fetch the needed splines and reticulate them?
When unit testing, you want to mock all other things. In this case, the IWorkContext is an external dependency, so it should be mocked. Mocking the work context allows us to easily configure the mock to help with the testing. When we run the test, we will pass an IWorkContext object which acts as a spy. In essence, this mocked object will:
... have been set up to return a very specific set of splines, one that we chose for the test's purpose.
... secretly record any calls made to the Reticulate method, and tracks the parameters that were passed into it.
Before we get into the nitty gritty on how to mock, we can already outline how our test is going to go:
[Test]
public void ReticulatesTheContextSplines()
{
    // Arrange
    IWorkContext mockedContext = ...; // This comes later
    UserNotification userNotification = new UserNotification();

    // Act
    userNotification.Execute(mockedContext);

    // Assert
    // Confirm that Reticulate() was called
    // Confirm that Reticulate() was given the result from `GetSplines()`
}
There's your basic unit test. All that's left is to create our mock.
You can write this yourself if you want. Simply create a new class that implements IWorkContext, and give it some more public properties/methods to help you keep track of things. A very simple example would be:
public class MockedWorkContext : IWorkContext
{
    // Allows the test to set the returned result
    public IEnumerable<Spline> Splines { get; set; }

    // History of arguments used for calls made to Reticulate.
    // Each call will add an entry to the list.
    public List<IEnumerable<Spline>> ReticulateArguments { get; private set; } = new List<IEnumerable<Spline>>();

    public IEnumerable<Spline> GetSplines()
    {
        // Returns the preset splines that the test configured
        return this.Splines;
    }

    // Mocked implementation of Reticulate()
    public void Reticulate(IEnumerable<Spline> splines)
    {
        // Does nothing except record what you passed into it
        this.ReticulateArguments.Add(splines);
    }
}
This is a very simplified implementation, but it gets the job done. The test will now look like this:
[Test]
public void ReticulatesTheContextSplines()
{
    // Arrange
    IEnumerable<Spline> splines = new List<Spline>() { new Spline(), new Spline() }; // Just create some items here; it's random test data.
    var mockedContext = new MockedWorkContext();
    mockedContext.Splines = splines;
    UserNotification userNotification = new UserNotification();

    // Act
    userNotification.Execute(mockedContext);

    // Assert - Confirm that Reticulate() was called
    mockedContext.ReticulateArguments.Should().HaveCount(1);

    // Confirm that Reticulate() was given the result from `GetSplines()`
    mockedContext.ReticulateArguments[0].Should().BeEquivalentTo(splines);
}
This test now exactly tests the behavior of your method. It uses the mocked context as a spy to report on what your unit under test (i.e. UserNotification) does with the context that you pass into it.
Note that I am using FluentAssertions here, as I find it the most easily readable syntax. Feel free to use your own assertion logic.
While you can write your own mocks, there are mocking libraries that help cut down on the boilerplate. Moq and NSubstitute are the two biggest favorites as far as I'm aware. I personally prefer NSubstitute's syntax, but both get the job done equally well.
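For comparison, here is roughly how the same test could look with Moq instead of the handwritten mock; it uses the same invented IWorkContext members as above, and the Moq calls shown (Setup/Returns/Verify) are standard Moq API:

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

[TestFixture]
public class UserNotificationMoqTests
{
    [Test]
    public void ReticulatesTheContextSplines()
    {
        // Arrange
        var splines = new List<Spline> { new Spline(), new Spline() };
        var mockedContext = new Mock<IWorkContext>();
        mockedContext.Setup(c => c.GetSplines()).Returns(splines);
        var userNotification = new UserNotification();

        // Act
        userNotification.Execute(mockedContext.Object);

        // Assert: Reticulate() was called exactly once, with the splines from GetSplines()
        mockedContext.Verify(c => c.Reticulate(splines), Times.Once());
    }
}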
If you want to use NUnit, the documentation with examples is pretty easy to follow; link below.
NUnit documentation
And I think all other unit test frameworks have something similar to this.

[Test]
public void Execute_WhenCalled_ThrowArgumentException()
{
    // Initialize an instance of IWorkContext,
    var iwc = new WorkContext();
    // or use a mock object instead (then pass iwc.Object to Execute below):
    // var iwc = new Mock<IWorkContext>();

    var userNotification = new UserNotification();

    Assert.Throws(typeof(InvalidWorkException), () =>
    {
        userNotification.Execute(iwc);
    });
}

AutoFixture Tests Stop Showing Up in Test Runner

AutoFixture (or my misuse of it) seems to have caused the xUnit test runner to stop showing individual tests in the tree view for each instance of inline data. Usually, if I use [Theory] and [InlineData], I get individual tests showing up in the test runner tree view, one for each [InlineData]. That no longer happens after the following:
I created a custom AutoMoqDataAttribute class to always use the AutoMoqCustomization and disable recursion:
public sealed class AutoMoqDataAttribute : AutoDataAttribute
{
    /// <summary>
    /// Automatically adds the "Auto Moq" customization to fixtures that use the
    /// AutoDataAttribute attribute.
    /// </summary>
    public AutoMoqDataAttribute() : base(() =>
    {
        // Create a fixture we can customize.
        var fixture = new Fixture();

        // Remove the recursion checks, as we have objects that require recursion.
        var throwingRecursionBehavior = fixture.Behaviors.FirstOrDefault(behavior => behavior.GetType() == typeof(ThrowingRecursionBehavior));
        if (throwingRecursionBehavior != null) fixture.Behaviors.Remove(throwingRecursionBehavior);

        // Also add OmitOnRecursionBehavior to avoid a stack overflow: with the check
        // removed above, AutoFixture would otherwise follow every object and keep trying
        // to auto-fill its data, looping forever on circular references.
        fixture.Behaviors.Add(new OmitOnRecursionBehavior());

        // Use the "Auto Moq" customization so AutoFixture automatically mocks objects for us.
        fixture.Customize(new AutoMoqCustomization());

        return fixture;
    }) { }
}
Then, I want this to work when I use Theory/InlineData, so I also create this:
public class InlineAutoMoqDataAttribute : InlineAutoDataAttribute
{
    public InlineAutoMoqDataAttribute(params object[] values) : base(new AutoMoqDataAttribute(), values) { }
}
Finally, I try to run some tests:
public class TestClass
{
    public bool Val { get; set; }
}

[Theory]
[InlineData("blah 1", true)]
[InlineData("blah 2", true)]
public void Test1(string dummy, bool expected)
{
    Assert.Equal(expected, true);
}
The above works fine... but then I run this:
[Theory]
[InlineAutoMoqData("blah 1", false)]
[InlineAutoMoqData("blah 2", false)]
public void Test2(string dummy, bool expected, TestClass sut)
{
    Assert.Equal(expected, sut.Val);
}
The tests run, but I don't see two tests (one for each InlineAutoMoqData) in the test runner as I usually do. I can see them if I click the test and look in the "Test Detail Summary" panel, which would almost be good enough, except I can't re-run failed tests (meaning tests for individual InlineData cases) from there. I can from the tree view, but again, they're not showing up there, so that is a big problem: it runs all InlineAutoMoqData tests when I want to isolate and run just the one that is failing.
If I add this attribute:
[DataDiscoverer("Xunit.Sdk.InlineDataDiscoverer", "xunit.core")]
to my InlineAutoMoqDataAttribute, the individual tests, one for each InlineAutoMoqData, show up in test explorer but then I get this error:
"InvalidOperationException : The test method expected 2 parameter values, but 1 parameter value was provided."
meaning AutoFixture auto mock values don't work.
Is there any way to have my custom InlineAutoMoqData attribute and still have tests for each individual InlineAutoMoqData showing up in test explorer? Is this a bug I should report to AutoFixture, or am I doing something wrong?
This is a known issue in AutoFixture, though I'm not sure there is a formal issue opened in the GitHub repository. If there isn't one you are welcome to create an issue and track the progress on it there.

XUnit - Mixing theory data mechanisms, input and expected data

When creating Theory test cases with xUnit, I would like to be able to include both the parameters and the expected outcome for each case. I have used the InlineData attribute, but for heavy configuration loading this is less than optimal and does not permit reuse.
[InlineData(1,2,3,4,5,6,7,...)]
As such I have moved the test configurations out to a separate class and now load them with MemberData and MemberType.
[Theory]
[MemberData(nameof(DataClass.Data), MemberType = typeof(DataClass))]
public void TestValidConfig(Configuration config)
{
...
}
However, this does not allow me to specify the expected outcome as I could when using a basic tag, i.e.
[InlineData("Input1", "Input2", "Input3", "ExpectedResult")]
I don't want to include the expected outcome with the configuration data as this will be reused in multiple tests.
Has anyone got a solution to this challenge?
So the underlying challenge is having complex test data that could be used in multiple places, while keeping the expected outcome separate. In a calculator (bad example), you could have lists of numbers that serve as the test data. These could then be passed into an add, multiply, or subtract test. That is where I would want to separate the input data from the expected output data.
Here's a suggestion:
Create a class to generate test data:
internal static class TestData
{
    public static IList<T> Get<T>(int count = 10)
    {
        // I'm using NBuilder here to generate test data quickly.
        // Use your own logic to create your test data.
        return Builder<T>.CreateListOfSize(count).Build();
    }
}
Now, all your test classes can leverage this to get the same set of test data. So, in your data class, you would do something along the lines of
public class DataClass
{
    public static IEnumerable<object[]> Data()
    {
        return new List<object[]>
        {
            // ExpectedResult() stands in for however you compute the expected outcome.
            new object[] { TestData.Get<Data>(), ExpectedResult() }
        };
    }
}
Now you can follow through with your original approach:
[Theory]
[MemberData(nameof(DataClass.Data), MemberType = typeof(DataClass))]
public void TestValidConfig(Data input, Configuration expected)
{
    ...
}
If your tests don't mutate the input data, you can collect it into a fixture and inject the input data through the constructor. This speeds up the tests, since you don't have to generate the input data per test. Check out shared context for more information.
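A minimal sketch of that fixture approach in xUnit (type names invented, reusing the TestData helper from above):

using System.Collections.Generic;
using Xunit;

public class TestDataFixture
{
    // Generated once per test class and shared by all of its tests.
    public IList<Configuration> Configurations { get; } = TestData.Get<Configuration>();
}

public class ConfigurationTests : IClassFixture<TestDataFixture>
{
    private readonly TestDataFixture _fixture;

    public ConfigurationTests(TestDataFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public void AllConfigurationsAreValid()
    {
        Assert.All(_fixture.Configurations, c => Assert.NotNull(c));
    }
}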

How do I mock a string response in a Unit Test?

Here's what I have in my test so far:
[TestFixture]
public class IndividualMovieTests
{
    [Test]
    public void WebClient_Should_Download_From_Correct_Endpoint()
    {
        const string correctEndpoint = "http://api.rottentomatoes.com/api/public/v1.0/movies/{movie-id}.json?apikey={your-api-key}";
        ApiEndpoints.Endpoints["IndividualMovie"].ShouldEqual(correctEndpoint);
    }

    [Test]
    public void Movie_Information_Is_Loaded_Correctly()
    {
        Tomato tomato = new Tomato("t4qpkcsek5h6vgbsy8k4etxdd");
        var movie = tomato.FindMovieById(9818);
        movie.Title.ShouldEqual("Gone With The Wind");
    }
}
My FindMovieById method goes online and fetches a JSON result, which means it sort of breaks the principle behind unit testing. I have a feeling I have to mock this string response, but I don't really know how to approach this.
How would you approach this particular unit testing?
In your second [Test], I would suggest not focusing on a specific return value from your FindMovieById method, unless you truly want to test that your given inputs should always result in "Gone With The Wind". The test that you have is a very specific case in which a specific input number results in a specific output, which may or may not change when running against your actual database. Also, since you're not going to be testing against the actual web service, doing this kind of validation is basically self-serving: you're not really testing anything. Instead, focus on testing how the Tomato class handles validation of the argument (if at all), and that the Tomato class actually invokes the service to get the return value. Rather than testing specific inputs and outputs, test the behavior of the class, so that if someone changes it in the future, the test will break and alert them that they may have broken working functionality.
For example, if you have input validation, you could test that your Tomato class throws an exception if an invalid input is detected (see the sketch below).
Assuming that your Tomato class has some sort of web-client functionality for requesting and retrieving results, you could plug in stub implementations of the actual web code, or mocked implementations, to ensure that Tomato is in fact calling the appropriate web-client code to request and process the response.
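For instance, if Tomato validated its movie id argument (an assumption; the code in the question doesn't show any validation), a behavior test for it could look like this:

[Test]
public void FindMovieById_Rejects_NonPositive_Ids()
{
    var tomato = new Tomato("t4qpkcsek5h6vgbsy8k4etxdd");

    // Hypothetical: assumes Tomato guards against invalid ids.
    Assert.Throws<ArgumentOutOfRangeException>(() => tomato.FindMovieById(-1));
}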
First off, you might not have to mock to test your code. For example, if you are just testing that you can deserialize JSON into a Movie object, you could do that by testing a public or internal ParseJSON method on the Movie class.
However, since you are asking about mocking, here's a quick overview of one way you could write this test using a mock. As it is written, Movie_Information_Is_Loaded_Correctly() looks like an integration test. To turn this into a unit test, you could mock out the web request the Tomato class makes. One way to do that would be to create a ITomatoWebRequester interface and pass that as a parameter to the Tomato class in the constructor. You could then mock the ITomatoWebRequester to return the web response you are expecting, and then you could test that the Tomato class properly parses that response.
The code could look something like this:
public class Tomato
{
    private readonly ITomatoWebRequester _webRequester;

    public Tomato(string uniqueID, ITomatoWebRequester webRequester)
    {
        _webRequester = webRequester;
    }

    public Movie FindMovieById(int movieID)
    {
        var responseJSON = _webRequester.GetMovieJSONByID(movieID);
        // The next line is what we want to unit test
        return Movie.Parse(responseJSON);
    }
}

public interface ITomatoWebRequester
{
    string GetMovieJSONByID(int movieID);
}
To test, you could use a mocking framework like Moq to create a ITomatoWebRequester that will return a result you expect. To do that with Moq the following code should work:
[Test]
public void Movie_Information_Is_Loaded_Correctly()
{
    var mockWebRequester = new Moq.Mock<ITomatoWebRequester>();
    var myJson = "enter json response you want to use to test with here";
    mockWebRequester.Setup(a => a.GetMovieJSONByID(It.IsAny<int>()))
                    .Returns(myJson);

    Tomato tomato = new Tomato("t4qpkcsek5h6vgbsy8k4etxdd",
                               mockWebRequester.Object);
    var movie = tomato.FindMovieById(9818);
    movie.Title.ShouldEqual("Gone With The Wind");
}
The cool thing about the mock in this case is that you don't have to worry about all the hoops the actual ITomatoWebRequester has to jump through to return the JSON it is supposed to return; you can just create a mock right in your test that returns exactly what you want. Hopefully this answer serves as a decent intro to mocking. I would definitely suggest reading up on mocking frameworks to get a better feel for how the process works.
Use the Rhino.Mocks library and set up expectations wherever appropriate. The following is a sample mocking your movie object.
using System;
using NUnit.Framework;
using Rhino.Mocks;

namespace ConsoleApplication1
{
    public class Tomato
    {
        public Tomato(string t4qpkcsek5h6vgbsy8k4etxdd)
        {
            //
        }

        public virtual Movie FindMovieById(int i)
        {
            return null;
        }
    }

    public class Movie
    {
        public string Title;

        public Movie()
        {
        }

        public void FindMovieById(int i)
        {
            throw new NotImplementedException();
        }
    }

    [TestFixture]
    public class IndividualMovieTests
    {
        [Test]
        public void Movie_Information_Is_Loaded_Correctly()
        {
            // Create mock.
            Tomato tomato = MockRepository.GenerateStub<Tomato>("t4qpkcsek5h6vgbsy8k4etxdd");

            // Set up expectations.
            tomato.Expect(t => t.FindMovieById(0)).IgnoreArguments().Return(new Movie() { Title = "Gone With The Wind" });

            // Test logic.
            Movie movie = tomato.FindMovieById(9818);

            // Do assertions.
            Assert.AreEqual("Gone With The Wind", movie.Title);

            // Verify expectations.
            tomato.VerifyAllExpectations();
        }
    }
}

Help/advice needed with unit testing repositories

I am using .NET 4, NUnit, and Rhino Mocks. I want to unit test my news repository, but I am not sure how to go about it. The news repository is what I will eventually use to communicate with the database. I want to test it against fake/dummy data. Not sure if that is possible? This is what I currently have:
public interface INewsRepository
{
    IEnumerable<News> FindAll();
}

public class NewsRepository : INewsRepository
{
    private readonly INewsRepository newsRepository;

    public NewsRepository(INewsRepository newsRepository)
    {
        this.newsRepository = newsRepository;
    }

    public IEnumerable<News> FindAll()
    {
        return null;
    }
}
My unit test looks like this:
public class NewsRepositoryTest
{
    private INewsRepository newsRepository;

    [SetUp]
    public void Init()
    {
        newsRepository = MockRepository.GenerateMock<NewsRepository>();
    }

    [Test]
    public void FindAll_should_return_correct_news()
    {
        // Arrange
        List<News> newsList = new List<News>();
        newsList.Add(new News { Id = 1, Title = "Test Title 1" });
        newsList.Add(new News { Id = 2, Title = "Test Title 2" });
        newsRepository.Stub(r => r.FindAll()).Return(newsList);

        // Act
        var actual = newsRepository.FindAll();

        // Assert
        Assert.AreEqual(2, actual.Count());
    }
}
In the above code I am not sure what I need to mock. The code above compiles but fails in the NUnit GUI with a constructor error. I can only assume it has to do with the INewsRepository parameter that I need to supply to NewsRepository. I don't know how to do this in the test. Can someone please fix my unit test so that it will pass in the NUnit GUI? Can someone also provide some feedback on whether I am implementing my repositories correctly?
Being a newbie to mocking, is there anything that I need to verify? When would I need to verify? What is its purpose? I have been working through a couple of source-code projects, and some use verify and some don't.
If the above test passes, what does this prove to me as a developer? What would another developer have to do to my repository to make it fail in the NUnit GUI?
Sorry for all the questions, but they are newbie questions :)
I hope someone can help me out.
As Steven has said, you're asserting against the mock NewsRepository in the above code.
The idea of mocking is to isolate the Code Under Test and to create fakes to replace their dependencies.
You use the Mock NewsRepository to test something that uses INewsRepository, in your case, you mention NewsService; NewsService will use your mock of INewsRepository.
If you search your solution for anything that uses INewsRepository.FindAll(), you will create a Mock Repository to test that code in isolation.
If you want to test something that calls your Service layer, you will need to mock NewsService.
Also, as Steven has said, there is no need for the NewsRepository to have a copy of itself injected via IoC, so:
public class NewsRepository : INewsRepository
{
    private readonly INewsRepository newsRepository;

    public NewsRepository(INewsRepository newsRepository)
    {
        this.newsRepository = newsRepository;
    }

    public IEnumerable<News> FindAll()
    {
        return null;
    }
}
should become:
public class NewsRepository : INewsRepository
{
    public IEnumerable<News> FindAll()
    {
        return null;
    }
}
Once you have functionality in your FindAll() method that needs testing, you can mock the objects that they use.
As a point of style from the great Art of Unit Testing, initialisation of mock objects is best left out of the SetUp method and carried out in a helper method called at the start of each test; the call to SetUp is invisible, which makes the initialisation of the mock unclear.
As another point of style, from that book, a suggested unit test naming convention is: "MethodUnderTest_Scenario_ExpectedBehavior".
So,
FindAll_should_return_correct_news
could become, for example:
FindAll_AfterAddingTwoNewsItems_ReturnsACollectionWithCountOf2
I hope this makes the approach clearer.
Your FindAll_should_return_correct_news test method is not testing the repository; it is testing itself. You can see this when you simplify it to what it really does:
[Test]
public void FindAll_should_return_correct_news()
{
    // Arrange
    List<News> newsList = new List<News>();
    newsList.Add(new News { Id = 1, Title = "Test Title 1" });
    newsList.Add(new News { Id = 2, Title = "Test Title 2" });

    // Act
    var actual = newsList;

    // Assert
    Assert.AreEqual(2, actual.Count());
}
As you can see, what you're basically doing is creating a list, filling it, and testing whether it actually contains the number of records you put in it.
When your repository does nothing other than database interaction (no application logic), there is nothing to test with a unit test. You can solve this problem by writing integration tests for the repositories. What you basically do in such an integration test is insert some records into a test database (use a real database, though, not an in-memory one) and then call the real repository class to see if it fetches the expected records from the test database. Everything should be executed within a transaction and rolled back at the end of the test (this keeps the tests trustworthy), as in the sketch below.
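A minimal sketch of such an integration test, using a TransactionScope that is never completed so everything rolls back on dispose (the row-insertion helper is hypothetical):

using System.Linq;
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class NewsRepositoryIntegrationTests
{
    [Test]
    public void FindAll_returns_the_records_inserted_for_this_test()
    {
        using (new TransactionScope())   // disposed without Complete(), so the inserts roll back
        {
            // Hypothetical helper that writes directly to the test database.
            InsertNewsRow(id: 1, title: "Test Title 1");
            InsertNewsRow(id: 2, title: "Test Title 2");

            var repository = new NewsRepository();
            var actual = repository.FindAll();

            Assert.AreEqual(2, actual.Count());
        }
    }
}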
When you're using an O/RM tool that allows you to write LINQ queries, you could also try a different approach: you can fake your LINQ provider, as shown in this article.
You might also want to read over this post by Ayende.
