How to combine AutoDataAttribute with InlineData - C#

I heavily use the AutoFixture AutoData theories for creating my data and mocks. However, this prevents me from using the InlineData attributes from xUnit to pipe in a bunch of different data for my tests.
So I am basically looking for something like this:
[Theory, AutoMoqDataAttribute]
[InlineData(3,4)]
[InlineData(33,44)]
[InlineData(13,14)]
public void SomeUnitTest([Frozen]Mock<ISomeInterface> theInterface, MySut sut, int DataFrom, int OtherData)
{
// actual test omitted
}
Is something like this possible?

You'll have to create your own InlineAutoMoqDataAttribute, similar to this:
public class InlineAutoMoqDataAttribute : InlineAutoDataAttribute
{
public InlineAutoMoqDataAttribute(params object[] objects) : base(new AutoMoqDataAttribute(), objects) { }
}
and you'd use it like this:
[Theory]
[InlineAutoMoqData(3,4)]
[InlineAutoMoqData(33,44)]
[InlineAutoMoqData(13,14)]
public void SomeUnitTest(int DataFrom, int OtherData, [Frozen]Mock<ISomeInterface> theInterface, MySut sut)
{
// actual test omitted
}
Note that the inlined data, the ints in this case, must be the first parameters of the test method.
All the other parameters will be provided by AutoFixture.
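For reference, the AutoMoqDataAttribute used above is not shown in the question; a typical definition is sketched below, assuming AutoFixture 4.x with the AutoFixture.Xunit2 and AutoFixture.AutoMoq packages (in AutoFixture 3.x the base constructor took a fixture instance rather than a factory):
using AutoFixture;
using AutoFixture.AutoMoq;
using AutoFixture.Xunit2;

public class AutoMoqDataAttribute : AutoDataAttribute
{
    // Builds a Fixture with AutoMoqCustomization so interface parameters
    // (including [Frozen] Mock<T> instances) are created by Moq.
    public AutoMoqDataAttribute()
        : base(() => new Fixture().Customize(new AutoMoqCustomization()))
    {
    }
}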

With the latest AutoFixture, you can use the built-in Inline AutoData theories: the inline values are used for the first method arguments, and AutoData supplies the rest (once the inline values run out).
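For example, the built-in attribute can be used directly (a sketch; InlineAutoData comes from the AutoFixture.Xunit2 package, and the string parameter is just a stand-in for whatever AutoFixture should supply):
[Theory]
[InlineAutoData(3, 4)]
[InlineAutoData(33, 44)]
public void SomeUnitTest(int dataFrom, int otherData, string suppliedByAutoFixture)
{
    // dataFrom and otherData come from the inline values;
    // suppliedByAutoFixture is generated by AutoFixture.
}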

Related

How can I run test methods dynamically at runtime based on values from an external file with NUnit?

I have created a TestFixture class with two test methods.
[TestFixture]
class SomeTests
{
[Test]
public void OpenScreen()
{
//Do something
}
[Test]
public void TestElement()
{
//Do something
}
}
My requirement is to run these tests based on inputs from an external file which looks like:
Test Value
Screen "Scr1"
Element "Ele1"
Element "Ele2"
Screen "Scr2"
Element "Ele3"
I am able to pass values to these test methods using:
[Test]
[TestCaseSource("GetTestValues")]
public void OpenScreen(string value)
{
//Do something
}
But I don't know how to run these tests in the exact order in which the values appear in the file. How can I achieve this?
Current order:
OpenScreen("Scr1")
OpenScreen("Scr2")
TestElement("Ele1")
TestElement("Ele2")
TestElement("Ele3")
Expected order:
OpenScreen("Scr1")
TestElement("Ele1")
TestElement("Ele2")
OpenScreen("Scr2")
TestElement("Ele3")
Edit: I'm using this for functional tests of some screens, driven with Selenium.
The external file is a JSON string containing these values, and GetTestValues simply deserializes the JSON and returns the values.
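For context, the source might look roughly like the following sketch (the file name, the JSON shape, and the use of Json.NET are assumptions, since the question does not show GetTestValues):
// Members of the test fixture; assumes using System.Collections.Generic,
// System.IO, System.Linq and Newtonsoft.Json.
private class TestEntry
{
    public string Test { get; set; }
    public string Value { get; set; }
}

private static IEnumerable<string> GetTestValues()
{
    // "testdata.json" is a placeholder for the real file path.
    var json = File.ReadAllText("testdata.json");
    var entries = JsonConvert.DeserializeObject<List<TestEntry>>(json);
    return entries.Where(e => e.Test == "Screen").Select(e => e.Value);
}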
Unfortunately, those test methods will be run by the NUnit runner and you won't be able to change the order in which they will run.
It seems what you are trying to do is to create some sort of acceptance test. What you can do is create a test method that wraps the sequence of steps and leave your OpenScreen and TestElement methods as simple helper methods:
[TestFixture]
class SomeTests
{
[Test]
public void TestInteraction() {
OpenScreen("Scr1")
TestElement("Ele1")
TestElement("Ele2")
OpenScreen("Scr2")
TestElement("Ele3")
}
private void OpenScreen(String arg)
{
//Do something
}
private void TestElement(String arg)
{
//Do something
}
}
There is also the concept of "page objects", where you write methods that represent the actions you would perform on a screen or page. The OpenScreen and TestElement methods could become part of such an object.
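A minimal page-object-style sketch (the class and method names are hypothetical, and the Selenium calls are indicative only):
using OpenQA.Selenium;

// Wraps the Selenium interactions for one screen so tests read as a sequence of actions.
public class ScreenPage
{
    private readonly IWebDriver _driver;

    public ScreenPage(IWebDriver driver)
    {
        _driver = driver;
    }

    public void Open(string screenName)
    {
        // Navigate to the screen, e.g. _driver.Navigate().GoToUrl(...).
    }

    public void CheckElement(string elementName)
    {
        // Locate the element and assert on it, e.g. _driver.FindElement(By.Id(elementName)).
    }
}
TestInteraction would then create a ScreenPage and call Open and CheckElement in whatever order the file dictates.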

NUnit extending ICommandWrapper: how do I wrap a TestCase?

I tried implementing ICommandWrapper, following this article: https://www.skyrise.tech/blog/tech/extending-nunit-3-with-command-wrappers/. I figured out that I can also extend TestAttribute and it just works; then I tried extending TestCaseAttribute:
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class MyTestCaseAttribute : TestCaseAttribute, IWrapSetUpTearDown
{
private object[] _args;
public MyTestCaseAttribute(params object[] args) : base(args)
{
_args = args;
}
public TestCommand Wrap(TestCommand command)
{
return new MyTestCommand(command, _args);
}
}
MyTestCommand extends DelegatingTestCommand, just like in the article.
The problem is, if I add multiple MyTestCaseAttributes to a test method, the test method gets wrapped by MyTestCommand.Execute's code multiple times.
[EDIT] Example:
Suppose MyTestCommand looks like this:
public class MyTestCommand : DelegatingTestCommand
{
private object[] _testCaseArgs;
public MyTestCommand(TestCommand innerCommand, params object[] args) : base(innerCommand)
{
_testCaseArgs = args;
}
public override TestResult Execute(TestExecutionContext context)
{
DoSomething(_testCaseArgs);
return context.CurrentResult = innerCommand.Execute(context);
}
}
Suppose I decorate a test method with two [MyTestCase] attributes:
[MyTestCase(1)]
[MyTestCase(2)]
public void MyTest(int foo)
{
//...
}
The desired behaviour is something like:
DoSomething(1);
MyTest(1);
DoSomething(2);
MyTest(2);
But actual behaviour is:
DoSomething(2);
DoSomething(1);
MyTest(1);
DoSomething(2);
DoSomething(1);
MyTest(2);
The key to your problem is this... C# allows you to decorate a method or a class with an attribute. But an individual test case doesn't exist outside of NUnit - there is no C# equivalent - so you can't decorate it.
In other words, your two attributes apply to the method and cause NUnit to use that method to generate two test cases. However, your attributes also implement ICommandWrapper, which causes NUnit to wrap every test case the method generates. One part of NUnit is looking for test cases to create; another part is looking for attributes to wrap test cases. Those two parts are entirely separate.
That's why NUnit uses properties on the TestCase attribute itself to indicate things like ignoring an individual case. It can't use a separate attribute, because an attribute would apply to every test case generated by that method.
Hopefully, that explains what's happening.
To get past the problem, your command wrapper should only apply itself to a test that was generated by that particular instance of the attribute. That means you have to get involved in the creation of the test, at least to the extent that your attribute remembers the reference to the test it created. This is a bit complicated, but you should look at the code for TestCaseAttribute to see how the test case is created.
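One way to follow that suggestion, sketched against NUnit 3.x internals (ITestBuilder, DelegatingTestCommand), is to re-implement BuildFrom so that each attribute instance remembers the test cases it built and only wraps those; treat this as an untested starting point:
using System;
using System.Collections.Generic;
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;
using NUnit.Framework.Internal.Commands;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class MyTestCaseAttribute : TestCaseAttribute, ITestBuilder, IWrapSetUpTearDown
{
    private readonly object[] _args;
    private readonly List<Test> _builtTests = new List<Test>();

    public MyTestCaseAttribute(params object[] args) : base(args)
    {
        _args = args;
    }

    // Re-implement ITestBuilder so this attribute instance records the cases it creates.
    public new IEnumerable<TestMethod> BuildFrom(IMethodInfo method, Test suite)
    {
        foreach (var testMethod in base.BuildFrom(method, suite))
        {
            _builtTests.Add(testMethod);
            yield return testMethod;
        }
    }

    // Wrap only the commands that belong to cases created by this instance.
    public TestCommand Wrap(TestCommand command)
    {
        return _builtTests.Contains(command.Test)
            ? new MyTestCommand(command, _args)
            : command;
    }
}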
Figured it out.
Instead of extending TestCaseAttribute, I can extend TestAttribute and obtain the arguments to pass to the wrapper class from standard TestCaseAttributes using command.Test.Arguments.
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class MyTestAttribute : TestAttribute, IWrapSetUpTearDown
{
public TestCommand Wrap(TestCommand command)
{
return new MyTestCommand(command, command.Test.Arguments);
}
}
[TestCase(1)]
[TestCase(2)]
[MyTest]
public void MyTest(int foo)
{
//...
}

How to run the same nunit tests with different preconditions? (fixtures)

I have a set of tests that I have to run with two different SetUps in the base class.
How can I improve this?
Create a single, parameterized test fixture. Pass information about which setup (probably a OneTimeSetUp) should be used into each instance of the fixture. The information has to be constant values, like strings, so that it can be used as arguments to the attribute.
For example...
[TestFixture("setup1", 5)]
[TestFixture("setup2", 9)]
public class MyTestFixture
{
public MyTestFixture(string setup, int counter)
{
// You can save the arguments, or do something
// with them and save the result in instance members
}
[Test]
public void SomeTest()
{
// Do what you need to do, using the arguments
}
}
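For instance, the constructor arguments can select which setup runs in a OneTimeSetUp (a sketch; Setup1 and Setup2 stand in for whatever your two base-class SetUps currently do):
using NUnit.Framework;

[TestFixture("setup1", 5)]
[TestFixture("setup2", 9)]
public class MyTestFixture
{
    private readonly string _setup;
    private readonly int _counter;

    public MyTestFixture(string setup, int counter)
    {
        _setup = setup;
        _counter = counter;
    }

    [OneTimeSetUp]
    public void FixtureSetUp()
    {
        // Choose the precondition based on the constant passed to the fixture.
        if (_setup == "setup1")
            Setup1(_counter);
        else
            Setup2(_counter);
    }

    [Test]
    public void SomeTest()
    {
        // Runs once per TestFixture attribute, against whichever setup ran.
    }

    private void Setup1(int counter) { /* first precondition */ }
    private void Setup2(int counter) { /* second precondition */ }
}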

How to use multiple TestCaseSource attributes for an NUnit test

How do you use multiple TestCaseSource attributes to supply test data to a test in NUnit 2.6.2?
I'm currently doing the following:
[Test, Combinatorial, TestCaseSource(typeof(FooFactory), "GetFoo"), TestCaseSource(typeof(BarFactory), "GetBar")]
public void FooBar(Foo x, Bar y)
{
//Some test runs here.
}
And my test case data sources look like this:
internal sealed class FooFactory
{
public IEnumerable<Foo> GetFoo()
{
//Gets some foos.
}
}
internal sealed class BarFactory
{
public IEnumerable<Bar> GetBar()
{
//Gets some bars.
}
}
Unfortunately, NUnit won't even kick off the test, since it says I'm supplying the wrong number of arguments. I know I could return TestCaseData (with an object array of arguments) from a single source instead, but I thought that this approach was possible.
Can you help me resolve this?
The appropriate attribute to use in this situation is ValueSource. Essentially, you are specifying a data-source for every argument, like so.
public void TestQuoteSubmission(
[ValueSource(typeof(FooFactory), "GetFoo")] Foo x,
[ValueSource(typeof(BarFactory), "GetBar")] Bar y)
{
// Your test here.
}
This enables the kind of functionality I was originally looking for from the TestCaseSource attribute.
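For completeness, the single-source alternative mentioned in the question looks something like the following sketch, which packs each Foo/Bar pairing into an NUnit TestCaseData (FooBarFactory is a made-up name):
using System.Collections.Generic;
using NUnit.Framework;

internal static class FooBarFactory
{
    // Yields one test case per Foo/Bar pairing, carrying both arguments together.
    public static IEnumerable<TestCaseData> GetFooBarPairs()
    {
        foreach (var foo in new FooFactory().GetFoo())
            foreach (var bar in new BarFactory().GetBar())
                yield return new TestCaseData(foo, bar);
    }
}

[Test, TestCaseSource(typeof(FooBarFactory), "GetFooBarPairs")]
public void FooBar(Foo x, Bar y)
{
    //Some test runs here.
}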

Using the same test suite on various implementations of a repository interface

I have been making a little toy web application in C# along the lines of Rob Connery's ASP.NET MVC Storefront.
I find that I have a repository interface, call it IFooRepository, with methods, say
IQueryable<Foo> GetFoo();
void PersistFoo(Foo foo);
And I have three implementations of this: ISqlFooRepository, IFileFooRepository, and IMockFooRepository.
I also have some test cases. What I would like to do, and haven't worked out how to do yet, is to run the same test cases against each of these three implementations, and have a green tick for each test pass on each interface type.
e.g.
[TestMethod]
public void GetFoo_NotNull_Test()
{
IFooRepository repository = GetRepository();
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
I want this test method to be run three times, with some variation in the environment that lets it get the three different kinds of repository. At present I have three cut-and-pasted test classes that differ only in the implementation of the private helper method IFooRepository GetRepository(). Obviously, this is smelly.
However, I cannot just remove the duplication by consolidating the cut-and-pasted methods, since they need to be present, public and marked as tests for the tests to run.
I am using the Microsoft testing framework, and would prefer to stay with it if I can. But a suggestion of how to do this in, say, MBUnit would also be of some interest.
Create an abstract class that contains concrete versions of the tests and an abstract GetRepository method which returns IFooRepository.
Create three classes that derive from the abstract class, each of which implements GetRepository in a way that returns the appropriate IFooRepository implementation.
Add all three classes to your test suite, and you're ready to go.
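A sketch of that layout with the Microsoft testing framework, reusing the test from the question and the three implementation class names as given (only the derived classes are per-implementation code):
using Microsoft.VisualStudio.TestTools.UnitTesting;

public abstract class FooRepositoryTests
{
    // Each derived class decides which implementation to test.
    protected abstract IFooRepository GetRepository();

    [TestMethod]
    public void GetFoo_NotNull_Test()
    {
        IFooRepository repository = GetRepository();
        var results = repository.GetFoo();
        Assert.IsNotNull(results);
    }
}

[TestClass]
public class SqlFooRepositoryTests : FooRepositoryTests
{
    protected override IFooRepository GetRepository()
    {
        return new ISqlFooRepository(); // the SQL implementation from the question
    }
}

[TestClass]
public class FileFooRepositoryTests : FooRepositoryTests
{
    protected override IFooRepository GetRepository()
    {
        return new IFileFooRepository(); // the file-based implementation
    }
}

[TestClass]
public class MockFooRepositoryTests : FooRepositoryTests
{
    protected override IFooRepository GetRepository()
    {
        return new IMockFooRepository(); // the mock implementation
    }
}
The base-class sample further down in this thread uses the same idea: inherited [TestMethod]s are discovered through each derived [TestClass].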
To be able to selectively run the tests for some providers and not others, consider using the MbUnit '[FixtureCategory]' attribute to categorise your tests - suggested categories are 'quick' 'slow' 'db' 'important' and 'unimportant' (The last two are jokes - honest!)
In MbUnit, you might be able to use the RowTest attribute to specify parameters on your test (bear in mind that attribute arguments have to be compile-time constants, which is why the summary below switches to an enum).
[RowTest]
[Row(new ThisRepository())]
[Row(new ThatRepository())]
public void GetFoo_NotNull_Test(IFooRepository repository)
{
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
If you have your 3 copy and pasted test methods, you should be able to refactor (extract method) it to get rid of the duplication.
i.e. this is what I had in mind:
private IRepository GetRepository(RepositoryType repositoryType)
{
switch (repositoryType)
{
case RepositoryType.Sql:
// return a SQL repository
case RepositoryType.Mock:
// return a mock repository
// etc
}
}
private void TestGetFooNotNull(RepositoryType repositoryType)
{
IFooRepository repository = GetRepository(repositoryType);
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
[TestMethod]
public void GetFoo_NotNull_Sql()
{
this.TestGetFooNotNull(RepositoryType.Sql);
}
[TestMethod]
public void GetFoo_NotNull_File()
{
this.TestGetFooNotNull(RepositoryType.File);
}
[TestMethod]
public void GetFoo_NotNull_Mock()
{
this.TestGetFooNotNull(RepositoryType.Mock);
}
[TestMethod]
public void GetFoo_NotNull_Test_ForFile()
{
GetFoo_NotNull(new FileRepository().GetRepository());
}
[TestMethod]
public void GetFoo_NotNull_Test_ForSql()
{
GetFoo_NotNull(new SqlRepository().GetRepository());
}
private void GetFoo_NotNull(IFooRepository repository)
{
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
To sum up, there are three ways to go:
1) Make the tests one liners that call down to common methods (answer by Rick, also Hallgrim)
2) Use MBUnit's RowTest feature to automate this (answer by Jon Limjap). I would also use an enum here, e.g.
[RowTest]
[Row(RepositoryType.Sql)]
[Row(RepositoryType.Mock)]
public void TestGetFooNotNull(RepositoryType repositoryType)
{
IFooRepository repository = GetRepository(repositoryType);
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
3) Use a base class, answer by belugabob
I have made a sample based on this idea
public abstract class TestBase
{
protected int foo = 0;
[TestMethod]
public void TestUnderTen()
{
Assert.IsTrue(foo < 10);
}
[TestMethod]
public void TestOver2()
{
Assert.IsTrue(foo > 2);
}
}
[TestClass]
public class TestA: TestBase
{
public TestA()
{
foo = 4;
}
}
[TestClass]
public class TestB: TestBase
{
public TestB()
{
foo = 6;
}
}
This produces four passing tests in two test classes.
Upsides of 3 are:
1) Least extra code, least maintenance
2) Least typing to plug in a new repository if need be - it would be done in one place, unlike the others.
Downsides are:
1) Less flexibility to not run a test against a provider if need be
2) Harder to read.
