How to properly reuse test code in MSTest - c#

We've started to introduce some behavior tests that treat some of our software modules as complete black boxes.
This test suite was written using inheritance from base test class for easier organization.
Now we'd like to reuse this test suite for the testing of another interface-compatible module.
The solution we were able to find was to inherit the test class and implement another constructor.
I'd like to confirm that there's no better option, because writing a duplicate inherited class for each test suite class seems wrong.
[TestClass]
public class RouModel_Basic_RunnerBasic : ROUModelTest_Basic
{
    public RouModel_Basic_RunnerBasic() : base()
    {
        // init basic model here
        model = basicModel;
    }
}

[TestClass]
public class RouModel_Basic_RunnerOther : ROUModelTest_Basic
{
    public RouModel_Basic_RunnerOther() : base()
    {
        // init other model here
        model = otherModel;
    }
}

public class ROUModelTest_Basic : RouModelTest
{
    [TestMethod]
    public void TestABC()
    {
        string input = "abc";
        var result = model.run(input);
        Assert.AreEqual("123", result);
    }
}

public class RouModelTest
{
    protected IModelTest model;
    ...
}

If you just want to reuse the test code as-is with a different module under test, inheritance is the most straightforward approach: the framework needs a separate test method for each test, and inheritance is the only way to get those without typing them yourself. This shouldn't introduce any duplication, since each subclass only re-implements the parts that actually differ.
If your issue is that you are building your test fixture in the test class constructor, an alternative is to apply the Template Method design pattern to your test methods: add a virtual creation method for the module under test that subclasses override to create instances of the specific module they should test. Alternatively, you could create a test setup method and mark it with the appropriate attribute, as described in this answer.
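A minimal sketch of the Template Method variant, reusing the question's names; `BasicModel` is a hypothetical concrete module type, and the `[TestInitialize]` hook stands in for the constructor-based setup:

```csharp
public abstract class RouModelTest
{
    protected IModelTest model;

    // Template Method hook: each subclass decides which module is under test.
    protected abstract IModelTest CreateModel();

    [TestInitialize]
    public void Init()
    {
        model = CreateModel();
    }
}

[TestClass]
public class RouModel_Basic_RunnerBasic : ROUModelTest_Basic
{
    // BasicModel is hypothetical; substitute the real module type.
    protected override IModelTest CreateModel() => new BasicModel();
}
```

Each runner subclass then shrinks to a single overridden factory method, which is the only part that actually differs between suites.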
That being said, if you really want to keep everything in a single test class, you could implement creation methods for the individual modules under test on your base test class, pass the names of those methods to your test methods, and invoke them using reflection. There is an attribute that allows you to pass arguments to test methods, which is discussed in this answer. However, the feasibility of this approach is just speculation on my part, and you risk making your tests more obscure.
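A sketch of that reflection variant using MSTest v2's `[DataTestMethod]`/`[DataRow]` attributes; the factory methods and model types here are assumptions, not part of the question:

```csharp
[TestClass]
public class RouModelTests
{
    // Hypothetical factory methods whose names are fed to the test below.
    public IModelTest CreateBasicModel() => new BasicModel();
    public IModelTest CreateOtherModel() => new OtherModel();

    [DataTestMethod]
    [DataRow(nameof(CreateBasicModel))]
    [DataRow(nameof(CreateOtherModel))]
    public void TestABC(string factoryName)
    {
        // Resolve and invoke the named factory via reflection.
        var factory = GetType().GetMethod(factoryName);
        var model = (IModelTest)factory.Invoke(this, null);

        Assert.AreEqual("123", model.run("abc"));
    }
}
```

Each `[DataRow]` produces a separate test result per module, but the test report only shows the method name string, which is part of the obscurity trade-off mentioned above.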

Related

C# Unit tests verify on non-abstract method on class instantied by Activator

First, let me introduce my project.
We are developing an app in which the user can work with programs. By programs I mean lists of instructions for a confidential use.
There are different types of Programs, all inheriting from the Program abstract base class.
As the user can create different types of program, we developed a ProgramManager that can instantiate any type of Program from its Type. We never need to instantiate the abstract class, only the concrete classes (and that works), but since all concrete Programs share the same methods (AddNewChannel, Save, ...) we handle them as Programs.
Here's a sample of code:
public Program CreateProgram(Type type)
{
    Program program = Activator.CreateInstance(type) as Program;
    program.Directory = ProgramsPath;
    int nbChannels = 2; // not really 2, but a long line looking up the correct number
    for (int i = 1; i <= nbChannels; i++)
    {
        program.AddNewChannel(i);
    }
    program.Save();
    return program;
}
What I now have to do is test this function, and I don't want to duplicate the unit tests I already wrote for the different Program classes.
As an example, here is one of my test functions (for the Save method) together with its initialization. I store the types to test in an XML file.
[TestInitialize]
public void TestInitialize()
{
    if (!TestContext.TestName.StartsWith("GetKnownTypes"))
        type = UnitTestsInitialization.applicationHMIAssembly.GetType((string)TestContext.DataRow["Data"]);
}

[TestMethod]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
    "|DataDirectory|\\" + DATA_FILE, "Row",
    DataAccessMethod.Sequential)]
public void SavedProgramCreatesFile()
{
    Program program = Activator.CreateInstance(type) as Program;
    program.Name = "SavedProgramCreatesFile";
    program.Directory = DIRECTORY;
    program.Save();

    string savedProgramFileName = program.GetFilePath();
    bool result = File.Exists(savedProgramFileName);

    Assert.IsTrue(result);
}
All my concrete Program classes have been tested separately.
Now I would like to test whether the methods program.AddNewChannel and program.Save are called.
I had a look at Moq, but the first problem is that the method Save is not abstract.
Also, using Activator doesn't allow me to make a Mock<Program>.
I tried the following in a unit test to instantiate the mock and use it like a program:
[TestMethod]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
    "|DataDirectory|\\" + DATA_FILE, "Row",
    DataAccessMethod.Sequential)]
public void CreateProgram_CallsProgramSaveMethod()
{
    Mock<Program> mock = new Mock<Program>();
    mock.Setup(p => p.AddNewChannel(It.IsAny<int>()));

    Program program = pm.CreateProgram(mock.Object.GetType());

    mock.Verify(p => p.Save());
    mock.Verify(p => p.GetFilePath(It.IsAny<string>()));
    mock.Verify(p => p.AddNewChannel(It.IsAny<int>()), Times.Exactly(ProgramManager.NB_MACHINE_CHANNELS));

    Assert.IsNotNull(program);
    program.DeleteFile();
}
Which was inspired by this question: How to mock An Abstract Base Class
And it works until it reaches the line program.AddNewChannel(i); in the for loop. The error is the following:
System.NotImplementedException: 'This is a DynamicProxy2 error: The interceptor attempted to 'Proceed' for method 'Void AddNewChannel(Int32)' which is abstract. When calling an abstract method there is no implementation to 'proceed' to and it is the responsibility of the interceptor to mimic the implementation (set return value, out arguments etc)'
It seems that the setup doesn't work, but I think I understand why. (I try to instantiate a subtype of the proxy, which doesn't implement the Verify method.)
I also tried to put a proxy over my Program class that would implement an interface containing the methods I need, but the problem there is the Activator again.
Can anyone suggest a way of testing those method calls? (Even if I need to change my CreateProgram method.)
I had a look at How to mock non virtual methods? but I am not sure it applies to my problem.
I use MSTests for my unittests.
NOTICE
Everything else works fine. All my other tests pass without trouble, and my code seems to work (tested by hand).
Thanks in advance.
The root cause of the problem is that you're using a type as a parameter, which you then use to create an instance of this type. However, you're passing in the type of an abstract class, which is specifically not made for instantiating. You need to work with the concrete classes directly.
Thereby, I would like to test if the following methods program.AddNewChannel and program.Save are called.
That's not sufficient as a test. You want to test whether these methods work as expected, not just check that they're called and then assume that they work.
What you're describing is a (very rudimentary) integration test, not a unit test.
I don't want to duplicate the unitTests I already made for the different Program classes
This is a very dangerous decision. Part of the idea behind unit testing is that you create separate tests for different (concrete) objects. The tests need to be as segregated as is reasonably possible. You're trying to reuse testing logic, which is a good thing, but it needs to be done in a way that does not compromise your test segregation.
But there are ways to do it without compromising your test segregation. I only have testing experience with NUnit but I assume a similar approach works in other frameworks as well.
Assume the following:
public abstract class Program
{
    public bool BaseMethod() { return false; }
}

public class Foo : Program
{
    public bool CustomFooMethod() { return false; }
}

public class Bar : Program
{
    public bool CustomBarMethod() { return false; }
}
Create an abstract class testing method:
[TestFixture]
[Ignore]
public class ProgramTests
{
    public virtual Program GetConcrete()
    {
        throw new NotImplementedException();
    }

    [Test]
    public void BaseMethodTestReturnsFalse()
    {
        var result = GetConcrete().BaseMethod();
        Assert.IsFalse(result);
    }
}
[Ignore] ensures that the ProgramTests class does not get tested by itself.
Then you inherit from this class, where the concrete classes will be tested:
[TestFixture]
public class FooTests : ProgramTests
{
    private readonly Foo Foo;

    public FooTests()
    {
        this.Foo = new Foo();
    }

    public override Program GetConcrete()
    {
        return this.Foo;
    }

    [Test]
    public void CustomFooMethodTestReturnsFalse()
    {
        var result = this.Foo.CustomFooMethod();
        Assert.IsFalse(result);
    }
}
BarTests is similarly implemented.
NUnit (and presumably other testing frameworks as well) will discover all inherited tests and run them for the derived class. Every class that derives from ProgramTests will therefore always include the BaseMethodTestReturnsFalse test.
This way, your base class' tests are reusable, but each concrete class will still be tested separately. This maintains your test separation, while also preventing you having to copy/paste test logic for every concrete class.
I also noticed this:
Mock<Program> mock = new Mock<Program>();
mock.Setup(p => p.AddNewChannel(It.IsAny<int>()));
Program program = pm.CreateProgram(mock.Object.GetType());
I don't understand the purpose of this code. How is it any different from simply doing:
Program program = pm.CreateProgram(typeof(Program));
As far as I can see, both the mock and its setup are irrelevant since you're only looking at its type and then having CreateProgram() create a new object for you anyway.
Secondly, and this refers back to my example of testing the concrete classes, you shouldn't be testing with Program directly, you should be testing your derived program classes (Foo and Bar).
Again, this is the root cause of the problem: you're passing in the type of an abstract class, which is specifically not made for instantiating, and then creating an instance of that type. You need to work with the concrete classes directly.
Create a wrapper interface and class around Activator, then pass the type to that:
public interface IActivatorWrapper
{
    object CreateInstance(Type type);
}

public class ActivatorWrapper : IActivatorWrapper
{
    public object CreateInstance(Type type)
    {
        return Activator.CreateInstance(type);
    }
}
Use this instead of Activator directly, then mock the IActivatorWrapper to return whatever mock object you want.
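As a sketch of how the wrapper enables the test: this assumes ProgramManager is given a constructor that accepts an IActivatorWrapper (not shown in the question), and that Save and AddNewChannel are made virtual or abstract (or moved to an IProgram interface) so Moq can intercept them:

```csharp
// Hypothetical: ProgramManager must be changed to take the wrapper
// as a dependency, e.g. public ProgramManager(IActivatorWrapper activator).
var mockProgram = new Mock<Program>();
var mockActivator = new Mock<IActivatorWrapper>();

// The wrapper returns our mock instead of a real Activator-created instance.
mockActivator
    .Setup(a => a.CreateInstance(It.IsAny<Type>()))
    .Returns(mockProgram.Object);

var pm = new ProgramManager(mockActivator.Object);
pm.CreateProgram(typeof(Program));

// Now the calls made by CreateProgram can be verified on the mock.
mockProgram.Verify(p => p.Save());
mockProgram.Verify(p => p.AddNewChannel(It.IsAny<int>()),
    Times.Exactly(ProgramManager.NB_MACHINE_CHANNELS));
```

This keeps the test about CreateProgram's behaviour (how it drives the instance it creates) rather than about Activator itself.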
Another idea to help with your problem would be to add an IProgram interface to your abstract Program class, then use that to refer to your concrete Program instances. This might also help you, should you ever want to write a concrete Program with a different base class.

How do I write unit tests for subclass with parameterized base class

Given that I have a class in one assembly called GBase with a constructor that takes 2 parameters and a subclass of GBase (call it GDerived) that takes the same parameters, how do I separate these so that I can unit test the subclass?
In OtherAssembly:
public class GBase
{
    public GBase(ParamType1 param1, ParamType2 param2)
    {
        ...
    }

    protected ParamType1 SomeProperty { get; set; }

    // other stuff
}
In ThisAssembly:
public class GDerived : GBase
{
    public GDerived(ParamType1 param1, ParamType2 param2)
        : base(param1, param2)
    {
        // new code
        SomeProperty = newCalculatedValue;
        // other stuff
    }

    // other stuff
}
The original GBase class is legacy code, as is the general structure of the program -- changing the structure is out of the question due to the codebase size (10k lines plus) - none of which has ever had a unit test written for it until very recently.
So now I want to write a test (using NUnit) for the subclass constructor to verify the correct properties are populated with the correct values. Note the test classes are in the same project as the tested classes.
[TestFixture]
public class GDerivedTests
{
    [Test]
    public void GDerivedConstructor_ValidParams_PropertiesSetCorrectly()
    {
        var newGDerived = new GDerived(parameter1, parameter2);
        Assert.That(newGDerived.SomeProperty, Is.EqualTo(parameter1));
    }
}
This is a very crude rep of what we have to deal with, and there are cases other than setting a property in the base class we need to test. I just don't even know for sure where to start. I have Michael Feathers' book Working Effectively with Legacy Code but it doesn't seem to cover this pervasive "design pattern", used extensively throughout the code we are dealing with. Is it because it's so simple any blinking idjyot should know how to deal with it, or is it because it's a rare case? Somehow I don't think it's either, but I could be wrong...
One possible method I thought of is to extract an interface for the base class and mock the base class constructor - but I'm not sure of the details on how to do that. Note we are all relative newbies at unit testing on the team, no experience to draw on. Not coding newbies, just unit test newbies.
TIA,
Dave
To start with: keep it simple! In your example, the only thing you can test is SomeProperty. Everything else is in the base class, which you seem not to want to test, so a test method named GDerivedConstructor_ValidParams_PropertiesSetCorrectly() makes little sense. Long term, though, it would be wise to have tests for the base class as well.
Tests typically contain three elements known as AAA: Arrange, Act and Assert. So write your test like this:
[Test]
public void GDerivedTestOfSomeProperty()
{
    // arrange
    ParamOfSomeProperty expected = ValueWeAreLookingFor; // this is something that
                                                         // you have in newCalculatedValue
    // act
    GDerived actual = new GDerived(
        AnyValueThatMakesThisTestWork1,  // maybe null?
        AnyValueThatMakesThisTestWork2); // maybe null?

    // assert
    Assert.AreEqual(expected, actual.SomeProperty);
}
That's it for a start. Go from here. You will soon see that you get lots of redundant code so you possibly want to re-engineer that after a while.
Mocking makes sense for testing the base class or when the base class does some weird stuff with the objects that are injected. In this case, pass in mocks instead of real objects. I personally would use a mocking framework that does all the job for you and you can also use this for testing the base class itself. A famous example is moq.
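A hedged sketch of what that could look like with Moq, assuming ParamType1 and ParamType2 are interfaces or have virtual members (otherwise Moq cannot substitute them), and assuming SomeProperty is made accessible to the test:

```csharp
// Moq can only mock interfaces and virtual/abstract members.
var param1 = new Mock<ParamType1>();
var param2 = new Mock<ParamType2>();

// The mocks stand in for the real constructor arguments.
var derived = new GDerived(param1.Object, param2.Object);

// Assert on whatever the constructor is supposed to compute;
// 'expected' stands in for the value you expect in SomeProperty.
Assert.AreEqual(expected, derived.SomeProperty);
```

The point of the mocks is that the legacy base class constructor runs against harmless stand-ins instead of heavyweight real objects.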
On a side note: you'll be better off if you move your test classes into their own project. Testing code should not be released, for various reasons; plus building, testing, and deploying may get easier when they are separated.

Automating Dependency Injection in Unit Testing

I have a folder with assemblies that all contain an implementation of a certain interface (different in each assembly). I have written some unit tests for that interface, and would like to automate the task of running the interface tests on each implementation.
I have a working solution that I don't like:
Write code in the actual test class to load (the assemblies) and instantiate the implementations, store these in a list.
Write each test to loop through the list of implementations, running its assertions on each.
What I want instead is to run all tests on one implementation, then move on to the next to run all tests again, and so on. My thought was to find a way to do something like (programmatically):
Load the assemblies and instantiate the implementations - like before, but not inside the test class.
Create an instance of the test class, injecting the next implementation.
Run the tests.
Move on to the next implementation, repeating the process.
(I realize that I could shuffle files around in the file system: put an assembly in one location, run the test which loads one implementation, then replace the assembly with the next implementation and repeat. However, I would like something less crude, if possible.)
I've been looking at the NUnit test runners (console etc.) for a shortcut, but have found none so far. Does anyone know a way to achieve what I want using NUnit, or any other test suite that can be controlled programmatically? Or maybe there's another way to go about it all that satisfies the "what I want" criteria above?
I ended up using the NUnit SuiteAttribute.
This approach involves creating an "umbrella class", like so:
namespace Validator {
    public class AllTests {
        [Suite]
        public static IEnumerable Suite {
            get {
                var directory = @"[ImplementationAssembliesPath]";
                var suite = new ArrayList();
                // GetInstances is a method responsible for loading the
                // assemblies and instantiating the implementations to be tested.
                foreach (var instance in GetInstances(directory)) {
                    suite.Add(GetResolvedTest(instance));
                }
                return suite;
            }
        }

        // This part is crucial - this is where I get to inject the
        // implementations into the test.
        private static Object GetResolvedTest(ICalculator instance) {
            return new CalculatorTests {Calculator = instance};
        }

        [...]
    }
}
Note that the test class has a property for injecting the implementation I want. I chose property injection because test runners usually dislike anything but default constructors. However, I had to remove the TestFixtureAttribute from the actual test class (omitted here) so as not to confuse the console runner about what to run.
Then I created a simple console application to run the NUnit Console-Runner with the /fixture argument:
namespace TestRunner {
    using System;
    using NUnit.ConsoleRunner;

    internal class Program {
        private static void Main(String[] args) {
            var testDllPath = @"[TestAssemblyPath]/Validator.dll";
            var processArgument = @"/process=Separate";
            var domainArgument = @"/domain=Multiple";
            var runtimeArgument = @"/framework=4.5";
            var shadowArgument = @"/noshadow";
            var fixtureArgument = String.Format(@"/fixture={0}", "[Namespace].AllTests");
            Runner.Main(new[] {
                testDllPath,
                processArgument,
                domainArgument,
                runtimeArgument,
                shadowArgument,
                fixtureArgument
            });
            Console.ReadLine();
        }
    }
}
I would still be interested in hearing your opinion on this, and on alternative solutions.
If you want to test a fixed set of assemblies, you don't have to do fancy stuff like moving assemblies around or scripting test runners.
As with normal classes, you can use inheritance for your unit test classes. I would suggest creating an abstract base class that does the heavy lifting of testing implementations of the interface. For each implementation of the interface you then create a new class which inherits from the base class.
The base class can look like this:
public abstract class BaseMyInterfaceImplementationTest
{
    protected MyInterface ClassUnderTest;

    // Add your tests here with the [Test] attribute:
    [Test]
    public void TestScenario1()
    {
        // do your test on ClassUnderTest
    }
}
And the derived classes like this:
[TestFixture]
public class Implementation1Tests : BaseMyInterfaceImplementationTest
{
    [SetUp]
    public void BaseTestInitialize()
    {
        ClassUnderTest = new Implementation1();
    }
}

Testing if another method on same object was called with testing the targetObject

public class Test {
    GetDataset(RandomBoolean uncertain);
    GetDataset2();
    GetDataset3();
}
where method definitions are
public virtual void GetDataset2() {}
public virtual void GetDataset3() {}

public virtual void GetDataset(RandomBoolean uncertain)
{
    if (uncertain.State) {
        GetDataset2();
    }
    else {
        GetDataset3();
    }
}
//mocking uncertain.State to return true
//ACT
testObject.GetDataset(uncertainMock);
I want to test if GetDataset2() was called internally when I act on testObject.GetDataset();
I am not mocking testObject because it is the object under test, so if I try to do
testObject.AssertWasCalled(x => x.GetDataset2());
it won't work, because testObject is not a mocked object.
I am using Rhino Mocks 3.5; I am definitely missing something here.
What is the best way to achieve this?
The short answer is: you can't. More to the point, you usually don't want to. When you are unit testing a class, you want to make sure that the class does its computation correctly and that it has the correct side effects. You shouldn't test the internals of the class, because that makes the coupling between the real code and the tests too strong. The idea is that you can freely change the implementation of your class and use your tests to make sure it still works correctly. You wouldn't be able to do that if your tests inspected the internal state or flow.
You have 2 options (depending on context)
You can structure your tests in a way that they only look at externally visible behaviour
If (1) is too hard, consider refactoring GetDataset2 into a separate class. Then you would be able to mock it while testing GetDataset method.
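A sketch of option 2, with the branch targets pulled out behind a collaborator interface. IDatasetProvider is a hypothetical name, and the test snippet uses Moq-style syntax for brevity; Rhino Mocks stubs work the same way:

```csharp
// The datasets are fetched through a collaborator instead of
// self-calls, so the collaborator can be mocked in tests.
public interface IDatasetProvider
{
    void GetDataset2();
    void GetDataset3();
}

public class Test
{
    private readonly IDatasetProvider provider;

    public Test(IDatasetProvider provider) { this.provider = provider; }

    public void GetDataset(RandomBoolean uncertain)
    {
        if (uncertain.State)
            provider.GetDataset2();
        else
            provider.GetDataset3();
    }
}

// In the test:
//   var provider = new Mock<IDatasetProvider>();
//   new Test(provider.Object).GetDataset(stateTrueStub);
//   provider.Verify(p => p.GetDataset2());
```

With this shape, the call to GetDataset2 is an interaction with a collaborator, which is exactly what mocking frameworks are built to verify.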
That's generally not how unit testing with mocks works.
You should be concerned with collaborators (which you stub/mock) and with results (state changes in the case of void methods), not with the internal workings of the system under test (calls to collaborators notwithstanding).
That is both because and why you can't make those types of behavioural observations (at least not without changing your classes to accommodate testing by exposing private members or adding state-revealing members, neither of which is a good idea).
Besides using a partial mock via Rhino Mocks, you could also create a class derived from Test that replaces the implementation of GetDataset2() with one that records that it was called. Then check that in your test.
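A minimal sketch of such a hand-rolled test double; this works because GetDataset2() is declared virtual in the question's code (stateTrueStub stands in for a RandomBoolean whose State is true):

```csharp
// Subclass that records the call instead of doing real work.
public class RecordingTest : Test
{
    public bool Dataset2Called;

    public override void GetDataset2()
    {
        Dataset2Called = true;
    }
}

// In the test:
//   var t = new RecordingTest();
//   t.GetDataset(stateTrueStub);
//   Assert.IsTrue(t.Dataset2Called);
```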
It is a code smell that you're doing too much in one class though.
There is some info about partial mocks here. Here are some code snippets on how to do that with RhinoMocks and Moq.
Try this:
using Rhino.Mocks;

public class TestTest {
    [Test]
    public void FooTest()
    {
        // Partial mock: the real GetDataset runs, while the expectation
        // on GetDataset2 is recorded and then verified.
        var mock = MockRepository.GeneratePartialMock<Test>();
        mock.Expect(t => t.GetDataset2());
        mock.GetDataset(uncertainMock); // a RandomBoolean whose State is true
        mock.VerifyAllExpectations();
    }
}

Applying one test to two separate classes

I have two different classes that share a common interface. Although the functionality is the same they work very differently internally. So naturally I want to test them both.
The best example I can come up with: I serialize something to a file; one class serializes it to plaintext, the other to XML. The data should look the same before and after the serialization, regardless of the method used.
What is the best approach to testing both classes the same way? The tests differ only in which class they instantiate. I don't want to copy the entire test, rename it, and change one line.
The tests are currently in JUnit, but I'm going to port them to NUnit anyway, so the code doesn't really matter. I'm more looking for a design pattern to apply to this test suite.
Create a common abstract base test class for the test.
abstract class BaseTest {
    @Test
    public void featureX() {
        Type t = createInstance();
        // do something with t
    }

    abstract Type createInstance();
}

class ConcreteTest extends BaseTest {
    Type createInstance() {
        return /* instantiate concrete type here */;
    }
}
I'd reuse the code either with inheritance or aggregation.
To get the shortest code, I'd move creation of the tested instance into a factory method in, say, an XmlImplementationTest class, and inherit a TextImplementationTest from it:
class XmlImplementationTest extends TestCase
{
    Interface tested = null;

    Interface createTested() { return new XmlImplementation(); }
    ...
    void setUp() { tested = createTested(); }
}

class TextImplementationTest extends XmlImplementationTest
{
    @Override
    Interface createTested() { return new TextImplementation(); }
}
This is not completely correct OO design, as a TextImplementationTest is NOT an XmlImplementationTest. But usually you don't need to care about that.
Or readdress the test method calls to some common utility class. This would involve more code and not show proper test class in test reports, but might be easier to debug.
I tend to avoid any relations between test classes. I like to keep test cases (or classes) as atomic as possible. The benefit of using inheritance here doesn't outweigh the strong coupling you get from it.
I guess it would help if you could share the validation of the results of the two classes (assuming black-box tests). If both classes let you set an output stream, you could validate that, while the classes themselves write to a PrintWriter or FileWriter (or whatever you need in your case).
Furthermore, I would avoid creating files during unit tests, because it might take too much time (plus it might not work on the build machine) and therefore delay your build.
In C#, I'd use a generic helper method to test both cases, something like:
internal static void SerializationTestHelper<T>() where T : IMySerialize, new()
{
    T serialize = new T();
    // do some testing
}

[TestMethod]
public void XmlTest()
{
    SerializationTestHelper<XmlSerialize>();
}

[TestMethod]
public void PlainTextTest()
{
    SerializationTestHelper<PlainTextSerialize>();
}
