I have a folder with assemblies that all contain an implementation of a certain interface (different in each assembly). I have written some unit tests for that interface, and would like to automate the task of running the interface tests on each implementation.
I have a working solution that I don't like:
Write code in the actual test class to load the assemblies and instantiate the implementations, storing these in a list.
Write each test to loop through the list of implementations, running its assertions on each.
What I want instead is to run all tests on one implementation, then move on to the next to run all tests again, and so on. My thought was to find a way to do something like (programmatically):
Load the assemblies and instantiate the implementations - like before, but not inside the test class.
Create an instance of the test class, injecting the next implementation.
Run the tests.
Move on to the next implementation, repeating the process.
(I realize that I could shuffle files around in the file system - put an assembly in one location, run the test which loads that implementation, replace the assembly with the next implementation, and repeat the process. However, I would like something less crude, if possible.)
I've been looking at the NUnit test runners (console etc.) for a shortcut, but found none so far. Does anyone know if there is a way to achieve what I want using NUnit or any other test suite that can be controlled programmatically? Or maybe there's another way to go about it all that will satisfy the "what I want" criteria above?
I ended up using the NUnit SuiteAttribute.
This approach involves creating an "umbrella class", like so:
namespace Validator {
    using System.Collections;

    public class AllTests {
        [Suite]
        public static IEnumerable Suite {
            get {
                var directory = @"[ImplementationAssembliesPath]";
                var suite = new ArrayList();

                // GetInstances is a method responsible for loading the
                // assemblies and instantiating the implementations to be tested.
                foreach (var instance in GetInstances(directory)) {
                    suite.Add(GetResolvedTest(instance));
                }

                return suite;
            }
        }

        // This part is crucial - this is where I get to inject the
        // implementations into the test.
        private static Object GetResolvedTest(ICalculator instance) {
            return new CalculatorTests {Calculator = instance};
        }

        [...]
    }
}
Note that the test class has a property for injecting the implementation I want. I chose property injection because the test runners usually dislike anything other than a default constructor. However, I had to remove the TestFixtureAttribute from the actual test class (omitted here) so the console runner would not get confused about what to run.
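For reference, here is a rough sketch of what GetInstances could look like - the assembly-scanning details are an assumption based on the description above, not my exact code (it needs System.Collections.Generic, System.IO and System.Reflection):

// Hedged sketch: loads every DLL in the given directory and yields one
// instance of each non-abstract type that implements ICalculator.
private static IEnumerable<ICalculator> GetInstances(string directory) {
    foreach (var file in Directory.GetFiles(directory, "*.dll")) {
        var assembly = Assembly.LoadFrom(file);
        foreach (var type in assembly.GetTypes()) {
            if (!type.IsAbstract && typeof(ICalculator).IsAssignableFrom(type)) {
                yield return (ICalculator)Activator.CreateInstance(type);
            }
        }
    }
}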
Then I created a simple console application to run the NUnit Console-Runner with the /fixture argument:
namespace TestRunner {
    using System;
    using NUnit.ConsoleRunner;

    internal class Program {
        private static void Main(String[] args) {
            var testDllPath = @"[TestAssemblyPath]/Validator.dll";
            var processArgument = @"/process=Separate";
            var domainArgument = @"/domain=Multiple";
            var runtimeArgument = @"/framework=4.5";
            var shadowArgument = @"/noshadow";
            var fixtureArgument = String.Format(@"/fixture={0}", "[Namespace].AllTests");

            Runner.Main(new[] {
                testDllPath,
                processArgument,
                domainArgument,
                runtimeArgument,
                shadowArgument,
                fixtureArgument
            });

            Console.ReadLine();
        }
    }
}
I would still be interested in hearing your opinion on this, and on alternative solutions.
If you want to test a fixed set of assemblies, you don't have to do fancy stuff like moving assemblies around or scripting test runners.
Like with normal classes you can use inheritance for your unit test classes. I would suggest that you create an abstract base class which does the heavy lifting for testing implementations of this interface. For each implementation of the interface you can create a new class which inherits from the base class.
The base class can look like this:
public abstract class BaseMyInterfaceImplementationTest
{
    protected MyInterface ClassUnderTest;

    // Add your tests here with the [Test] attribute:
    [Test]
    public void TestScenario1()
    {
        // do your test on ClassUnderTest
    }
}
And the derived classes like this:
[TestFixture]
public class Implementation1Tests : BaseMyInterfaceImplementationTest
{
    [SetUp]
    public void BaseTestInitialize()
    {
        ClassUnderTest = new Implementation1();
    }
}
Related
We've started to introduce some behavior tests that try to test some of our software modules like a complete black box.
This test suite was written using inheritance from base test class for easier organization.
Now we'd like to reuse this test suite for the testing of another interface-compatible module.
The solution we were able to find was to inherit from the test class and implement another constructor.
I'd like to confirm that there's no better option, because writing a duplicate inherited class for each test suite class seems wrong.
[TestClass]
public class RouModel_Basic_RunnerBasic : ROUModelTest_Basic
{
    public RouModel_Basic_RunnerBasic() : base()
    {
        // init basic model here
        model = basicModel;
    }
}

[TestClass]
public class RouModel_Basic_RunnerOther : ROUModelTest_Basic
{
    public RouModel_Basic_RunnerOther() : base()
    {
        // init other model here
        model = otherModel;
    }
}

public class ROUModelTest_Basic : RouModelTest
{
    [TestMethod]
    public void TestABC()
    {
        string input = "abc";
        var result = model.run(input);
        Assert.AreEqual("123", result);
    }
}

public class RouModelTest
{
    protected IModelTest model;
    ...
}
If you just want to reuse the test code as-is but with a different module under test, inheritance seems the most straightforward approach: you need a separate test method for each test, and inheritance is the only way to get those without typing them yourself. This shouldn't introduce any duplication, since you only re-implement the parts that actually differ in each subclass.
If your issue is with the fact that you are building your test fixture in the test case class constructor, an alternative would be to apply the Template Method design pattern to your test methods, and add a virtual creation method for the module under test that subclasses can override to create instances of the specific module you want them to test. Alternatively, you could create a test setup method and mark it with the appropriate attribute, as described in this answer.
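As a rough sketch of that Template Method idea - the class and member names below (RouModelTestBase, CreateModel, BasicModel) are invented for illustration, not taken from your code:

public abstract class RouModelTestBase
{
    protected IModelTest model;

    // Template Method hook: each subclass decides which module gets tested.
    protected abstract IModelTest CreateModel();

    [TestInitialize]
    public void Initialize()
    {
        model = CreateModel();
    }

    // The shared [TestMethod]s (TestABC, ...) live here and run against "model".
}

[TestClass]
public class RouModel_Basic_RunnerBasic : RouModelTestBase
{
    protected override IModelTest CreateModel()
    {
        return new BasicModel(); // hypothetical concrete module
    }
}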
That being said, if you really want to keep them all in the same test case class, you might be able to do so if you implement creation methods for the individual modules under test on your base test case class, and then pass the names of those methods to your test methods and call them using reflection. There should be an attribute that allows you to pass arguments to test methods, which is discussed in this answer. However, the feasibility of this approach is just speculation on my part, and you might run the risk of making your tests more obscure.
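Purely as a speculative sketch of that single-class idea, assuming MSTest V2's DataRow attribute is available (the creation method names are invented, and this is untested):

[TestClass]
public class RouModelCombinedTests
{
    // Hypothetical creation methods, one per module under test.
    private IModelTest CreateBasicModel() { return new BasicModel(); }
    private IModelTest CreateOtherModel() { return new OtherModel(); }

    [DataTestMethod]
    [DataRow(nameof(CreateBasicModel))]
    [DataRow(nameof(CreateOtherModel))]
    public void TestABC(string factoryMethodName)
    {
        // Look up the creation method by name and invoke it via reflection.
        var factory = GetType().GetMethod(factoryMethodName,
            BindingFlags.Instance | BindingFlags.NonPublic);
        var model = (IModelTest)factory.Invoke(this, null);

        Assert.AreEqual("123", model.run("abc"));
    }
}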
First, let me introduce my project.
We are developing an app in which the user can work with programs. By programs I mean lists of instructions for a confidential use.
There are different types of Programs all inheriting from the Program abstract base class.
Since the user can create different types of program, we developed a ProgramManager that can instantiate any type of Program from its Type. We never need to instantiate the abstract class, only the concrete classes (and it works), but since the concrete Programs share the same methods (AddNewChannel, Save, ...) we handle them as Programs.
Here's a sample of code:
public Program CreateProgram(Type type)
{
    Program program = Activator.CreateInstance(type) as Program;
    program.Directory = ProgramsPath;

    int nbChannels = 2; // not 2 but a long line searching the correct number where it is.
    for (int i = 1; i <= nbChannels; i++)
    {
        program.AddNewChannel(i);
    }

    program.Save();
    return program;
}
What I now have to do is test this function, and I don't want to duplicate the unit tests I already made for the different Program classes.
As an example, here is one of my test functions (for the Save method) with its initialization. I store the types I need to test in an XML file.
[TestInitialize]
public void TestInitialize()
{
    if (!TestContext.TestName.StartsWith("GetKnownTypes"))
        type = UnitTestsInitialization.applicationHMIAssembly.GetType((string)TestContext.DataRow["Data"]);
}

[TestMethod]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
    "|DataDirectory|\\" + DATA_FILE, "Row",
    DataAccessMethod.Sequential)]
public void SavedProgramCreatesFile()
{
    Program program = Activator.CreateInstance(type) as Program;
    program.Name = "SavedProgramCreatesFile";
    program.Directory = DIRECTORY;

    program.Save();
    string savedProgramFileName = program.GetFilePath();

    bool result = File.Exists(savedProgramFileName);
    Assert.IsTrue(result);
}
All my concrete Program classes have been tested separately.
Therefore, I would like to test whether the methods program.AddNewChannel and program.Save are called.
I gave a look at Moq but the first problem is that the method Save is not abstract.
Also, using Activator doesn't allow me to make a Mock<Program>.
I tried the following in a unit test in order to try to instantiate the mock and use it like a program:
[TestMethod]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
    "|DataDirectory|\\" + DATA_FILE, "Row",
    DataAccessMethod.Sequential)]
public void CreateProgram_CallsProgramSaveMethod()
{
    Mock<Program> mock = new Mock<Program>();
    mock.Setup(p => p.AddNewChannel(It.IsAny<int>()));

    Program program = pm.CreateProgram(mock.Object.GetType());

    mock.Verify(p => p.Save());
    mock.Verify(p => p.GetFilePath(It.IsAny<string>()));
    mock.Verify(p => p.AddNewChannel(It.IsAny<int>()), Times.Exactly(ProgramManager.NB_MACHINE_CHANNELS));

    Assert.IsNotNull(program);
    program.DeleteFile();
}
Which was inspired by this question: How to mock An Abstract Base Class
And it works until it reaches the line program.AddNewChannel(i); in the for loop. The error is the following:
System.NotImplementedException: 'This is a DynamicProxy2 error: The interceptor attempted to 'Proceed' for method 'Void AddNewChannel(Int32)' which is abstract. When calling an abstract method there is no implementation to 'proceed' to and it is the responsibility of the interceptor to mimic the implementation (set return value, out arguments etc)'
It seems that the setup doesn't work, but I think I understand why: I instantiate a subtype of the proxy, which doesn't implement the verify method.
I also tried to use a proxy over my Program class that would implement an interface containing the methods I needed, but the problem here is the Activator again.
Can anyone suggest a way of testing those method calls? (Even if I need to change my CreateProgram method.)
I had a look here: How to mock non-virtual methods? but I am not sure that is applicable to my problem.
I use MSTest for my unit tests.
NOTICE
Everything else works fine. All my other tests pass without trouble and my code seems to work (tested by hand).
Thanks in advance.
The root cause of the problem is that you're using a type as a parameter, which you then use to create an instance of this type. However, you're passing in the type of an abstract class, which is specifically not made for instantiating. You need to work with the concrete classes directly.
Therefore, I would like to test whether the methods program.AddNewChannel and program.Save are called.
That's not sufficient as a test. You want to test whether these methods work as expected, not just if they're called and then assume that they work.
What you're describing is a (very rudimentary) integration test, not a unit test.
I don't want to duplicate the unit tests I already made for the different Program classes
This is a very dangerous decision. Part of the idea behind unit testing is that you create separate tests for different (concrete) objects. The tests need to be as segregated as is reasonably possible. You're trying to reuse testing logic, which is a good thing, but it needs to be done in a way that it does not compromise your test segregation.
But there are ways to do it without compromising your test segregation. I only have testing experience with NUnit but I assume a similar approach works in other frameworks as well.
Assume the following:
public abstract class Program
{
    public bool BaseMethod() { return false; }
}

public class Foo : Program
{
    public bool CustomFooMethod() { return false; }
}

public class Bar : Program
{
    public bool CustomBarMethod() { return false; }
}
Create a test class for the abstract class:
[TestFixture]
[Ignore]
public class ProgramTests
{
    public virtual Program GetConcrete()
    {
        throw new NotImplementedException();
    }

    [Test]
    public void BaseMethodTestReturnsFalse()
    {
        var result = GetConcrete().BaseMethod();
        Assert.IsFalse(result);
    }
}
[Ignore] ensures that the ProgramTests class does not get tested by itself.
Then you inherit from this class, where the concrete classes will be tested:
[TestFixture]
public class FooTests : ProgramTests
{
    private readonly Foo Foo;

    public FooTests()
    {
        this.Foo = new Foo();
    }

    public override Program GetConcrete()
    {
        return this.Foo;
    }

    [Test]
    public void CustomFooMethodTestReturnsFalse()
    {
        var result = this.Foo.CustomFooMethod();
        Assert.IsFalse(result);
    }
}
BarTests is similarly implemented.
NUnit (and presumably other testing frameworks as well) will discover all inherited tests and will run those tests for the derived class. Every class that derives from ProgramTests will therefore always include the BaseMethodTestReturnsFalse test.
This way, your base class' tests are reusable, but each concrete class will still be tested separately. This maintains your test separation, while also preventing you having to copy/paste test logic for every concrete class.
I also noticed this:
Mock<Program> mock = new Mock<Program>();
mock.Setup(p => p.AddNewChannel(It.IsAny<int>()));
Program program = pm.CreateProgram(mock.Object.GetType());
I don't understand the purpose of this code. How is it any different from simply doing:
Program program = pm.CreateProgram(typeof(Program));
As far as I can see, both the mock and its setup are irrelevant since you're only looking at its type and then having CreateProgram() create a new object for you anyway.
Secondly, and this refers back to my example of testing the concrete classes, you shouldn't be testing with Program directly, you should be testing your derived program classes (Foo and Bar).
As I said at the start, the root cause is that you're passing in the type of an abstract class, which is not made for instantiating; you need to work with the concrete classes directly.
Create a wrapper interface and class around Activator, then pass the type to that:
public interface IActivatorWrapper
{
    object CreateInstance(Type type);
}

public class ActivatorWrapper : IActivatorWrapper
{
    public object CreateInstance(Type type)
    {
        return Activator.CreateInstance(type);
    }
}
Use this instead of Activator directly, then mock the IActivatorWrapper to return whatever mock object you want.
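For example, a hedged sketch of how that could look, assuming ProgramManager can receive the wrapper through its constructor (which is not how the original class is written):

public class ProgramManager
{
    private readonly IActivatorWrapper _activator;

    public ProgramManager(IActivatorWrapper activator)
    {
        _activator = activator;
    }

    public Program CreateProgram(Type type)
    {
        var program = _activator.CreateInstance(type) as Program;
        // ... same Directory / AddNewChannel / Save logic as before ...
        return program;
    }
}

[TestMethod]
public void CreateProgram_UsesTheActivatorWrapper()
{
    var programMock = new Mock<Program>();
    var activatorMock = new Mock<IActivatorWrapper>();
    activatorMock.Setup(a => a.CreateInstance(It.IsAny<Type>()))
                 .Returns(programMock.Object);

    var pm = new ProgramManager(activatorMock.Object);
    var result = pm.CreateProgram(typeof(Program));

    // The manager now works with whatever object the wrapper hands back.
    Assert.AreSame(programMock.Object, result);
}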
Another idea to help with your problem would be to add an IProgram interface to your abstract Program class, then use that to refer to your concrete Program instances. This might also help you, should you ever want to write a concrete Program with a different base class.
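A minimal sketch of that interface idea - the member list is only a guess from the methods used in CreateProgram:

// Hypothetical interface extracted from the abstract Program class.
public interface IProgram
{
    string Directory { get; set; }
    void AddNewChannel(int channelNumber);
    void Save();
}

// The existing abstract class then simply declares that it implements it:
public abstract class Program : IProgram
{
    // existing members unchanged
}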
I have a class with a construct like this:
private static Dictionary<Contract, IPriceHistoryManager> _historyManagers = new Dictionary<Contract, IPriceHistoryManager>();
and lets say 2 methods like:
public void AddSth()
{
    _historyManagers.Add(new Contract(), new PriceHistoryManager());
}

public int CountDic()
{
    return _historyManagers.Count;
}
Problem:
When running unit tests there is no way to "reset" the dictionary, and when I create multiple unit tests with separate instances of the class, "CountDic" gives unpredictable results and I can't test the list entries.
Question:
Is this generally considered a "bad" approach, and if so, how can I do it better / make it more unit-testable?
And if not, how do I best unit-test this?
Thx.
Don't be afraid to expose public operations for testing purposes. Paraphrased from "The Art of Unit Testing" by Roy Osherove: When Toyota builds a car, there are testing points available. When Intel builds a chip, there are testing points available. There are interfaces to the car or chip that exist only for testing. Why don't we do the same for software? Would a ResetHistory() method completely destroy your API?
If that answer is yes, then create the method, but make the method internal. You can then use the assembly InternalsVisibleTo to expose the guts to your unit test library. You have a method available to you created 100% for testing, but there's no change to your public API.
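A minimal sketch of what that could look like (the reset method and the test assembly name are illustrative):

// In the production class - internal, so it is not part of the public API.
internal static void ResetHistoryForTests()
{
    _historyManagers.Clear();
}

// In AssemblyInfo.cs (or any source file) of the production assembly,
// assuming the test project is called "MyProject.Tests":
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProject.Tests")]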
In your example, CountDic isn't unpredictable: it should return one more than before the call to AddSth().
So:
[Test]
public void Test()
{
    var item = new ClassUnderTest();
    int initialCount = item.CountDic();

    item.AddSth();

    int finalCount = item.CountDic();
    Assert.That(finalCount == initialCount + 1);
}
In general, though, testing classes that maintain state can be tricky. Sometimes it's necessary to break out the part of the class that maintains state (in your case, the dictionary) and move it to another class. Then, you can mock that "storage" class and pass it in through a constructor.
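A hedged sketch of that extraction - the interface and class names here are invented for illustration:

// Hypothetical storage abstraction that replaces the static dictionary.
public interface IHistoryManagerStore
{
    void Add(Contract contract, IPriceHistoryManager manager);
    int Count { get; }
}

public class ClassUnderTest
{
    private readonly IHistoryManagerStore _store;

    public ClassUnderTest(IHistoryManagerStore store)
    {
        _store = store;
    }

    public void AddSth()
    {
        _store.Add(new Contract(), new PriceHistoryManager());
    }

    public int CountDic()
    {
        return _store.Count;
    }
}

// Each test can now pass in a fresh (real or mocked) store,
// so no state leaks between tests.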
This is my first question so please be kind! :)
What I am trying to do is write some tests for a manager class that during construction adds many new instances of a single item class to a list. When the UpdateAllItems is called in this manager class the intention is to iterate the list and call Increment on each single item.
The manager class is my code, but the single item class is not so I can't modify it.
I use NUnit as my testing framework and am starting to work with Moq. Because the manager class uses the single item class, I would think I need to use a mock so that I am testing only the manager, not the single item.
How do I write tests for my UpdateAllItems method? (Technically I should be writing the tests first I know).
Here is a some sample code that gives a general idea of what I am working with...
public class SingleItem_CodeCantBeModified
{
    public int CurrentValue { get; private set; }

    public SingleItem_CodeCantBeModified(int startValue)
    {
        CurrentValue = startValue;
    }

    public void Increment()
    {
        CurrentValue++;
    }
}

public class SingleItemManager
{
    List<SingleItem_CodeCantBeModified> items = new List<SingleItem_CodeCantBeModified>();

    public SingleItemManager()
    {
        items.Add(new SingleItem_CodeCantBeModified(100));
        items.Add(new SingleItem_CodeCantBeModified(200));
    }

    public void UpdateAllItems()
    {
        items.ForEach(item => item.Increment());
    }
}
Thanks in advance for all the help!
The simple answer is, you can't. The method that UpdateAllItems calls (Increment()) is non-virtual, so you won't be able to mock it.
Your options, as I see it, are:
Don't test UpdateAllItems at all. Its implementation is trivial, so this is an option to consider (though not ideal).
Create real SingleItem_CodeCantBeModified instances in your test. Purists would say that you no longer have a unit test at this point, but it could still be a useful test.
Add an ISingleItem interface, and a SingleItemAdapter : ISingleItem class that holds a reference to a SingleItem_CodeCantBeModified and forwards the calls (sketched below). Then you can write SingleItemManager to operate on ISingleItems, and you'll be free to pass in mock ISingleItems in your tests. (Depending on how your system is set up, you might even be able to descend from SingleItem_CodeCantBeModified, implement the interface on your descendant, and use those objects instead of writing an adapter.)
That last option gives you the most flexibility, but at the cost of some complexity. Choose the option best suited to what you're trying to accomplish.
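A minimal sketch of that adapter option, using the ISingleItem and SingleItemAdapter names suggested above (neither exists in the original code):

public interface ISingleItem
{
    int CurrentValue { get; }
    void Increment();
}

// Adapter that wraps the class you cannot modify.
public class SingleItemAdapter : ISingleItem
{
    private readonly SingleItem_CodeCantBeModified _inner;

    public SingleItemAdapter(SingleItem_CodeCantBeModified inner)
    {
        _inner = inner;
    }

    public int CurrentValue { get { return _inner.CurrentValue; } }

    public void Increment()
    {
        _inner.Increment();
    }
}

// SingleItemManager would then hold a List<ISingleItem>, so tests can
// add Mock<ISingleItem> objects and verify that Increment() was called.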
Your manager is too dependent on the item class (in its List<SingleItem_CodeCantBeModified>). Can you extract the list population into a separate class so you can mock it? For example:
public SingleItemManager()
{
    items.Add(ItemRepository.Get(100));
    items.Add(ItemRepository.Get(200));
}
Testing (some code omitted):
int incrementCalls = 0;
var itemMock = new Mock<Item>();
itemMock.Setup(item => item.Increment()).Callback(() => incrementCalls++);

var repositoryMock = new Mock<ItemRepository>();
repositoryMock.Setup(r => r.Get(It.IsAny<int>())).Returns(itemMock.Object);

var manager = new SingleItemManager();
manager.UpdateAllItems();

Assert.AreEqual(incrementCalls, 1);
As usual, you can add another level of indirection.
Create a wrapper class around SingleItem_CodeCantBeModified
Make this wrapper implement an IItem interface
Make SingleItemManager depend on IItem instead of SingleItem_CodeCantBeModified
OR
If Increment is a virtual method (I understand it isn't in your sample code, but just in case), use partial mocking.
Instead of hard-coding your concrete items, have SingleItem_CodeCantBeModified implement an interface (or embed it in a wrapper which implements the interface), then pass in a (new) factory that creates these items; a sketch follows below.
In your test you will create a mock of the factory to pass in to your Manager class, then you can monitor what methods are called on that mocked object.
This is more about testing the internals of the system than its byproducts, though. What interface is the Manager implementing? If it's not proving itself externally, what results are you testing for?
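A hedged sketch of that factory approach - IItem and IItemFactory are invented names, and this reworks SingleItemManager rather than keeping it as originally written:

public interface IItem
{
    void Increment();
}

public interface IItemFactory
{
    IItem Create(int startValue);
}

public class SingleItemManager
{
    private readonly List<IItem> items = new List<IItem>();

    public SingleItemManager(IItemFactory factory)
    {
        items.Add(factory.Create(100));
        items.Add(factory.Create(200));
    }

    public void UpdateAllItems()
    {
        items.ForEach(item => item.Increment());
    }
}

// Test: the factory mock hands back item mocks, so the test can verify
// that UpdateAllItems calls Increment on every item the manager created.
var itemMock = new Mock<IItem>();
var factoryMock = new Mock<IItemFactory>();
factoryMock.Setup(f => f.Create(It.IsAny<int>())).Returns(itemMock.Object);

var manager = new SingleItemManager(factoryMock.Object);
manager.UpdateAllItems();

itemMock.Verify(i => i.Increment(), Times.Exactly(2));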
Presently I'm starting to introduce the concept of Mock objects into my Unit Tests. In particular I'm using the Moq framework. However, one of the things I've noticed is that suddenly the classes I'm testing using this framework are showing code coverage of 0%.
Now I understand that since I'm just mocking the class, it's not running the actual class itself... but how do I write these tests and have code coverage return accurate results? Do I have to write one set of tests that uses mocks and one set that instantiates the class directly?
Perhaps I am doing something wrong without realizing it?
Here is an example of me trying to Unit Test a class called "MyClass":
using Moq;
using NUnit.Framework;

namespace MyNameSpace
{
    [TestFixture]
    public class MyClassTests
    {
        [Test]
        public void TestGetSomeString()
        {
            const string EXPECTED_STRING = "Some String!";

            Mock<MyClass> myMock = new Mock<MyClass>();
            myMock.Expect(m => m.GetSomeString()).Returns(EXPECTED_STRING);

            string someString = myMock.Object.GetSomeString();

            Assert.AreEqual(EXPECTED_STRING, someString);
            myMock.VerifyAll();
        }
    }

    public class MyClass
    {
        public virtual string GetSomeString()
        {
            return "Hello World!";
        }
    }
}
Does anyone know what I should be doing differently?
You are not using your mock objects correctly. When you use mock objects you are meant to be testing how your code interacts with other objects, without actually using the real objects. See the code below:
using Moq;
using NUnit.Framework;

namespace MyNameSpace
{
    [TestFixture]
    public class MyClassTests
    {
        [Test]
        public void TestGetSomeString()
        {
            const string EXPECTED_STRING = "Some String!";

            Mock<IDependance> myMock = new Mock<IDependance>();
            myMock.Expect(m => m.GiveMeAString()).Returns(EXPECTED_STRING);

            MyClass myobject = new MyClass();
            string someString = myobject.GetSomeString(myMock.Object);

            Assert.AreEqual(EXPECTED_STRING, someString);
            myMock.VerifyAll();
        }
    }

    public class MyClass
    {
        public virtual string GetSomeString(IDependance objectThatITalkTo)
        {
            return objectThatITalkTo.GiveMeAString();
        }
    }

    public interface IDependance
    {
        string GiveMeAString();
    }
}
It doesn't look like it is doing anything useful when your code is just returning a string without any logic behind it.
The real power comes when your GetSomeString() method contains logic that changes the output string depending on what IDependance.GiveMeAString() returns; then you can see how your method handles bad data coming from the IDependance interface.
Something like:
public virtual string GetSomeString(IDependance objectThatITalkTo)
{
    if (objectThatITalkTo.GiveMeAString() == "Hello World")
        return "Hi";

    return null;
}
Now if you have this line in your test:
myMock.Expect(m => m.GiveMeAString()).Returns(null);
What will happen to your GetSomeString() method?
The big mistake is mocking the System Under Test (SUT) itself; then you are testing something else entirely. You should mock only the SUT's dependencies.
I would recommend staying away from mocking frameworks until you understand the interactions that are going on here.
IMO it's better to learn with manually created test doubles, then graduate to a mocking framework afterwards. My reasoning:
Mocking frameworks abstract away what's actually happening; it's easier to grasp the interactions if you have to create your dependencies explicitly, then follow the tests in the debugger.
It's easy to misuse frameworks. If you roll your own when you're learning, you are more likely to understand the differences between the different types of test doubles. If you go straight to a mocking framework, it's easy to use mocks when you wanted stubs and vice versa -- there is a big difference.
Think of it this way: The class under test is the focus. You create an instance of it, call its methods and then assert that the result is correct. If the class under test has dependencies (e.g. something is required in the constructor), you satisfy those dependencies using either A: real classes or B: test doubles.
The reason we use test doubles is that it isolates the class under test, meaning that you can exercise its code in a more controlled fashion.
E.g. if you have a class that contains a network object, you cannot test the owning class's error handling routines that detect dead connections if you're forced to use a concrete network connection object. Instead, you inject a fake connection object and tell it to throw an exception when its "SendBytes" method is called.
I.e. In each test, the dependencies of the class under test are created specifically to exercise a particular piece of code.
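For instance, here is a minimal sketch of the fake-connection idea above, using Moq; the IConnection interface, SendBytes signature and MessageSender class are invented for illustration:

using System.IO;
using Moq;
using NUnit.Framework;

// Hypothetical dependency of the class under test.
public interface IConnection
{
    void SendBytes(byte[] data);
}

public class MessageSender
{
    private readonly IConnection _connection;

    public MessageSender(IConnection connection)
    {
        _connection = connection;
    }

    public bool TrySend(byte[] data)
    {
        try
        {
            _connection.SendBytes(data);
            return true;
        }
        catch (IOException)
        {
            return false; // the error-handling path we want to exercise
        }
    }
}

[TestFixture]
public class MessageSenderTests
{
    [Test]
    public void TrySend_ReturnsFalse_WhenConnectionIsDead()
    {
        var connectionMock = new Mock<IConnection>();
        connectionMock.Setup(c => c.SendBytes(It.IsAny<byte[]>()))
                      .Throws(new IOException("dead connection"));

        var sender = new MessageSender(connectionMock.Object);

        Assert.IsFalse(sender.TrySend(new byte[] { 1, 2, 3 }));
    }
}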