Consequences of using Virtual keyword on all methods in a class? - c#

I am new to TDD and I am using Moq as my mocking framework.
I am trying to check if a method has been called in my class.
The class does not implement any interface.
var mockFooSaverService = new Mock<FooSaverService>();
mockFooSaverService.Verify(service => service.Save(mockNewFoo.Object));
To make this work, I found here that I have to mark the Save() method as virtual.
Question:
What are the consequences of using Virtual keyword for all methods in a class just for the sake of making it testable?
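For context: Moq mocks a concrete class by generating a subclass of it at runtime, so it can only intercept members it is allowed to override. A minimal sketch of what the Verify call above therefore forces on the class (the method body is assumed, not the actual code):

public class FooSaverService
{
    // virtual solely so that Moq's generated subclass can intercept the call
    public virtual void Save(Foo foo)
    {
        // real persistence logic here
    }
}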

TL;DR
As per the comments, the need for the virtual keyword indicates that your class hierarchy is too tightly coupled; you should apply SOLID principles to decouple the classes from each other. This has the "happy" side effect of making your class hierarchy easier to unit test, as dependencies can be mocked via interface abstractions.
In more detail
The need to make all public methods virtual just so Moq can override them is frequently indicative of a separation-of-concerns or class-coupling smell.
e.g. this scenario needed virtual methods because the class under test had multiple concerns, and there was a need to mock one method while actually invoking another method on the same system under test.
As per #JonSkeet's comment, it is common SOLID best practice to abstract dependencies as interfaces. As it stands, your class under test (may I call it "Controller"?) is dependent on the concrete FooSaverService to save Foos.
By applying the Dependency Inversion Principle, this coupling can be loosened by abstracting just the externally useful methods, properties and events of FooSaverService into an interface (IFooSaverService), and then:
FooSaverService implements IFooSaverService
Controller depends only on IFooSaverService
(Obviously, there are likely other optimizations, e.g. to make the IFooSaverService generic, e.g. ISaverService<Foo> but not in scope here)
Re: Mock<Foo> - it is fairly uncommon to need to mock simple data storage classes (POCOs, entities, DTOs, etc.), since these typically just retain the data stored in them and can be reasoned over directly in unit tests.
To answer your question about the implications of virtual (which is hopefully less relevant now):
You are breaking the (polymorphic) Open/Closed Principle - it invites others to override behaviour without the class having been deliberately designed for this, so there may be unintended consequences.
As per Henk's comment, there will be a small performance impact in administering the virtual method table
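To illustrate that first point with a hedged sketch (CustomFooSaverService is hypothetical, and this assumes Save had been made virtual as the question proposes):

// Nothing stops a consumer from silently replacing behaviour the base
// class never designed for once Save is virtual "just for testing":
public class CustomFooSaverService : FooSaverService
{
    public override void Save(Foo foo)
    {
        // forgets to call base.Save(foo) - Foos silently stop being persisted
    }
}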
A code example
If you put all this together, you'll wind up with a class hierarchy like so:
// Foo is assumed to be an entity / POCO
public class Foo
{
    public string Name { get; set; }
    public DateTime ExpiryDate { get; set; }
}

// Decouple the Saver Service dependency via an interface
public interface IFooSaverService
{
    void Save(Foo aFoo);
}

// Implementation
public class FooSaverService : IFooSaverService
{
    public void Save(Foo aFoo)
    {
        // Persist this via ORM, Web Service, or ADO etc.
    }

    // Other non-public methods here are implementation detail and not relevant to consumers
}

// Class consuming the FooSaverService
public class FooController
{
    private readonly IFooSaverService _fooSaverService;

    // You'll typically use dependency injection here to provide the dependency
    public FooController(IFooSaverService fooSaverService)
    {
        _fooSaverService = fooSaverService;
    }

    public void PersistTheFoo(Foo fooToBeSaved)
    {
        if (fooToBeSaved == null) throw new ArgumentNullException("fooToBeSaved");
        if (fooToBeSaved.ExpiryDate.Year > 2015)
        {
            _fooSaverService.Save(fooToBeSaved);
        }
    }
}
And then you'll be able to test your class which has the IFooSaverService dependency as follows:
[TestFixture]
public class FooControllerTests
{
    [Test]
    public void PersistingNullFooMustThrow()
    {
        var systemUnderTest = new FooController(new Mock<IFooSaverService>().Object);
        Assert.Throws<ArgumentNullException>(() => systemUnderTest.PersistTheFoo(null));
    }

    [Test]
    public void EnsureOldFoosAreNotSaved()
    {
        var mockFooSaver = new Mock<IFooSaverService>();
        var systemUnderTest = new FooController(mockFooSaver.Object);
        systemUnderTest.PersistTheFoo(new Foo { Name = "Old Foo", ExpiryDate = new DateTime(1999, 1, 1) });
        mockFooSaver.Verify(m => m.Save(It.IsAny<Foo>()), Times.Never);
    }

    [Test]
    public void EnsureNewFoosAreSaved()
    {
        var mockFooSaver = new Mock<IFooSaverService>();
        var systemUnderTest = new FooController(mockFooSaver.Object);
        systemUnderTest.PersistTheFoo(new Foo { Name = "New Foo", ExpiryDate = new DateTime(2038, 1, 1) });
        mockFooSaver.Verify(m => m.Save(It.IsAny<Foo>()), Times.Once);
    }
}

TL;DR
Another good answer: making classes extendable is a "feature" of that class, and providing virtual methods (i.e. the possibility to override them) is part of that feature. Like any other feature, it needs to be supported and tested.
A much better explanation can be read on Eric Lippert's blog.

Related

Proper use of internal class when testing with generic base test class

TL;DR
I can't seem to get InternalsVisibleTo to work with my Unit Tests
Background
I'm currently developing a library where I'd like to make some (but not all) classes internal to avoid confusing the users. Only SOME of the classes should be public from that DLL.
I figured this would be a good project to learn how to deal with the internal keyword in C#.
Whenever I make a new project, I find myself using a variant of DDD, where I'll split up responsibilities into different DLLs, but for the sake of this question, think of my project structure like this (from top to bottom):
The executable using my library
The library that I'm developing
A unit test library for unit-testing my library
Testing tools library, containing base class for all unit tests
For a working example of the architecture, you can look at my HelloWorld project over on github. This example does not replicate the problem, though; it only serves to illustrate how I typically layer my code.
I'll often create a base class for my unit tests that creates mocks for any type that I'm testing, as in this example:
public class TestsFor<TInstance> where TInstance : class
{
    protected MoqAutoMocker<TInstance> AutoMock { get; set; }
    protected TInstance Instance { get; set; }

    public TestsFor()
    {
        AutoMock = new MoqAutoMocker<TInstance>();
        RunBeforeEachUnitTest(); // virtual
        Instance = AutoMock.ClassUnderTest;
        RunAfterEachUnitTest(); // virtual
    }

    // The virtual hooks referenced above (empty by default)
    protected virtual void RunBeforeEachUnitTest() { }
    protected virtual void RunAfterEachUnitTest() { }

    // Helper used by the tests below. Assumes StructureMap.AutoMocking's
    // Get<T>() returns the auto-mocked dependency and Moq's Mock.Get()
    // retrieves its Mock<T> wrapper.
    protected Mock<TService> GetMockFor<TService>() where TService : class
    {
        return Mock.Get(AutoMock.Get<TService>());
    }
}
Problem
The unit tests that I write often take the form of:
public class ReportServiceTests : TestsFor<ReportService>
{
    [Fact]
    public async Task CreateReport_WhenCalled_LogsTheCall()
    {
        // Act
        await Instance.CreateReport();

        // Assert
        GetMockFor<ILogger>().Verify(logger => logger.Enter(Instance, nameof(Instance.CreateReport)), Times.Once());
    }
}
Each unit test derives from the TestsFor<T> class in order to give me an out-of-the-box mocked test class. However, even though I've applied InternalsVisibleTo to the library, pointing it at both the unit-test assembly and the test-tools assembly (where the unit-test base class lives), I'm STILL getting "Inconsistent accessibility" errors.
Does anyone know how to get around this?
The problem you're running into is that you are trying to create a class that is more accessible than its base class.
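A minimal sketch of the situation (the assembly name is hypothetical):

// In the library under test:
using System.Runtime.CompilerServices;
[assembly: InternalsVisibleTo("MyLibrary.Tests")]

internal class ReportService { }

// In the test assembly: TestsFor<T> is itself public, but the constructed type
// TestsFor<ReportService> is effectively internal, because its type argument is
// internal. InternalsVisibleTo lets the test assembly SEE ReportService, but a
// public class still cannot derive from a less accessible base type:
// error CS0060: Inconsistent accessibility: base class 'TestsFor<ReportService>'
// is less accessible than class 'ReportServiceTests'
public class ReportServiceTests : TestsFor<ReportService> { }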
You can delegate instead of deriving:
public class ReportServiceTests
{
    private Tests tests = new Tests();

    [Fact]
    public async Task CreateReport_WhenCalled_LogsTheCall()
    {
        await tests.CreateReport_WhenCalled_LogsTheCall();
    }

    private class Tests : TestsFor<ReportService>
    {
        public async Task CreateReport_WhenCalled_LogsTheCall()
        {
            // Act
            await Instance.CreateReport();

            // Assert
            GetMockFor<ILogger>().Verify(logger => logger.Enter(Instance, nameof(Instance.CreateReport)), Times.Once());
        }
    }
}

Unit Testing Interface and abstract members using shims in Visual Studio 2013

I have the code below, which I want to unit test.
public abstract class Manager : MyPermissions, IManager
{
    public IManager empManager { get; set; }

    public void UpdatePermission()
    {
        if (empManager != null)
            empManager.UpdatePermission();
    }
}
I don't have a class that derives from the above class within the same library; otherwise I would have preferred to test that derived class. For now I have the test below, which runs but doesn't actually exercise the code under test.
[TestMethod]
public void empManagerGetSet()
{
    using (ShimsContext.Create())
    {
        StubIManager sManager = new StubIManager();
        sManager.empManagerGet = () => { return (IManager)null; };
        var result = sManager.empManagerGet;
        Assert.IsNotNull(result);
    }
}
Is there any other approach I can use to write a better UT in this scenario?
You don't say what your MyPermissions class looks like, whether it has a constructor, and if so what it does... so this might not be the right approach. Note you'd also need to implement stubs for any abstract methods defined in the Manager class.
If you just want to test the empManager property, you can create a testable derived type in your test project and test the properties on that. This would give you something like this:
class TestableManager : Manager
{
}
Then have a test something like this:
[TestMethod]
public void TestManagerPropertyRoundTrip()
{
    var sut = new TestableManager();
    Assert.IsNull(sut.empManager);
    sut.empManager = sut;
    Assert.AreEqual(sut, sut.empManager);
}
You can also test any other methods on the Manager class, via the TestableManager, since it only exists to make the class concrete.
There's a suggestion in the comments on your question that there is no point testing public properties. This is somewhat opinion-based. I tend to take the view that if you were following a test-first approach, you wouldn't necessarily know that the properties were going to be implemented as auto-properties rather than with a backing field. So the behaviour of being able to set a property and retrieve it again is something that I would usually test.

What does it mean to "Test to the Interface"?

I know this is kind of a generic programming question, but I have Googled it on several occasions in the past and I have never found a firm answer.
Several months back I had a conversation about interfaces with a senior engineer at another company. He said he prefers to write interfaces for everything because (among other things) it allows him to "test to the interface". I didn't think about the phrase too much at the time (if I had, I would have just asked him to explain!), but it confused me a bit.
I think this means he would write a unit test based on the interface, and that test would then be used to analyze every implementation of the interface. If that's what he meant, it makes sense to me. However, that explanation still left me wondering what the best practice would be when, for example, one of your implementations exposes additional public methods that are not defined in the interface. Would you just write an additional test for that class?
Thanks in advance for any thoughts on the subject.
Are you sure he said test to the interface and not program to the interface?
In very simple terms what program to an interface means is that your classes should not depend on a concrete implementation. They should instead depend on an interface.
The advantage of this is that you can provide different implementations to an interface, and that enables you to unit test your class because you can provide a mock/stub to that interface.
Imagine this example:
public class SomeClass
{
    StringAnalyzer stringAnalyzer = new StringAnalyzer();
    Logger logger = new Logger();

    public void SomeMethod(string someParameter)
    {
        if (stringAnalyzer.IsValid(someParameter))
        {
            // do something with someParameter
        }
        else
        {
            logger.Log("Invalid string");
        }
    }
}
Contrast that with this one:
class SomeClass
{
    IStringAnalyzer stringAnalyzer;
    ILogger logger;

    public SomeClass(IStringAnalyzer stringAnalyzer, ILogger logger)
    {
        this.logger = logger;
        this.stringAnalyzer = stringAnalyzer;
    }

    public void SomeMethod(string someParameter)
    {
        if (stringAnalyzer.IsValid(someParameter))
        {
            // do something with someParameter
        }
        else
        {
            logger.Log("Invalid string");
        }
    }
}
This enables you to write tests like this:
[Test]
public void SomeMethod_InvalidParameter_CallsLogger()
{
    Rhino.Mocks.MockRepository mockRepository = new Rhino.Mocks.MockRepository();
    IStringAnalyzer s = mockRepository.Stub<IStringAnalyzer>();
    s.Stub(x => x.IsValid("something, doesnt matter")).IgnoreArguments().Return(false);
    ILogger l = mockRepository.DynamicMock<ILogger>();
    SomeClass someClass = new SomeClass(s, l);
    mockRepository.ReplayAll();
    someClass.SomeMethod("What you put here doesnt really matter because the stub will always return false");
    l.AssertWasCalled(logger => logger.Log("Invalid string"));
}
Because in the second example you depend on interfaces rather than concrete classes, you can easily swap them out for fakes in your tests. And that is only one of the advantages; in the end it boils down to the fact that this approach enables you to take advantage of polymorphism, which is useful not only for tests but for any situation where you may want to provide alternative implementations of your class's dependencies.
Full explanation of the example above can be found here.
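For comparison, here is a rough equivalent of the same test written with Moq (the framework used elsewhere on this page); the structure is the same, only the mocking API differs:

[Test]
public void SomeMethod_InvalidParameter_CallsLogger()
{
    var stringAnalyzer = new Mock<IStringAnalyzer>();
    stringAnalyzer.Setup(a => a.IsValid(It.IsAny<string>())).Returns(false);
    var logger = new Mock<ILogger>();

    var someClass = new SomeClass(stringAnalyzer.Object, logger.Object);
    someClass.SomeMethod("anything - the stub always returns false");

    logger.Verify(l => l.Log("Invalid string"), Times.Once());
}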
Testing to an interface - while I've never heard that terminology before - would basically mean that while you test a concrete implementation of your interface, you only test the methods provided BY that interface. For example, consider the following classes:
interface A
{
    int MustReturn3();
}

class B : A
{
    public int MustReturn3()
    {
        return Get3();
    }

    public int Get3()
    {
        return 2 + 1;
    }
}
When you want to test an implementation of A, what do you test?
Well, my implementation is B. I want to make sure that B accomplishes the tasks of A as it is supposed to.
I don't really care about testing Get3(). I only care that MustReturn3() follows the interface contract, i.e. that it returns 3.
So I would write a test like so:
private A _a;

[TestInitialize]
public void Initialize()
{
    _a = new B();
}

[TestMethod]
public void ShouldReturn3WhenICallMustReturn3()
{
    Assert.AreEqual(3, _a.MustReturn3());
}
This ensures I am not testing any implementation detail; I'm only testing what the interface tells me that the class implementation should do.
This is how I write my unit tests, actually.
You can see a real working version of a test like this here.
It makes unit testing easier, as you can easily mock interfaces to return the data needed by the code you're testing.

Data encapsulation consideration for method parameters (dependency injection)

I have a spec translator, like below.
// all specifications implement this base class
public abstract class SpecBase
{
    public abstract void Translate(IContext context);
}

// the spec translator implements this interface
public interface ISpecTranslator
{
    void Translate(IContext context);
}
I need to inject the dependency via the SpecTranslator constructor. I have two ways to express the dependency.
Solution 1
public class SpecTranslator : ISpecTranslator
{
    IList<SpecBase> specs;

    public SpecTranslator(IList<SpecBase> specs)
    {
        this.specs = specs;
    }
}
Please note that using IList<SpecBase> works for now, but it seems solution 2 provides more protection.
Solution 2:
public class SpecTranslator : ISpecTranslator
{
    ISpec spec;

    public SpecTranslator(ISpec spec)
    {
        this.spec = spec;
    }
}

public interface ISpec
{
    IList<SpecBase> specs { get; }
}
However, the implementation of ISpec has the same problem when using constructor dependency injection.
Any ideas on the pros and cons of these two solutions, or on other solutions?
It seems that in order to "translate" (analyze) the list of specs, the contents of the given ISpec instance need to be destructured in all cases: a list has to be obtained and iterated. No matter how many layers of abstraction you weave in, the SpecTranslator will ultimately need a list.
In your case I'd think of ISpec as a factory. If the list is not lazily calculated, there is no value in it.
Also, simplicity is an important design principle. As ISpec does not add any capability or architectural freedom, it does not carry its own weight.
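If lazy calculation is the goal, a rough sketch of the factory shape (ISpecFactory and CreateSpecs are hypothetical names, not from the question):

// Hypothetical factory variant: the spec list is only built when Translate runs
public interface ISpecFactory
{
    IList<SpecBase> CreateSpecs();
}

public class SpecTranslator : ISpecTranslator
{
    private readonly ISpecFactory specFactory;

    public SpecTranslator(ISpecFactory specFactory)
    {
        this.specFactory = specFactory;
    }

    public void Translate(IContext context)
    {
        // The list is materialized here, not at construction time
        foreach (var spec in specFactory.CreateSpecs())
        {
            spec.Translate(context);
        }
    }
}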

IoC and constructor over-injection anti-pattern resolution

This question is the result of a post by Jeffrey Palermo on how to get around branched code and dependency injection: http://jeffreypalermo.com/blog/constructor-over-injection-anti-pattern/
In his post, Jeffrey has a class (public class OrderProcessor : IOrderProcessor) that takes two interfaces in its constructor: an IOrderValidator and an IOrderShipper. The Process method branches after using only methods on the IOrderValidator interface, and on one branch never uses anything on the IOrderShipper interface.
He suggests creating a factory that calls a static closure to obtain the interface implementation. He is creating a new object in his refactored code, which seems unnecessary.
I guess the crux of the issue is that we are using IoC to build all our objects regardless of whether they're used or not. If you instantiate an object with two interfaces and have code that could branch so as not to use one of them, how do you handle it?
In this example, we assume _validator.Validate(order) always returns false and the IOrderShipper.Ship() method is never called.
Original Code:
public class OrderProcessor : IOrderProcessor
{
    private readonly IOrderValidator _validator;
    private readonly IOrderShipper _shipper;

    public OrderProcessor(IOrderValidator validator, IOrderShipper shipper)
    {
        _validator = validator;
        _shipper = shipper;
    }

    public SuccessResult Process(Order order)
    {
        bool isValid = _validator.Validate(order);
        if (isValid)
        {
            _shipper.Ship(order);
        }
        return CreateStatus(isValid);
    }

    private SuccessResult CreateStatus(bool isValid)
    {
        return isValid ? SuccessResult.Success : SuccessResult.Failed;
    }
}
public class OrderShipper : IOrderShipper
{
    public OrderShipper()
    {
        Thread.Sleep(TimeSpan.FromMilliseconds(777));
    }

    public void Ship(Order order)
    {
        // ship the order
    }
}
Refactored Code
public class OrderProcessor : IOrderProcessor
{
    private readonly IOrderValidator _validator;

    public OrderProcessor(IOrderValidator validator)
    {
        _validator = validator;
    }

    public SuccessResult Process(Order order)
    {
        bool isValid = _validator.Validate(order);
        if (isValid)
        {
            IOrderShipper shipper = new OrderShipperFactory().GetDefault();
            shipper.Ship(order);
        }
        return CreateStatus(isValid);
    }

    private SuccessResult CreateStatus(bool isValid)
    {
        return isValid ? SuccessResult.Success : SuccessResult.Failed;
    }
}

public class OrderShipperFactory
{
    public static Func<IOrderShipper> CreationClosure;

    public IOrderShipper GetDefault()
    {
        return CreationClosure(); // executes closure
    }
}
And here is the method that configures this factory at start-up time (global.asax for ASP.NET):
private static void ConfigureFactories()
{
    OrderShipperFactory.CreationClosure =
        () => ObjectFactory.GetInstance<IOrderShipper>();
}
I just posted a rebuttal of Jeffrey Palermo's post.
In short, we should not let concrete implementation details influence our design. That would be violating the Liskov Substitution Principle on the architectural scale.
A more elegant solution lets us keep the design by introducing a Lazy-loading OrderShipper.
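The gist of that idea, sketched here from memory rather than quoted from the rebuttal: keep OrderProcessor's original two-dependency constructor, and make the injected IOrderShipper implementation defer the expensive work instead:

// Sketch of a lazy-loading shipper (not the rebuttal's exact code)
public class LazyOrderShipper : IOrderShipper
{
    private OrderShipper shipper;

    public void Ship(Order order)
    {
        // The expensive OrderShipper constructor runs only if we actually ship
        if (this.shipper == null)
        {
            this.shipper = new OrderShipper();
        }
        this.shipper.Ship(order);
    }
}

The container then injects LazyOrderShipper as the IOrderShipper, and the design of OrderProcessor is unchanged.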
I'm running late for a meeting, but a few quick points...
Sticking to the question of code that branches and so uses only one of its dependencies, there are two suggestions to propose:
Applying DDD practices, you would not have an OrderProcessor with a dependency on IOrderValidator. Instead, you'd make the Order entity responsible for its own validation. Or, stick with your IOrderValidator, but move that dependency inside the OrderShipper implementation, since it will return any error codes.
Ensure the dependencies being injected are constructed using a Singleton approach (and configured as Singleton in the IoC container being used). This resolves any memory concerns.
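For example, with StructureMap (the container behind the ObjectFactory calls above), a singleton registration might look like this sketch (StructureMap 2.x syntax; adjust for your container and version):

// Register IOrderShipper as a singleton so it is constructed only once
ObjectFactory.Initialize(x =>
{
    x.For<IOrderShipper>().Singleton().Use<OrderShipper>();
});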
Like someone else mentioned here, the point is to break the dependency on concrete classes and loosely couple the dependencies using Dependency Injection with some IoC container in use.
This makes future refactoring and swapping out legacy code far easier on a grand scale by limiting the technical debt incurred in the future. Once you have a project near 500,000 lines of code, with 3000 unit tests, you'll know first hand why IoC is so important.
