I have a number of controllers that I am testing, each of which has a dependency on a repository. This is how I am supplying the mocked repository in the case of each test fixture:
[SetUp]
public void SetUp()
{
    var repository = RepositoryMockHelper.MockRepository();
    controller = new HomeController(repository.Object);
}
And here is the MockRepository helper method for good measure:
internal static Mock<IRepository> MockRepository()
{
    var repository = new Mock<IRepository>();
    var posts = new InMemoryDbSet<Post>()
    {
        new Post {
            ...
        },
        ...
    };
    repository.Setup(db => db.Posts).Returns(posts);
    return repository;
}
... = code removed for the sake of brevity.
My intention is to use a new instance of InMemoryDbSet for each test. I thought that using the SetUp attribute would achieve this but clearly not.
When I run all of the tests, the results are inconsistent because the tests do not appear to be isolated. One test will, for example, remove an element from the data store and assert that the count has been decremented, but depending on the order the test runner chooses, another test might have incremented the count, causing both tests to fail.
Am I approaching these tests in the correct way? How can I resolve this issue?
The package you are using for your InMemoryDbSet uses a static backing data structure, so data will persist across tests. This is why you're seeing the inconsistent behavior. You can get around this by using another package (as you mention), or by passing a new HashSet to the constructor in every test so that it doesn't use the static member.
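As a sketch of the second option, relying on the constructor overload described above that takes the backing collection (verify the exact signature against the package version you're using):

internal static Mock<IRepository> MockRepository()
{
    var repository = new Mock<IRepository>();

    // A fresh HashSet<Post> per call keeps the data per-instance
    // instead of in the package's static backing field:
    var posts = new InMemoryDbSet<Post>(new HashSet<Post>())
    {
        new Post { /* ... */ },
    };

    repository.Setup(db => db.Posts).Returns(posts);
    return repository;
}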
As to the rest of your question, I think you're approaching the testing well. The only thing is that since all of your tests share the same setup (in your SetUp method), their requirements can conflict. I.e., if you have a test that relies on having no Foo objects in the set and one that needs at least one, then you're either going to add one in SetUp and remove it in the test that doesn't need it, or vice versa. It could be clearer to have test-specific set-up procedures, as #BillSambrone mentioned.
As #PatrickQuirk pointed out, I think your problem is due to what InMemoryDbSet does under the covers.
Regarding the "Am I approaching this right ?" part :
If, as I suspect, your Repository exposes some kind of IDbSet, it's probably a leaky abstraction. The contract of an IDbSet is far too specific for what a typical Repository client wants to do with the data. Better to return an IEnumerable or some sort of read-only collection instead.
From what you describe, it seems that consumers of Posts will manipulate it as a read-write collection. This isn't a typical Repository implementation - you'd normally have separate methods such as Get(), Add(), etc. The actual internal data collection is never exposed, which means you can easily stub or mock out just the individual Repository operations you need and not fear the kind of issues you had with your test data.
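The shape of such a Repository might look like this (a sketch; the member names are illustrative, not taken from your code):

public interface IRepository
{
    IEnumerable<Post> Get();
    void Add(Post post);
    void Remove(Post post);
}

// A test then stubs only the operation it cares about:
var repository = new Mock<IRepository>();
repository.Setup(r => r.Get()).Returns(new List<Post> { new Post() });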
Whatever is in your [SetUp] method will be called for each test. This is probably behavior that you don't want.
You can either place the code you have in the [SetUp] method inside each individual test, or you can create a separate private method within your unit test class that will spin up a freshly mocked DbSet for you to keep things DRY.
I am new to unit testing, and just getting started writing unit tests for an existing code base.
I would like to write a unit test for the following method of a class.
public int ProcessFileRowQueue()
{
    var fileRowsToProcess = this.EdiEntityManager.GetFileRowEntitiesToProcess();
    foreach (var fileRowEntity in fileRowsToProcess)
    {
        ProcessFileRow(fileRowEntity);
    }
    return fileRowsToProcess.Count;
}
The problem is with GetFileRowEntitiesToProcess(). The Entity Manager is a wrapper around the Entity Framework context. I have searched on this, and one solution I found is to have a test database in a known state to test against. However, it seems to me that creating a few entities in the test code would yield more consistent test results.
But as it exists, I don't see a way to mock the Manager without some refactoring.
Is there a best practice for resolving this? I apologize for this question being a bit naïve, but I just want to make sure I go down the right road for the rest of the project.
I'm hearing two questions here:
Should I mock EdiEntityManager?
Yes. It's a dependency external to the code being tested, its creation and behavior are defined outside of that code. So for testing purposes a mock with known behavior should be injected.
How can I mock EdiEntityManager?
That we can't know from the code posted. It depends on what that type is, how it's created and supplied to that containing object, etc. To answer this part of the question, you should attempt to:
Create a mock with known behavior for the one method being invoked (GetFileRowEntitiesToProcess()).
Inject that mock into this containing object being tested.
For either of these efforts, discover what may prevent that from happening. Each such discovery is either going to involve learning a little bit more about the types and the mocks, or is going to reveal a need for refactoring to allow testability. The code posted doesn't reveal that.
As an example, suppose EdiEntityManager is created in the constructor:
public SomeObject()
{
    this.EdiEntityManager = new EntityManager();
}
That would be something that prevents mocking because it gets in the way of Step 2 above. Instead, the constructor would be refactored to require rather than instantiate:
public SomeObject(EntityManager ediEntityManager)
{
    this.EdiEntityManager = ediEntityManager;
}
That would allow a test to supply a mock, and conforms with the Dependency Inversion Principle.
Or perhaps EntityManager is too concrete a type and difficult to mock/inject; in that case, the actual type should perhaps be an interface which EntityManager implements. The worst-case scenario is that you don't control the type at all and simply need to define a wrapper object (which itself has a mockable interface) to enclose the EntityManager dependency.
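As a minimal sketch of the interface route, using Moq for illustration (the interface and entity names here are assumptions based on the code posted):

public interface IEdiEntityManager
{
    IList<FileRowEntity> GetFileRowEntitiesToProcess();
}

// The constructor now demands the abstraction:
public SomeObject(IEdiEntityManager ediEntityManager)
{
    this.EdiEntityManager = ediEntityManager;
}

// And a test can inject a mock with known behavior:
var manager = new Mock<IEdiEntityManager>();
manager
    .Setup(m => m.GetFileRowEntitiesToProcess())
    .Returns(new List<FileRowEntity> { new FileRowEntity() });

var sut = new SomeObject(manager.Object);
Assert.AreEqual(1, sut.ProcessFileRowQueue());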
I've been trying to shift myself into a more test-driven methodology when writing my .NET MVC-based app. I'm doing all my dependency injection via constructor injection. Thus far it's going well, but I've found myself doing something repeatedly and I'm wondering if there is a better practice out there.
Let's say I want to test a Controller. It has a dependency on a Unit Of Work (database) object. Simple enough... I write my controller to take that interface in its constructor, and my DI framework (Ninject) can inject it at runtime. Easy. Meanwhile in my unit test, I can manually construct my controller with a mocked-up database object. I like that I can write lots of individual self-contained tests that take care of all the object construction and testing.
Now I've moved on and started adding new features & functions to my Controller object. The Controller now has one or two more dependencies. I can write more tests using these 3 dependencies, but my older tests are all broken (won't compile), as the compiler throws a whole bunch of errors like this:
'MyProject.Web.Api.Controllers.MyExampleController' does not contain a constructor that takes 3 arguments
What I've been doing (which smells bad) is going back and updating all my unit tests, changing the construction code, and adding null parameters for all the new dependencies that my old tests don't care about, like this:
From this:
var controllerToTest = new MyExampleController(mockUOW.Object);
To this:
var controllerToTest = new MyExampleController(mockUOW.Object, null, null);
This gets everything to compile and gets my tests to run again, but I don't like the prospect of going back and editing tons of old tests just to change calls to my object constructors.
Is there a better way to write your unit tests (or a better way to write your classes and do DI?) so they don't all break when you add a new dependency?
You've run into a common issue when unit testing: duplication of code causing significant refactoring overhead. The solution is to try to restrict the instantiation of the object(s) under test to a single place. If you can, try to do it in the TestInitialize method of your tests:
[TestInitialize]
public void Init()
{
    this.mockUOW = new Mock<ISomeDependency>();
    this.mock2 = new Mock<IAnotherDependency>();
    this.mock3 = new Mock<IYetAnotherDependency>();

    // Do initial set-up on your mocks

    // Pass the mocked instances (.Object), not the Mock<T> wrappers:
    this.controllerToTest = new MyExampleController(this.mockUOW.Object, this.mock2.Object, this.mock3.Object);
}
Sometimes, this isn't practical: you need to do specific set-up in each test before you create an instance of the class under test. In this situation, move the code to create the object into a well-named method, and call that:
[TestMethod]
public void MyTestMethod()
{
    // Do any required set-up on mocks, etc.

    this.CreateController();
}

private void CreateController()
{
    this.controllerToTest = new MyExampleController(this.mockUOW.Object, this.mock2.Object, this.mock3.Object);
}
Now you (hopefully) have only a single place in your tests to update when dependencies get added to one of your controllers.
Note also that the CreateController method isn't named CreateMyExampleController. As a best practice, I try to avoid method names in tests that are specific to certain classes or methods. Here, for example, including the class name in the method name adds another refactoring dependency which is easily overlooked.
You can just inline the mock initialization: instead of supplying null, supply new Mock<IClassName>().Object.
You can try using AutoMock (on CodePlex) to automatically mock out your top-level objects; this reduces the amount of retyping you need to do when fixing up objects.
Tests should not share the same context - you should be trying to test a stateless system. For the most part, without refactoring tools, you're just going to have to deal with it.
ReSharper is very powerful in this regard. When you add a new dependency, you can press Ctrl+F6 to add a new parameter to the constructor. It will prompt you with how to fill in the call sites - just type your mock there, and everywhere that matters (if you are injecting all your dependencies the right way) will be filled in automatically.
If your DI container supports registering instances (Unity, for example, does), you can use a local container to resolve the class under test instead of calling its constructor directly.
Pseudocode:
var container = new MyDependencyContainer();
container.RegisterInstance<IMyInterface>(mockedMyInterface);
var myClass = container.Resolve<MyClass>();
To simplify code in each test you can have container/mocks as field and configure container in method marked with [TestInitialize] attribute.
Note that such code will not break at compile time when you add a dependency to a constructor, but it will likely still require changes (e.g. registering the new dependency in the container).
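Fleshing out the pseudocode with Unity and Moq (RegisterInstance and Resolve are Unity's actual API; the controller and interface names are placeholders):

private UnityContainer container;
private Mock<ISomeDependency> mockUOW;

[TestInitialize]
public void Init()
{
    container = new UnityContainer();
    mockUOW = new Mock<ISomeDependency>();

    // Unity supplies this instance wherever ISomeDependency
    // appears in a resolved constructor:
    container.RegisterInstance<ISomeDependency>(mockUOW.Object);
}

[TestMethod]
public void MyTestMethod()
{
    // No compile-time coupling to the constructor's arity:
    var controllerToTest = container.Resolve<MyExampleController>();
    // ...
}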
I currently have a repository that is using Entity Framework for my CRUD operations.
This is injected into my service that needs to use this repo.
Using AutoMapper, I project the entity Model onto a Poco model and the poco gets returned by the service.
If my objects have multiple properties, what is the correct way to set up and then assert my properties?
If my service has multiple repo dependencies, what is the correct way to set up all my mocks? A class-level [SetUp] where all the mocks and objects are configured for these test fixtures?
I want to avoid having 10 tests where each test has 50 asserts on properties and dozens of mock set-ups. That makes maintainability and readability difficult.
I have read The Art of Unit Testing and did not find any suggestions on how to handle this case.
The tooling I am using is Rhino Mocks and NUnit.
I also found this on SO but it doesn't answer my question: Correctly Unit Test Service / Repository Interaction
Here is a sample that expresses what I am describing:
public void Save_ReturnSavedDocument()
{
    // Simulate DB object
    var repoResult = new EntityModel.Document()
    {
        DocumentId = 2,
        Message = "TestMessage1",
        Name = "Name1",
        Email = "Email1",
        Comment = "Comment1"
    };

    // Create stubs of repo methods - might have many dependencies
    var documentRepository = MockRepository.GenerateStub<IDocumentRepository>();
    documentRepository.Stub(m => m.Get()).IgnoreArguments().Return(new List<EntityModel.Document>()
    {
        repoResult
    }.AsQueryable());
    documentRepository.Stub(a => a.Save(null, null)).IgnoreArguments().Return(repoResult);

    // Instantiate service and inject repo
    var documentService = new DocumentService(documentRepository);
    var savedDocument = documentService.Save(new Models.Document()
    {
        ID = 0,
        DocumentTypeId = 1,
        Message = "TestMessage1"
    });

    // Assert that properties are correctly mapped after save
    Assert.AreEqual(repoResult.Message, savedDocument.Message);
    Assert.AreEqual(repoResult.DocumentId, savedDocument.DocumentId);
    Assert.AreEqual(repoResult.Name, savedDocument.Name);
    Assert.AreEqual(repoResult.Email, savedDocument.Email);
    Assert.AreEqual(repoResult.Comment, savedDocument.Comment);
    // Many more properties here
}
First of all, each test should only have one assertion (unless the others validate the real one), e.g. if you want to assert that all elements of a list are distinct, you may want to assert first that the list is not empty; otherwise you may get a false positive. In other cases there should only be one assert per test. Why? If the test fails, its name tells you exactly what is wrong. If you have multiple asserts and the first one fails, you don't know whether the rest were OK. All you know then is "something went wrong".
You say you don't want to set up all mocks/stubs in 10 tests. This is why most frameworks offer you a Setup method which runs before each test: it lets you put most of your mock configuration in one place and reuse it. In NUnit you just create a method and decorate it with the [SetUp] attribute.
If you want to test a method with different values of a parameter you can use NUnit's [TestCase] attributes. This is very elegant and you don't have to create multiple identical tests.
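For example, a quick sketch against the DocumentService from your sample, assuming documentService is created in a [SetUp] method and that Save maps the message across:

[TestCase("TestMessage1")]
[TestCase("AnotherMessage")]
[TestCase("")]
public void Save_PreservesMessage(string message)
{
    var savedDocument = documentService.Save(new Models.Document { Message = message });

    Assert.AreEqual(message, savedDocument.Message);
}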
Now let's talk about some useful tools.
AutoFixture - this is an amazing and very powerful tool that allows you to create an object of a class which requires multiple dependencies. It sets up the dependencies with dummy mocks automatically, and allows you to manually set up only the ones you need in a particular test. Say you need to create a mock for a UnitOfWork which takes 10 repositories as dependencies, and in your test you only need to set up one of them. AutoFixture lets you create that UnitOfWork and set up that one particular repository mock (or more if you need); the rest of the dependencies will be set up automatically with dummy mocks. This saves you a huge amount of useless code. It is a little bit like an IoC container for your tests.
It can also generate fake objects with random data for you. So, e.g., the whole initialization of EntityModel.Document becomes just one line:
var repoResult = _fixture.Create<EntityModel.Document>();
Especially take a look at:
Create
Freeze
AutoMockCustomization
Here you will find my answer explaining how to use AutoFixture.
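To make that concrete, a rough sketch of the flow with AutoFixture's Moq glue package (with Rhino Mocks there is an analogous customization; treat these names as assumptions to verify against the AutoFixture docs, and the parameterless Get() comes from your sample):

var fixture = new Fixture().Customize(new AutoMoqCustomization());

// Freeze creates the mock once and reuses it wherever
// IDocumentRepository is needed during resolution:
var documentRepository = fixture.Freeze<Mock<IDocumentRepository>>();
documentRepository
    .Setup(r => r.Get())
    .Returns(fixture.CreateMany<EntityModel.Document>().AsQueryable());

// Every other dependency of DocumentService is filled with a dummy mock:
var documentService = fixture.Create<DocumentService>();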
SemanticComparison (see its tutorial) - this is what will help you avoid multiple assertions while comparing properties of objects of different types. If the properties have the same names, it will do it almost automatically; if not, you can define the mappings. It will also tell you exactly which properties do not match and show their values.
Fluent Assertions - this just provides you a nicer way to assert things.
Instead of
Assert.AreEqual(repoResult.Message, savedDocument.Message);
You can do
savedDocument.Message.Should().Be(repoResult.Message);
To sum up: these tools will help you create your tests with much less code and will make them much more readable. It takes time to get to know them well, especially AutoFixture, but when you do, they become the first things you add to your test projects - believe me :). Btw, they are all available on NuGet.
One more tip: if you have problems testing a class, it usually indicates a bad architecture. The solution usually is to extract smaller classes from the problematic class (Single Responsibility Principle). Then you can easily test the small classes for business logic, and easily test the original class for its interactions with them.
Consider using anonymous types:
public void Save_ReturnSavedDocument()
{
    // (unmodified code)...

    // Assert that properties are correctly mapped after save
    Assert.AreEqual(
        new
        {
            repoResult.Message,
            repoResult.DocumentId,
            repoResult.Name,
            repoResult.Email,
            repoResult.Comment,
        },
        new
        {
            savedDocument.Message,
            savedDocument.DocumentId,
            savedDocument.Name,
            savedDocument.Email,
            savedDocument.Comment,
        });
}
There is one thing to look out for: nullable types (e.g. int?) and properties that might have slightly different types (float vs. double) - but you can work around this by casting the properties to specific types (e.g. (int?)repoResult.DocumentId).
Another option would be to create a custom assert class/method(s).
Basically, the trick is to push as much clutter as you can outside of the unit tests, so that only the behaviour that is to be tested remains.
Some ways to do that:
1. Don't declare instances of your model/POCO classes inside each test, but rather use a static TestData class that exposes these instances as properties. Usually these instances are useful for more than one test as well. For added robustness, have the properties on the TestData class create and return a new object instance every time they're accessed, so that one unit test cannot affect the next by modifying the test data.
2. On your test class, declare a helper method that accepts the (usually mocked) repositories and returns the system under test (or "SUT", i.e. your service). This is mainly useful in situations where configuring the SUT takes more than a couple of statements, since it tidies up your test code.
3. As an alternative to 2, have your test class expose properties for each of the mocked repositories, so that you don't need to declare these in your unit tests; you can even pre-initialize them with a default behaviour to reduce the configuration per unit test even further. The helper method that returns the SUT then doesn't take the mocked repositories as arguments, but rather constructs the SUT using the properties. You might want to reinitialize each repository property on each [TestInitialize].
4. To reduce the clutter of comparing each property of your POCO with the corresponding property on the model object, declare a helper method on your test class that does this for you (i.e. void AssertPocoEqualsModel(Poco p, Model m)). Again, this removes some clutter and you get the reusability for free.
5. Or, as an alternative to 4, don't compare all properties in every unit test, but rather test the mapping code in only one place with a separate set of unit tests. This has the added benefit that, should the mapping ever include new properties or change in any other way, you don't have to update 100-odd unit tests. When not testing the property mappings, you should just verify that the SUT returns the correct object instances (i.e. based on Id or Name), and that just the properties that might be changed (by the business logic currently being tested) contain the correct values (such as an Order total).
Personally, I prefer 5 because of its maintainability, but this isn't always possible and then 4 is usually a viable alternative.
Your test code would then look like this (unverified, just for demonstration purposes):
[TestClass]
public class DocumentServiceTest
{
    private IDocumentRepository DocumentRepositoryMock { get; set; }

    [TestInitialize]
    public void Initialize()
    {
        DocumentRepositoryMock = MockRepository.GenerateStub<IDocumentRepository>();
    }

    [TestMethod]
    public void Save_ReturnSavedDocument()
    {
        // Arrange
        var repoResult = TestData.AcmeDocumentEntity;
        DocumentRepositoryMock
            .Stub(m => m.Get())
            .IgnoreArguments()
            .Return(new List<EntityModel.Document>() { repoResult }.AsQueryable());
        DocumentRepositoryMock
            .Stub(a => a.Save(null, null))
            .IgnoreArguments()
            .Return(repoResult);

        // Act
        var documentService = CreateDocumentService();
        var savedDocument = documentService.Save(TestData.AcmeDocumentModel);

        // Assert that properties are correctly mapped after save
        AssertEntityEqualsModel(repoResult, savedDocument);
    }

    // Helpers
    private DocumentService CreateDocumentService()
    {
        return new DocumentService(DocumentRepositoryMock);
    }

    private void AssertEntityEqualsModel(EntityModel.Document entityDoc, Models.Document modelDoc)
    {
        Assert.AreEqual(entityDoc.Message, modelDoc.Message);
        Assert.AreEqual(entityDoc.DocumentId, modelDoc.DocumentId);
        //...
    }
}
public static class TestData
{
    public static EntityModel.Document AcmeDocumentEntity
    {
        get
        {
            // Note that a new instance is returned on each invocation:
            return new EntityModel.Document()
            {
                DocumentId = 2,
                Message = "TestMessage1",
                //...
            };
        }
    }

    public static Models.Document AcmeDocumentModel
    {
        get { /* etc. */ }
    }
}
In general, if you're having a hard time creating a concise test, you're testing the wrong thing or the code you're testing has too many responsibilities (in my experience).
In this specific case, it looks like you're testing the wrong thing. If your repo is using Entity Framework, you're getting the same object back that you're sending in; EF just updates the Id for new objects, plus any timestamp fields you might have.
Also, if you can't get one of your asserts to fail without a second assert failing, then you don't need one of them. Is it really possible for "name" to come back OK but for "email" to fail? If so, they should be in separate tests.
Finally, trying some TDD might help. Comment out all the code in your service's Save. Then write a test that fails. Then uncomment only enough code to make your test pass. Then write your next failing test. Can't write a test that fails? Then you're done.
I am writing unit tests with C#, NUnit and Rhino Mocks.
Here are the relevant parts of a class I am testing:
public class ClassToBeTested
{
    private IList<object> insertItems = new List<object>();

    public bool OnSave(object entity, object id)
    {
        var auditable = entity as IAuditable;
        if (auditable != null) insertItems.Add(entity);
        return false;
    }
}
I want to test the values in insertItems after a call to OnSave:
[Test]
public void OnSave_Adds_Object_To_InsertItems_Array()
{
    Setup();

    myClassToBeTested.OnSave(auditableObject, null);

    // Check auditableObject has been added to insertItems array
}
What is the best practice for this? I have considered adding insertItems as a property with a public getter, or injecting a List into ClassToBeTested, but I'm not sure I should be modifying the code for testing purposes.
I have read many posts on testing private methods and refactoring, but this is such a simple class I wondered what is the best option.
The quick answer is that you should never, ever access non-public members from your unit tests. It totally defies the purpose of having a test suite, since it locks you into internal implementation details that you may not want to keep that way.
The longer answer relates to what to do instead. In this case, it is important to understand why the implementation is as it is (this is why TDD is so powerful: we use the tests to specify the expected behavior, but I get the feeling that you are not using TDD).
In your case, the first question that comes to mind is: "Why are the IAuditable objects added to the internal list?" or, put differently, "What is the expected externally visible outcome of this implementation?" Depending on the answer to those questions, that's what you need to test.
If you add the IAuditable objects to your internal list because you later want to write them to an audit log (just a wild guess), then invoke the method that writes the log and verify that the expected data was written.
If you add the IAuditable object to your internal list because you want to amass evidence against some kind of later Constraint, then try to test that.
If you added the code for no measurable reason, then delete it again :)
The important part is that it is very beneficial to test behavior instead of implementation. It is also a more robust and maintainable form of testing.
Don't be afraid to modify your System Under Test (SUT) to be more testable. As long as your additions make sense in your domain and follow object-oriented best practices, there are no problems - you would just be following the Open/Closed Principle.
You shouldn't be checking the list where the item was added. If you do that, you'll be writing a unit test for the Add method on the list, and not a test for your code. Just check the return value of OnSave; that's really all you want to test.
If you're really concerned about the Add, mock it out of the equation.
Edit:
#TonE: After reading your comments, I'd say you may want to change your current OnSave method to let you know about failures. You may choose to throw an exception if the cast fails, etc. You could then write a unit test that expects an exception, and one that doesn't.
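A rough sketch of that idea; throwing on a failed cast is an assumption here, not part of the original OnSave:

// Hypothetical variant of OnSave that surfaces the failed cast:
public bool OnSave(object entity, object id)
{
    if (!(entity is IAuditable))
        throw new ArgumentException("Entity is not auditable", "entity");

    insertItems.Add(entity);
    return false;
}

// NUnit test for the failure path:
[Test]
public void OnSave_Throws_When_Entity_Is_Not_Auditable()
{
    Assert.Throws<ArgumentException>(
        () => myClassToBeTested.OnSave(new object(), null));
}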
I would say the "best practice" is to test something of significance with the object that is different now that it stored the entity in the list.
In other words, what behavior is different about the class now that it stored it, and test for that behavior. The storage is an implementation detail.
That being said, it isn't always possible to do that.
You can use reflection if you must.
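If you do, here is a sketch of reading the private list via reflection - brittle by nature, since it hard-codes the private field name:

// Requires: using System.Reflection;
var field = typeof(ClassToBeTested).GetField(
    "insertItems", BindingFlags.NonPublic | BindingFlags.Instance);
var insertItems = (IList<object>)field.GetValue(myClassToBeTested);

Assert.IsTrue(insertItems.Contains(auditableObject));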
If I'm not mistaken, what you really want to test is that it only adds items to the list when they can be cast to IAuditable. So, you might write a few tests with method names like:
NotIAuditableIsNotSaved
IAuditableInstanceIsSaved
IAuditableSubclassInstanceIsSaved
... and so forth.
The problem is that, as you note, given the code in your question, you can only do this by indirection - only by checking the private insertItems IList<object> member (by reflection or by adding a property for the sole purpose of testing) or injecting the list into the class:
public class ClassToBeTested
{
    private IList<object> _insertItems;

    public ClassToBeTested(IList<object> insertItems)
    {
        _insertItems = insertItems;
    }

    // OnSave then adds to _insertItems as before.
}
Then, it's simple to test:
[Test]
public void OnSave_Adds_Object_To_InsertItems_Array()
{
    Setup();
    List<object> testList = new List<object>();
    myClassToBeTested = new ClassToBeTested(testList);
    // ... create auditableObject here, etc.

    myClassToBeTested.OnSave(auditableObject, null);

    // Check auditableObject has been added to testList:
    Assert.IsTrue(testList.Contains(auditableObject));
}
Injection is the most forward-looking and unobtrusive solution, unless you have some reason to think the list would be a valuable part of your public interface (in which case adding a property might be superior - and of course property injection is perfectly legit too). You could even retain a no-argument constructor that provides a default implementation (new List<object>()).
It is indeed a good practice; it might strike you as a bit over-engineered, given that it's a simple class, but the testability alone is worth it. Then, on top of that, if you find another place where you want to use the class, it will be icing on the cake, since you won't be limited to a particular IList implementation (not that it would take much effort to make the change later).
If the list is an internal implementation detail (and it seems to be), then you shouldn't test it.
A good question is, what is the behavior that would be expected if the item was added to the list? This may require another method to trigger it.
public void TestMyClass()
{
    MyClass c = new MyClass();
    MyOtherClass other = new MyOtherClass();

    c.Save(other);

    var result = c.Retrieve();
    Assert.IsTrue(result.Contains(other));
}
In this case, I'm asserting that the correct, externally visible behavior is that, after saving the object, it will be included in the retrieved collection.
If the result is that, in the future, the passed-in object should have a call made to it in certain circumstances, then you might have something like this (please forgive pseudo-API):
public void TestMyClass()
{
    MyClass c = new MyClass();
    IThing other = GetMock();

    c.Save(other);
    c.DoSomething();

    other.AssertWasCalled(o => o.SomeMethod());
}
In both cases, you're testing the externally visible behavior of the class, not the internal implementation.
The number of tests you need depends on the complexity of the code - roughly, how many decision points there are. Different algorithms can achieve the same result with different complexity in their implementation. How do you write a test that is independent of the implementation and still be sure you have adequate coverage of your decision points?
Now, if you are designing larger tests, at, say, the integration level, then no, you would not want to write to the implementation or test private methods; but the question was directed at the small, unit-test scope.
I'm writing some unit tests and I need to be able to Assert whether a method has been called based upon the setup data.
E.g.
String testValue = "1234";
MyClass target = new MyClass();
target.Value = testValue;
target.RunConversion();
// required Assertion
Assert.MethodCalled(MyClass.RunSpecificConversion);
Then there would be a second test where testValue is null and I would want to assert that the method has NOT been called.
Update for specific test scenario:
I have a class which represents information deserialized from XML. This class contains a number of pieces of information that I need to convert into my own classes.
The XML class is information about a person, including account info and a few phone numbers in different fields.
I have a method to create my Account class from the XML class, and methods to create the correct phone classes from the XML class. I have unit tests for each of these methods, but I'd like to test that when the account conversion is called, it runs the phone conversions, as the results are actually properties of the Account class.
I know I could test the properties of the Account class after feeding in the correct information; however, I have other nested properties with further nesting below them, and testing the entire tree could become very cumbersome. I guess I could just have each level test the next level below it, but ideally I'd like to make sure the correct conversion methods are being called and that the code is not duplicated in the implementation.
Without using a mocking framework such as Moq, TypeMock, or Rhino Mocks that can verify your expectations, I would look at parsing the stack trace. The MSDN documentation for System.Diagnostics.StackTrace should help.
You want to investigate the use of mocking frameworks.
Or you can create your own fake objects to record the calls. You'll need to create an interface that the class implements.
One method I have seen of creating fakes looks like this:
interface MyInterface
{
    void Method();
}

// Real class
class MyClass : MyInterface
{
    public void Method()
    {
        // The real implementation goes here.
    }
}

// Fake class for recording calls
class FakeMyClass : MyInterface
{
    public bool MethodCalled;

    public void Method()
    {
        this.MethodCalled = true;
    }
}
You then need to use some dependency injection to get this fake class used instead of the real one whilst running the tests.
Of course, the issue with this is that the fake class only records method calls and doesn't actually do anything real. This won't always be applicable. It works okay in a Model-View-Presenter environment.
You can use a mocking framework like Rhino Mocks or Moq. You can also use an isolation framework like Isolator to do that.
Another option is to inherit from the class you want to verify against and raise a flag inside it when the method is called. Then, instead of the assert in your test, assert against the flag. (Basically, it's a hand-rolled mock.)
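A minimal sketch of that hand-rolled approach. It reuses Value and RunConversion from your example, and assumes RunSpecificConversion is public virtual - it must be overridable for this to work:

class TestableMyClass : MyClass
{
    public bool RunSpecificConversionCalled { get; private set; }

    public override void RunSpecificConversion()
    {
        // Record the call, then preserve the real behaviour:
        RunSpecificConversionCalled = true;
        base.RunSpecificConversion();
    }
}

[Test]
public void RunConversion_WithValue_CallsSpecificConversion()
{
    var target = new TestableMyClass { Value = "1234" };
    target.RunConversion();
    Assert.IsTrue(target.RunSpecificConversionCalled);
}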
Example using Isolator:
Isolate.Verify.WasCalledWithAnyArguments(() => target.RunSpecificConversion());
Disclaimer - I work at Typemock
Another perspective: you might be unit testing at too low a level, which can make your tests brittle.
Typically, it is better to test the business requirement rather than implementation details. e.g. you run a conversion with "1234" as the input, and the conversion should reverse the input, so you expect "4321" as the output.
Don't test that you expect "1234" to be converted by a specific sequence of steps. In the future you might change the implementation details, then the test will fail even if the business requirements are still being met.
Of course your test in the question could be an actual business requirement in which case it would be correct.
The other case when you would want to do this is if invoking the conversion in the real MyClass is not suitable for a unit test, i.e. requires a lot of setup, or is time intensive. Then you will need to mock or stub it out.
Reply to question edit:
Based on your scenario, I would still be inclined to test by checking the output rather than checking for whether specific methods were called.
You could have tests with different XML inputs to ensure that the different conversion methods have to be called in order to pass the tests.
And I wouldn't rely on tests to check whether there was duplicate code, but rather would refactor away duplicate code when I came across it, and just rely on the unit tests to ensure that the code still performs the same function after refactoring.