Tool to create mocks from a real execution - C#

I'm working on some (let's call it legacy) code that makes calls to another component through an interface (IFjuk).
I realize that mocking is generally intended for unit testing, but I thought it might be useful for a form of "system test". My primary goal is to get rid of a dependency on a piece of external hardware.
The execution makes many calls to IFjuk, which would make it cumbersome to manually write and maintain code that defines the mock expectations.
One idea I have is to use Castle DynamicProxy to record calls (including return values from the real component) and generate C# code from that recording which defines Rhino Mocks expectations, but I suspect someone must have built something similar already...
Is there a tool that can record calls and responses to IFjuk against the actual component, so that I can use that data to generate mocks?

No, there is no built-in "call tracer" available, but I think this is one of the places where AOP http://www.c-sharpcorner.com/uploadfile/shivprasadk/aspect-oriented-programming-in-C-Sharp-net-part-i/ can become very useful.
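That said, the recording approach from the question is not hard to sketch with Castle DynamicProxy itself. Below is a minimal, illustrative version: the RecordedCall type and the commented-out usage are made up here, and turning the recorded data into Rhino Mocks code is left out.

using System.Collections.Generic;
using System.Linq;
using Castle.DynamicProxy;

public class RecordedCall
{
    public string Method { get; set; }
    public object[] Arguments { get; set; }
    public object ReturnValue { get; set; }
}

public class RecordingInterceptor : IInterceptor
{
    public List<RecordedCall> Calls { get; private set; }

    public RecordingInterceptor()
    {
        Calls = new List<RecordedCall>();
    }

    public void Intercept(IInvocation invocation)
    {
        // Forward the call to the real, hardware-backed component.
        invocation.Proceed();

        // Record what happened so the data can later be turned into mock setups.
        Calls.Add(new RecordedCall
        {
            Method = invocation.Method.Name,
            Arguments = invocation.Arguments.ToArray(),
            ReturnValue = invocation.ReturnValue
        });
    }
}

// Usage (sketch): wrap the real implementation, run the "system test", then dump
// Calls to a file and generate your mock definitions from it.
// var interceptor = new RecordingInterceptor();
// IFjuk recording = new ProxyGenerator()
//     .CreateInterfaceProxyWithTarget<IFjuk>(realFjuk, interceptor);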

Untestable Braintree API: Should I alter the source code or individually wrap every class? [duplicate]

This question already has answers here: Instancing a class with an internal constructor (9 answers). Closed 8 years ago.
I am working with the Braintree API for .NET to take care of processing payments. Their business does a fine job of processing payments and the API wrapper works for straightforward use. However, the provided API wrapper begins to fail quickly upon closer investigation or more strenuous use; for example, it contains hand-rolled enums. My problem comes with unit testing my code that uses this wrapper.
In order to do this, I essentially need to mock up my own 'fake' Braintree gateway that will have some known values in it, generate errors when requested, etc. My plan of attack was to override the functionality of the Braintree API wrapper and reroute the requests to a local in-memory endpoint. Then I could use dependency injection to link up the proper gateway/wrapper at runtime.
Initially, it seemed to be going swimmingly: despite the sins against software engineering that had been committed in the API wrapper, every method that I would need to override was miraculously marked virtual. However, that came to a screeching halt: almost every constructor in the API wrapper is marked internal. As such, I can neither inherit from these classes nor create them at whim to store for testing.
An aside: I grok internal constructors, and the reasons that one would legitimately want to use them. However, I have looked at the source code for this, and every internal constructor performs only trivial property assignments. As such, I am comfortable in claiming that a different coding practice should have been followed.
So, I'm essentially left with three options:
Write my own API wrapper from scratch. This is obviously doable, and holds the advantage that it would yield a well-engineered infrastructure. The disadvantages, however, are too numerous to list briefly.
Pull the source code from the API down and include it in my solution. I could change all of the internal constructors to be whatever I need to make them work. The disadvantage is that I would have to re-update all of these changes upon every subsequent API wrapper release.
Write wrapper classes for every single object that I need to use in the whole API wrapper. This holds the advantage of not altering the provided source code; the disadvantages are large, though: essentially rewriting every class in the wrapper three times (an interface, a Braintree API wrapper adapter, and a testable version).
Unfortunately, all of those suck. I feel like option 2 may be the least bad of the options, but it makes me feel dirty. Has anyone solved this problem already/written a better, more testable wrapper? If not, have I missed a possible course of action? If not, which of those three options seems least distasteful?
Perhaps this stackoverflow entry could help
Also, a random blog entry on the subject
Since you're not testing their API, I would use a Facade pattern. You don't need to wrap everything they provide, just encapsulate the functionality that you're using. This also gives you an advantage: If you decide to ditch that API in the future, you just need to reimplement your wrapper.
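For example, a minimal facade could look something like this. All of the names here (IPaymentGateway, PaymentResult, BraintreePaymentGateway) are made up for illustration, and the actual Braintree call is only indicated by a comment because its exact types depend on the wrapper version you use.

using System;

public class PaymentResult
{
    public bool Succeeded { get; set; }
    public string TransactionId { get; set; }
    public string Error { get; set; }
}

// Only the operations your application actually uses are exposed.
public interface IPaymentGateway
{
    PaymentResult Charge(decimal amount, string paymentToken);
    PaymentResult Refund(string transactionId);
}

// Production adapter: the only class that references the Braintree wrapper directly.
public class BraintreePaymentGateway : IPaymentGateway
{
    public PaymentResult Charge(decimal amount, string paymentToken)
    {
        // Call the Braintree wrapper here (e.g. its transaction/sale request) and
        // translate its result object into a PaymentResult.
        throw new NotImplementedException();
    }

    public PaymentResult Refund(string transactionId)
    {
        // Same idea: delegate to the wrapper and translate the result.
        throw new NotImplementedException();
    }
}

// Test double: no internal constructors to fight with, returns known values.
public class FakePaymentGateway : IPaymentGateway
{
    public PaymentResult Charge(decimal amount, string paymentToken)
    {
        return new PaymentResult { Succeeded = true, TransactionId = "test-1" };
    }

    public PaymentResult Refund(string transactionId)
    {
        return new PaymentResult { Succeeded = true, TransactionId = transactionId };
    }
}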

Why should I use a mocking framework instead of fakes?

There are some other variations of this question here at SO, but please read the entire question.
By just using fakes, we look at the constructor to see what kind of dependencies a class has and then create fakes for them accordingly.
Then we write a test for a method by just looking at its contract (the method signature). If we can't figure out how to test the method that way, shouldn't we rather refactor it (most likely break it up into smaller pieces) than look inside it to figure out how to test it? In other words, doing so also gives us a form of quality control.
Aren't mocks a bad thing, since they require us to look inside the method we are going to test, and therefore skip the whole "critique the signature" step?
Update to answer the comment
Say a stub then (just a dummy class providing the requested objects).
A framework like Moq makes sure that method A gets called with the arguments X and Y. To be able to set up those checks, one needs to look inside the tested method.
Isn't the important thing (the method contract) forgotten when setting up all those checks, since the focus shifts from the method signature/contract to looking inside the method and creating the checks?
Isn't it better to try to test the method by just looking at the contract? After all, when we use the method we'll just look at its contract. So it's quite important that the contract is easy to follow and understand.
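To make what I mean concrete, here is roughly the difference as I see it, with a made-up IMailSender dependency (the mock uses Moq syntax, and Notifier is just a hypothetical class under test):

// A made-up dependency of the class under test.
public interface IMailSender
{
    bool Send(string to, string body);
}

// A fake: a dumb hand-written class that just satisfies the constructor.
public class FakeMailSender : IMailSender
{
    public bool Send(string to, string body) { return true; }
}

// The same dependency as a Moq mock: behaviour is configured in the test and the
// interaction is verified afterwards, which requires knowing that the tested method
// calls Send at all.
// var mail = new Mock<IMailSender>();
// mail.Setup(m => m.Send(It.IsAny<string>(), It.IsAny<string>())).Returns(true);
// var sut = new Notifier(mail.Object);   // Notifier: hypothetical class under test
// sut.NotifyAll();
// mail.Verify(m => m.Send(It.IsAny<string>(), It.IsAny<string>()), Times.Once());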
This is a bit of a grey area and I think there is some overlap. On the whole, I would say I prefer using mock objects.
I guess some of it depends on how you go about testing code - test or code first?
If you follow a test driven design plan with objects implementing interfaces then you effectively produce a mock object as you go.
Each test treats the tested object / method as a black box.
It focuses you onto writing simpler method code in that you know what answer you want.
But above all else it allows you to have runtime code that uses mock objects for unwritten areas of the code.
On the macro level it also allows for major areas of the code to be switched at runtime to use mock objects e.g. a mock data access layer rather than one with actual database access.
Fakes are just stupid dummy objects. Mocks enable you to verify that the control flow of the unit is correct (e.g. that it calls the correct functions with the expected arguments). Doing so is very often a good way to test things. An example: a saveProject() function probably wants to call something like saveToProject() on the objects to be saved. I consider that a lot better than saving the project to a temporary buffer and then loading it back to verify that everything was fine (which tests more than it should - it also verifies that the saveToProject() implementation(s) are correct).
As for mocks vs stubs, I usually (not always) find that mocks provide clearer tests and (optionally) more fine-grained control over the expectations. Mocks can be too powerful though, allowing you to test an implementation to the point where changing the implementation under test, while leaving its observable results unchanged, still makes the test fail.
By just looking at a method/function signature you can only test its output for some given input (stubs that do nothing but feed you the needed data). While this is OK in some cases, sometimes you do need to test what's happening inside that method - you need to test whether it behaves correctly.
string readDoc(string name, IFileManager fileManager) { return fileManager.Read(name).ToString(); }
You can directly test the returned value here, so a stub works just fine.
void saveDoc(Document doc, IFileManager fileManager) { fileManager.Save(doc); }
Here you would much rather test whether the Save method got called with the proper argument (doc). The doc content is not changing and the fileManager doesn't return anything. That's because the method under test depends on functionality provided by the interface. And the interface is the contract, so you not only want to test whether your method gives correct results; you also want to test whether it uses the provided contract in the correct way.
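For the saveDoc case, a Moq-based test could look like this (IFileManager and Document are assumed from the snippets above, and the test framework attribute is omitted on purpose):

public void saveDoc_PassesTheDocumentToTheFileManager()
{
    var doc = new Document();
    var fileManager = new Mock<IFileManager>();

    saveDoc(doc, fileManager.Object);

    // The only observable effect is the interaction, so that is what gets verified.
    fileManager.Verify(fm => fm.Save(doc), Times.Once());
}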
I see it a little differently. Let me explain my view:
I use a mocking framework. When I try to test a class, to ensure it will work as intended, I have to test all the situations that may happen. When my class under test uses other classes, I have to ensure in certain test situations that a particular exception is raised by a used class, or that a certain value is returned, and so on... This is hard to simulate with the real implementations of those classes, so I would have to write fakes for them. But I think that when I use fakes, the tests are not as easy to understand. In my tests I use the Moq framework and set up the mocks inside the test method. When I have to analyse a test method, I can easily see how the mocks are configured and don't have to switch over to the code of the fakes to understand the test.
Hope that helps you find your answer...

Using MSTest to test multiple database types

I am currently developing an application that connects to multiple database engines (2 right now, but this will grow in the future) but does similar things on each database. I would like to develop a set of unit tests that I only have to write once, but can be run on a different database engine. This application will be extremely complex, and I predict that I will be writing hundreds if not thousands of tests for it.
For example, I have a method that retrieves all the databases available in a database server, and I have two classes which have the same interface that defines the GetDatabases() method. I would like to develop one method that creates an instance of a class that implements IDatabaseEngine, and call the GetDatabases() method on it.
I then want to call this method once with my MySQLDatabaseEngine class, and once again with my SqlServerDatabaseEngine class and test the output.
I am currently using MSTest, because this is what I am most familiar with, but I am not against switching my test engine if it proves to be unsuitable for this task. As I only started this morning, I have only written three tests for this so far, so switching would not be a problem at all.
It may not even be necessary to do something different with the configuration of MSTest, but rather to develop some sort of test harness inside MSTest to run a method twice with different parameters. However, I would like to avoid any sort of situation where I have to duplicate the test code for every engine.
I have considered code generation, but I would really like to avoid this.
You can also have private test methods which accept the interface implementation as a parameter, and call them twice, once with each class (MySQL or SQL Server). This way you have one or two callers (better two), but the test itself is written only once.
Or you could use dependency injection to get what you need. I don't have very much experience with DI, but from what I have heard it is great for mocking and for simplifying these usage patterns, among tons of other advantages.
Look into Ninject or Unity.
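A sketch of the first suggestion in MSTest, using the IDatabaseEngine, MySQLDatabaseEngine and SqlServerDatabaseEngine names from your question (the constructor arguments are placeholders):

[TestClass]
public class GetDatabasesTests
{
    // The actual test logic is written once, against the interface.
    private static void AssertGetDatabasesWorks(IDatabaseEngine engine)
    {
        var databases = engine.GetDatabases();

        Assert.IsNotNull(databases);
        // ...further assertions shared by every engine...
    }

    [TestMethod]
    public void GetDatabases_MySql()
    {
        AssertGetDatabasesWorks(new MySQLDatabaseEngine(/* connection details */));
    }

    [TestMethod]
    public void GetDatabases_SqlServer()
    {
        AssertGetDatabasesWorks(new SqlServerDatabaseEngine(/* connection details */));
    }
}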

What are the advantages to wrapping system objects (File, ServiceController, etc) using the Adapter pattern vs. detouring for unit testing?

Consider the following method that stops a service:
Public Function StopService(ByVal serviceName As String, ByVal timeoutMilliseconds As Double) As Boolean
    Try
        Dim service As New ServiceController(serviceName)
        Dim timeout As TimeSpan = TimeSpan.FromMilliseconds(timeoutMilliseconds)
        service.[Stop]()
        If timeoutMilliseconds <= 0 Then
            service.WaitForStatus(ServiceControllerStatus.Stopped)
        Else
            service.WaitForStatus(ServiceControllerStatus.Stopped, timeout)
        End If
        Return service.Status = ServiceControllerStatus.Stopped
    Catch ex As Win32Exception
        ' An error occurred when accessing a system API
        Return False
    Catch ex As TimeoutException
        Return False
    End Try
End Function
In order to unit test the method I basically have two options:
Use the Adapter pattern to wrap the ServiceController methods I need behind an interface I can control. This interface can then be injected into the service class (a.k.a. Inversion of Control). That way I have loosely coupled code and can use the traditional mocking frameworks to test.
Keep the class as is and use Microsoft Moles (or any other code detouring framework) to intercept the calls to ServiceController and return canned results for testing purposes.
I agree that for domain model code, using the "traditional" unit testing approach makes the most sense, as it leads to the design that is easiest to maintain. However, for code that deals with the .NET wrappers around Windows API functionality (file system, services, etc.), is there really an advantage to going through the extra work to get "traditionally" testable code?
It's hard for me to see the disadvantages of using Microsoft Moles for things such as ServiceController (or the File object), and I really don't see any advantage to the traditional approach in this case. Am I missing anything?
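For reference, option 1 would look roughly like this in C# (the interface and class names are just illustrative; StopService would then receive the adapter, or a factory for it, instead of creating the ServiceController itself):

using System;
using System.ServiceProcess;

// Wrap only the ServiceController members that StopService actually uses.
public interface IServiceControllerAdapter
{
    ServiceControllerStatus Status { get; }
    void Stop();
    void WaitForStatus(ServiceControllerStatus desiredStatus);
    void WaitForStatus(ServiceControllerStatus desiredStatus, TimeSpan timeout);
}

public class ServiceControllerAdapter : IServiceControllerAdapter
{
    private readonly ServiceController _inner;

    public ServiceControllerAdapter(string serviceName)
    {
        _inner = new ServiceController(serviceName);
    }

    public ServiceControllerStatus Status
    {
        get { return _inner.Status; }
    }

    public void Stop()
    {
        _inner.Stop();
    }

    public void WaitForStatus(ServiceControllerStatus desiredStatus)
    {
        _inner.WaitForStatus(desiredStatus);
    }

    public void WaitForStatus(ServiceControllerStatus desiredStatus, TimeSpan timeout)
    {
        _inner.WaitForStatus(desiredStatus, timeout);
    }
}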
Great question, btw. I just had a look at the MS Moles video. Although I'm skeptical of MS unit-testing tools, I must say this one looks interesting. My comparison stands at:
Adapter/Facade
Pro: allows you to extract a meaningful role with intention-revealing methods. E.g. ServiceManager.StartService(name) could abstract the details (1. ServiceController.GetServices(), 2. handle the case where ServiceController.Status != Stopped, 3. ServiceController.Start()). The mock/fake approach here would involve less work than setting up 3 delegates. This approach is also an opportunity to improve your design by coming up with meaningful contracts/interfaces (it also lets you hide stuff that you don't care about, e.g. WinAPI semantics, constants, etc.).
Pro: Mocking frameworks would give you better diagnostics for argument checks, number of times called, expectations not met, etc.
Interceptor
Pro: Less work if you're just interested in stubbing out a problematic call on a dependency
Pro: definitely a good tool in your toolbox when dealing with legacy code (where the fear of change is overwhelming)
Con: does it have an MSTest dependency? Initial searches seem to indicate that you need some plugins or extensions if you're not using MSTest.
I believe this is a good case for mocking, and there are some advantages to doing it with IoC:
You do actual unit testing, as your tests aren't testing the underlying layers (that would be an integration test).
It's easy to plug and unplug a mock object.
You don't need to hand-write a "fake" implementation for each dependency; you get a "fake", mocked implementation by configuration.
For me, the first reason is the most important.
Why not use Moles? Because I believe it's not the right tool for something like this. Moles is more for things like pinning DateTime.Now to a specific time, so you can test some code in a given situation whenever you want, with no trouble.
Moles is an easy way to auto-mock certain methods and properties in specific tests, while IoC plus a fake is about isolating the unit under test.
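For example, assuming StopService has been refactored to take an adapter interface like the IServiceControllerAdapter sketched in the question (ServiceHelper here is a hypothetical home for the refactored method), the IoC + mock route looks like this with Moq and MSTest:

[TestMethod]
public void StopService_ReturnsTrue_WhenServiceReachesStoppedState()
{
    var controller = new Mock<IServiceControllerAdapter>();
    controller.Setup(c => c.Status).Returns(ServiceControllerStatus.Stopped);

    bool result = ServiceHelper.StopService(controller.Object, 500);

    Assert.IsTrue(result);
    controller.Verify(c => c.Stop(), Times.Once());
}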

Third Party Components with TDD

Am trying to get started using TDD on a class which spits out an object belonging to a third party component. However am getting a bit confused in that apparently:
a) With unit tests objects should be tested in isolation
b) Third-party components should be wrapped into an adapter
Do these rules apply when writing tests for code which returns an instance of an object belonging to a third party component? As an example, here's the test so far:
// Arrange
string foodXml = "<food><ingredient>Cabbages</ingredient>" +
                 "<ingredient>Bananas</ingredient></food>";
IFoodMixer mixer = new FoodMixer();
// Act
// Smoothie is the third-party component object
Smoothie urgh = mixer.Mix(foodXml);
// Assert
Assert.AreEqual("Cabbages", urgh.Ingredients[0].Name);
Assert.AreEqual("Bananas", urgh.Ingredients[1].Name);
Apologies if this question seems a bit basic (or if the concept above seems a tad silly!) - am just struggling to understand how the two rules above could apply in this situation.
Thanks in advance for any advice given!
I would be practical with it. If Smoothie is just a data object, don't bother wrapping it.
There's something inside that FoodMixer which is creating the Smoothie in the first place. If that's a 3rd party component, I would wrap it up (you can delegate from a class to a static method if required), then dependency-inject the wrapper and mock it out in your unit test.
Your unit test is then describing the behaviour and responsibilities of your FoodMixer, independently of the SmoothieMaker (whether it's 3rd-party or otherwise). Part of the FoodMixer's responsibility is to ask the SmoothieMaker for a Smoothie, not to actually make the Smoothie itself. By mocking it out we can express that responsibility and class scope.
If your Smoothie is not just a data object but has rich behaviour, I would wrap that within your wrapped SmoothieMaker too.
Now you are completely decoupled from your 3rd party libraries, and you can unit-test easily as a useful by-product.
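A sketch of that shape, with an ISmoothieMaker wrapper invented for illustration and Moq used for the test:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Wrapper around the 3rd-party smoothie creation (delegating to its static method
// or factory as needed).
public interface ISmoothieMaker
{
    Smoothie Make(IEnumerable<string> ingredients);
}

public class FoodMixer : IFoodMixer
{
    private readonly ISmoothieMaker _smoothieMaker;

    public FoodMixer(ISmoothieMaker smoothieMaker)
    {
        _smoothieMaker = smoothieMaker;
    }

    public Smoothie Mix(string foodXml)
    {
        // FoodMixer's own responsibility: parse the XML and ask for a smoothie.
        var ingredients = XDocument.Parse(foodXml)
                                   .Descendants("ingredient")
                                   .Select(i => i.Value);
        return _smoothieMaker.Make(ingredients);
    }
}

// In the unit test, FoodMixer's behaviour is then described without touching the
// 3rd-party library at all:
// var maker = new Mock<ISmoothieMaker>();
// var mixer = new FoodMixer(maker.Object);
// mixer.Mix(foodXml);
// maker.Verify(m => m.Make(It.Is<IEnumerable<string>>(
//     i => i.SequenceEqual(new[] { "Cabbages", "Bananas" }))));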
Look at Mockito as a simpler way to create mocks automatically and verify assertions, instead of using adapters.
There are also many good tutorials on Mockito (and JMock) that are also good TDD tutorials.
