Where do I put my mocks? - C#

I'm struggling to get mocks working, for a change, and was wondering where people generally put their mock classes. I seem to have three basic choices, none of which seems to work.
I can put them in with the application assembly itself, in which case they ship with the application, which seems bad, but they are available for unit tests during the final builds and there are no circular references. This seems the simplest approach.
I can create a separate mock assembly, so they are available during the unit tests and can be consumed from both the application and the test application, but I end up either having to move all of the actual types into this assembly or creating circular references.
I can put them in the test assembly, but then they cannot be used from the application itself, and therefore I can't use them as part of the process of building up chunks of the application.
I tend to use the mocks to help develop the system as well as for testing, so I find it hard to know where to put them. Additionally, all final releases of the code have to run through the unit test process, so I need the mocks available during the build cycle.
Does anyone have any thoughts as to where mock classes should be placed?
thanks for any help
T

Your mocks should go in your unit test projects. Your application should not be dependent on your mock objects. Generally your application will use interfaces and your mocks will implement those interfaces. Your application won't need to reference your test project, and it shouldn't.
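For example (hypothetical types), the interface lives in the application assembly while the mock lives only in the test project:

// Application assembly: the code under test depends only on this interface.
public class Order { /* details omitted */ }

public interface IOrderRepository
{
    Order GetOrder(int id);
}

// Test project: a hand-rolled mock implementing the application's interface.
// (A framework such as Moq could generate this for you instead.)
public class MockOrderRepository : IOrderRepository
{
    public Order OrderToReturn { get; set; }

    public Order GetOrder(int id)
    {
        return OrderToReturn;
    }
}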

Mocks should be kept in a separate project. There are three options in total:
Mocks in a unit test project
This will not work if the UI project needs to use the mocks at startup (e.g. a mock service, a partial integration test, or a smoke test). Even if the test project is referenced in a config file through dependency injection, we do not want to carry unit test DLLs into other environments. Moreover, the focus these days is shifting towards integration tests, as unit tests alone are less productive; that is outside the scope of this question, but we should realise mocks are not just about unit tests.
Mocks in the application projects themselves (service mocks in the service project)
Developers can accidentally forget to remove the mocks, e.g. a new developer tries the mocks and forgets to take them out of the dependency config files. Let us not leave that to chance, as it can hinder the expansion of teams.
Mocks in a separate project
Here both the unit test projects and the other startup projects can reference the mocks. Integration tests driven through the front end gain the additional possibility of mocking a certain area (e.g. external APIs), or of smoke testing the UI with mocks while other teams deploy the back end. In short, we have a lot of options for using the mocks; they are not tied to unit tests alone.
But the most important benefit is that, when we want to be sure before going live, we can remove the mock project (or DLL) from the deployment. That way, if any project or config file accidentally references a mock, we get a runtime error, which helps nip the problem in the bud.
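As a sketch (the type and setting names are made up), the composition root can load the implementation type from configuration, so removing the mock DLL from a deployment makes any leftover reference fail immediately:

using System;
using System.Configuration;

// app.config decides which implementation is used, e.g.
//   "MyApp.Services.PaymentService, MyApp.Services"   (real service)
//   "MyApp.Mocks.MockPaymentService, MyApp.Mocks"      (mock project)
string typeName = ConfigurationManager.AppSettings["paymentServiceType"];

// Throws at startup if the configured type (e.g. a forgotten mock) is not deployed.
var serviceType = Type.GetType(typeName, throwOnError: true);
var paymentService = (IPaymentService)Activator.CreateInstance(serviceType);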

What we do on our projects is identify internal and external dependencies. Mocks for internal dependencies go in the unit-test project (or a separate Mocks project if they are used across the solution). Mocks for external dependencies go into the application itself, and are then used for deployment and integration testing.
An external dependency is something that is environmental - so, Active Directory, a web-service, a database, a logger, that sort of thing. Dependency injection is handled in the same way - external dependencies get defined in a configuration file, so we can easily choose which we want to use at run-time.
An internal dependency is pretty much everything else.

Related

How to unit test methods that run git commands

I have a project that helps teams with their workflow in git. It has some functionality to enforce naming conventions and keep branches in sync with the remote (i.e. it prevents branches from diverging as a result of the user not pulling before starting development).
The project is built around abstractions of the entities in a repo, such as an interface ICommitTag which models a tag in git (name, SHA, creator, etc.). Any part of the code that depends on branches or tags can be mocked and tested; however, the parts of the code that create branches or tags, or perform any operation that changes the state of the repo, can't be tested reliably. I am encountering issues on releases where some of these functions fail for corner cases, which means I am relying on functional testing to check that these parts of the app are working.
When I say any part of the code that depends on branches or tags, I mean that I use an interface such as:
interface ICommitTagSource
{
    IEnumerable<ICommitTag> GetAllTags();
}
When I am testing code that depends on tags, the method under test resolves (using Prism IoC) an instance of ICommitTagSource. In my unit test I create mock ICommitTag values and a mock ICommitTagSource that returns the mocked commit tags; mocking is done with Moq. The unit test then registers the mock instances.
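Roughly, that setup looks something like this (the Name property is illustrative, and the exact registration call depends on the container in use):

var tag = new Mock<ICommitTag>();
tag.Setup(t => t.Name).Returns("v1.2.3");   // ICommitTag assumed to expose a Name property

var tagSource = new Mock<ICommitTagSource>();
tagSource.Setup(s => s.GetAllTags()).Returns(new[] { tag.Object });

// Registered with the container so the code under test resolves the mock
// instead of the real implementation, e.g.:
// containerRegistry.RegisterInstance<ICommitTagSource>(tagSource.Object);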
Can anyone suggest a strategy for reliably testing the parts of my app that are responsible for state changes in a repo? Is there a general strategy for testing something like this?
I have tried using test repos; however, my problem there is that I need so many repos to cover the various parts that it's hard to manage. The tests are cumbersome and not the most reliable, because previous test runs change the state. One issue I have run into is tests that should create branches failing because the branch already exists from a previous test where it was not deleted afterwards.

How to test an executable in .NET

Every time I deal with writing tests in .NET (C#), I read that you can test DLLs only. But my app doesn't have any DLLs, and I want to test some controllers and behaviours too. How is that possible without creating a DLL project? Is it possible at all?
Hints are welcome.
Your unit test project(s) can still reference application projects like any other class library. It may be considered less ideal, but there's nothing preventing it from working. As the logic of your system grows, even a little bit, you'll certainly want to consider moving the business logic into Class Library projects and keeping the application layer as thin as possible.
I read, you can test DLLs only.
This was an oversimplification. What they probably meant was that you can test discrete functionality only. Regardless of what kind of project is hosting that functionality, the functionality itself has to be well defined, with separated concerns, and individually testable. If that's causing a problem in your design, then you have more work to do in order to unit test your code.
But to get started, simply create your unit test project and reference your other project. Then start writing tests for your individual units of functionality. Each test should be testing only one discrete thing, and should consist of the simple steps of:
Arrange
Act
Assert
Unit tests shouldn't require any additional setup, nor should they produce any side effects. This is where the "discrete" part comes in: each test should stand on its own and not depend on the others.
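For example, a minimal NUnit test following that pattern might look like this (the PriceCalculator class is just a placeholder):

using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Applies_ten_percent_discount()
    {
        // Arrange
        var calculator = new PriceCalculator();   // hypothetical class under test

        // Act
        decimal result = calculator.ApplyDiscount(100m, 0.10m);

        // Assert
        Assert.AreEqual(90m, result);
    }
}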

Retrofit unit tests to large solution, IOC, Moq

I am in the process of retrofitting unit tests to an ASP.NET solution written in VB.NET and C#.
The unit tests need to verify the current functionality and act as a check against future breaking changes.
The solution comprises:
1 MVC web project
written in vb.net (don't ask, it's a legacy thing)
10 other supporting projects each containing logically grouped functionality
written in C#, each project contains repositories and DAL
All the classes are tightly coupled as there is no inversion of control (IOC) implemented anywhere, yet.
Currently, to test a controller there is the following stack:
controller
repository
dal
logging
First question: to unit test this correctly, would I set up one test project and run all tests from it, or should I set up one test project for each project to test the functionality of that DLL only?
Second question: do I need to implement IoC to be able to use Moq?
Third question: is it even possible to refactor IoC into a huge solution like this?
Fourth question: what other options are available to get this done ASAP?
I am in the process of retrofitting unit tests to an ASP.NET solution written in VB.NET and C#. The unit tests need to verify the current functionality and act as a check against future breaking changes.
When working with a large code base that doesn't have unit tests and hasn't been written with testing in mind, there is a good chance that in order to write a useful set of unit tests you will have to modify the code, hence you're going to be triggering the event that you're planning on writing the unit tests to support. This is obviously risky, but may not be any riskier than what you're already doing on a day to day basis.
There are a number of approaches that you could take (and there's a good chance that this question will be closed as too broad). One approach is to create a good set of integration tests to ensure that the core functionality is working. These tests won't be as fast to run as unit tests, but they will be further decoupled from the legacy code base. This will give you a good safety net for any changes that you need to make as part of introducing unit testing.
If you have an appropriate version of Visual Studio, then you may also be able to use shims (or, if you have funds, Typemock may be an option) to isolate elements of your application when writing your initial tests. So, you could for example create shims of your DAL to isolate the rest of your code from the database.
First question: to unit test this correctly, would I set up one test project and run all tests from it, or should I set up one test project for each project to test the functionality of that DLL only?
Personally, I prefer to think of each assembly as a testable unit, so I tend to create at least one test project for each assembly containing production code. Whether or not that makes sense, though, depends a bit on what's contained in each of the assemblies... I'd also tend to have at least one test project for integration tests of the top-level project.
Second question: do I need to implement IoC to be able to use Moq?
The short answer is no, but it depends on what your classes do. If you want to test using Moq, then it's certainly easier to do so if your classes support dependency injection, although you don't need to use an IoC container to achieve this. Hand-rolled injection, either through constructors as below or through properties, can form a bridge that allows test stubs to be injected.
public SomeConstructor(ISomeDependency someDependency = null)
{
    if (null == someDependency)
    {
        someDependency = new SomeDependency();
    }
    _someDependency = someDependency;
}
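A test can then supply the dependency explicitly instead of letting the default kick in; a quick sketch with Moq (assuming the constructor above belongs to a class named SomeConstructor):

var dependency = new Mock<ISomeDependency>();
var sut = new SomeConstructor(dependency.Object);   // the real SomeDependency is never created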
Third question: is it even possible to refactor IoC into a huge solution like this?
Yes, it's possible. The bigger question is: is it worth it? You appear to be suggesting a big-bang approach to the migration. If you have a team of developers that don't have much experience in this area, this seems awfully risky. A safer approach might be to target a specific area of the application and migrate that section. If your assemblies are discrete then they should form fairly easy split points in your application. Learn what works and what doesn't, along with what benefits and unexpected pain you're feeling. Use that to inform your decision about how and when to migrate the rest of the code.
Fourth question: what other options are available to get this done ASAP?
As I've said above, I'm not sure that ASAP is really the right approach to take. Working towards unit-testing can be done as a slow migration, adding tests as you actually change the code due to business requirements. This helps to ensure that testers are also allocated to catch any errors that you introduce as part of the refactoring that might need to take place to support the testing.

How to do integration testing in .NET with real files?

I have some classes that implement logic related to the file system and files. For example, I am performing the following tasks as part of this logic:
checking whether a certain folder has a certain structure (e.g. it contains subfolders with specific names, etc.)
loading some files from those folders and checking their structure (e.g. these are configuration files located at a certain place within a certain folder)
loading additional files for testing/validation based on the configuration file (e.g. the config file contains information about other files in the same folder that should have a particular internal structure, etc.)
Now, all this logic has a workflow, and exceptions are thrown if something is not right (e.g. the configuration file is not found at the expected folder location). In addition, the Managed Extensibility Framework (MEF) is involved in this logic, because some of the files I am checking are managed DLLs that I manually load into MEF aggregates, etc.
Now I'd like to test all this in some way. I was thinking of creating several physical test folders on disk that cover various test cases, and then running my code against them. I could create, for example:
folder with correct structure and all files being valid
folder with correct structure but with invalid configuration file
folder with correct structure but missing configuration file
etc...
Would this be the right approach? I am not sure, though, how exactly to run my code in this scenario... I certainly don't want to run the whole application and point it at these mocked folders. Should I use a unit testing framework to write a kind of "unit test" that executes my code against these file system objects?
In general, is this a correct approach for this kind of testing scenario? Are there better approaches?
First of all, I think it is better to write unit tests that exercise your logic without touching any external resources. Here you have two options:
use an abstraction layer to isolate your logic from external dependencies such as the file system. You can easily stub or mock these abstractions in unit tests (by hand or with the help of a constrained isolation framework such as NSubstitute, FakeItEasy or Moq). I prefer this option, because the tests push you towards a better design.
if you have to deal with legacy code (only in this case), you can use one of the unconstrained isolation frameworks (such as TypeMock Isolator, JustMock or Microsoft Fakes) that can stub/mock pretty much everything (for instance, sealed and static classes or non-virtual methods). But they cost money. The only "free" option is Microsoft Fakes, and only if you are the happy owner of Visual Studio 2012/2013 Premium/Ultimate.
In unit tests you don't need to test the logic of external libraries such as MEF.
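As a sketch of the first option (the interface and the class under test are illustrative), the file system sits behind an abstraction that Moq can stub, so no real files are touched:

using System.IO;
using Moq;
using NUnit.Framework;

// Illustrative abstraction over the parts of the file system the logic needs.
public interface IFileSystem
{
    bool DirectoryExists(string path);
    string ReadAllText(string path);
}

[Test]
public void Throws_when_the_configuration_file_is_missing()
{
    var fileSystem = new Mock<IFileSystem>();
    fileSystem.Setup(fs => fs.DirectoryExists(@"c:\input")).Returns(true);
    fileSystem.Setup(fs => fs.ReadAllText(It.IsAny<string>()))
              .Throws(new FileNotFoundException());

    var checker = new FolderStructureChecker(fileSystem.Object);   // hypothetical class under test

    Assert.Throws<FileNotFoundException>(() => checker.Validate(@"c:\input"));
}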
Secondly, if you want to write integration tests, then you need to write a "happy path" test (when everything is OK) and some tests that exercise your logic in boundary cases (file or directory not found). Unlike @Sergey Berezovskiy, I recommend creating separate folders for each test case. The main advantages are:
you can give your folders meaningful names that more clearly express your intentions;
you don't need to write complex (i.e. fragile) setup/teardown logic;
even if you decide later to use another folder structure, you can change it more easily, because you will already have working code and tests (refactoring under a test harness is much easier).
For both unit and integration tests you can use ordinary unit testing frameworks (like NUnit or xUnit.net). With these frameworks it is pretty easy to launch your tests in continuous integration scenarios on your build server.
If you decide to write both kinds of tests, then you need to separate the unit tests from the integration tests (you can create a separate project for each kind of test). Reasons for this:
unit tests are a safety net for developers. They must provide quick feedback about the expected behavior of system units after the latest code changes (bug fixes, new features). If they are run frequently, a developer can quickly and easily identify the piece of code that broke the system. Nobody wants to run slow unit tests.
integration tests are generally slower than unit tests, but they have a different purpose: they check that the units work as expected with real dependencies.
You should test as much logic as possible with unit tests, by abstracting calls to the file system behind interfaces. Using dependency injection and a testing-framework such as FakeItEasy will allow you to test that your interfaces are actually being used/called to operate on the files and folders.
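For example (hypothetical IFileSystem and ConfigLoader types), FakeItEasy can both stub the abstraction and verify that it was actually called:

using FakeItEasy;
using NUnit.Framework;

[Test]
public void Loads_configuration_through_the_file_system_abstraction()
{
    var fileSystem = A.Fake<IFileSystem>();
    A.CallTo(() => fileSystem.ReadAllText(@"c:\data\app.config")).Returns("<config />");

    var loader = new ConfigLoader(fileSystem);   // hypothetical class under test
    loader.Load(@"c:\data\app.config");

    // Verify the file was reached through the abstraction rather than System.IO directly:
    A.CallTo(() => fileSystem.ReadAllText(@"c:\data\app.config")).MustHaveHappened();
}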
At some point however, you will have to test the implementations working on the file-system too, and this is where you will need integration tests.
The things you need to test seem to be relatively isolated since all you want to test is your own files and directories, on your own file system. If you wanted to test a database, or some other external system with multiple users, etc, things might be more complicated.
I don't think you'll find any "official rules" for how best to do integration tests of this type, but I believe you are on the right track. Some ideas you should strive towards:
Clear standards: Make the rules and purpose of each test absolutely clear.
Automation: The ability to re-run tests quickly and without too much manual tweaking.
Repeatability: A test-situation that you can "reset", so you can re-run tests quickly, with only slight variations.
Create a repeatable test-scenario
In your situation, I would set up two main folders: One in which everything is as it is supposed to be (i.e. working correctly), and one in which all the rules are broken.
I would create these folders and any files in them, then zip each of the folders, and write logic in a test-class for unzipping each of them.
These are not really tests; think of them instead as "scripts" for setting up your test scenario, enabling you to delete and recreate your folders and files easily and quickly, even if your main integration tests should change or mess them up during testing. The reason for putting them in a test class is simply to make them easy to run from the same interface as you will be working with during testing.
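As an illustration (folder and file names are made up), such a setup "script" can be as simple as:

using System.IO;
using System.IO.Compression;
using NUnit.Framework;

[TestFixture]
public class TestScenarioSetup
{
    // Not a real test - just a handy "script" for resetting a scenario folder.
    [Test, Explicit]
    public void Recreate_valid_scenario_folder()
    {
        if (Directory.Exists(@"c:\testdata\valid"))
            Directory.Delete(@"c:\testdata\valid", recursive: true);

        ZipFile.ExtractToDirectory(@"c:\testdata\valid.zip", @"c:\testdata\valid");
    }
}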
Testing
Create two sets of test-classes, one set for each situation (correctly set up folder vs. folder with broken rules). Place these tests in a hierarchy of folders that feels meaningful to you (depending on the complexity of your situation).
It's not clear how familiar you are with unit/integration testing. In any case, I would recommend NUnit. I like to use the extensions in Should as well. You can get both of these from NuGet:
Install-Package NUnit
Install-Package Should
The should-package will let you write the test-code in a manner like the following:
someCalculatedIntValue.ShouldEqual(3);
someFoundBoolValue.ShouldBeTrue();
Note that there are several test runners available to run your tests with. I've personally only had any real experience with the runner built into ReSharper, but I'm quite satisfied with it and have no problem recommending it.
Below is an example of a simple test-class with two tests. Note that in the first, we check for an expected value using an extension method from Should, while we don't explicitly test anything in the second. That is because it is tagged with [ExpectedException], meaning it will fail if an Exception of the specified type is not thrown when the test is run. You can use this to verify that an appropriate exception is thrown whenever one of your rules is broken.
[TestFixture]
public class When_calculating_sums
{
    private MyCalculator _calc;
    private int _result;

    [SetUp] // Runs before each test
    public void SetUp()
    {
        // Create an instance of the class to test:
        _calc = new MyCalculator();

        // Logic to test the result of:
        _result = _calc.Add(1, 1);
    }

    [Test] // First test
    public void Should_return_correct_sum()
    {
        _result.ShouldEqual(2);
    }

    [Test] // Second test
    [ExpectedException(typeof(DivideByZeroException))]
    public void Should_throw_exception_for_invalid_values()
    {
        // Divide by 0 should throw a DivideByZeroException:
        var otherResult = _calc.Divide(5, 0);
    }

    [TearDown] // Runs after each test (seldom needed in practice)
    public void TearDown()
    {
        _calc.Dispose();
    }
}
With all of this in place, you should be able to create and recreate test-scenarios, and run tests on them in a easy and repeatable way.
Edit: As pointed out in a comment, Assert.Throws() is another option for ensuring exceptions are thrown as required. Personally, I like the tag-variant though, and with parameters, you can check things like the error message there too. Another example (assuming a custom error message is being thrown from your calculator):
[Test]
[ExpectedException(typeof(DivideByZeroException),
    ExpectedMessage = "Attempted to divide by zero")]
public void When_attempting_something_silly()
{
    ...
}
I'd go with a single test folder. For the various test cases you can put different valid/invalid files into that folder as part of the context setup. In the test teardown, just remove those files from the folder.
E.g. with SpecFlow:
Given the configuration file does not exist
When something
Then foo

Given the configuration file exists
And some dll does not exist
When something
Then bar
Define each context setup step as copying (or not copying) the appropriate file to your folder. You can also use a table to define which files should be copied to the folder:
Given some scenario
| FileName |
| a.config |
| b.invalid.config |
When something
Then foobar
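A corresponding step definition could then copy the listed files into the test folder; a sketch (the paths are illustrative):

using System.IO;
using TechTalk.SpecFlow;

[Binding]
public class FileSetupSteps
{
    private const string TestFolder = @"c:\testdata\current";
    private const string TemplateFolder = @"c:\testdata\templates";

    [Given(@"some scenario")]
    public void GivenSomeScenario(Table files)
    {
        foreach (var row in files.Rows)
        {
            var fileName = row["FileName"];
            File.Copy(Path.Combine(TemplateFolder, fileName),
                      Path.Combine(TestFolder, fileName),
                      overwrite: true);
        }
    }
}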
I don't know your program's architecture well enough to give good advice, but I will try.
I believe that you don't need to test the real file structure. File access services are provided by the system/framework, and they don't need to be tested. You need to mock these services in the related tests.
Also, you don't need to test MEF. It is already tested.
Use SOLID principles when writing unit tests. In particular, take a look at the Single Responsibility Principle; this will allow you to create unit tests that aren't related to each other. Just don't forget about mocking to avoid dependencies.
For integration tests, you can create a set of helper classes that emulate the file structure scenarios you want to test. This keeps you from being tied to the machine on which the tests run. Such an approach may be more complicated than creating a real file structure, but I like it.
I would build the framework logic and test concurrency issues and file system exceptions to ensure a well-defined test environment.
Try to list all the boundaries of the problem domain. If there are too many, then consider the possibility that your problem is too broadly defined and needs to be broken down. What is the full set of necessary and sufficient conditions required to make your system pass all tests? Then look at every condition and treat it as an individual attack point, and list all the ways you can think of to breach it. Try to prove to yourself that you have found them all. Then write a test for each.
I would go through the above process first for the environment, build and test that first to a satisfactory standard and then for the more detailed logic within the workflow. Some iteration may be required if dependencies between the environment and the detailed logic occur to you during testing.

Effective transition from unit testing to integration testing

I'm currently investigating how we should perform our testing in an upcoming project. In order to find bugs early in the development process, the developers will write unit tests before the actual code (TDD-ish). The unit tests will focus, as they should, on the unit (a method in this case) in isolation, so dependencies will be mocked, etc.
Now, I would also like to test these units when they interact with other units, and I was thinking that there should be an effective best practice for doing this, since the unit tests have already been written. My thought is that the unit tests would be reused but the mocked objects removed and replaced with real ones. The different ideas I have right now are:
Use a global flag in each test class that decides whether mock objects should be used or not. This approach will require several if statements.
Use a factory class that creates either an "instanceWithMocks" or an "instanceWithoutMocks". This approach might be troublesome for new developers to use and requires some extra classes.
Separate the integration tests from the unit tests in different classes. This will however require a lot of redundant code and maintaining the test cases will be twice the work
The way I see it all of these approaches have pros and cons. Which of these would be preferred and why? And is there a better way to effective transition from unit testing to integration testing? Or is this usually done in some other way?
I would go for the third option:

Separate the integration tests from the unit tests in different classes. This will however require a lot of redundant code and maintaining the test cases will be twice the work.
This is because unit tests and integration tests have different purposes. A unit test shows that an individual piece of functionality works in isolation. An integration test shows that different pieces of functionality still work when they interact with each other.
So for a unit test you want to mock things so that you are only testing the one piece of functionality.
For an integration test mock as little as possible.
I would have them in separate projects. What works well at my place is to have a unit test project using NUnit and Moq; those tests are written TDD-style as the code is written. The integration tests are SpecFlow/Selenium, and the feature files are written with the help of the product owner in the planning session, so we can verify that we are delivering what the owner wants.
This does create extra work in the short term but leads to fewer bugs, easier maintenance, and delivery matching requirements.
I agree with most of the other answers that unit testing should be separate from integration testing (option 3).
But I do not agree with your counterarguments:

[...] This (separating unit from integration testing) will however require a lot of redundant code and maintaining the test cases will be twice the work.

Generating objects with test data can be a lot of work, but this can be refactored into test-helper classes (a.k.a. an ObjectMother) that can be used from both unit and integration tests, so there is no need for redundancy there (see the sketch below).
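For example, a tiny ObjectMother-style helper (the Customer type is just a placeholder) that both test projects can share:

// Lives in a shared test-helper project, usable from unit and integration tests alike.
public static class TestCustomers
{
    public static Customer ValidCustomer()
    {
        return new Customer
        {
            Name = "Jane Doe",
            Email = "jane@example.com"
        };
    }
}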
In unit tests you check different conditions of the class under test. For integration testing it is not necessary to re-check every one of these special cases; instead you check that the components work together.
Example
You may have unit tests for 4 different situations where an exception is thrown. For the integration it is not necessary to re-test all 4 conditions; one exception-related integration test is enough to verify that the integrated system can handle exceptions.
An IoC container like Ninject/Autofac/StructureMap may be of use to you here. The unit tests can resolve the system-under-test through the container, and it is simply a matter of registration whether you have mocks or real objects registered. Similar to your factory approach, but the IoC container is the factory. New developers would need a little training to understand, but that's the case with any complex system. The disadvantage to this is that the registration scenarios can become fairly complicated, but it's hard to say for any given system whether they'd be too complicated without trying it out. I suspect this is the reason you haven't found any answers that seem definitive.
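A rough sketch of that idea with Ninject (the interfaces and classes here are made up):

using Moq;
using Ninject;

bool useMocks = true;   // e.g. decided by the test fixture or configuration

var kernel = new StandardKernel();

if (useMocks)
{
    kernel.Bind<IPaymentGateway>().ToConstant(new Mock<IPaymentGateway>().Object);
}
else
{
    kernel.Bind<IPaymentGateway>().To<PaymentGateway>();
}

// The system under test is resolved through the container, so the same test code
// runs against mocks or real implementations depending on the registration above.
var sut = kernel.Get<OrderProcessor>();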
The integration tests should be different classes than your unit tests since you are testing a different behavior. The way that I think of integration tests is that they are the ones that you execute when trying to make sure that everything works together. They would be using inputs to portions of the application and making sure that the expected output is returned.
I think you are mixing up the purposes of unit testing and integration testing.
Unit testing is for testing a single class - this is the low-level API.
Integration testing is testing how classes cooperate - this is another, higher-level API.
Normally, you cannot reuse unit tests in integration testing because they represent different levels of the system view.
Using a Spring context may help with setting up the environment for integration testing.
I'm not sure reusing your unit tests with real objects instead of mocks is the right approach to implementing integration tests.
The purpose of a unit test is to verify the basic correctness of an object in isolation from the outside world. Mocks are there to ensure that isolation. If you replace them with real implementations, you'll actually end up testing something completely different - the correctness of large portions of the same chain of objects - and you'll be testing it redundantly many times.
By making integration tests distinct from unit tests, you'll be able to choose the portions of your system you want to verify - generally, it's a good idea to test parts that involve configuration, I/O, interaction with third-party systems, the UI, or anything else that unit tests have a hard time covering.
