How to unit test methods that run git commands - C#

I have a project that helps teams with their workflow in Git. It has some functionality to enforce naming conventions and keep branches in sync with the remote (i.e. it prevents branches from diverging as a result of the user not pulling before starting development).
The project is developed around abstractions of the entities in a repo, such as an interface ICommitTag which models a tag in Git (name, SHA, creator, etc.). Any part of the code that depends on branches or tags can be mocked and tested. However, the parts of the code that create branches or tags, or perform any operation that changes the repo's state, can't be tested reliably, and I am encountering issues on releases where some of these functions fail for corner cases. This means I am relying on functional testing to check that these parts of the app are working.
When I say "any part of the code that is dependent on branches or tags", I mean that I use an interface:
interface ICommitTagSource
{
    IEnumerable<ICommitTag> GetAllTags();
}
When I am testing code that depends on tags, the method under test resolves (using Prism's IoC container) an instance of ICommitTagSource. In my unit test I create mock values for ICommitTag and a mock ICommitTagSource that returns the mocked commit tags; mocking is done with Moq. The unit test then registers the mock instances.
Can anyone suggest a strategy for reliably testing the parts of my app that are responsible for state changes in a repo? Is there a general strategy for testing something like this?
I have tried using test repos, but my problem there is that I need so many repos to cover the various parts that it's hard to manage. The tests are cumbersome and not the most reliable because previous executions of tests change the state. One issue I have run into is tests that should create branches failing because the branch already exists from a previous test that did not delete it.
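A minimal sketch of the setup described above, assuming the Moq package and a Prism-style container (the RegisterInstance call is shown as a comment; the ICommitTag members are just the ones mentioned in the question):

```csharp
using System.Collections.Generic;
using Moq;

// Redeclared here so the sketch is self-contained; mirrors the interfaces above.
public interface ICommitTag { string Name { get; } string Sha { get; } }

public interface ICommitTagSource
{
    IEnumerable<ICommitTag> GetAllTags();
}

public class TagDependentTests
{
    public void MethodUnderTest_SeesMockedTags()
    {
        // Mock a single tag value.
        var tag = new Mock<ICommitTag>();
        tag.SetupGet(t => t.Name).Returns("v1.0");
        tag.SetupGet(t => t.Sha).Returns("abc123");

        // Mock the source to return the mocked tag.
        var source = new Mock<ICommitTagSource>();
        source.Setup(s => s.GetAllTags()).Returns(new[] { tag.Object });

        // Register with the container so the method under test resolves it:
        // containerRegistry.RegisterInstance<ICommitTagSource>(source.Object);
        // ...then exercise the code under test...
    }
}
```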
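One way to keep repo-backed tests repeatable is to give every test a brand-new repository in a unique temp directory and delete it in teardown, so no branches or tags can leak between runs. A sketch, assuming `git` is on PATH (the fixture name is illustrative):

```csharp
using System;
using System.Diagnostics;
using System.IO;

public sealed class TempGitRepo : IDisposable
{
    public string Path { get; }

    public TempGitRepo()
    {
        // Unique directory per test instance => no state from previous runs.
        Path = System.IO.Path.Combine(
            System.IO.Path.GetTempPath(), "repo-" + Guid.NewGuid().ToString("N"));
        Directory.CreateDirectory(Path);
        Run("init");
    }

    public void Run(string args)
    {
        var psi = new ProcessStartInfo("git", args)
        {
            WorkingDirectory = Path,
            UseShellExecute = false,
        };
        using var p = Process.Start(psi);
        p.WaitForExit();
        if (p.ExitCode != 0)
            throw new InvalidOperationException($"git {args} failed");
    }

    // On Windows, files under .git may be read-only; clear attributes
    // before deleting if Directory.Delete throws.
    public void Dispose() => Directory.Delete(Path, recursive: true);
}

// Usage in a test:
// using var repo = new TempGitRepo();
// repo.Run("checkout -b feature/new-branch");   // exercise the code under test
```

Because each fixture owns its directory, the "branch already exists" failures described above cannot occur across test runs.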

Related

DDD and Unit Test, should I create entities directly or through their Domain service?

Some of the entities under test cannot be created directly using the constructor, but only through a domain service, because the use of a repository is needed, perhaps for some validation that requires a hit in the DB (imagine a unique-code validation).
In my tests I have two options:
Create the entity using the domain service that exposes entity creation. This requires me to mock all the repository interfaces needed by that service and instruct the relevant ones to behave correctly for a successful creation.
Use the entity constructor directly (I use C#, so I can expose an internal constructor to the test assembly) and get the entity while bypassing the service logic.
I'm not sure which is the best approach.
The 1st is the one I prefer because it tests the public behaviour of the domain model: from an outside perspective, the only way to create the entity is through the domain service. But this solution brings in a lot of "Arrange" code due to the mock configuration needed.
The 2nd one is more direct: it creates the object bypassing the service logic. But it's a sort of cheating on the domain model; it assumes that the test code knows the internals of the domain model, and that's not a good thing. The code is a bit more readable, though.
I make use of Builders to create entities in tests, so the configuration code needed by the 1st approach would be isolated in the builder code. But I still want to know what the correct way would be.
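The builder approach mentioned above could look like the following sketch (all type and member names are hypothetical). Whichever creation strategy you pick, the arrange code stays isolated in one place:

```csharp
// A minimal test-data builder. Option 2 is shown: the builder calls the
// internal constructor directly (the test assembly is granted access via
// [InternalsVisibleTo]). Option 1 would instead delegate to the domain
// service here, with its repository mocks configured inside the builder.
public class Order
{
    public string Code { get; }
    public decimal Total { get; }
    internal Order(string code, decimal total) { Code = code; Total = total; }
}

public class OrderBuilder
{
    private string _code = "ORD-001";   // sensible defaults
    private decimal _total = 0m;

    public OrderBuilder WithCode(string code) { _code = code; return this; }
    public OrderBuilder WithTotal(decimal total) { _total = total; return this; }

    public Order Build() => new Order(_code, _total);
}

// var order = new OrderBuilder().WithCode("ORD-042").Build();
```

Because tests only ever see `OrderBuilder`, switching between option 1 and option 2 later means changing one method, not every test.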
Essentially you are asking what 'level' you should test at. Option 2 is very much a Unit Test, as it would test the code of a single class only. Option 1 is more of an Integration Test as it would test several components together.
I tend to prefer Option 2 for unit tests, for the following reasons:
Unit tests are simpler and more effective if they test a single class only. If you use the factory service to create the object under test, your test doesn't have direct control over how the object is constructed. This will lead to messy and tedious test code, such as mocking all the repository interfaces.
I will usually have, in a different part of my test code base, actual Integration Tests (or Acceptance Tests) which test the entire application from front to back via its public interfaces (with external dependencies such as databases mocked/stubbed out). I would expect these tests to cover Option 1 from your question, so I don't really need to repeat Option 1 in the unit test suite.
You may ask, what's the point of starting up my whole application just to test a couple of classes? The answer is quite simple - by sticking to only two levels of testing, your test code base will be clean, readable and easy to refactor. If your tests are very varied in terms of the 'level' that they test at (some test a single class, some a couple of classes together, some the whole application) then the test code just becomes hard to maintain.
Some caveats:
This advice is for if you are developing an "application" that will be deployed and run. If you are developing a "shared library" that will be distributed to other teams to use as they see fit, then you should test from all the public entry points to the library, regardless of the 'level'. (But I still wouldn't call these tests "unit tests" and would separate them in the code base.)
If you don't have the ability to write full integration tests, then I would use Option 1 and 2. Just be wary of the test code base becoming bloated.
One more point - test things together if they change for the same reason. The situation you don't want to end up in after choosing Option 1 is having to change your Entity tests every time you make a change to the factory/repository code. If the behavior of each Entity has not changed, then you shouldn't have to change the tests.
You could probably avoid that conundrum by not creating your entity through a domain service in the first place.
If you feel the need to validate something about an entity before creating it, you could probably see it as a domain invariant and have it enforced by an Aggregate. That aggregate root would expose a method to create the entity.
As soon as the invariant is guaranteed by the Aggregate in charge of spawning the new Entity, everything can be tested against concrete objects in memory since the aggregate should have all needed data inside itself to check the invariant - there is no resorting to an external Repository. You can set up the creator aggregate to be in an invariant breaking state or non-invariant-breaking state all in memory and exercise the test directly on the aggregate's CreateMyEntity method.
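A hypothetical sketch of that idea: the aggregate holds the data needed to check the invariant, so the unique-code rule can be exercised entirely in memory, with no repository involved (all names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Product
{
    public string Code { get; }
    internal Product(string code) => Code = code;
}

public class Catalog   // the aggregate root in charge of spawning Products
{
    private readonly List<Product> _products = new List<Product>();

    public Product CreateProduct(string code)
    {
        // The invariant is checked against state the aggregate already holds,
        // so there is no need to hit a database.
        if (_products.Any(p => p.Code == code))
            throw new InvalidOperationException($"Duplicate code: {code}");
        var product = new Product(code);
        _products.Add(product);
        return product;
    }
}

// In a test, put the aggregate into an invariant-breaking state in memory:
// var catalog = new Catalog();
// catalog.CreateProduct("A1");
// Assert.Throws<InvalidOperationException>(() => catalog.CreateProduct("A1"));
```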
Don't Create Aggregate Roots by Udi Dahan is a good read on that approach - the basic idea is that entities and aggregate roots aren't just born out of nowhere.

Unit test of Dataprovider in .Net

I have run into the issue of unit testing data providers. What is the best way to implement that?
One solution would be to insert something into the database and read it back to make sure it's as expected, then remove it again. But this requires more coding.
The other solution is to have an extra database that I could test against. This also requires a lot of work to implement.
What is the correct way to implement it?
As others have pointed out, what you are describing is called integration testing. Integration testing is something you should definitely do but it's good to understand the differences.
A unit test tests an individual piece of code without any dependencies. Dependencies are things like a database, file system or a web service but also other internal classes that are complex and require their own unit tests. Unit tests are made to run very fast. Especially when performing test driven development (TDD) you want your unit tests to execute in the order of milliseconds.
Integration tests are used to test how different components work together. If you have made sure through unit tests that your business logic is correct, your integration tests only have to make sure that all connections between different elements are in place. Integration tests can take a long time but you have fewer of them than unit tests.
I wrote a blog post on this some time ago that explains the differences and shows you ways to remove external dependencies while unit testing: Unit Testing, hell or heaven?.
Now, regarding your question. When running integration tests against a database you have a couple of options:
Use delta testing. This means that at the beginning of your test you record the current state of your database. For example, you record that there are currently 3 people in the people table. Then in your test you add one person and verify that there are now 4 people in the database. This can be used quite effectively in simple scenarios. However, when your project grows more complex, this is probably not the way to go.
Use a transaction around your unit tests. This is an easy way to make sure that your tests don't leave any data behind. Just start a new transaction (using the TransactionScope class in the .NET Framework) at the beginning of the test. As long as you don't complete the transaction, all changes will be rolled back automatically.
Use a new database for each test. Using localdb support in Visual Studio 2012 and higher, this can be done relatively fast.
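The transaction option above can be sketched as a small test base class, assuming .NET's `System.Transactions.TransactionScope`; wire the two methods into your framework's setup and teardown hooks (e.g. MSTest's `[TestInitialize]`/`[TestCleanup]`):

```csharp
using System.Transactions;

public class DatabaseTestBase
{
    private TransactionScope _scope;

    // Call from the framework's per-test setup hook.
    public void OpenTransaction() => _scope = new TransactionScope();

    // Call from the per-test teardown hook. Disposing a TransactionScope
    // without calling Complete() rolls back every change the test made.
    public void RollBack() => _scope.Dispose();
}

// public class PersonRepositoryTests : DatabaseTestBase
// {
//     // insert/update rows here; all writes are rolled back after each test
// }
```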
I've chosen the transaction-scope approach a couple of times before and it worked quite well. One thing that's very important when writing integration tests like this is to make sure that your tests don't depend on each other. They need to run in whatever order the test runner decides on.
You should also avoid any 'magic numbers'. For example, maybe you know that your database contains 3 people, so in your test you add one person and then assert that there are four in the database. For readers of your tests (which will be you in a couple of days, weeks or months), this is very hard to understand. Make sure that your tests are self-explanatory and that you don't depend on external state that's not obvious from the test.
You cannot unit test external dependencies like database connections. There is a good post here about why this is the case. In short: external dependencies should be tested, but that's integration tests, not unit tests.
Normally you write integration tests when you call your database from code. If you want to write unit tests, you should have a look at mocking frameworks.

Effective transition from unit testing to integration testing

I'm currently investigating how we should perform our testing in an upcoming project. In order to find bugs early in the development process, the developers will write unit tests before the actual code (TDDish). The unit tests will focus, as they should, on the unit (a method in this case) in isolation so dependencies will be mocked etc etc.
Now, I also would like to test these units when they interact with other units, and I was thinking that there should be an effective best practice for this since the unit tests have already been written. My thought is that the unit tests will be reused, but the mocked objects will be removed and replaced with real ones. The different ideas I have right now are:
Use a global flag in each test class that decides whether mock objects should be used. This approach will require several if statements
Use a factory class that creates either an "instanceWithMocks" or an "instanceWithoutMocks". This approach might be troublesome for new developers to use and requires some extra classes
Separate the integration tests from the unit tests in different classes. This will however require a lot of redundant code and maintaining the test cases will be twice the work
The way I see it all of these approaches have pros and cons. Which of these would be preferred and why? And is there a better way to effective transition from unit testing to integration testing? Or is this usually done in some other way?
I would go for the third option:
Separate the integration tests from the unit tests in different classes. This will however require a lot of redundant code and maintaining the test cases will be twice the work
This is because unit tests and integration tests have different purposes. A unit test shows that an individual piece of functionality works in isolation. An integration test shows that different pieces of functionality still work when they interact with each other.
So for a unit test you want to mock things so that you are only testing the one piece of functionality.
For an integration test mock as little as possible.
I would have them in separate projects. What works well at my place is to have a unit test project using NUnit and Moq. These tests are written TDD-style as the code is written. The integration tests are SpecFlow/Selenium, and the feature files are written with the help of the product owner in the planning session, so we can verify that we are delivering what the owner wants.
This does create extra work in the short term but leads to fewer bugs, easier maintenance, and delivery matching requirements.
I agree with most other answers that unit testing should be separate from integration testing (option 3).
But I do not agree with your counter-arguments:
[...] This (separating unit from integration testing) will however require a lot of redundant code and maintaining the test cases will be twice the work.
Generating objects with test data can be a lot of work, but this can be refactored into test-helper classes (aka ObjectMother) that can be used from both unit and integration testing, so there is no need for redundancy there.
In unit tests you check the different conditions of the class under test. For integration testing it is not necessary to re-check every one of these special cases; instead you check that the components work together.
Example
You may have unit tests for 4 different situations where an exception is thrown. For the integration it is not necessary to re-test all 4 conditions; one exception-related integration test is enough to verify that the integrated system can handle exceptions.
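The ObjectMother idea mentioned above can be sketched like this: canned, well-known fixtures that both the unit and the integration suites reuse (all names hypothetical):

```csharp
// Well-known test objects, shared by unit and integration tests,
// so neither suite duplicates the data-setup code.
public class Customer
{
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

public static class CustomerMother
{
    public static Customer Active() =>
        new Customer { Name = "Alice", IsActive = true };

    public static Customer Suspended() =>
        new Customer { Name = "Bob", IsActive = false };
}

// Both suites write simply:
// var customer = CustomerMother.Active();
```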
An IoC container like Ninject/Autofac/StructureMap may be of use to you here. The unit tests can resolve the system-under-test through the container, and it is simply a matter of registration whether you have mocks or real objects registered. Similar to your factory approach, but the IoC container is the factory. New developers would need a little training to understand, but that's the case with any complex system. The disadvantage to this is that the registration scenarios can become fairly complicated, but it's hard to say for any given system whether they'd be too complicated without trying it out. I suspect this is the reason you haven't found any answers that seem definitive.
The integration tests should be different classes than your unit tests since you are testing a different behavior. The way that I think of integration tests is that they are the ones that you execute when trying to make sure that everything works together. They would be using inputs to portions of the application and making sure that the expected output is returned.
I think you are mixing up the purposes of unit testing and integration testing.
Unit testing is for testing a single class - this is the low-level API.
Integration testing is testing how classes cooperate - this is a higher-level API.
Normally, you cannot reuse unit tests in integration testing because they represent different levels of the system view.
Using a Spring context may help with setting up the environment for integration testing.
I'm not sure reusing your unit tests with real objects instead of mocks is the right approach to implementing integration tests.
The purpose of a unit test is to verify the basic correctness of an object in isolation from the outside world. Mocks are there to ensure that isolation. If you substitute them for real implementations, you'll actually end up testing something completely different - the correctness of large portions of the same chain of objects, and you're redundantly testing it many times.
By making integration tests distinct from unit tests, you'll be able to choose the portions of your system you want to verify - generally, it's a good idea to test parts that involve configuration, I/O, interaction with third-party systems, the UI, or anything else that unit tests have a hard time covering.

Unit testing database code [duplicate]

Possible Duplicate:
Unit testing on code that uses the Database
I am just starting with unit testing and wondering how to unit test methods that make actual changes to my database. Would the best way be to put them into transactions and then roll back, or are there better approaches to this?
If you want proper test coverage, you need two types of tests:
Unit tests which mock all your actual data access. These tests will not actually write to the database, but test the behaviour of the class that does (which methods it calls on other dependencies, etc.).
System tests (or integration tests) which check that your database can be accessed and modified. I would consider two types of tests here: simple plain CRUD tests (create/read/update/delete) for each of your model objects, and more complex system tests for your actual methods and everything you deem interesting or valuable to test. Good practice here is to have each test start from an empty (or "ready for the test") database, do its stuff, then check the state of the database. Transactions/rollbacks are one good way to achieve this.
For unit testing you need to mock or stub the data-access code. Mostly you have a repository interface, and you can stub it by creating a concrete repository that stores data in memory, or you can mock it using a dynamic mocking framework.
For system or integration testing, you need to re-create the entire database before each test method in order to maintain a stable state before each test.
As per some of the previous answers if you want to test your data access code then you might want to think about mocks and a system/integration test strategy.
But, if you want to unit test your SQL objects (e.g. stored procedures, views, table constraints), then there are a number of database unit-testing frameworks out there that might be of interest (including one that I have written).
Some implement tests within SQL, others within your code and use mbUnit/NUnit etc.
I have written a number of articles with examples on how I approach this - see http://dbtestunit.wordpress.com/
Other resources that might be of use:
http://www.simple-talk.com/sql/t-sql-programming/close-those-loopholes---testing-stored-procedures--/
http://tsqlt.org/articles/
The general approach is to have a way to mock your database actions, so that your unit tests are not reliant on the database being available or in a certain state. That said, it also implies a design that facilitates the isolation required to mock away your data layer. Unit testing, and how to do it well, is a huge topic. Take a look on Google for mock frameworks and dependency injection for a start.
If you are not developing an O/R mapper, there's no need to test database code. You don't want to test ADO.NET methods, right? Instead you want to verify that the ADO.NET methods are called with the right values.
Search Google for repository pattern. You will create an implementation of IRepository interface with CRUD methods and test/mock this.
If you want to test against a real database, this would be more of an integration test than a unit test. Wrapping your tests in transactions could be an idea to keep your database in a consistent state.
We've done this in a base class and used the TestInitialize and TestCleanup functions to make sure this always happens.
But testing against a real database will certainly bring you performance problems, so make sure from the beginning that you can swap your database access code for something that runs in memory. I don't know which database access code you're targeting, but design patterns like Unit of Work and Repository can help you isolate your database code and replace it with an in-memory solution.
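A minimal sketch of that swap, with a hypothetical repository interface: tests register the in-memory implementation, while production code registers the real database-backed one.

```csharp
using System.Collections.Generic;

// The abstraction both implementations share.
public interface IRepository<T>
{
    void Add(T item);
    IReadOnlyList<T> GetAll();
}

// In-memory stand-in used by tests; the production implementation
// would talk to the database behind the same interface.
public class InMemoryRepository<T> : IRepository<T>
{
    private readonly List<T> _items = new List<T>();

    public void Add(T item) => _items.Add(item);
    public IReadOnlyList<T> GetAll() => _items;
}
```

Because the code under test only depends on `IRepository<T>`, the same tests run against memory in the unit suite and against the real database in the integration suite.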

Where do I put my mocks?

I'm struggling to get mocks working, for a change, and was wondering where people generally put their mock classes. I seem to have three basic choices, none of which seem to work.
I can put them in with the application assembly itself, in which case they ship with the application, which seems bad, but they are available for unit tests during the final builds and there are no circular references. This seems the simplest approach.
I can create a separate mock assembly, so they are available during the unit tests, can be consumed from the application and the test application but I end up with either having to move all of the actual types to this assembly or creating circular references.
I can put them in the test assembly, but then they can't be used from the application itself, and therefore I can't use them as a process for building up chunks of the application.
I tend to try and use the mocks for helping develop the system as well as for the testing parts and therefore I find it hard to know where to put them. Additionally all of the final releases of code have to run through unit test processes therefore I need the mocks available during the build cycle.
Does anyone have any thoughts as to where mock classes should be placed?
thanks for any help
T
Your mocks should go in your unit tests projects. Your application should not be dependent on your mock objects. Generally your application will use interfaces and your mocks will implement those interfaces. Your application won't need to or should reference your test project.
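The layout described above can be sketched as follows (types are hypothetical): the application assembly only knows the interface, and the mock implementation lives in the test project, which references the application rather than the other way around.

```csharp
using System.Collections.Generic;

// --- Application assembly: depends only on the abstraction ---
public interface IMailSender
{
    void Send(string to, string body);
}

// --- Test assembly: implements the interface for tests ---
public class MockMailSender : IMailSender
{
    public List<string> Sent = new List<string>();

    // Records recipients instead of sending anything.
    public void Send(string to, string body) => Sent.Add(to);
}
```

The dependency direction (tests reference app, never the reverse) is what keeps the mocks out of the shipped application.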
Mocks should be kept in a separate project. We have three options in total:
Mocks in a unit test project
This will not work if the UI project needs to use the mocks at startup (e.g. a mock service, a partial integration test, or a smoke test). Even if the test project is referenced in a config file through dependency injection, we do not want to carry unit-test DLLs to other environments. Moreover, the focus is more on integration tests these days, as unit tests are less productive; that is of course outside this topic, but we should realise that mocks are not just about unit tests.
Mocks in the application projects themselves (service mocks in the service project)
Developers can accidentally forget to remove the mocks, e.g. a new developer tries the mocks and forgets to update the dependency config files. Let us not leave it to chance, as it can hinder the expansion of teams.
Mocks in a separate project.
Here both the unit test projects and the other startup projects can reference it. Integration tests driven through the front end gain the additional option of mocking a certain area (e.g. external APIs), or of smoke-testing the UI with mocks (while other teams deploy the back end). In short, we have a lot of options for using the mocks; they are not tied to unit tests alone.
But the most important benefit is that, when we want to be sure before going live, we can remove the mock project (or DLL) from the deployment. That way, if any project or config file accidentally references a mock, we get a runtime error, which helps to nip it in the bud.
What we do on our projects is identify internal and external dependencies. Mocks for internal dependencies go in the unit-test project (or a separate Mocks project if they are used across the solution). Mocks for external dependencies go into the application itself, and are then used for deployment and integration testing.
An external dependency is something that is environmental - so, Active Directory, a web-service, a database, a logger, that sort of thing. Dependency injection is handled in the same way - external dependencies get defined in a configuration file, so we can easily choose which we want to use at run-time.
An internal dependency is pretty much everything else.
