Unit testing a data provider in .NET - C#

I have run into an issue regarding unit testing of data providers. What is the best way to implement that?
One solution would be to insert something into the database, read it back to make sure it's as expected, and then remove it again. But this requires extra coding.
The other solution is to have an extra database that I could test against. This also requires a lot of work to implement.
What is the correct way to implement this?

As others have pointed out, what you are describing is called integration testing. Integration testing is something you should definitely do, but it's good to understand the differences.
A unit test tests an individual piece of code without any dependencies. Dependencies are things like a database, the file system, or a web service, but also other internal classes that are complex and require their own unit tests. Unit tests are made to run very fast. Especially when practicing test-driven development (TDD), you want your unit tests to execute in the order of milliseconds.
Integration tests are used to test how different components work together. If you have made sure through unit tests that your business logic is correct, your integration tests only have to make sure that all connections between different elements are in place. Integration tests can take a long time but you have fewer of them than unit tests.
I wrote a blog post on this some time ago that explains the differences and shows you ways to remove external dependencies while unit testing: Unit Testing, hell or heaven?
Now, regarding your question. When running integration tests against a database, you have a couple of options:
Use delta testing. This means that at the beginning of your test you record the current state of your database. For example, you store that there are currently 3 people in the people table. Then in your test you add one person and verify that there are now 4 people in the database. This can be used quite effectively in simple scenarios; however, when your project grows more complex, this is probably not the way to go.
Use a transaction around your tests. This is an easy way to make sure that your tests don't leave any data behind. Just start a new transaction (using the TransactionScope class in the .NET Framework) at the beginning of the test. As long as you don't complete the transaction, all changes will be rolled back automatically (see the sketch after these options).
Use a new database for each test. Using LocalDB support in Visual Studio 2012 and higher, this can be done relatively quickly.
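A minimal sketch of the transaction approach, assuming an NUnit test fixture and a hypothetical PersonRepository/Person pair standing in for your own data access code:

    using System.Transactions;
    using NUnit.Framework;

    [TestFixture]
    public class PersonRepositoryIntegrationTests
    {
        private TransactionScope _scope;

        [SetUp]
        public void SetUp()
        {
            // All database work done during the test enlists in this ambient transaction.
            _scope = new TransactionScope();
        }

        [TearDown]
        public void TearDown()
        {
            // Disposing without calling Complete() rolls everything back,
            // leaving the database exactly as the test found it.
            _scope.Dispose();
        }

        [Test]
        public void Add_InsertsPerson()
        {
            var repository = new PersonRepository(); // hypothetical repository
            repository.Add(new Person { Name = "Test Person" });

            Assert.IsNotNull(repository.FindByName("Test Person"));
        }
    }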
I've gone with the transaction scope approach a couple of times before, and it worked quite well. One thing that's very important when writing integration tests like this is to make sure that your tests don't depend upon each other. They need to run in whatever order the test runner decides on.
You should also make sure to avoid any 'magic numbers'. For example, maybe you know that your database contains 3 people, so in your test you add one person and then assert that there are four in the database. For readers of your tests (which will include you in a couple of days, weeks or months) this is very hard to understand. Make sure that your tests are self-explanatory and that you don't depend on external state that's not obvious from the test.
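One way to keep a test self-explanatory is to record the state it depends on inside the test itself instead of assuming it. A sketch, again using the hypothetical PersonRepository:

    [Test]
    public void Add_IncreasesPersonCountByOne()
    {
        var repository = new PersonRepository();
        int countBefore = repository.Count(); // record the baseline instead of assuming "3"

        repository.Add(new Person { Name = "Jane Doe" });

        // No magic number: the expectation is derived from state the test recorded itself.
        Assert.AreEqual(countBefore + 1, repository.Count());
    }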

You cannot unit test external dependencies like database connections. There is a good post here about why this is the case. In short: external dependencies should be tested, but that's integration tests, not unit tests.

Normally you write integration tests when you call your database from code. If you want to write unit tests, you should have a look at mocking frameworks.
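For example, with Moq you can replace the database-facing dependency with an in-memory fake. A minimal sketch, where ICustomerRepository, Customer and CustomerService are hypothetical stand-ins for your own types:

    using Moq;
    using NUnit.Framework;

    [Test]
    public void GetCustomerById_ReturnsCustomerFromRepository()
    {
        var repositoryMock = new Mock<ICustomerRepository>();
        repositoryMock.Setup(r => r.GetById(42))
                      .Returns(new Customer { Id = 42, Name = "Jane Doe" });

        var service = new CustomerService(repositoryMock.Object);

        // The test never touches a real database.
        Assert.AreEqual("Jane Doe", service.GetCustomerById(42).Name);
    }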

Related

Retrofit unit tests to large solution, IOC, Moq

I am in the process of retrofitting unit tests for an ASP.NET solution written in VB.NET and C#.
The unit tests need to verify the current functionality and act as a check for future breaking changes.
The solution comprises:
1 MVC web project
written in VB.NET (don't ask, it's a legacy thing)
10 other supporting projects each containing logically grouped functionality
written in C#, each project contains repositories and DAL
All the classes are tightly coupled, as there is no inversion of control (IoC) implemented anywhere, yet.
Currently, to test a controller there is the following stack:
controller
repository
dal
logging
First question: to unit test this correctly, would I set up one test project and run all tests from it, or should I set up one test project per project, to test the functionality of that DLL only?
Second question: do I need to implement IoC to be able to use Moq?
Third question: is it even possible to refactor IoC into a huge solution like this?
Fourth question: what other options are available to get this done ASAP?
I am in the process of retrofitting unit tests for an ASP.NET solution written in VB.NET and C#. The unit tests need to verify the current functionality and act as a check for future breaking changes.
When working with a large code base that doesn't have unit tests and hasn't been written with testing in mind, there is a good chance that in order to write a useful set of unit tests you will have to modify the code, hence you're going to be triggering the very breaking changes that you're planning on writing the unit tests to guard against. This is obviously risky, but may not be any riskier than what you're already doing on a day-to-day basis.
There are a number of approaches that you could take (and there's a good chance that this question will be closed as too broad). One approach is to create a good set of integration tests to ensure that the core functionality is working. These tests won't be as fast to run as unit tests, but they will be further decoupled from the legacy code base. This will give you a good safety net for any changes that you need to make as part of introducing unit testing.
If you have an appropriate version of Visual Studio, then you may also be able to use shims (or, if you have funds, Typemock may be an option) to isolate elements of your application when writing your initial tests. So you could, for example, create shims of your DAL to isolate the rest of your code from the DB.
First question, to unit test this correctly would I set up 1 test project and run all tests from it, or should I set up 1 test project for each project to test the functionality of that DLL only?
Personally, I prefer to think of each assembly as a testable unit, so I tend to create at least one test project for each assembly containing production code. Whether or not that makes sense, though, depends a bit on what's contained in each of the assemblies... I'd also tend to have at least one test project for integration tests of the top-level project.
Second question, do I need to implement IoC to be able to use Moq?
The short answer is no, but it depends on what your classes do. If you want to test using Moq, then it's certainly easier to do so if your classes support dependency injection, although you don't need to use an IoC container to achieve this. Hand-rolled injection, either through constructors as below or through properties, can form a bridge that allows test stubs to be injected.
    public SomeConstructor(ISomeDependency someDependency = null)
    {
        // Fall back to the real implementation when no dependency is injected.
        if (null == someDependency)
        {
            someDependency = new SomeDependency();
        }
        _someDependency = someDependency;
    }
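In a test you can then pass a Moq stub through the optional parameter, while production code keeps using the parameterless form. A brief sketch (SomeConstructor is the class from the snippet above):

    // In a unit test: inject a stub.
    var dependencyMock = new Mock<ISomeDependency>();
    var sut = new SomeConstructor(dependencyMock.Object);

    // In production: omit the argument and get the real SomeDependency.
    var instance = new SomeConstructor();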
Third question, is it even possible to refactor IoC into a huge solution like this?
Yes, it's possible. The bigger question is whether it's worth it. You appear to be suggesting a big-bang approach to the migration. If you have a team of developers that don't have much experience in this area, this seems awfully risky. A safer approach might be to target a specific area of the application and migrate that section. If your assemblies are discrete, then they should form fairly natural split points in your application. Learn what works and what doesn't, along with what benefits and unexpected pain you're feeling, and use that to inform your decision about how and when to migrate the rest of the code.
Fourth question, what other options are available to get this done ASAP?
As I've said above, I'm not sure that ASAP is really the right approach to take. Working towards unit testing can be done as a slow migration, adding tests as you actually change the code due to business requirements. This also helps to ensure that testers are allocated to catch any errors that you introduce as part of any refactoring needed to support the testing.

TDD in a REST API

I am developing a REST API with ServiceStack. I'm taking a TDD approach and write tests for each new service I implement.
My DAL is pretty thin, with my repositories consisting of only CRUD operations. Moreover, the repos inherit from a generic C# repository, with 12 out of my 14 not needing any customization.
For each service I build a test bed and go through all the possible error/success scenarios that can occur.
Is it correct in this scenario to only produce tests for repositories? In what situations should I consider testing other system components?
Thanks
TDD involves using many tests to keep your code clean and functional. It sounds to me that so far you have implemented unit tests. Unit tests are tests that only look at a single class and attempt to determine if the class is working correctly. Nothing more, nothing less.
If you want to test a little more broadly, you can implement integration tests. Integration tests tend to test a full system to see if everything together is capable of doing what it is supposed to.
Unit tests are all supposed to be very fast, so that you can run them all every few minutes without much slowdown.
Integration tests are allowed to take longer, because you might only run them every few hours to see if everything is still integrating well.
A combination of both types of tests helps drive a TDD approach.

Effective transition from unit testing to integration testing

I'm currently investigating how we should perform our testing in an upcoming project. In order to find bugs early in the development process, the developers will write unit tests before the actual code (TDDish). The unit tests will focus, as they should, on the unit (a method in this case) in isolation so dependencies will be mocked etc etc.
Now, I also would like to test these units when they interact with other units, and I was thinking that there should be an effective best practice to do this since the unit tests have already been written. My thought is that the unit tests will be reused but the mocked objects will be removed and replaced with real ones. The different ideas I have right now are:
Use a global flag in each test class that decides whether mock objects should be used or not. This approach will require several if statements.
Use a factory class that creates either an "instanceWithMocks" or an "instanceWithoutMocks". This approach might be troublesome for new developers to use and requires some extra classes.
Separate the integration tests from the unit tests in different classes. This will however require a lot of redundant code and maintaining the test cases will be twice the work
The way I see it all of these approaches have pros and cons. Which of these would be preferred and why? And is there a better way to effective transition from unit testing to integration testing? Or is this usually done in some other way?
I would go for the third option:
Separate the integration tests from the unit tests in different
classes. This will however require a lot of redundant code and
maintaining the test cases will be twice the work
This is because unit tests and integration tests have different purposes. A unit test shows that an individual piece of functionality works in isolation. An integration test shows that different pieces of functionality still work when they interact with each other.
So for a unit test you want to mock things so that you are only testing the one piece of functionality.
For an integration test mock as little as possible.
I would have them in separate projects. What works well at my place is to have a unit test project using NUnit and Moq; this is written TDD-style as the code is written. The integration tests are SpecFlow/Selenium, and the feature files are written with the help of the product owner in the planning session so we can verify that we are delivering what the owner wants.
This does create extra work in the short term but leads to fewer bugs, easier maintenance, and delivery matching requirements.
I agree with most other answers that unit testing should be separate from integration testing (option 3).
But I do not agree with your counterarguments:
[...] This (separating unit from integration testing) will however
require a lot of redundant code and maintaining the test cases will be twice the work.
Generating objects with test data can be a lot of work, but this can be refactored into test-helper classes (aka ObjectMother) that can be used from both unit and integration testing, so there is no need for redundancy there.
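A minimal ObjectMother sketch, with a hypothetical Person type; both the unit test project and the integration test project can reference this one helper:

    public static class PersonMother
    {
        // Canonical, valid test person used by most tests.
        public static Person Typical()
        {
            return new Person { Name = "Jane Doe", Age = 35, Email = "jane@example.com" };
        }

        // Edge case shared by unit and integration tests alike.
        public static Person Minor()
        {
            return new Person { Name = "Sam Small", Age = 12, Email = "sam@example.com" };
        }
    }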
In unit tests you check different conditions of the class under test. For integration testing it is not necessary to re-check every one of these special cases; instead, you check that the components work together.
Example
You may have unit tests for 4 different situations where an exception is thrown. For the integration it is not necessary to re-test all 4 conditions; one exception-related integration test is enough to verify that the integrated system can handle exceptions.
An IoC container like Ninject/Autofac/StructureMap may be of use to you here. The unit tests can resolve the system-under-test through the container, and it is simply a matter of registration whether you have mocks or real objects registered. Similar to your factory approach, but the IoC container is the factory. New developers would need a little training to understand, but that's the case with any complex system. The disadvantage to this is that the registration scenarios can become fairly complicated, but it's hard to say for any given system whether they'd be too complicated without trying it out. I suspect this is the reason you haven't found any answers that seem definitive.
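To illustrate the container-as-factory idea, here is a hedged sketch using Ninject; IEmailSender, SmtpEmailSender and OrderProcessor are made-up types:

    using Moq;
    using Ninject;

    var kernel = new StandardKernel();

    // Production composition root: bind the real implementation.
    kernel.Bind<IEmailSender>().To<SmtpEmailSender>();

    // In the test setup: rebind the same interface to a Moq stub instead.
    var emailMock = new Mock<IEmailSender>();
    kernel.Rebind<IEmailSender>().ToConstant(emailMock.Object);

    // Either way, the system under test is resolved through the container.
    var sut = kernel.Get<OrderProcessor>();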
The integration tests should be different classes than your unit tests since you are testing a different behavior. The way that I think of integration tests is that they are the ones that you execute when trying to make sure that everything works together. They would be using inputs to portions of the application and making sure that the expected output is returned.
I think you are mixing up the purposes of unit testing and integration testing.
Unit testing is for testing a single class - this is the low-level API.
Integration testing tests how classes cooperate - this is a higher-level API.
Normally, you cannot reuse unit tests in integration testing because they represent different levels of the system view.
Using a Spring context may help with setting up the environment for integration testing.
I'm not sure reusing your unit tests with real objects instead of mocks is the right approach to implementing integration tests.
The purpose of a unit test is to verify the basic correctness of an object in isolation from the outside world. Mocks are there to ensure that isolation. If you replace them with real implementations, you'll actually end up testing something completely different - the correctness of large portions of the same chain of objects - and you're redundantly testing it many times.
By making integration tests distinct from unit tests, you'll be able to choose the portions of your system you want to verify - generally, it's a good idea to test parts that involve configuration, I/O, interaction with third-party systems, the UI, or anything else that unit tests have a hard time covering.

Do I need duplicate test methods: one for unit tests and one for integration tests?

I'm new to unit testing, and it seems like most of the information I find is on the unit testing side of things. I'm getting a good grasp of this and am planning on using the MS Test framework with Moq so I don't have to hand-roll any mocks for my unit test dependencies.
Let's say I have the following unit test method:
    [TestMethod]
    public void GetCustomerByIDUnitTest()
    {
        // Uses Moq for the customer-fetching dependency to make sure the
        // ID I set up is the same one returned to the test in the assertion.
    }
Do I have to create another identical test that instead uses the actual Entity Framework and Database call to make an integration test?
    [TestMethod]
    public void GetCustomerByIDIntegrationTest()
    {
        // Uses the actual repository (EF and the database) to do integration testing.
    }
For the purpose of this question, please leave topics about TDD or BDD out; I'm simply trying to determine whether I physically need two separate tests and how to organize them. Is this a requirement when doing both unit and integration testing?
Thanks!
In my opinion, it is somewhat situational. If I am working on a small personal project, then no, I just do the unit tests.
If it is a corporate / enterprise project then I do tend to do both unit and integration tests. However, keep unit and integration tests separated. Developers should be able to run unit tests frequently and quickly. Integration tests can be run less frequently because they usually take a long time to run. Usually I just run integration tests once before a commit, whereas I run unit tests much more frequently.
As an additional note, make your test names explain what should be happening. The test name GetCustomerByIDUnitTest really doesn't tell me much. Better would be something like GetCustomerByID_ReturnsTheCorrectUser_WhenAValidIdIsPassed and, conversely, GetCustomerByID_ReturnsNull_WhenNonExistentIdIsPassed.
I tend to favor a What_Does_When naming convention, but that too is a personal preference. In general, the more explanatory the better though.
Hmm, I hope I'm not failing you by mentioning things you'd rather have left unmentioned, but here are my 2 cents on it. One disclaimer up front: I use NUnit with Rhino Mocks, so the syntax could be different; the concepts are the same though.
Yes, you need separate tests. You can debate whether you want to store the tests in the same test class and tag them with [Category("integrationtest")] so that you can easily run your unit tests without running the integration tests, and the other way around. With your TDD practices (oops, I know you don't want me talking about that :)) you need your unit tests to complete as fast as possible.
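For instance, with NUnit the tagging looks like this (the MSTest equivalent attribute is [TestCategory]); test runners can then include or exclude the category when running:

    [Test]
    [Category("integrationtest")]
    public void GetCustomerByID_HitsTheRealDatabase()
    {
        // slow test that talks to the actual database
    }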
To look at this from a slightly different angle; you are not really duplicating your tests. Your integration tests validate the functionality, while your unit tests validate a method in isolation. So they can very well have completely different names. As long as they make sense to you (or if you develop something with a team: as long as it makes sense for your team).
I think the most important thing is that you find a way that works for you. There isn't really a right or wrong. I think it's a big plus that you are writing both unit tests and integration tests. How you organize them is kinda up to you. I had different approaches in different projects I participated in:
Project A:
1 test class for integration tests
1 test class for unit tests
That helped to create meaningful names for the test classes, they could capture the actual feature we are testing. As for the unit tests, the test class had the same name as the class that we are testing.
Project B:
Mixed up integration tests with unit tests in one test class.
This worked fine as well, although we sometimes did have trouble finding an integration test. But to be honest, with ReSharper at your side, how hard can it be? :)
As far as I know, you should have separate projects for unit testing and integration testing.
The suggestion of this book is to create two projects and name them like ProjectName.UnitTests and ProjectName.IntegrationTests, so that developers can run each of them separately and easily.
You can find many interesting topics and videos about testing here.

Using unit tests and a test database

How would I use NUnit and a test database to verify my code? In theory I would use mocks (Moq), but my code is more in maintenance-and-fix-it mode and I don't have the time to set up all the mocks.
Do I just create a test project, then write tests that actually connect to my test database and execute the code as I would in the app? Then I check the code with asserts and make sure what I'm requesting is what I'm getting back correctly?
How would I use NUnit and a test database to verify my code? In theory
I would use mocks (Moq), but my code is more in maintenance-and-fix-it
mode and I don't have the time to set up all the mocks.
Using mocks is only useful if you want to test the exact implementation behavior of a class. That means you are literally asserting that one class calls a specific method on another class. For example: I want to assert that Ninja.Attack() calls Sword.Unsheath().
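If you did want that kind of interaction test, a Moq sketch could look like this, assuming a Ninja that takes an ISword dependency (both types are made up for the example):

    var swordMock = new Mock<ISword>();
    var ninja = new Ninja(swordMock.Object);

    ninja.Attack();

    // Assert the interaction itself: Attack() must unsheath the sword exactly once.
    swordMock.Verify(s => s.Unsheath(), Times.Once());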
Do I just create a test project, then write tests that actually
connect to my test database and execute the code as I would in the
app? Then I check the code with asserts and make sure what I'm
requesting is what I'm getting back correctly?
This is just a plain old unit test. If there are no obstacles to achieving this, that's a good indicator that this is going to be your most effective method of testing. It's practical and highly effective.
There's one other testing tool you didn't mention, which is called a stub. I highly recommend you read this classic article for more info:
http://martinfowler.com/articles/mocksArentStubs.html
Since we are not talking about a theoretical case, this is what I would do. From my understanding, what you want to test is whether your app is properly connecting to the DB and fetching the desired data (a sketch of such a test follows the steps):
Create a test DB with the same schema
Add some dummy data in that
Open a connection to the DB from the code, request desired data
Write assertions to test what you got from the DB against what you expected
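A sketch of steps 3 and 4 as an NUnit test; the LocalDB connection string and the People table are assumptions to adapt to your own schema:

    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class PeopleQueryTests
    {
        private const string ConnectionString =
            @"Server=(localdb)\MSSQLLocalDB;Database=MyAppTestDb;Integrated Security=true";

        [Test]
        public void PeopleTable_ContainsSeededRows()
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand("SELECT COUNT(*) FROM People", connection))
            {
                connection.Open();
                int count = (int)command.ExecuteScalar();

                // We seeded exactly two dummy rows in step 2.
                Assert.AreEqual(2, count);
            }
        }
    }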
Also, I don't think these tests should be called unit tests, because they are not self-contained and depend on external factors, like whether your database is up and running. I would say they fall closer to integration tests, which check whether different components of your application work as expected when used together.
(Dan's answer ^^ pretty much sums up what I wanted to say)
