How do I test database-related code with NUnit?

I want to write unit tests with NUnit that hit the database. I'd like to have the database in a consistent state for each test. I thought transactions would allow me to "undo" each test so I searched around and found several articles from 2004-05 on the topic:
http://weblogs.asp.net/rosherove/archive/2004/07/12/180189.aspx
http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx
http://davidhayden.com/blog/dave/archive/2004/07/12/365.aspx
http://haacked.com/archive/2005/12/28/11377.aspx
These seem to revolve around implementing a custom attribute for NUnit that adds the ability to roll back DB operations after each test executes.
That's great but...
Does this functionality exist somewhere in NUnit natively?
Has this technique been improved upon in the last 4 years?
Is this still the best way to test database-related code?
Edit: it's not that I want to test my DAL specifically, it's more that I want to test pieces of my code that interact with the database. For these tests to be "no-touch" and repeatable, it'd be awesome if I could reset the database after each one.
Further, I want to ease this into an existing project that has no testing in place at the moment. For that reason, I can't practically script up a database and data from scratch for each test.

NUnit now has a [Rollback] attribute, but I prefer to do it a different way. I use the TransactionScope class. There are a couple of ways to use it.
[Test]
public void YourTest()
{
    using (TransactionScope scope = new TransactionScope())
    {
        // your test code here
    }
}
Since you didn't tell the TransactionScope to commit, it will roll back automatically. It works even if an assertion fails or some other exception is thrown.
The other way is to use [SetUp] to create the TransactionScope and [TearDown] to call Dispose on it. It cuts out some code duplication, but accomplishes the same thing.
[TestFixture]
public class YourFixture
{
    private TransactionScope scope;

    [SetUp]
    public void SetUp()
    {
        scope = new TransactionScope();
    }

    [TearDown]
    public void TearDown()
    {
        scope.Dispose();
    }

    [Test]
    public void YourTest()
    {
        // your test code here
    }
}
This is as safe as the using statement in an individual test because NUnit will guarantee that TearDown is called.
Having said all that, I do think that tests that hit the database are not really unit tests. I still write them, but I think of them as integration tests, and I still see them as providing value. One place I use them often is in testing LINQ to SQL code. I don't use the designer; I hand-write the DTOs and attributes, and I've been known to get them wrong. The integration tests help catch my mistakes.

I just went to a .NET user group where the presenter said he used SQLite's in-memory option in his test setup and teardown. He had to fudge the connection a little and explicitly destroy it, but that gave him a clean DB every time.
http://houseofbilz.com/archive/2008/11/14/update-for-the-activerecord-quotmockquot-framework.aspx
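For illustration, a minimal sketch of that setup assuming the System.Data.SQLite provider (the presenter's exact code isn't in the link). The trick is that the connection must stay open for the whole test, because an in-memory SQLite database disappears when its last connection closes:

using System.Data.SQLite;
using NUnit.Framework;

[TestFixture]
public class InMemorySqliteFixture
{
    private SQLiteConnection connection;

    [SetUp]
    public void SetUp()
    {
        // ":memory:" creates a fresh, empty database tied to this connection.
        connection = new SQLiteConnection("Data Source=:memory:");
        connection.Open();

        // Recreate the schema for every test; the table here is made up.
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "CREATE TABLE People (Id INTEGER PRIMARY KEY, Name TEXT)";
            command.ExecuteNonQuery();
        }
    }

    [TearDown]
    public void TearDown()
    {
        // Explicitly destroying the connection drops the in-memory database.
        connection.Dispose();
    }
}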

I would call these integration tests, but no matter. What I have done for such tests is have the setup methods in the test class clear all the tables of interest before each test. I generally hand-write the SQL to do this so that I'm not using the classes under test.
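As a sketch, that setup might look like this (table names are placeholders):

[SetUp]
public void ClearTables()
{
    // Hand-written SQL so the cleanup doesn't go through the classes under test.
    // Assumes System.Data.SqlClient and a connectionString field on the fixture.
    // Delete child tables before parent tables to satisfy foreign keys.
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "DELETE FROM Orders; DELETE FROM Customers;";
            command.ExecuteNonQuery();
        }
    }
}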
Generally, I rely on an ORM for my data layer and thus don't write many unit tests there; I don't feel a need to unit test code that I didn't write. For code that I add to the layer, I generally use dependency injection to abstract away the actual connection to the database so that when I test my code, it doesn't touch the actual database. Do this in conjunction with a mocking framework for best results.
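For example, a sketch of that dependency-injection shape (the interface, Customer class, and service are all invented here; Moq stands in for whatever mocking framework you prefer):

using Moq;
using NUnit.Framework;

public class Customer { public string Name { get; set; } }

// The code under test depends on an abstraction, not on a live connection.
public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class GreetingService
{
    private readonly ICustomerRepository repository;

    public GreetingService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public string BuildGreeting(int customerId)
    {
        return "Hello, " + repository.GetById(customerId).Name;
    }
}

[TestFixture]
public class GreetingServiceTests
{
    [Test]
    public void BuildGreeting_UsesCustomerName()
    {
        // No database involved: the repository is mocked.
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.GetById(42)).Returns(new Customer { Name = "Ada" });

        var service = new GreetingService(repository.Object);

        Assert.AreEqual("Hello, Ada", service.BuildGreeting(42));
    }
}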

For this sort of testing, I experimented with NDbUnit (working in concert with NUnit). If memory serves, it was a port of DbUnit from the Java platform. It had a lot of slick commands for just the sort of thing you're trying to do. The project appears to have moved here:
http://code.google.com/p/ndbunit/
(it used to be at http://ndbunit.org).
The source appears to be available via this link:
http://ndbunit.googlecode.com/svn/trunk/

Consider creating database scripts that you can run automatically from NUnit as well as manually for other types of testing. For example, if you're using Oracle, kick off SQL*Plus from within NUnit and run the scripts. Such scripts are usually faster to write and easier to read. Also, very importantly, running SQL from Toad or an equivalent tool is more illuminating than running SQL from code or going through an ORM. Generally I'll create both a setup and a teardown script and call them from the setup and teardown methods.
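A sketch of the SQL*Plus variant (credentials, connect string and script names are placeholders):

using System.Diagnostics;
using NUnit.Framework;

[SetUp]
public void RunSetupScript()
{
    // Kick off the same script a developer would run manually from SQL*Plus.
    var info = new ProcessStartInfo("sqlplus", "user/password@testdb @setup.sql")
    {
        UseShellExecute = false
    };
    using (var process = Process.Start(info))
    {
        process.WaitForExit();
        Assert.AreEqual(0, process.ExitCode, "setup.sql failed");
    }
}

The teardown script is run the same way from [TearDown].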
Whether you should be going through the DB at all from unit tests is another discussion. I believe it often does make sense to do so. For many apps the database is the absolute center of action, the logic is highly set based, and all the other technologies and languages and techniques are passing ghosts. And with the rise of functional languages we are starting to realize that SQL, like JavaScript, is actually a great language that was right there under our noses all these years.
Just as an aside, Linq to SQL (which I like in concept though have never used) almost seems to me like a way to do raw SQL from within code without admitting what we are doing. Some people like SQL and know they like it, others like it and don't know they like it. :)

For anyone coming to this thread these days like me, I'd like to recommend trying the Reseed library I'm currently developing for this specific case.
Neither an in-memory DB replacement (it lacks features) nor transaction rollback (transactions can't be nested) was a suitable option for me, so I settled on a simple delete/insert cycle to restore the data. I ended up writing a library to generate those scripts while trying to optimize test speed and keep setup simple. I'd be happy if it helps anyone else.
Another alternative I'd recommend is using database snapshots to restore data, which is of comparable performance and usability.
The workflow is as follows:
delete existing snapshots;
create the database;
insert data;
create a snapshot;
execute a test;
restore from the snapshot;
repeat the previous two steps until no tests are left;
drop the snapshot.
It's suitable if you can use a single data script for all the tests, and it lets you execute the insertion (which is supposed to be the slowest step) only once; moreover, you don't need a data cleanup script at all.
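On SQL Server the snapshot steps are plain T-SQL; here's a sketch driven from C# (database, logical file and path names are placeholders):

using System.Data.SqlClient;

// Once per run, after seeding: snapshot the database.
// 'conn' is an open connection to master; the path must be writable by SQL Server.
static void CreateSnapshot(SqlConnection conn)
{
    new SqlCommand(
        @"CREATE DATABASE TestDb_Snap ON
              (NAME = TestDb, FILENAME = 'C:\Snapshots\TestDb_Snap.ss')
          AS SNAPSHOT OF TestDb", conn).ExecuteNonQuery();
}

// After each test: roll the database back to the snapshot.
// The target database must have no other open connections during the restore.
static void RestoreSnapshot(SqlConnection conn)
{
    new SqlCommand(
        @"RESTORE DATABASE TestDb
          FROM DATABASE_SNAPSHOT = 'TestDb_Snap'", conn).ExecuteNonQuery();
}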
For further performance improvement, as such tests can take a lot of time, consider using a pool of databases and parallelizing the tests.

Related

Unit test of Dataprovider in .Net

I have run into an issue regarding unit testing of data providers. What is the best way to implement that?
One solution would be to insert something into the database and read it back to make sure it's as expected, then remove it again. But this requires more coding.
The other solution is to have an extra database that I could test against. This also requires a lot of work to implement.
What is the correct way to implement it?
As others have pointed out, what you are describing is called integration testing. Integration testing is something you should definitely do but it's good to understand the differences.
A unit test tests an individual piece of code without any dependencies. Dependencies are things like a database, file system or a web service but also other internal classes that are complex and require their own unit tests. Unit tests are made to run very fast. Especially when performing test driven development (TDD) you want your unit tests to execute in the order of milliseconds.
Integration tests are used to test how different components work together. If you have made sure through unit tests that your business logic is correct, your integration tests only have to make sure that all connections between different elements are in place. Integration tests can take a long time but you have fewer of them than unit tests.
I wrote a blog post on this some time ago that explains the differences and shows you ways to remove external dependencies while unit testing: Unit Testing, hell or heaven?.
Now, regarding your question: when running integration tests against a database you have a couple of options.
Use delta testing. This means that at the beginning of your test you record the current state of your database. For example, you store that there are now 3 people in the people table. Then in your test you add one person and verify that there are now 4 people in the database. This can be used quite effectively in simple scenarios. However, when your project grows more complex, this is probably not the way to go.
Use a transaction around your unit tests. This is an easy way to make sure that your tests don't leave any data behind. Just start a new transaction (using the TransactionScope class in the .NET Framework) at the beginning of the test. As long as you don't complete the transaction, all changes will be rolled back automatically.
Use a new database for each test. Using localdb support in Visual Studio 2012 and higher, this can be done relatively fast.
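A sketch of the LocalDB option with NUnit (the v11.0 instance name matches VS2012; the database naming scheme is a placeholder):

using System;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class FreshDatabasePerTestFixture
{
    private string databaseName;

    [SetUp]
    public void CreateFreshDatabase()
    {
        // A uniquely named database per test on the LocalDB instance.
        databaseName = "TestDb_" + Guid.NewGuid().ToString("N");
        using (var master = new SqlConnection(
            @"Server=(localdb)\v11.0;Integrated Security=true"))
        {
            master.Open();
            using (var command = master.CreateCommand())
            {
                command.CommandText = "CREATE DATABASE [" + databaseName + "]";
                command.ExecuteNonQuery();
            }
        }
        // Run schema and seed scripts against the new database here.
    }
}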
I've chosen the transaction scope a couple of times before and it worked quite well. One thing that's very important when writing integration tests like this is to make sure that your tests don't depend upon each other. They need to run in whatever order the test runner decides on.
You should also make sure to avoid any 'magic numbers'. For example, maybe you know that your database contains 3 people, so in your test you add one person and then assert that there are four in the database. For readers of your tests (which will include you in a couple of days, weeks or months) this is very hard to understand. Make sure that your tests are self-explanatory and that you don't depend on external state that isn't obvious from the test.
You cannot unit test external dependencies like database connections. There is a good post here about why this is the case. In short: external dependencies should be tested, but that's integration tests, not unit tests.
Normally you write integration tests when you call your database from code. If you want to write unit tests, you should have a look at mocking frameworks.

What's the best way to use different SQL commands for unit testing only?

I have a .NET project that uses NHibernate. Due to some project requirements, a very specific section of code uses HQL to select a random record using "order by newid()". However, for unit test purposes, I'm using an in-memory SQLite database, which of course chokes on newid(). I need to have this method use an alternate SQLite compatible query only when run from the unit test. I can't add conditional compilation constants only for unit test purposes, and #define only works at the file level, so I can't simply add a constant there either.
I really don't want to have to muck up my repository class with some junk code just to enable this unit test. What are my options?
Edit:
I already have a global class for other stuff, so I added a static TestMode property to it, which is false except when I explicitly set it in my unit tests. The code now looks like:
string random, update;
if (Globals.TestMode)
{
    random = "from Customer order by random()";
}
else
{
    random = "from Customer order by newid()";
}
This works, but I'd hoped to avoid exactly such an if statement. Still looking for suggestions.
You could abstract out the responsibility of providing the HQL query to a separate class that gets injected into your repository. That way your test code can just inject a different implementation of that class (even a mocked implementation) which provides HQL that you know will work on your SQLite server.
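A sketch of that shape (all names invented; the HQL strings are the ones from the question, and Customer is the mapped entity):

using NHibernate;

// The repository asks this abstraction for the dialect-specific HQL.
public interface IRandomCustomerQuery
{
    string Hql { get; }
}

public class SqlServerRandomCustomerQuery : IRandomCustomerQuery
{
    public string Hql { get { return "from Customer order by newid()"; } }
}

// Injected by the tests instead of the SQL Server version.
public class SqliteRandomCustomerQuery : IRandomCustomerQuery
{
    public string Hql { get { return "from Customer order by random()"; } }
}

public class CustomerRepository
{
    private readonly ISession session;
    private readonly IRandomCustomerQuery randomQuery;

    public CustomerRepository(ISession session, IRandomCustomerQuery randomQuery)
    {
        this.session = session;
        this.randomQuery = randomQuery;
    }

    public Customer GetRandomCustomer()
    {
        return session.CreateQuery(randomQuery.Hql)
                      .SetMaxResults(1)
                      .UniqueResult<Customer>();
    }
}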
Bear in mind that this isn't really a unit test because you're testing a lot more than just your code. In order for your test to pass, you're relying on the database and the NHibernate provider to all be working. Furthermore, it's impossible to deterministically test that a selected record is "random." Unit tests are deterministic. You're describing an integration test, as you're trying to test an interaction with the database through an ORM.
The problem is that you're trying to test it using a different database server than you'll actually use in production. This is of questionable value since, as you've noticed, even if the test passes you can't guarantee that the code will actually work in a production environment.

What do I gain by trying to isolate my C# code from the underlying DB when testing?

I'm in the midst of rewriting a database application and using Entity Framework to access the DB. Currently I am using MSTest and a copy of the underlying database as part of these tests. My MSTest involves the following code as part of each test:
[TestInitialize()]
public void MyTestInitialize()
{
    transScope = new TransactionScope(TransactionScopeOption.RequiresNew,
        new TransactionOptions { Timeout = new TimeSpan(0, 10, 0) });
}

[TestCleanup()]
public void MyTestCleanup()
{
    Transaction.Current.Rollback();
    transScope.Dispose();
}
Now, this seems to work pretty well for testing and resets the DB between tests. My tests use the DB context to do CRUD operations against the test DB and then roll them back afterward.
I've read a bit about isolating the C# library from the underlying DB for testing, but I'm wondering what this actually buys me. As part of this rewrite, most (but not all) of the code that was in stored procedures has been moved into the C# layer, but a few procedures remain and are called via triggers on tables. What do I gain from going through the exercise of mocking out the database layer? Frankly, when I look at doing this, it seems like a lot of additional work without any obvious value, but perhaps I'm missing the point here.
Thoughts?
It depends on the kinds of tests you are writing. When writing unit tests where you want to test one unit of code (typically a class), it's usually good for two things to hold true:
The tests should run as fast as possible, meaning hundreds of tests per second.
The code you are testing should be isolated from other code such that you can control the way dependencies work and test different kinds of inputs and outputs easily.
Both of these things are difficult to do if you use a real database for all tests. The first because a round-trip to the database usually takes a lot more time than just running some code. The second because you need to have your database setup with lots of different kinds of data, including negative cases and corner cases, and sometimes even make the database fail. It's often easier to instead mock the dependencies of your class and pass in any inputs needed.
That being said, I have written many tests that use a similar pattern to the one you describe and they work well and run relatively fast. Personally, I would use a combination of real unit tests without a database, and tests like the ones you have that are more like functional testing of a component.
In the end, do what works for you.

Unit testing database code [duplicate]

Possible Duplicate:
Unit testing on code that uses the Database
I am just starting with unit testing and wondering how to unit test methods that make actual changes to my database. Would the best way be to put them into transactions and then roll back, or are there better approaches to this?
If you want proper test coverage, you need two types of tests:
Unit tests which mock all your actual data access. These tests will not actually write to the database, but test the behaviour of the class that does (which methods it calls on other dependencies, etc.)
System tests (or integration tests) which check that your database can be accessed and modified. I would consider two types of tests here: simple plain CRUD tests (create / read / update / delete) for each one of your model objects, and more complex system tests for your actual methods and everything you deem interesting or valuable to test. Good practice here is to have each test start from an empty (or "ready for the test") database, do its stuff, then check the state of the database. Transactions / rollbacks are one good way to achieve this.
For unit testing you need to mock or stub the data access code. Typically you have a repository interface, and you can stub it by creating a concrete repository that stores data in memory, or you can mock it using a dynamic mocking framework.
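For example, a minimal in-memory stub of a hypothetical repository interface:

using System.Collections.Generic;

public class Person { public int Id { get; set; } public string Name { get; set; } }

public interface IPersonRepository
{
    void Add(Person person);
    Person GetById(int id);
}

// Test-only stub: stores data in a dictionary instead of hitting a database.
public class InMemoryPersonRepository : IPersonRepository
{
    private readonly Dictionary<int, Person> store = new Dictionary<int, Person>();

    public void Add(Person person)
    {
        store[person.Id] = person;
    }

    public Person GetById(int id)
    {
        Person person;
        store.TryGetValue(id, out person);
        return person;
    }
}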
For system or integration testing, you need to re-create the entire database before each test method in order to start each test from a stable state.
As per some of the previous answers if you want to test your data access code then you might want to think about mocks and a system/integration test strategy.
But, if you want to unit test your SQL objects (eg sprocs, views, constraints in tables etc) - then there are a number of database unit testing frameworks out there that might be of interest (including one that I have written).
Some implement tests within SQL, others within your code and use mbUnit/NUnit etc.
I have written a number of articles with examples on how I approach this - see http://dbtestunit.wordpress.com/
Other resources that might be of use:
http://www.simple-talk.com/sql/t-sql-programming/close-those-loopholes---testing-stored-procedures--/
http://tsqlt.org/articles/
The general approach is to have a way to mock your database actions so that your unit tests are not reliant on the database being available or in a certain state. That said, this also implies a design that facilitates the isolation required to mock away your data layer. Unit testing and how to do it well is a huge topic. Take a look at mocking frameworks and dependency injection for a start.
If you are not developing an O/R mapper, there's no need to test database code. You don't want to test ADO.NET methods, right? Instead you want to verify that the ADO.NET methods are called with the right values.
Search Google for the repository pattern. You create an implementation of an IRepository interface with CRUD methods and test/mock that.
If you want to test against a real database, this would be more of an integration test than a unit test. Wrapping your tests in transactions could be an idea to keep your database in a consistent state.
We've done this in a base class and used the TestInitialize and TestCleanup functions to make sure this always happens.
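A sketch of such a base class (MSTest attributes to match the TestInitialize/TestCleanup naming above; derived test classes inherit the transaction for free):

using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public abstract class TransactionalTestBase
{
    private TransactionScope scope;

    [TestInitialize]
    public void OpenTransaction()
    {
        // Every test in a derived class runs inside this transaction.
        scope = new TransactionScope();
    }

    [TestCleanup]
    public void RollbackTransaction()
    {
        // The scope is never completed, so all changes roll back.
        scope.Dispose();
    }
}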
But testing against a real database will certainly bring you performance problems, so make sure from the beginning that you can swap your database access code for something that runs in memory. I don't know which database access code you're targeting, but design patterns like Unit of Work and Repository can help you isolate your database code and replace it with an in-memory solution.

Integration Testing: Am I doing it right?

Here's an integration test I wrote for a class that interacts with a database:
[Test]
public void SaveUser()
{
    // Arrange
    var user = new User();
    // Set a bunch of properties of the above User object

    // Act
    var usersCountPreSave = repository.SearchSubscribersByUsername(user.Username).Count();
    repository.Save(user);
    var usersCountPostSave = repository.SearchSubscribersByUsername(user.Username).Count();

    // Assert
    Assert.AreEqual(usersCountPreSave + 1, usersCountPostSave);
}
It seems to me that I can't test the Save function without involving the SearchSubscribersByUsername function to find out whether the user was successfully saved. I realize that integration tests aren't meant to be unit tests, which are supposed to test one unit of code at a time. But ideally it would be nice if I could test one function in my repository class per test; I just don't know how to accomplish that.
Is it fine how I've written the code so far or is there a better way?
You have a problem with your test. When you're testing that data is saved into the database, you should be testing that it's in the database, not that the repository says that it's in the database.
If you're testing the functionality of repository, then you can't verify that functionality by asking if it has done it correctly. It's the equivalent of saying to someone 'Did you do this correctly?' They are going to say yes.
Imagine that repository never commits. Your test will pass fine, but the data won't be in the database.
So, what I would do is open a connection (pure SQL) to the database and check that the data has been saved correctly. You only need a select count(*) before and after to ensure that the user has been saved. If you do this, you can avoid using SearchSubscribersByUsername as well.
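A sketch of that check with straight ADO.NET (connection string, table and column names assumed):

using System.Data.SqlClient;

private int CountUsers(string username)
{
    // Bypasses the repository entirely: counts rows directly in the database.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "select count(*) from Users where Username = @username", connection))
    {
        command.Parameters.AddWithValue("@username", username);
        connection.Open();
        return (int)command.ExecuteScalar();
    }
}

The test then becomes: count, Save, count again, and assert the difference is one, without asking the repository to verify its own work.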
If you're testing the functionality of repository, you can't trust repository, by definition.
To unit test something like a "Save" function, you will definitely need some trustworthy channel to check the result of the operation. If you trust SearchSubscribersByUsername (because you have already made some unit tests for that function on its own), you can use it here.
If you don't trust SearchSubscribersByUsername and you think your unit test could also break because of an error in that function (and not in Save), you should think about a different channel. Perhaps you can make a direct SQL query against your DB to check the Save result, which may be simpler than the implementation of SearchSubscribersByUsername. However, do not reimplement SearchSubscribersByUsername; that would be pointless. Either way, you will need at least one other function you can trust.
Unless the method you are testing returns definitive information about what it has done, I don't see any way to avoid calling other methods. I think you are correct in your assumption that integration testing needs a different way of thinking from unit testing.
I would still build tests that focus on individual methods. So in testing Save() I may well use the capabilities of Search(), but my focus is on the edge cases of Save(). I build tests that deal with duplicate insertions or invalid input data. Then later I build a whole raft of Search() tests that deal with the edge cases of Search().
Now one possible way of thinking is that Save and Search have some commonality, and a bug in Search might mask a bug in Save. Imagine, for example, that you had a caching layer down there. So possibly an alternative approach is to use some other verification mechanism: for example, a direct database call, or alternatively introducing mocking layers at some point in your infrastructure. When building complex integrated systems this kind of "back door" verification may be essential.
Personally, I've written countless tests very similar to this and think it's fine. The alternative is to stub out the database so that SearchSubscribersByUsername never actually does anything, but that's a great deal of work for what I'd say is little gain.
