In my ASP.NET MVC application I am using IoC to facilitate unit testing. The structure of my application is a Controller -> Service Class -> Repository type of structure. In order to do unit testing, I have an InMemoryRepository class that implements my IRepository; instead of going out to a database, it uses an internal List<T> member. When I construct my unit tests, I just pass an instance of the in-memory repository instead of my EF repository.
My service classes retrieve objects from the repository through an AsQueryable interface that my repository classes implement, thus allowing me to use LINQ in my service classes while still keeping the data access layer abstracted away. In practice this seems to work well.
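For illustration, a stripped-down sketch of what I mean (simplified; my real IRepository has more members than shown here):

using System.Collections.Generic;
using System.Linq;

public interface IRepository<T>
{
    IQueryable<T> AsQueryable();
    void Add(T entity);
}

// Test double backed by an internal List<T> instead of a database.
public class InMemoryRepository<T> : IRepository<T>
{
    private readonly List<T> _items = new List<T>();

    public void Add(T entity)
    {
        _items.Add(entity);
    }

    public IQueryable<T> AsQueryable()
    {
        // LINQ to Objects stands in for the database provider here.
        return _items.AsQueryable();
    }
}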
The problem I am seeing is that every time I see unit testing discussed, people are using mock objects instead of the in-memory approach I described. At face value that makes sense, because if my InMemoryRepository fails, not only will my InMemoryRepository unit tests fail, but that failure will cascade down into my service class and controller tests as well. More realistically, I am concerned about failures in my service classes affecting controller unit tests.
My method also requires more setup for each unit test, and as things become more complicated (e.g. when I implement authorization in the service classes) the setup grows much more complicated, because I then have to make sure each unit test authorizes correctly with the service classes so that the main aspect of the test doesn't fail. I can clearly see how mock objects would help in that regard.
However, I can't see how to solve this completely with mocks and still have valid tests. For example, one of my unit tests checks that if I call _service.GetDocumentById(5), it gets the correct document from the repository. The only way this is a valid unit test (as far as I understand it) is if I have 2 or 3 documents stored and my GetDocumentById() method correctly retrieves the one with an Id of 5.
How would I have a mocked repository with an AsQueryable call, and how would I make sure I don't mask issues in my LINQ statements by hardcoding the return values when setting up the mocked repository? Is it better to keep my service class unit tests using the InMemoryRepository but change my controller unit tests to use mocked service objects?
Edit:
After going over my structure again I remembered a complication that is preventing mocking in controller unit tests, as I forgot my structure is a bit more complicated than I originally said.
A Repository is a data store for one type of object, so if my document service class needs Document entities, it creates an IRepository<Document>.
Controllers are passed an IRepositoryFactory. The IRepositoryFactory is a class that makes it easy to create repositories without having to inject repositories directly into the controller, or having the controller worry about which repositories its service classes require. I have an InMemoryRepositoryFactory, which gives the service classes InMemoryRepository<Entity> instantiations, and the same idea goes for my EFRepositoryFactory.
In the controller's constructors, private service class objects are instantiated by passing in the IRepositoryFactory object that is passed into that controller.
So for example
public class DocumentController : Controller
{
    private DocumentService _documentService;

    public DocumentController(IRepositoryFactory factory)
    {
        _documentService = new DocumentService(factory);
    }

    ...
}
I can't see how to mock my service layer with this architecture so that my controllers are unit tested and not integration tested. I could have a bad architecture for unit testing, but I'm not sure how to better solve the issues that made me want to make a repository factory in the first place.
One solution to your problem is to change your controllers to demand IDocumentService instances instead of constructing the services themselves:
public class DocumentController : Controller
{
    private IDocumentService _documentService;

    // The controller doesn't construct the service itself
    public DocumentController(IDocumentService documentService)
    {
        _documentService = documentService;
    }

    ...
}
In your real application, let your IoC container inject IRepositoryFactory instances into your services. In your controller unit tests, just mock the services as needed.
(And see Miško Hevery's article about constructors doing real work for an extended discussion of the benefits of restructuring your code like this.)
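A controller test could then look something like this (a sketch assuming Moq and MSTest; the Details action and the shape of Document are invented for illustration):

[TestMethod]
public void Details_ReturnsViewForExistingDocument()
{
    // Arrange: mock the service; no repository or factory is involved.
    var serviceMock = new Mock<IDocumentService>();
    serviceMock.Setup(s => s.GetDocumentById(5)).Returns(new Document { Id = 5 });
    var controller = new DocumentController(serviceMock.Object);

    // Act
    var result = controller.Details(5) as ViewResult;

    // Assert
    Assert.IsNotNull(result);
    serviceMock.Verify(s => s.GetDocumentById(5), Times.Once());
}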
Personally, I would design the system around the Unit of Work pattern that references repositories. This can make things much simpler and allows you to run more complex operations atomically. You would typically have an IUnitOfWorkFactory that is supplied as a dependency to the service classes. A service class would create a new unit of work, and that unit of work references the repositories. You can see an example of this here.
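In outline, it could look like this (a rough sketch; the member names are mine, not the linked example's, and Document is assumed to have Id and Archived properties):

using System;
using System.Linq;

public interface IUnitOfWork : IDisposable
{
    IRepository<Document> Documents { get; }
    void Commit();
}

public interface IUnitOfWorkFactory
{
    IUnitOfWork CreateNew();
}

public class DocumentService
{
    private readonly IUnitOfWorkFactory _factory;

    public DocumentService(IUnitOfWorkFactory factory)
    {
        _factory = factory;
    }

    public void Archive(int id)
    {
        // The whole operation runs atomically inside one unit of work.
        using (var unitOfWork = _factory.CreateNew())
        {
            var document = unitOfWork.Documents.AsQueryable().Single(d => d.Id == id);
            document.Archived = true;
            unitOfWork.Commit();
        }
    }
}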
If I understand correctly you are concerned about errors in one piece of (low level) code failing a lot of tests, making it harder to see the actual problem. You take InMemoryRepository as a concrete example.
While your concern is valid, I personally wouldn't worry about a failing InMemoryRepository. It is a test object, and you should keep test objects as simple as possible. That prevents you from having to write tests for your test objects. Most of the time I assume they are correct (however, I sometimes add self-checks to such a class by writing Assert statements). A test will fail when such an object misbehaves. That's not optimal, but in my experience you normally find out quickly enough what the problem is. To be productive, you will have to draw a line somewhere.
Errors in the controller caused by the service are another cup of tea IMO. While you could mock the service, this would make testing more difficult and less trustworthy. It would be better NOT to test the service separately at all. Only test the controller! The controller will call into the service, and if your service doesn't behave well, your controller tests will find out. This way you only test the top-level objects in your application. Code coverage will help you spot the parts of your code you don't test. Of course this isn't possible in all scenarios, but it often works well. When the service works with a mocked repository (or unit of work), this works very well.
Your second concern was that those dependencies force a lot of test setup on you. I've got two things to say about this.
First of all, I try to minimize my dependency inversion to only what I need to be able to run my unit tests. Calls to the system clock, database, SMTP server, and file system should be faked to make unit tests fast and reliable. Other things I try not to invert, because the more you mock, the less reliable the tests become: you are testing less. Minimizing the dependency inversion (to what you need to have good RTM unit tests, i.e. readable, trustworthy, and maintainable ones) helps make test setup easier.
But (second point) you also need to write your unit tests in a way that makes them readable and maintainable (the hard part about unit testing, or in fact about making software in general). Big test setups make tests hard to understand and make test code hard to change when a class gets a new dependency. I found that one of the best ways to make tests more readable and maintainable is to use simple factory methods in your test class to centralize the creation of the types you need in the tests (I never use mocking frameworks). There are two patterns that I use. One is a simple factory method, such as one that creates a valid type:
FakeDocumentService CreateValidService()
{
    return CreateValidService(CreateInitializedContext());
}

FakeDocumentService CreateValidService(InMemoryUnitOfWork context)
{
    return new FakeDocumentService(context);
}
This way a test that needs a valid object simply calls one of the factory methods. Of course, when one of these methods accidentally creates an invalid object, many tests will fail. That is hard to prevent, but easily fixed, and easily fixed means that the tests are maintainable.
The other pattern I use is a container type that holds the arguments/properties of the actual object you want to create. This becomes especially useful when an object has many different properties and/or constructor arguments. Mix this with a factory for the container and a builder method for the object to create, and you get very readable test code:
[TestMethod]
public void Operation_WithValidArguments_Succeeds()
{
    // Arrange
    var validArgs = CreateValidArgs();
    var service = BuildNewService(validArgs);

    // Act
    service.Operation();
}
[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void Operation_NegativeAge_ThrowsException()
{
    // Arrange
    var invalidArgs = CreateValidArgs();
    invalidArgs.Age = -1;
    var service = BuildNewService(invalidArgs);

    // Act
    service.Operation();
}
This allows a test to specify only what matters! That is very important for making tests readable. The CreateValidArgs() method could create a container with over 100 arguments that make up a valid SUT (system under test); you have now centralized the default valid configuration in one place. I hope this makes sense.
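For completeness, the factory and builder behind those two tests could be as simple as this (a sketch; all names are illustrative, and the real container might default dozens of values):

// Container with every argument defaulted to a valid value;
// each test overrides only the fields it actually cares about.
public class ServiceArgs
{
    public int Age = 30;
    public string Name = "valid name";
    // ...many more, all valid by default
}

private static ServiceArgs CreateValidArgs()
{
    return new ServiceArgs();
}

private static MyService BuildNewService(ServiceArgs args)
{
    return new MyService(args.Age, args.Name);
}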
Your third concern was about not being able to test whether LINQ queries behave as expected with the given LINQ provider. This is a valid problem, because it is quite easy to write LINQ (to expression tree) queries that run perfectly over in-memory objects but fail when querying the database. Sometimes it is impossible to translate a query (because you call a .NET method that has no counterpart in the database), or the LINQ provider has limitations (or bugs). The LINQ provider of Entity Framework 3.5 in particular sucks hard.
However, this is a problem you cannot solve with unit tests, by definition: when you call the database in your tests, it isn't a unit test anymore. Unit tests never totally replace manual testing anyway :-)
Still, it's a valid concern. In addition to unit testing, you can do integration testing. In this case you run your code with the real provider and a (dedicated) test database. Run each test within a database transaction and roll the transaction back at the end of the test (TransactionScope works great for this!). Note however that writing maintainable integration tests is even harder than writing maintainable unit tests. You have to make sure the schema of your test database stays in sync, and each integration test has to insert the data it needs into the database, which is often a lot of work to write and maintain. Best is to keep the number of integration tests to a minimum: have just enough integration tests to feel confident about making changes to the system. For instance, a single test that calls a service method containing a complicated LINQ statement will often be enough to verify that your LINQ provider can build valid SQL out of it. Most of the time I just assume the LINQ provider will behave the same as the LINQ to Objects (.AsQueryable()) provider. Again, you will have to draw the line somewhere.
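The rollback-per-test pattern can look like this (a sketch reusing the question's DocumentService and EFRepositoryFactory names):

using System.Transactions;

[TestClass]
public class DocumentServiceIntegrationTests
{
    private TransactionScope _scope;

    [TestInitialize]
    public void TestInitialize()
    {
        // Every write the test performs happens inside this transaction.
        _scope = new TransactionScope();
    }

    [TestCleanup]
    public void TestCleanup()
    {
        // Disposing without calling Complete() rolls everything back.
        _scope.Dispose();
    }

    [TestMethod]
    public void GetDocumentById_AgainstRealProvider_BuildsValidSql()
    {
        var service = new DocumentService(new EFRepositoryFactory());
        var document = service.GetDocumentById(5);
        Assert.IsNotNull(document);
    }
}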
I hope this helps.
I think your approach is sound for testing the service layer itself, but, as you suggested, it would be better if the service layer were mocked out completely for your business logic and other higher-level testing. This makes your higher-level tests easier to implement and maintain, as there's no need to exercise the service layer again if it has already been tested.
Some of the entities that are under test cannot be created directly using the constructor, but only through a Domain service, because a Repository is needed, maybe for some validation that requires a hit to the DB (imagine a unique-code validation).
In my tests I have two options:
Create the entity using the domain service that exposes entity creation; this requires me to mock all the repository interfaces needed by that service and instruct the relevant ones to behave correctly for a successful creation.
Somehow use the entity constructor directly (I use C#, so I can expose an internal constructor to the test assembly) and get the entity, bypassing the service logic.
I'm not sure which is the best approach. The 1st is the one I prefer, because it tests the public behaviour of the Domain model: from an outside perspective, the only way to create the entity is to pass through the Domain service. But this solution brings in a lot of "Arrange" code due to the mock configuration needed.
The 2nd one is more direct: it creates the object bypassing the service logic. But it's a sort of cheating on the Domain model; it assumes the test code knows the internals of the Domain model, and that's not a good point. The code is a bit more readable, though.
I make use of Builders to create entities in tests, so the configuration code needed by the 1st approach would be isolated in the builder code, but I still want to know which would be the correct way.
Essentially you are asking what 'level' you should test at. Option 2 is very much a Unit Test, as it would test the code of a single class only. Option 1 is more of an Integration Test as it would test several components together.
I tend to prefer Option 2 for unit tests, for the following reasons:
Unit tests are simpler and more effective if they test a single class only. If you use the factory service to create the object under test, your test doesn't have direct control over how the object is constructed. This will lead to messy and tedious test code, such as mocking all the repository interfaces.
I will usually have, in a different part of my test code base, actual Integration Tests (or Acceptance Tests) which test the entire application from front to back via its public interfaces (with external dependencies such as databases mocked/stubbed out). I would expect these tests to cover Option 1 from your question, so I don't really need to repeat Option 1 in the unit test suite.
You may ask, what's the point of starting up my whole application just to test a couple of classes? The answer is quite simple - by sticking to only two levels of testing, your test code base will be clean, readable and easy to refactor. If your tests are very varied in terms of the 'level' that they test at (some test a single class, some a couple of classes together, some the whole application) then the test code just becomes hard to maintain.
Some caveats:
This advice is for if you are developing an "application" that will be deployed and run. If you are developing a "shared library" that will be distributed to other teams to use as they see fit, then you should test from all the public entry points to the library, regardless of the 'level'. (But I still wouldn't call these tests "unit tests" and would separate them in the code base.)
If you don't have the ability to write full integration tests, then I would use Option 1 and 2. Just be wary of the test code base becoming bloated.
One more point - test things together if they change for the same reason. The situation you don't want to end up in after choosing Option 1 is having to change your Entity tests every time you make a change to the factory/repository code. If the behavior of each Entity has not changed, then you shouldn't have to change the tests.
You could probably avoid that conundrum by not creating your entity through a domain service in the first place.
If you feel the need to validate something about an entity before creating it, you could probably see it as a domain invariant and have it enforced by an Aggregate. That aggregate root would expose a method to create the entity.
As soon as the invariant is guaranteed by the Aggregate in charge of spawning the new Entity, everything can be tested against concrete objects in memory, since the aggregate should have all the data it needs to check the invariant inside itself - there is no resorting to an external Repository. You can set up the creator aggregate in an invariant-breaking or non-invariant-breaking state, all in memory, and exercise the test directly on the aggregate's CreateMyEntity method.
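A sketch of that idea (types invented for illustration): the aggregate enforces a uniqueness invariant entirely in memory, so the test needs no repository.

using System;
using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public string ProductCode { get; private set; }

    internal OrderLine(string productCode)
    {
        ProductCode = productCode;
    }
}

// Aggregate root: the only way to create an OrderLine entity.
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public OrderLine AddLine(string productCode)
    {
        // The invariant check uses only data the aggregate already holds.
        if (_lines.Any(l => l.ProductCode == productCode))
            throw new InvalidOperationException("Duplicate product code.");

        var line = new OrderLine(productCode);
        _lines.Add(line);
        return line;
    }
}

A test can construct an Order, add a line, and assert that a duplicate throws, all against concrete in-memory objects.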
Don't Create Aggregate Roots by Udi Dahan is a good read on that approach - the basic idea is that entities and aggregate roots aren't just born out of nowhere.
I am working on unit testing for my controller and service layers (C#, MVC), and I am using the Moq DLL for mocking the real/dependency objects in unit testing.
But I am a little bit confused about mocking the dependencies or real objects. Let's take the example of the unit test method below:
[TestMethod]
public void ShouldReturnDtosWhenCustomersFound_GetCustomers()
{
    // Arrange
    var name = "ricky";
    var description = "this is the test";

    // setup mocked dal to return list of customers
    // when name and description passed to GetCustomers method
    _customerDalMock.Setup(d => d.GetCustomers(name, description)).Returns(_customerList);

    // Act
    List<CustomerDto> actual = _CustomerService.GetCustomers(name, description);

    // Assert
    Assert.IsNotNull(actual);
    Assert.IsTrue(actual.Any());

    // verify all setups of mocked dal were called by the service
    _customerDalMock.VerifyAll();
}
In the above unit test method I am mocking the GetCustomers method and returning a customer list, which is already defined and looks like this:
List<Customer> _customerList = new List<Customer>
{
    new Customer { CustomerID = 1, Name = "Mariya", Description = "description" },
    new Customer { CustomerID = 2, Name = "Soniya", Description = "des" },
    new Customer { CustomerID = 3, Name = "Bill", Description = "my desc" },
    new Customer { CustomerID = 4, Name = "jay", Description = "test" },
};
And let's have a look at the assertions comparing the mocked Customer object and the actual object:
Assert.AreEqual(_customer.CustomerID, actual.CustomerID);
Assert.AreEqual(_customer.Name, actual.Name);
Assert.AreEqual(_customer.Description, actual.Description);
But here is what I don't understand: the above unit test will always pass. In the assertions we are just testing what we passed in or what we set the mocked object to return, and we know the mocked object will always return the list or object that we gave it.
So what is the point of doing unit testing or mocking here?
The true purpose of mocking is to achieve true isolation.
Say you have a CustomerService class that depends on a CustomerRepository. You write a few unit tests covering the features provided by CustomerService. They all pass.
A month later, a few changes are made, and suddenly your CustomerService unit tests start failing - and you need to find where the problem is.
So you assume:
Because a unit test that tests CustomerService is failing, the problem must be in that class!!
Right? Wrong! The problem could be either in CustomerService or in any of its dependencies, i.e., CustomerRepository. If any of its dependencies fail, chances are the class under test will fail too.
Now picture a huge chain of dependencies: A depends on B, B depends on C, ... Y depends on Z. If a fault is introduced in Z, all your unit tests will fail.
And that's why you need to isolate the class under test from its dependencies (be it a domain object, a database connection, file resources, etc.). You want to test a unit.
Your example is too simplistic to show off the real benefit of mocking. That's because your logic under test isn't really doing much beyond returning some data.
But imagine as an example that your logic did something based on wall clock time, say scheduled some process every hour. In a situation like that, mocking the time source lets you actually unit test such logic so that your test doesn't have to run for hours, waiting for the time to pass.
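For example, a sketch with a hand-rolled clock abstraction (no mocking framework required; all names invented):

using System;

public interface IClock
{
    DateTime UtcNow { get; }
}

public class HourlyScheduler
{
    private readonly IClock _clock;

    public HourlyScheduler(IClock clock)
    {
        _clock = clock;
    }

    public bool IsDue(DateTime lastRun)
    {
        // True once at least an hour of (possibly fake) time has passed.
        return _clock.UtcNow - lastRun >= TimeSpan.FromHours(1);
    }
}

// In a test, time becomes a value the test controls:
public class FixedClock : IClock
{
    public DateTime UtcNow { get; set; }
}

A test can set FixedClock.UtcNow an hour ahead and assert that IsDue returns true immediately, instead of waiting for real time to pass.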
In addition to what has already been said:
We can have classes without dependencies. And the only thing we have is unit testing without mocks and stubs.
When we have dependencies, there are several kinds of them:
Services that our class uses mostly in a 'fire and forget' way, i.e. services that do not affect the control flow of the consuming code.
We can mock these (and all other kinds of) services to test that they were called correctly (interaction testing) or simply to inject them where our code requires them.
Two-way services that provide a result but do not have internal state and do not affect the state of the system. They can be dubbed complex data transformations.
By mocking these services you can test your expectations about the code's behavior for different variants of the service implementation, without needing to have all of those implementations on hand.
Services which affect the state of the system, or depend on real-world phenomena, or on something out of your control. '#500 - Internal Server Error' gave a good example with the time service.
With mocking you can let the time flow at whatever speed (and direction) is needed. Another example is working with a DB. When unit testing, it is usually desirable not to change the DB state, which is not true of a functional test. For this kind of service, 'isolation' is the main (but not the only) motivation for mocking.
Services with internal state your code depends on.
Consider Entity Framework:
When SaveChanges() is called, many things happen behind the scenes. EF detects changes and fixes up navigation properties. Also, EF won't allow you to add several entities with the same key.
Evidently, it can be very difficult to mock the behavior and the complexity of such dependencies... but usually you don't have to if they are designed well. If you heavily rely on the functionality some component provides, you will hardly be able to substitute that dependency. What is probably needed is isolation again: you don't want to leave traces when testing, so a better approach is to tell EF not to use the real DB. Yes, a dependency means more than a mere interface. More often it is not the method signatures but the contract for expected behavior. For instance, IDbConnection has Open() and Close() methods, which implies a certain sequence of calls.
Sure, this is not a strict classification; it's better to treat these kinds as extremes.
@dcastro writes: "You want to test a unit." Yet that statement doesn't answer the question of whether you should.
Let's not discount integration tests. Sometimes knowing that some composite part of the system has a failure is OK.
As for the example with the chain of dependencies given by @dcastro, we can try to find the place where the bug is likely to be:
Assume Z is a final dependency. We create unit tests without mocks for it. All boundary conditions are known, and 100% coverage is a must here. After that, we say that Z works correctly; if Z fails, our unit tests must indicate it.
The analogue comes from engineering: nobody tests each screw and bolt when building a plane. Statistical methods are used to prove, with some certainty, that the factory producing the parts works fine.
On the other hand, for very critical parts of your system it is reasonable to spend the time and mock the complex behavior of a dependency. Yes, the more complex it is, the less maintainable the tests are going to be, and here I'd rather call them specification checks.
Yes, your API and your tests can both be wrong, but code review and other forms of testing can assure the correctness of the code to some degree. And as soon as these tests fail after some changes are made, you either need to change the specs and the corresponding tests, or find the bug and cover the case with a test.
I highly recommend you watching Roy's videos: http://youtube.com/watch?v=fAb_OnooCsQ
In this very case, mocking allowed you to fake a database connection, so that you can run the test in place and in memory, without relying on any additional resource, i.e. the database. This test asserts that, when the service is called, a corresponding method of the DAL is called.
However, the later asserts on the list and the values in the list aren't necessary. As you correctly noticed, you are just asserting that the values you "mocked" are returned. That would be useful within the mocking framework itself, to assert that the mocking methods behave as expected. But in your code it is just excess.
In the general case, mocking allows one to:
Test behaviour (when something happens, then a particular method is executed)
Fake resources (for example, email servers, web servers, HTTP API request/response, database)
In contrast, unit tests without mocking usually let you test state. That is, you can detect a change in the state of an object when a particular method is called.
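In terms of the question's own test, the two styles look like this (a sketch, assuming the service maps all four customers to DTOs):

// Behaviour: verify that the service interacted with the DAL as expected.
_customerDalMock.Verify(d => d.GetCustomers(name, description), Times.Once());

// State: assert on the data that came back, not on how it was produced.
Assert.AreEqual(4, actual.Count);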
All previous answers assume that mocking has some value, and then they proceed to explain what that value supposedly is.
For the sake of future generations that might arrive at this question looking to satisfy their philosophical objections on the issue, here is a dissenting opinion:
Mocking, despite being a nifty trick, should be avoided at (almost) all costs.
When you mock a dependency of your code-under-test, you are by definition making two kinds of assumptions:
Assumptions about the behavior of the dependency
Assumptions about the inner workings of your code-under-test
It can be argued that the assumptions about the behavior of the dependency are innocent because they are simply a stipulation of how the real dependency should behave according to some requirements or specification document. I would be willing to accept this, with the footnote that they are still assumptions, and whenever you make assumptions you are living your life dangerously.
Now, what cannot be argued is that the assumptions you are making about the inner workings of your code-under-test are essentially turning your test into a white-box test: the mock expects the code-under-test to issue specific calls to its dependencies, with specific parameters, and as the mock returns specific results, the code-under-test is expected to behave in specific ways.
White-box testing might be suitable if you are building high criticality (aerospace grade) software, where the goal is to leave absolutely nothing to chance, and cost is not a concern. It is orders of magnitude more labor intensive than black-box testing, so it is immensely expensive, and it is a complete overkill for commercial software, where the goal is simply to meet the requirements, not to ensure that every single bit in memory has some exact expected value at any given moment.
White-box testing is labor intensive because it renders tests extremely fragile: every single time you modify the code-under-test, even if the modification is not in response to a change in requirements, you will have to go modify every single mock you have written to test that code. That is an insanely high maintenance level.
How to avoid mocks and white-box testing
Use fakes instead of mocks
For an explanation of the difference, you can read this article by Martin Fowler: https://martinfowler.com/bliki/TestDouble.html. To give you an example, an in-memory database can be used as a fake in place of a full-blown RDBMS. (Note how fakes are a lot less fake than mocks.)
Fakes will give you the same amount of isolation as mocks would, but without all the risky and costly assumptions, and most importantly, without all the fragility.
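As an illustration (hypothetical types), a fake carries real, if simplified, behaviour, so tests can assert on the outcome stored in the fake rather than on scripted interactions:

using System.Collections.Generic;

public class User
{
    public int Id;
    public string Name;
}

public interface IUserStore
{
    void Save(User user);
    User Get(int id);
}

// Fake: a working in-memory implementation, not a scripted mock.
public class InMemoryUserStore : IUserStore
{
    private readonly Dictionary<int, User> _users = new Dictionary<int, User>();

    public void Save(User user)
    {
        _users[user.Id] = user;
    }

    public User Get(int id)
    {
        return _users[id];
    }
}

A test exercises the code-under-test against the fake and then asserts on what ended up in the store, without assuming which calls were made in which order.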
Do integration testing instead of unit testing
Using the fakes whenever possible, of course.
For a longer article with my thoughts on the subject, see https://blog.michael.gr/2021/12/white-box-vs-black-box-testing.html
I have the following method:
void UpdateUser(User user) { }
I need to check whether this method works properly.
I've used a separate DB to check this in unit testing, but many experienced people said that if I use this method it won't be unit testing; it will be integration testing.
But I don't know how to mock for unit testing.
The code written in the UpdateUser method will try to update data using Entity Framework.
If I mock (actually, I don't know how to do that either), how will this work with Entity Framework?
Mocking means that you develop your software components (classes) in such a way that any class with behaviour is used/consumed/called upon as an interface (or abstract class): you program to an abstraction. At run time you use something (a service locator, DI container, factory, ...) to retrieve/create those instances.
The most common way is to use constructor injection. Here is an excellent explanation of why one would use DI, with examples of how to do it.
In your case, the component that uses Entity Framework (your repository, for instance) must implement a repository interface, and any class that uses your repository should use it through that interface.
This way, you can mock the repository in your unit tests. That means you create a unit-test repository class (which has nothing to do with any database or EF) and use that when you create the instance of the class that you want to unit test.
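A sketch with Moq (IUserRepository and UserService are assumed names for your repository interface and the class under test; User is assumed to have an Id property):

public interface IUserRepository
{
    void UpdateUser(User user);
}

[TestMethod]
public void UpdateUser_DelegatesToRepository()
{
    // Arrange: the mocked repository never touches EF or a database.
    var repositoryMock = new Mock<IUserRepository>();
    var service = new UserService(repositoryMock.Object);
    var user = new User { Id = 1 };

    // Act
    service.UpdateUser(user);

    // Assert: the class under test asked the repository to do the update.
    repositoryMock.Verify(r => r.UpdateUser(user), Times.Once());
}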
Hopefully this helps. There are many sources to be found. Personally, I just read this book and found it to be very good. This is the author's blog.
You can use a transaction and roll it back, or create a test user, try updating it, assert, and then delete the test user in the finally block.
You can use a mocking framework like Moq or Rhino Mocks. Moq is quite easy, and you can find many examples that demonstrate Moq with a DI container like Unity.
If your class is like this
public class UserRepository
{
    SqlContext _context;

    void UpdateUser(User user)
    {
        _context.Users.Add(user);
    }
}
then this is not unit testable.
Although this would still not be a unit test, if you insist on connecting to the database and testing it, you could change your function to
User UpdateUser(User user)
{
    _context.Users.Add(user);
    return user;
}
and test whether
user.Id > 0
Here, you are basically just testing Entity Framework.
"I've used a separate db to check this in unit testing. But many
experienced people said if I use this method that won't be unit
testing; that's integration testing."
Those people are mistaken, despite their supposed experience. For some reason, the incorrect notion that unit tests are all about testing parts of your code in isolation has grown in popularity in recent years. In reality, unit testing is all about writing tests that act as a unit; in other words, they exist in isolation and the result of one unit test cannot influence another.
If your UpdateUser method directly accesses EF, then as long as you ensure the database is guaranteed to be rolled back to its starting state at the end of each test, you have unit tests. However, setting the database up for each test and ensuring it can be reliably rolled back can be a lot of work. That is why mocks are often used. Other answers have covered mocking EF, so I won't go over that.
To greatly simplify your tests, you could have an abstraction layer between UpdateUser and EF. In other words, the class containing UpdateUser is supplied with an instance of an interface, which is its gateway into EF; it doesn't talk to EF directly. To mock EF, you then simply supply a mocked implementation of that interface. This pushes the need to test against EF down into a more basic layer, with more basic CRUD-like behaviours.
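A sketch of such a gateway (names invented; the EF-backed implementation assumes a standard EF 6 DbContext with a Users set):

using System.Data.Entity;

// The thin interface that the updating code talks to instead of EF directly.
public interface IUserGateway
{
    void Update(User user);
}

// The real implementation wraps EF with basic CRUD-like behaviour;
// only this layer ever needs to be exercised against EF itself.
public class EfUserGateway : IUserGateway
{
    private readonly MyDbContext _context; // your DbContext subclass

    public EfUserGateway(MyDbContext context)
    {
        _context = context;
    }

    public void Update(User user)
    {
        _context.Entry(user).State = EntityState.Modified;
        _context.SaveChanges();
    }
}

In unit tests, the class containing UpdateUser receives a fake or mocked IUserGateway, and EF never runs.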
An unofficial, not-best-practice trick can be to use an in-memory database (context) for the duration of testing.
Use a transaction and roll it back at the end of the test.
I am just starting with unit testing and wondering how to unit test methods that are making actual changes to my database. Would the best way be to put them into transactions and then rollback, or are there better approaches to this?
If you want proper test coverage, you need two types of tests:
Unit tests, which mock all your actual data access. These tests will not actually write to the database, but test the behaviour of the class that does (which methods it calls on other dependencies, etc.).
System tests (or integration tests), which check that your database can be accessed and modified. I would consider two types of tests here: simple plain CRUD tests (create/read/update/delete) for each of your model objects, and more complex system tests for your actual methods and everything you deem interesting or valuable to test. Good practice here is to have each test start from an empty (or "ready for the test") database, do its stuff, then check the state of the database. Transactions/rollbacks are one good way to achieve this.
For unit testing, you need to mock or stub the data access code. Mostly you have a repository interface, and you can stub it by creating a concrete repository that stores data in memory, or you can mock it using a dynamic mocking framework.
For system or integration testing, you need to re-create the entire database before each test method in order to maintain a stable state before each test.
As per some of the previous answers if you want to test your data access code then you might want to think about mocks and a system/integration test strategy.
But if you want to unit test your SQL objects (e.g. sprocs, views, constraints in tables, etc.), then there are a number of database unit testing frameworks out there that might be of interest (including one that I have written).
Some implement tests within SQL, others within your code and use mbUnit/NUnit etc.
I have written a number of articles with examples on how I approach this - see http://dbtestunit.wordpress.com/
Other resources that might be of use:
http://www.simple-talk.com/sql/t-sql-programming/close-those-loopholes---testing-stored-procedures--/
http://tsqlt.org/articles/
The general approach is to have a way to mock your database actions, so that your unit tests are not reliant on the database being available or in a certain state. That said, it also implies a design that facilitates the isolation required to mock away your data layer. Unit testing and how to do it well is a huge topic. Take a look on Google for mocking frameworks and dependency injection for a start.
If you are not developing an O/R mapper, there's no need to test database code. You don't want to test ADO.NET methods, right? Instead you want to verify that the ADO.NET methods are called with the right values.
Search Google for repository pattern. You will create an implementation of IRepository interface with CRUD methods and test/mock this.
If you want to test against a real database, this would be more of an integration test than a unit test. Wrapping your tests in a transaction could be an idea to keep your database in a consistent state.
We've done this in a base class and used the TestInitialize and TestCleanup functions to make sure this always happens.
But testing against a real database will certainly bring you performance problems. So make sure from the beginning that you can swap your database access code with something that runs in memory. I don't know which database access code you are targeting, but design patterns like Unit of Work and Repository can help you isolate your database code and replace it with an in-memory solution.
I use a code generator (CodeSmith with the .NetTiers template) to generate all the DAL code. I write unit tests for my code (business layer), and these tests are becoming pretty slow to run. The problem is that for each test I reset the database to a clean state, and as I run a lot of tests, the latency of the database operations adds up to a pretty big delay.
All DB operations are performed through a DataRepository class that is generated by .NetTiers. Do you know if there is a way to generate (or code myself) a mock DataRepository that would use in-memory storage instead of the database?
This way, I would be able to use the mock repository in my unit tests, speeding them up a lot, without actually changing anything in my current code!
Take a look at Dependency Injection (DI) and Inversion of Control (IoC) containers. Essentially, you will create an interface that a new mock DB object can implement, and then the DI framework will inject your mock DB when running tests and the real DB when running your app.
There are numerous free and open source libraries that can help you out. Since you are in C#, one of the new and up-and-coming DI libraries is Ninject. There are many others too. Check out this Wikipedia article for others and a high-level description.
From the description of the issue, I think you are performing integration tests, because your test makes use of the business layer, the DAL, and a live database.
For unit testing, you deal with one layer of code, with all other dependencies either mocked or stubbed. With this approach, your unit tests will be really fast to execute on every incremental code change.
There are various mocking frameworks that you can use, like Rhino Mocks, Moq, or TypeMock, to name a few. (In my project, I use Rhino Mocks to mock the DAL layer and unit test the business layer in isolation.)
Harsha
Some of our unit tests use data fetched from XML files, which were generated from a database, to mock DB access. DAL classes are replaced by mock ones because they are all stored in a DI container.
The generation of the XML files is custom code; if you find an open source solution for this, I'm happy to hear it.
Edit after Stefan's answer: I recall another team using SQL CE for their test database