C# integration tests with database transactions

When running C# integration tests, I'm trying to recreate my table inside a TransactionScope for each test, but I'm getting the following error:
DROP DATABASE statement cannot be used inside a user transaction.
What does this error mean, and how do I handle it?
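For context, here is a minimal sketch of the kind of initializer that hits this error (the connection string and database name are placeholders) - SQL Server refuses to run DROP DATABASE inside a user transaction, which is exactly what the ambient TransactionScope creates once the connection enlists:

using System.Data.SqlClient;
using System.Transactions;

static void RecreateDatabase(string masterConnectionString) // assumed to point at master
{
    using (var scope = new TransactionScope())
    using (var connection = new SqlConnection(masterConnectionString))
    {
        connection.Open(); // enlists in the ambient transaction started by TransactionScope

        using (var command = connection.CreateCommand())
        {
            // Fails with "DROP DATABASE statement cannot be used inside a user transaction."
            command.CommandText = "DROP DATABASE TestDb;";
            command.ExecuteNonQuery();
        }

        scope.Complete();
    }
}

One common way around it is to run the DROP/CREATE outside the TransactionScope (for example on a connection with Enlist=false in its connection string) and keep the transaction only for the data changes made by the test.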

Related

System test - how to test the database without actually saving data

I currently wrap my C# database-insertion tests in using (TransactionScope scope = new TransactionScope()) so that nothing is actually saved to the database during unit testing. But for system testing I am running the tests against a different server to check the status code for different scenarios. Each time I run the test it actually saves data to the database, and the next run fails because the record is a duplicate. I tried to use a transaction scope, but it doesn't seem to work. Please help.
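For reference, a minimal sketch of the unit-test pattern described above (the MSTest attribute, entity, context, and connection-string names are assumptions) - because scope.Complete() is never called, the insert rolls back when the scope is disposed and nothing is persisted:

using System.Linq;
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestMethod]
public void Insert_is_rolled_back_after_the_test()
{
    using (var scope = new TransactionScope())
    using (var context = new MyDataContext(connectionString)) // hypothetical data context
    {
        context.Customers.InsertOnSubmit(new Customer { Name = "Test" });
        context.SubmitChanges();

        Assert.AreEqual(1, context.Customers.Count(c => c.Name == "Test"));
    } // scope disposed without Complete(): the transaction rolls back here
}

Note that this only protects changes made through connections opened inside the scope. A system test that drives a separate web server over HTTP does its database work on that server's own connections, which never join this transaction - which is why the data still gets saved.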

Handling concurrency while dealing with Database snapshots

I am writing unit tests in .NET that are heavily dependent on a database. The unit tests modify the data in the database, and I want to revert the database to its initial state afterwards. I am planning to follow the approach below to revert the database to its pre-test state.
Approach: create a database snapshot before running each unit test and roll the database back to that snapshot after the test finishes.
The problem: many users can run the test cases simultaneously, which can result in different (inconsistent) snapshots. How can I handle this concurrency problem? I do not want to pollute the database in any way for the sake of running test cases.
I am using SQL Server 2012 and .NET (C#) for accessing the database.
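A hedged sketch of the snapshot approach with plain ADO.NET (the database name, logical file name, snapshot path, and a connection string pointing at master are all assumptions):

using System.Data.SqlClient;

static void CreateSnapshot(string masterConnectionString)
{
    Execute(masterConnectionString, @"
        CREATE DATABASE TestDb_Snapshot
        ON (NAME = TestDb_Data, FILENAME = 'C:\Snapshots\TestDb_Snapshot.ss')
        AS SNAPSHOT OF TestDb;");
}

static void RevertToSnapshot(string masterConnectionString)
{
    Execute(masterConnectionString, @"
        ALTER DATABASE TestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        RESTORE DATABASE TestDb FROM DATABASE_SNAPSHOT = 'TestDb_Snapshot';
        ALTER DATABASE TestDb SET MULTI_USER;
        DROP DATABASE TestDb_Snapshot;");
}

static void Execute(string connectionString, string sql)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var command = connection.CreateCommand())
        {
            command.CommandText = sql;
            command.ExecuteNonQuery();
        }
    }
}

On the concurrency question: reverting to a snapshot takes the whole database back in time for everyone, and the restore needs exclusive access, so this scheme only works if each test run gets its own database (or the runs are serialized); snapshots cannot isolate concurrent users of a single shared database.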

How to run MSTest unit tests so that an underlying database has time to build properly?

I've got a C# library that I've written a lot of unit tests for. The library is a data access layer and requires SQL Server's full-text indexing capabilities, which means LocalDB will not work. The unit tests connect to a local SQL Server instance. The project has an IDatabaseInitializer that drops and re-creates the database for each test, so every test gets a fresh set of data to work against and can run on its own - no ordering needed.
A problem I've had since day one, but never tackled, is that if I simply run all of the tests at once, some will fail. If I go back and run those failing tests individually, they succeed. It's not always the same tests that fail when I run them all at once.
I've assumed this is because they all run so quickly against the same database - perhaps the database is not properly deleted and re-created before the next test runs. On top of the database itself, I've also got a SQL Server full-text index that some of my tests need in order to pass. The full-text index populates in the background, so it is not "ready" immediately after the test database's tables are populated; it may take a second or two to build. This causes the tests that target the full-text index to always fail, unless I step through the initializer in a debugger and pause for a few seconds while the index builds.
My question is, is there any way to avoid this clashing of database rebuilds? As a bonus, is there a way to "slow down" the tests that need the full-text index, so that it has time to populate?
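For the bonus part of the question, here is a hedged sketch of one way to hold a test until the full-text catalog has finished populating, by polling FULLTEXTCATALOGPROPERTY (the catalog name, connection string, and timeout are assumptions):

using System;
using System.Data.SqlClient;
using System.Threading;

static void WaitForFullTextPopulation(string connectionString, TimeSpan timeout)
{
    var deadline = DateTime.UtcNow + timeout;

    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        while (DateTime.UtcNow < deadline)
        {
            using (var command = connection.CreateCommand())
            {
                // PopulateStatus returns 0 when the catalog is idle,
                // i.e. no population is currently running.
                command.CommandText =
                    "SELECT FULLTEXTCATALOGPROPERTY('MyCatalog', 'PopulateStatus');";
                if ((int)command.ExecuteScalar() == 0)
                    return;
            }

            Thread.Sleep(250);
        }
    }

    throw new TimeoutException("Full-text catalog did not finish populating in time.");
}

Calling something like this at the end of the database initializer would replace the manual pause in the debugger.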
You can use a database with fixed data. For your tests, begin a transaction in TestInitialize and roll it back in TestCleanup.
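A minimal MSTest sketch of that pattern (the connection string and table are placeholders): the TransactionScope is opened in TestInitialize and disposed without Complete() in TestCleanup, so everything a test changes is rolled back.

using System.Data.SqlClient;
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RepositoryTests
{
    // Placeholder connection string and table name.
    private const string ConnectionString = "Server=(local);Database=TestDb;Integrated Security=SSPI;";
    private TransactionScope _scope;

    [TestInitialize]
    public void TestInitialize()
    {
        _scope = new TransactionScope();
    }

    [TestCleanup]
    public void TestCleanup()
    {
        // Disposing without calling Complete() rolls the transaction back.
        _scope.Dispose();
    }

    [TestMethod]
    public void Inserted_row_is_discarded_after_the_test()
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open(); // enlists in the ambient transaction
            using (var command = connection.CreateCommand())
            {
                command.CommandText = "INSERT INTO Customers (Name) VALUES ('Test');";
                command.ExecuteNonQuery();
            }
        }
    }
}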

How would I unit test database logic?

I am still having trouble getting over a small issue when it comes to TDD.
I need a method that will get a certain filtered record set from the data layer (Linq2SQL). Note that I am using the Linq classes that are generated from the DBML. The problem is that I want to write a test for this.
Do I:
a) first insert the records in the test, then execute the method and test the results
b) use data that might already be in the database? I'm not keen on this because it could cause things to break.
c) whatever you suggest?
You should choose option a).
A unit test should be repeatable and has to be fully under your control. So for the test to be meaningful, it is absolutely necessary that the test itself prepares the data for its execution - only that way can you rely on the test outcome.
Use a test database and clean it each time you run the tests. Or you might try to create a mock object.
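As a concrete illustration of option a), here is a hedged sketch with a Linq2SQL context generated from a DBML (the context, table, column, and method-under-test names are invented): the test inserts exactly the rows it relies on, calls the method under test, and asserts on the result, with a TransactionScope making sure nothing persists.

using System.Linq;
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestMethod]
public void GetActiveCustomers_returns_only_active_rows()
{
    using (var scope = new TransactionScope())
    using (var context = new MyDataContext(connectionString)) // connection string defined elsewhere
    {
        // Arrange: create the exact data the test depends on.
        context.Customers.InsertOnSubmit(new Customer { Name = "Active", IsActive = true });
        context.Customers.InsertOnSubmit(new Customer { Name = "Inactive", IsActive = false });
        context.SubmitChanges();

        // Act: run the filtered query under test (defined in the data layer).
        var result = GetActiveCustomers(context).ToList();

        // Assert.
        Assert.IsTrue(result.Any(c => c.Name == "Active"));
        Assert.IsFalse(result.Any(c => c.Name == "Inactive"));
    } // disposed without Complete(): the inserts are rolled back
}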
When I run tests using a database, I usually use an in-memory SQLite database.
Using an in-memory DB generally makes the tests quicker.
Also it is easy to maintain, because the database is "gone" after you close the connection to it.
In the test setup, I set up the db connection and I create the database schema.
In the test, I insert the data needed by the test. (your option a))
In the test teardown, I close the connection to the db.
I used this approach successfully for my NHibernate applications (howto 1 | howto 2 + nice summary), but I'm not that familiar with Linq2SQL.
Some pointers on running SQLite and Linq2SQL are on SO (link 1 | link 2).
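For anyone wanting to try this outside NHibernate, here is a hedged sketch of the in-memory setup with the Microsoft.Data.Sqlite ADO.NET provider (the schema is an invented example); the key point is that the database only exists while the connection stays open:

using System;
using Microsoft.Data.Sqlite;

public sealed class InMemoryDatabaseFixture : IDisposable
{
    public SqliteConnection Connection { get; }

    public InMemoryDatabaseFixture()
    {
        Connection = new SqliteConnection("Data Source=:memory:");
        Connection.Open(); // the database lives only as long as this connection

        using (var command = Connection.CreateCommand())
        {
            command.CommandText =
                "CREATE TABLE Customers (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL);";
            command.ExecuteNonQuery();
        }
    }

    public void Dispose()
    {
        // Closing the connection discards the in-memory database.
        Connection.Dispose();
    }
}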
Some people argue that a test using a database isn't a unit test. Regardless, I believe there are situations where you want automated testing against a database:
You can have an architecture / design, where the database is hard to mock out, for instance when using an ActiveRecord pattern, or when you're using Linq2SQL (although there is an interesting solution in one of the comments to Peter's answer)
You want to run integration tests, with the complete system of application and database
What I have done in the past:
Start a transaction
Delete all data from all the tables in the database
Setup the reference data all your tests need
Setup the test data you need in database tables
Run your test
Abort the transaction
This works well provided the database does not contain much data; otherwise it is slow, so you will want to use a dedicated test database. If you have a test database that is well controlled, you can just run the test in the transaction without needing to delete all the data first.
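A hedged sketch of that sequence with a plain SqlTransaction (the table names and seed data are placeholders, and the code under test would have to use the same connection and transaction):

using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        using (var command = connection.CreateCommand())
        {
            command.Transaction = transaction;

            // Delete all data from the tables the tests touch (child tables first).
            command.CommandText = "DELETE FROM Orders; DELETE FROM Customers;";
            command.ExecuteNonQuery();

            // Set up the reference data and test data the tests need.
            command.CommandText =
                "INSERT INTO Customers (Id, Name) VALUES (1, 'Reference customer');";
            command.ExecuteNonQuery();
        }

        // ... run the test against the same connection/transaction ...

        // Abort the transaction so the database is left exactly as it was.
        transaction.Rollback();
    }
}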
Try to design your system so that you can mock out the data access layer for most of your tests. It is valid (and often useful) to unit test database code, but the unit tests for your other code should not need to touch the database.
You should consider whether you would get more benefit from "end to end" system tests, with unit tests only for your "logic" code. This depends to a large extent on other factors within the project.

Can TransactionScope rollback be used with Selenium or WatiN?

I am trying to do some automated web testing of my ASP.NET application. I was hoping to use the AutoRollback attribute from Xunit.net extensions to undo any database changes that were made during the test. AutoRollback uses TransactionScope to start a transaction before the test and roll it back afterwards.
When I try to hit my web application during a transaction, it always times out. It seems like this should work - any ideas? Here is my test:
[Fact]
[AutoRollback]
public void Entity_should_be_in_list()
{
    Entity e = new Entity
    {
        Name = "Test",
    };
    dataContext.Entities.InsertOnSubmit(e);
    dataContext.SubmitChanges();

    selenium.Open("http://localhost/MyApp");
    Assert.True(selenium.IsTextPresent("Test"));
}
Your ASP.NET application has a separate database context and has no idea that you want it to join the transaction started by Xunit.net. Apparently, the database locks some resources when the transaction starts; the web application waits patiently for a while and eventually gives up.
I think your best bet is to start from an empty database and use a SQL script to create the schema and populate the lookup tables (your database is under source control, right?). Another approach is to back up the database before running the tests and then restore it once they finish.
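A hedged sketch of the backup/restore alternative (the database name, backup path, and a connection string pointing at master are assumptions):

using System.Data.SqlClient;

static void BackupDatabase(string masterConnectionString)
{
    Execute(masterConnectionString,
        @"BACKUP DATABASE MyApp TO DISK = 'C:\Backups\MyApp_PreTest.bak' WITH INIT;");
}

static void RestoreDatabase(string masterConnectionString)
{
    Execute(masterConnectionString, @"
        ALTER DATABASE MyApp SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        RESTORE DATABASE MyApp FROM DISK = 'C:\Backups\MyApp_PreTest.bak' WITH REPLACE;
        ALTER DATABASE MyApp SET MULTI_USER;");
}

static void Execute(string connectionString, string sql)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var command = connection.CreateCommand())
        {
            command.CommandText = sql;
            command.ExecuteNonQuery();
        }
    }
}

Backing up and restoring a whole database is slow compared to rolling back a transaction, so it is usually done once per test run rather than once per test.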
