Background
We are slowly updating a very old application (based on "normal" SqlCommand queries and everything that comes with it) with some new code. To enhance stability and ease further development, we introduced NHibernate into the mix, along with good programming practices and whatnot.
So now, whenever a new module (not that the application is actually modular, but let's call it that) needs a large enough update, we break all its functionality out into the "new" world of NHibernate and create a facade for the old code, so it keeps working.
Setup
I am creating unit tests for the facades. For this, I created a base class that all unit test classes inherit from. This class, on TestInitialize:
creates the database schema using NHibernate's SchemaExport
creates a "legacy" schema that consists of all tables that are not yet mapped by NHibernate but are needed by the facades to work
Every TestClass (inheriting from the one above) has its own DataSet.sql, which is executed on TestInitialize and
fills mapped NHibernate tables with test-specific data
fills the legacy tables with test-specific data
fills a testExecutions table with input and expected output for each test run
The TestMethod of each TestClass then iterates over the testExecutions rows, creates the required (NHibernate) objects, calls the facade under test, and asserts the returned data against what was defined in the testExecutions table.
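For illustration, a condensed sketch of roughly what that base class looks like (the names and the script runner are simplified stand-ins, not the real code):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

public abstract class FacadeTestBase
{
    private static Configuration _configuration;
    private static ISessionFactory _sessionFactory;
    protected ISession Session;

    [TestInitialize]
    public void InitializeDatabase()
    {
        if (_configuration == null)
        {
            _configuration = new Configuration().Configure();          // hibernate.cfg.xml
            _sessionFactory = _configuration.BuildSessionFactory();
        }

        new SchemaExport(_configuration).Execute(false, true, false);  // mapped tables
        ExecuteScript("LegacySchema.sql");                              // not-yet-mapped legacy tables
        ExecuteScript(DataSetScript);                                   // per-test-class data, incl. testExecutions

        Session = _sessionFactory.OpenSession();
    }

    // e.g. the "DataSet.sql" of the concrete test class
    protected abstract string DataSetScript { get; }

    // plain SqlCommand batch runner against the test database
    protected void ExecuteScript(string path) { /* ... */ }
}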
Problem
The testing works fine, I get exactly the results I expect, but...
I have a big problem with this approach: every test run visible in the test output is actually many, many test executions. From the outside it just looks like a single test, even though the test itself runs the tested facade method many times over, with new data each time.
I read about data-driven unit tests and thought that this was exactly what I was already doing, so instead of rolling my own mechanism I decided to use that.
But the problem is: TestContext.DataRow obviously does not know about NHibernate, so I don't actually get the objects required for testing, but a DataRow object with all data filled in the "old" way of SQL objects.
Is there a way to "teach" DataSource to return Nhibernate objects? Do I have to write my own DataSource attribute to accomplish this? How would that need to look?
Or is there another way
Is there a way to make my TestMethods to record the iteration over the testExecutions the same way a Data Driven test would do? So I don't have one Test, but the actual amount of tests run inside the method?
Are you bound to MSTest? Consider using NUnit, which has extension points for exactly your scenario. Have a look at the [TestCaseSource] attribute (http://www.nunit.org/index.php?p=testCaseSource&r=2.5.9), which allows you to reference a method that supplies the test data. The data you source from there will appear as separate unit tests.
The method must be static, i.e. you have to work around passing the NHibernate session, for example by using a static member, which you can set either in the class or in a test setup method.
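A minimal, self-contained sketch of the mechanism; the inline data is a stand-in for reading the testExecutions table via a statically held NHibernate session:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class FacadeTests
{
    // The source must be static; in the real fixture it would read the
    // testExecutions rows (input + expected output) instead of inline values.
    private static IEnumerable<TestCaseData> TestExecutions()
    {
        yield return new TestCaseData(2, 3).Returns(5);
        yield return new TestCaseData(10, -4).Returns(6);
    }

    [Test, TestCaseSource("TestExecutions")]
    public int EachRowBecomesItsOwnTest(int a, int b)
    {
        return a + b;   // stand-in for "build NHibernate objects, call the facade"
    }
}

Each yielded TestCaseData shows up as its own entry in the test results, which is exactly the reporting behaviour asked for above.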
Okay, I managed to get it working now, although I had to change quite a bit of code.
The root of my problem was that I was using [TestInitialize] to create both the database structure (schema, tables, etc.) including the setup data AND the test runs themselves.
MSTest and NUnit alike see things differently: the test runs need to exist BEFORE the tests are run. This means that the [DataSource] (or [TestCaseSource]) is queried before [TestInitialize] (or [SetUp], respectively) ever runs.
Obviously this didn't work in my case, because I only created the data for the test runs in the same methods that generate the table structure.
I reworked my test suites so that the test run data is contained in a separate file (Access DB), while the context (schema, tables, initial data) is created anew for each test method.
This would already work fine, if not for an annoying Visual Studio 2010 bug that causes the [TestInitialize] and [TestCleanup] methods to only run on the first iteration of a [DataSource]d test, so I had to put all of that into the normal constructor of the class...
Nonetheless, everything works as it should now and I can see the actual number of tests run :-)
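For reference, a stripped-down sketch of roughly where I ended up; the provider string, file name and column names are examples, not the exact code:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FacadeTests
{
    public TestContext TestContext { get; set; }

    public FacadeTests()
    {
        // Schema, tables and initial data are created here instead of in [TestInitialize],
        // to work around the VS2010 issue mentioned above.
        CreateSchemaAndInitialData();
    }

    [TestMethod]
    [DataSource("System.Data.OleDb",
        @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=|DataDirectory|\TestRuns.accdb",
        "testExecutions",
        DataAccessMethod.Sequential)]
    public void Facade_ReturnsExpectedOutput()
    {
        var input = TestContext.DataRow["Input"].ToString();
        var expected = TestContext.DataRow["ExpectedOutput"].ToString();
        // build the NHibernate objects from 'input', call the facade, assert against 'expected'
    }

    private void CreateSchemaAndInitialData() { /* SchemaExport + legacy scripts + initial data */ }
}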
Related
This question is NOT about what's here: Mocking EF DbContext with Moq and/or similar questions. I am well aware of that. Please, read the question before replying. Thanks.
We have a fairly complicated database, which has some, call them, "business objects" and some, call them, "data objects". The "business objects" are usually created or updated with every new user request and the "data objects" are fairly stable but may be occasionally created during user request if missing at the first call.
I want to create integration tests in a sandbox where I could pull the data objects out of the real database (because there are too many of them to mock) but control what happens with the business objects. For example, if I have a get-or-create workflow (with some validation, of course), then I want to explicitly test that whole workflow, after testing the get and the create workflows separately in other tests. However, with the real DB I can only test the create part of the workflow once; after the first test run the object exists, so every later run only hits the get path. Throw in that many tests are routinely run in parallel, and the results become unpredictable.
I wonder what the proper approach is to perform a partial "mock" of a database context, where most of the tables would come from the real DB but a few tables could be set up per test, e.g. as InMemoryDbSets.
Thanks a lot!
My scenario is the following:
I'm working on a system developed in C# ASP.NET (a big, huge and definitely organically grown system), and I'm trying to start writing some unit tests so I can begin refactoring (believe me, it needs refactoring; there are controllers with 10k, 12k lines).
The problem is that a lot of things in this system are related to the database (and the system is tightly coupled to it). The database context is instantiated in many pieces of code, and not injected.
So, my plan now is to mock some data into a local MDB file, so I can refactor the code and create unit tests that each have their own MDB (with the full database structure, but only the data they will use).
What am I thinking of? Something like this:
[TestMethod()]
public void AnyTest()
{
    var dbLogger = new DbLogger(); // this class is not created yet, it's just an example
    dbLogger.Start();

    WorstWrittenMethodEver();      // this method will call any number of other methods inside,
                                   // in an order and with a complexity I really don't know
                                   // (often very high); it will probably instantiate the
                                   // DataContext a lot of times and do a lot of data retrieval

    dbLogger.StopLog();
    Console.WriteLine(dbLogger.DataRetrieved); // and in this line I will print all the tables
                                               // and data retrieved between these two points
}
After that, I will take that data, mock it into one MDB file, and refactor the unit test above into a real unit test.
Is there any way to do that?
It looks like it is not a very easy task, and I think you should use some popular, well-tested library or tool. My recommendation is to use MiniProfiler. It lets you capture all SQL queries (it also includes support for LINQ to SQL), and it has a good UI and an API to interact with from your code. To get the data for all SQL queries you can use the following method:
MiniProfiler.Current.GetSqlTimings();
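A rough sketch of how that could look around the snippet from the question. Note this assumes an older (v2-era) MiniProfiler API where Start()/Stop() and GetSqlTimings() exist, and it only sees queries that go through a profiled connection (e.g. a ProfiledDbConnection wrapping the real one); treat it as illustrative:

MiniProfiler.Start();

WorstWrittenMethodEver();                        // everything it executes through a
                                                 // profiled connection gets captured

foreach (var timing in MiniProfiler.Current.GetSqlTimings())
    Console.WriteLine(timing.CommandString);     // the SQL text that was executed

MiniProfiler.Stop();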
We are trying to add unit testing to our business layer. The technology stack is ASP.NET Web Forms, WCF, and ADO.NET calling stored procedures. The business layer calls static methods on data classes, which makes it difficult to introduce DI without making a lot of changes.
It may not be a conventional way to do it, but I'm thinking of keeping the DB as a dependency in the unit tests, using it as a test DB... either an existing frozen DB or mocked data in tables. I was wondering about the feasibility of using a test DB where the stored procedures are used like mocks: instead of duplicating the entire DB, just create tables named after the stored procedures.
Each stored procedure would just query its one table and return static data... essentially trying to emulate mocking data with something like Moq, but from a DB perspective.
Can anyone recommend any designs that would include the DB in testing, that are still deterministic?
If you want to use the DB in the tests and have everything be deterministic, then you need each test to have its own DB, which means creating (and potentially populating) a new DB for each test.
Depending on how your DB layer creates its connection, this is feasible. I have done something similar by generating a database using LocalDB in the test setup, with a GUID for the name, and then deleting the database again in the teardown at the end of the test.
It ends up being reasonably slow (not surprisingly), but having the DBs created on a RAM disk drive helped with that.
This worked OK for empty DBs that then had their schemas created by EF, but if you need a fixed set of data in the DB then you might need to restore it from a backup in the setup of the test.
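A minimal sketch of that setup/teardown pattern, assuming NUnit, SQL Server LocalDB and System.Data.SqlClient; the instance name and DROP handling are illustrative and may need adjusting for your installation:

using System;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class RepositoryTests
{
    // The LocalDB instance name varies by installation ((localdb)\v11.0, MSSQLLocalDB, ...).
    private const string MasterConnectionString = @"Server=(localdb)\MSSQLLocalDB;Integrated Security=true;";
    private string _dbName;

    [SetUp]
    public void CreateDatabase()
    {
        _dbName = "TestDb_" + Guid.NewGuid().ToString("N");
        Execute(string.Format("CREATE DATABASE [{0}]", _dbName));
        // ...then let EF (or your scripts) create the schema, and point the code
        // under test at a connection string with Database = _dbName.
    }

    [TearDown]
    public void DropDatabase()
    {
        Execute(string.Format(
            "ALTER DATABASE [{0}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DROP DATABASE [{0}];",
            _dbName));
    }

    private static void Execute(string sql)
    {
        using (var connection = new SqlConnection(MasterConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}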
It seems to me that it's going to be a lot of work setting up your stored procedures to do what you want them to do when they are called for each test, and you still end up with the speed problems that databases always present. I'd recommend you do one or both of the following instead:
Use TypeMock, which has a powerful isolator tool. It instruments the code at runtime so that your unit test can mock even static methods.
Instead of just unit tests, try creating "acceptance tests," which focus on mimicking a complete user experience: log in, create object, view object (verify object looks right), update object, view object again (ditto), delete object (verify object is deleted). Begin each of these tests by setting up all the objects you'll need for this particular test, and end by deleting all those objects, so that other tests can run based on an assumed starting state.
The first approach gives you the speed and mockability of true "unit" tests, whereas the second one allows you to exercise much more of your code, increasing the likelihood that you'll catch bugs, even in things like stored procedures.
I'm just starting to understand the importance of unit testing in a C# environment. Now I'm wondering how to implement a black-box unit test that does inserts, deletes and updates on a database and then cleans up the data after a successful test.
How do you actually roll back the data that was inserted/updated/deleted? Do you simply reset the index and remove the inserted rows, or restore the original state of the table by creating a script?
please guide me, I appreciate it. thanks!
What we do here in our development cycle: we always keep unit testing and load testing in mind while developing the application, so we add a UserId column (or similar) to every table in our database. When we run a load test we insert UserId = -1 into that column to mark the rows as load-test data, and -2 in the case of unit-test data. We then have predefined jobs on the database side that clean up that data after some time.
As long as your test is concise - and I presume it must be, for testing your DAL - why not just do the inserts/updates/deletes in a transaction that is rolled back once your test is complete?
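A minimal sketch of that idea, assuming MSTest and System.Transactions; disposing the scope without calling Complete() rolls back everything the test did:

using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerDalTests
{
    private TransactionScope _scope;

    [TestInitialize]
    public void BeginTransaction()
    {
        _scope = new TransactionScope();   // ambient transaction for everything the test does
    }

    [TestCleanup]
    public void RollbackTransaction()
    {
        _scope.Dispose();                  // no Complete() call => inserts/updates/deletes are rolled back
    }

    [TestMethod]
    public void Insert_AddsRow()
    {
        // new CustomerDal().Insert(...);  // hypothetical DAL call; it enlists in the ambient transaction
        // Assert on the data here, while the transaction is still open.
    }
}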
Another option is to just use specific Update / Delete scripts in your test cleanup methods to roll back the exact changes that you updated / inserted to their pre-test values.
I think deleting the rows in the CleanUp method should be a good choice.
This way you are also always testing your row-deletion code.
I was doing a research recently and found this thread as well. Here are my findings, which might be of some help for the future readers:
Make tests responsible for restoring the data they change. Something like an undo for the command. Tests usually know what data changes are expected, so in theory they are able to revert those. This will surely involve additional work and could introduce some noise, unless it's automated, e.g. you might try to keep track of the data created/updated in the test in some generic way;
Wrap each test in a transaction and revert it afterwards. Pretty much the same as the one above, but easier to implement with things like TransactionScope. Might not be suitable if the app creates its own transactions, as transactions aren't composable in general, or if the app doesn't work with TransactionScope (there are issues with Entity Framework, for example);
Assert in some smart way on the data relevant to the test only. Then you won't need to clean up anything, unless there is too much data. E.g. you might make your app aware of tests and set a specific value in a test-only column added to every table. I've never tried that in practice;
Create and initialize fresh database from scratch for every test;
Use database backups to restore database to the point you need;
Use database snapshots for the restore;
Execute scripts to delete all the data and insert it again.
I personally use the latter and even ended up implementing a Reseed library, which does all the work for me.
Test frameworks usually do allow executing some logic before and after each test/test fixture run, which will most likely be needed for the ideas above. E.g. for NUnit this is implemented with the OneTimeSetUp, OneTimeTearDown, FixtureSetUp, FixtureTearDown, SetUp and TearDown attributes.
One option is to use a Mock database in place of a real database. Here's a link that describes it.
I am still having trouble getting over a small issue when it comes to TDD.
I need a method that will get a certain record set of filtered data from the data layer (Linq2SQL). Please note that I am using the LINQ classes that are generated from the DBML. Now the problem is that I want to write a test for this.
Do I:
a) first insert the records in the test and then execute the method and test the results
b) use data that might be in the database? Not too keen on this approach because it could cause things to break.
c) whatever you suggest?
You should choose option a).
A unit test should be repeatable and has to be fully under your control. So for the test to be meaningful it is absolutely necessary that the test itself prepares the data for its execution - only this way you can rely on the test outcome.
Use a test database and clean it each time you run the tests. Or you might try to create a mock object.
When I run tests using a database, I usually use an in-memory SQLite database.
Using an in memory db generally makes the tests quicker.
Also it is easy to maintain, because the database is "gone" after you close the connection to it.
In the test setup, I set up the db connection and I create the database schema.
In the test, I insert the data needed by the test. (your option a))
In the test teardown, I close the connection to the db.
I used this approach successfully for my NHibernate applications (howto 1 | howto 2 + nice summary), but I'm not that familiar with Linq2SQL.
Some pointers on running SQLite and Linq2SQL are on SO (link 1 | link 2).
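A rough sketch of how I wire this up for NHibernate, using property and class names from the NHibernate 3.x-era API (newer versions differ slightly, and the mapping assembly name is a placeholder):

using NHibernate;
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

public abstract class InMemoryDatabaseTest
{
    private static Configuration _configuration;
    private static ISessionFactory _sessionFactory;
    protected ISession Session;

    public void SetUpDatabase()
    {
        if (_configuration == null)
        {
            _configuration = new Configuration()
                .SetProperty(Environment.Dialect, typeof(NHibernate.Dialect.SQLiteDialect).AssemblyQualifiedName)
                .SetProperty(Environment.ConnectionDriver, typeof(NHibernate.Driver.SQLite20Driver).AssemblyQualifiedName)
                .SetProperty(Environment.ConnectionString, "Data Source=:memory:;Version=3;New=True;")
                .SetProperty(Environment.ReleaseConnections, "on_close")   // keep the in-memory DB alive per session
                .AddAssembly("My.Mappings.Assembly");                       // placeholder mapping assembly
            _sessionFactory = _configuration.BuildSessionFactory();
        }

        Session = _sessionFactory.OpenSession();
        // Create the schema on the session's own connection, so it lives in the same in-memory DB.
        new SchemaExport(_configuration).Execute(false, true, false, Session.Connection, null);
    }

    public void TearDownDatabase()
    {
        Session.Dispose();   // closing the connection discards the in-memory database
    }
}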
Some people argue that a test using a database isn't a unit test. Regardless, I believe there are situations where you want automated testing using a database:
You can have an architecture/design where the database is hard to mock out, for instance when using an ActiveRecord pattern or when you're using Linq2SQL (although there is an interesting solution in one of the comments to Peter's answer)
You want to run integration tests, with the complete system of application and database
What I have done in the past:
Start a transaction
Delete all data from all the tables in the database
Setup the reference data all your tests need
Setup the test data you need in database tables
Run your test
Abort the transaction
This works well provided your database does not have much data in it, otherwise it is slow. So you will wish to use a test database. If you have a test database that is well controlled, you could just run the test in the transaction without the need to delete all data first.
Try to design your system so you can mock the data access layer for most of your tests. It is valid (and often useful) to unit test database code; however, the unit tests for your other code should not need to touch the database.
You should consider whether you would get more benefit from "end to end" system tests, with unit tests only for your "logic" code. This depends to a large extent on other factors within the project.