The goal is simple: roll back data inserted by a unit test. Here is how it goes. In a unit test, a method is called that creates a new connection and inserts some data. After that, the unit test creates another connection, queries for what was inserted, and asserts on it. I was hoping to wrap these two steps in a TransactionScope, not call Complete, and see the inserted data rolled back. That's not happening. Am I doing something wrong, or am I just missing the point?
using (new TransactionScope())
{
    // call a method that inserts data
    var target = new ....
    target.DoStuffAndEndupWithDataInDb();

    // Now assert what has been added.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = conn.CreateCommand())
    {
        // Just read the data from DB
        cmd.CommandText = "SELECT...";
        conn.Open();
        int count = 0;
        using (var rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                // Read records here
                count++;
            }
        }

        // Expecting, say, 3 records here
        Assert.AreEqual(3, count);
    }
}
EDIT: I don't think I had DTC running and configured on my machine, so I started the service and tried to configure DTC, but now I am getting this error.
Are you using MSTest? Then you can use MSTestExtensions.
Your unit test needs to derive from MSTestExtensionsTestFixture, and your test needs to have the TestTransaction attribute; it uses AOP to automatically start a transaction and roll it back.
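A minimal sketch of what that looks like (the base class and attribute names come from MSTestExtensions as described above; the repository class is hypothetical):

[TestClass]
public class MyRepositoryTests : MSTestExtensionsTestFixture
{
    [TestMethod]
    [TestTransaction] // starts a transaction before the test and rolls it back afterwards
    public void DoStuff_InsertsExpectedRows()
    {
        var target = new MyRepository(); // hypothetical class under test
        target.DoStuffAndEndupWithDataInDb();
        // assert against the database here; the insert is rolled back automatically
    }
}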
I don't think you're missing the point but just attacking the problem incorrectly.
In NUnit terms, the concepts are the [SetUp] and [TearDown] methods. You've already defined the setup method in your description, and your tear-down method should just undo what the setup method did (assuming what you're unit testing has no residual side effects).
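For example, a minimal NUnit sketch of that shape (the table, column, and SQL are illustrative):

[TestFixture]
public class InsertTests
{
    [SetUp]
    public void SetUp()
    {
        // arrange: insert the rows the test needs
        Execute("insert into MyTable values (1),(2),(3)");
    }

    [TearDown]
    public void TearDown()
    {
        // undo exactly what SetUp did
        Execute("delete from MyTable where Id in (1,2,3)");
    }

    private static void Execute(string sql)
    {
        using (var connection = new SqlConnection("connection.string.here"))
        using (var command = connection.CreateCommand())
        {
            command.CommandText = sql;
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}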
Do you have the Distributed Transaction Coordinator properly configured? This is a big gotcha when trying to use TransactionScope like this... if it isn't configured, sometimes you'll get an error, but other times the transaction will just commit and not roll back.
I'd recommend looking at this article, which shows you all the various steps that need to be done in order to rollback your unit tests using MSDTC.
Your code should work as you expect. How are you adding data in DoStuffAndEndupWithDataInDb()? I'm wondering whether the data initialization is not participating in the transaction.
For reference, the following console application correctly outputs 3 rows, and does not commit the rows to the database (checked using SSMS).
public class Program
{
    private static void Main(string[] args)
    {
        using (var trx = new TransactionScope())
        {
            InitializeData();

            using (var connection = new SqlConnection("server=localhost;database=Test;integrated security=true"))
            using (var command = connection.CreateCommand())
            {
                command.CommandText = "select count(*) from MyTable";
                connection.Open();
                Console.WriteLine("{0} rows", command.ExecuteScalar());
            }
        }

        Console.ReadLine();
    }

    private static void InitializeData()
    {
        using (var connection = new SqlConnection("server=localhost;database=Test;integrated security=true"))
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "insert into MyTable values (1),(2),(3)";
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
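If you are not seeing the rollback, one thing worth checking is whether the connection opened inside DoStuffAndEndupWithDataInDb() actually enlists in the ambient transaction. A SqlConnection enlists automatically when it is opened inside the scope, but a connection string can opt out (a sketch; the connection string is illustrative):

// Enlist=false opts this connection out of the ambient TransactionScope,
// so its inserts commit immediately and will NOT be rolled back.
using (var connection = new SqlConnection("server=localhost;database=Test;integrated security=true;Enlist=false"))
{
    connection.Open(); // does not join Transaction.Current
    // work done here survives disposing the scope without calling Complete()
}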
Suppose I have a database method that looks like this:
public void insertRow(SqlConnection c)
{
    using (var cmd = new SqlCommand("insert into myTable values (@dt)", c))
    {
        cmd.Parameters.Add(new SqlParameter("@dt", SqlDbType.DateTime)).Value = DateTime.Now;
        cmd.ExecuteNonQuery();
    }
}
Now suppose I want to test this method. So, I write a test case that attempts to wrap this method inside a transaction, so that I can rollback the change after testing the result of the insertion:
public void testInsertRow()
{
    SqlConnection c = new SqlConnection("connection.string.here");
    c.Open();
    SqlTransaction trans = c.BeginTransaction();
    insertRow(c);
    // do something here to evaluate what happened, e.g. query the DB
    trans.Rollback();
}
This however fails to work, because:
ExecuteNonQuery requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.
Is there a way to accomplish this, without having to rewrite every single database method to accept a transaction, and then rewrite every single call to pass null into the method for the transaction?
For example, this would work:
public void insertRow(SqlConnection c, SqlTransaction t)
{
    using (var cmd = new SqlCommand("insert into myTable values (@dt)", c))
    {
        if (t != null) cmd.Transaction = t;
        cmd.Parameters.Add(new SqlParameter("@dt", SqlDbType.DateTime)).Value = DateTime.Now;
        cmd.ExecuteNonQuery();
    }
    c.Close();
}
But then I have to either rewrite each and every call to the database method to include that null parameter, or write overload signatures for each and every database method that automatically pass in a null, e.g.
public void insertRow(SqlConnection c) { insertRow(c, null); }
What's the best way to allow transaction-based testing of database calls?
You can use a TransactionScope to enlist the connections in a transaction automatically:
public void testInsertRow()
{
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        SqlConnection c = new SqlConnection("connection.string.here");
        c.Open();
        insertRow(c);
        // do something here to evaluate what happened, e.g. query the DB
        // do not call scope.Complete(), so we get a rollback.
    }
}
Now this will cause tests to block each other if you have multiple parallel tests running. If your database is set up to support it, you can use snapshot isolation so that updates from concurrent tests won't lock each other out.
public void testInsertRow()
{
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew,
        new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
    {
        SqlConnection c = new SqlConnection("connection.string.here");
        c.Open();
        insertRow(c);
        // do something here to evaluate what happened, e.g. query the DB
        // do not call scope.Complete(), so we get a rollback.
    }
}
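Enabling snapshot isolation is a one-time database setting. A sketch of doing it from code (the database name is illustrative; you could equally run the statement from SSMS):

using (var connection = new SqlConnection("connection.string.here"))
using (var command = connection.CreateCommand())
{
    // one-time setup: allow snapshot isolation on the test database
    command.CommandText = "ALTER DATABASE [Test] SET ALLOW_SNAPSHOT_ISOLATION ON";
    connection.Open();
    command.ExecuteNonQuery();
}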
Due to a few restrictions I can't use Entity Framework, and thus need to use SQL connections, commands and transactions manually.
While writing unit tests for the methods calling these data layer operations, I stumbled upon a few problems.
For the unit tests I NEED to run them in a transaction, as most of the operations change data by their nature, and running them outside a transaction would change the whole base data. So I need to put a transaction around them (with no commit fired at the end).
Now I have two different variants of how these BL methods work.
A few have transactions inside them, while others have no transactions at all. Both variants cause problems.
Layered transaction: here I get errors that the DTC cancelled the distributed transaction due to timeouts (although the timeout is set to 15 minutes and it runs for only 2 minutes).
Only one transaction: here I get an error about the state of the transaction when I reach the "new SqlCommand" line in the called method.
My question is: what can I do to correct this and get unit testing working with manual plain and layered transactions?
Unit testing method example:
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        MyBLMethod();
    }
}
Example of a method that uses a transaction (very simplified):
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        SqlCommand command = new SqlCommand();
        command.Connection = connection;
        command.Transaction = transaction;
        command.CommandTimeout = 900; // Wait 15 minutes before a timeout
        command.CommandText = "INSERT ......";
        command.ExecuteNonQuery();

        // Following commands
        ....

        transaction.Commit();
    }
}
Example of a method that does not use a transaction:
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand();
    command.Connection = connection;
    command.CommandTimeout = 900; // Wait 15 minutes before a timeout
    command.CommandText = "INSERT ......";
    command.ExecuteNonQuery();
}
On the face of it, you have a few options, depending upon what you want to test and your ability to spend money / change your code base.
At the moment, you’re effectively writing integration tests. If the database isn’t available then your tests will fail. This means the tests can be slow, but on the plus side, if they pass you’re pretty confident that your code can hit the database correctly.
If you don’t mind hitting the database, then the minimum impact to changing your code / spending money would be for you to allow the transactions to complete and verify them in the database. You can either do this by taking database snapshots and resetting the database each test run, or by having a dedicated test database and writing your tests in such a way that they can safely hit the database over and over again and then verified. So for example, you can insert a record with an incremented id, update the record, and then verify that it can be read. You may have more unwinding to do if there are errors, but if you’re not modifying the data access code or the database structure that often then this shouldn’t be too much of an issue.
If you’re able to spend some money and you want to actually turn your tests into unit tests, so that they don’t hit the database, then you should consider looking into TypeMock. It’s a very powerful mocking framework that can do some pretty scary stuff. I believe it uses the profiling API to intercept calls, rather than the approach used by frameworks like Moq. There's an example of using TypeMock to mock a SqlConnection here.
If you don’t have money to spend / you’re able to change your code and don’t mind continuing to rely on the database, then you need some way to share your database connection between your test code and your data access methods. Two approaches spring to mind: either inject the connection information into the class, or make it available by injecting a factory that gives access to the connection information (in which case you can inject a mock of the factory during testing that returns the connection you want).
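As a sketch of the factory idea (all names here are illustrative):

public interface IConnectionFactory
{
    SqlConnection CreateConnection();
}

public class DefaultConnectionFactory : IConnectionFactory
{
    public SqlConnection CreateConnection()
    {
        return new SqlConnection(Properties.Settings.Default.ConnectionString);
    }
}

// In tests, inject an implementation that returns the connection the test
// controls, so the test can wrap everything in its own transaction and roll it back.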
If you go with the above approach, rather than directly injecting SqlConnection, consider injecting a wrapper class that is also responsible for the transaction. Something like:
public class MySqlWrapper : IDisposable {
    public SqlConnection Connection { get; set; }
    public SqlTransaction Transaction { get; set; }

    private int _transactionCount = 0;

    public void BeginTransaction() {
        _transactionCount++;
        if (_transactionCount == 1) {
            Transaction = Connection.BeginTransaction();
        }
    }

    public void CommitTransaction() {
        _transactionCount--;
        if (_transactionCount == 0) {
            Transaction.Commit();
            Transaction = null;
        }
        if (_transactionCount < 0) {
            throw new InvalidOperationException("Commit without Begin");
        }
    }

    public void Rollback() {
        _transactionCount = 0;
        Transaction.Rollback();
        Transaction = null;
    }

    public void Dispose() {
        if (null != Transaction) {
            Transaction.Dispose();
            Transaction = null;
        }
        Connection.Dispose();
    }
}
This will stop nested transactions from being created and committed.
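Usage would then look something like this (a sketch; it assumes MyBLMethod takes the wrapper and sets each command's Transaction property from wrapper.Transaction):

using (var wrapper = new MySqlWrapper { Connection = new SqlConnection(Properties.Settings.Default.ConnectionString) })
{
    wrapper.Connection.Open();
    wrapper.BeginTransaction();   // the test owns the outer transaction
    MyBLMethod(wrapper);          // inner Begin/Commit calls only adjust the count
    wrapper.Rollback();           // everything the test did is undone
}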
If you’re more willing to restructure your code, then you might want to wrap your data access code in a more mockable way. So, for example, you could push your core database access functionality into another class. Depending on what you’re doing you’ll need to expand on it; however, you might end up with something like this:
public interface IMyQuery {
    string GetCommand();
}

public class MyInsert : IMyQuery {
    public string GetCommand() {
        return "INSERT ...";
    }
}

class DBNonQueryRunner {
    public void RunQuery(IMyQuery query) {
        using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString)) {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction()) {
                SqlCommand command = new SqlCommand();
                command.Connection = connection;
                command.Transaction = transaction;
                command.CommandTimeout = 900; // Wait 15 minutes before a timeout
                command.CommandText = query.GetCommand();
                command.ExecuteNonQuery();
                transaction.Commit();
            }
        }
    }
}
This allows you to unit test more of your logic, like the command generation code, without having to worry about actually hitting the database, and you can test your core data access code (the runner) against the database once, rather than for every command you want to run. I would still write integration tests for all data access code, but I’d only tend to run them whilst actually working on that section of code (to ensure column names etc. have been specified correctly).
I am working on an existing application. This application reads data from a huge file and then, after doing some calculations, it stores the data in another table.
But the loop doing this (see below) is taking a really long time. Since the file sometimes contains thousands of records, the entire process takes days.
Can I replace this foreach loop with something else? I tried using Parallel.ForEach and it did help. I am new to this, so I would appreciate your help.
foreach (record someRecord in someReport.r)
{
    try
    {
        using (var command = new SqlCommand("[procname]", sqlConn))
        {
            command.CommandTimeout = 0;
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add(…);

            IAsyncResult result = command.BeginExecuteReader();
            while (!result.IsCompleted)
            {
                System.Threading.Thread.Sleep(10);
            }
            command.EndExecuteReader(result);
        }
    }
    catch (Exception e)
    {
        …
    }
}
After reviewing the answers, I removed the async calls and edited the code as below. But this did not improve performance.
using (command = new SqlCommand("[sp]", sqlConn))
{
    command.CommandTimeout = 0;
    command.CommandType = CommandType.StoredProcedure;

    foreach (record someRecord in someReport.r)
    {
        command.Parameters.Clear();
        command.Parameters.Add(....);
        command.Prepare();

        using (dr = command.ExecuteReader())
        {
            while (dr.Read())
            {
                if (...)
                {
                }
                else if (...)
                {
                }
            }
        }
    }
}
Instead of opening a SQL connection for every loop iteration, have you considered extracting the whole data set from SQL Server and processing it in memory?
Edit: Decided to further explain what I meant.
You can do the following, in pseudo code:
Use a select * to get all the information from the database and store it in a list of your class, or in a dictionary.
Do your foreach (record someRecord in someReport) and do the condition matching as usual.
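A sketch of that idea (the table, columns, and key are illustrative):

var lookup = new Dictionary<int, string>();
using (var connection = new SqlConnection("connection.string.here"))
using (var command = new SqlCommand("select Id, Value from MyTable", connection))
{
    // one round-trip: pull the whole set and index it by key
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            lookup[reader.GetInt32(0)] = reader.GetString(1);
        }
    }
}

foreach (record someRecord in someReport.r)
{
    // condition matching against lookup here, with no database round-trip per record
}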
Step 1: Ditch the attempt at async. It isn't implemented properly and you're blocking anyway, so just execute the procedure and see if that helps.
Step 2: Move the SqlCommand outside of the loop and reuse it for each iteration. That way you don't incur the cost of creating and destroying it for every item in your loop.
Warning: Make sure you reset/clear/remove parameters you don't need from the previous iteration. We did something like this with optional parameters and had 'bleed-through' from the previous iteration because we didn't clean up parameters we didn't need!
Your biggest problem is that you're looping over this:
IAsyncResult result = command.BeginExecuteReader();
while (!result.IsCompleted)
{
    System.Threading.Thread.Sleep(10);
}
command.EndExecuteReader(result);
The entire idea of the asynchronous model is that the calling thread (the one doing this loop) should be spinning up ALL of the asynchronous tasks using the Begin method before starting to work with the results with the End method. If you are using Thread.Sleep() within your main calling thread to wait for an asynchronous operation to complete (as you are here), you're doing it wrong, and what ends up happening is that each command, one at a time, is being called and then waited for before the next one starts.
Instead, try something like this:
public void BeginExecutingCommands(Report someReport)
{
    foreach (record someRecord in someReport.r)
    {
        var command = new SqlCommand("[procname]", sqlConn);
        command.CommandTimeout = 0;
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.Add(…);
        command.BeginExecuteReader(ReaderExecuted,
            new object[] { command, someReport, someRecord });
    }
}

void ReaderExecuted(IAsyncResult result)
{
    var state = (object[])result.AsyncState;
    var command = state[0] as SqlCommand;
    var someReport = state[1] as Report;
    var someRecord = state[2] as Record;
    try
    {
        using (SqlDataReader reader = command.EndExecuteReader(result))
        {
            // work with reader, command, someReport and someRecord to do what you need.
        }
    }
    catch (Exception ex)
    {
        // handle exceptions that occurred during the async operation here
    }
}
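One caveat with this pattern: BeginExecutingCommands returns as soon as all the readers have been started, so the caller needs some way to wait for the callbacks to finish. A minimal sketch using CountdownEvent (assuming someReport.r exposes a count):

private CountdownEvent _pending;

public void ExecuteAllAndWait(Report someReport)
{
    _pending = new CountdownEvent(someReport.r.Count);
    BeginExecutingCommands(someReport);
    _pending.Wait(); // blocks until every ReaderExecuted callback has signalled
}

// and at the end of ReaderExecuted, in a finally block:
//     _pending.Signal();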
On the SQL side, at the other end of a write is one disk; you can rarely write faster in parallel, and in fact parallelism often slows things down due to index fragmentation. If you can, sort the data by the primary (clustered) key before loading. For a big load, even disable the other indexes, load the data, then rebuild the indexes.
I'm not really sure what you were doing in the async code, but it was certainly not doing what you expected, as it was waiting on itself.
try
{
    using (var command = new SqlCommand("[procname]", sqlConn))
    {
        command.CommandTimeout = 0;
        command.CommandType = CommandType.StoredProcedure;

        foreach (record someRecord in someReport.r)
        {
            command.Parameters.Clear();
            command.Parameters.Add(…);

            using (var rdr = command.ExecuteReader())
            {
                while (rdr.Read())
                {
                    …
                }
            }
        }
    }
}
catch (…)
{
    …
}
As we were talking about in the comments, storing this data in memory and working with it there may be a more efficient approach.
So one easy way to do that is to start with Entity Framework. Entity Framework will automatically generate the classes for you based on your database schema. Then you can import a stored procedure which holds your SELECT statement. The reason I suggest importing a stored proc into EF is that this approach is generally more efficient than doing your queries in LINQ against EF.
Then run the stored proc and store the data in a List like this...
var data = db.MyStoredProc().ToList();
Then you can do anything you want with that data. Or as I mentioned, if you're doing a lot of lookups on primary keys then use ToDictionary() something like this...
var data = db.MyStoredProc().ToDictionary(k => k.MyPrimaryKey);
Either way, you'll be working with your data in memory at this point.
It seems that executing your SQL command puts a lock on some required resources, and that is what forced you to use async methods (my guess). If the database is not in use, try getting exclusive access to it. Even then, there may be internal transactions due to data-model complexity; consider consulting the database designer.
I would like to run multiple insert statements on multiple tables. I am using dapper.net. I don't see any way to handle transactions with dapper.net.
Please share your ideas on how to use transactions with dapper.net.
Here is the code snippet:
using System.Transactions;
// ...

using (var transactionScope = new TransactionScope())
{
    DoYourDapperWork();
    transactionScope.Complete();
}
Note that you need to add a reference to the System.Transactions assembly, because it is not referenced by default.
I preferred to use a more intuitive approach by getting the transaction directly from the connection:
// This called method will get a connection, and open it if it's not yet open.
using (var connection = GetOpenConnection())
using (var transaction = connection.BeginTransaction())
{
    connection.Execute(
        "INSERT INTO data(Foo, Bar) values (@Foo, @Bar);", listOf5000Items, transaction);
    transaction.Commit();
}
There are 3 approaches to doing transactions in Dapper.
Simple Transaction
Transaction from Transaction Scope
Using Dapper Transaction (additional NuGet package and most favored approach)
You can find out more about these transaction approaches from the official tutorial website here.
For reference here's a breakdown of the transaction approaches
1. Simple Transaction
In this example, you create a transaction on an existing db connection, and then pass the transaction to Dapper's Execute method (it is an optional parameter).
Once you've done all your work, simply commit the transaction.
string sql = "INSERT INTO Customers (CustomerName) Values (@CustomerName);";

using (var connection = new SqlConnection(FiddleHelper.GetConnectionStringSqlServerW3Schools()))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        connection.Execute(sql, new { CustomerName = "Mark" }, transaction: transaction);
        connection.Execute(sql, new { CustomerName = "Sam" }, transaction: transaction);
        connection.Execute(sql, new { CustomerName = "John" }, transaction: transaction);

        transaction.Commit();
    }
}
2. Transaction from Transaction Scope
If you'd like to create a transaction scope, you will need to do so before the db connection is created. Once you've created the transaction scope, you can simply perform all your operations and then make a single call to complete the transaction, which commits all the commands:
using (var transaction = new TransactionScope())
{
    var sql = "INSERT INTO Customers (CustomerName) Values (@CustomerName);";

    using (var connection = My.ConnectionFactory())
    {
        connection.Open();

        connection.Execute(sql, new { CustomerName = "Mark" });
        connection.Execute(sql, new { CustomerName = "Sam" });
        connection.Execute(sql, new { CustomerName = "John" });
    }

    transaction.Complete();
}
3. Using Dapper Transaction
In my opinion, this is the most favorable approach to achieving transactions in code, because it makes the code easy to read and easy to implement. There is an extended implementation of SqlTransaction called Dapper Transaction (which you can find here), which allows you to run your SQL executes directly off the transaction.
string sql = "INSERT INTO Customers (CustomerName) Values (@CustomerName);";

using (var connection = new SqlConnection(FiddleHelper.GetConnectionStringSqlServerW3Schools()))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        transaction.Execute(sql, new { CustomerName = "Mark" });
        transaction.Execute(sql, new { CustomerName = "Sam" });
        transaction.Execute(sql, new { CustomerName = "John" });

        transaction.Commit();
    }
}
You should be able to use TransactionScope since Dapper runs just ADO.NET commands.
using (var scope = new TransactionScope())
{
    // open connection
    // insert
    // insert
    scope.Complete();
}
Considering all your tables are in a single database, I disagree with the TransactionScope solution suggested in some answers here. Refer to this answer.
TransactionScope is generally used for distributed transactions; a transaction spanning different databases, possibly on different systems. This needs some configuration on the operating system and on SQL Server, without which it will not work. It is not recommended if all your queries are against a single instance of a database.
But with a single database it may be useful when you need to include code in the transaction that is not under your control. With a single database, it does not need special configuration either.
connection.BeginTransaction is the ADO.NET syntax for implementing a transaction (in C#, VB.NET, etc.) against a single database. It does not work across multiple databases.
So, connection.BeginTransaction() is the better way to go.
An even better way to handle the transaction is to implement a UnitOfWork, as explained in this answer.
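For reference, a bare-bones sketch of that unit-of-work shape over Dapper (the names here are illustrative, not the API from the linked answer):

public class UnitOfWork : IDisposable
{
    private readonly SqlConnection _connection;
    private SqlTransaction _transaction;

    public UnitOfWork(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    // All Dapper calls go through here so they share the one transaction.
    public int Execute(string sql, object param = null)
    {
        return _connection.Execute(sql, param, _transaction);
    }

    public void Commit()
    {
        _transaction.Commit();
        _transaction = null;
    }

    public void Dispose()
    {
        _transaction?.Rollback(); // anything not committed is rolled back
        _transaction?.Dispose();
        _connection.Dispose();
    }
}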
Daniel's answer worked as expected for me. For completeness, here's a snippet that demonstrates commit and rollback using a transaction scope and Dapper:
using System.Transactions;

// _sqlConnection has been opened elsewhere in preceding code
using (var transactionScope = new TransactionScope())
{
    try
    {
        long result = _sqlConnection.ExecuteScalar<long>(sqlString, new { Param1 = 1, Param2 = "string" });
        transactionScope.Complete();
    }
    catch (Exception exception)
    {
        // Logger initialized elsewhere in code
        _logger.Error(exception, $"Error encountered whilst executing SQL: {sqlString}, Message: {exception.Message}");
        // re-throw to let the caller know
        throw;
    }
} // This is where Dispose is called
I have a long-running service with several threads calling the following method hundreds of times per second:
void TheMethod()
{
    using (var c = new SqlConnection("..."))
    {
        c.Open();
        var ret1 = PrepareAndExecuteStatement1(c, args1);
        // some code
        var ret2 = PrepareAndExecuteStatement2(c, args2);
        // more code
    }
}
PrepareAndExecuteStatement is something like this:
// Pseudocode: stands in for PrepareAndExecuteStatement1, PrepareAndExecuteStatement2, ...
object PrepareAndExecuteStatementN(SqlConnection c, object args)
{
    var cmd = new SqlCommand("query", c);
    cmd.Parameters.Add("@param", type);
    cmd.Prepare();
    cmd.Parameters["@param"].Value = args;
    return cmd.execute().read().etc();
}
I want to reuse the prepared statements, preparing once per connection and executing them until the connection breaks. I hope this will improve performance.
Can I use the built-in connection pool to achieve this? Ideally, every time a new connection is made, all statements would be prepared automatically, and I need access to the SqlCommand objects of these statements.
Suggest taking a slightly modified approach: close your connection immediately after use. You can certainly re-use your SqlConnection.
The work being done at // some code may take a long time. Are you interacting with other network or disk resources, or spending any amount of time on calculations? Could you ever, in the future, need to do so? Perhaps the intervals between executing statements are, or could be, so long that you'd want to reopen the connection. Regardless, the connection should be opened late and closed early.
using (var c = new SqlConnection("..."))
{
    c.Open();
    PrepareAndExecuteStatement1(c, args);
    c.Close();
    // some code

    c.Open();
    PrepareAndExecuteStatement2(c, args);
    c.Close();
    // more code
}
Open late, close early, as recommended in MSDN Magazine by John Papa.
Obviously we've now got a bunch of code duplication here. Consider refactoring your Prepare...() method to perform the opening and closing operations.
Perhaps you'd consider something like this:
using (var c = new SqlConnection("..."))
{
    c.Open(); // Prepare() requires an open connection
    var cmd1 = PrepareAndCreateCommand(c, args);
    // some code
    var cmd2 = PrepareAndCreateCommand(c, args);
    cmd1.ExecuteNonQuery();
    cmd2.ExecuteNonQuery();
    c.Close();
    // more code
}