Disposing of a SqlConnection manually in xUnit tests (C#)

I am writing SQL Server integration tests using xUnit.
My tests look like this:
[Fact]
public void Test()
{
    using (IDbConnection connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        connection.Query(@$"...");
        // DO SOMETHING
    }
}
Since I am planning to create multiple tests in the same class, I was trying to avoid creating a new SqlConnection every time. I know that xUnit creates a new instance of the test class for every test it runs.
This means I could create the SqlConnection in the constructor, since a new one will be created for every test anyway.
The problem is disposing of it. Is it good practice, or do you see any problem with disposing of the SqlConnection manually at the end of each test?
Such as:
public class MyTestClass
{
    private const string _connectionString = "...";
    private readonly IDbConnection _connection;

    public MyTestClass()
    {
        _connection = new SqlConnection(_connectionString);
    }

    [Fact]
    public void Test()
    {
        _connection.Open();
        _connection.Query(@$"...");
        // DO SOMETHING
        _connection.Dispose();
    }
}

I was trying to avoid creating a new SqlConnection every time.
It's okay to create and dispose a new SqlConnection over and over. The connection instance is disposed, but the underlying connection is pooled. Under the hood it may actually use the same connection repeatedly without closing it, and that's okay.
See SQL Server Connection Pooling.
So if that's your concern, don't worry. What you are doing in the first example is completely harmless. And however your code is structured, if you're creating a new connection when you need it, using it, and disposing it, that's fine. You can do that all day long.
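To illustrate (a sketch only; the loop is purely for demonstration and _connectionString is the question's field), repeatedly creating and disposing connections with the same connection string mostly just checks a physical connection out of the pool and puts it back:
// Each using block borrows a pooled physical connection and returns it on Dispose,
// so this loop does not open 100 separate connections to SQL Server.
for (var i = 0; i < 100; i++)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();   // checks a connection out of the pool
        // ... run a command ...
    }                        // Dispose returns it to the pool
}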
If what concerns you is the repetitive code, I find this helpful. I often have a class like this in my unit tests:
static class SqlExecution
{
    public static void ExecuteSql(string sql)
    {
        using (var connection = new SqlConnection(GetConnectionString()))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }

    public static T ExecuteScalar<T>(string sql)
    {
        using (var connection = new SqlConnection(GetConnectionString()))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            return (T)command.ExecuteScalar();
        }
    }

    public static string GetConnectionString()
    {
        // This may vary - return your test connection string here.
        // (Placeholder so the snippet compiles.)
        throw new NotImplementedException();
    }
}
How you obtain the connection string may vary, which is why I left that method empty. It might be a configuration file, or it could be hard-coded.
This covers the common scenarios of executing something and retrieving a value, and allows me to keep all the repetitive database code out of my tests. Within the test is just one line:
SqlExecution.ExecuteSql("UPDATE WHATEVER");
If I find myself writing more and more repetitive code, I might add test-specific methods. For example, the test class might have a method that executes a certain stored procedure, or formats arguments into a query and executes it.
You can also use dependency injection with xUnit, so that dependencies are injected into the test class like in any other class; there are lots of answers about this, and it may even be part of why you're using xUnit. You could write something like the helper above as an instance class and inject it, but I doubt it's worth the effort - something simpler will usually get the job done.
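If you do go the fixture route, a minimal sketch might look like this (class and member names are illustrative, not from the question); xUnit constructs the fixture once for the test class and disposes of it when the class's tests are finished:
public class DatabaseFixture : IDisposable
{
    public DatabaseFixture()
    {
        Connection = new SqlConnection("...");   // your test connection string
        Connection.Open();
    }

    public SqlConnection Connection { get; }

    public void Dispose() => Connection.Dispose();
}

public class MyDatabaseTests : IClassFixture<DatabaseFixture>
{
    private readonly DatabaseFixture _fixture;

    public MyDatabaseTests(DatabaseFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public void Test()
    {
        // use _fixture.Connection here
    }
}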
Someone is bound to say that unit tests shouldn't talk to the database. And they're mostly right. But it doesn't matter. Sometimes we have to do it anyway. Maybe the code we need to unit test is a stored procedure, and executing it from a unit test and verifying the results is the simplest way to do that.
Someone else might say that we shouldn't have logic in the database that needs testing, and I agree with them too. But we don't always have that choice. If I have to put logic in SQL I still want to test it, and writing a unit test is usually the easiest way. (We can call it an "integration test" if it makes anyone happy.)

There are a couple of different things at play here. First, SqlConnection uses connection pooling, so it should be fine to create and dispose the connections within a single test.
Having said that, disposing of the connection on a per-test-class basis would also work: xUnit disposes of test classes that implement IDisposable. So either approach works; I'd suggest it's cleaner to create and dispose the connection within the test.
public class MyTestClass : IDisposable
{
    const string ConnectionStr = "";

    SqlConnection conn;

    public MyTestClass()
    {
        this.conn = new SqlConnection(ConnectionStr);
    }

    public void Dispose()
    {
        this.conn?.Dispose();
        this.conn = null;
    }

    [Fact]
    public void Test()
    {
        using (var conn = new SqlConnection(ConnectionStr))
        {
        }
    }
}

Related

Unit testing with manual transactions and layered transactions

Due to a few restrictions I can't use Entity Framework, so I need to use SQL connections, commands, and transactions manually.
While writing unit tests for the methods calling these data-layer operations, I stumbled upon a few problems.
The unit tests NEED to run inside a transaction, because most of the operations change data by their nature, and running them outside a transaction would change the base data itself. So I need to wrap each test in a transaction (with no commit fired at the end).
Now I have two different variants of how these BL methods work: a few have transactions inside of them, while others have no transactions at all. Both variants cause problems.
Layered transaction: here I get errors that the DTC cancelled the distributed transaction due to timeouts (although the timeout is set to 15 minutes and the test has been running for only 2 minutes).
Only one transaction: here I get an error about the state of the transaction when I reach the "new SqlCommand" line in the called method.
My question is: what can I do to correct this and get unit testing working with manual transactions, both plain and layered?
Unit testing method example:
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        MyBLMethod();
    }
}
Example of a method that uses a transaction (very simplified):
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        SqlCommand command = new SqlCommand();
        command.Connection = connection;
        command.Transaction = transaction;
        command.CommandTimeout = 900; // Wait 15 minutes before a timeout
        command.CommandText = "INSERT ......";
        command.ExecuteNonQuery();
        // Following commands
        // ...
        transaction.Commit();
    }
}
Example of a method that does not use a transaction:
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand();
    command.Connection = connection;
    command.CommandTimeout = 900; // Wait 15 minutes before a timeout
    command.CommandText = "INSERT ......";
    command.ExecuteNonQuery();
}
On the face of it, you have a few options, depending upon what you want to test and your ability to spend money / change your code base.
At the moment, you're effectively writing integration tests: if the database isn't available then your tests will fail. This means the tests can be slow, but on the plus side, if they pass you can be pretty confident that your code hits the database correctly.
If you don’t mind hitting the database, then the minimum impact to changing your code / spending money would be for you to allow the transactions to complete and verify them in the database. You can either do this by taking database snapshots and resetting the database each test run, or by having a dedicated test database and writing your tests in such a way that they can safely hit the database over and over again and then verified. So for example, you can insert a record with an incremented id, update the record, and then verify that it can be read. You may have more unwinding to do if there are errors, but if you’re not modifying the data access code or the database structure that often then this shouldn’t be too much of an issue.
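As a rough sketch of the "safe to run over and over" idea (written xUnit-style to match the rest of this page; the Widget table and its uniqueidentifier Id column are made up for illustration):
[Fact]
public void InsertAndUpdateWidget_RoundTrips()
{
    var id = Guid.NewGuid();   // unique per run, so reruns don't collide
    using (var connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
    {
        connection.Open();
        using (var insert = new SqlCommand(
            "INSERT INTO Widget (Id, Name) VALUES (@id, 'before')", connection))
        {
            insert.Parameters.AddWithValue("@id", id);
            insert.ExecuteNonQuery();
        }
        using (var update = new SqlCommand(
            "UPDATE Widget SET Name = 'after' WHERE Id = @id", connection))
        {
            update.Parameters.AddWithValue("@id", id);
            update.ExecuteNonQuery();
        }
        using (var select = new SqlCommand(
            "SELECT Name FROM Widget WHERE Id = @id", connection))
        {
            select.Parameters.AddWithValue("@id", id);
            Assert.Equal("after", (string)select.ExecuteScalar());
        }
    }
}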
If you're able to spend some money and you want to actually turn your tests into unit tests, so that they don't hit the database, then you should consider looking into TypeMock. It's a very powerful mocking framework that can do some pretty scary stuff. I believe it uses the profiling API to intercept calls, rather than the approach used by frameworks like Moq. There's an example of using Typemock to mock a SqlConnection here.
If you don't have money to spend but are able to change your code, and don't mind continuing to rely on the database, then you need some way to share your database connection between your test code and your data access methods. Two approaches spring to mind: either inject the connection information into the class, or make it available by injecting a factory that gives access to the connection information (in which case you can inject a mock of the factory during testing that returns the connection you want).
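A rough sketch of the factory idea (the interface and class names here are mine, not from the question):
// The business-layer code asks the factory for a connection instead of constructing
// SqlConnection itself; a test can supply a factory that always hands back the
// connection (and transaction) owned by the test.
public interface IConnectionFactory
{
    SqlConnection GetOpenConnection();
}

public class DefaultConnectionFactory : IConnectionFactory
{
    public SqlConnection GetOpenConnection()
    {
        var connection = new SqlConnection(Properties.Settings.Default.ConnectionString);
        connection.Open();
        return connection;
    }
}

public class TestConnectionFactory : IConnectionFactory
{
    private readonly SqlConnection _connection;

    public TestConnectionFactory(SqlConnection connection)
    {
        _connection = connection;
    }

    // Always return the connection owned by the test, so the test's transaction
    // can be rolled back at the end.
    public SqlConnection GetOpenConnection()
    {
        return _connection;
    }
}
In the test double you would also keep the business code from disposing of the shared connection (for example by returning a wrapper), but that detail is omitted here.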
If you go with the above approach, rather than directly injecting SqlConnection, consider injecting a wrapper class that is also responsible for the transaction. Something like:
public class MySqlWrapper : IDisposable {
    public SqlConnection Connection { get; set; }
    public SqlTransaction Transaction { get; set; }

    int _transactionCount = 0;

    public void BeginTransaction() {
        _transactionCount++;
        if (_transactionCount == 1) {
            Transaction = Connection.BeginTransaction();
        }
    }

    public void CommitTransaction() {
        _transactionCount--;
        if (_transactionCount == 0) {
            Transaction.Commit();
            Transaction = null;
        }
        if (_transactionCount < 0) {
            throw new InvalidOperationException("Commit without Begin");
        }
    }

    public void Rollback() {
        _transactionCount = 0;
        Transaction.Rollback();
        Transaction = null;
    }

    public void Dispose() {
        if (null != Transaction) {
            Transaction.Dispose();
            Transaction = null;
        }
        Connection.Dispose();
    }
}
This will stop nested transactions from being created + committed.
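For example, with the wrapper above, the test owns the outer transaction and never commits it; the nested Begin/Commit inside the business method only adjusts the counter. (This sketch assumes the BL method has been changed to accept the wrapper, per the injection suggestion above; MyBLMethod is the question's method.)
using (var wrapper = new MySqlWrapper
{
    Connection = new SqlConnection(Properties.Settings.Default.ConnectionString)
})
{
    wrapper.Connection.Open();
    wrapper.BeginTransaction();   // count = 1, the real transaction starts

    MyBLMethod(wrapper);          // internally calls BeginTransaction/CommitTransaction,
                                  // count goes 2 -> 1, nothing is actually committed

    wrapper.Rollback();           // the test never commits, so the data is unchanged
}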
If you're more willing to restructure your code, then you might want to wrap your data access code in a more mockable way. For example, you could push your core database access functionality into another class. Depending on what you're doing you'll need to expand on it, but you might end up with something like this:
public interface IMyQuery {
    string GetCommand();
}

public class MyInsert : IMyQuery {
    public string GetCommand() {
        return "INSERT ...";
    }
}

class DBNonQueryRunner {
    public void RunQuery(IMyQuery query) {
        using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString)) {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction()) {
                SqlCommand command = new SqlCommand();
                command.Connection = connection;
                command.Transaction = transaction;
                command.CommandTimeout = 900; // Wait 15 minutes before a timeout
                command.CommandText = query.GetCommand();
                command.ExecuteNonQuery();
                transaction.Commit();
            }
        }
    }
}
This allows you to unit test more of your logic, like the command-generation code, without having to worry about hitting the database, and you can test your core data access code (the runner) against the database once, rather than for every command you want to run against it. I would still write integration tests for all data access code, but I'd only tend to run them whilst actually working on that section of code (to ensure column names etc. have been specified correctly).
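With that split in place, the command-generation logic can be unit tested without a database at all. A minimal sketch (written xUnit-style to match the rest of this page):
[Fact]
public void MyInsert_ProducesInsertCommand()
{
    IMyQuery query = new MyInsert();
    Assert.StartsWith("INSERT", query.GetCommand());
}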

using statements

Today I noticed a snippet of code that looks like the one below:
public class Test
{
    SqlConnection connection1 =
        new SqlConnection(ConfigurationManager.ConnectionStrings["c1"].ToString());
    SqlConnection connection2 =
        new SqlConnection(ConfigurationManager.ConnectionStrings["c2"].ToString());

    public void Method1()
    {
        using (connection1)
        {
            connection1.Open();
            using (SqlCommand newSqlCommand = new SqlCommand("text", connection2))
            {
                // do something
            }
        }
    }

    public void Method2()
    {
        using (connection1)
        {
            // do something
        }
    }
}
I am just wondering why would anyone want to open the connection when creating the class and not when calling the corresponding methods inside the class?
EDIT: I should probably have posted the whole code instead. I do see where they are opening connection1, but then they are instantiating a SqlCommand with a different SqlConnection (connection2) which has not been opened anywhere. What am I missing here?
Thanks,
This line only initializes a connection object, which can later be used to open a connection to a database server.
SqlConnection connection1 =
    new SqlConnection(ConfigurationManager.ConnectionStrings["c1"].ToString());
What I know is that using disposes of an object (and, in the case of connection objects, automatically closes it) at the end of its scope, so I wouldn't recommend this usage: it is problematic with object types (other than Connection) that can't be used after Dispose has been called.
connection1 = new SqlConnection(...) does not really open the connection. It just creates the connection object.
You have to call connection1.Open(); to actually open it. You can do this inside using statement block.
Refer to this MSDN page for more details.
The code is either:
- written this way to enforce that only a single method call is ever performed on the class,
- unintentionally confusing, throwing an ObjectDisposedException if you call two methods, or
- relying on connection re-initialization code in the blocks you've omitted.
The code is dangerous.
If you call Method1 then Method2 (or visa versa) you will get an error (connection string not initialized).
The error occurs because the using statement closes the connection AND disposes of the object. I double-checked what happens when Dispose is called: the connection string is cleared (and possibly some other things I didn't notice).
You don't want to re-use a disposed object.
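A quick sketch of that failure mode (the connection string is just a placeholder):
var conn = new SqlConnection("Data Source=.;Initial Catalog=Test;Integrated Security=True");
using (conn)
{
    conn.Open();
    // do something
}                // Dispose() closes the connection and clears its ConnectionString

conn.Open();     // throws InvalidOperationException:
                 // "The ConnectionString property has not been initialized."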
The cost of instantiating a new connection object is insignificant, so I would just create the object where needed, maybe with a little factory method to reduce code duplication. Change it to something like:
private static SqlConnection GetSqlConnection()
{
    return new SqlConnection(ConfigurationManager.ConnectionStrings["c1"].ToString());
}

private void Method1()
{
    using (var conn = GetSqlConnection())
    {
        conn.Open();
        // do stuff...
    }
}

private void Method2()
{
    using (var conn = GetSqlConnection())
    {
        conn.Open();
        // do other stuff...
    }
}
Of course there are many different ways of approaching this problem, this is just one and indeed quite a simplistic one...but it is a good starting point and is safe :-)

Enterprise library manage connections

I am building an application with c# and I decided to use the Enterprise Library for the DAL (SQL Server).
I don't remember where, but I had read an article about EntLib which said that the connections are closed automatically.
Is it true?
If not, what is the best approach of managing the connections in the middle layer?
Open and close in each method?
Below is a sample method showing how I am using EntLib:
public DataSet ReturnSomething()
{
    var sqlStr = "select something";
    DbCommand cmd = db.GetSqlStringCommand(sqlStr);
    db.AddInParameter(cmd, "@param1", SqlDbType.BigInt, hotelID);
    db.AddInParameter(cmd, "@param2", SqlDbType.NVarChar, date);
    return db.ExecuteDataSet(cmd);
}
Thanks in advance.
The ExecuteDataSet method returns a DataSet object that contains all the data, giving you your own local copy. The call to ExecuteDataSet opens a connection, populates a DataSet, and closes the connection before returning the result.
For more info:
http://msdn.microsoft.com/en-us/library/ff648933.aspx
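In other words, there is no connection handling for you to write. Assuming the EntLib 4/5-style API (and borrowing hotelID from the question's code), the usual pattern looks roughly like this:
// Sketch only (Enterprise Library Data Access Application Block):
// no explicit Open/Close - ExecuteDataSet manages the connection internally.
Database db = DatabaseFactory.CreateDatabase();            // default database from config
DbCommand cmd = db.GetSqlStringCommand("select something");
db.AddInParameter(cmd, "@param1", DbType.Int64, hotelID);
DataSet result = db.ExecuteDataSet(cmd);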
I think you should have something like a static class used as a Façade which would provide the correct connection for your library subsystems.
public static class SystemFacade {
    // Used as a subsystem to which the connections are provided.
    private static readonly SystemFactory _systemFactory = new SystemFactory();

    public static IList<Customer> GetCustomers() {
        using (var connection = OpenConnection(nameOfEntLibNamedConnection))
            return _systemFactory.GetCustomers(connection);
    }

    public static DbConnection OpenConnection(string connectionName) {
        var connection = ...; // Read the EntLib config and create a new connection here,
                              // and assure it is opened before you return it.
        if (connection.State == ConnectionState.Closed)
            connection.Open();
        return connection;
    }
}

internal class SystemFactory {
    internal IList<Customer> GetCustomers(DbConnection connection) {
        // Place code to get customers here.
    }
}
And using this code:
public class MyPageClass {
    private void DisplayCustomers() {
        GridView.DataSource = SystemFacade.GetCustomers();
    }
}
In this code sample, you have a static class that provides the functionality of a class library. The Façade class provides the user with all possible actions, without the headache of deciding which connection to use, etc. All you want is the list of customers out of the underlying datastore, and a call to GetCustomers will do it.
The Façade is an "intelligent" class that knows where to get the information from, so it creates the connection accordingly and requests the customers from the subsystem factory. The factory does what it is asked: it takes the connection it is given and retrieves the customers without asking any further questions.
Does this help?
Yes, EntLib closes connections for you (actually it releases them back into the connection pool). That is the main reason why we originally started to use EntLib.
However, for all new development we have now gone on to use Entity Framework, we find that much more productive.

Which pattern is better for SqlConnection object?

Which pattern is better for SqlConnection object? Which is better in performance?
Do you offer any other pattern?
class DataAccess1 : IDisposable
{
    private SqlConnection connection;

    public DataAccess1(string connectionString)
    {
        connection = new SqlConnection(connectionString);
    }

    public void Execute(string query)
    {
        using (SqlCommand command = connection.CreateCommand())
        {
            command.CommandText = query;
            command.CommandType = CommandType.Text;
            // ...
            command.Connection.Open();
            command.ExecuteNonQuery();
            command.Connection.Close();
        }
    }

    public void Dispose()
    {
        connection.Dispose();
    }
}
VS
class DataAccess2 : IDisposable
{
    private string connectionString;

    public DataAccess2(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void Execute(string query)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlCommand command = connection.CreateCommand();
            command.CommandText = query;
            command.CommandType = CommandType.Text;
            // ...
            command.Connection.Open();
            command.ExecuteNonQuery();
            command.Connection.Close();
        }
    }

    public void Dispose()
    {
    }
}
There's no real way to answer this question. The short, canonical answer is that the connection should stay alive for the lifetime of your unit of work. Because we have no way of knowing how DataAccess is used (does it exist for the lifetime of your application, or do you instantiate it and dispose it whenever you do something?), it's impossible to give a concrete answer.
That being said, I would recommend the first pattern, but instantiate and dispose of your DataAccess object as needed; don't keep it around longer than necessary.
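In other words, the calling side would look something like this (a sketch; the connection string and queries are placeholders):
// Create the DataAccess object for one unit of work, use it, and dispose of it;
// don't cache it at application scope.
using (var dataAccess = new DataAccess1("Data Source=.;Initial Catalog=Foo;Integrated Security=True"))
{
    dataAccess.Execute("UPDATE ...");
    dataAccess.Execute("DELETE ...");
}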
Suggest going with DataAccess2. It's a personal preference though. Some might even suggest your class be static. It'd be difficult to say that one is more performant than the other. You're on the path of IDisposable, which is great.
I'd be happy to read and maintain both styles shown above in your question.
Consider having your DAL be able to read the connection string from a .config as well, rather than exclusively allowing the value to be passed in the constructor.
public DataAccess2(string connStr)
{
    this.connectionString = connStr;
}

public DataAccess2()
{
    this.connectionString =
        ConfigurationManager.ConnectionStrings["foo"].ConnectionString;
}
Consider wrapping your SqlCommand in a using as well.
using (var conn = new SqlConnection(connectionString))
{
    using (var cmd = conn.CreateCommand())
    {
    }
}
I think it depends on how your DataAccess object is intended to be used; if it's used within a 'using' block, then the connection is guaranteed to be disposed of after it's done.
But in general I prefer the second pattern, as the SqlConnection is created and disposed of within the Execute method, so it's less likely to be left open when you forget to dispose of your DataAccess object.
Considering that SQL connections can be a scarce resource, I think every attempt should be made to ensure they're not wasted.
The first will result in errors if you make concurrent calls.
The second will ensure you use a clean connection for each command resulting in more connections being made.
I agree with the statements above that it depends on the usage scenario. To get around the problem with the first pattern, I have a wrapper that needs to use such a pattern, so I set a boolean field to indicate that a command is already executing on the connection and then "queue" the next command for execution.
There will of course be situations where you may prefer to use multiple connections ...
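For what it's worth, here is a very rough sketch of that kind of guard. It is my illustration, not the poster's actual wrapper, and it uses a lock rather than a busy flag plus a queue, which gives the same one-command-at-a-time behaviour:
class SerializedDataAccess : IDisposable
{
    private readonly SqlConnection connection;
    private readonly object gate = new object();

    public SerializedDataAccess(string connectionString)
    {
        connection = new SqlConnection(connectionString);
        connection.Open();
    }

    public void Execute(string query)
    {
        // Only one command may use the shared connection at a time;
        // concurrent callers simply wait here instead of failing.
        lock (gate)
        {
            using (var command = connection.CreateCommand())
            {
                command.CommandText = query;
                command.ExecuteNonQuery();
            }
        }
    }

    public void Dispose()
    {
        connection.Dispose();
    }
}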

Is Mocking able to replace functionality wrapped inside a method?

I'm trying to find a way to simulate access to a database without actually accessing it... That probably sounds quite crazy, but it's not.
Here is an example of a method I would like to test:
public IDevice GetDeviceFromRepository(string name)
{
    IDevice device = null;
    IDbConnection connection = new SqlConnection(ConnectionString);
    connection.Open();
    try
    {
        IDbCommand command = connection.CreateCommand();
        command.CommandText = string.Format("SELECT DEVICE_ID, DEVICE_NAME FROM DEVICE WHERE DEVICE_NAME='{0}'", name);
        IDataReader dataReader = command.ExecuteReader();
        if (dataReader.NextResult())
        {
            device = new Device(dataReader.GetInt32(0), dataReader.GetString(1));
        }
    }
    finally
    {
        connection.Close();
    }
    return device;
}
I intend to mock IDataReader so I can control what's being read. Something like this (using the Moq framework):
[TestMethod()]
public void GetDeviceFromRepositoryTest()
{
    Mock<IDataReader> dataReaderMock = new Mock<IDataReader>();
    dataReaderMock.Setup(x => x.NextResult()).Returns(true);
    dataReaderMock.Setup(x => x.GetInt32(0)).Returns(000);
    dataReaderMock.Setup(x => x.GetString(1)).Returns("myName");

    Mock<IDbCommand> commandMock = new Mock<IDbCommand>();
    commandMock.Setup(x => x.ExecuteReader()).Returns(dataReaderMock.Object);

    Mock<RemoveDeviceManager> removeMock = new Mock<RemoveDeviceManager>();
    removeMock.Setup()

    RemoveDeviceManager target = new RemoveDeviceManager(new Device(000, "myName"));
    string name = string.Empty;
    IDevice expected = new Device(000, "myName"); // TODO: Initialize to an appropriate value
    IDevice actual;
    actual = target.GetDeviceFromRepository(name);
    Assert.AreEqual(expected.SerialNumber, actual.SerialNumber);
    Assert.AreEqual(expected.Name, actual.Name);
}
My question is whether I can force the GetDeviceFromRepository method to use the mocked IDataReader.
Although you currently use Moq, I think the functionality you're looking for cannot be achieved without dependency injection unless you use Typemock Isolator (disclaimer - I worked at Typemock).
Isolator has a feature called "future objects" that enables replacing a future instantiation of an object with a previously created fake object:
// Create fake (stub/mock, whatever) objects
var fakeSqlConnection = Isolate.Fake.Instance<SqlConnection>();
var fakeCommand = Isolate.Fake.Instance<SqlCommand>();
Isolate.WhenCalled(() => fakeSqlConnection.CreateCommand()).WillReturn(fakeCommand);
var fakeReader = Isolate.Fake.Instance<SqlDataReader>();
Isolate.WhenCalled(() => fakeCommand.ExecuteReader()).WillReturn(fakeReader);
// Next time SQLConnection is instantiated replace with our fake
Isolate.Swap.NextInstance<SqlConnection>().With(fakeSqlConnection);
I would think the problem here is your direct dependency, ultimately, on SqlConnection. If you used some variant of dependency injection, such that your code gets access to the IDbCommand without knowing how it gets constructed, you would be able to inject your mock without much of a hassle.
I understand this doesn't quite answer your question, but in the long run doing it as described will give you far better testability.
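A sketch of what that injection might look like (the factory interface and class names here are mine, not from the question):
// The repository depends on an abstraction instead of newing up SqlConnection,
// so a Moq-created IDbConnection/IDbCommand/IDataReader chain can be passed in.
public interface IDbConnectionFactory
{
    IDbConnection CreateConnection();
}

public class DeviceRepository
{
    private readonly IDbConnectionFactory _connectionFactory;

    public DeviceRepository(IDbConnectionFactory connectionFactory)
    {
        _connectionFactory = connectionFactory;
    }

    public IDevice GetDeviceFromRepository(string name)
    {
        IDevice device = null;
        using (IDbConnection connection = _connectionFactory.CreateConnection())
        {
            connection.Open();
            // ... same command/reader code as in the question ...
        }
        return device;
    }
}
In the test, a Mock<IDbConnectionFactory> returns the mocked IDbConnection, which in turn hands out the mocked command and reader.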
I agree with Frank's answer that moving toward Dependency Injection is the better long-term solution, but there are some intermediate steps you can take to move you in that direction without biting the whole thing off.
One thing you can do is move the construction of the IDbConnection into a protected virtual method inside your class:
protected virtual IDbConnection CreateConnection()
{
    return new SqlConnection(ConnectionString);
}
Then, you can create a testing version of your class like so:
public class TestingRemoteDeviceManager : RemoteDeviceManager
{
    protected override IDbConnection CreateConnection()
    {
        var mockConnection = new Mock<IDbConnection>();
        // mock out the rest of the interface, as well as the IDbCommand and
        // IDataReader interfaces
        return mockConnection.Object;
    }
}
That returns a Mock or fake IDbConnection instead of the concrete SqlConnection. That fake can then return a fake IDbCommand object, which can then return a fake IDataReader object.
The mantra is: test until fear is transformed into boredom. I think you've crossed that line here. If you take control of the data reader, then the only code you're testing is this:
device = new Device(dataReader.GetInt32(0),dataReader.GetString(1));
There is almost nothing to test here, which is good: the data layer should be simple and stupid. So don't try to unit test your data layer. If you feel you must test it, then integration-test it against a real database.
Of course, hiding your data layer behind a IDeviceRepository interface so that you can easily mock it in order to test other code is still a good idea.
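For example (a sketch; the exact member list is just a guess at what such an interface might expose):
// Code that needs devices depends on this interface and can be unit tested with a
// Moq mock, while the ADO.NET implementation is covered by integration tests.
public interface IDeviceRepository
{
    IDevice GetDevice(string name);
}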
