Testing SQL methods that don't use transactions? (C#)

Suppose I have a database method that looks like this:
public void insertRow(SqlConnection c)
{
    using (var cmd = new SqlCommand("insert into myTable values(@dt)", c))
    {
        cmd.Parameters.Add(new SqlParameter("@dt", SqlDbType.DateTime)).Value = DateTime.Now;
        cmd.ExecuteNonQuery();
    }
}
Now suppose I want to test this method. So, I write a test case that attempts to wrap this method inside a transaction, so that I can rollback the change after testing the result of the insertion:
public void testInsertRow()
{
    SqlConnection c = new SqlConnection("connection.string.here");
    c.Open();
    SqlTransaction trans = c.BeginTransaction();
    insertRow(c);
    // do something here to evaluate what happened, e.g. query the DB
    trans.Rollback();
}
This however fails to work, because:
ExecuteNonQuery requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.
Is there a way to accomplish this, without having to rewrite every single database method to accept a transaction, and then rewrite every single call to pass null into the method for the transaction?
For example, this would work:
public void insertRow(SqlConnection c, SqlTransaction t)
{
    using (var cmd = new SqlCommand("insert into myTable values(@dt)", c))
    {
        if (t != null) cmd.Transaction = t;
        cmd.Parameters.Add(new SqlParameter("@dt", SqlDbType.DateTime)).Value = DateTime.Now;
        cmd.ExecuteNonQuery();
    }
    c.Close();
}
But then, I have to either rewrite each and every call to the database method to include that null parameter, or write overloads for each and every database method that automatically pass in a null, e.g.
public void insertRow(SqlConnection c) { insertRow(c, null); }
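(As an aside, not part of the original question: C# 4's optional parameters would let a single signature cover both cases, avoiding the overload entirely:)

public void insertRow(SqlConnection c, SqlTransaction t = null) { ... } // existing callers compile unchanged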
What's the best way to allow transaction-based testing of database calls?

You can use a TransactionScope to enlist the connections in a transaction automatically:
public void testInsertRow()
{
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        SqlConnection c = new SqlConnection("connection.string.here");
        c.Open(); // connections opened inside the scope enlist in the ambient transaction
        insertRow(c);
        // do something here to evaluate what happened, e.g. query the DB
        // do not call scope.Complete(), so we get a rollback.
    }
}
Now, this will cause tests to block each other if you have multiple parallel tests running. If your database is set up to support it, you could use snapshot isolation so updates from concurrent tests won't lock each other out.
public void testInsertRow()
{
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew,
        new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
    {
        SqlConnection c = new SqlConnection("connection.string.here");
        c.Open();
        insertRow(c);
        // do something here to evaluate what happened, e.g. query the DB
        // do not call scope.Complete(), so we get a rollback.
    }
}
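Note that snapshot isolation must be enabled on the database first. A one-time setup sketch (the database name is a placeholder, and this needs ALTER DATABASE permission):

using (var setup = new SqlConnection("connection.string.here"))
{
    setup.Open();
    // Enables row versioning so snapshot readers don't block writers.
    using (var cmd = new SqlCommand(
        "ALTER DATABASE MyTestDb SET ALLOW_SNAPSHOT_ISOLATION ON", setup))
    {
        cmd.ExecuteNonQuery();
    }
}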

Related

C# Parallel.For and Oracle database access - memory exception

I am working on some code that accesses an Oracle database inside a Parallel.For loop. The loop runs for several minutes, and then fails with the error:
"Attempted to read or write protected memory. This is often an
indication that other memory is corrupt."
There is no inner exception. Inside my Parallel.For loop, I am creating and opening the database connection as local objects. My code looks like this:
static void CheckSinglePath(Path p)
{
    string sqlBase = "select * from table where hour = #HOUR#";
    Parallel.For(1, 24, i =>
    {
        DBManager localdbm = new DBManager();
        string sql = sqlBase;
        sql = sql.Replace("#HOUR#", i.ToString());
        OracleDataReader reader = localdbm.GetData(sql);
        if (reader.Read())
        {
            //do some stuff
        }
        reader.Close();
    });
}
class DBManager
{
    OracleConnection conn;
    OracleCommand cmd;
    public DBManager()
    {
        string connStr = "blahblahblah;Connection Timeout=600;";
        conn = new OracleConnection(connStr);
        conn.Open();
        cmd = conn.CreateCommand();
    }
    public OracleDataReader GetData(string sql)
    {
        cmd.CommandText = sql;
        return cmd.ExecuteReader(); //EXCEPTION HERE!
    }
}
What am I doing wrong? How can I create 24 parallel Oracle connections to process the data? I'm guessing there is some sort of race condition or memory leak that is going on here which I don't fully understand because it seems to be coming from inside the OracleConnection object. Is the database connection not threadsafe? I tried changing the connection string to use a connection pool and that didn't change anything.
Memory problems like this are almost always caused by incorrect resource usage: you never release your connections after the loop exits.
You need to implement the IDisposable interface on DBManager, and then rewrite your code with the using keyword, like this:
// dispose the connection after the command finishes
using (var localdbm = new DBManager())
{
    var sql = sqlBase;
    sql = sql.Replace("#HOUR#", i.ToString());
    using (var reader = localdbm.GetData(sql))
    {
        if (reader.Read())
        {
            //do some stuff
        }
        // no need to close the reader:
        // it is disposed at the end of the using block
    }
}
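The answer assumes DBManager implements IDisposable but doesn't show it; a minimal sketch (my addition, based on the class from the question) might look like:

class DBManager : IDisposable
{
    readonly OracleConnection conn;
    readonly OracleCommand cmd;

    public DBManager()
    {
        conn = new OracleConnection("blahblahblah;Connection Timeout=600;");
        conn.Open();
        cmd = conn.CreateCommand();
    }

    public OracleDataReader GetData(string sql)
    {
        cmd.CommandText = sql;
        return cmd.ExecuteReader();
    }

    public void Dispose()
    {
        // release the command first, then the connection
        cmd.Dispose();
        conn.Dispose();
    }
}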

Properly implement concurrency control in ASP.NET WebAPI + SQL

I'm writing a very simple web application that serves as an endpoint for uploading money transactions from customers and saving them in SQL Server DB. It accepts requests with just 2 params: userid: 'xxx', balancechange: -19.99. If the user ID exists in the app database, then the balance is changed; if not - a new row is created for this ID.
The difficult part in all this is that the number of requests is enormous, and I have to implement the app so that it works as fast as possible and resolves concurrency issues (if two requests for the same ID arrive simultaneously).
The app is an ASP.NET MVC Web API. I chose plain old ADO.NET for speed, and this is what I currently have:
private static readonly object syncLock = new object();

public void UpdateBalance(string userId, decimal balance)
{
    lock (syncLock)
    {
        using (var sqlConnection = new SqlConnection(this.connectionString))
        {
            sqlConnection.Open();
            var command = new SqlCommand($"SELECT COUNT(*) FROM Users WHERE Id = '{userId}'", sqlConnection);
            if ((int)command.ExecuteScalar() == 0)
            {
                command = new SqlCommand($"INSERT INTO Users (Id, Balance) VALUES ('{userId}', 0)", sqlConnection);
                command.ExecuteNonQuery();
            }
            command = new SqlCommand($"UPDATE Users SET Balance = Balance + {balance} WHERE Id = '{userId}'", sqlConnection);
            command.ExecuteNonQuery();
        }
    }
}
Called from a controller like this:
[HttpPost]
public IHttpActionResult UpdateBalance(string id, decimal balanceChange)
{
    UpdateBalance(id, balanceChange);
    return Ok();
}
The thing I'm concerned with is concurrency control using lock (syncLock). This would slow the app down under high load, and it doesn't allow multiple instances of the app to be deployed on different servers. What are ways to properly implement concurrency control here?
Note: I'd like to use a fast and DB-independent way of implementing concurrency control, as the current storage mechanism (SQL Server) can change in the future.
First, DB-independent code:
For this, you will want to look at DbProviderFactory. It allows you to pass in the provider name (MySql.Data.MySqlClient, System.Data.SqlClient) and then interact with your DB through the abstract classes (DbConnection, DbCommand).
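For example, a minimal sketch (assuming .NET Framework, where providers are registered in machine.config/app.config):

DbProviderFactory _factory = DbProviderFactories.GetFactory("System.Data.SqlClient");
DbConnection conn = _factory.CreateConnection(); // only abstract base classes from here on,
DbCommand cmd = _factory.CreateCommand();        // no SqlClient-specific types in your code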
Second, using transactions and parameterized queries:
When you are working with a database, you ALWAYS want your queries parameterized. If you use String.Format() or any other type of string concatenation, you open your query up to injection.
Transactions ensure all-or-nothing execution of your queries, and they can also lock down the affected data so that only queries within the transaction can access it. Transactions have two commands: Commit, which saves the changes (if any) to the DB, and Rollback, which discards any changes to the DB.
The following will assume that you already have an instance of DbProviderFactory in a class variable _factory.
public void UpdateBalance(string userId, decimal balanceChange)
{
    //since we might need to execute two queries, build the parameters in one place.
    //Each command needs its own parameter instances (a DbParameter cannot belong to
    //two collections), so this local function creates a fresh set per command.
    List<DbParameter> CreateParameters()
    {
        DbParameter userParam = _factory.CreateParameter();
        userParam.ParameterName = "@userId";
        userParam.DbType = System.Data.DbType.String; //the question's ids are strings
        userParam.Value = userId;

        DbParameter balanceChangeParam = _factory.CreateParameter();
        balanceChangeParam.ParameterName = "@balanceChange";
        balanceChangeParam.DbType = System.Data.DbType.Decimal;
        balanceChangeParam.Value = balanceChange;

        return new List<DbParameter> { userParam, balanceChangeParam };
    }

    using (DbConnection conn = _factory.CreateConnection())
    {
        conn.ConnectionString = this.connectionString; //assumes the class holds the connection string
        conn.Open(); //Need to open the connection before you start the transaction
        DbTransaction trans = conn.BeginTransaction(System.Data.IsolationLevel.Serializable);
        //IsolationLevel.Serializable holds locks on the data the transaction touches
        //until it is committed or rolled back.
        try
        {
            int changedRowCount = 0;
            //ExecuteNonQuery returns the number of affected rows; if no rows were
            //affected, a record does not exist for the userId.
            using (DbCommand cmd = conn.CreateCommand())
            {
                cmd.Transaction = trans; //Need to set the transaction on the command
                cmd.CommandText = "UPDATE Users SET Balance = Balance + @balanceChange WHERE Id = @userId";
                cmd.Parameters.AddRange(CreateParameters().ToArray());
                changedRowCount = cmd.ExecuteNonQuery();
            }
            if (changedRowCount == 0)
            {
                //If no record was affected by the previous query, insert a record
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.Transaction = trans; //Need to set the transaction on the command
                    cmd.CommandText = "INSERT INTO Users (Id, Balance) VALUES (@userId, @balanceChange)";
                    cmd.Parameters.AddRange(CreateParameters().ToArray());
                    cmd.ExecuteNonQuery();
                }
            }
            trans.Commit(); //This will persist the data to the DB.
        }
        catch
        {
            trans.Rollback(); //This discards the changes; rollback is also
            //the default action if Commit is never called.
            throw;
        }
        finally
        {
            trans.Dispose(); //Need to call dispose
        }
        //Improvement: you can remove the try-catch-finally block by wrapping
        //the conn.BeginTransaction() line in a using block. A try-catch is used here
        //so that you can more easily see what is happening with the transaction.
    }
}
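(A side note, my addition rather than the answerer's: if the DB-independence requirement is ever relaxed, SQL Server can do the update-or-insert safely in a single batch, using key-range lock hints instead of a Serializable transaction around two round trips. A sketch:)

// SQL Server-only variant: concurrent callers serialize on the one key
// rather than on everything the Serializable transaction touches.
cmd.CommandText = @"
    BEGIN TRAN;
    UPDATE Users WITH (UPDLOCK, SERIALIZABLE)
        SET Balance = Balance + @balanceChange WHERE Id = @userId;
    IF @@ROWCOUNT = 0
        INSERT INTO Users (Id, Balance) VALUES (@userId, @balanceChange);
    COMMIT;";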

Unit testing with manual transactions and layered transactions

Due to a few restrictions I can't use Entity Framework, and thus need to use SQL connections, commands, and transactions manually.
While writing unit tests for the methods calling these data layer operations, I stumbled upon a few problems.
For the unit tests I NEED to run them in a transaction, as most of the operations change data by their nature, and running them outside a transaction would change the whole base data. Thus I need to put a transaction around these (with no commit fired at the end).
Now I have two different variants of how these BL methods work.
A few have transactions themselves inside of them, while others have no transactions at all. Both of these variants cause problems.
Layered transaction: here I get errors that the DTC cancelled the distributed transaction due to timeouts (although the timeout is set to 15 minutes and it runs for only 2 minutes).
Only one transaction: here I get an error about the state of the transaction when I come to the "new SqlCommand" line in the called method.
My question is: what can I do to correct this and get unit testing working with manual normal and layered transactions?
Unit testing method example:
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        MyBLMethod();
    }
}
Example of a transaction-using method (very simplified):
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        SqlCommand command = new SqlCommand();
        command.Connection = connection;
        command.Transaction = transaction;
        command.CommandTimeout = 900; // Wait 15 minutes before a timeout
        command.CommandText = "INSERT ......";
        command.ExecuteNonQuery();
        // Following commands
        // ...
        transaction.Commit();
    }
}
Example of a non-transaction-using method:
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand();
    command.Connection = connection;
    command.CommandTimeout = 900; // Wait 15 minutes before a timeout
    command.CommandText = "INSERT ......";
    command.ExecuteNonQuery();
}
On the face of it, you have a few options, depending upon what you want to test and your ability to spend money / change your code base.
At the moment, you’re effectively writing integration tests. If the database isn’t available then your tests will fail. This means the tests can be slow, but on the plus side, if they pass you can be pretty confident that your code hits the database correctly.
If you don’t mind hitting the database, then the minimum impact to changing your code / spending money would be for you to allow the transactions to complete and verify them in the database. You can either do this by taking database snapshots and resetting the database each test run, or by having a dedicated test database and writing your tests in such a way that they can safely hit the database over and over again and then verified. So for example, you can insert a record with an incremented id, update the record, and then verify that it can be read. You may have more unwinding to do if there are errors, but if you’re not modifying the data access code or the database structure that often then this shouldn’t be too much of an issue.
If you’re able to spend some money and you want to actually turn your tests into unit tests, so that they don’t hit the database, then you should consider looking into TypeMock. It’s a very powerful mocking framework that can do some pretty scary stuff. I believe it uses the profiling API to intercept calls, rather than the approach used by frameworks like Moq. There's an example of using TypeMock to mock a SqlConnection here.
If you don’t have money to spend / you’re able to change your code and don’t mind continuing to rely on the database, then you need some way to share your database connection between your test code and your data access methods. Two approaches spring to mind: either inject the connection information into the class, or make it available by injecting a factory that gives access to the connection information (in which case you can inject a mock of the factory during testing that returns the connection you want).
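A minimal sketch of the factory idea (the names here are mine, not the answerer's):

public interface IConnectionFactory
{
    SqlConnection GetOpenConnection();
}

public class MyDataAccess
{
    private readonly IConnectionFactory _connections;

    public MyDataAccess(IConnectionFactory connections)
    {
        _connections = connections;
    }

    // In tests, inject a fake factory that always returns the
    // connection your test has already enrolled in a transaction.
}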
If you go with the above approach, rather than directly injecting SqlConnection, consider injecting a wrapper class that is also responsible for the transaction. Something like:
public class MySqlWrapper : IDisposable
{
    public SqlConnection Connection { get; set; }
    public SqlTransaction Transaction { get; set; }

    int _transactionCount = 0;

    public void BeginTransaction()
    {
        _transactionCount++;
        if (_transactionCount == 1)
        {
            Transaction = Connection.BeginTransaction();
        }
    }

    public void CommitTransaction()
    {
        _transactionCount--;
        if (_transactionCount == 0)
        {
            Transaction.Commit();
            Transaction = null;
        }
        if (_transactionCount < 0)
        {
            throw new InvalidOperationException("Commit without Begin");
        }
    }

    public void Rollback()
    {
        _transactionCount = 0;
        Transaction.Rollback();
        Transaction = null;
    }

    public void Dispose()
    {
        if (null != Transaction)
        {
            Transaction.Dispose();
            Transaction = null;
        }
        Connection.Dispose();
    }
}
This will stop nested transactions from being created and committed.
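In a test, the wrapper would then be used something like this (my sketch, assuming the BL method is changed to accept the wrapper):

using (var db = new MySqlWrapper {
    Connection = new SqlConnection(Properties.Settings.Default.ConnectionString) })
{
    db.Connection.Open();
    db.BeginTransaction();   // outermost transaction, owned by the test

    MyBLMethod(db);          // any Begin/Commit inside just nests

    db.Rollback();           // discard everything the test changed
}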
If you’re more willing to restructure your code, then you might want to wrap your data access code in a more mockable way. So, for example, you could push your core database access functionality into another class. Depending on what you’re doing you’ll need to expand on it; however, you might end up with something like this:
public interface IMyQuery
{
    string GetCommand();
}

public class MyInsert : IMyQuery
{
    public string GetCommand()
    {
        return "INSERT ...";
    }
}

class DBNonQueryRunner
{
    public void RunQuery(IMyQuery query)
    {
        using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
        {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction())
            {
                SqlCommand command = new SqlCommand();
                command.Connection = connection;
                command.Transaction = transaction;
                command.CommandTimeout = 900; // Wait 15 minutes before a timeout
                command.CommandText = query.GetCommand();
                command.ExecuteNonQuery();
                transaction.Commit();
            }
        }
    }
}
This allows you to unit test more of your logic, like the command generation code, without having to worry about actually hitting the database, and you can test your core data access code (the runner) against the database once, rather than for every command you want to run against it. I would still write integration tests for all data access code, but I’d only tend to run them whilst actually working on that section of code (to ensure column names etc. have been specified correctly).
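For instance, the command generation can now be tested without any database at all (a hypothetical NUnit-style test):

[Test]
public void MyInsert_GeneratesExpectedSql()
{
    // no connection, no transaction: pure logic under test
    IMyQuery query = new MyInsert();
    Assert.AreEqual("INSERT ...", query.GetCommand());
}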

Inserting record causes lock

When I do an insert with the SQL Anywhere 16 .NET provider, it creates a shared lock on the table, even with a commit afterwards. How do I prevent it? (Or what am I doing wrong?)
DbCommand command = new SACommand();
command.CommandTimeout = this.Timeout;
bool mustCloseConnection = false;
PrepareCommand(command, connection, null, commandType, commandText, commandParameters, ref mustCloseConnection);
int num2 = command.ExecuteNonQuery();
command.Parameters.Clear();
if (mustCloseConnection)
{
    connection.Close();
}
private void PrepareCommand(IDbCommand command, IDbConnection connection, IDbTransaction transaction, CommandType commandType, string commandText, IDataParameter[] commandParameters, ref bool mustCloseConnection)
{
    if (command == null)
    {
        throw new ArgumentNullException("command");
    }
    if ((commandText == null) || (commandText.Length == 0))
    {
        throw new ArgumentNullException("commandText");
    }
    if (connection.State != ConnectionState.Open)
    {
        connection.Open();
        mustCloseConnection = true;
    }
    else
    {
        mustCloseConnection = false;
    }
    command.Connection = connection;
    command.CommandText = commandText;
    command.CommandTimeout = this.Timeout;
    if (transaction != null)
    {
        if (transaction.Connection == null)
        {
            throw new ArgumentException("The transaction was rolled back or committed; please provide an open transaction.", "transaction");
        }
        command.Transaction = transaction;
    }
    command.CommandType = commandType;
    if (commandParameters != null)
    {
        AttachParameters(command, commandParameters);
    }
}
OK, let's check some documentation:
SQL Anywhere® Server - Programming > Using SQL in Applications > Controlling transactions in applications > Cursors and transactions
In general, a cursor closes when a COMMIT is performed. There are two exceptions to this behavior:
The close_on_endtrans database option is set to Off.
A cursor is opened WITH HOLD, which is the default with Open Client and JDBC.
If either of these two cases is true, the cursor remains open on a COMMIT.
And this one:
http://infocenter.sybase.com/help/topic/com.sybase.help.sqlanywhere.12.0.1/pdf/dbprogramming12.pdf
Cursor behavior when opening cursors
You can configure the following aspects of cursor behavior when you open the cursor:
● Isolation level You can explicitly set the isolation level of operations on a cursor to be different
from the current isolation level of the transaction. To do this, set the isolation_level option.
● Holding By default, cursors in embedded SQL close at the end of a transaction. Opening a cursor
WITH HOLD allows you to keep it open until the end of a connection, or until you explicitly close it.
ADO.NET, ODBC, JDBC, and Open Client leave cursors open at the end of transactions by default
There may be more recent docs, but I don't think anything has changed.
I'm almost sure you have a WITH HOLD cursor, so even after COMMIT it remains open.
Actually, you already have that answer, just without the explanation of why you should close the connection.
I hope this helps.
You should dispose your connection and command objects. Call Dispose, or wrap them in using statements like this (code just typed as an example):
using (IDbConnection con = new SqlConnection(cs))
{
    using (IDbCommand com = con.CreateCommand())
    {
        // do your sql stuff here
    }
    // no separate con.Close() needed: Dispose closes the connection
}
I've had such errors with MSSQL because I hadn't disposed my command object.
If you are 100% sure all your connections, commands, transactions, etc. are getting gracefully disposed, then you might want to check how you are instantiating your transactions.
The TransactionScope default constructor was badly designed by Microsoft, and sooner or later it causes locks (similar issues could apply to other transaction classes). The default constructor uses the Serializable isolation level, which is prone to cause locks.
The issue is even worse: should you instantiate a TransactionScope with a better isolation level (for example, ReadCommitted), then in the event that your SQL operation fails and performs a rollback, it will internally change the isolation level back to Serializable, which will be prone to cause locks for any other command using the same transaction instance.
This was reported as a bug to Microsoft, and MS simply replied with the typical "Well, if this is a bug, we can't fix it because it could affect systems out there, and why do you even create TransactionScope instances using the default constructor? You shouldn't ever do that" excuse.
The only safe way to use the TransactionScope class is to ALWAYS, EACH TIME, instantiate it with an explicitly chosen isolation level that doesn't cause locks (the usual choice is ReadCommitted, but you could use others).
You can either use a static builder, as proposed here: http://blogs.msdn.com/b/dbrowne/archive/2010/06/03/using-new-transactionscope-considered-harmful.aspx?Redirected=true
public class TransactionUtils {
    public static TransactionScope CreateTransactionScope()
    {
        var transactionOptions = new TransactionOptions();
        transactionOptions.IsolationLevel = IsolationLevel.ReadCommitted;
        transactionOptions.Timeout = TransactionManager.MaximumTimeout;
        return new TransactionScope(TransactionScopeOption.Required, transactionOptions);
    }
}
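Call sites then look like this (a sketch):

using (var scope = TransactionUtils.CreateTransactionScope())
{
    // ... open connections and run commands here ...
    scope.Complete(); // omit this call to roll back
}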
Or using a custom wrapper class:
namespace System.Transactions {
    /**************************************************************************************************************
     * IMPORTANT: This class must ALWAYS be used instead of the regular "TransactionScope" class. This class
     * enforces having transactions that read "only committed" data.
     *
     * This is because the implementation of TransactionScope is faulty (wrong design) and it causes issues
     * by changing the connections available in the connection pool. To read more, check this link:
     * http://blogs.msdn.com/b/dbrowne/archive/2010/05/21/using-new-transactionscope-considered-harmful.aspx
     *
     * The namespace was set to "System.Transactions" in order to provide ease of use when updating legacy code
     * that was using the old class.
     **************************************************************************************************************/
    public class SafeTransactionScope : IDisposable {
        private TransactionScope _transactionScope;
        private bool _disposed;

        #region Constructors
        public SafeTransactionScope()
            : this(TransactionManager.MaximumTimeout) {
        }

        public SafeTransactionScope(TimeSpan scopeTimeout)
            : this(TransactionScopeOption.Required, scopeTimeout) {
        }

        public SafeTransactionScope(TransactionScopeOption scopeOption)
            : this(scopeOption, TransactionManager.MaximumTimeout) {
        }

        public SafeTransactionScope(TransactionScopeOption scopeOption, TimeSpan scopeTimeout) {
            this._disposed = false;
            this._transactionScope = CreateTransactionScope(scopeOption, scopeTimeout);
        }
        #endregion

        #region Disposing methods
        public void Dispose() {
            Dispose(true);
            // Use SuppressFinalize in case a subclass
            // of this type implements a finalizer.
            GC.SuppressFinalize(this);
        }

        private void Dispose(bool disposing) {
            if(!this._disposed) {
                if(disposing) {
                    if(this._transactionScope != null) {
                        this._transactionScope.Dispose();
                        this._transactionScope = null;
                    }
                }
                // Indicate that the instance has been disposed.
                this._disposed = true;
            }
        }
        #endregion

        public void Complete() {
            if(this._disposed) {
                throw new ObjectDisposedException("SafeTransactionScope");
            }
            if(this._transactionScope == null) {
                // This should never happen
                throw new ObjectDisposedException("SafeTransactionScope._transactionScope");
            }
            this._transactionScope.Complete();
        }

        private static TransactionScope CreateTransactionScope(TransactionScopeOption scopeOption, TimeSpan scopeTimeout) {
            var transactionOptions = new TransactionOptions() {
                IsolationLevel = IsolationLevel.ReadCommitted,
                Timeout = scopeTimeout
            };
            return new TransactionScope(scopeOption, transactionOptions);
        }
    }
}

What's the most DRY-appropriate way to execute an SQL command?

I'm looking to figure out the best way to execute a database query using the least amount of boilerplate code. The method suggested in the SqlCommand documentation:
private static void ReadOrderData(string connectionString)
{
    string queryString = "SELECT OrderID, CustomerID FROM dbo.Orders;";
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        SqlCommand command = new SqlCommand(queryString, connection);
        connection.Open();
        SqlDataReader reader = command.ExecuteReader();
        try
        {
            while (reader.Read())
            {
                Console.WriteLine(String.Format("{0}, {1}", reader[0], reader[1]));
            }
        }
        finally
        {
            reader.Close();
        }
    }
}
mostly consists of code that would have to be repeated in every method that interacts with the database.
I'm already in the habit of factoring out the establishment of a connection, which would yield code more like the following. (I'm also modifying it so that it returns data, in order to make the example a bit less trivial.)
private SqlConnection CreateConnection()
{
    var connection = new SqlConnection(_connectionString);
    connection.Open();
    return connection;
}

private List<int> ReadOrderData()
{
    using (var connection = CreateConnection())
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "SELECT OrderID FROM dbo.Orders;";
        using (var reader = command.ExecuteReader())
        {
            var results = new List<int>();
            while (reader.Read()) results.Add(reader.GetInt32(0));
            return results;
        }
    }
}
That's an improvement, but there's still enough boilerplate to nag at me. Can this be reduced further? In particular, I'd like to do something about the first two lines of the procedure. I don't feel like the method should be in charge of creating the SqlCommand. It's a tiny piece of repetition as it is in the example, but it seems to grow if transactions are being managed manually or timeouts are being altered or anything like that.
edit: Assume, at least hypothetically, that there are going to be many different types of data being returned, and consequently the solution can't be just one one-size-fits-all method; there will have to be a few different ones depending, at minimum, on whether ExecuteNonQuery, ExecuteScalar, ExecuteReader, ExecuteReaderAsync, or any of the others is being called. I'd like to cut down on the repetition among those.
Tried Dapper?
Granted, this doesn't get you a DataReader, but you might just prefer it this way once you've tried it.
It's about the lightest-weight an ORM can be while still being called an ORM. No more writing methods to map between DataReader and strong types for me.
Used right here on all the StackExchange sites.
using (var conn = new SqlConnection(cs))
{
    var dogs = conn.Query("select name, age from dogs");
    foreach (dynamic dog in dogs)
    {
        Console.WriteLine("{0} age {1}", dog.name, dog.age);
    }
}
or
using (var conn = new SqlConnection(cs))
{
    var dogs = conn.Query<Dog>("select Name, Age from dogs");
    foreach (Dog dog in dogs)
    {
        Console.WriteLine("{0} age {1}", dog.Name, dog.Age);
    }
}

class Dog
{
    public string Name { get; set; }
    public int Age { get; set; }
}
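Parameters are equally terse; a quick sketch (Dapper binds the anonymous object's properties to the @-parameters in the SQL):

using (var conn = new SqlConnection(cs))
{
    var pups = conn.Query<Dog>(
        "select Name, Age from dogs where Age < @maxAge",
        new { maxAge = 2 });
}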
If you want to roll data access on your own, this pattern of helper methods could be one way to remove duplication:
private List<int> ReadOrderData()
{
    return ExecuteList<int>("SELECT OrderID FROM dbo.Orders;",
        x => x.GetInt32(0)).ToList();
}

private IEnumerable<T> ExecuteList<T>(string query,
    Func<IDataRecord, T> entityCreator)
{
    using (var connection = CreateConnection()) // CreateConnection (from the question) opens the connection
    using (var command = connection.CreateCommand())
    {
        command.CommandText = query;
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
                yield return entityCreator(reader);
        }
    }
}
You'll still have to add support for parameters, but the pattern is what I'm trying to illustrate.
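One hedged way to bolt on parameters (the names and tuple shape here are my own, requiring C# 7 or later):

private IEnumerable<T> ExecuteList<T>(string query,
    Func<IDataRecord, T> entityCreator,
    params (string Name, object Value)[] args)
{
    using (var connection = CreateConnection())
    using (var command = connection.CreateCommand())
    {
        command.CommandText = query;
        foreach (var (name, value) in args)
        {
            // build provider-appropriate parameters from the tuples
            var p = command.CreateParameter();
            p.ParameterName = name;
            p.Value = value;
            command.Parameters.Add(p);
        }
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
                yield return entityCreator(reader);
        }
    }
}

// usage: ExecuteList<int>("SELECT OrderID FROM dbo.Orders WHERE CustomerID = @id",
//     x => x.GetInt32(0), ("@id", 42))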
What I typically do is use a custom class I wrote a while back that accepts a SQL string, and optionally a list of parameters, and returns a DataTable.
Since the thing that changes between invocations is typically just the SQL, that is optimal IMHO.
If you truly do need to use a DataReader, you can do something like this:
public static void ExecuteWithDataReader(string connectionString, string sql, Action<SqlDataReader> stuffToDo)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    stuffToDo(reader);
                }
            }
        }
    }
}

private static void ReadOrderData(string connectionString)
{
    string sql = "SELECT OrderID, CustomerID FROM dbo.Orders;";
    ExecuteWithDataReader(connectionString, sql, r => Console.WriteLine(String.Format("{0}, {1}", r[0], r[1])));
}
The first two lines are the most important thing you need.
But if you still wish to remove them, you can move them into a database handler class. Yes, it becomes more code, but in refactoring terms, everything moves to its related concern.
Try writing a singleton class that receives a command, performs the action, and returns a result of type SqlDataReader.
Doing this in comments was too much.
I would suggest that the boilerplate code around
using (conn = new SqlConnection(...))
using (cmd = new SqlCommand(...))
{
    // blah blah blah
}
isn't something to be lightly removed; instead, I would encourage you to keep it exactly where it is. Resources, especially unmanaged ones, should be opened and released as close to the point of execution as possible, IMHO.
In no small part due to the ease with which other developers will fail to follow the appropriate clean up conventions.
If you do something like
private SqlConnection CreateConnection()
{
    var connection = new SqlConnection(_connectionString);
    connection.Open();
    return connection;
}
Then you are inviting another programmer to call this method and completely fail to release the resource as soon as the query is executed. I don't know what kind of app you are building, but in a web app such a thing will lead to memory / connection / resource errors of types that are difficult to debug, unless you've been through it before.
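(An illustrative alternative, mine rather than the answerer's: invert the helper so it owns the whole lifetime and the caller never sees the raw connection:)

private T WithConnection<T>(Func<SqlConnection, T> work)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        return work(connection); // connection is disposed as soon as work completes
    }
}

// usage: var count = WithConnection(conn =>
//     new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn).ExecuteScalar());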
Instead, I'd suggest you look into a lightweight ORM such as Dapper.NET or similar to see how they approached it. I don't use Dapper, but I hear it's pretty good. The reason I don't use it is simply that we don't allow inline SQL to be executed against our databases (but that's a very different conversation).
Here's our standard:
public static DataTable StatisticsGet(Guid tenantId)
{
    DataTable result = new DataTable();
    result.Locale = CultureInfo.CurrentCulture;

    Database db = DatabaseFactory.CreateDatabase(DatabaseType.Clients.ToString());
    using (DbCommand dbCommand = db.GetStoredProcCommand("reg.StatsGet"))
    {
        db.AddInParameter(dbCommand, "TenantId", DbType.Guid, tenantId);
        result.Load(db.ExecuteReader(dbCommand));
    } // using dbCommand

    return result;
} // method::StatisticsGet
We make heavy use of Enterprise Library. It's short, simple, to the point, and very well tested. This method just returns a DataTable, but you could easily have it return an object collection... or nothing.
