We're using C# SqlCommand.ExecuteReader() to issue SQL Server stored procedure and SQL requests inside a transaction.
When the connection is chosen as a deadlock victim, ExecuteReader() does NOT throw a SqlException with the 1205 deadlock code for some commands, but DOES for others.
According to MSDN:
If a transaction is deadlocked, an exception may not be thrown until Read is called.
Considering that we use the SqlCommand object encapsulated inside our own database request framework, is there a way to always guarantee that the exception is thrown when a deadlock occurs?
We're using .NET 4.5, SQL Server 2008 R2, and Visual Studio 2012.
Here is a simplified version of our database access framework code:
SqlDataReader DoWork( string sql ) {
    ...
    SqlCommand cmd = new SqlCommand( sql );
    SqlDataReader rdr = null;
    try {
        rdr = cmd.ExecuteReader( CommandBehavior.Default );
    } catch (SqlException sqle) {
        // Log the error, throw a custom exception, etc.
        // Note: the deadlock code is in SqlException.Number, not ErrorCode:
        // if (sqle.Number == 1205) ...
        ...
        if (rdr != null) {
            rdr.Close();
            rdr = null;
        }
    }
    // All is well, so just return to caller to consume the result set
    return rdr;
}
...
static void Main() {
    ...
    SqlDataReader result = DoWork( "select ..." );
    if (result.HasRows) { // Check there is data to read...
        while (result.Read()) {
            ...
        }
        result.Close();
        ...
    }
}
I don't know why you are doing this:
if (result.HasRows)
This is not necessary and it prevents the deadlock from appearing:
If a transaction is deadlocked, an exception may not be thrown until Read is called.
Delete that if. It's a common anti-pattern, often introduced by people who copy sample code without really understanding what it does.
This in your catch is also an anti-pattern:
if (rdr != null) {
rdr.Close();
rdr = null;
}
Just use using.
This is the code from that link; Stack Overflow wouldn't allow it as an answer:
void DoWork() {
    using (TransactionScope scope = new TransactionScope(...)) {
        var cmd = new SqlCommand("select ...");
        using (SqlDataReader rdr = cmd.ExecuteReader()) {
            while (rdr.Read()) {
                ... process each record
            }
        }
        scope.Complete();
    }
}
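Following up on the original question: if the framework itself must always observe the 1205, one option (a minimal sketch under the assumption that callers can accept buffered results; DoWorkBuffered is a hypothetical name) is to materialize the result set inside the framework, so that Read is called, and the deadlock thrown, inside the framework's own try/catch:

// Hypothetical buffered variant of DoWork: DataTable.Load() calls Read()
// internally, so a deadlock surfaces here instead of in the caller's loop.
DataTable DoWorkBuffered(string sql, SqlConnection con)
{
    try
    {
        using (var cmd = new SqlCommand(sql, con))
        using (var rdr = cmd.ExecuteReader())
        {
            var table = new DataTable();
            table.Load(rdr); // 1205 is thrown here if the batch was deadlocked
            return table;
        }
    }
    catch (SqlException sqle)
    {
        if (sqle.Number == 1205)
        {
            // log, wrap in a custom exception, or retry
        }
        throw;
    }
}

The trade-off is memory: the whole result set is buffered before the caller sees it.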
Related
I have a standard routine for executing a SqlCommand with an exception handler. But if an exception is thrown within this routine, I can't do a rollback when the routine is called within a transaction. So how can I check whether this standard routine is placed inside a transaction or not? What is the proper way? I have googled a lot so far...
Do I have to throw a new exception in my exception handler for the standard routine to force the overall transaction to roll back?
My standard routine looks like:
try
{
    using (SqlConnection con = new SqlConnection(dbCon))
    {
        using (SqlCommand cmd = new SqlCommand(SQL, con))
        {
            con.Open();
            cmd.CommandTimeout = 600;
            cmd.ExecuteScalar();
        }
    }
}
catch (Exception ex)
{
    // do some stuff here
    // maybe check whether we are inside a transaction here
}
My overall transaction look like this:
using (SqlConnection con = new SqlConnection(dbCon))
{
con.Open();
string TransactionName = "TransactionName";
using (SqlTransaction sqlTransaction = con.BeginTransaction(TransactionName))
{
try
{
// do some stuff here and call the standard routine here several times...
}
catch (Exception vDBException)
{
DB.getInstance().RollbackTransaction(sqlTransaction, TransactionName);
}
}
}
I have tried to make use of SELECT @@TRANCOUNT with no success.
I have also tried to check sys.sysprocesses from SQL Server with no success.
I really hope that someone can show me the right direction.
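One way out, sketched below under the assumption that the routine's signature can change (ExecuteStandard and its parameters are hypothetical names): pass the SqlTransaction into the standard routine, attach it to the command, and rethrow after logging so the outer catch can roll back. A non-null transaction parameter also answers the "am I inside a transaction?" question directly, and ADO.NET requires the command's Transaction property to be set anyway when its connection has a pending local transaction.

// Hypothetical reworked standard routine: the caller supplies its open
// connection and, when applicable, its transaction (tran may be null).
void ExecuteStandard(SqlConnection con, SqlTransaction tran, string sql)
{
    using (SqlCommand cmd = new SqlCommand(sql, con, tran))
    {
        cmd.CommandTimeout = 600;
        try
        {
            cmd.ExecuteScalar();
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine(ex); // do some stuff here (logging etc.)
            if (tran != null)
            {
                throw; // rethrow so the caller's catch can roll back
            }
        }
    }
}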
I am trying to layer my Sql client object calls such that they get disposed of reliably. Something like this:
Open database connection -> Create command -> Read results -> close
command -> close database connection
So far this has succeeded when I do all of these things in the same method.
The problem is that this is error-prone and a mess to read through.
When I try to create a common method that handles this, cleans everything up, and returns a reader, the connection gets closed before the reader can be read.
// Closes the connection before it can be read... apparently the reader doesn't actually have any data at that point... relocating to a disposable class that closes on dispose
public SqlDataReader RunQuery(SqlCommand command)
{
SqlDataReader reader = null;
using (var dbConnection = new SqlConnection(_dbConnectionString))
{
try
{
dbConnection.Open();
command.Connection = dbConnection;
reader = command.ExecuteReader(); // connection closed before data can be read by the calling method
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
}
finally
{
dbConnection.Close();
}
}
return reader;
}
I can get around this by creating my own class that implements IDisposable (etc.), but then when I wrap it in the same using statement, it takes up just as many lines as a database connection using statement.
How can I take care of the database connection in a reusable class that takes care of all these artifacts and closes the connection?
You could create a class that holds an open database connection that is reusable, but I suggest reading the data into a list and returning the result:
public List<object> RunQuery(SqlCommand command)
{
List<object> results = new List<object>();
using (var dbConnection = new SqlConnection(_dbConnectionString))
{
try
{
dbConnection.Open();
command.Connection = dbConnection;
using (SqlDataReader reader = command.ExecuteReader())
{
while (reader.Read())
{
// Repeat for however many columns you have
results.Add(reader.GetString(0));
}
}
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
}
}
return results;
}
I don't know the structure of your data, but the important point is that you need to read your data (reader.GetString does this) before you dispose of the connection. You can find more information on how to properly read your data here.
Edit: As mentioned, I removed your finally statement. This is because your using statement is essentially doing the same thing. You can think of a using statement as a try-finally block. Your disposable object will always be disposed after the using statement is exited.
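For illustration (a sketch of what the compiler roughly emits, not the exact generated code), the using statement over the connection expands to something like this:

// Roughly what "using (var dbConnection = new SqlConnection(...)) { ... }"
// compiles down to: a try/finally that guarantees Dispose runs.
SqlConnection dbConnection = new SqlConnection(_dbConnectionString);
try
{
    // ... body of the using block ...
}
finally
{
    if (dbConnection != null)
    {
        dbConnection.Dispose(); // Dispose also closes the connection
    }
}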
so there's no way to make a reusable method that tucks away all/most of the nested using statements?
There is a specific pattern supported for returning a DataReader from a method, like this:
static IDataReader GetData(string connectionString, string query)
{
var con = new SqlConnection(connectionString);
con.Open();
var cmd = con.CreateCommand();
cmd.CommandText = query;
var rdr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
return rdr;
}
Then you can call this method in a using block:
using (var rdr = GetData(constr, sql))
{
while (rdr.Read())
{
//process rows
}
} // <- the DataReader _and_ the connection will be closed here
Think about these 2 snippets:
Approach 1: use using statement
using (var connection = new SqlConnection())
using (var command = new SqlCommand(cmdText, connection)) {
    try {
        connection.Open();
        using (var reader = command.ExecuteReader(
            CommandBehavior.CloseConnection | CommandBehavior.SingleResult)) {
            while (reader.Read()) {
                // read values
            }
        }
    } catch (Exception ex) {
        // log(ex);
    }
}
Approach 2: use try/finally
var connection = new SqlConnection();
var command = new SqlCommand(cmdText, connection);
SqlDataReader reader = null;
try {
    connection.Open();
    reader = command.ExecuteReader(
        CommandBehavior.CloseConnection | CommandBehavior.SingleResult);
    while (reader.Read()) {
        // read values...
    }
} catch (Exception ex) {
    // log(ex);
} finally {
    command.Dispose();
    if (reader != null) {
        if (!reader.IsClosed)
            reader.Close();
        reader.Dispose();
    }
    if (connection.State != ConnectionState.Closed)
        connection.Close();
    connection.Dispose();
}
We all know that using statements are compiled to try/finally blocks. So is it correct to say that when the app gets compiled, there will be 4 nested try blocks?
try { // for using SqlConnection
try { // for using SqlCommand
try { // my own try block
try { // for using SqlDataReader
} finally {
// dispose SqlDataReader
}
} catch {
// my own catch. can be used for log etc.
}
} finally {
// dispose SqlCommand
}
} finally {
// dispose SqlConnection
}
And if the answer is yes, wouldn't that be a performance issue? Generally, is there any performance difference at all between using blocks and try/finally blocks?
UPDATE:
From the comments, I have to say:
1- The important question is: with multiple try blocks nested inside each other, is there any performance issue?
2- I have to take care of the code, because I'm responsible for the code, not the query. The query side has its own developer, who is doing his best, so I have to do my best too. That's why it's important to me to care about milliseconds ;) Thanks in advance.
Usually when you hear that try/catch is slow, it's about exception handling: if an exception actually occurs, handling it can be slow. But merely entering a try block is not something you should worry about, especially in your case, where it wraps a SQL query call.
If you want to know more about exceptions and performance in .NET, there are plenty of articles to read, for example this MSDN article or this great CodeProject article.
And of course using is the preferable way, because it makes the code much cleaner.
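To see the cost for yourself, here is a rough sketch (not a rigorous benchmark; timings vary by machine and JIT) comparing a bare loop with one that enters a try/finally on every iteration:

using System;
using System.Diagnostics;

class TryFinallyCost
{
    static void Main()
    {
        const int N = 100000000;
        long sum = 0;

        // Bare loop
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) sum += i;
        sw.Stop();
        Console.WriteLine("bare loop:   {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);

        // Same work, entering a try/finally on every iteration
        sum = 0;
        sw.Restart();
        for (int i = 0; i < N; i++)
        {
            try { sum += i; }
            finally { /* nothing to clean up */ }
        }
        sw.Stop();
        Console.WriteLine("try/finally: {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);
    }
}

The two loops should come out close: the expensive part of exception handling is throwing, not entering the protected region.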
I frequently use the following code (or something like it) to dispose of objects:
SqlCommand vCmd = null;
try
{
// CODE
}
catch(Exception ex) { /* EXCEPTION HANDLE */ }
finally
{
if (vCmd != null)
{
vCmd.Dispose();
vCmd = null;
}
}
Is this the best way to release and dispose of objects?
I'm using VS code analysis, and it gives me a warning about redundancies. But I have always done it this way...
The best way in terms of readability is to use the using statement:
using (SqlCommand vCmd = new SqlCommand("...", connection))
{
    try
    {
        // CODE
    }
    catch (Exception ex)
    {
        // EXCEPTION HANDLE
    }
}
It disposes of the object even in case of error, similar to a finally block. You should always use it when an object implements IDisposable, which indicates that it uses unmanaged resources.
Further reading:
Cleaning Up Unmanaged Resources
There is no need to set objects to null.
Here is an example from MSDN:
private static void ReadOrderData(string connectionString)
{
string queryString =
"SELECT OrderID, CustomerID FROM dbo.Orders;";
using (SqlConnection connection = new SqlConnection(
connectionString))
{
SqlCommand command = new SqlCommand(
queryString, connection);
connection.Open();
SqlDataReader reader = command.ExecuteReader();
try
{
while (reader.Read())
{
Console.WriteLine(String.Format("{0}, {1}",
reader[0], reader[1]));
}
}
finally
{
// Always call Close when done reading.
reader.Close();
}
}
}
Note the use of "using" for the connection.
Back in the Olden Days of COM/ActiveX, you needed to set your objects to "Nothing".
In managed code, this is no longer necessary.
You should neither call Dispose() nor set your SqlCommand to null.
Just stop using it, and trust the .NET garbage collector to do the rest.
I have the following method for bulk inserting data into tables.
First my code populates the data into DataTables, then inserts this data into the corresponding tables using the .NET SqlBulkCopy class.
I have a requirement that the data should be inserted into all of the tables or none of them.
For this I have used the .NET SqlTransaction class.
The scenario is that multiple threads execute the following code block at the same time.
public void Import()
{
    using (SqlConnection sqlConnection = new SqlConnection(connectionString))
    {
        SqlTransaction sqlTrans = null;
        try
        {
            sqlConnection.Open();
            sqlTrans = sqlConnection.BeginTransaction(IsolationLevel.Serializable);
            SqlCommand cmd = sqlConnection.CreateCommand();
            cmd.CommandText = "select top 1 null from lockTable with(xlock)";
            cmd.CommandTimeout = 3600 * 3;
            cmd.Transaction = sqlTrans;
            SqlDataReader reader = cmd.ExecuteReader();
            foreach (DataTable dt in DataTables)
            {
                ImportIntoDatabase(sqlConnection, dt, sqlTrans);
            }
            reader.Close();
            sqlTrans.Commit();
        }
        catch (Exception)
        {
            sqlTrans.Rollback();
            throw; // rethrow without resetting the stack trace
        }
    }
}
private void ImportIntoDatabase(SqlConnection sqlConn, DataTable dt, SqlTransaction sqlTrans)
{
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.Default, sqlTrans))
    {
        bulkCopy.BulkCopyTimeout = dt.Rows.Count * 10;
        try
        {
            bulkCopy.DestinationTableName = dt.TableName;
            bulkCopy.WriteToServer(dt);
        }
        catch (Exception)
        {
            throw; // rethrow without resetting the stack trace
        }
    }
}
To handle this concurrency, I have created a dummy table (named 'lockTable') in the same database as the bulk insert tables. I take an exclusive lock on this dummy table inside the SqlTransaction, with a command timeout as high as 3 hours.
Problem:
I am getting the following exception:
Cannot access destination table 'Tbl1' (Tbl1 is the table being bulk inserted into)
followed by another exception while rolling back the transaction in the catch block:
Error While executing activity The server failed to resume the transaction. Desc:3a00000001.
The transaction active in this session has been committed or aborted by another session.
Can anyone help me with this weird behavior of the code? I have already searched a lot on this issue on the internet, but I have not found anything helpful.
In Import, the loop foreach (DataTable dt in DataTables) is not going to be thread-safe.
sqlConnection already has an active reader from Import, so that connection cannot be used in ImportIntoDatabase.
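(For reference, and as a configuration assumption rather than a recommendation: SQL Server can allow a command to execute while a reader is open on the same connection if Multiple Active Result Sets is enabled in the connection string.)

// Hypothetical connection string: MultipleActiveResultSets=True lets commands
// run while a SqlDataReader is still open on the same connection (MARS).
string connectionString =
    "Data Source=myServer;Initial Catalog=myDatabase;" +
    "Integrated Security=True;MultipleActiveResultSets=True";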
Echoing smp: if you are locking a table, then why multiple threads?
If you want to build up the input while the SQL inserts are taking place, then use an async method such as SqlCommand.BeginExecuteReader. You get asynchrony without the overhead of a thread. DataTables are also relatively slow; I insert using TVPs and lightweight objects. A huge factor in insert performance is index fragmentation: if at all possible, insert in the order of the clustered index (see the sketch below). The loop is simple: build input, wait for the async call, run the next async call; building the input may mean reading it from a queue. SQL inserts into the same table(s) are typically not going to go faster in parallel; in my experience, ordered serial inserts with no gap in time between inserts are fastest.
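As a sketch of the insert-order point (assuming, purely for illustration, that the destination's clustered index is on an Id column), the rows can be pre-sorted before the bulk copy:

// Pre-sort the DataTable rows to match the destination's clustered index
// ("Id" is a hypothetical key column here) to reduce index fragmentation.
DataView view = dt.DefaultView;
view.Sort = "Id ASC";
DataTable sorted = view.ToTable();
using (var bulkCopy = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.Default, sqlTrans))
{
    bulkCopy.DestinationTableName = dt.TableName;
    bulkCopy.WriteToServer(sorted);
}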
I got my problem solved.
Following are the changes I made to my Import method:
public void Import()
{
    using (SqlConnection sqlConnection = new SqlConnection(connectionString))
    {
        sqlConnection.Open();
        using (SqlTransaction sqlTrans = sqlConnection.BeginTransaction())
        {
            try
            {
                SqlCommand cmd = sqlConnection.CreateCommand();
                cmd.CommandText = "select top 1 null from lockTable with(xlock)";
                cmd.CommandTimeout = LOCK_TIME_OUT;
                cmd.Transaction = sqlTrans;
                SqlDataReader reader = cmd.ExecuteReader();
                foreach (DataTable dt in DataTables)
                {
                    ImportIntoDatabase(sqlConnection, dt, sqlTrans);
                }
                reader.Close();
                sqlTrans.Commit();
            }
            catch (Exception)
            {
                sqlTrans.Rollback();
                throw; // rethrow without resetting the stack trace
            }
        }
        sqlConnection.Close();
    }
}
If multiple threads have access to the Import method, then shouldn't you be locking the contents of this method?
I don't think you need a dummy table; you just need to lock the two methods above, as in the sketch below.
I would also mention that you should join all the threads, so that you can tell when they have finished.
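A minimal sketch of that suggestion (the lock object is new here; note that a C# lock only serializes threads within one process, unlike the xlock on the dummy table, which also serializes across processes):

// Serialize the bulk import with a process-wide lock object instead of
// an exclusive lock on a dummy SQL table.
private static readonly object ImportLock = new object();

public void Import()
{
    lock (ImportLock) // only one thread at a time runs the import
    {
        // ... open the connection, begin the transaction,
        //     bulk copy each DataTable, then commit ...
    }
}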