What is the difference between these two ways of handling transactions?
First approach
//
const string insertStatement = @"INSERT INTO Payment....";
using (SqlTransaction sqlTrans = sqlConnection.BeginTransaction())
using (SqlCommand sqlCommand = new SqlCommand(insertStatement, sqlConnection, sqlTrans))
//
sqlTrans.Commit();
Second approach
BEGIN TRAN T1;
INSERT INTO Payment....;
COMMIT TRAN T1;
With the first approach you can have asynchronous use of your database connection (multithreading), because the transaction is attached explicitly to the commands that belong to it.
If you have parallel threads performing operations against the database and you simply drop a BEGIN TRANSACTION into the batch, queries from other threads that were never meant to be part of this transaction will probably be swept into it, and something will break if you have to perform a ROLLBACK.
With a SqlTransaction you make sure that only the queries that are supposed to be part of the transaction are included in it.
Related
Is a single call to ExecuteNonQuery() atomic or does it make sense to use Transactions if there are multiple sql statements in a single DbCommand?
See my example for clarification:
using (var ts = new TransactionScope())
{
using (DbCommand lCmd = pConnection.CreateCommand())
{
lCmd.CommandText = @"
DELETE FROM ...;
INSERT INTO ...";
lCmd.ExecuteNonQuery();
}
ts.Complete();
}
If you don't ask for a transaction, you (mostly) don't get one. SQL Server wants everything in transactions and so, by default (with no other transaction management), for each separate statement, SQL Server will create a transaction and automatically commit it. So in your sample (if there was no TransactionScope), you'll get two separate transactions, both independently committed or rolled back (on error).
(Unless you've turned IMPLICIT_TRANSACTIONS on for that connection, in which case you'll get one transaction, but you need an explicit COMMIT or ROLLBACK at the end. The only people I've found using this mode are people porting from Oracle and trying to minimize changes. I wouldn't recommend turning it on for greenfield work, because it'll just confuse people used to SQL Server's defaults.)
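As a minimal illustration of the two modes (the Payment table and its Id column are borrowed from the question's snippets, so treat them as placeholders):

```sql
-- Default autocommit: each statement is its own transaction,
-- committed as soon as it succeeds.
INSERT INTO Payment (Id) VALUES (1);

-- Implicit transactions: the first statement silently opens a
-- transaction that stays open until you end it yourself.
SET IMPLICIT_TRANSACTIONS ON;
INSERT INTO Payment (Id) VALUES (2);  -- opens a transaction here
COMMIT;                               -- required, or the work is rolled back on disconnect
SET IMPLICIT_TRANSACTIONS OFF;
```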
It's not. The SQL engine will treat this text as two separate statements. A TransactionScope is required (or any other form of transaction, e.g. an explicit BEGIN TRAN ... COMMIT in the SQL text if you prefer).
No. As the above answers say, the command (as opposed to the individual statements within the command) will not be run inside a transaction.
This is easy to verify.
Sample code:
create table t1
(
Id int not null,
Name text
)
using (var conn = new SqlConnection(...))
using (var cmd = conn.CreateCommand())
{
cmd.CommandText = @"
insert into t1 values (1, 'abc');
insert into t1 values (null, 'pqr');
";
cmd.ExecuteNonQuery();
}
The second statement will fail. But the first statement will execute and you'll have a row in the table.
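For contrast, here is a sketch of the same batch wrapped in an explicit transaction (connection string omitted, as in the sample above); now the failure of the second insert undoes the first one as well:

```csharp
using (var conn = new SqlConnection(/* connection string */))
{
    conn.Open();
    using (var tran = conn.BeginTransaction())
    using (var cmd = conn.CreateCommand())
    {
        cmd.Transaction = tran;
        cmd.CommandText = @"
            insert into t1 values (1, 'abc');
            insert into t1 values (null, 'pqr');";
        try
        {
            cmd.ExecuteNonQuery();  // throws on the second insert
            tran.Commit();
        }
        catch
        {
            tran.Rollback();        // the first insert is rolled back too
            throw;
        }
    }
}
```

After this runs, t1 contains no rows.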
I've written a custom replication function in a standard C# Windows Forms app with a SQL Server 2008 Express database. It basically pulls down a set of SQL statements that need to be executed against a subscriber database. On a complete refresh this can run to 200k+ statements.
I'm processing these statements inside a code block as shown below:
using (SqlConnection connection = ConnectionManager.GetConnection())
{
connection.Open();
SqlTransaction transaction = connection.BeginTransaction();
// Process 200k+ Insert/Update/Delete statements using SqlCommands
transaction.Commit();
}
What I'm finding is that my application's memory usage remains fairly stable at around 40 MB for the first 30k statements, after which it suddenly jumps to around 300 MB and then keeps growing until I hit an OutOfMemoryException.
Is the method I'm using even possible? Can I process that many statements inside a single transaction? I would assume so. If there is a better way, I'd love to hear it. This needs to be transactional; otherwise a partial replication would leave a broken database.
Thanks.
EDIT:
After restarting my computer I managed to get a full 200k+ replication to go through. Memory usage grew to 1.4 GB at one point, but dropped all the way back to 40 MB once the replication completed. This leads me to conclude that something inside my loop that processes the commands is causing the growth in memory.
Are you disposing your forms and their disposable controls before closing?
Wrap all disposable objects in using statements.
Don't open and close the connection over and over again; instead send the data to the database in a single transaction.
If your application is still holding too much memory, then you need a doctor like Red Gate ANTS Memory Profiler.
can I process that many statements inside a single transaction?
You have the options below to do this:
Bulk insert and operate on the records in a stored proc.
Prepare XML and send the string to the database.
Send a read-only DataTable to SQL Server through a stored proc (as a table-valued parameter).
Sample Stored Proc
Begin Try
    Set NoCount On
    Set XACT_Abort On
    Begin Tran
    --Your queries
    Commit Tran
End Try
Begin Catch
    Rollback Tran
End Catch
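As a sketch of the DataTable option above, assuming a hypothetical user-defined table type dbo.PaymentType and a stored procedure dbo.ProcessPayments that accepts it (neither name is from the question):

```csharp
var table = new DataTable();
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Amount", typeof(decimal));
// ... fill the table with the rows to replicate ...

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.ProcessPayments", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var p = cmd.Parameters.AddWithValue("@Payments", table);
    p.SqlDbType = SqlDbType.Structured;  // table-valued parameter
    p.TypeName = "dbo.PaymentType";      // the user-defined table type
    conn.Open();
    cmd.ExecuteNonQuery();               // one round trip; the proc owns the transaction
}
```

This moves the looping and the transaction into the stored procedure, so the client holds one DataTable in memory instead of building 200k+ SqlCommand objects.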
Make sure to dispose of objects once they are no longer in use.
It should be like this
using (SqlConnection connection = new SqlConnection())
{
connection.Open();
using (SqlTransaction transaction = connection.BeginTransaction())
{
transaction.Commit();
}
}
Did you wrap the SqlCommand in a using block as well?
using (SqlCommand cmd = new SqlCommand())
{
}
I've been brushing up on my knowledge this evening, trying to overcome 4 years of bad programming practices because of the company I was working for. One of the things I've recently stumbled on was System.Transactions. After reading about them for the last few hours, I think I have an adequate understanding of how they work and why you would want to use them. However, all the examples I've looked at are showing inline T-SQL being called from within the transaction.
I pretty much use Stored Procedures exclusively when doing database access and the existing stored procedures are all wrapped in their own SqlTransactions. You know, using 'Begin Tran' and then rolling back or committing. If a Stored Proc calls another stored proc, it too creates a transaction and the Commits bubble up until the outer one either commits or rolls back. Works great.
So now my question is, if I wanted to start using System.Transactions in my code - for the simple purposes of monitoring successive database tasks that can't be nested inside a single Stored Procedure - how does that work with the existing SqlTransactions I already have in my stored procs?
Will using System.Transactions in my code just add one more layer of protection before it is actually committed, or because I'm explicitly committing in my SqlTransaction - will the data be persisted regardless of committing or rolling back in code based transaction?
No, System.Transactions and Sql transactions do not mix.
And I quote, "Do Not Mix Them" from the following MSDN article: https://msdn.microsoft.com/en-us/library/ms973865.aspx.
Sql transactions do not participate on the outer System.Transaction the way you want them to. Sql transactions that fail or rollback will not cause other activities within the System.Transaction to rollback.
This example shows the phenomena:
using (var tx = new TransactionScope())
{
using (var con = new SqlConnection($"{connectionstring}"))
{
con.Open();
using (var com = new SqlCommand($"set xact_abort on; begin transaction; INSERT INTO dbo.KeyValueTable VALUES ('value1', '{Guid.NewGuid()}'); rollback;", con))
{
// This transaction failed, but it doesn't rollback the entire system.transaction!
com.ExecuteNonQuery();
}
using (var com = new SqlCommand($"set xact_abort on; begin transaction; INSERT INTO dbo.KeyValueTable VALUES ('value2', '{Guid.NewGuid()}'); commit;", con))
{
// This transaction will actually persist!
com.ExecuteNonQuery();
}
}
tx.Complete();
}
After running this example on an empty data store you should notice that the records from the second Sql operation are indeed committed, when the structure of the C# code would imply that they shouldn't be.
Put simply, you should not mix them. If you are orchestrating multiple Sql transactions within an application you should just use System.Transactions. Unfortunately that would mean removing your transaction code from all of your stored procedures, but alas, it is necessary as with a mixed model you cannot guarantee the integrity of your data.
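Once the BEGIN TRAN/COMMIT pairs are stripped from the procedures, the orchestration collapses to something like this sketch (the procedure names are placeholders):

```csharp
using (var scope = new TransactionScope())
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();  // the connection enlists in the ambient transaction

    using (var cmd = new SqlCommand("dbo.Step_One", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.ExecuteNonQuery();
    }
    using (var cmd = new SqlCommand("dbo.Step_Two", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.ExecuteNonQuery();
    }

    scope.Complete();  // leaving the scope without this rolls everything back
}
```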
Works just fine; if the inner transactions within the stored procs are committed, everything will commit. If one of them rolls back, then everything within the outer transaction will roll back. Pure magic. :)
I have two stored procedures that I want execute wrapped in a transaction. For various reasons, I need to handle the transaction in my application code instead of within the database.
At the moment, my code looks like this:
try
{
using (SqlConnection conn = Connection())
{
conn.Open();
using (SqlTransaction sqlTrans = conn.BeginTransaction())
{
try
{
using (SqlCommand cmd1 = new SqlCommand("Stored_Proc_1", conn, sqlTrans))
{
cmd1.CommandType = CommandType.StoredProcedure;
cmd1.ExecuteNonQuery();
}
using (SqlCommand cmd2 = new SqlCommand("Stored_Proc_2", conn, sqlTrans))
{
cmd2.CommandType = CommandType.StoredProcedure;
cmd2.ExecuteNonQuery();
}
sqlTrans.Commit();
}
catch
{
sqlTrans.Rollback();
throw;
}
}
conn.Close();
}
}
catch (SqlException ex)
{
// exception handling and logging code here...
}
When one of the stored procs raises an error, the exception message I am seeing looks like:
Error message from raiserror within stored procedure.
Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0.
Which makes sense, because at the first catch, the transaction has not been rolled back yet.
But I want a "clean" error (without the tran count message - I'm not interested in this because I am rolling back the transaction) for my exception handling code.
Is there a way I can restructure my code to achieve this?
EDIT:
The basic structure of my stored procs looks like this:
create proc Stored_Proc_1
as
set nocount on
begin try
begin transaction
raiserror('Error raised by Stored_Proc_1', 16, 1)
commit
end try
begin catch
if (@@trancount > 0) rollback
declare @ErrMsg nvarchar(4000), @ErrSeverity int, @ErrProc sysname, @ErrLine varchar(10)
select @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY(), @ErrProc = ERROR_PROCEDURE(), @ErrLine = ERROR_LINE()
-- log the error
-- sql logging code here...
raiserror(@ErrMsg, @ErrSeverity, 1)
end catch
UPDATE:
I've taken the transaction handling out of my stored procedures and that seems to have solved the problem. Obviously I was doing it wrong - but I'd still like to know how to do it right. Is removing transactions from the stored procedures the best solution?
Well, the conn.Close() can go anyway; it'll get closed by the using (if you think about it, it is odd that we only Close() it when no exception is thrown).
Do either of your stored procedures do any transaction code inside themselves (that isn't being rolled back/committed)? It sounds like that is where the problem is...? If anything, the error message suggests to me that one of the stored procedures is doing a COMMIT even though it didn't start a transaction - perhaps due to the (incorrect) approach:
-- pseudo-TSQL
IF @@TRANCOUNT = 0 BEGIN TRAN
-- ...
IF @@TRANCOUNT > 0 COMMIT TRAN -- or maybe = 1
(if you do conditional transactions in TSQL, you should track (via a bool flag) whether you created the transaction - and only COMMIT if you did)
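A sketch of that flag-based pattern in T-SQL:

```sql
DECLARE @startedTran bit = 0;

IF @@TRANCOUNT = 0
BEGIN
    BEGIN TRAN;
    SET @startedTran = 1;  -- remember that *we* opened it
END

-- ... the procedure's work ...

IF @startedTran = 1
    COMMIT TRAN;  -- only commit a transaction we actually started
```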
The other option is to use TransactionScope - easier to use (you don't need to set it against each command etc), but slightly less efficient:
using(TransactionScope tran = new TransactionScope()) {
// create command, exec sp1, exec sp2 - without mentioning "tran" or
// anything else transaction related
tran.Complete();
}
(Note there is no rollback etc.; the Dispose() (via using) will do the rollback if it needs to.)
Don't do transactions in your database/stored procedures if you do this in your application! This will most surely just create confusion. Pick a layer and stick to it. Make sure you have a nice normalised database and exceptions should percolate upwards.
I agree with Marc that the problem is likely to be within the stored procedures themselves.
There's a quite interesting article outlining a few issues here.
If the stored procedure includes code like this:
BEGIN TRY
    SET @now = CAST(@start AS datetime2(0))
END TRY
BEGIN CATCH
    SET @now = CURRENT_TIMESTAMP
END CATCH
and you pass e.g. 'now' as @start, the CAST in the TRY will fail. This marks the transaction as rollback-only even though the error itself has been caught and handled. So while you get no exception from the above code, the transaction cannot be committed. If your stored procedures contain code like this, it needs to be rewritten to avoid the try/catch.
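One way to rewrite it (on SQL Server 2012 or later) is TRY_CAST, which returns NULL on a failed conversion instead of raising an error, so the transaction is never marked rollback-only:

```sql
SET @now = COALESCE(TRY_CAST(@start AS datetime2(0)), CURRENT_TIMESTAMP);
```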