Exception handling around the rollback of a SqlTransaction - C#

I have two stored procedures that I want execute wrapped in a transaction. For various reasons, I need to handle the transaction in my application code instead of within the database.
At the moment, my code looks like this:
try
{
    using (SqlConnection conn = Connection())
    {
        conn.Open();
        using (SqlTransaction sqlTrans = conn.BeginTransaction())
        {
            try
            {
                using (SqlCommand cmd1 = new SqlCommand("Stored_Proc_1", conn, sqlTrans))
                {
                    cmd1.CommandType = CommandType.StoredProcedure;
                    cmd1.ExecuteNonQuery();
                }
                using (SqlCommand cmd2 = new SqlCommand("Stored_Proc_2", conn, sqlTrans))
                {
                    cmd2.CommandType = CommandType.StoredProcedure;
                    cmd2.ExecuteNonQuery();
                }
                sqlTrans.Commit();
            }
            catch
            {
                sqlTrans.Rollback();
                throw;
            }
        }
        conn.Close();
    }
}
catch (SqlException ex)
{
    // exception handling and logging code here...
}
catch (SqlException ex)
{
// exception handling and logging code here...
}
When one of the stored procs raises an error, the exception message I am seeing looks like:
Error message from raiserror within stored procedure.
Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0.
Which makes sense, because at the first catch, the transaction has not been rolled back yet.
But I want a "clean" error (without the tran count message - I'm not interested in this because I am rolling back the transaction) for my exception handling code.
Is there a way I can restructure my code to achieve this?
EDIT:
The basic structure of my stored procs looks like this:
create proc Stored_Proc_1
as
set nocount on
begin try
    begin transaction
    raiserror('Error raised by Stored_Proc_1', 16, 1)
    commit
end try
begin catch
    if (@@TRANCOUNT > 0) rollback
    declare @ErrMsg nvarchar(4000), @ErrSeverity int, @ErrProc sysname, @ErrLine varchar(10)
    select @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY(), @ErrProc = ERROR_PROCEDURE(), @ErrLine = ERROR_LINE()
    -- log the error
    -- sql logging code here...
    raiserror(@ErrMsg, @ErrSeverity, 1)
end catch
UPDATE:
I've taken the transaction handling out of my stored procedures and that seems to have solved the problem. Obviously I was doing it wrong - but I'd still like to know how to do it right. Is removing transactions from the stored procedures the best solution?

Well, the conn.Close() could go away - it'll get closed by the using (if you think about it, it is odd that we only Close() it when there is no exception).
Do either of your stored procedures do any transaction code inside themselves (that isn't being rolled back/committed)? It sounds like that is where the problem is...? If anything, the error message suggests to me that one of the stored procedures is doing a COMMIT even though it didn't start a transaction - perhaps due to the (incorrect) approach:
-- pseudo-TSQL
IF @@TRANCOUNT = 0 BEGIN TRAN
-- ...
IF @@TRANCOUNT > 0 COMMIT TRAN -- or maybe = 1
(if you do conditional transactions in TSQL, you should track (via a bool flag) whether you created the transaction - and only COMMIT if you did)
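That flag-tracking pattern might look something like the following sketch; the procedure body and variable name are illustrative, not taken from the question:

```sql
DECLARE @startedTran bit
SET @startedTran = 0
IF @@TRANCOUNT = 0
BEGIN
    BEGIN TRAN
    SET @startedTran = 1  -- remember that *we* opened this transaction
END
-- ... the procedure's real work here ...
IF @startedTran = 1
    COMMIT TRAN  -- only commit the transaction this procedure started
```

This way a procedure called from inside an outer transaction never commits (or rolls back) work that belongs to its caller.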
The other option is to use TransactionScope - easier to use (you don't need to set it against each command etc), but slightly less efficient:
using (TransactionScope tran = new TransactionScope()) {
    // create command, exec sp1, exec sp2 - without mentioning "tran" or
    // anything else transaction related
    tran.Complete();
}
(Note there is no rollback; the Dispose() (via using) will do the rollback if it needs to.)

Don't do transactions in your database/stored procedures if you handle them in your application! That will most surely just create confusion. Pick a layer and stick to it. Make sure you have a nicely normalised database, and exceptions should percolate upwards.

I agree with Marc that the problem is likely to be within the stored procedures themselves.
There's a quite interesting article outlining a few issues here.

If the stored procedure includes code like this:
BEGIN TRY
    SET @now = CAST(@start AS datetime2(0))
END TRY
BEGIN CATCH
    SET @now = CURRENT_TIMESTAMP
END CATCH
and you pass e.g. 'now' as @start, the CAST in the try will fail. This marks the transaction as rollback-only even though the error itself has been caught and handled. So while you get no exceptions from the above code, the transaction cannot be committed. If your stored procedures have code like this, it needs to be rewritten to avoid the try/catch.
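On SQL Server 2012 or later, one way to rewrite such code without try/catch (and so without dooming the transaction) is TRY_CAST, which returns NULL on a failed conversion instead of raising an error. A sketch using the same variables:

```sql
-- TRY_CAST returns NULL instead of raising an error on a bad conversion,
-- so the surrounding transaction is never marked rollback-only.
SET @now = COALESCE(TRY_CAST(@start AS datetime2(0)), CURRENT_TIMESTAMP)
```

On SQL Server 2008 (where TRY_CAST is unavailable) the equivalent is to validate the input, e.g. with ISDATE(), before casting.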

Related

T-SQL Equivalent of .NET TransactionScopeOption.Suppress

In my .NET code, inside a database transaction (using TransactionScope), I could include a nested block with TransactionScopeOption.Suppress, which ensures that the commands inside the nested block are committed even if the outer block rolls back.
Following is a code sample:
using (TransactionScope txnScope = new TransactionScope(TransactionScopeOption.Required))
{
    db.ExecuteNonQuery(CommandType.Text, "Insert Into Business(Value) Values('Some Value')");
    using (TransactionScope txnLogging = new TransactionScope(TransactionScopeOption.Suppress))
    {
        db.ExecuteNonQuery(CommandType.Text, "Insert Into Logging(LogMsg) Values('Log Message')");
        txnLogging.Complete();
    }
    // Something goes wrong here. Logging is still committed
    txnScope.Complete();
}
I was trying to find out if this could be done in T-SQL. A few people have recommended OPENROWSET, but it doesn't look very 'elegant' to use. Besides, I think it is a bad idea to put connection information in T-SQL code.
I've used SQL Service Broker in the past, but it uses Transactional Messaging, which means a message is not posted to the queue until the database transaction is committed.
My requirement: Our application stored procedures are being fired by some third party application, within an implicit transaction initiated outside stored procedure. And I want to be able to catch and log any errors (in a database table in the same database) within my stored procedures. I need to re-throw the exception to let the third party app rollback the transaction, and for it to know that the operation has failed (and thus do whatever is required in case of a failure).
You can set up a loopback linked server with the remote proc transaction promotion option set to false and then access it in T-SQL, or use a CLR procedure in SQL Server to create a new connection outside the transaction and do your work.
Both methods are suggested in How to create an autonomous transaction in SQL Server 2008.
Both methods involve creating new connections. There is an open Connect item requesting this functionality be provided natively.
Values in a table variable exist beyond a ROLLBACK.
So in the following example, all the rows that were going to be deleted can be inserted into a persisted table and queried later on thanks to a combination of OUTPUT and table variables.
-- First, create our table
CREATE TABLE [dbo].[DateTest] ([Date_Test_Id] INT IDENTITY(1, 1), [Test_Date] datetime2(3));
-- Populate it with 15,000,000 rows
-- from 1st Jan 1900 to 1st Jan 2017.
INSERT INTO [dbo].[DateTest] ([Test_Date])
SELECT
TOP (15000000)
DATEADD(DAY, 0, ABS(CHECKSUM(NEWID())) % 42734)
FROM [sys].[messages] AS [m1]
CROSS JOIN [sys].[messages] AS [m2];
BEGIN TRAN;
BEGIN TRY
    DECLARE @logger TABLE ([Date_Test_Id] INT, [Test_Date] DATETIME);
    -- Delete every 1000th row
    DELETE FROM [dbo].[DateTest]
    OUTPUT deleted.Date_Test_Id, deleted.Test_Date INTO @logger
    WHERE [Date_Test_Id] % 1000 = 0;
    -- Make it fail
    SELECT 1/0
    -- So this will never happen
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRAN
    SELECT * INTO dbo.logger FROM @logger;
END CATCH;
SELECT * FROM dbo.logger;
DROP TABLE dbo.logger;

Is using @@TRANCOUNT useful?

I have a simple SP that will either do an INSERT or an UPDATE depending on the existence or non-existence of data in a table.
CREATE PROCEDURE [dbo].spUpsert
    -- Parameters to Update / Insert a StudentSet
    @StudentSetId nvarchar(128),
    @Status_Id int
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION
        SET XACT_ABORT ON;
        SET NOCOUNT ON;
        IF EXISTS(SELECT StudentSetId FROM StudentSet WHERE StudentSetId = @StudentSetId)
        BEGIN
            UPDATE StudentSet SET ModifiedDate = GETDATE(), Status_Id = @Status_Id
            WHERE StudentSetId = @StudentSetId;
        END
        ELSE
        BEGIN
            INSERT INTO StudentSet (StudentSetId, Status_Id)
            VALUES (@StudentSetId, @Status_Id)
        END
        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION
    END CATCH
END
Wrote a method like so:
public void Upsert(string studentSetId, int statusId)
{
    this.DatabaseJobs.ExecuteSqlCommand(@"exec spUpsert
        @StudentSetId = {0},
        @Status_Id = {1} ",
        studentSetId,
        statusId);
}
Here's how this is used:
A student has a file, an xml to be precise, that is sent to a processor which calls this SP as part of the process. Multiple files can be uploaded and the processor is designed to work with 5 files spawning 5 threads.
For a batch of 5 files it throws this error:
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 1, current count = 0. Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 1, current count = 0.
The number 5 is not a magic one; it may also happen when more than 5 files are uploaded. Fewer than that I haven't tried.
So I searched and found a solution that implements the usage of @@TRANCOUNT, detailed here & here.
@@TRANCOUNT is a global variable, and its usage as suggested in the articles seems like it's local to the session. What I mean is that any process in SQL Server can increase @@TRANCOUNT, and relying on it may not produce the expected result.
My question is what's a good way to handle this type of situation?
Thanks in advance.
First, @@TRANCOUNT is informational - it tells you how many nested transactions are currently in progress in the current session. In your case, a transaction is already in progress when the stored procedure is called, hence the transaction count is 1.
Your problem is that ROLLBACK rolls back all transactions, including any nested transactions. If you wish to abort the whole batch, this is exactly what you want, and the error is simply telling you that it has happened.
However if you only want to roll back the transaction you created locally, you must do something slightly different. You have to save the transaction right at the start, then on error you can roll back to that point (before any work was done), and then commit it (with no work done).
BEGIN TRAN
DECLARE @savepoint varchar(32)
SET @savepoint = REPLACE(NEWID(), '-', '') -- savepoint names are limited to 32 characters
SAVE TRAN @savepoint
BEGIN TRY
    -- Do some stuff here
    SELECT 1/0; -- divide by zero error
    COMMIT TRAN
END TRY
BEGIN CATCH
    ROLLBACK TRAN @savepoint;
    COMMIT TRAN -- important!!!
    -- re-raise the error if you want (or recover in some other way);
    -- RAISERROR only accepts constants or variables as arguments
    DECLARE @ErrSeverity int, @ErrState int
    SELECT @ErrSeverity = ERROR_SEVERITY(), @ErrState = ERROR_STATE()
    RAISERROR('Rethrowing error', @ErrSeverity, @ErrState)
END CATCH
Well, if the transaction was started in .NET code, it would be good if it were rolled back in the same code. However, if that's not possible, then you SHOULD check @@TRANCOUNT.
However, you are missing one important thing: what if a transaction wasn't started at all? Your code is constructed in such a way that it needs a transaction. What if you (or someone else) executes the procedure from SSMS?
I suggest you do the following:
at the beginning of your code, store @@TRANCOUNT locally (declare @mytrancount)
before you start your processing, check @mytrancount, and if there is no transaction, start one
commit the transaction at the end, but feel free to check @mytrancount again before the commit
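Sketched out, that suggestion looks something like this (variable and placeholder names are illustrative):

```sql
DECLARE @mytrancount int
SET @mytrancount = @@TRANCOUNT  -- remember whether the caller already opened a transaction
IF @mytrancount = 0
    BEGIN TRANSACTION
BEGIN TRY
    -- ... your processing here ...
    IF @mytrancount = 0
        COMMIT TRANSACTION  -- only commit the transaction we started
END TRY
BEGIN CATCH
    IF @mytrancount = 0 AND @@TRANCOUNT > 0
        ROLLBACK TRANSACTION  -- never roll back the caller's transaction
    -- re-raise so the caller sees the failure
    DECLARE @msg nvarchar(4000), @sev int
    SELECT @msg = ERROR_MESSAGE(), @sev = ERROR_SEVERITY()
    RAISERROR(@msg, @sev, 1)
END CATCH
```

If the caller did open the transaction, the procedure leaves both commit and rollback to the caller and merely re-raises the error.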
EDIT
Of course, as Ben stated in his answer, you can save the transaction instead of beginning it in the code. E.g., if there is a transaction, save it in order to be able to roll back only the part from SAVE to ROLLBACK. And if there is no transaction, start it in your procedure.
Remus Rusanu has a good template for that.

What is the difference in these two ways of transaction handling

First approach
const string selectStatement = @"INSERT INTO Payment....";
using (SqlTransaction sqlTrans = sqlConnection.BeginTransaction())
using (SqlCommand sqlCommand = new SqlCommand(selectStatement, sqlConnection, sqlTrans))
{
    // ...
    sqlTrans.Commit();
}
Second Approach
BEGIN TRAN T1;
INSERT INTO Payment....;
COMMIT TRAN T1;
With the first option you can have asynchronous use of your database connection (multithreading).
If you have parallel threads performing operations in the database and you simply dump a BEGIN TRANSACTION there, you will probably cause other threads' queries that were not meant to be part of this transaction to be included too, and screw something up if you have to perform a ROLLBACK.
With the use of a SqlTransaction you make sure only the queries that are supposed to be part of the transaction will be included in it.

C# execute batch SQL command, catch exception, then continue with rest of batch

I’m executing batch SQL commands in C# using SqlConnection and SqlCommand. I need to know which statement fails, and I can’t do these one at a time because of performance issues. Is there any way in C# that I can execute a batch SQL statement and, in the case of failure, have it tell me which statement fails (the index, id, or anything so I can know which one) and THEN continue with the rest of the statements?
Thanks
You didn't mention what database you're using, but if you're using SQL Server 2005 or greater, you can use try/catch for this. Here's an example.
BEGIN TRY
select 1/0
END TRY
BEGIN CATCH
SELECT 'statement 1 failed' AS Statement,ERROR_MESSAGE() as ErrorMessage,ERROR_SEVERITY() AS Severity;
END CATCH
BEGIN TRY
select 1.0/2
END TRY
BEGIN CATCH
SELECT 'statement 2 failed' AS Statement,ERROR_MESSAGE() as ErrorMessage,ERROR_SEVERITY() AS Severity;
END CATCH
In this case I'm catching the errors and just returning them as a result set, but you could create a temp table/variable at the beginning, insert into that when an error happens, and then select all rows from that table at the end.
EDIT: Here's an example that will throw an error in a trigger:
create table csm (id int)
go
create trigger tr_i_csm on csm for insert as
    declare @d int
    select @d = sum(id) from inserted
    if (@d >= 10)
    begin
        raiserror('error', @d, 0)
    end
go
BEGIN TRY
BEGIN TRAN
insert into csm values (5)
COMMIT
END TRY
BEGIN CATCH
ROLLBACK
SELECT 'statement 1 failed' AS Statement,ERROR_MESSAGE() as ErrorMessage,ERROR_SEVERITY() AS Severity;
END CATCH
BEGIN TRY
BEGIN TRAN
insert into csm values(16)
COMMIT
END TRY
BEGIN CATCH
ROLLBACK
SELECT 'statement 2 failed' AS Statement,ERROR_MESSAGE() as ErrorMessage,ERROR_SEVERITY() AS Severity;
END CATCH
BEGIN TRY
BEGIN TRAN
insert into csm values(2)
COMMIT
END TRY
BEGIN CATCH
ROLLBACK
SELECT 'statement 3 failed' AS Statement,ERROR_MESSAGE() as ErrorMessage,ERROR_SEVERITY() AS Severity;
END CATCH
select * from csm
One option is to include print statements in your batches following each query. You can then look at the output to find failures. (See here for information on how to read this).
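An instrumented batch in that style might look like the following sketch (the table, statement, and messages are illustrative):

```sql
BEGIN TRY
    UPDATE dbo.SomeTable SET Col = 1 WHERE Id = 42;
    PRINT 'statement 1 succeeded';
END TRY
BEGIN CATCH
    PRINT 'statement 1 failed: ' + ERROR_MESSAGE();
END CATCH
```

On the C# side, PRINT output is surfaced through the SqlConnection.InfoMessage event, so the calling program can capture and log these messages as the batch runs.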
In a prior job, we had a number of nightly stored procedures that ran via Sql Agent, and some other non-database jobs written in C# that ran as Windows Scheduled Tasks. We eventually wrote a c# program to call the stored procedures, instead of Sql Agent, so that we could have all of our scheduling (and logging!) in one place (scheduled tasks). We also had support for executing an Sql file via the program. Receiving Print message output was how we handled logging.
Of course, this implies the ability to modify your batch scripts. It also means writing the sql such that a failed statement won't terminate the whole job.

Processing Thousands of SqlCommands using a SqlTransaction Causes Memory Exception

I've written a custom replication function in a standard C# windows forms app with a SQL Server 2008 Express database. It basically pulls down a set of sql statements that need to be executed against a subscriber database. On a complete refresh this can run up to 200k+ statements that need to be executed.
I'm processing these statements inside a code block as shown below:
using (SqlConnection connection = ConnectionManager.GetConnection())
{
    connection.Open();
    SqlTransaction transaction = connection.BeginTransaction();
    // Process 200k+ Insert/Update/Delete statements using SqlCommands
    transaction.Commit();
}
What I'm finding is that my application's memory usage remains fairly stable at around 40 MB for the first 30k statements, after which it suddenly jumps to around 300 MB and then grows until I hit an OutOfMemory exception.
Is the method I'm using even possible - can I process that many statements inside a single transaction? I would assume I should be able to. If there is a better way, I'd love to hear it. I need this to be transactional, otherwise a partial replication would result in a broken database.
Thanks.
EDIT:
After restarting my computer I managed to get a full 200k+ replication to go through. Even though memory usage at one point grew to 1.4 GB, after the replication completed it dropped all the way back to 40 MB. This leads me to conclude that something inside my loop that processes the commands is causing the growth in memory.
Are you disposing your forms and the disposable controls before closing?
Wrap all disposable objects in using statements.
Don't open and close the connection over and over again; instead, send the data to the database in a single transaction.
If your application is still holding too much memory, you need a memory profiler such as Red Gate ANTS Memory Profiler.
can I process that many statements inside a single transaction?
You have the options below to do this...
Bulk insert and operate on the records in a stored proc.
Prepare XML and send the string to the database.
Send a read-only DataTable to SQL Server through a stored proc.
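For the DataTable option, the T-SQL side is a table-valued parameter. A sketch of what that might look like - the type, procedure, table, and column names here are illustrative, not from the question:

```sql
-- A table type matching the DataTable's columns
CREATE TYPE dbo.ReplicationRow AS TABLE (Id int, Payload nvarchar(max));
GO
CREATE PROC dbo.ProcessReplicationRows
    @rows dbo.ReplicationRow READONLY  -- TVPs must be declared READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- One set-based statement instead of 200k+ individual commands
    INSERT INTO dbo.TargetTable (Id, Payload)
    SELECT Id, Payload FROM @rows;
END
```

On the C# side, the DataTable is passed as a SqlParameter with SqlDbType.Structured and TypeName set to the table type (here, "dbo.ReplicationRow"), so the whole batch travels in one round trip.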
Sample Stored Proc
Begin Try
    Set NoCount ON
    Set XACT_Abort ON
    Begin Tran
    -- Your queries
    Commit Tran
End Try
Begin Catch
    Rollback Tran
End Catch
Make sure to dispose of objects once they are no longer in use.
It should be like this
using (SqlConnection connection = new SqlConnection())
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        transaction.Commit();
    }
}
Did you wrap the SqlCommand objects in using blocks as well?
using (SqlCommand cmd = new SqlCommand())
{
}
