I have a simple SP that will either do an INSERT or an UPDATE depending on the existence or non-existence of data in a table.
CREATE PROCEDURE [dbo].spUpsert
    -- Parameters to Update / Insert a StudentSet
    @StudentSetId nvarchar(128),
    @Status_Id int
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION
        SET XACT_ABORT ON;
        SET NOCOUNT ON;
        IF EXISTS(SELECT StudentSetId FROM StudentSet WHERE StudentSetId = @StudentSetId)
        BEGIN
            UPDATE StudentSet SET ModifiedDate = GETDATE(), Status_Id = @Status_Id
            WHERE StudentSetId = @StudentSetId;
        END
        ELSE
        BEGIN
            INSERT INTO StudentSet
            (StudentSetId, Status_Id)
            VALUES
            (
                @StudentSetId,
                @Status_Id
            )
        END
        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION
    END CATCH
END
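As an aside, the EXISTS-then-INSERT pattern itself has a race window under concurrent callers: two sessions can both pass the EXISTS check and both attempt the INSERT. A common mitigation (sketched here under the assumption of the same table and parameters as above, not as a fix for the error in this question) is to take an update/range lock on the existence check:

```sql
-- Sketch: serialize concurrent upserts for the same key by locking the
-- existence check, so a second caller blocks until the first commits.
IF EXISTS (SELECT 1 FROM StudentSet WITH (UPDLOCK, HOLDLOCK)
           WHERE StudentSetId = @StudentSetId)
BEGIN
    UPDATE StudentSet
    SET ModifiedDate = GETDATE(), Status_Id = @Status_Id
    WHERE StudentSetId = @StudentSetId;
END
ELSE
BEGIN
    INSERT INTO StudentSet (StudentSetId, Status_Id)
    VALUES (@StudentSetId, @Status_Id);
END
```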
I wrote a method like so:
public void Upsert(string studentSetId, int statusId)
{
    this.DatabaseJobs.ExecuteSqlCommand(@"exec spUpsert
        @StudentSetId = {0},
        @Status_Id = {1} ",
        studentSetId,
        statusId);
}
Here's how this is used:
A student has a file, an xml to be precise, that is sent to a processor which calls this SP as part of the process. Multiple files can be uploaded and the processor is designed to work with 5 files spawning 5 threads.
For a batch of 5 files it throws this error:
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 1, current count = 0.
The number 5 is not exact; the error can also occur when more than 5 files are uploaded. With fewer than that I haven't tried.
So I searched and found a solution that implements the usage of @@TRANCOUNT, detailed here & here.
@@TRANCOUNT is a global variable, but its usage as suggested in the articles seems to treat it as local to the session. What I mean is that any process in SQL Server can increase @@TRANCOUNT, and relying on it may not produce the expected result.
My question is what's a good way to handle this type of situation?
Thanks in advance.
First, @@TRANCOUNT is informational - it tells you how many nested transactions are currently in progress in the current session. In your case, a transaction is already in progress when the stored procedure is called, hence the transaction count is 1.
Your problem is that ROLLBACK rolls back all transactions, including any nested transactions. If you wish to abort the whole batch, this is exactly what you want, and the error is simply telling you that it has happened.
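To make the counting concrete, here is a minimal illustration of how @@TRANCOUNT behaves with nested transactions (run it in a scratch session):

```sql
BEGIN TRAN;
SELECT @@TRANCOUNT;   -- 1
BEGIN TRAN;           -- "nested" transaction: just increments the count
SELECT @@TRANCOUNT;   -- 2
COMMIT;               -- inner COMMIT only decrements the count
SELECT @@TRANCOUNT;   -- 1
ROLLBACK;             -- ROLLBACK undoes ALL work and drops the count to 0
SELECT @@TRANCOUNT;   -- 0
```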
However if you only want to roll back the transaction you created locally, you must do something slightly different. You have to save the transaction right at the start, then on error you can roll back to that point (before any work was done), and then commit it (with no work done).
BEGIN TRAN
-- The savepoint name must be a character type of at most 32 characters;
-- a GUID with the dashes removed is exactly 32.
DECLARE @savepoint varchar(32) = REPLACE(CONVERT(varchar(36), NEWID()), '-', '');
SAVE TRAN @savepoint
BEGIN TRY
    -- Do some stuff here
    SELECT 1/0; -- divide by zero error
    COMMIT TRAN
END TRY
BEGIN CATCH
    ROLLBACK TRAN @savepoint;
    COMMIT TRAN -- important!!!
    -- re-raise the error if you want (or recover in some other way)
    RAISERROR('Rethrowing error', ERROR_SEVERITY(), ERROR_STATE());
END CATCH
Well, if the transaction was started in .NET code, it would be best if it were rolled back in the same code. However, if that's not possible, then you SHOULD check @@TRANCOUNT.
However, you are missing one important thing: what if the transaction wasn't started at all? Your code is constructed in such a way that it requires a transaction. What if you (or someone else) executes the procedure from SSMS?
I suggest you do the following:
at the beginning of your code, store @@TRANCOUNT locally (declare @mytrancount)
before you start your processing, check @mytrancount, and if there is no transaction, start one
commit the transaction at the end, but feel free to check @mytrancount again before the commit
EDIT
Of course, as Ben stated in his answer, you can save the transaction instead of beginning it in the code. E.g., if there is a transaction, save it in order to be able to roll back only the part from SAVE to ROLLBACK. And if there is no transaction, start it in your procedure.
Remus Rusanu has a good template for that.
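For reference, that template looks roughly like this (a sketch from memory of Remus Rusanu's pattern; the procedure name is a placeholder, and the savepoint reuses it):

```sql
CREATE PROCEDURE usp_my_procedure_name
AS
BEGIN
    SET NOCOUNT ON;
    -- Capture the caller's transaction count up front.
    DECLARE @trancount int = @@TRANCOUNT;
    BEGIN TRY
        IF @trancount = 0
            BEGIN TRANSACTION;          -- no outer transaction: start our own
        ELSE
            SAVE TRANSACTION usp_my_procedure_name;  -- outer transaction: just set a savepoint

        -- Do the actual work here

        IF @trancount = 0
            COMMIT;                     -- only commit what we started
    END TRY
    BEGIN CATCH
        DECLARE @error int = ERROR_NUMBER(),
                @message nvarchar(4000) = ERROR_MESSAGE(),
                @xstate int = XACT_STATE();
        IF @xstate = -1
            ROLLBACK;                   -- doomed transaction: must roll back fully
        IF @xstate = 1 AND @trancount = 0
            ROLLBACK;                   -- our own transaction: roll it back
        IF @xstate = 1 AND @trancount > 0
            ROLLBACK TRANSACTION usp_my_procedure_name;  -- only undo our part
        RAISERROR('usp_my_procedure_name: %d: %s', 16, 1, @error, @message);
    END CATCH
END
```

Either way the procedure never commits or rolls back a transaction it did not start, which is exactly what the "mismatching number of BEGIN and COMMIT statements" error is complaining about.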
Related
In my .NET code, inside a database transaction (using TransactionScope), I could include a nested block with TransactionScopeOption.Suppress, which ensures that the commands inside the nested block are committed even if the outer block rolls back.
Following is a code sample:
using (TransactionScope txnScope = new TransactionScope(TransactionScopeOption.Required))
{
db.ExecuteNonQuery(CommandType.Text, "Insert Into Business(Value) Values('Some Value')");
using (TransactionScope txnLogging = new TransactionScope(TransactionScopeOption.Suppress))
{
db.ExecuteNonQuery(CommandType.Text, "Insert Into Logging(LogMsg) Values('Log Message')");
txnLogging.Complete();
}
// Something goes wrong here. Logging is still committed
txnScope.Complete();
}
I was trying to find if this could be done in T-SQL. A few people have recommended OPENROWSET, but it doesn't look very 'elegant' to use. Besides, I think it is a bad idea to put connection information in T-SQL code.
I've used SQL Service Broker in the past, but it uses transactional messaging, which means a message is not posted to the queue until the database transaction is committed.
My requirement: Our application stored procedures are being fired by some third party application, within an implicit transaction initiated outside stored procedure. And I want to be able to catch and log any errors (in a database table in the same database) within my stored procedures. I need to re-throw the exception to let the third party app rollback the transaction, and for it to know that the operation has failed (and thus do whatever is required in case of a failure).
You can set up a loopback linked server with the "remote proc transaction promotion" option set to false and then access it in T-SQL, or use a CLR procedure in SQL Server to create a new connection outside the transaction and do your work.
Both methods suggested in How to create an autonomous transaction in SQL Server 2008.
Both methods involve creating new connections. There is an open connect item requesting this functionality be provided natively.
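The loopback linked server can be set up roughly like this (a sketch; the linked server name is arbitrary, and the OLE DB provider name may differ by SQL Server version):

```sql
-- Create a linked server that points back at this same instance.
EXEC sp_addlinkedserver
    @server = N'loopback',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = @@SERVERNAME;

-- Prevent calls through this server from being promoted into the
-- caller's distributed transaction, so they commit independently.
EXEC sp_serveroption N'loopback', N'remote proc transaction promotion', N'false';
EXEC sp_serveroption N'loopback', N'rpc out', N'true';

-- A logging call through the loopback now runs in its own transaction,
-- e.g. (hypothetical procedure name):
-- EXEC loopback.MyDatabase.dbo.usp_LogError @msg = N'something failed';
```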
Values in a table variable exist beyond a ROLLBACK.
So in the following example, all the rows that were going to be deleted can be inserted into a persisted table and queried later on thanks to a combination of OUTPUT and table variables.
-- First, create our table
CREATE TABLE [dbo].[DateTest] ([Date_Test_Id] INT IDENTITY(1, 1), [Test_Date] datetime2(3));
-- Populate it with 15,000,000 rows
-- from 1st Jan 1900 to 1st Jan 2017.
INSERT INTO [dbo].[DateTest] ([Test_Date])
SELECT
TOP (15000000)
DATEADD(DAY, 0, ABS(CHECKSUM(NEWID())) % 42734)
FROM [sys].[messages] AS [m1]
CROSS JOIN [sys].[messages] AS [m2];
BEGIN TRAN;
BEGIN TRY
DECLARE @logger TABLE ([Date_Test_Id] INT, [Test_Date] DATETIME);
-- Delete every 1000th row
DELETE FROM [dbo].[DateTest]
OUTPUT deleted.Date_Test_Id, deleted.Test_Date INTO @logger
WHERE [Date_Test_Id] % 1000 = 0;
-- Make it fail
SELECT 1/0
-- So this will never happen
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRAN
SELECT * INTO dbo.logger FROM #logger;
END CATCH;
SELECT * FROM dbo.logger;
DROP TABLE dbo.logger;
I’m executing batch SQL commands in C# using SqlConnection and SqlCommand. I need to know which statement fails, and I can’t execute them one at a time because of performance issues. Is there any way in C# to execute a batch of SQL statements and, in case of failure, learn which statement failed (the index, an id, or anything that identifies it) and THEN continue with the rest of the statements?
Thanks
You didn't mention what database you're using, but if you're using SQL Server 2005 or greater, you can use try/catch for this. Here's an example.
BEGIN TRY
select 1/0
END TRY
BEGIN CATCH
SELECT 'statement 1 failed' AS Statement,ERROR_MESSAGE() as ErrorMessage,ERROR_SEVERITY() AS Severity;
END CATCH
BEGIN TRY
select 1.0/2
END TRY
BEGIN CATCH
SELECT 'statement 2 failed' AS Statement,ERROR_MESSAGE() as ErrorMessage,ERROR_SEVERITY() AS Severity;
END CATCH
In this case I'm catching the errors and just returning them as a result set, but you could create a temp table/variable at the beginning, insert into that when an error happens, and then select all rows from that table at the end.
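That collect-into-a-table approach might look like this (a sketch reusing the same example statements):

```sql
-- Accumulate one row per failed statement instead of one result set each.
DECLARE @errors TABLE (StatementNo int, ErrorMessage nvarchar(4000), Severity int);

BEGIN TRY
    SELECT 1/0;           -- statement 1: fails
END TRY
BEGIN CATCH
    INSERT INTO @errors VALUES (1, ERROR_MESSAGE(), ERROR_SEVERITY());
END CATCH

BEGIN TRY
    SELECT 1.0/2;         -- statement 2: succeeds, nothing is logged
END TRY
BEGIN CATCH
    INSERT INTO @errors VALUES (2, ERROR_MESSAGE(), ERROR_SEVERITY());
END CATCH

-- A single result set at the end lists every failed statement.
SELECT * FROM @errors;
```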
EDIT: Here's an example that will throw an error in a trigger:
create table csm (id int)
go
create trigger tr_i_csm on csm for insert as
declare @d int
select @d = sum(id) from inserted
if (@d >= 10)
begin
    raiserror('error', @d, 0)
end
go
BEGIN TRY
BEGIN TRAN
insert into csm values (5)
COMMIT
END TRY
BEGIN CATCH
ROLLBACK
SELECT 'statement 1 failed' AS Statement,ERROR_MESSAGE() as ErrorMessage,ERROR_SEVERITY() AS Severity;
END CATCH
BEGIN TRY
BEGIN TRAN
insert into csm values(16)
COMMIT
END TRY
BEGIN CATCH
ROLLBACK
SELECT 'statement 2 failed' AS Statement,ERROR_MESSAGE() as ErrorMessage,ERROR_SEVERITY() AS Severity;
END CATCH
BEGIN TRY
BEGIN TRAN
insert into csm values(2)
COMMIT
END TRY
BEGIN CATCH
ROLLBACK
SELECT 'statement 3 failed' AS Statement,ERROR_MESSAGE() as ErrorMessage,ERROR_SEVERITY() AS Severity;
END CATCH
select * from csm
One option is to include print statements in your batches following each query. You can then look at the output to find failures. (See here for information on how to read this).
In a prior job, we had a number of nightly stored procedures that ran via SQL Agent, and some other non-database jobs written in C# that ran as Windows Scheduled Tasks. We eventually wrote a C# program to call the stored procedures instead of SQL Agent, so that we could have all of our scheduling (and logging!) in one place (scheduled tasks). We also had support for executing a SQL file via the program. Receiving PRINT message output was how we handled logging.
Of course, this implies the ability to modify your batch scripts. It also means writing the SQL such that a failed statement won't terminate the whole job.
I have a coworker working on an application who's run into a problem. He fires off a stored procedure using SqlCommand.ExecuteNonQuery. This stored procedure, in the same table, updates one row and inserts another. Meanwhile his application goes on and reads from the table. A race condition occurs where the read happens in between the update and the insert.
The data in question is records of access levels. When an access level changes it terminates (updates) the old access level and then instantiates (inserts) the new access level. Not infrequently the read will get in between the update and insert and find only terminated access levels--a bit of a problem.
What's the best solution to my coworker's problem?
I got a hold of the stored procedure he's trying to fix:
BEGIN
SELECT OBJECT_ACCESS_ID, PERSON_AUTH_LEVEL
INTO lAccessID, lExistingAccessLevel
FROM SHPAZ.SH_PAZ_OBJECT_ACCESS
WHERE
USER_ID = pUserID
AND (GRGR_ID = pGroupID OR (GRGR_ID IS NULL AND pGroupID IS NULL))
AND SYSDATE BETWEEN OBJECT_ACCESS_EFF_DATE AND OBJECT_ACCESS_END_DATE
FOR UPDATE;
-- If the new access level is the same as the existing, then do nothing.
IF lExistingAccessLevel = pLevel THEN
RETURN;
END IF;
-- Terminate the existing record.
UPDATE SHPAZ.SH_PAZ_OBJECT_ACCESS
SET OBJECT_ACCESS_END_DATE = SYSDATE
WHERE OBJECT_ACCESS_ID = lAccessID;
-- Create the new record.
SELECT CASE WHEN pGroupID IS NULL THEN 'Broker' ELSE 'Employer' END
INTO lSource
FROM DUAL;
INSERT INTO SHPAZ.SH_PAZ_OBJECT_ACCESS (USER_ID, GRGR_ID, SOURCE, PERSON_AUTH_LEVEL, OBJECT_ACCESS_EFF_DATE, OBJECT_ACCESS_END_DATE)
VALUES (pUserID, pGroupID, lSource, pLevel, SYSDATE, TO_DATE('12/31/2199', 'MM/DD/YYYY'));
COMMIT;
EXCEPTION
-- If there is no record, then just create a new one.
WHEN NO_DATA_FOUND THEN
SELECT CASE WHEN pGroupID IS NULL THEN 'Broker' ELSE 'Employer' END
INTO lSource
FROM DUAL;
INSERT INTO SHPAZ.SH_PAZ_OBJECT_ACCESS (USER_ID, GRGR_ID, SOURCE, PERSON_AUTH_LEVEL, OBJECT_ACCESS_EFF_DATE, OBJECT_ACCESS_END_DATE)
VALUES (pUserID, pGroupID, lSource, pLevel, SYSDATE, TO_DATE('12/31/2199', 'MM/DD/YYYY'));
END SHSP_SET_USER_ACCESS;
The solution is to remove the COMMIT from inside your procedure and issue it after the procedure returns. Let's say you create your procedure with the name my_procedure:
SQL> exec my_procedure(my_in_arg, my_out_arg);
SQL> commit;
There should be no race at all when atomic functional operations are wrapped inside a transaction.
I've been playing around with using transaction in SQL server and in C#. Consider a store procedure which inserts a row into a three column table
alter proc spInsertItem
    @itemId int
    ,@itemDescription varchar(50)
    ,@itemCost decimal
as
begin
    if(@itemCost < 0)
    begin
        raiserror('cost cannot be less than 0',16,1)
    end
    else
    begin
        begin try
            begin tran
            insert into Items(ItemId, [Description], ItemCost)
            values (@itemId, @itemDescription, @itemCost)
            commit tran
        end try
        begin catch
            rollback tran
            select ERROR_LINE() as errorLine
                ,ERROR_MESSAGE() as errorMessage
                ,ERROR_STATE() as errorState
                ,ERROR_PROCEDURE() as errorProcedure
                ,ERROR_NUMBER() as errorNumber
        end catch
    end
end
vs
create proc spInsertItem2
    @itemId int
    ,@itemDescription varchar(50)
    ,@itemCost decimal
as
begin
    insert into Items(ItemId, [Description], ItemCost)
    values (@itemId, @itemDescription, @itemCost)
end
In the first example the user is notified that they cannot enter an item cost less than 0, and the rest is pretty self-explanatory. This got me thinking: if you want to disallow certain values, you should use a check constraint, so I added the following constraint.
alter table items
add constraint chkItemCost
check (ItemCost > 0)
Now the two stored procedures function the same in code, and the SQL is much shorter and, in my opinion, easier to read in the second, shorter version. Granted, this is a very rudimentary example, but it seems that if you wrap the stored procedure call in a try/catch in code, you can be sure the database is not put in an inconsistent state. So, what am I missing? Why shouldn't I rely on C# to create transactions?
This is usually a design decision: where is the application's logic housed? If you decide to concentrate your business logic in the application code, then for each atomic piece of application logic that involves multiple trips to the database, you need to wrap that logic in a transaction in C#.
Whereas, if you house the business logic in the database with the help of stored procedures, you do not need transactions in C#.
A common scenario is:
You create one or more records in the database.
You do some post processing on this data in C#.
You update another table with the data you just processed.
The requirement is that if step 2 or 3 fails, step 1 (the created records) should be rolled back. For this you need transactions. You may argue that you could put all three steps in an SP and wrap them in a transaction; that is possible, and it is generally a matter of preference as to where you put your application logic.
I'm trying to debug an application error only by reviewing SQL profiler. (I don't have access to the code).
The error in the application says
Now, I ran SQL profiler when executing the frontend command and tried to identify the command which wasn't commited. I can't post the exact queries, but I'll post their main parts.
In the profiler, ROLLBACK was called after the following UPDATE statement (from what I know, only INSERT/UPDATE/DELETE statements are rolled back).
UPDATE transactions
SET refference = N':ICBilling 8/9/2013'
,type1 = 1
,notes = N'Billing'
,dateModified = convert(DATETIME, N'08/09/2013 10:33:13AM', 101)
,moduleModifiedBy = N'PM'
WHERE transaction_id = 1100001368
I scrolled up in the profiler to see what happened before and observed that this record was inserted into the transactions table.
INSERT INTO transactions (
transaction_id,
uRefference,
Type1,
Notes,
ModuleModifiedBy,
)
VALUES (
1100001368
, NULL
, 0
, NULL
, convert(DATETIME, N'08/09/2013 10:33:13AM', 101)
)
Right before this insert there was a BEGIN TRANSACTION command. I assumed that because the INSERT operation was not committed into the transactions table, the UPDATE statement would cause an error, because it couldn't find the record to update.
(I tested this idea in SQL Server with these queries: I created a temp table and ran the part right after BEGIN TRANSACTION in one go.)
CREATE TABLE #tempx (i INT);
INSERT INTO #tempx VALUES (1),(2),(3);
BEGIN TRANSACTION
INSERT INTO #tempx VALUES (4);
UPDATE #tempx
SET i = 5
WHERE i = 4
The queries ran and the UPDATE executed without a problem, so I have more or less eliminated this possibility. I still had my doubts about the difference between running queries directly from SQL Server Management Studio and calling them from an application, and after further googling I found (here) that a few parameters have to be set in order to have implicit transactions.
So, I went back to my profiler and right at the top this is what I found
SET QUOTED_IDENTIFIER ON
SET ARITHABORT OFF
SET NUMERIC_ROUNDABORT OFF
SET ANSI_WARNINGS ON
SET ANSI_PADDING ON
SET ANSI_NULLS ON
SET CONCAT_NULL_YIELDS_NULL ON
SET CURSOR_CLOSE_ON_COMMIT OFF
SET IMPLICIT_TRANSACTIONS OFF
SET LANGUAGE us_english
SET DATEFORMAT mdy
SET DATEFIRST 7
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
I haven't tried setting IMPLICIT_TRANSACTIONS to ON yet because this is a production DB and I'm not entirely sure there will be no problems.
So, I would like to ask the help of people with more experience to help me find, or at least drill down to, the place where the problem might be (in the code or in the SQL Server settings).
UPDATE:
I managed to get access to the part of the code that throws the error
If Not myExceptionOccured And Not SessionKey Is Nothing Then
If SessionKey.Connection.IsInTranState Then
Dim Message As String = String.Format("Could not complete Page_Unload. There are {0} uncommitted transactions. All changes were rolled back.", SessionKey.Connection.TransactionCount)
so it is clear at this moment that the session is not being closed and the transaction is not being committed by SQL Server.
I've tried modifying other transactions and that worked. But apparently it has a problem only with this one transaction. I'll try comparing the profiler traces for these transactions; maybe I can find a difference.
In the meantime, if anyone thinks I'm going in the wrong direction, please leave a comment.
UPDATE #2:
Like I've said, I profiled another transaction which worked and looked at the query after which rollback is called.
The query, in both cases (when it works and when it rolls back), is this (the same as at the beginning of my question):
UPDATE transactions
SET refference = N':Billing 8/16/2013'
,type1 = 1
,notes = N'Billing'
,dateModified = convert(DATETIME, N'08/16/2013 12:47:25PM', 101)
,moduleModifiedBy = N'PM'
WHERE tranId = --transaction id here--
The only difference between them is the transaction ID that they update and the times (there's a GETDATE() function which gets the current time).
And that is just it. One transaction is committed, the other is not.
Any other suggestions?