T-SQL Equivalent of .NET TransactionScopeOption.Suppress

In my .NET code, inside a database transaction (using TransactionScope), I could include a nested block with TransactionScopeOption.Suppress, which ensures that the commands inside the nested block are committed even if the outer block rolls back.
Following is a code sample:
using (TransactionScope txnScope = new TransactionScope(TransactionScopeOption.Required))
{
    db.ExecuteNonQuery(CommandType.Text, "Insert Into Business(Value) Values('Some Value')");

    using (TransactionScope txnLogging = new TransactionScope(TransactionScopeOption.Suppress))
    {
        db.ExecuteNonQuery(CommandType.Text, "Insert Into Logging(LogMsg) Values('Log Message')");
        txnLogging.Complete();
    }

    // Something goes wrong here. Logging is still committed.
    txnScope.Complete();
}
I was trying to find out whether this can be done in T-SQL. A few people have recommended OPENROWSET, but it doesn't look very 'elegant' to use, and I think it is a bad idea to put connection information in T-SQL code.
I've used SQL Service Broker in the past, but it supports transactional messaging, which means a message is not posted to the queue until the database transaction commits.
My requirement: our application stored procedures are fired by a third-party application, within an implicit transaction initiated outside the stored procedure. I want to be able to catch and log any errors (in a database table in the same database) within my stored procedures, then re-throw the exception so the third-party app rolls back the transaction and knows the operation failed (and can do whatever is required in case of a failure).

You can set up a loopback linked server with the remote proc transaction promotion option set to false and then access it from T-SQL, or use a CLR procedure in SQL Server to create a new connection outside the transaction and do your work there.
Both methods are described in How to create an autonomous transaction in SQL Server 2008.
Both methods involve creating new connections. There is an open Connect item requesting that this functionality be provided natively.
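A minimal sketch of the loopback approach. The server, database, and procedure names here (`LoopbackSrv`, `MyDb`, `LogError`) are placeholders; the key setting is disabling remote proc transaction promotion so the call does not enlist in the caller's transaction:

```sql
-- Create a loopback linked server pointing back at this same instance.
-- 'LoopbackSrv' and the data source are placeholder names.
EXEC sp_addlinkedserver
    @server     = N'LoopbackSrv',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'MYSERVER\MYINSTANCE';

-- Prevent calls through this linked server from being promoted
-- into the caller's distributed transaction.
EXEC sp_serveroption N'LoopbackSrv', N'remote proc transaction promotion', 'false';
EXEC sp_serveroption N'LoopbackSrv', N'rpc out', 'true';

-- Inside a CATCH block, a call like this commits independently,
-- even if the outer transaction later rolls back.
EXEC LoopbackSrv.MyDb.dbo.LogError @ErrMsg = N'Something failed';
```

Because the linked-server call runs on its own connection, the logging insert survives a rollback of the outer transaction, which is exactly the Suppress semantics being asked for.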

Values in a table variable exist beyond a ROLLBACK.
So in the following example, all the rows that were going to be deleted can be inserted into a persisted table and queried later on thanks to a combination of OUTPUT and table variables.
-- First, create our table
CREATE TABLE [dbo].[DateTest]
(
    [Date_Test_Id] INT IDENTITY(1, 1),
    [Test_Date]    DATETIME2(3)
);

-- Populate it with 15,000,000 rows
-- from 1st Jan 1900 to 1st Jan 2017.
INSERT INTO [dbo].[DateTest] ([Test_Date])
SELECT TOP (15000000)
       DATEADD(DAY, 0, ABS(CHECKSUM(NEWID())) % 42734)
FROM [sys].[messages] AS [m1]
CROSS JOIN [sys].[messages] AS [m2];

BEGIN TRAN;
BEGIN TRY
    DECLARE @logger TABLE ([Date_Test_Id] INT, [Test_Date] DATETIME);

    -- Delete every 1000th row
    DELETE FROM [dbo].[DateTest]
    OUTPUT deleted.Date_Test_Id, deleted.Test_Date INTO @logger
    WHERE [Date_Test_Id] % 1000 = 0;

    -- Make it fail
    SELECT 1 / 0;

    -- So this will never happen
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRAN;
    -- The table variable survives the rollback,
    -- so its contents can be persisted afterwards.
    SELECT * INTO dbo.logger FROM @logger;
END CATCH;

SELECT * FROM dbo.logger;
DROP TABLE dbo.logger;

Related

Set of sql commands executed in single call considered as under transaction

If I execute a set of SQL commands in a single SqlCommand.ExecuteReader() call, can I consider them to run under a transaction by default, or should I start a transaction explicitly?
If not, would the .NET pattern using (var scope = new TransactionScope()) { ... } be enough to make the batch transactional?
The first part of the command batch declares a table variable based on values from the table being updated in the second part:
DECLARE @tempTable TABLE (ID int, ...)
INSERT INTO @tempTable (ID, ...)
SELECT [ID], ... FROM [Table] WHERE ...
The second part is a set of items which should be updated or inserted:
IF EXISTS (SELECT 1 FROM @tempTable WHERE ID = @id{0})
BEGIN
    UPDATE [Table]
    SET ... WHERE ...
END
ELSE
BEGIN
    INSERT INTO [Table] ([ID], ...) VALUES (@id{0}, ...)
END
After they're concatenated, I execute them in a single SqlCommand.ExecuteReader() call.
The reason for this batching is that I'd like to minimize recurring transfers of parameters between SQL Server and my app, since they're constants within the scope of the batch.
The reason for asking is to avoid redundant code and to understand whether the original data in [Table] (the source of @tempTable) may change before the last update/insert fragment of the batch finishes.
Executing a SqlCommand won't begin any transaction.
You have two choices:
Use BeginTransaction on the connection and pass the transaction object to the SqlCommand.
Use TransactionScope, which can escalate to DTC and allows you to manage transactions across multiple databases.
I would use the second.
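A minimal sketch of the first option, assuming the batch text and connection string from your own application (the names here are placeholders):

```csharp
using System.Data.SqlClient;

// Placeholder connection string and batch text.
var connectionString = "Server=.;Database=MyDb;Integrated Security=true";
string batchSql = "/* concatenated declare/insert/update batch from above */";

using (var con = new SqlConnection(connectionString))
{
    con.Open();
    using (SqlTransaction tran = con.BeginTransaction())
    {
        try
        {
            using (var cmd = new SqlCommand(batchSql, con, tran))
            {
                cmd.ExecuteNonQuery();
            }
            tran.Commit();   // both parts of the batch commit together
        }
        catch
        {
            tran.Rollback(); // any failure undoes the whole batch
            throw;
        }
    }
}
```

With the transaction held open for the whole batch, other sessions cannot change the rows you captured into @tempTable (subject to the isolation level) before the final update/insert completes.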

Pitfalls of using SQL transactions vs .NET transactions

I've been playing around with using transactions in SQL Server and in C#. Consider a stored procedure which inserts a row into a three-column table:
alter proc spInsertItem
    @itemId int
    ,@itemDescription varchar(50)
    ,@itemCost decimal
as
begin
    if (@itemCost < 0)
    begin
        raiserror('cost cannot be less than 0', 16, 1)
    end
    else
    begin
        begin try
            begin tran
            insert into Items (ItemId, [Description], ItemCost)
            values (@itemId, @itemDescription, @itemCost)
            commit tran
        end try
        begin catch
            rollback tran
            select ERROR_LINE() as errorLine
                ,ERROR_MESSAGE() as errorMessage
                ,ERROR_STATE() as errorState
                ,ERROR_PROCEDURE() as errorProcedure
                ,ERROR_NUMBER() as errorNumber
        end catch
    end
end
vs
create proc spInsertItem2
    @itemId int
    ,@itemDescription varchar(50)
    ,@itemCost decimal
as
begin
    insert into Items (ItemId, [Description], ItemCost)
    values (@itemId, @itemDescription, @itemCost)
end
In the first example the user is notified that they are unable to enter an item cost less than 0, and the rest is pretty self-explanatory. This got me thinking: if you're going to disallow certain values, you should use a check constraint, so I added the following constraint:
alter table items
add constraint chkItemCost
check (ItemCost > 0)
Now the two stored procedures function the same from the calling code, and the SQL in the second, shorter version is much shorter and, in my opinion, easier to read. Granted, this is a very rudimentary example, but it seems to me that if you see the try/catch in the code that calls the stored procedure, you can be sure the database is not being put in an inconsistent state. So, what am I missing? Why shouldn't I rely on C# to create transactions?
This is usually a design decision: where is the application logic housed? If you decide to concentrate your business logic in the application code, then for each atomic piece of application logic that involves multiple trips to the database, you need to wrap that logic in a transaction in C#.
Whereas, if you house business logic in the database with the help of stored procedures, you do not need transactions in C#.
A common scenario is:
You create one or more records in the database.
You do some post processing on this data in C#.
You update another table with the data you just processed.
The requirement is that if step 2 or 3 fails, step 1 (created records) should be rolled back. For this you need transactions. You may argue that you can put all the three steps in an SP and wrap it with a transaction; it should be possible and is generally a matter of preference of where you put your application logic.
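A sketch of that three-step scenario with TransactionScope. The table names, values, and the post-processing step are hypothetical, used only to illustrate the shape:

```csharp
using System.Data.SqlClient;
using System.Transactions;

// Placeholder connection string.
var connectionString = "Server=.;Database=MyDb;Integrated Security=true";

using (var scope = new TransactionScope())
using (var con = new SqlConnection(connectionString))
{
    con.Open(); // the connection enlists in the ambient transaction

    // Step 1: create records.
    new SqlCommand("INSERT INTO Orders (Total) VALUES (100)", con).ExecuteNonQuery();

    // Step 2: post-processing in C# (hypothetical calculation).
    decimal discounted = 100m * 0.9m;

    // Step 3: update another table. If this throws, step 1 rolls back too.
    using (var cmd = new SqlCommand("INSERT INTO Invoices (Amount) VALUES (@amount)", con))
    {
        cmd.Parameters.AddWithValue("@amount", discounted);
        cmd.ExecuteNonQuery();
    }

    scope.Complete(); // nothing commits until this point
}
```

If an exception escapes before scope.Complete(), the scope is disposed without completing and every enlisted operation, including step 1, is rolled back.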

transaction question in SQL Server 2008

I am using SQL Server 2008 Enterprise. And using ADO.Net + C# + .Net 3.5 + ASP.Net as client to access database. When I access SQL Server 2008 tables, I always invoke stored procedure from my C# + ADO.Net code.
My question is: if I have no transaction control (I mean begin/end transaction) in my client C# + ADO.NET code, and also none in the stored procedure code, will each single Insert/Delete/Update/Select statement act as a single transaction? For example, in the following stored procedure, will the delete/insert/select act as three separate transactions?
create PROCEDURE [dbo].[FooProc]
(
    @Param1 int
    ,@Param2 int
    ,@Param3 int
)
AS
DELETE FooTable WHERE Param1 = @Param1

INSERT INTO FooTable
(
    Param1
    ,Param2
    ,Param3
)
VALUES
(
    @Param1
    ,@Param2
    ,@Param3
)

DECLARE @ID bigint
SET @ID = ISNULL(@@Identity, -1)
IF @ID > 0
BEGIN
    SELECT IdentityStr FROM FooTable WHERE ID = @ID
END
Then my question is, each single Insert/Delete/Update/Select statement will act as a single transaction?
Yes, without explicit transaction control, each SQL statement will be wrapped in its own transaction. The single statement is guaranteed to be executed as a whole or fail as a whole.
The single statements will run under the current transaction isolation level, normally READ COMMITTED. So a statement won't read uncommitted changes from other transactions, but it may suffer from non-repeatable reads or phantom reads.
If you don't handle transactions then each statement will be independent and might interfere with other users running the stored procedure at the same time.
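If the delete and insert need to succeed or fail together, one option is to wrap the procedure body in an explicit transaction with TRY/CATCH. This is a sketch, not part of the original procedure:

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    DELETE FooTable WHERE Param1 = @Param1;

    INSERT INTO FooTable (Param1, Param2, Param3)
    VALUES (@Param1, @Param2, @Param3);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    -- Re-raise so the caller sees the failure.
    -- THROW requires SQL Server 2012+; on 2008 use RAISERROR instead.
    THROW;
END CATCH;
```

With this shape, a failure in the insert also undoes the delete, instead of leaving the row gone with nothing in its place.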

Mixing System.Transactions with SqlTransactions

I've been brushing up on my knowledge this evening, trying to overcome 4 years of bad programming practices because of the company I was working for. One of the things I've recently stumbled on was System.Transactions. After reading about them for the last few hours, I think I have an adequate understanding of how they work and why you would want to use them. However, all the examples I've looked at are showing inline T-SQL being called from within the transaction.
I pretty much use Stored Procedures exclusively when doing database access and the existing stored procedures are all wrapped in their own SqlTransactions. You know, using 'Begin Tran' and then rolling back or committing. If a Stored Proc calls another stored proc, it too creates a transaction and the Commits bubble up until the outer one either commits or rolls back. Works great.
So now my question is, if I wanted to start using System.Transactions in my code - for the simple purposes of monitoring successive database tasks that can't be nested inside a single Stored Procedure - how does that work with the existing SqlTransactions I already have in my stored procs?
Will using System.Transactions in my code just add one more layer of protection before it is actually committed, or because I'm explicitly committing in my SqlTransaction - will the data be persisted regardless of committing or rolling back in code based transaction?
No, System.Transactions and Sql transactions do not mix.
And I quote, "Do Not Mix Them" from the following MSDN article: https://msdn.microsoft.com/en-us/library/ms973865.aspx.
Sql transactions do not participate on the outer System.Transaction the way you want them to. Sql transactions that fail or rollback will not cause other activities within the System.Transaction to rollback.
This example shows the phenomenon:
using (var tx = new TransactionScope())
{
    using (var con = new SqlConnection($"{connectionstring}"))
    {
        con.Open();

        using (var com = new SqlCommand($"set xact_abort on; begin transaction; INSERT INTO dbo.KeyValueTable VALUES ('value1', '{Guid.NewGuid()}'); rollback;", con))
        {
            // This transaction failed, but it doesn't roll back the entire System.Transaction!
            com.ExecuteNonQuery();
        }

        using (var com = new SqlCommand($"set xact_abort on; begin transaction; INSERT INTO dbo.KeyValueTable VALUES ('value2', '{Guid.NewGuid()}'); commit;", con))
        {
            // This transaction will actually persist!
            com.ExecuteNonQuery();
        }
    }
    tx.Complete();
}
After running this example on an empty data store you should notice that the records from the second Sql operation are indeed committed, when the structure of the C# code would imply that they shouldn't be.
Put simply, you should not mix them. If you are orchestrating multiple Sql transactions within an application you should just use System.Transactions. Unfortunately that would mean removing your transaction code from all of your stored procedures, but alas, it is necessary as with a mixed model you cannot guarantee the integrity of your data.
Works just fine: if your inner transactions within the stored procs are committed, everything will commit. If one of them rolls back, then everything within the outer transaction will roll back. Pure magic. :)

Bulk Copying and Deleting in One Transaction

In a C# application I'd like to copy table data from one server (SQL Server 2000) to another (SQL Server 2005), then delete the existing data in the SQL Server 2000 table. I need to do all of this (bulk copying and deleting) in a single transaction. How can I achieve this?
Note: I have two different SQL Server connections, so how do I achieve this with a single transaction?
To minimise the duration of the transaction, I always do this by bulk-copying to a staging table (same schema, but different name - no indexes etc), and then once all the data is at the server, do something like:
BEGIN TRAN

DELETE FROM FOO

INSERT FOO ...
SELECT ...
FROM FOO_STAGING

COMMIT TRAN

DELETE FROM FOO_STAGING
(the transaction could be either in the TSQL or on the connection via managed code, or via TransactionScope; the TSQL could be either as command-text, or as a SPROC)
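A sketch of the staging-table approach from managed code. The connection strings and table names are placeholders; SqlBulkCopy performs the copy, and a TransactionScope spanning two open connections escalates to a distributed transaction (requires MSDTC on both servers):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Transactions;

// Placeholder connection strings.
var sourceCs = "Server=Sql2000Server;Database=MyDb;Integrated Security=true";
var destCs   = "Server=Sql2005Server;Database=MyDb;Integrated Security=true";

using (var scope = new TransactionScope())
{
    using (var src = new SqlConnection(sourceCs))
    using (var dst = new SqlConnection(destCs))
    {
        src.Open();
        dst.Open(); // the second open connection escalates the scope to MSDTC

        // Read the source rows into memory.
        var table = new DataTable();
        using (var da = new SqlDataAdapter("SELECT * FROM dbo.MyTable", src))
        {
            da.Fill(table);
        }

        // Bulk-copy into the staging table on the destination.
        using (var bulk = new SqlBulkCopy(dst))
        {
            bulk.DestinationTableName = "dbo.FOO_STAGING";
            bulk.WriteToServer(table);
        }

        // Delete the rows from the source table.
        new SqlCommand("DELETE FROM dbo.MyTable", src).ExecuteNonQuery();
    }
    scope.Complete(); // both servers commit, or neither does
}
```

The merge from FOO_STAGING into FOO would then run as in the T-SQL above, keeping the long-running bulk copy outside the short destination transaction.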
You can do this using linked servers, although I've never tried doing this between a SQL2005 and a SQL2000 instance. On your SQL2005 instance:
sp_addlinkedserver Sql2000Server --Only need to do this once on the server
BEGIN TRAN
INSERT INTO dbo.MyTable (id, column)
SELECT id, column FROM Sql2000Server.MyDatabase.dbo.MyTable
DELETE FROM Sql2000Server.MyDatabase.dbo.MyTable
--etc...
COMMIT TRAN
See Microsoft books online for the syntax to add / remove linked servers (http://msdn.microsoft.com/en-us/library/ms190479.aspx)
Further to the linked server suggestion, you can use SSIS as well, which would be my preferred method.
