Is a single call to ExecuteNonQuery() atomic, or does it make sense to use transactions if there are multiple SQL statements in a single DbCommand?
See my example for clarification:
using (var ts = new TransactionScope())
{
    using (DbCommand lCmd = pConnection.CreateCommand())
    {
        lCmd.CommandText = @"
            DELETE FROM ...;
            INSERT INTO ...";
        lCmd.ExecuteNonQuery();
    }
    ts.Complete();
}
If you don't ask for a transaction, you (mostly) don't get one. SQL Server wants everything in transactions and so, by default (with no other transaction management), for each separate statement, SQL Server will create a transaction and automatically commit it. So in your sample (if there was no TransactionScope), you'll get two separate transactions, both independently committed or rolled back (on error).
(Unless you've turned IMPLICIT_TRANSACTIONS on for that connection, in which case you'll get one transaction, but you need an explicit COMMIT or ROLLBACK at the end. The only people I've found using this mode are people porting from Oracle who are trying to minimize changes. I wouldn't recommend turning it on for greenfield work because it'll just confuse people used to SQL Server's defaults.)
It's not. The SQL engine will treat this text as two separate statements. A TransactionScope is required (or any other form of transaction, e.g. an explicit BEGIN TRAN ... COMMIT in the SQL text if you prefer).
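For completeness, a minimal sketch of that second option, with SET XACT_ABORT ON so a run-time error rolls the whole batch back (the table and column names are placeholders):

// Minimal sketch: the two statements succeed or fail together because an
// explicit transaction is started inside the batch itself. Table and column
// names are placeholders.
using (DbCommand lCmd = pConnection.CreateCommand())
{
    lCmd.CommandText = @"
        SET XACT_ABORT ON;    -- roll the whole batch back on a run-time error
        BEGIN TRANSACTION;
        DELETE FROM dbo.SomeTable WHERE Id = 1;     -- placeholder statements
        INSERT INTO dbo.SomeTable (Id) VALUES (1);
        COMMIT TRANSACTION;";
    lCmd.ExecuteNonQuery();
}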
No. As the above answers say, the command (as opposed to the individual statements within it) will not be run inside a transaction.
It is easy to verify.
Sample code:
create table t1
(
Id int not null,
Name text
)
using (var conn = new SqlConnection(...))
using (var cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = @"
        insert into t1 values (1, 'abc');
        insert into t1 values (null, 'pqr');
        ";
    cmd.ExecuteNonQuery();
}
The second statement will fail. But the first statement will execute and you'll have a row in the table.
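By contrast, here is a minimal sketch of the same batch wrapped in a TransactionScope (connectionString is a placeholder): because the scope is never completed after the failure, the first insert is rolled back as well and the table stays empty.

// Minimal sketch: the exception thrown by the second statement means
// ts.Complete() is never reached, so the ambient transaction rolls back
// and the first insert does not persist. connectionString is a placeholder.
using (var ts = new TransactionScope())
using (var conn = new SqlConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    conn.Open();   // the connection enlists in the ambient transaction
    cmd.CommandText = @"
        insert into t1 values (1, 'abc');
        insert into t1 values (null, 'pqr');
        ";
    cmd.ExecuteNonQuery();   // throws on the second statement
    ts.Complete();           // never reached
}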
In my .NET code, inside a database transaction (using TransactionScope), I could include a nested block with TransactionScopeOption.Suppress, which ensures that the commands inside the nested block are committed even if the outer block rolls back.
Following is a code sample:
using (TransactionScope txnScope = new TransactionScope(TransactionScopeOption.Required))
{
    db.ExecuteNonQuery(CommandType.Text, "Insert Into Business(Value) Values('Some Value')");

    using (TransactionScope txnLogging = new TransactionScope(TransactionScopeOption.Suppress))
    {
        db.ExecuteNonQuery(CommandType.Text, "Insert Into Logging(LogMsg) Values('Log Message')");
        txnLogging.Complete();
    }

    // Something goes wrong here. Logging is still committed
    txnScope.Complete();
}
I was trying to find if this could be done in T-SQL. A few people have recommended OPENROWSET, but it doesn't look very 'elegant' to use. Besides, I think it is a bad idea to put connection information in T-SQL code.
I've used SQL Service Broker in the past, but it also uses transactional messaging, which means a message is not posted to the queue until the database transaction is committed.
My requirement: our application's stored procedures are fired by a third-party application, within an implicit transaction initiated outside the stored procedure. I want to be able to catch and log any errors (in a database table in the same database) within my stored procedures. I need to re-throw the exception to let the third-party app roll back the transaction, and to let it know that the operation has failed (so it can do whatever is required in case of a failure).
You can set up a loopback linked server with the 'remote proc transaction promotion' option set to false and then access it in T-SQL, or use a CLR procedure in SQL Server to create a new connection outside the transaction and do your work there.
Both methods are suggested in How to create an autonomous transaction in SQL Server 2008.
Both methods involve creating new connections. There is an open Connect item requesting that this functionality be provided natively.
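A rough sketch of the CLR approach, assuming a log table dbo.ErrorLog (the table, class, and connection string are illustrative only; the assembly would need EXTERNAL_ACCESS permission because it cannot use the context connection for this):

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public static class AutonomousLogger
{
    // Rough sketch: open a separate connection with Enlist=false so the
    // INSERT commits even if the calling transaction later rolls back.
    // Connection string and table name are placeholders.
    [SqlProcedure]
    public static void LogError(string message)
    {
        using (var conn = new SqlConnection(
            "Data Source=.;Initial Catalog=MyDb;Integrated Security=true;Enlist=false"))
        using (var cmd = conn.CreateCommand())
        {
            conn.Open();
            cmd.CommandText = "INSERT INTO dbo.ErrorLog (LogMsg) VALUES (@msg);";
            cmd.Parameters.AddWithValue("@msg", message);
            cmd.ExecuteNonQuery();
        }
    }
}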
Values in a table variable exist beyond a ROLLBACK.
So in the following example, all the rows that were going to be deleted can be inserted into a persisted table and queried later on thanks to a combination of OUTPUT and table variables.
-- First, create our table
CREATE TABLE [dbo].[DateTest] ([Date_Test_Id] INT IDENTITY(1, 1), [Test_Date] DATETIME2(3));

-- Populate it with 15,000,000 rows
-- from 1st Jan 1900 to 1st Jan 2017.
INSERT INTO [dbo].[DateTest] ([Test_Date])
SELECT TOP (15000000)
       DATEADD(DAY, 0, ABS(CHECKSUM(NEWID())) % 42734)
FROM [sys].[messages] AS [m1]
CROSS JOIN [sys].[messages] AS [m2];

BEGIN TRAN;
BEGIN TRY
    DECLARE @logger TABLE ([Date_Test_Id] INT, [Test_Date] DATETIME);

    -- Delete every 1000th row
    DELETE FROM [dbo].[DateTest]
    OUTPUT deleted.Date_Test_Id, deleted.Test_Date INTO @logger
    WHERE [Date_Test_Id] % 1000 = 0;

    -- Make it fail
    SELECT 1 / 0;

    -- So this will never happen
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRAN;
    SELECT * INTO dbo.logger FROM @logger;
END CATCH;

SELECT * FROM dbo.logger;
DROP TABLE dbo.logger;
I've written a custom replication function in a standard C# windows forms app with a SQL Server 2008 Express database. It basically pulls down a set of sql statements that need to be executed against a subscriber database. On a complete refresh this can run up to 200k+ statements that need to be executed.
I'm processing these statements inside a code block as shown below:
using (SqlConnection connection = ConnectionManager.GetConnection())
{
    connection.Open();
    SqlTransaction transaction = connection.BeginTransaction();

    // Process 200k+ Insert/Update/Delete statements using SqlCommands

    transaction.Commit();
}
What I'm finding is that my application's memory usage remains pretty stable at around 40 MB for the first 30k statements. After that it suddenly jumps to around 300 MB and then grows until I hit an OutOfMemoryException.
Is the method I'm using even possible? Can I process that many statements inside a single transaction? I would assume I should be able to. If there is a better way, I'd love to hear it. I need this to be transactional, otherwise a partial replication would result in a broken database.
Thanks.
EDIT:
After restarting my computer I managed to get a full 200k+ replication to go through. Even though memory usage did at one point grow to 1.4 GB, after the replication completed it dropped all the way back to 40 MB. That leads me to conclude that something inside my loop that processes the commands is causing the growth in memory.
Are you disposing your forms and their disposable controls before closing?
Wrap all disposable objects in a using statement.
Don't open/close the connection over and over again; instead, send the data to the database in a single transaction.
If your application is still holding too much memory after that, you need a memory profiler such as Red Gate ANTS Memory Profiler.
can I process that many statements inside a single transaction?
You have the following options to do this:
Bulk insert the records and operate on them in a stored proc.
Prepare XML and send it to the database as a string.
Send a read-only DataTable to SQL Server through a stored proc (as a table-valued parameter; see the sketch after the sample proc below).
Sample Stored Proc
Begin Try
    Set NoCount ON
    Set XACT_Abort ON

    Begin Tran
        -- Your queries
    Commit Tran
End Try
Begin Catch
    Rollback Tran
End Catch
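For the DataTable option, here is a rough sketch; the table type dbo.RecordTableType and the proc dbo.ProcessRecords are hypothetical names (you would create the type first with CREATE TYPE ... AS TABLE), and connectionString is a placeholder.

// Rough sketch: send a DataTable to a stored proc as a table-valued parameter.
// dbo.RecordTableType and dbo.ProcessRecords are assumed names for illustration.
var table = new DataTable();
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Name", typeof(string));
table.Rows.Add(1, "abc");
table.Rows.Add(2, "pqr");

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.ProcessRecords", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    var p = cmd.Parameters.AddWithValue("@Records", table);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.RecordTableType";

    conn.Open();
    cmd.ExecuteNonQuery();
}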
Make sure to dispose of objects once they are no longer in use.
It should be like this:
using (SqlConnection connection = new SqlConnection())
{
    connection.Open();

    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        transaction.Commit();
    }
}
Did you verify the SqlCommand also?
using (SqlCommand cmd = new SqlCommand())
{
}
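Putting those pieces together, a minimal sketch of processing the whole batch in one transaction while reusing a single command object (statements is a placeholder for however the replication statements are loaded):

// Minimal sketch: one transaction around the entire batch, one reusable
// SqlCommand. 'statements' is a placeholder for the loaded replication SQL.
using (SqlConnection connection = ConnectionManager.GetConnection())
{
    connection.Open();

    using (SqlTransaction transaction = connection.BeginTransaction())
    using (SqlCommand cmd = connection.CreateCommand())
    {
        cmd.Transaction = transaction;

        foreach (string sql in statements)
        {
            cmd.CommandText = sql;
            cmd.Parameters.Clear();   // avoid accumulating parameters per statement
            cmd.ExecuteNonQuery();
        }

        transaction.Commit();
    }
}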
I've been brushing up on my knowledge this evening, trying to overcome 4 years of bad programming practices because of the company I was working for. One of the things I've recently stumbled on was System.Transactions. After reading about them for the last few hours, I think I have an adequate understanding of how they work and why you would want to use them. However, all the examples I've looked at are showing inline T-SQL being called from within the transaction.
I pretty much use Stored Procedures exclusively when doing database access and the existing stored procedures are all wrapped in their own SqlTransactions. You know, using 'Begin Tran' and then rolling back or committing. If a Stored Proc calls another stored proc, it too creates a transaction and the Commits bubble up until the outer one either commits or rolls back. Works great.
So now my question is, if I wanted to start using System.Transactions in my code - for the simple purposes of monitoring successive database tasks that can't be nested inside a single Stored Procedure - how does that work with the existing SqlTransactions I already have in my stored procs?
Will using System.Transactions in my code just add one more layer of protection before it is actually committed, or, because I'm explicitly committing in my SqlTransaction, will the data be persisted regardless of committing or rolling back in the code-based transaction?
No, System.Transactions and Sql transactions do not mix.
And I quote, "Do Not Mix Them" from the following MSDN article: https://msdn.microsoft.com/en-us/library/ms973865.aspx.
Sql transactions do not participate in the outer System.Transaction the way you want them to. Sql transactions that fail or roll back will not cause other activities within the System.Transaction to roll back.
This example shows the phenomena:
using (var tx = new TransactionScope())
{
    using (var con = new SqlConnection($"{connectionstring}"))
    {
        con.Open();

        using (var com = new SqlCommand($"set xact_abort on; begin transaction; INSERT INTO dbo.KeyValueTable VALUES ('value1', '{Guid.NewGuid()}'); rollback;", con))
        {
            // This transaction failed, but it doesn't rollback the entire system.transaction!
            com.ExecuteNonQuery();
        }

        using (var com = new SqlCommand($"set xact_abort on; begin transaction; INSERT INTO dbo.KeyValueTable VALUES ('value2', '{Guid.NewGuid()}'); commit;", con))
        {
            // This transaction will actually persist!
            com.ExecuteNonQuery();
        }
    }

    tx.Complete();
}
After running this example on an empty data store you should notice that the records from the second Sql operation are indeed committed, when the structure of the C# code would imply that they shouldn't be.
Put simply, you should not mix them. If you are orchestrating multiple Sql transactions within an application you should just use System.Transactions. Unfortunately that would mean removing your transaction code from all of your stored procedures, but alas, it is necessary as with a mixed model you cannot guarantee the integrity of your data.
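For example, a minimal sketch of orchestrating two stored procedure calls under a single TransactionScope once the procs themselves no longer manage transactions (the proc names and connectionString are placeholders):

// Minimal sketch: the procs contain no BEGIN TRAN / COMMIT of their own;
// the ambient TransactionScope decides the outcome for both calls.
// dbo.DoStepOne, dbo.DoStepTwo, and connectionString are placeholders.
using (var scope = new TransactionScope())
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var cmd = new SqlCommand("dbo.DoStepOne", conn) { CommandType = CommandType.StoredProcedure })
    {
        cmd.ExecuteNonQuery();
    }

    using (var cmd = new SqlCommand("dbo.DoStepTwo", conn) { CommandType = CommandType.StoredProcedure })
    {
        cmd.ExecuteNonQuery();
    }

    scope.Complete();   // if this line is not reached, both procs' work rolls back
}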
Works just fine. If your inner transactions within the stored procs are committed, everything will commit. If one of them rolls back, then everything within the outer transaction will roll back. Pure magic. :)
I have a C# application, using ADO.Net to connect to MSSQL
I need to create the table (with a dynamic number of columns), then insert many records, then do a select back out of the table.
Each step must be a separate C# call, although I can keep a connection/transaction open for the duration.
There are two types of temp tables in SQL Server, local temp tables and global temp tables. From the BOL:
Prefix local temporary table names with single number sign (#tablename), and prefix global temporary table names with a double number sign (##tablename).
Local temp tables will live for just your current connection. Globals will be available for all connections. Thus, if you re-use (and you did say you could) the same connection across your related calls, you can just use a local temp table without worries of simultaneous processes interfering with each others' temp tables.
You can get more info on this from the BOL article, specifically under the "Temporary Tables" section about halfway down.
The issue is that #Temp tables exist only within the Connection AND the Scope of the execution.
When the first call from C# to SQL completes, control passes up to a higher level of scope.
This is just as if you had a T-SQL script that called two stored procedures. Each SP created a table named #MyTable. The second SP is referencing a completely different table than the first SP.
However, if the parent T-SQL code created the table, both SPs could see it, but they can't see each other's.
The solution here is to use ##Temp tables. They cross scope and connections.
The danger, though, is that if you use a hard-coded name, then two instances of your program running at the same time could see the same table. So dynamically set the table name to something that will always be unique.
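For instance, a rough sketch of generating a unique global temp table name per run (the column list is just an illustration):

// Rough sketch: give the global temp table a unique per-run suffix so two
// instances of the program can't collide. Columns are illustrative only.
string tempTableName = "##Replication_" + Guid.NewGuid().ToString("N");

using (var cmd = connection.CreateCommand())
{
    cmd.CommandText = $"CREATE TABLE [{tempTableName}] (Id INT NOT NULL, Name NVARCHAR(100));";
    cmd.ExecuteNonQuery();
}

// Later calls reference tempTableName; the table lives until the creating
// connection closes and no other session is still using it.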
You might take a look at the repository pattern for dealing with this concept in C#. It allows you to have a low-level repository layer for data access where each method performs one task, but the connection is passed into the method and the actual actions are performed within a transaction scope. This means you can theoretically call many different methods in your data access layer (implemented as a repository) and, if any of them fail, roll back the whole operation.
http://martinfowler.com/eaaCatalog/repository.html
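A rough sketch of that idea (the table, class, and method names are illustrative only): the caller opens the connection, begins the transaction, calls as many repository methods as it needs, then commits or rolls back once.

// Rough sketch of the repository idea: each method receives the open
// connection and transaction and performs one task; the caller composes
// several calls inside a single transaction. All names are illustrative.
public class WidgetRepository
{
    private readonly SqlConnection _conn;
    private readonly SqlTransaction _tran;

    public WidgetRepository(SqlConnection conn, SqlTransaction tran)
    {
        _conn = conn;
        _tran = tran;
    }

    public void Insert(int id, string name)
    {
        using (var cmd = _conn.CreateCommand())
        {
            cmd.Transaction = _tran;
            cmd.CommandText = "INSERT INTO dbo.Widget (Id, Name) VALUES (@id, @name);";
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@name", name);
            cmd.ExecuteNonQuery();
        }
    }
}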
The other aspects of your question would be handled by standard sql where you can dynamically create a table, insert into it, delete from it, etc. The tricky part here is keeping one transaction away from another transaction. You might look to using temp tables...or you might simply have a 2nd database specifically for performing this dynamic table concept.
Personally I think you are doing this the hard way. Do all the steps in one stored proc.
One way to extend the scope/lifetime of your single pound sign #Temp is to use a transaction. For as long as the transaction lives, the #temp table continues to exist. You can also use TransactionScope to give you the same effect, because TransactionScope creates an ambient transaction in the background.
The below test methods pass, proving that the #temp table contents survive between executions.
This may be preferable to using double-pound temp tables, because ##temp tables are global objects. If you have more than one client that happens to use the same ##temp table name, then they could step on each other. Also, ##temp tables do not survive a server restart, so their lifespan is technically not forever. IMHO it's best to control the scope of #temp tables because they're meant to be limited.
using System.Linq;
using System.Transactions;
using Dapper;
using Microsoft.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using IsolationLevel = System.Data.IsolationLevel;

namespace TestTempAcrossConnection
{
    [TestClass]
    public class UnitTest1
    {
        private string _testDbConnectionString = @"Server=(localdb)\mssqllocaldb;Database=master;trusted_connection=true";

        class TestTable1
        {
            public int Col1 { get; set; }
            public string Col2 { get; set; }
        }

        [TestMethod]
        public void TempTableBetweenExecutionsTest()
        {
            using var conn = new SqlConnection(_testDbConnectionString);
            conn.Open();
            var tran = conn.BeginTransaction(IsolationLevel.ReadCommitted);
            conn.Execute("create table #test1(col1 int, col2 varchar(20))", transaction: tran);
            conn.Execute("insert into #test1(col1,col2) values (1, 'one'),(2,'two')", transaction: tran);
            var tableResult = conn.Query<TestTable1>("select col1, col2 from #test1", transaction: tran).ToList();
            Assert.AreEqual(1, tableResult[0].Col1);
            Assert.AreEqual("one", tableResult[0].Col2);
            tran.Commit();
        }

        [TestMethod]
        public void TempTableBetweenExecutionsScopeTest()
        {
            using var scope = new TransactionScope();
            using var conn = new SqlConnection(_testDbConnectionString);
            conn.Open();
            conn.Execute("create table #test1(col1 int, col2 varchar(20))");
            conn.Execute("insert into #test1(col1,col2) values (1, 'one'),(2,'two')");
            var tableResult = conn.Query<TestTable1>("select col1, col2 from #test1").ToList();
            Assert.AreEqual(2, tableResult[1].Col1);
            Assert.AreEqual("two", tableResult[1].Col2);
            scope.Complete();
        }
    }
}