Is a single call to ExecuteNonQuery() atomic or does it make sense to use Transactions if there are multiple sql statements in a single DbCommand?
See my example for clarification:
using (var ts = new TransactionScope())
{
    using (DbCommand lCmd = pConnection.CreateCommand())
    {
        lCmd.CommandText = @"
            DELETE FROM ...;
            INSERT INTO ...";
        lCmd.ExecuteNonQuery();
    }
    ts.Complete();
}
If you don't ask for a transaction, you (mostly) don't get one. SQL Server wants everything in transactions and so, by default (with no other transaction management), for each separate statement, SQL Server will create a transaction and automatically commit it. So in your sample (if there was no TransactionScope), you'll get two separate transactions, both independently committed or rolled back (on error).
(Unless you've turned IMPLICIT_TRANSACTIONS on for that connection, in which case you'll get one transaction, but you'll need an explicit COMMIT or ROLLBACK at the end. The only people I've found using this mode are people porting from Oracle who are trying to minimize changes. I wouldn't recommend turning it on for greenfield work because it will just confuse people used to SQL Server's defaults.)
It's not atomic. The SQL engine will treat this text as two separate statements. A TransactionScope is required (or any other form of transaction, e.g. an explicit BEGIN TRAN ... COMMIT in the SQL text if you prefer), as sketched below.
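For illustration, here is a minimal sketch of the "transaction in the SQL text" variant (the table names are elided as in the question; SET XACT_ABORT ON makes a run-time error abort and roll back the whole batch):

using (DbCommand lCmd = pConnection.CreateCommand())
{
    lCmd.CommandText = @"
        SET XACT_ABORT ON;
        BEGIN TRANSACTION;
        DELETE FROM ...;
        INSERT INTO ...;
        COMMIT TRANSACTION;";
    lCmd.ExecuteNonQuery();
}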
No; as the above answers say, the command (as opposed to the individual statements within the command) will not be run inside a transaction.
This is easy to verify. Sample code:
create table t1
(
Id int not null,
Name text
)
using (var conn = new SqlConnection(...))
using (var cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = @"
        insert into t1 values (1, 'abc');
        insert into t1 values (null, 'pqr');
    ";
    cmd.ExecuteNonQuery();
}
The second statement will fail. But the first statement will execute and you'll have a row in the table.
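For contrast, a minimal sketch (assuming the same table and connection) of that batch wrapped in a TransactionScope: the failing second insert causes the scope to roll back the first insert as well, so no rows remain.

using (var ts = new TransactionScope())
using (var conn = new SqlConnection(...))
using (var cmd = conn.CreateCommand())
{
    conn.Open();                 // the connection enlists in the ambient transaction
    cmd.CommandText = @"
        insert into t1 values (1, 'abc');
        insert into t1 values (null, 'pqr');
    ";
    cmd.ExecuteNonQuery();       // throws on the NOT NULL violation
    ts.Complete();               // never reached, so the scope rolls back on dispose
}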
In my .NET code, inside a database transaction (using TransactionScope), I could include a nested block with TransactionScopeOption.Suppress, which ensures that the commands inside the nested block are committed even if the outer block rolls back.
Following is a code sample:
using (TransactionScope txnScope = new TransactionScope(TransactionScopeOption.Required))
{
db.ExecuteNonQuery(CommandType.Text, "Insert Into Business(Value) Values('Some Value')");
using (TransactionScope txnLogging = new TransactionScope(TransactionScopeOption.Suppress))
{
db.ExecuteNonQuery(CommandType.Text, "Insert Into Logging(LogMsg) Values('Log Message')");
txnLogging.Complete();
}
// Something goes wrong here. Logging is still committed
txnScope.Complete();
}
I was trying to find if this could be done in T-SQL. A few people have recommended OPENROWSET, but it doesn't look very 'elegant' to use. Besides, I think it is a bad idea to put connection information in T-SQL code.
I've used SQL Service Broker in the past, but it also supports transactional messaging, which means the message is not posted to the queue until the database transaction is committed.
My requirement: our application stored procedures are being fired by some third party application, within an implicit transaction initiated outside the stored procedure. And I want to be able to catch and log any errors (in a database table in the same database) within my stored procedures. I need to re-throw the exception to let the third party app roll back the transaction, and for it to know that the operation has failed (and thus do whatever is required in case of a failure).
You can set up a loopback linked server with the 'remote proc transaction promotion' option set to false and then access it in T-SQL, or use a CLR procedure in SQL Server to create a new connection outside the transaction and do your work there.
Both methods are suggested in How to create an autonomous transaction in SQL Server 2008.
Both methods involve creating new connections. There is an open Connect item requesting that this functionality be provided natively.
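For what it's worth, here is a minimal client-side sketch of the "new connection outside the transaction" idea (connectionString is an assumption; the Logging table is the one from the question). A connection opened with Enlist=false does not join the ambient transaction, so its insert is committed even when the outer scope rolls back. The body of a CLR logging procedure would look essentially the same.

using (var scope = new TransactionScope(TransactionScopeOption.Required))
using (var con = new SqlConnection(connectionString))
{
    con.Open();
    // ... business work enlisted in the ambient transaction ...

    // Enlist=false keeps this connection out of the ambient transaction,
    // so the log row survives even if the scope rolls back.
    using (var logCon = new SqlConnection(connectionString + ";Enlist=false"))
    {
        logCon.Open();
        new SqlCommand("Insert Into Logging(LogMsg) Values('Log Message')", logCon)
            .ExecuteNonQuery();
    }

    scope.Complete();
}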
Values in a table variable exist beyond a ROLLBACK.
So in the following example, all the rows that were going to be deleted can be inserted into a persisted table and queried later on thanks to a combination of OUTPUT and table variables.
-- First, create our table
CREATE TABLE [dbo].[DateTest] ([Date_Test_Id] INT IDENTITY(1, 1), [Test_Date] datetime2(3));
-- Populate it with 15,000,000 rows
-- from 1st Jan 1900 to 1st Jan 2017.
INSERT INTO [dbo].[DateTest] ([Test_Date])
SELECT
TOP (15000000)
DATEADD(DAY, 0, ABS(CHECKSUM(NEWID())) % 42734)
FROM [sys].[messages] AS [m1]
CROSS JOIN [sys].[messages] AS [m2];
BEGIN TRAN;
BEGIN TRY
    DECLARE @logger TABLE ([Date_Test_Id] INT, [Test_Date] DATETIME);
    -- Delete every 1,000th row
    DELETE FROM [dbo].[DateTest]
    OUTPUT deleted.Date_Test_Id, deleted.Test_Date INTO @logger
    WHERE [Date_Test_Id] % 1000 = 0;
    -- Make it fail
    SELECT 1/0;
    -- So this will never happen
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRAN;
    SELECT * INTO dbo.logger FROM @logger;
END CATCH;
SELECT * FROM dbo.logger;
DROP TABLE dbo.logger;
How do I restrict other users from updating or inserting into a table after a certain transaction has begun?
I tried this:
MySqlConnection con = new MySqlConnection("server=localhost;database=data;user=root;pwd=;");
con.Open();
MySqlTransaction trans = con.BeginTransaction();
try
{
string sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','483', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',-233.22,1,'asadfsaf','Bank OD A/c','Receipt','4274',1173,'N')";
new MySqlCommand(sql, con, trans).ExecuteNonQuery();
sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','4274', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',100,0,'asadfsaf','Sundry Creditors','Receipt','483',1173,'N')";
new MySqlCommand(sql, con, trans).ExecuteNonQuery();
sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','427', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',133.22,0,'asadfsaf','Sundry Creditors','Receipt','483',1173,'N')";
new MySqlCommand(sql, con, trans).ExecuteNonQuery();
trans.Commit();
}
catch (Exception ex)
{
trans.Rollback();
}
finally
{
con.Close();
}
but this still allows other users to insert rows after BeginTransaction.
BeginTransaction does not mean "your transaction has started and everything is locked". It just informs the RDBMS of your intent to initiate a transaction and that everything you do from now on should and must be considered atomic.
This means that you could call BeginTransaction and I could delete all the data from all the tables in your database, and the RDBMS would happily let me do that. Hopefully it should not let me drop the DB because you have an open connection to it; however, you never know these days. There might be some undocumented features I am not aware of.
Atomic means any action or set of actions must be performed as one. If any one of them fails, then all of them fail. It is an all-or-nothing concept.
It looks like you are inserting three rows into a table. If your table is empty or has a very low number of rows, it might lock the whole table, depending on the lock escalation rules of your RDBMS. However, if it is a large or partitioned table, then the lock escalation rules might not guarantee a table lock. So it might still be possible for multiple transactions to insert rows into your table at the same time. It all depends on how the RDBMS handles this situation and how your data model is structured.
Now to answer your question:
HINT - Look for a way to lock the entire table before you start inserting data.
However, this is usually not a good idea, but I am assuming that you have a reasonable reason to do it; a sketch follows.
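For illustration only, a rough sketch of that hint using the MySQL connector (connection string taken from the question; per the MySQL manual, LOCK TABLES does not mix cleanly with BeginTransaction, so the documented pattern is SET autocommit = 0 plus LOCK TABLES, an explicit COMMIT or ROLLBACK, then UNLOCK TABLES):

using (var con = new MySqlConnection("server=localhost;database=data;user=root;pwd=;"))
{
    con.Open();
    // Start an implicit transaction and take an exclusive table lock;
    // other sessions now block when they try to write to transaction_ledger.
    new MySqlCommand("SET autocommit = 0;", con).ExecuteNonQuery();
    new MySqlCommand("LOCK TABLES transaction_ledger WRITE;", con).ExecuteNonQuery();
    try
    {
        // ... run the three INSERT statements from the question here ...
        new MySqlCommand("COMMIT;", con).ExecuteNonQuery();
    }
    catch
    {
        new MySqlCommand("ROLLBACK;", con).ExecuteNonQuery();
        throw;
    }
    finally
    {
        new MySqlCommand("UNLOCK TABLES;", con).ExecuteNonQuery();
    }
}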
Hope this helps.
I've written a custom replication function in a standard C# Windows Forms app with a SQL Server 2008 Express database. It basically pulls down a set of SQL statements that need to be executed against a subscriber database. On a complete refresh this can run up to 200k+ statements that need to be executed.
I'm processing these statements inside a code block as shown below:
using (SqlConnection connection = ConnectionManager.GetConnection())
{
    connection.Open();
    SqlTransaction transaction = connection.BeginTransaction();
    // Process 200k+ Insert/Update/Delete statements using SqlCommands
    transaction.Commit();
}
What I'm finding is that my application's memory usage remains pretty stable at around 40 MB for the first 30k statements, after which it suddenly jumps to around 300 MB and then grows until I hit an OutOfMemoryException.
Is the method I'm using even possible? Can I process that many statements inside a single transaction? I would assume I should be able to. If there is a better way I'd love to hear it. I need this to be transactional; otherwise a partial replication would result in a broken database.
Thanks.
EDIT:
After restarting my computer I managed to get a full 200k+ replication to go through. Even though memory usage did at one point grow to 1.4 GB, after the replication completed it dropped all the way back to 40 MB. That leads me to conclude that something inside my loop that processes the commands is causing the growth in memory.
Are you disposing your forms and disposable controls before closing them?
Wrap all disposable objects in using statements.
Don't open and close the connection over and over again; instead, send the data to the database in a single transaction.
If your application is still holding too much memory, you need a memory profiler such as Red Gate ANTS Memory Profiler.
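As an illustration of those two points, a minimal sketch (the statementsToReplicate collection is a hypothetical stand-in for however the 200k statements are obtained): one connection, one transaction, and each SqlCommand disposed as soon as it has run, so the loop does not accumulate command objects.

using (SqlConnection connection = ConnectionManager.GetConnection())
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        foreach (string sql in statementsToReplicate)   // hypothetical source of the statements
        {
            using (var cmd = new SqlCommand(sql, connection, transaction))
            {
                cmd.ExecuteNonQuery();
            }
        }
        transaction.Commit();
    }
}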
can I process that many statements inside a single transaction?
You have the options below to do this:
Bulk insert the records and operate on them in a stored proc.
Prepare XML and send the string to the database.
Send a read-only DataTable to SQL Server through a stored proc (a table-valued parameter); a sketch follows the sample proc below.
Sample Stored Proc
Begin Try
    Set NoCount ON
    Set XACT_Abort ON
    Begin Tran
    --Your queries
    Commit Tran
End Try
Begin Catch
    Rollback Tran
End Catch
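For the DataTable option, a hedged sketch of passing a DataTable as a table-valued parameter (the proc name dbo.ReplicateChanges and the table type dbo.ChangeTableType are made-up placeholders; needs System.Data and System.Data.SqlClient):

var table = new DataTable();
table.Columns.Add("Statement", typeof(string));
// ... fill the DataTable with the rows to send ...

using (var connection = new SqlConnection(connectionString))      // connectionString is assumed
using (var cmd = new SqlCommand("dbo.ReplicateChanges", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter p = cmd.Parameters.AddWithValue("@Changes", table);
    p.SqlDbType = SqlDbType.Structured;       // send it as a table-valued parameter
    p.TypeName = "dbo.ChangeTableType";       // user-defined table type on the server
    connection.Open();
    cmd.ExecuteNonQuery();
}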
Make sure to dispose of objects once they are no longer in use.
It should be like this:
using (SqlConnection connection = new SqlConnection())
{
connection.Open();
using (SqlTransaction transaction = connection.BeginTransaction())
{
transaction.Commit();
}
}
Did you wrap the SqlCommand in a using block as well?
using (SqlCommand cmd = new SqlCommand())
{
}
I've been brushing up on my knowledge this evening, trying to overcome 4 years of bad programming practices because of the company I was working for. One of the things I've recently stumbled on was System.Transactions. After reading about them for the last few hours, I think I have an adequate understanding of how they work and why you would want to use them. However, all the examples I've looked at are showing inline T-SQL being called from within the transaction.
I pretty much use Stored Procedures exclusively when doing database access and the existing stored procedures are all wrapped in their own SqlTransactions. You know, using 'Begin Tran' and then rolling back or committing. If a Stored Proc calls another stored proc, it too creates a transaction and the Commits bubble up until the outer one either commits or rolls back. Works great.
So now my question is, if I wanted to start using System.Transactions in my code - for the simple purposes of monitoring successive database tasks that can't be nested inside a single Stored Procedure - how does that work with the existing SqlTransactions I already have in my stored procs?
Will using System.Transactions in my code just add one more layer of protection before it is actually committed, or because I'm explicitly committing in my SqlTransaction - will the data be persisted regardless of committing or rolling back in code based transaction?
No, System.Transactions and Sql transactions do not mix.
And I quote, "Do Not Mix Them" from the following MSDN article: https://msdn.microsoft.com/en-us/library/ms973865.aspx.
Sql transactions do not participate in the outer System.Transaction the way you want them to. Sql transactions that fail or roll back will not cause other activities within the System.Transaction to roll back.
This example shows the phenomenon:
using (var tx = new TransactionScope())
{
    using (var con = new SqlConnection($"{connectionstring}"))
    {
        con.Open();
        using (var com = new SqlCommand($"set xact_abort on; begin transaction; INSERT INTO dbo.KeyValueTable VALUES ('value1', '{Guid.NewGuid()}'); rollback;", con))
        {
            // This transaction failed, but it doesn't roll back the entire System.Transaction!
            com.ExecuteNonQuery();
        }
        using (var com = new SqlCommand($"set xact_abort on; begin transaction; INSERT INTO dbo.KeyValueTable VALUES ('value2', '{Guid.NewGuid()}'); commit;", con))
        {
            // This transaction will actually persist!
            com.ExecuteNonQuery();
        }
    }
    tx.Complete();
}
After running this example against an empty data store, you should notice that the records from the second Sql operation are indeed committed, even though the structure of the C# code would imply that they shouldn't be.
Put simply, you should not mix them. If you are orchestrating multiple Sql transactions within an application you should just use System.Transactions. Unfortunately that would mean removing your transaction code from all of your stored procedures, but alas, it is necessary as with a mixed model you cannot guarantee the integrity of your data.
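To make the recommended shape concrete, here is a minimal sketch of orchestrating two stored procedures under a single System.Transactions scope (the proc names and connectionString are placeholders; the procs themselves contain no BEGIN TRAN/COMMIT of their own):

using (var scope = new TransactionScope())
using (var con = new SqlConnection(connectionString))
{
    con.Open();

    using (var cmd = new SqlCommand("dbo.DoFirstTask", con) { CommandType = CommandType.StoredProcedure })
        cmd.ExecuteNonQuery();

    using (var cmd = new SqlCommand("dbo.DoSecondTask", con) { CommandType = CommandType.StoredProcedure })
        cmd.ExecuteNonQuery();

    // If either call throws, Complete() is never reached and both calls roll back together.
    scope.Complete();
}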
Works just fine: if your inner transactions within the stored procs are committed, everything will commit. If one of them rolls back, then everything within the outer transaction will roll back. Pure magic. :)