Calling two stored procedures in the same TransactionScope - C#

I have two stored procedures and I call both of them in the same TransactionScope, as follows.
The first - SPInsert() - inserts a new row into Table A
The second - SPUpdate() - updates the recently inserted row in Table A
My question is: even though I have put a breakpoint before the second stored procedure is called, I am unable to see the first stored procedure's row in the table until the TransactionScope is completed.
Am I doing something wrong?
using (var transactionScope = new TransactionScope())
{
// Call and execute stored procedure 1
SPInsert();
// Call and execute stored procedure 2
SPUpdate();
transactionScope.Complete();
}
In detail:
I put a breakpoint on SPUpdate, right after SPInsert, because I want to check in SQL whether the row has been inserted. But when I run a query against the table, it keeps executing and never finishes; it seems the table is not accessible at that moment. How, then, can I check whether the row has been inserted before the second stored procedure is called?

Because you are in a transaction, by design and by default SQL Server won't show you any uncommitted operations if you connect using a different session. This is why you cannot see the uncommitted insert.
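If you do need to peek at the uncommitted row from another session while stopped at the breakpoint, you can read past the locks with a dirty read. A minimal sketch, assuming the row went into a dbo.TableA table and that connectionString points at the same database (both names are placeholders, not from the question):
using System;
using System.Data.SqlClient;
using System.Transactions;

static void PeekAtTableA(string connectionString)
{
    // Suppress keeps this connection out of any ambient TransactionScope,
    // and NOLOCK lets the SELECT read the uncommitted (dirty) row instead
    // of blocking on the insert's locks.
    using (new TransactionScope(TransactionScopeOption.Suppress))
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.TableA WITH (NOLOCK)", conn))
    {
        conn.Open();
        Console.WriteLine(cmd.ExecuteScalar());
    }
}
Running the same SELECT ... WITH (NOLOCK) from a second SSMS window works the same way; without the hint the query blocks, which is exactly the behaviour described in the question.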

Related

Triggering a rollback inside transactionscope C#

I'm currently working on a stored procedure that performs a couple of inserts into a database table, and I want to test whether it returns the total number of rows affected the way I'd like it to. I'm calling this stored procedure from my C# .NET code inside a TransactionScope.
My question, however, is how I can trigger a rollback after the stored procedure has executed and the number of rows affected has been written to the console.
I'm not allowed to share my code, but I can give pseudo-code for it, as it's quite simple:
using (var scope = new TransactionScope())
{
    // Run the procedure and save the return value in a variable
    int rowsAffected = MyStoredProcedure();
    // Print the value in the variable
    Console.WriteLine(rowsAffected);
    // Ideally, I want to perform the rollback here.
    scope.Complete();
}
Is it enough to simply throw some sort of exception, or is there a better way to trigger a rollback?
It's not committed as long as you don't call Complete(). Remove that call and the transaction will be rolled back when you leave the scope's using block:
using (var scope = new TransactionScope())
{
    // Run the procedure and save the return value in a variable
    int rowsAffected = MyStoredProcedure();
    // Print the value in the variable
    Console.WriteLine(rowsAffected);
    // Don't call Complete() and the transaction will be rolled back
    //scope.Complete();
}
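To answer the original question directly: throwing an exception before Complete() has the same effect, because the scope is disposed without ever being completed. A minimal sketch of that pattern, reusing the question's MyStoredProcedure placeholder:
using (var scope = new TransactionScope())
{
    int rowsAffected = MyStoredProcedure();
    Console.WriteLine(rowsAffected);

    // Any exception thrown here leaves the scope uncompleted, so the
    // transaction is rolled back when the using block is exited.
    if (rowsAffected == 0)
        throw new InvalidOperationException("No rows affected; rolling back.");

    scope.Complete();
}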
Since you are using a stored procedure, why not keep the transaction in the stored procedure itself? Then you don't need to worry about handling the rollback in your C# code.

T-SQL Equivalent of .NET TransactionScopeOption.Suppress

In my .NET code, inside a database transaction (using TransactionScope), I could include a nested block with TransactionScopeOption.Suppress, which ensures that the commands inside the nested block are committed even if the outer block rolls back.
Following is a code sample:
using (TransactionScope txnScope = new TransactionScope(TransactionScopeOption.Required))
{
db.ExecuteNonQuery(CommandType.Text, "Insert Into Business(Value) Values('Some Value')");
using (TransactionScope txnLogging = new TransactionScope(TransactionScopeOption.Suppress))
{
db.ExecuteNonQuery(CommandType.Text, "Insert Into Logging(LogMsg) Values('Log Message')");
txnLogging.Complete();
}
// Something goes wrong here. Logging is still committed
txnScope.Complete();
}
I was trying to find out whether this could be done in T-SQL. A few people have recommended OPENROWSET, but it doesn't look very 'elegant' to use. Besides, I think it is a bad idea to put connection information in T-SQL code.
I've used SQL Service Broker in the past, but it supports transactional messaging, which means the message is not posted to the queue until the database transaction is committed.
My requirement: our application's stored procedures are fired by a third-party application, within an implicit transaction initiated outside the stored procedure. I want to be able to catch and log any errors (in a database table in the same database) within my stored procedures. I need to re-throw the exception to let the third-party app roll back the transaction and know that the operation has failed (and thus do whatever is required in case of a failure).
You can set up a loopback linked server with the 'remote proc transaction promotion' option set to false and then access it in T-SQL, or use a CLR procedure in SQL Server to create a new connection outside the transaction and do your work.
Both methods are suggested in "How to create an autonomous transaction in SQL Server 2008".
Both methods involve creating new connections. There is an open Connect item requesting that this functionality be provided natively.
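A sketch of the CLR route, to make the idea concrete: the essential trick is that the logging connection does not enlist in the caller's transaction, so its insert survives the rollback. The procedure name and connection string below are illustrative only (it targets the dbo.Logging table from the sample above), and a non-context connection like this requires the assembly to be deployed with EXTERNAL_ACCESS permission:
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void LogAutonomous(SqlString message)
    {
        // Enlist=false keeps this connection out of the ambient transaction,
        // so the INSERT commits even if the caller later rolls back.
        using (var conn = new SqlConnection(
            "Data Source=(local);Initial Catalog=MyDb;Integrated Security=true;Enlist=false"))
        using (var cmd = new SqlCommand(
            "INSERT INTO dbo.Logging(LogMsg) VALUES(@msg)", conn))
        {
            cmd.Parameters.AddWithValue("@msg", message.Value);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}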
Values in a table variable exist beyond a ROLLBACK.
So in the following example, all the rows that were going to be deleted can be inserted into a persisted table and queried later on thanks to a combination of OUTPUT and table variables.
-- First, create our table
CREATE TABLE [dbo].[DateTest] ([Date_Test_Id] INT IDENTITY(1, 1), [Test_Date] datetime2(3));
-- Populate it with 15,000,000 rows
-- from 1st Jan 1900 to 1st Jan 2017.
INSERT INTO [dbo].[DateTest] ([Test_Date])
SELECT
TOP (15000000)
DATEADD(DAY, 0, ABS(CHECKSUM(NEWID())) % 42734)
FROM [sys].[messages] AS [m1]
CROSS JOIN [sys].[messages] AS [m2];
BEGIN TRAN;
BEGIN TRY
DECLARE @logger TABLE ([Date_Test_Id] INT, [Test_Date] DATETIME);
-- Delete every 1000th row
DELETE FROM [dbo].[DateTest]
OUTPUT deleted.Date_Test_Id, deleted.Test_Date INTO @logger
WHERE [Date_Test_Id] % 1000 = 0;
-- Make it fail
SELECT 1/0;
-- So this will never happen
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRAN;
SELECT * INTO dbo.logger FROM @logger;
END CATCH;
SELECT * FROM dbo.logger;
DROP TABLE dbo.logger;

Does LINQ2SQL automatically put ExecuteCommand in a transaction

Does the documentation quotation from this answer: https://stackoverflow.com/a/542691/1011724
When you call SubmitChanges, LINQ to SQL checks to see whether the call is in the scope of a Transaction or if the Transaction property (IDbTransaction) is set to a user-started local transaction. If it finds neither transaction, LINQ to SQL starts a local transaction (IDbTransaction) and uses it to execute the generated SQL commands. When all SQL commands have been successfully completed, LINQ to SQL commits the local transaction and returns.
apply to the .ExecuteCommand() method? In other words, can I trust that the following delete is handled in a transaction and will automatically roll back if it fails, or do I need to manually tell it to use a transaction, and if so, how? Should I use TransactionScope?
using(var context = Domain.Instance.GetContext())
{
context.ExecuteCommand("DELETE FROM MyTable WHERE MyDateField = {0}", myDate);
}
Every SQL statement, whether or not wrapped in an explicit transaction, occurs transactionally. So, explicit transaction or not, individual statements are always atomic -- they either happen entirely or not at all. In the example above, either all rows that match the criterion are deleted or none of them are -- this is irrespective of what client code does. There is literally no way to get SQL Server to delete the rows partially; even yanking out the power cord will simply mean whatever was already done for the delete will be undone when the server restarts and reads the transaction log.
The only fly in the ointment is that which rows match can vary depending on how the statement locks. The statement logically happens in two phases, the first to determine which rows will be deleted and the second to actually delete them (while under an update lock). If you, say, issued this statement, and while it was running issued an INSERT that inserted a row matching the DELETE criterion, whether the row is in the database or not after the DELETE has finished depends on which transaction isolation level was in effect for the statements. So if you want practical guarantees about "all rows" being deleted, what client code does comes into scope. This goes a little beyond the scope of the original question, though.
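If you do need several statements to succeed or fail together (per-statement atomicity is not enough), wrapping the calls in a TransactionScope is the usual approach. A minimal sketch based on the question's snippet; the second DELETE and the MyAuditTable name are made up for illustration:
using System.Transactions;

using (var scope = new TransactionScope())
using (var context = Domain.Instance.GetContext())
{
    // Both statements commit together, or neither does.
    context.ExecuteCommand("DELETE FROM MyTable WHERE MyDateField = {0}", myDate);
    context.ExecuteCommand("DELETE FROM MyAuditTable WHERE MyDateField = {0}", myDate);

    scope.Complete();
}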

Batching Stored Procedure Commands in EF 4.2

I've got a call to a stored procedure that is basically an INSERT stored procedure. It inserts into Table A, then into Table B with the identity from Table A.
Now, I need to call this stored procedure N times from my application code.
Is there any way I can batch this? At the moment it makes N round trips to the DB; I would like it to be one.
The only approach I can think of is to pass the entire list of items across the wire via a user-defined table type.
But the problem with this approach is that I will need a CURSOR in the sproc to loop through each item in order to do the inserts (because of the identity field).
Basically, can we batch DbCommand.ExecuteNonQuery() with EF 4.2?
Or can we do it with something like Dapper?
You can keep it like that and, in the stored procedure, just do a MERGE between your target table and the table-valued parameter. Because you are always sending new records, the MERGE will only take the INSERT branch.
In this case, using MERGE like this is an easy way of doing batch inserts without a cursor.
Another way that also avoids a cursor is to use an INSERT ... SELECT statement in the stored procedure.
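On the client side, the whole list can go over in one round trip as a table-valued parameter. A rough sketch under assumed names (a user-defined table type dbo.ItemType and a stored procedure dbo.InsertItems that does the MERGE or INSERT ... SELECT described above):
using System.Data;
using System.Data.SqlClient;

static void InsertItemsBatch(string connectionString, DataTable items)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.InsertItems", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;

        // Pass the whole DataTable as a single structured parameter.
        var p = cmd.Parameters.AddWithValue("@Items", items);
        p.SqlDbType = SqlDbType.Structured;
        p.TypeName = "dbo.ItemType";

        conn.Open();
        cmd.ExecuteNonQuery(); // one round trip for N rows
    }
}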

ChangeConflictException when updating rows with LINQ-to-SQL

I have a form which contains a data grid and a save button.
When the user clicks the save button I check for new rows by checking a specific column. If its value is 0 I insert the row into the database, and if the value is not 0 I update that row.
I can insert correctly, but when updating, an exception occurs:
ChangeConflictException was unhandled: 1 of 6 updates failed.
I have checked the update statement and I'm sure it's correct. What is the problem? Can anyone help me?
int id;
for (int i = 0; i < dgvInstructores.Rows.Count - 1; i++)
{
id = int.Parse(dgvInstructores.Rows[i].Cells["ID"].Value.ToString());
if (id == 0)
{
dataClass.procInsertInstructores(name, nationalNum, tel1, tel2,
address, email);
dataClass.SubmitChanges();
}
else
{
dataClass.procUpdateInstructores(id, name, nationalNum, tel1, tel2,
address, email);
dataClass.SubmitChanges();
}
}
I'm using LINQ to query a SQL Server 2005 database, and VS 2008.
The stored procedure 'procUpdateInstructores' is:
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
ALTER proc [dbo].[procUpdateInstructores]
@ID int,
@name varchar(255),
@NationalNum varchar(25),
@tel1 varchar(15),
@tel2 varchar(15),
@address varchar(255),
@email varchar(255)
as
begin
BEGIN TRANSACTION
update dbo.Instructores
set
Name = @name , NationalNum = @NationalNum ,
tel1 = @tel1 , tel2 = @tel2 , address = @address , email = @email
where ID = @ID
IF (@@ROWCOUNT > 0) AND (@@ERROR = 0)
BEGIN
COMMIT TRANSACTION
END
ELSE
BEGIN
ROLLBACK TRANSACTION
END
end
In my experience (working with .NET Forms and MVC with LINQ-to-SQL), I have found several times that if the form collection contains the ID parameter of the data object, the update fails.
Even if the ID is the actual ID, it is still flagged as 'property changed' when you bind it, update it, or assign it to another variable.
As such, can we see the code for your stored procs? More specifically, the update proc?
The code you have posted above is fine; the exception should be coming from your stored proc.
However, if you are confident that the proc is correct, then perhaps look at the HTML code used to generate the table. There may be bugs with respect to 0/1 values in ID columns, etc.
In the absence of further information (what your SQL or C# update code looks like), my first recommendation would be to call SubmitChanges once, outside the for loop, rather than submitting changes once per row.
It appears in this case that you are using a DataGridView (thus WinForms). I further guess that your dataClass is persisted on the form so that you loaded and bound the DataGridView from the same dataClass that you are trying to save the changes to in this example.
Assuming you are databinding the DataGridView to entities returned via LINQ to SQL, when you edit the values, you are marking the entity in question that it is needing to be updated when the next SubmitChanges is called.
In your update, you are calling dataClass.procUpdateInstructores(id, name, nationalNum, tel1, tel2, address, email), which immediately issues the stored procedure against the database, setting the new values as they have been edited. The next line is the kicker. Since your data context still thinks the object is dirty, SubmitChanges tries to send another update statement to your database with the original values it fetched as part of the Where clause (to check for concurrency). Since the stored proc has already changed those values, the Where clause can't find a matching row and thus raises a concurrency exception.
Your best bet in this case is to modify the LINQ to SQL model to use your stored procedures for updates and inserts rather than the runtime generated versions. Then in your parsing code, simply call SubmitChanges without calling procUpdateInstructores manually. If your dbml is configured correctly, it will call the stored proc rather than the dynamic update statement.
Also, FWIW, your stored proc doesn't seem to be doing anything more than the generated SQL would. Actually, LINQ to SQL would give you more functionality, since you aren't doing any concurrency checking in your stored proc anyway. If you are required to use stored procs by your DBA or some security policy you can retain them, but if this is all they do, you may want to consider bypassing them and relying on the runtime-generated SQL for updates.
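If remapping the DBML isn't an option, another way to cope with the conflict is to let SubmitChanges report it and resolve it explicitly. This is a generic sketch of LINQ to SQL's conflict-handling API, not the asker's exact fix:
using System.Data.Linq;

try
{
    // One call for all tracked inserts/updates; no manual proc calls.
    dataClass.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    // Keep the in-memory edits but refresh the original values the
    // context uses in its concurrency WHERE clause, then retry.
    dataClass.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
    dataClass.SubmitChanges();
}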
