Can't immediately see data after committing sql transaction - c#

Updated with a more direct title.
I am having an issue with seeing committed data from my transactions immediately after the commit.
I am accepting data from user input and putting it into the DB (SQL Server 2005) as a parent table with multiple child tables in 1-to-many relationships. I use a transaction and commit it when complete. The data is committed to the DB, but immediately after the commit I run several other queries (SELECT statements only) to check whether all the rows for a group have been entered. These rows can be entered at any time, by anyone, each in its own transaction. So, say a "group" requires 3 total rows in the database: I want to commit my transaction and then read the DB to see if ANY groups are complete, including the one from the transaction I just committed.
The problem is that reading the data back after the commit apparently doesn't return the rows I just committed: if my transaction supplies the last row in a group, that group gets missed by the reads and isn't returned.
So...
Transaction begins.
write rows...
transaction commits..
read to see if complete groups exist...
group with current transaction not returned. other complete groups returned.
One thing to note: submitting another transaction causes this data to be returned. But if that new transaction contains a row that completes another group, the new row is again not returned.
Here is my code.. abbreviated...
SqlConnection sqlConn = getConn(); //getconn is a function that returns a connection object
String sqlSTR = "<<<MY INSERT STATEMENT>>>";
SqlTransaction trans = sqlConn.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted,"Trans");
SqlCommand sqlCMD;
try
{
    sqlCMD = new SqlCommand(sqlSTR, sqlConn, trans);
    //...Add all my data to the DB.
    trans.Commit();
}
catch (SqlException sqlErr)
{
    trans.Rollback("Trans");
    throw;
}
finally
{
trans.Dispose();
}
Next is some code to read the DB looking for "completed groups" but the group this data belongs to, if completed, is not returned.
The data is stored... and it is retrievable by the same code on the next call... but not here.
It's as if it takes time for SQL Server to commit the transaction and my reads are called BEFORE the commit finishes... even though I have the transaction set to allow reads of uncommitted data.
Is there some way to make the commit wait for completion BEFORE moving on to other code?
I've tried doing a SELECT @@TRANCOUNT against the connection immediately after the commit, but it ALWAYS returns 0 (which is expected, since a successful commit leaves no open transaction on the connection).
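For what it's worth, once trans.Commit() returns, the data is durably committed, and a SELECT issued on the same connection will see it. A minimal sketch, reusing the question's getConn() helper; the Groups table, GroupId column, and groupId variable are illustrative assumptions, not names from the original code:

```csharp
int groupId = 1; // hypothetical id of the group just written

using (SqlConnection sqlConn = getConn())
{
    SqlTransaction trans = sqlConn.BeginTransaction("Trans");
    try
    {
        // "<<<MY INSERT STATEMENT>>>" is the placeholder from the question.
        using (var cmd = new SqlCommand("<<<MY INSERT STATEMENT>>>", sqlConn, trans))
        {
            cmd.ExecuteNonQuery();
        }
        trans.Commit(); // does not return until the commit has completed
    }
    catch (SqlException)
    {
        trans.Rollback("Trans");
        throw;
    }

    // Same connection, after Commit: the new rows are visible here.
    using (var check = new SqlCommand(
        "SELECT COUNT(*) FROM Groups WHERE GroupId = @id", sqlConn))
    {
        check.Parameters.AddWithValue("@id", groupId);
        int rowsInGroup = (int)check.ExecuteScalar();
    }
}
```

If the completeness check runs on a different connection, make sure that query executes after Commit() has returned; the commit itself does not lag behind the method call.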
Thanks!
EDIT:
public static SqlConnection getConn()
{
try
{
string sqlStr = WebConfigurationManager.ConnectionStrings["conn_Main"].ConnectionString;
SqlConnection sqlCN = new SqlConnection(sqlStr);
sqlCN.Open();
return sqlCN;
}
catch
{
throw;
}
}
I'm reading the records after the transaction by using a stored procedure...
In the SP I am using a basic select statement that does a couple of joins and not much else, pulling back first a list of "groups" that have submissions, and then pulling each group's rows individually. It's the second part that doesn't seem to see the records inserted with the transaction... until the next transaction is committed.
I'm doing some more testing this morning to see if I can tell exactly when the records show up after committing the transaction.

Related

Is a single call to ExecuteNonQuery() atomic

Is a single call to ExecuteNonQuery() atomic or does it make sense to use Transactions if there are multiple sql statements in a single DbCommand?
See my example for clarification:
using (var ts = new TransactionScope())
{
using (DbCommand lCmd = pConnection.CreateCommand())
{
lCmd.CommandText = @"
DELETE FROM ...;
INSERT INTO ...";
lCmd.ExecuteNonQuery();
}
ts.Complete();
}
If you don't ask for a transaction, you (mostly) don't get one. SQL Server wants everything in transactions and so, by default (with no other transaction management), for each separate statement, SQL Server will create a transaction and automatically commit it. So in your sample (if there was no TransactionScope), you'll get two separate transactions, both independently committed or rolled back (on error).
(Unless you've turned IMPLICIT_TRANSACTIONS on for that connection, in which case you'll get one transaction, but you need an explicit COMMIT or ROLLBACK at the end. The only people I've found using this mode are people porting from Oracle who are trying to minimize changes. I wouldn't recommend turning it on for greenfield work, because it'll just confuse people used to SQL Server's defaults.)
It's not. The SQL engine will treat this text as two separate statements. TransactionScope is required (or any other form of transaction, e.g. an explicit BEGIN TRAN ... COMMIT in the SQL text if you prefer).
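To illustrate the second option from the answer above, here is a sketch with the transaction written into the batch text itself. The table and column names are placeholders, and THROW assumes SQL Server 2012 or later (use RAISERROR on older versions):

```csharp
// Sketch: explicit BEGIN TRAN / COMMIT inside the batch, no TransactionScope.
using (var conn = new SqlConnection(connectionString)) // connectionString assumed
using (var cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = @"
BEGIN TRY
    BEGIN TRAN;
    DELETE FROM MyTable WHERE Id = 1;          -- placeholder statements
    INSERT INTO MyTable (Id, Name) VALUES (1, 'abc');
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;
    THROW;  -- SQL Server 2012+; rethrows the error to the client
END CATCH;";
    cmd.ExecuteNonQuery();
}
```

Either statement failing rolls back both, and the error still surfaces as a SqlException on the client.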
No. As the above answers say, the command (as opposed to the individual statements within it) will not be run inside a transaction.
It's easy to verify with sample code:
create table t1
(
Id int not null,
Name text
)
using (var conn = new SqlConnection(...))
using (var cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = @"
insert into t1 values (1, 'abc');
insert into t1 values (null, 'pqr');
";
    cmd.ExecuteNonQuery();
}
The second statement will fail. But the first statement will execute and you'll have a row in the table.

Calling two stored procedures in the same TransactionScope()

I have two stored procedures and calling both of them in the same TransactionScope as follows.
The first - SPInsert() - inserts a new row into Table A
The second - SPUpdate() - updates the recently inserted row in the Table A
My question is: even though I have put a break point before the second stored procedure is called, I am unable to see the first stored procedure's row in the table until the TransactionScope completes.
Am I doing something wrong?
using (var transactionScope = new TransactionScope())
{
// Call and execute stored procedure 1
SPInsert();
// Call and execute stored procedure 2
SPUpdate();
transactionScope.Complete();
}
In detail:
I put a break point on SPUpdate, right after SPInsert, because I want to check in SQL whether the row has been inserted. But when I run a query against the table, it keeps executing and never finishes; it seems the table is not accessible at that moment. How, then, would I check whether the row has been inserted before the second stored procedure is called?
Because you are in a transaction, by design and by default SQL Server won't show uncommitted operations to a different session; a query from another session blocks on the open transaction's locks instead, which is why your check query never finishes. This is why you cannot see the uncommitted operations.
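If the goal is just to inspect the in-flight rows from a second session while debugging, one option is a dirty read, sketched below. TableA and the connection string are placeholders standing in for the real names:

```csharp
// Debugging sketch: NOLOCK opts this query into reading uncommitted rows
// instead of blocking on the open transaction's locks.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT * FROM TableA WITH (NOLOCK);", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Rows seen here are uncommitted and may still be rolled back.
        }
    }
}
```

The equivalent in a query window is SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED before the SELECT. Either way, treat what you see as provisional.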

T-SQL Equivalent of .NET TransactionScopeOption.Suppress

In my .NET code, inside a database transaction (using TransactionScope), I could include a nested block with TransactionScopeOption.Suppress, which ensures that the commands inside the nested block are committed even if the outer block rolls back.
Following is a code sample:
using (TransactionScope txnScope = new TransactionScope(TransactionScopeOption.Required))
{
db.ExecuteNonQuery(CommandType.Text, "Insert Into Business(Value) Values('Some Value')");
using (TransactionScope txnLogging = new TransactionScope(TransactionScopeOption.Suppress))
{
db.ExecuteNonQuery(CommandType.Text, "Insert Into Logging(LogMsg) Values('Log Message')");
txnLogging.Complete();
}
// Something goes wrong here. Logging is still committed
txnScope.Complete();
}
I was trying to find if this could be done in T-SQL. A few people have recommended OPENROWSET, but it doesn't look very 'elegant' to use. Besides, I think it is a bad idea to put connection information in T-SQL code.
I've used SQL Service Broker in the past, but it also uses transactional messaging, which means a message is not posted to the queue until the database transaction is committed.
My requirement: Our application stored procedures are being fired by some third party application, within an implicit transaction initiated outside stored procedure. And I want to be able to catch and log any errors (in a database table in the same database) within my stored procedures. I need to re-throw the exception to let the third party app rollback the transaction, and for it to know that the operation has failed (and thus do whatever is required in case of a failure).
You can set up a loopback linked server with the "remote proc transaction promotion" option set to false and then access it in T-SQL, or use a CLR procedure in SQL Server to create a new connection outside the transaction and do your work there.
Both methods are suggested in How to create an autonomous transaction in SQL Server 2008.
Both methods involve creating new connections. There is an open Connect item requesting that this functionality be provided natively.
Values in a table variable exist beyond a ROLLBACK.
So in the following example, all the rows that were going to be deleted can be inserted into a persisted table and queried later on thanks to a combination of OUTPUT and table variables.
-- First, create our table
CREATE TABLE [dbo].[DateTest] ([Date_Test_Id] INT IDENTITY(1, 1), [Test_Date] datetime2(3));
-- Populate it with 15,000,000 rows
-- from 1st Jan 1900 to 1st Jan 2017.
INSERT INTO [dbo].[DateTest] ([Test_Date])
SELECT
TOP (15000000)
DATEADD(DAY, 0, ABS(CHECKSUM(NEWID())) % 42734)
FROM [sys].[messages] AS [m1]
CROSS JOIN [sys].[messages] AS [m2];
BEGIN TRAN;
BEGIN TRY
DECLARE @logger TABLE ([Date_Test_Id] INT, [Test_Date] DATETIME);
-- Delete every 1000th row
DELETE FROM [dbo].[DateTest]
OUTPUT deleted.Date_Test_Id, deleted.Test_Date INTO @logger
WHERE [Date_Test_Id] % 1000 = 0;
-- Make it fail
SELECT 1/0
-- So this will never happen
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRAN
SELECT * INTO dbo.logger FROM @logger;
END CATCH;
SELECT * FROM dbo.logger;
DROP TABLE dbo.logger;

Using Linq, I need to delete all of the rows from a table and then enter about a thousand new rows. How can I make this whole thing a transaction?

I need to empty a table and then enter around 1000 rows into it. I need for the whole thing to be a transaction, however, so I'm not stuck with an empty (or partially empty) table if any of the inserts fail.
So I experimented with the code below, where the insert (.Add) will intentionally fail. When running it, however, the call to the delete stored procedure (prDeleteFromUserTable) does not roll back with the transaction. I'm left with an empty table and no inserts.
using (var context = new Entities(_strConnection))
{
using (var transaction = new TransactionScope())
{
//delete all rows in the table
context.prDeleteFromUserTable();
//add a row, which I intentionally made fail to test the transaction
context.UserTable.Add(row);
context.SaveChanges();
//end the transaction
transaction.Complete();
}
}
How would I accomplish this using Linq-to-SQL?
LINQ (Language Integrated Query) is for queries and is not designed for bulk deletion and insertion. A good solution is to use plain SQL to delete all rows (DELETE FROM myTable) and then SqlBulkCopy for the 1000 inserts.
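A rough sketch of that suggestion, wrapping both steps in one TransactionScope so a failed bulk insert also rolls back the delete. myTable, the connection string, and the DataTable of new rows are placeholder assumptions:

```csharp
// Requires references to System.Transactions and System.Data.SqlClient.
DataTable newRows = BuildRows(); // hypothetical helper producing ~1000 rows

using (var scope = new TransactionScope())
using (var conn = new SqlConnection(connectionString))
{
    conn.Open(); // opened inside the scope, so it enlists in the ambient transaction

    using (var del = new SqlCommand("DELETE FROM myTable;", conn))
    {
        del.ExecuteNonQuery();
    }

    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "myTable";
        bulk.WriteToServer(newRows); // participates in the same transaction
    }

    scope.Complete(); // if we never reach this line, both steps roll back
}
```

Because the connection is opened inside the TransactionScope, both the DELETE and the bulk copy enlist in the same ambient transaction, which is exactly the all-or-nothing behavior the question asks for.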

lock table after BeginTransaction MySql Transaction in c#.net

How do I restrict other users from updating or inserting into a table after a certain transaction has begun?
I tried this :
MySqlConnection con = new MySqlConnection("server=localhost;database=data;user=root;pwd=;");
con.Open();
MySqlTransaction trans = con.BeginTransaction();
try
{
string sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','483', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',-233.22,1,'asadfsaf','Bank OD A/c','Receipt','4274',1173,'N')";
new MySqlCommand(sql, con, trans).ExecuteNonQuery();
sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','4274', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',100,0,'asadfsaf','Sundry Creditors','Receipt','483',1173,'N')";
new MySqlCommand(sql, con, trans).ExecuteNonQuery();
sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','427', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',133.22,0,'asadfsaf','Sundry Creditors','Receipt','483',1173,'N')";
new MySqlCommand(sql, con, trans).ExecuteNonQuery();
trans.Commit();
}
catch (Exception ex)
{
trans.Rollback();
}
finally
{
con.Close();
}
but this still allows other sessions to insert rows after BeginTransaction.
BeginTransaction does not mean "your transaction has started and everything is locked". It just informs the RDBMS of your intent to initiate a transaction, and that everything you do from now on should and must be considered atomic.
This means that you could call BeginTransaction and I could still delete all data from all tables in your database, and the RDBMS would happily let me do it. Hopefully it would not let me drop the DB, because you have an open connection to it; however, you never know these days. There might be some undocumented features I am not aware of.
Atomic means any action or set of actions must be performed as one: if any one of them fails, then all of them fail. It is an all-or-nothing concept.
It looks like you are inserting three rows into a table. If your table is empty or has a very low number of rows, the engine might lock the whole table, depending on the lock escalation rules of your RDBMS. However, if it is a large or partitioned table, lock escalation might not produce a table lock, so it might still be possible for multiple transactions to insert rows into your table at the same time. It all depends on how the RDBMS handles the situation and how your data model is structured.
Now to answer your question:
HINT - Look for a way to lock the entire table before you start inserting data.
However, locking a whole table is usually not good practice; I am assuming you have a reasonable reason to do it.
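As one possible sketch of that hint, assuming InnoDB and the MySql.Data provider: LOCK TABLES ... WRITE blocks other sessions' writes, but it implicitly commits any transaction already open on the connection, so take the lock first and manage the transaction manually with autocommit disabled:

```csharp
// Sketch only: table lock + manual transaction, per the MySQL manual's
// recommended LOCK TABLES pattern (SET autocommit = 0, not BeginTransaction).
using (var con = new MySqlConnection("server=localhost;database=data;user=root;pwd=;"))
{
    con.Open();
    new MySqlCommand("SET autocommit = 0;", con).ExecuteNonQuery();
    new MySqlCommand("LOCK TABLES transaction_ledger WRITE;", con).ExecuteNonQuery();
    try
    {
        // ... run the three INSERTs from the question here ...
        new MySqlCommand("COMMIT;", con).ExecuteNonQuery();
    }
    catch
    {
        new MySqlCommand("ROLLBACK;", con).ExecuteNonQuery();
        throw;
    }
    finally
    {
        new MySqlCommand("UNLOCK TABLES;", con).ExecuteNonQuery();
    }
}
```

While the WRITE lock is held, other sessions' inserts and updates on transaction_ledger wait until UNLOCK TABLES, which is the blocking behavior the question asks for, at the cost of serializing all writers.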
Hope this helps.
