I've been Googling for an answer for the past few hours, but haven't found an answer, so I'm hoping someone here can point me in the right direction.
I am wondering how to do a dirty read with an EF DbContext (Code-First) within a TransactionScope. For example:
DbContext context = new MyEntities();
using(TransactionScope scope = new TransactionScope())
{
context.SomeTable.Add(someObject);
context.SaveChanges();
var objects = context.SomeTable.Where(...).Select(...); //times out here on the read, because the write above locks the tables
//modify each object
context.SaveChanges();
scope.Complete(); //transaction is a success only if the entire batch succeeds
}
I have tried wrapping the read call with the following:
using(TransactionScope scope2 = new TransactionScope(TransactionScopeOption.RequiresNew, new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
var objects = context.SomeTable.Where(...).Select(...); //still times out here on the read
}
What is the proper approach?
It has finally clicked for me what the problem is: EF is not integrating with TransactionScope the way LINQ to SQL (L2S) nicely does. This means that EF opens and closes the connection for each operation that requires the server (saving changes or querying).
This gets you distributed transactions and a distributed deadlock.
To solve this, manually open and close the EF StoreConnection to ensure that there is exactly one connection for the duration of the transaction.
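For a Code-First DbContext, that means opening Database.Connection yourself before doing any work inside the scope. A minimal sketch, assuming the MyEntities context and someObject from the question (the Flag filter is a placeholder for your real query):
using (var context = new MyEntities())
using (var scope = new TransactionScope())
{
    // Open the store connection up front so EF reuses this one connection
    // for every operation in the scope instead of opening/closing per call,
    // which is what escalates to a distributed transaction.
    context.Database.Connection.Open();
    context.SomeTable.Add(someObject);
    context.SaveChanges();
    // The read now runs on the same connection and the same local
    // transaction, so it can see the uncommitted insert above.
    var objects = context.SomeTable.Where(o => o.Flag == 0).ToList();
    context.SaveChanges();
    scope.Complete();
}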
You can set the IsolationLevel for TransactionScope like this...
var transactionOptions = new System.Transactions.TransactionOptions();
transactionOptions.Timeout = new TimeSpan(0, 0, 30);
transactionOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;
using (var transactionScope = new System.Transactions.TransactionScope(System.Transactions.TransactionScopeOption.Required, transactionOptions))
{
...
}
You can just execute alter database your_db_name set READ_COMMITTED_SNAPSHOT on; (available since SQL Server 2005) on the server, and readers won't be blocked by writers (assuming the reading transaction uses the read committed isolation level).
Related
There is a performance and locking issue when using EF for an update-from-query case on MSSQL 2008. So I set the ReadUncommitted transaction isolation level, hoping to resolve it, like this:
Before
using (MyEntities db = new MyEntities())
{
// large dataset
var data = from _Contact in db.Contact where _Contact.MemberId == 13 select _Contact;
foreach (var item in data)
item.Flag = 0;
// Probably db lock
db.SaveChanges();
}
After
using (var scope =
new TransactionScope(TransactionScopeOption.RequiresNew,
new TransactionOptions() { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
using (MyEntities db = new MyEntities())
{
// large dataset but with NOLOCK
var data = from _Contact in db.Contact where _Contact.MemberId == 13 select _Contact;
foreach (var item in data)
item.Flag = 0;
// Try avoid db lock
db.SaveChanges();
}
}
We used SQL Profiler to trace. However, we got these scripts, in this order (we expected read-uncommitted for the first script):
Audit Login
set transaction isolation level read committed
SP:StmtStarting
SELECT
[Extent1].[ContactId] AS [ContactId],
[Extent1].[MemberId] AS [MemberId]
FROM [dbo].[Contact] AS [Extent1]
WHERE [Extent1].[MemberId] = @p__linq__0
Audit Login
set transaction isolation level read uncommitted
Though I could resend this request to get the right order (read-uncommitted then shows for the following requests on the same SPID), I wonder why it sent the read-uncommitted command after the read-committed command, and how to fix this using EF and TransactionScope? Thanks.
I think this is a red herring caused by relying on the Audit Login event. It is not showing the moment when the client tells the server 'set transaction isolation level read uncommitted'. It is showing you what the isolation level is later on, when that connection is picked out of the pool and reused.
I verified this by adding Pooling=false to my connection string. Then, Audit Login always shows transaction isolation level read committed.
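For reference, that is just a connection-string flag (server and database names here are illustrative):
// Pooling disabled: every Open creates a genuinely new connection,
// so no leftover isolation level can be inherited from the pool.
var connectionString =
    "Data Source=.;Initial Catalog=MyDb;Integrated Security=True;Pooling=false";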
I have so far found no way, in SQL Profiler, of seeing the moment when EF sets the transaction level, nor any explicit begin tran.
I can kind of confirm that it is being set somewhere, by reading and logging the level:
const string selectIsolationLevel = @"SELECT CASE transaction_isolation_level WHEN 0 THEN 'Unspecified' WHEN 1 THEN 'ReadUncommitted' WHEN 2 THEN 'ReadCommitted' WHEN 3 THEN 'Repeatable' WHEN 4 THEN 'Serializable' WHEN 5 THEN 'Snapshot' END AS TRANSACTION_ISOLATION_LEVEL FROM sys.dm_exec_sessions where session_id = @@SPID";
static void ReadUncommitted()
{
using (var scope =
new TransactionScope(TransactionScopeOption.RequiresNew,
new TransactionOptions{ IsolationLevel = IsolationLevel.ReadUncommitted }))
using (myEntities db = new myEntities())
{
Console.WriteLine("Read is about to be performed with isolation level {0}",
db.Database.SqlQuery(typeof(string), selectIsolationLevel).Cast<string>().First()
);
var data = from _Contact in db.Contact where _Contact.MemberId == 13 select _Contact; // large results but with nolock
foreach (var item in data)
item.Flag = 0;
//Using Nuget package https://www.nuget.org/packages/Serilog.Sinks.Literate
//logger = new Serilog.LoggerConfiguration().WriteTo.LiterateConsole().CreateLogger();
//logger.Information("{#scope}", scope);
//logger.Information("{#scopeCurrentTransaction}", Transaction.Current);
//logger.Information("{#dbCurrentTransaction}", db.Database.CurrentTransaction);
//db.Database.ExecuteSqlCommand("-- about to save");
db.SaveChanges(); // Try avoid db lock
//db.Database.ExecuteSqlCommand("-- finished save");
//scope.Complete();
}
}
(I say ‘kind of’ because the statements each run in their own session)
Perhaps this is a long way of saying, yes EF transactions work correctly even if you can't prove it via Profiler.
According to the following note in the ADO.NET documentation topic Snapshot Isolation in SQL Server, the isolation level is not bound to the transaction scope as long as the underlying connection is pooled:
If a connection is pooled, resetting its isolation level does not reset the isolation level at the server. As a result, subsequent connections that use the same pooled inner connection start with their isolation levels set to that of the pooled connection. An alternative to turning off connection pooling is to set the isolation level explicitly for each connection.
Thus I conclude that, until SQL Server 2012, setting the isolation level to anything other than ReadCommitted requires either turning off connection pooling when creating the SqlConnection in question, or setting the isolation level explicitly on each connection to avoid unexpected behavior, including deadlocks. Alternatively, the connection pool could be cleared by calling the ClearPool method, but since this method is bound neither to the transaction scope nor to the underlying connection, I don't think it's appropriate when several connections run simultaneously against the same pooled inner connection.
Referencing the post SQL Server 2014 resetting isolation level in the SQL forum and my own tests, such workarounds are obsolete when using SQL Server 2014 and a client driver with TDS 7.3 or higher.
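For older servers, the "set it explicitly for each connection" workaround could look like this (a sketch; the connection string is illustrative and System.Data.SqlClient is assumed):
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // Reset the level explicitly so whatever the pooled connection last
    // used cannot leak into this unit of work.
    using (var cmd = new SqlCommand(
        "SET TRANSACTION ISOLATION LEVEL READ COMMITTED;", connection))
    {
        cmd.ExecuteNonQuery();
    }
    // ... run the actual commands on this connection ...
}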
I think a better solution is to perform the update with a single direct query (instead of selecting entities and updating them one by one). To keep working with objects rather than SQL strings, you can use EntityFramework.Extended:
db.Contact.Update(c => c.MemberId == 13, c => new Contact { Flag = 0 });
This should generate something like UPDATE Contact SET Flag = 0 WHERE MemberId = 13, which is much faster than your current solution.
If I remember correctly, this generates its own transaction. If it must be executed in a transaction with other queries, TransactionScope can still be used (you will have two transactions).
Also, isolation level can remain untouched (ReadCommitted).
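If you would rather not take the EntityFramework.Extended dependency, a plain parameterized statement achieves the same single round-trip (a sketch using EF's ExecuteSqlCommand; MyEntities is the context from the question):
using (var db = new MyEntities())
{
    // One set-based UPDATE, one short implicit transaction, no change tracking.
    db.Database.ExecuteSqlCommand(
        "UPDATE Contact SET Flag = 0 WHERE MemberId = @p0", 13);
}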
[EDIT]
Chris's analysis shows exactly what happens. To make it even more relevant, the following code shows the difference inside and outside of TransactionScope:
using (var db = new myEntities())
{
// this shows ReadCommitted
Console.WriteLine($"Isolation level outside TransactionScope = {db.Database.SqlQuery(typeof(string), selectIsolationLevel).Cast<string>().First()}");
}
using (var scope =
new TransactionScope(TransactionScopeOption.RequiresNew,
new TransactionOptions() { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
using (myEntities db = new myEntities())
{
// this shows ReadUncommitted
Console.WriteLine($"Isolation level inside TransactionScope = {db.Database.SqlQuery(typeof(string), selectIsolationLevel).Cast<string>().First()}");
var data = from _Contact in db.Contact where _Contact.MemberId == 13 select _Contact; // large results but with nolock
foreach (var item in data)
item.Flag = 0;
db.SaveChanges(); // Try avoid db lock
}
// this should be added to actually Commit the transaction. Otherwise it will be rolled back
scope.Complete();
}
Coming back to the actual problem (getting deadlocks): if we take a look at what Profiler outputs during the whole thing, we see something like this (GOs removed):
BEGIN TRANSACTION
SELECT <all columns> FROM Contact
exec sp_reset_connection
exec sp_executesql N'UPDATE Contact
SET [Flag] = @0
WHERE ([ContactId] = @1)
',N'@0 nvarchar(1000),@1 int',@0=N'1',@1=1
-- lots and lots of other UPDATEs like above
-- or ROLLBACK if scope.Complete(); is missed
COMMIT
This has two disadvantages:
Many round-trips - many queries are issued against the database, which puts more pressure on the database engine and also takes much longer for the client
Long transaction - long transactions should be avoided, as a way of minimizing deadlocks
So, the suggested solution should work better in your particular case (simple update).
In more complex cases, changing the isolation level might be needed.
I think that, if one deals with large-scale data processing (select millions of rows, do something, update them back, etc.), a stored procedure might be the solution, since everything executes server-side.
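In that case the call from EF stays trivial (a sketch; dbo.ResetContactFlags is a hypothetical procedure that would contain the set-based logic):
using (var db = new MyEntities())
{
    // All the heavy lifting happens server-side inside the procedure.
    db.Database.ExecuteSqlCommand(
        "EXEC dbo.ResetContactFlags @MemberId = @p0", 13);
}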
I am writing a Web API Rest service that does operations on a distinct set of entities. I have broken them down like this:
db = new DBEntities();
using (var dbContextTransaction = db.Database.BeginTransaction())
{
try
{
ProcessClient();
ProcessClientPerson();
ProcessGuardian();
ProcessAddress();
ProcessEmail();
ProcessPhones();
ProcessChildren();
dbContextTransaction.Commit();
}
catch (Exception ex)
{
dbContextTransaction.Rollback();
etc.
Following the advice that data contexts should live as short as possible, each of the methods creates its own data context, calls SaveChanges(), and disposes of it at the end:
private void ProcessClient()
{
db = new DBEntities();
....
This obviously does not work - a transaction context created this way is tied to the data context. If something goes wrong in one of the entity operations, only that operation is rolled back (implicitly), but the overarching transaction is not.
I found this approach for creating a transaction outside of EF, but I am wondering if I should follow it or if I should just let my data context live for the duration of the transaction and keep the transaction inside of EF!?
I am not looking for an opinion, but for data around stability, performance, etc.
There is no immediate need to keep contexts short-lived. You can do that but you don't have to.
Over time entities will accumulate in a context. If you risk running out of memory it can be necessary to let go of a context.
Otherwise, the usual procedure is to keep the context alive for the duration of the logical unit of work. Here, that UOW is all those methods in their entirety.
This also makes transaction management easier (as you already found out).
dbContextTransaction.Rollback();
This is an anti-pattern. Simply don't commit in case of error.
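A minimal sketch of that shape: one context, one transaction, no explicit Rollback (disposing an uncommitted transaction rolls it back; the method signatures are adjusted here to take the shared context):
using (var db = new DBEntities())
using (var tx = db.Database.BeginTransaction())
{
    // Every step works on the shared context instead of newing up its own.
    ProcessClient(db);
    ProcessClientPerson(db);
    // ... remaining steps ...
    tx.Commit(); // never reached if an exception is thrown, so Dispose rolls everything back
}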
I have mixed feelings about this. I am working against a legacy database that has no foreign key constraints, and I am inserting, updating, and deleting between 20 and 30 objects in one of these service calls.
The problem is that I need to call SaveChanges() frequently to get the identity column values that will become foreign keys.
On the other hand, I have to be able to roll back everything if there is a problem three layers down, so a single large transaction is needed.
For some reason that I have not been able to determine, calling SaveChanges repeatedly on the same data context would result in errors saying that the connection state was open. So I ended up giving each method its own data context anyway:
var scope = new TransactionScope(TransactionScopeOption.RequiresNew,
new TransactionOptions() { IsolationLevel = IsolationLevel.ReadUncommitted });
using (scope)
{
try
{
ProcessClient();
ProcessClientPerson();
ProcessGuardian();
ProcessAddress();
ProcessEmail();
ProcessPhones();
ProcessChildren();
scope.Complete();
}
catch (System.Data.Entity.Validation.
DbEntityValidationException ex)
{
[...] handle validation errors etc [...]
}
}
with each section doing basically this, once stripped down to the bare essentials:
private void ProcessClient()
{
using (MyDBEntities db = new MyDBEntities())
{
[...] doing client stuff [...]
aClient.LastUpdated = DateTime.Now;
db.AddOrUpdate(db, db.Clients, aClient, aClient.ClientID);
db.SaveChanges();
ClientId = aClient.ClientID; // now I can use this to form FKs
}
}
I have mixed feelings about locking, because on my development VM the transaction runs for 1-2 seconds, and this is a production database with office staff and online customers doing CRUD transactions through web applications at the same time.
Unrelated, but helpful for my AddOrUpdate method was this blog post.
I want to update two FormViews within a transaction. If one of them fails, the other should fail too. The FormViews have their own entity data sources.
protected void button1_Click(object sender, EventArgs e)
{
formview1.UpdateItem(true);
formview2.UpdateItem(true);
}
Ok so this may not be the simplest thing in the world.
The basic answer is that yes, you can do it. The code would be similar to this, if the UpdateItem method opens the database connection:
using (TransactionScope scope = new TransactionScope())
{
formview1.UpdateItem(true);
formview2.UpdateItem(true);
scope.Complete();
}
If, on the other hand, the connections are already open by the time UpdateItem is called, then you will need to do something more like:
using (TransactionScope scope = new TransactionScope())
{
formview1.Connection.EnlistTransaction(Transaction.Current);
formview2.Connection.EnlistTransaction(Transaction.Current);
formview1.UpdateItem(true);
formview2.UpdateItem(true);
scope.Complete();
}
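Note that FormView itself does not expose a Connection property, so the snippet above assumes you have some way to reach the underlying connection in your own data-access code. With a raw SqlConnection the pattern looks like this (sketch; the connection string is illustrative):
using (var scope = new TransactionScope())
using (var connection = new SqlConnection(connectionString))
{
    // A connection opened inside the scope enlists automatically.
    connection.Open();
    // One opened before the scope existed must be enlisted by hand:
    // connection.EnlistTransaction(Transaction.Current);
    // ... execute both updates on this connection ...
    scope.Complete();
}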
I'm having trouble suppressing part of a transaction using Sql Server CE 4 with Entity Framework and System.Transactions.TransactionScope.
The simplified code below is from a unit test demonstrating the problem.
The idea is to enable the innerScope block (with no transaction) to succeed or fail without affecting the outerScope block (the "ambient" transaction). This is the stated purpose of TransactionScopeOption.Suppress.
However, the code fails because it seems that the entire SomeTable table is locked by the first insert in outerScope. At the point indicated in the code, this error is thrown:
"SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops. The default lock timeout can be increased in the connection string using the ssce: default lock timeout property. [ Session id = 2,Thread id = 2248,Process id = 13516,Table name = SomeTable,Conflict type = x lock (x blocks),Resource = PAG (idx): 1046 ]"
[TestMethod()]
[DeploymentItem("MyLocalDb.sdf")]
public void MyLocalDb_TransactionSuppressed()
{
int count = 0;
// This is the ambient transaction
using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.Required))
{
using (MyObjectContext outerContext = new MyObjectContext())
{
// Do something in the outer scope
outerContext.Connection.Open();
outerContext.AddToSomeTable(CreateSomeTableRow());
outerContext.SaveChanges();
try
{
// Ambient transaction is suppressed for the inner scope of SQLCE operations
using (TransactionScope innerScope = new TransactionScope(TransactionScopeOption.Suppress))
{
using (MyObjectContext innerContext = new MyObjectContext())
{
innerContext.Connection.Open();
// This insert will work
innerContext.AddToSomeTable(CreateSomeTableRow());
innerContext.SaveChanges(); // ====> EXCEPTION THROWN HERE
// There will be other, possibly failing operations here
}
innerScope.Complete();
}
}
catch { }
}
outerScope.Complete();
}
count = GetCountFromSomeTable();
// The insert in the outer scope should succeed, and so should the one from the inner scope
Assert.AreEqual(2, count);
}
So, it seems that "a transaction in a transaction scope executes with isolation level set to Serializable", according to http://msdn.microsoft.com/en-us/library/ms172001
However, using the following code snippet to change the isolation level of the TransactionScope does not help:
public void MyLocalDb_TransactionSuppressed()
{
TransactionOptions opts = new TransactionOptions();
opts.IsolationLevel = IsolationLevel.ReadCommitted;
int count = 0;
// This is the ambient transaction
using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.Required, opts))
...
The same exception is thrown at the same location.
It seems the only way to avoid this is to call outerScope.Complete() before entering the innerScope block. But this would defeat the purpose.
What am I missing here?
Thanks.
AFAIK SQL Server Compact does not support nested transactions.
And why do you do it this way? If I look at your code, there is no difference between running the second transaction scope inside the first one and running them in sequence.
IMHO this is not a problem of SQL Compact, TransactionScope or isolation level. This is a problem of your wrong application logic.
Each SaveChanges runs in a transaction - either the outer transaction defined by the TransactionScope, or an inner DbTransaction. Even if it did not create one, every database command has its own implicit transaction. If you use Suppress on the inner code block, you are creating two concurrent transactions trying to insert into the same table; moreover, the first cannot complete without completing the second, and the second cannot complete without completing the first => deadlock.
The reason is that an insert command always locks part of the table, not allowing new inserts until it is committed or rolled back. I'm not sure whether this can be avoided by changing the transaction isolation level - if it can, you will most probably need ReadUncommitted.
Under what circumstances can code wrapped in a System.Transactions.TransactionScope still commit, even though an exception was thrown and the outermost scope never had commit called?
There is a top-level method wrapped in using (var tx = new TransactionScope()), and that calls methods that also use TransactionScope in the same way.
I'm using typed DataSets with associated TableAdapters. Could it be that the commands in the adapters aren't enlisting for some reason? Do any of you know how one might check whether they are enlisting in the ambient TransactionScope or not?
The answer turned out to be that I was creating the TransactionScope object after the SqlConnection object.
I moved from this:
using (new ConnectionScope())
using (var transaction = new TransactionScope())
{
// Do something that modifies data
transaction.Complete();
}
to this:
using (var transaction = new TransactionScope())
using (new ConnectionScope())
{
// Do something that modifies data
transaction.Complete();
}
and now it works!
So the moral of the story is to create your TransactionScope first.
The obvious scenario would be where a new (RequiresNew) / null (Suppress) transaction is explicitly specified - but there is also a timeout/unbinding glitch that can cause connections to miss the transaction. See this earlier post (the fix is just a connection-string change), or full details.
Be aware of how TransactionScope works:
It sets the property System.Transactions.Transaction.Current at the beginning of the using scope and then sets it back to the previous value at the end of the using scope.
The previous value depends on where the given scope is declared; it can be inside another scope.
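A tiny illustration of that save/restore behavior (sketch):
Console.WriteLine(Transaction.Current == null);             // True: no ambient transaction yet
using (var scope = new TransactionScope())
{
    Console.WriteLine(Transaction.Current.IsolationLevel);  // Serializable, the default
    scope.Complete();
}
Console.WriteLine(Transaction.Current == null);             // True again: previous value restored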
You can modify code like this:
using (var sqlConnection = new ConnectionScope())
using (var transaction = new TransactionScope())
{
sqlConnection.EnlistTransaction(System.Transactions.Transaction.Current);
// Do something that modifies data
transaction.Complete();
}
I show this possibility for those whose code is more complicated and who cannot simply change it to open the DB connection first.
This example (C#, .NET Framework 4.7.1) shows how we can persist to the database even though the code is wrapped in a TransactionScope. The first insert will be rolled back, and the second insert will be committed outside any transaction.
See my related post, where I ask for help in how to detect this.
using (var transactionScope = new TransactionScope())
{
using (var connection = new SqlConnection("Server=localhost;Database=TestDB;Trusted_Connection=True"))
{
connection.Open();
new SqlCommand($"INSERT INTO TestTable VALUES('This will be rolled back later')", connection).ExecuteNonQuery();
var someNestedTransaction = connection.BeginTransaction();
someNestedTransaction.Rollback();
new SqlCommand($"INSERT INTO TestTable VALUES('This is not in a transaction, and will be committed')", connection).ExecuteNonQuery();
}
throw new Exception("Exiting.");
transactionScope.Complete();
}