Calls into my web service use the following code to ensure that the caller has a valid session. If a valid session is found, it updates the session details and saves the changes. All simple enough, and it works fine.
// Create the Entity Framework context
using (MyContext ctx = CreateMyContext())
{
    // Get the user session for the client session
    UserSession session = (from us in ctx.UserSessions.Include("UserEntity")
                           where us.SessionId == callerSessionId
                           select us).FirstOrDefault<UserSession>();
    if (session == null)
        return false;
    else
    {
        // Update session details
        session.Calls++;
        session.LastAccessed = DateTime.Now.Ticks;
        Console.WriteLine("Call by User:{0}", session.UserEntity.Name);

        // Save session changes back to the server
        ctx.SaveChanges();
        return true;
    }
}
All works fine until the same caller, and hence the same session, makes multiple concurrent calls (which is perfectly valid). In that case I sometimes get a deadlock. Using SQL Server Profiler I can see the following is happening.
Caller A performs the select and acquires a shared lock on the user session. Caller B performs the select and acquires a shared lock on the same user session. Caller A cannot perform its update because of Caller B's shared lock. Caller B cannot perform its update because of caller A's shared lock. Deadlock.
This seems like a simple and classic deadlock scenario, and there must be a simple way to resolve it. Surely almost all real-world applications have this same problem. But none of the Entity Framework books I have mention anything about deadlocks.
I found an article that talks about this HERE. It basically sounds like you can start and stop a transaction that surrounds your EF call... The blog gives the following code example, so credit goes to Diego B Vega... The blog post also links to another blog with additional information.
using (var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
{
    // do something with EF here
    scope.Complete();
}
Will the following work for you?
using (MyContext ctx = CreateMyContext())
{
    ctx.Database.ExecuteSqlCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");

    // Get the user session for the client session
    ...
}
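If you are on EF 6 or later, a related option (a sketch on my part, not from the original answer) is to start an explicit transaction at the isolation level you want, so the setting is scoped to that one unit of work rather than being issued as a raw SET statement:

// Hedged sketch: EF 6's Database.BeginTransaction accepts an isolation level,
// so the read-uncommitted behaviour is tied to this explicit transaction.
using (MyContext ctx = CreateMyContext())
using (var tx = ctx.Database.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted))
{
    // ... query and update the session as in the original code ...
    ctx.SaveChanges();
    tx.Commit();
}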
You probably have each session running inside a transaction, in which case they will deadlock as both transactions try to upgrade from a shared lock to an exclusive lock when they attempt to save. This seems to be not well documented, since EF favours optimistic concurrency.
One way out of this is to provide an UPDLOCK hint using something like this:
return context.TestEntities
    .SqlQuery("SELECT TOP 1 Id, Value FROM TestEntities WITH (UPDLOCK)")
    .Single();
see: entity framework 6 and pessimistic concurrency
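Applied to the session lookup from the question, a minimal sketch might look like the following (the table and column names are assumptions on my part; adjust them to the real schema):

// Hedged sketch: take the update lock when reading the row so the later
// UPDATE does not have to upgrade a shared lock. Raw SQL bypasses the
// Include("UserEntity") eager load, so load that navigation separately if needed.
UserSession session = ctx.UserSessions
    .SqlQuery("SELECT * FROM UserSessions WITH (UPDLOCK) WHERE SessionId = @p0",
              callerSessionId)
    .FirstOrDefault();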
Related
I am writing a Web API Rest service that does operations on a distinct set of entities. I have broken them down like this:
db = new DBEntities();
using (var dbContextTransaction = db.Database.BeginTransaction())
{
    try
    {
        ProcessClient();
        ProcessClientPerson();
        ProcessGuardian();
        ProcessAddress();
        ProcessEmail();
        ProcessPhones();
        ProcessChildren();
        dbContextTransaction.Commit();
    }
    catch (Exception ex)
    {
        dbContextTransaction.Rollback();
        etc.
Following the advice that data contexts should be as short-lived as possible, each of the methods creates its own data context, calls SaveChanges(), and disposes of it at the end:
private void ProcessClient()
{
    db = new DBEntities();
    ....
This obviously does not work - a transaction context created this way is tied to the data context. If something goes wrong in one of the entity operations, only that operation is rolled back (implicitly), but the overarching transaction is not.
I found this approach for creating a transaction outside of EF, but I am wondering if I should follow it or if I should just let my data context live for the duration of the transaction and keep the transaction inside of EF!?
I am not looking for an opinion, but for data around stability, performance, etc.
There is no immediate need to keep contexts short-lived. You can do that but you don't have to.
Over time entities will accumulate in a context. If you risk running out of memory it can be necessary to let go of a context.
Otherwise, the usual procedure is to keep the context alive for the duration of the logical unit of work. Here, that UOW is all those methods in their entirety.
This also makes transaction management easier (as you already found out).
dbContextTransaction.Rollback();
This is an anti-pattern. Simply don't commit in case of error.
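Put together, a minimal sketch of that advice (assuming, as an illustration only, that the Process* methods can accept the shared context as a parameter):

// Hedged sketch: one context and one transaction for the whole unit of work.
// No explicit Rollback is needed - disposing an uncommitted transaction rolls it back.
using (var db = new DBEntities())
using (var tx = db.Database.BeginTransaction())
{
    ProcessClient(db);
    ProcessClientPerson(db);
    ProcessGuardian(db);
    ProcessAddress(db);
    ProcessEmail(db);
    ProcessPhones(db);
    ProcessChildren(db);

    db.SaveChanges();
    tx.Commit(); // only reached if nothing threw
}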
I have mixed feelings about this. I am working against a legacy database that has no foreign key constraints, and I am inserting, updating, and deleting between 20 and 30 objects in one of these service calls.
The problem is that I need to call SaveChanges() frequently to get the identity column values that will become foreign keys.
On the other hand, I have to be able to roll back everything if there is a problem three layers down, so a single large transaction is needed.
For some reason that I have not been able to determine, calling SaveChanges repeatedly on the same data context resulted in errors saying the connection state was open, so I ended up giving each method its own data context anyway:
var scope = new TransactionScope(TransactionScopeOption.RequiresNew,
    new TransactionOptions() { IsolationLevel = IsolationLevel.ReadUncommitted });
using (scope)
{
    try
    {
        ProcessClient();
        ProcessClientPerson();
        ProcessGuardian();
        ProcessAddress();
        ProcessEmail();
        ProcessPhones();
        ProcessChildren();
        scope.Complete();
    }
    catch (System.Data.Entity.Validation.DbEntityValidationException ex)
    {
        [...] handle validation errors etc [...]
    }
}
with each section doing basically this, once stripped down to the bare essentials:
private void ProcessClient()
{
    using (MyDBEntities db = new MyDBEntities())
    {
        [...] doing client stuff [...]
        aClient.LastUpdated = DateTime.Now;
        db.AddOrUpdate(db, db.Clients, aClient, aClient.ClientID);
        db.SaveChanges();
        ClientId = aClient.ClientID; // now I can use this to form FKs
    }
}
Mixed feelings about locking, because on my development VM the transaction runs for 1-2 seconds and this is a production database with office staff and online customers doing CRUD transactions through web applications at the same time.
Unrelated, but helpful for my AddOrUpdate method was this blog post.
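That AddOrUpdate is a custom helper rather than anything built into EF; since the blog post itself isn't reproduced here, the following is only a hypothetical reconstruction of what such an extension can look like:

// Hypothetical sketch (the real helper's signature probably differs): insert when the
// key still has its default value, otherwise attach the entity and mark it modified.
// Requires: using System.Data.Entity;
public static void AddOrUpdate<TEntity>(this DbContext db, DbSet<TEntity> set,
    TEntity entity, int key) where TEntity : class
{
    if (key == 0)
        set.Add(entity);                               // new row; identity assigned on SaveChanges
    else
        db.Entry(entity).State = EntityState.Modified; // existing row; attaches and issues an UPDATE
}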
I suspect I don't understand fully what's going on or something weird is happening. (The first case is more likely I guess.)
The big picture:
I'm trying to have a web service perform certain operations asynchronously, as they can be time-consuming and I don't want the client to wait for the operations to finish (it just queries for the results every now and again to see whether the operation is done).
The async code is wrapped in a transaction - in case something goes wrong, I want to be able to rollback any changes.
Unfortunately the last step of the async code is to call a DIFFERENT service which queries the same database.
Despite wrapping the whole thing in a Snapshot transaction, the last step fails as the service cannot read from the database.
For that matter, while the async operation is underway I cannot perform a simple SELECT statement against the database either.
Here's a sample of the code which I'm currently using to test out the transactions (using Entity Framework 5 model first):
using (var transaction = new System.Transactions.TransactionScope(
    System.Transactions.TransactionScopeOption.RequiresNew,
    new System.Transactions.TransactionOptions() { IsolationLevel = System.Transactions.IsolationLevel.Snapshot }))
{
    var db = new DataModelContainer();

    Log test = new Log();
    test.Message = "TEST";
    test.Date = DateTime.UtcNow;
    test.Details = "asd";
    test.Type = "test";
    test.StackTrace = "asd";

    db.LogSet.Add(test);
    db.SaveChanges();

    using (var suppressed = new System.Transactions.TransactionScope(
        System.Transactions.TransactionScopeOption.Suppress))
    {
        var newDb = new DataModelContainer();
        var log = newDb.LogSet.ToArray(); // deadlock here... WHY?
    }

    test = db.LogSet.Where(l => l.Message == "TEST").Single();
    db.LogSet.Remove(test);
    db.SaveChanges();

    transaction.Complete();
}
The code creates a simple Log entry in the database (yeah, I'm playing around at the moment so the values are rubbish). I've set the SQL database to allow snapshot isolation, and to my knowledge reads should still be permitted (these are being tested in this code by using a new, suppressed transaction and a new DataModelContainer). However, I cannot query the LogSet in the suppressed transaction or in SQL Management Studio - the whole table is locked!
So... why? Why is it locked if the transaction scope is defined as such? I've also tried other isolation levels (like ReadUncommitted) and I still cannot query the table.
Could someone please provide an explanation for this behavior?
In SSMS, set your current isolation level to SNAPSHOT and see if that corrects your problem - it's probably set to READ COMMITTED and would therefore still block due to pending updates.
Update:
You can allow READ COMMITTED to access versioned rows DB-wide by altering the following option (and avoiding having to constantly set the current isolation level to SNAPSHOT):
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
Context: web application / ASP.NET 4.0 / EF 4.0 / MSSQLEXPRESS2012
I'm trying to do a simple delete as follows:
if (someObject != null)
{
    if (!context.IsAttached(someObject))
        context.SomeObjects.Attach(someObject);

    context.DeleteObject(someObject);
    context.SaveChanges();
}
The code executes without problem and SaveChanges returns the expected number of rows. But when I try to read the table afterwards it times out (this happens in SSMS as well as in code).
It also seems to corrupt other processes, returning '8501 MSDTC on server is unavailable' and "No process is on the other end of the pipe" on subsequent site access. Which of the three errors happens in the application seems somewhat random, but the timeout in SSMS always happens.
Inserts and updates on the same table execute without problem.
Apart from reading through a zillion posts, I tried wrapping it in try/catch and in a transaction with SaveChanges(System.Data.Objects.SaveOptions.None) and a manual AcceptAllChanges; the debugger shows the correct object just before the delete - everything appears normal until the table read or accessing another page.
I suspect the lock is somehow not released, but other objects in different tables are deleted successfully with the exact same logic. I'm not sure where best to look next - any ideas are greatly appreciated!
I must have made a mistake with the transaction before - the following worked:
var transactionScope = new TransactionScope(
    TransactionScopeOption.RequiresNew,
    new TransactionOptions()
    {
        IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
    });

try
{
    using (transactionScope)
    { ...etc
the following link explains more about the whys of explicit transactions
http://blogs.u2u.be/diederik/post/2010/06/29/Transactions-and-Connections-in-Entity-Framework-40.aspx
Still not sure why it works without a transaction for the other deletes, though.
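For completeness, here is the delete from the question combined with that explicit transaction in one piece (a sketch; I am assuming the elided part simply completes and disposes the scope):

// Hedged sketch: the delete wrapped in an explicit ReadUncommitted TransactionScope.
// Complete() marks success; disposing the scope ends the transaction either way.
using (var transactionScope = new TransactionScope(
    TransactionScopeOption.RequiresNew,
    new TransactionOptions()
    {
        IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
    }))
{
    if (someObject != null)
    {
        if (!context.IsAttached(someObject))
            context.SomeObjects.Attach(someObject);

        context.DeleteObject(someObject);
        context.SaveChanges();
    }

    transactionScope.Complete();
}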
Hi, I am using the Entity Framework 4.1 code-first approach. I have a class MyContainer that inherits DbContext.
I have a process with 7 steps, and each step accesses many repository methods (about 60). This process executes automatically or manually, depending on business logic and user requirements. From a performance point of view, for the automatic process I create the context (i.e. the object of MyContainer) once, pass it to all methods, and dispose of it at the end of the process; this works fine and improved performance. But when the process is executed manually, the same methods are executed and the container is created and disposed of in the method itself, e.g. as below (just rough code):
public bool UpdateProcess(Process process, MyContainer container = null)
{
    bool disposeContainer = false;
    if (container == null)
    {
        container = new MyContainer();
        disposeContainer = true;
    }

    var result = SaveProcess(container, process);
    container.SaveChanges();

    if (disposeContainer)
        container.Dispose();

    return result;
}
For the automatic process, a transaction is created at the beginning of the process and ended at the end of it; for the manual process, the transaction is created in the BLL, in the method that is called according to the user's action on the UI. Now suppose my automatic process is running and the user simultaneously does something on the UI: I get the exception "Transaction (Process ID 65) Was Deadlocked On Lock Resources With Another Process And Has Been Chosen". When UpdateProcess() is called from both the manual and the automatic process at the same time, I get it on container.SaveChanges().
Any help will be highly appreciated.
If I create a transaction scope in this repository method like this:
public bool UpdateProcess(Process process, MyContainer container = null)
{
    bool disposeContainer = false;
    if (container == null)
    {
        container = new MyContainer();
        disposeContainer = true;
    }

    bool result;
    using (var trans = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        result = SaveProcess(container, process);
        container.SaveChanges();
        trans.Complete();
    }

    if (disposeContainer)
        container.Dispose();

    return result;
}
It works fine.
However, I don't want to create the transaction in the repository, as the transaction has already been created in the BLL.
Any help will be appreciated.
The issue (SQL deadlocking) you are having is common to most non-trivial client-database systems and can be very difficult to resolve.
The primary rule when designing code where deadlocks may occur is to assume that they will occur and to design the application to handle them appropriately. This is normally done by resubmitting the transaction, as documented by Microsoft here:
Although deadlocks can be minimized, they cannot be completely avoided. That is why the front-end application should be designed to handle deadlocks.
In order to minimise any deadlocks you are seeing, the normal approach I take is as follows:
1. Run Profiler with the deadlock graph event enabled to capture all deadlocks.
2. Export all deadlocks to text and investigate why they are occurring.
3. Check whether they can be minimised by using table hints and/or reducing the transaction isolation level.
4. See if they can be minimised by other means, e.g. changing the order of operations.
5. Finally, after all of that, make sure you trap for deadlocks any time you communicate with the database and resubmit the transaction / query (see the retry sketch below).
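As a rough illustration of that last step, a retry wrapper along these lines can be used around the unit of work (a sketch on my part; 1205 is SQL Server's deadlock-victim error number, and the attempt count and delay are arbitrary):

// Hedged sketch: resubmit the whole unit of work when this session is chosen
// as the deadlock victim (SqlException.Number == 1205).
// Requires: using System.Data.SqlClient;
// Note: EF may surface the SqlException as the InnerException of a DbUpdateException.
public static void ExecuteWithDeadlockRetry(Action unitOfWork, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            unitOfWork(); // e.g. create the context, do the work, SaveChanges
            return;
        }
        catch (SqlException ex)
        {
            if (ex.Number != 1205 || attempt >= maxAttempts)
                throw;
            // Chosen as deadlock victim: back off briefly and resubmit.
            System.Threading.Thread.Sleep(100 * attempt);
        }
    }
}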
I'm having trouble with TransactionPropagation.NotSupported. I believed that this propagation causes the code to be executed within no transaction. That is, when I mark the specific method, the current transaction will be suspended and the code will be executed without any transaction.
The current version of Spring.NET creates a new transaction instead. See the following code:
[Test]
public void A()
{
    TransactionTemplate template = new TransactionTemplate(TransactionManager)
    {
        PropagationBehavior = TransactionPropagation.NotSupported
    };
    template.Execute(delegate
    {
        Assert.AreEqual(0,
            SessionFactory.GetCurrentSession().Linq<XXX>()
                .Where(t => t.Id.Equals(YYY)).ToList().Count);
        return null;
    });
}
I hoped that this notation would cause the LINQ query to be executed without a transaction and that it would throw an exception. But the log showed that it creates both a new session and a new transaction automatically.
I found this issue when I marked a method with the mentioned annotation, and despite the annotation the LINQ query inside was still executed correctly.
The question is: how can I mark the method so that it doesn't use a transaction at all? I don't want to use propagation Never, as I want the current transaction to be suspended.
My project has a business code flow with transaction handling, and I want to be able to mark certain parts as definitely non-transactional.
You mention being able to tell from the log that a new transaction is started. What log, the database or the application? What database are you using? Some databases won't let you run a query outside a transaction at all, so would just start one internally for you...
Update:
Your issue looks similar to this one:
https://jira.springframework.org/browse/SPRNET-1307?page=com.atlassian.jira.plugin.ext.bamboo%3Abamboo-build-results-tabpanel#issue-tabs
I would make sure you are running a version of Spring.NET that has this fix in it (it looks like v1.3.1 or greater).
Also, you could try manually suppressing the transaction and see if that fixes the behavior:
using (var tx = new TransactionScope(TransactionScopeOption.Suppress))
{
    // make DB call...
}