Can I do other things (not related to the database) inside a TransactionScope?
Would that be bad practice?
e.g. starting a workflow inside the scope.
I don't want the save to the database to be committed if starting the workflow fails.
If it's bad practice, what would be good practice?
Thanks in advance.
using (TransactionScope scope = new TransactionScope())
{
    using (ComponentCM.Audit auditComponent = new ComponentCM.Audit())
    {
        using (AccessCM.Biz dataAccess = new AccessCM.Biz())
        {
            auditComponent.SaveInDB();
            dataAccess.SaveInDB();
            StartWorkflow();
        }
    }

    scope.Complete();
}
You want transactions to take as little time as possible, as they can cause locking on the database.
In this regard, it is better to do other work outside of the transaction scope.
I agree with Oded.
I'd also add that the scope of the TransactionScope should define the state changes that are defined in your unit of work. In other words, state changes inside the scope should all succeed or fail together and are usually part of the same logical operation.
In order to keep the scope of the transaction as small as possible, one common pattern is optimistic concurrency. When using optimistic concurrency, you typically open a transaction scope only when performing a write operation on the state store. In this case, you are expecting that the data you are working on will usually not have been changed (by another operation) between the time you query and the time you commit.
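A minimal sketch of that pattern, where the `ReadOrder` and `UpdateOrderIfUnchanged` helpers are hypothetical and the update is assumed to include the original row version in its WHERE clause:

```csharp
// Read and compute outside any transaction, so no locks are held.
var order = ReadOrder(orderId);        // hypothetical read helper
order.Quantity += 1;                   // ... business logic ...

// Open the scope only around the write.
using (var scope = new TransactionScope())
{
    // Hypothetical helper issuing e.g.
    // UPDATE Orders SET ... WHERE Id = @id AND RowVersion = @originalRowVersion
    int rowsAffected = UpdateOrderIfUnchanged(order);

    if (rowsAffected == 0)
        throw new DBConcurrencyException("Order was changed by another operation.");

    scope.Complete(); // the transaction was held only for the duration of the write
}
```

If the row version check fails, the caller can re-read and retry; the transaction itself stays short either way.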
Related
I'm implementing a DAL abstraction layer on top of the C# Mongo DB driver using the Repository + Unit of Work pattern.
My current design is that every unit of work instance opens (and closes) a new Mongo DB session.
The problem is that Mongo DB only allows a 1:1 ratio between a session and a transaction, so multiple units of work under the same .NET transaction will not be possible.
The current implementation is:
public class MongoUnitOfWork : IDisposable
{
    private readonly IClientSessionHandle _sessionHandle;

    public MongoUnitOfWork(MongoClient mongoClient)
    {
        _sessionHandle = mongoClient.StartSession();
    }

    public void Dispose()
    {
        if (_sessionHandle != null)
        {
            // Must commit the transaction, since the session is closing
            if (Transaction.Current != null)
                _sessionHandle.CommitTransaction();

            _sessionHandle.Dispose();
        }
    }
}
And because of this, the following code won't work. The first batch of data will be committed ahead of time:
using (var transactionScope = new TransactionScope())
{
    using (var unitOfWork = CreateUnitOfWork())
    {
        // ... insert items
        unitOfWork.SaveChanges();
    } // the MongoDB unit of work implementation commits the changes when disposed

    // Do other things

    using (var unitOfWork = CreateUnitOfWork())
    {
        // ... insert some more items
        unitOfWork.SaveChanges();
    }

    transactionScope.Complete();
}
Obviously the immediate answer would be to bring all of the changes into one unit of work, but that's not always possible, and it also leaks the MongoDB limitation into the calling code.
I thought about session pooling, so that multiple units of work would share the same session and commit/roll back when the transient transaction completes/aborts.
What other solutions are possible?
Clarification:
The question here is specifically regarding Unit-Of-Work implementation over MongoDB using the MongoDB 4.0 (or later) built-in transaction support.
I have never used MongoDB and don't know anything about it. I am only answering in terms of TransactionScope, so I'm not sure whether this will help you.
Please refer to The Magic Of TransactionScope. IMO, there are three factors you should look at:
Connection to the database should be opened inside the TransactionScope.
So remember, the connection must be opened inside the TransactionScope block for it to enlist in the ambient transaction automatically. If the connection was opened before that, it will not participate in the transaction.
Not sure, but it looks like you can manually enlist a connection opened outside the scope using connection.EnlistTransaction(Transaction.Current).
Looking at your comment and the edit, this is not an issue.
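For completeness, that manual enlistment could look roughly like this (a sketch; `connectionString` is assumed):

```csharp
using (var connection = new SqlConnection(connectionString))
{
    connection.Open(); // opened before the scope: no automatic enlistment

    using (var scope = new TransactionScope())
    {
        // Explicitly join the ambient transaction.
        connection.EnlistTransaction(Transaction.Current);

        // ... execute commands on the connection ...

        scope.Complete();
    }
}
```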
All operations should run on same thread.
The ambient transaction provided by the TransactionScope is a thread-static (TLS) variable. It can be accessed with the static Transaction.Current property. Here is the TransactionScope code at referencesource.microsoft.com; the ThreadStatic ContextData contains the CurrentTransaction.
and
Remember that Transaction.Current is a thread static variable. If your code is executing in a multi-threaded environments, you may need to take some precautions. The connections that need to participate in ambient transactions must be opened on the same thread that creates the TransactionScope that is managing that ambient transaction.
So, all the operations should run on same thread.
Play with TransactionScopeOption (pass it to constructor of TransactionScope) values as per your need.
Upon instantiating a TransactionScope by the new statement, the transaction manager determines which transaction to participate in. Once determined, the scope always participates in that transaction. The decision is based on two factors: whether an ambient transaction is present and the value of the TransactionScopeOption parameter in the constructor.
I am not sure what your code is expected to do. You may play with these enum values.
As you mentioned in the comment, you are using async/await.
Lastly, if you are using async/await inside the TransactionScope block, you should know that the two do not work well together by default, and you might want to look into the TransactionScope constructor added in .NET Framework 4.5.1 that accepts a TransactionScopeAsyncFlowOption. The TransactionScopeAsyncFlowOption.Enabled option, which is not the default, allows TransactionScope to play well with asynchronous continuations.
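A sketch of an async-friendly scope (the two `Save...Async` methods are placeholders for your own operations):

```csharp
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    await SaveFirstBatchAsync();   // the ambient transaction flows across awaits
    await SaveSecondBatchAsync();

    scope.Complete();
}
```

Without `TransactionScopeAsyncFlowOption.Enabled`, the continuation after an `await` may resume on a thread where `Transaction.Current` is no longer the scope's transaction.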
For MongoDB, see if this helps you.
The MongoDB driver isn't aware of ambient TransactionScopes. You would need to enlist with them manually, or use JohnKnoop.MongoRepository which does this for you: https://github.com/johnknoop/MongoRepository#transactions
What is the difference between these two pieces of code:
using (TransactionScope tran = new TransactionScope())
{
    using (Entities ent = new Entities())
    {
and
using (Entities ent = new Entities())
{
    using (TransactionScope tran = new TransactionScope())
    {
Does line order matter?
Thanks
In this case it wouldn't matter, since Entities looks like a model/data class, which cannot (and need not) enlist in a transaction, so either order works. If there's any issue in an Entities operation (read or DML), the transaction will also abort, assuming that operation takes place in the ambient transaction context. An object that does enlist, such as an IDbConnection, is enlisted in the ambient transaction automatically (provided auto-enlistment is not set to false); however, if the connection is created outside the transaction context, it will not enlist automatically and needs to be enlisted explicitly.
In Short
For your current code it doesn't matter, up to the point where you deal with an object that needs transaction enlistment, like an IDbConnection. Of the two code snippets, though, my preference would be the first, where the ambient transaction encompasses everything: every object that needs to enlist does so automatically, and we don't leave any object out by chance.
Important Difference
One thing you may want to be careful about, though, is whether you need to access Entities outside the transaction context: that is not possible with the first option, but is not an issue with the second.
Yes, the order matters. Or rather, we can't say it doesn't matter without looking at your code.
DbConnection instances will enlist in an ambient transaction if one exists when they are opened.
Your DbContext constructor might open the underlying DbConnection, in which case the two patterns differ.
The first one is the normal pattern, and you should stick to that.
Also, if you are using SQL Server, don't use the default constructor of TransactionScope. See Using new TransactionScope() Considered Harmful.
I am developing a student marking system using ASP.NET MVC and Entity Framework. There is a heavy method I use to calculate marks. At any given time, about 50 users are entering marks into the system, and that heavy method is called by all of them. It deadlocks most of the time. I am using a TransactionScope.
This is my code:
try
{
    using (context = new SIMSDBAPPEntities())
    {
        using (TransactionScope scope = new TransactionScope())
        {
            // My heavy calculation

            scope.Complete();
        }

        context.SaveChanges();
    }
}
catch (Exception ex)
{
    throw ex;
}
My heavy method runs inside a TransactionScope. I want to know whether my code has any problems, and if so, what I can do to avoid the deadlock situation.
If you are using just one context to save data, you don't need to use TransactionScope. When you call SaveChanges, EF automatically enlists all changes and applies them in a single transaction. You only need TransactionScope for distributed transactions.
Using Transactions or SaveChanges(false) and AcceptAllChanges()?
https://coderwall.com/p/jnniww/why-you-shouldn-t-use-entity-framework-with-transactions
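In other words, for a single context, a sketch like this is already transactional without any TransactionScope (reusing the context type from the question):

```csharp
using (var context = new SIMSDBAPPEntities())
{
    // ... make all your changes on this one context ...

    // EF wraps everything pending here in a single database transaction:
    // either all changes are applied, or none are.
    context.SaveChanges();
}
```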
Consider using an async method to do the heavy calculation.
using (var transaction = context.Database.BeginTransaction())
{
    try
    {
        var heavyCalculation = await HeavyCalculationMethodAsync();
        await context.SaveChangesAsync();
        transaction.Commit();
    }
    catch
    {
        transaction.Rollback();
    }
}
The only way to avoid a deadlock is to access and update records in the same table order every time for every operation that does an update.
This can be very tricky to accomplish in ORMs like EF and NHibernate, because the concept of table operations is hidden behind the implementation: you're at the mercy of whatever strategy the framework decides on. To work around it, you'd have to be careful to update objects individually and save your changes on a per-object basis. You'd also have to make sure you always save these objects in the same order for every operation that does an update.
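As a sketch of that idea (assuming an EF context and a collection of modified entities), saving per object in a fixed key order keeps lock acquisition in the same sequence for every caller:

```csharp
// Always touch rows in the same order (here: by primary key) so that
// two concurrent operations cannot acquire locks in opposite orders.
foreach (var student in studentsToUpdate.OrderBy(s => s.Id))
{
    context.Entry(student).State = EntityState.Modified;
    context.SaveChanges(); // one object at a time, in a deterministic order
}
```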
What I've done in the past to minimize this issue is to enable snapshot isolation on the database and use that isolation level for the transaction scope.
https://msdn.microsoft.com/en-us/library/tcbchxcb(v=vs.110).aspx
There are some things that can go wrong here as well (such as writing back to a stale record). But I've found it happens less often than deadlocks.
I found this article helpful too (it also gives examples of snapshot isolation):
http://blogs.msdn.com/b/diego/archive/2012/04/01/tips-to-avoid-deadlocks-in-entity-framework-applications.aspx
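Once snapshot isolation is enabled on the database (`ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON`), the scope can request it like this (a sketch):

```csharp
var options = new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot };

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // Readers see a consistent snapshot and don't block writers,
    // which removes many of the lock conflicts behind the deadlocks.
    // ... heavy calculation and SaveChanges ...
    scope.Complete();
}
```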
I want to understand the trade-offs/downsides of using TransactionScopeOption.RequiresNew with Entity Framework (on SQL Server 2008), i.e. the reasons why we should NOT always use RequiresNew.
Regards.
You should use Required, not RequiresNew. RequiresNew means every operation will use a new transaction, even if there is already an encompassing transaction scope. This will certainly lead to deadlocks. Even with Required there is another serious problem with TransactionScope: by default it creates a Serializable transaction, which is a horribly bad choice and yet another shortcut to deadlock hell and poor scalability. See using new TransactionScope() Considered Harmful. You should always create a transaction scope with an explicit TransactionOptions, setting the isolation level to ReadCommitted, which is a much saner isolation level:
using (TransactionScope scope = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions
    {
        IsolationLevel = IsolationLevel.ReadCommitted
    }))
{
    // do work here
    ...
    scope.Complete();
}
I just wanted to add that in a couple of cases, a method I've written runs inside a parent transaction scope that may or may not be completed with scope.Complete(). In those cases I didn't want to depend on the parent transaction, so we needed to use RequiresNew.
In general, though, I agree it's not necessary, and you should use ReadCommitted.
http://msdn.microsoft.com/en-us/library/ms172152(v=vs.90).aspx
I would like to be 100% sure that when I use a pattern such as
using (var scope = new TransactionScope())
{
    db1.Foo.InsertOnSubmit(new Foo()); // just an example
    db1.SubmitChanges();
    scope.Complete();
}
the transaction scope will only cover db1, not db2 (db1 and db2 are DataBaseContext objects). So how do I limit the scope to db1 only?
Thank you in advance.
The strength of TransactionScope is that it automatically catches everything within it, down the call stack (unless another TransactionScope with RequiresNew or Suppress intervenes). If this is not what you want, you should use another mechanism for transaction handling.
One way is to open a SqlConnection and call BeginTransaction to get a transaction, then use that DB connection when creating your data context. (You may also have to set the Transaction property on the data context; I'm not sure.)
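A sketch of that approach (LINQ to SQL; `connectionString` and the context type name are assumed):

```csharp
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    using (var transaction = connection.BeginTransaction())
    using (var db1 = new Db1DataContext(connection))
    {
        db1.Transaction = transaction;   // tell the context about the transaction

        db1.Foo.InsertOnSubmit(new Foo());
        db1.SubmitChanges();

        transaction.Commit();            // only db1's work is covered; db2 is untouched
    }
}
```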
In the example you have given above, the use of a TransactionScope is totally redundant. There is only one call that actually modifies the database, SubmitChanges, and it always creates its own transaction if there isn't one already. The reason is that when you're doing several operations, SubmitChanges should either succeed for them all or fail for them all. If you're only after transactional integrity for a single SubmitChanges call on a single data context, then you can just drop the TransactionScope.
As far as I know, what you are after is exactly what you have written; the framework will ensure the transaction is committed when you call Complete().
If you use:
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    db1.Foo.InsertOnSubmit(new Foo()); // just an example
    db1.SubmitChanges();
    scope.Complete();
}
You can be sure a new transaction is created for this scope, so everything you do inside the scope has its own transaction and will never use any existing ambient transaction.