I am stuck using two database connections with Entity Framework contexts under a single transaction.
I am trying to use two DbContexts under one TransactionScope, but I get "MSDTC not available". I read that this is not an EF problem; it is the DTC, which does not allow two connections.
Is there any solution to this problem?
This happens because the framework thinks that you are trying to have a transaction span multiple databases. This is called a distributed transaction.
To use distributed transactions, you need a transaction coordinator. In your case, the coordinator is the Microsoft Distributed Transaction Coordinator (MSDTC), which runs as a Windows Service on your server. You will need to make sure that this service is running.
Starting the service should solve your immediate issue.
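For context, the following sketch shows the kind of code that triggers the escalation. The entities and contexts are hypothetical (two contexts pointing at two different databases), and EF6 is assumed:

using System.Data.Entity;   // EF6
using System.Transactions;

// Hypothetical entities and contexts, for illustration only.
public class Order { public int Id { get; set; } }
public class Invoice { public int Id { get; set; } }

public class OrdersContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class BillingContext : DbContext
{
    public DbSet<Invoice> Invoices { get; set; }
}

public static class EscalationDemo
{
    public static void Main()
    {
        using (var scope = new TransactionScope())
        {
            using (var orders = new OrdersContext())
            using (var billing = new BillingContext())
            {
                orders.Orders.Add(new Order());
                billing.Invoices.Add(new Invoice());

                orders.SaveChanges();  // first connection: a lightweight local transaction
                billing.SaveChanges(); // second connection: escalated to MSDTC
            }

            scope.Complete(); // both databases commit together, or neither does
        }
    }
}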
Two-phase commit
From a purely theoretical point of view, distributed transactions are an impossibility* - that is, disparate systems cannot coordinate their actions in such a way that they can be absolutely certain that they either all commit or all roll back.
However, using a transaction coordinator, you get pretty darn close (and 'close enough' for any conceivable purpose). When using a distributed transaction, each party in the transaction will try to make the required changes and report back to the coordinator whether all went well or not. If all parties report success, the coordinator will tell all parties to commit. However, if one or more parties report a failure, the coordinator will tell all parties to roll back their changes. This is the "Two-phase commit protocol".
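To make the protocol concrete, here is a toy coordinator in C#. This is my own illustration of the idea, not MSDTC's actual implementation:

using System.Collections.Generic;
using System.Linq;

// Each party ("resource manager") in the transaction implements this.
public interface IParticipant
{
    bool Prepare();   // phase 1: do the work, keep it pending, vote yes/no
    void Commit();    // phase 2, when every vote was yes
    void Rollback();  // phase 2, when any vote was no
}

public static class Coordinator
{
    public static void Run(IEnumerable<IParticipant> participants)
    {
        var parties = participants.ToList();

        // Phase 1: collect every party's vote (all of them prepare).
        var votes = parties.Select(p => p.Prepare()).ToList();
        bool allPrepared = votes.All(v => v);

        // Phase 2: broadcast the single, common decision.
        foreach (var party in parties)
        {
            if (allPrepared) party.Commit();
            else party.Rollback();
        }
    }
}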
Watch out
It obviously takes time for the coordinator to communicate with the different parties of the transaction. Thus, using distributed transactions can hamper performance. Moreover, you may experience blocking and deadlocking among your transactions, and MSDTC obviously complicates your infrastructure.
Thus, before you turn on the Distributed Transaction Coordinator service and forge ahead with your project, you should first take a long, hard look at your architecture and convince yourself that you really need to use multiple contexts.
If you do need multiple contexts, you should investigate whether you can prevent transactions from being escalated to distributed transactions.
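One technique worth investigating, assuming EF6 and that both context models live in the same database (the context names here are hypothetical), is to hand both contexts one already-open connection. Only one resource manager is then enlisted, so the transaction stays local:

using System.Data.Common;
using System.Data.Entity;
using System.Data.SqlClient;

// Hypothetical contexts that accept an externally owned connection.
public class FirstContext : DbContext
{
    public FirstContext(DbConnection conn, bool contextOwnsConnection)
        : base(conn, contextOwnsConnection) { }
}

public class SecondContext : DbContext
{
    public SecondContext(DbConnection conn, bool contextOwnsConnection)
        : base(conn, contextOwnsConnection) { }
}

public static class SharedConnectionDemo
{
    public static void Run(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                using (var first = new FirstContext(conn, contextOwnsConnection: false))
                using (var second = new SecondContext(conn, contextOwnsConnection: false))
                {
                    // Tell EF to use the existing local transaction (EF6 API).
                    first.Database.UseTransaction(tx);
                    second.Database.UseTransaction(tx);

                    // ... make changes on both contexts here ...

                    first.SaveChanges();
                    second.SaveChanges();
                }
                tx.Commit(); // one local SQL Server transaction, no MSDTC
            }
        }
    }
}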
Further reading
You may want to read:
MSDN: "Managing Connections and Transactions" (specifically on EF)
Blog: "Avoid unwanted escalation to distributed transactions" (a bit dated, though)
* See for example: Reasoning About Knowledge
You should run the MSDTC (Distributed Transaction Coordinator) system service.
1. Is there any way to find out whether a DbContext is enlisted in any transaction while enlist=false is set in the connection string? I was tracing DbContext.Database.CurrentTransaction, but I noticed it is always null.
2. I know that when enlist=false, opened connections will not enlist themselves in an ambient transaction. Is that right?
3. If (2) is correct, how do I enlist a DbContext in a transaction where a TransactionScope is used?
4. Finally, I noticed that using clones of a DependentTransaction with multiple DbContexts and multiple threads while enlist=false will not promote the transaction to a distributed one. But am I still able to commit and roll back, in case an exception happens, using the dependent transaction while enlist=false?
5. If (4) is incorrect, is there any way to fully avoid a distributed transaction while still being able to open multiple connections within a single transaction scope?
FYI, an Oracle database is currently employed; however, MySQL is going to be used in the future as well.
Thank you
I can't really say anything about (1), (2) and (3), but:
The distribution aspect is not absolutely clear to me either. However, MS escalates transactions from the LTM (Lightweight Transaction Manager) to the DTC when certain criteria come into play, for example if you try to access different databases in one transaction.
The escalation from LTM to DTC, or rather the decision whether an escalation will be forced, is a system decision. So far I have not found a way to change this behaviour. That's why you need to think about your transactions in general: if there is a possibility to avoid multiple-database access, you may want to rethink your transactions.
For further information I'd recommend Why is my TransactionScope trying to use MSDTC when used in an EF Code First app?, MSSQL Error 'The underlying provider failed on Open' and How do I use TransactionScope in C#?
I am developing a C# ORM with PHP's Laravel-like syntax.
When the ORM starts, it connects to the database and performs any query (it also supports two different connections for reading and writing), reconnecting to the database only if the connection is lost or missing.
As this is a web framework ORM, how can I handle concurrent transactions on the same connection? Are they supported?
I saw that I can manually assign the transaction object to the SqlCommand, but can I create parallel SqlTransactions?
Example:
There is a URL for a REST action that will cause a transaction to be opened, some actions performed, and the transaction committed (e.g. performing an order on a shopping cart). What if multiple users (so different WebOperationContexts) call that URL? Is it possible to open and work with multiple "parallel" transactions and then commit them?
How do other ORMs handle this case? Do they use multiple connections?
Thanks for any support!
Mattia
SQL Server does not support parallel transactions on the same connection.
Normally, there is no need for that. Just open connections as you need them; this is cheap thanks to connection pooling.
A common model is to open the connection and the transaction right after one another, then commit and dispose of everything at the end.
That way, concurrent HTTP requests do not interact at all, which is good.
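A minimal sketch of that model (the connection string, table, and method are hypothetical). Each request does its own open-work-commit-dispose cycle, so nothing is shared across requests:

using System.Data.SqlClient;

public static class PerRequestDemo
{
    public static void PlaceOrder(int cartId)
    {
        // Opening a connection is cheap: it is rented from the pool.
        using (var conn = new SqlConnection(@"Server=.;Database=Shop;Integrated Security=true"))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                using (var cmd = conn.CreateCommand())
                {
                    cmd.Transaction = tx; // commands must reference the transaction explicitly
                    cmd.CommandText = "UPDATE Carts SET Status = 'Ordered' WHERE Id = @id";
                    cmd.Parameters.AddWithValue("@id", cartId);
                    cmd.ExecuteNonQuery();
                }
                tx.Commit(); // disposing the transaction without Commit rolls it back
            }
        } // the connection returns to the pool here
    }
}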
I'm using EF6.1 and SQL Server in a WPF thick-client application, and by default I'm opening a transaction with each DbContext I instantiate (I commit it and reopen it on every SaveChanges() unless specified otherwise). The isolation level for these transactions is READ COMMITTED (IsolationLevel.ReadCommitted).
By default I open a new context (and thus a new transaction) for each "main view". The application is a kind of fake-MDI app, and each MDI view uses its own DbContext. "Main views" (every MDI tab/window) can contain secondary views (think of small modal windows for specific data entry and the like), which share the same context (and transaction) as the main view that opened them. I'm using a structure like UseCase -> Views -> ViewModels; generally a "UseCase" opens a DbContext and can spawn multiple views, which share it. Those secondary views usually call SaveChanges() without committing the transaction, which is why I want the explicit transactions in the first place.
I've done some performance tests with a single user on a lab server, and there doesn't seem to be any difference (performance-wise) between opening the transaction when instantiating the context and not having transactions at all (other than the one EF opens by default on SaveChanges()).
I'm no SQL Server expert, so I'm wondering whether there are any implications (when the app is used by multiple users on a production server) of having many long-running transactions open on SQL Server with that isolation level (I understand the implications of other isolation levels that may lock reads, but that's not the case here). I'm handling concurrency errors manually when committing the transactions.
Am I doing the right thing here? Should I stick to short-lived transactions, or is it just a matter of preference?
I've been trying to find an answer to this but haven't found anything definitive (there are some people who say long-living transactions are not a good idea, but they don't seem to explain why).
there are some people who say long-living transactions are not a good idea, but they don't seem to explain why
Just a couple of reasons:
An MS SQL transaction, depending on its isolation level, can obtain record locks (the common case) or even metadata locks (the more exotic case). The longer a transaction lives, the more locks it can obtain; hence, the probability of deadlocks increases.
Also, an uncommitted transaction ties up server resources. The transaction log will grow, and its data for active transactions cannot be truncated; the server must remember everything that has been done within the transaction in order to commit or roll it back.
by default I'm opening a transaction with each DbContext I instantiate
There should be a reason to do this. The only reason I can imagine is non-EF changes to the database that must be consistent with the EF changes. Otherwise, you're doing extra work that is at best useless and could waste database server resources.
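For completeness, here is a sketch of that one legitimate case, assuming EF6 (the entity, context, and table names are hypothetical). SaveChanges() already runs inside its own transaction, so an explicit one only pays off when non-EF work must commit atomically with the EF work:

using System.Data.Entity;

public class Person { public int Id { get; set; } public string Name { get; set; } }

public class MyContext : DbContext
{
    public DbSet<Person> People { get; set; }
}

public static class MixedWorkDemo
{
    public static void Run()
    {
        using (var ctx = new MyContext())
        using (var tx = ctx.Database.BeginTransaction()) // READ COMMITTED by default
        {
            // A non-EF change via raw SQL...
            ctx.Database.ExecuteSqlCommand("UPDATE AuditLog SET LastRun = GETDATE()");

            // ...plus an EF change, in the same unit of work.
            ctx.People.Add(new Person { Name = "Ada" });
            ctx.SaveChanges();

            tx.Commit(); // both changes commit atomically, or neither does
        }
    }
}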
I'm maintaining an ASP/C# program that uses an MS SQL Server 2008 R2 instance for its database requirements.
On normal and perfect days, everything works fine as it is. But we don't live in a perfect world.
An Application (for Leave, Sick Leave, Overtime, Undertime, etc.) Approval process requires up to ten separate connections to the database. The program connects to the database, passes around some relevant parameters, and uses stored procedures to do the job. Ten times.
Now, due to the structure of the entire thing, which I cannot change, a dip in the connection - or, heck, if I put a debug point in VS2005 and let it hang there long enough - leaves the Application Approval process incomplete. The tables are often just joined together, so a data mismatch - missing data here, a primary key that failed to update there - would render an entire row useless.
Now, I know that there is nothing I can do to prevent this - this is a connection issue, after all.
But are there ways to minimize connection lag / failure? Or a way to inform the users that something went wrong with the process? A rollback changes feature (either via program, or SQL), so that any incomplete data in the database will be undone?
Thanks.
But are there ways to minimize connection lag / failure? Or a way to
inform the users that something went wrong with the process? A
rollback changes feature (either via program, or SQL), so that any
incomplete data in the database will be undone?
As we discussed in the comments, transactions will address many of your concerns.
A transaction comprises a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions. Transactions in a database environment have two main purposes:

To provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure, when execution stops (completely or partially) and many operations upon a database remain uncompleted, with unclear status.

To provide isolation between programs accessing a database concurrently. If this isolation is not provided, the programs' outcomes are possibly erroneous.
Source
Transactions in .Net
As you might expect, the database is integral to providing transaction support for database-related operations. However, creating transactions from your business tier is quite easy and allows you to use a single transaction across multiple database calls.
Quoting from my answer here:
I see several reasons to control transactions from the business tier:
Communication across data store boundaries. Transactions don't have to be against a RDBMS; they can be against a variety of entities.
The ability to rollback/commit transactions based on business logic that may not be available to the particular stored procedure you are calling.
The ability to invoke an arbitrary set of queries within a single transaction. This also eliminates the need to worry about transaction count.
Personal preference: C# has a more elegant structure for declaring transactions: a using block. By comparison, I've always found transactions inside stored procedures to be cumbersome when jumping to rollback/commit.
Transactions are most easily declared using the TransactionScope (reference) abstraction which does the hard work for you.
using( var ts = new TransactionScope() )
{
    // do some work here that may or may not succeed

    // if this line is reached, the transaction will commit. If an exception is
    // thrown before this line is reached, the transaction will be rolled back.
    ts.Complete();
}
Since you are just starting out with transactions, I'd suggest testing out a transaction from your .Net code.
Call a stored procedure that performs an INSERT.
After the INSERT, purposely have the procedure generate an error of any kind.
You can validate your implementation by seeing that the INSERT was rolled back automatically.
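A sketch of that test (the procedure name is hypothetical; assume dbo.InsertThenFail performs an INSERT and then raises an error):

using System.Data;
using System.Data.SqlClient;
using System.Transactions;

public static class RollbackTest
{
    public static void Run(string connectionString)
    {
        try
        {
            using (var ts = new TransactionScope())
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open(); // the connection auto-enlists in the ambient transaction

                using (var cmd = conn.CreateCommand())
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.CommandText = "dbo.InsertThenFail"; // hypothetical procedure
                    cmd.ExecuteNonQuery();                  // throws a SqlException
                }

                ts.Complete(); // never reached; the transaction rolls back on dispose
            }
        }
        catch (SqlException)
        {
            // Expected: query the table afterwards and confirm the INSERT is gone.
        }
    }
}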
Transactions in the Database
Of course, you can also declare transactions inside a stored procedure (or any sort of TSQL statement). See here for more information.
If you use the same SqlConnection, or another connection type that implements IDbConnection, you can do something similar to a TransactionScope, but without creating the security risk that a TransactionScope represents.
In VB:
Using scope As IDbTransaction = mySqlCommand.Connection.BeginTransaction()
    If blnEverythingGoesWell Then
        scope.Commit()
    Else
        scope.Rollback()
    End If
End Using
If you don't call Commit, the default when the transaction is disposed is to roll it back.
Quick question about the TransactionScope object. Found this on the internet:
When you access your first durable resource manager, a lightweight
committable transaction is created to support the single transaction.
When you access a second durable resource manager, the transaction is
promoted to a distributed transaction.
That seems fine, but I didn't understand what exactly a "durable resource" is. I know that TransactionScope only works with SQL Server 2005 and above, so if I need to access SQL Server 2000, won't it be possible? How about a text file on disk? I've always heard that you can't have transaction control when disk access is involved. Maybe it is different with this object?
Thanks!
This link discusses the differences between durable and volatile resource managers.
Just to clarify - TransactionScope will work with earlier versions of SQL Server; however, the Lightweight Transaction Manager only works for SQL Server 2005 and later. The DTC will be required for transactions against SQL Server 2000.
There is also support for transactional file systems (Vista and later) - have a look here.
Resource managers are of two types:
Durable: transactions are durable even when system failures occur. The resource manager persists the state of a transaction, so if the system is shut down mid-transaction, the transaction can proceed from its previous state upon restart. Examples: the SQL Server DBMS and MSMQ.
Volatile: not resistant to system failures, e.g. the transactional implementations of some core .NET classes.
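For illustration, this is roughly how a volatile resource manager takes part in a System.Transactions transaction. The class is my own toy example, built on the real IEnlistmentNotification interface:

using System;
using System.Transactions;

// A toy volatile resource manager: its state lives in memory only,
// so nothing survives a crash - hence "volatile".
public class InMemoryResource : IEnlistmentNotification
{
    public void Prepare(PreparingEnlistment enlistment)
    {
        // Phase 1: a real RM would validate its pending work here, then vote.
        enlistment.Prepared();
    }

    public void Commit(Enlistment enlistment)
    {
        Console.WriteLine("Applied in-memory changes.");
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        Console.WriteLine("Discarded in-memory changes.");
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}

public static class VolatileDemo
{
    public static void Main()
    {
        using (var scope = new TransactionScope())
        {
            // Enlist as a volatile participant in the ambient transaction.
            Transaction.Current.EnlistVolatile(new InMemoryResource(), EnlistmentOptions.None);
            scope.Complete();
        }
    }
}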