Transaction escalates when using TransactionScopeAsyncFlowOption - C#

I have a small web API server written in C# using async/await. The .NET version is 4.5.2.
Everything is working fine, except that I use TransactionScope for some calls and the underlying transaction is escalated to a distributed one. Since I use async/await for my DB calls, I use TransactionScopeAsyncFlowOption. The SQL Server is running version 2008 R2, so it should be able to handle multiple sequential connections without escalating the transaction. All calls are made to the same database with the same connection string.
All SQL connections are created in using statements and I'm not nesting any of them. Each call to the database is awaited before another is made, so there should never be two connections active at the same time in one transaction, unless I have misunderstood how async/await works. I'm using Dapper, if that might impact things.
Am I missing something obvious, or do I need to rewrite my code to use the same connection for all operations in the transaction?
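For reference, the awaited pattern from the question can be sketched without any database at all. This is a hypothetical illustration (Task.Delay stands in for an awaited Dapper call) showing that the ambient transaction flows across an await when TransactionScopeAsyncFlowOption.Enabled is used:

```csharp
using System;
using System.Threading.Tasks;
using System.Transactions;

class AsyncScopeDemo
{
    // Sketch: with TransactionScopeAsyncFlowOption.Enabled, the ambient
    // transaction is the same object before and after an await, so awaited
    // sequential DB calls all run inside one transaction.
    public static async Task<bool> RunAsync()
    {
        using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            var before = Transaction.Current;
            await Task.Delay(10); // stands in for an awaited database call
            var after = Transaction.Current;
            scope.Complete();
            // same ambient transaction on both sides of the await
            return before != null && ReferenceEquals(before, after);
        }
    }
}
```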

I feel really stupid: I missed that pooling was disabled in the connection string. After removing Pooling=false, the transaction no longer escalates to a distributed one.
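For reference, a hedged sketch of the two connection strings (server and database names here are placeholders). The usual explanation is that with pooling disabled, every Open() creates a new physical connection, and a second physical connection inside one TransactionScope forces promotion to MSDTC even on SQL Server 2008+:

```csharp
// Hypothetical connection strings; "MyDb" and the server are placeholders.
class ConnectionStrings
{
    // With Pooling=false, every SqlConnection.Open() creates a new physical
    // connection, so the second connection opened inside the same
    // TransactionScope promotes the transaction to MSDTC.
    public const string Escalates =
        "Server=.;Database=MyDb;Integrated Security=true;Pooling=false";

    // With pooling (the default), a closed connection returns to the pool and
    // can be handed out again inside the same transaction, so sequential
    // connections to one SQL Server 2008+ database stay in a local transaction.
    public const string StaysLocal =
        "Server=.;Database=MyDb;Integrated Security=true";
}
```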

Why is DbContext.Database.CurrentTransaction always null?

1. Is there any way to find out whether a DbContext is enlisted in any transaction while Enlist=false is set in the connection string? I was tracing DbContext.Database.CurrentTransaction, but I noticed it is always null.
2. I know that when Enlist=false, opened connections will not enlist themselves in an ambient transaction. Is that right?
3. If (2) is correct, how do I enlist a DbContext in a transaction where TransactionScope is used?
4. Finally, I noticed that using clones of DependentTransaction with multiple DbContexts and multiple threads while Enlist=false does not promote the transaction to a distributed one, but am I still able to commit and roll back in case an exception happens, using the dependent transaction while Enlist=false?
5. If (4) is incorrect, is there any way to fully avoid a distributed transaction while still being able to open multiple connections within a single transaction scope?
FYI, currently Oracle database is employed; however, in future MySQL is going to be in operation as well.
Thank you
I can't really say anything about (1), (2) and (3), but:
The promotion behavior is not entirely clear to me either. However, Microsoft escalates transactions from the LTM (Lightweight Transaction Manager) to the DTC when certain criteria come into play, for example if you try to access different databases in one transaction.
The escalation from the LTM to the DTC, or rather the decision whether an escalation will be forced, is a system decision; so far I have not found a way to change this behavior. That is why you need to think about your transactions in general: if there is a way to avoid accessing multiple databases, you may want to rethink your transactions.
For further information I recommend Why is my TransactionScope trying to use MSDTC when used in an EF Code First app?, MSSQL Error 'The underlying provider failed on Open', and How do I use TransactionScope in C#?
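Regarding (4), the commit/rollback mechanics of a DependentTransaction can be sketched without any database, DbContext, or Enlist setting; this hypothetical snippet only uses System.Transactions. The root transaction cannot commit until the dependent clone votes, and a Rollback from the worker dooms the whole transaction:

```csharp
using System;
using System.Threading.Tasks;
using System.Transactions;

class DependentTxDemo
{
    // Sketch: a worker thread holds a DependentTransaction created with
    // BlockCommitUntilComplete; the root transaction waits for its vote.
    public static TransactionStatus Run(bool workerSucceeds)
    {
        Transaction root;
        using (var scope = new TransactionScope())
        {
            DependentTransaction dependent = Transaction.Current.DependentClone(
                DependentCloneOption.BlockCommitUntilComplete);
            root = Transaction.Current.Clone();

            var worker = Task.Run(() =>
            {
                if (workerSucceeds)
                    dependent.Complete();  // votes to commit
                else
                    dependent.Rollback();  // aborts the whole transaction
            });
            worker.Wait();

            if (workerSucceeds)
                scope.Complete();
        }
        return root.TransactionInformation.Status;
    }
}
```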

C# - TSQL Parallel transactions on same connection instance

I am developing a C# ORM with PHP's Laravel-like syntax.
When the ORM starts, it connects to the database and performs queries (it also supports two different connections, one for reading and one for writing), and it reconnects to the database only if the connection is lost or missing.
As this is a web framework ORM, how can I handle concurrent transactions on the same connection? Are they supported?
I saw that I can manually assign the transaction object to the SqlCommand, but can I create parallel SqlTransactions?
Example:
There is a URL for a REST action that causes a transaction to be opened, some actions performed, and the transaction committed (e.g. placing an order for a shopping cart). What if multiple users (so different WebOperationContexts) call that URL? Is it possible to open and work with multiple "parallel" transactions and then commit them?
How do other ORMs handle this case? Do they use multiple connections?
Thanks for any support!
Mattia
SQL Server does not support parallel transactions on the same connection.
Normally, there is no need for that. Just open connections as you need them. This is cheap thanks to pooling.
A common model is to open the connection and the transaction one right after the other, then commit and dispose of both at the end.
That way concurrent HTTP requests do not interact at all which is good.
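The per-request model can be sketched without a database (connection and command code is omitted; this is a hypothetical illustration): each simulated "request" opens its own scope, and the ambient transactions are distinct, which is why concurrent requests cannot interact:

```csharp
using System;
using System.Threading.Tasks;
using System.Transactions;

class PerRequestDemo
{
    // Two simulated concurrent "requests", each with its own scope: the
    // ambient transactions have different local identifiers, i.e. each
    // request gets a fully independent transaction.
    public static bool RequestsAreIsolated()
    {
        var ids = new string[2];
        Parallel.For(0, 2, i =>
        {
            using (var scope = new TransactionScope())
            {
                // a pooled connection would be opened and enlisted here
                ids[i] = Transaction.Current
                    .TransactionInformation.LocalIdentifier;
                scope.Complete();
            }
        });
        return ids[0] != ids[1];
    }
}
```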

Is there any way to resume a (long) transaction after the underlying mysql connection has been lost?

I have a long-running transaction performing a lot of delete queries on a database; the issue is that the MySQL connection (to a server on the same machine) is dropped for no apparent reason every now and then.
Currently, my retry logic will detect the disconnection, reconnect, and restart the whole transaction from the beginning, which may never succeed if the connection's "dropping frequency" is too high.
Is it possible at all to reopen a lost connection and continue the transaction?
I am using MySQL Connector for .NET.
What you are asking is not possible for a transaction. A transaction exists to make sure that either every action performed on the database completes, or none of them do.
If your connection-dropping frequency is too high and you have no way to fix it, then you should either run simple queries without a transaction or, better, reduce the number of actions in your transaction and send a batch of smaller transactions instead of a single big one.
Also add some data validation checks to make sure everything is right with the entries.
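The "batch of smaller transactions" idea can be sketched generically. Everything here is hypothetical: the integer items stand in for rows to delete, and the caller-supplied action stands in for running one small BEGIN..COMMIT batch (and may throw to simulate a dropped connection). The point is that a failure only forces a retry of the current batch, never of the batches that already committed:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class BatchRetryDemo
{
    // Processes items in small batches; each batch is retried independently,
    // so work committed in earlier batches never has to be redone.
    // Returns the total number of batch attempts (for illustration).
    public static int ProcessInBatches(
        IReadOnlyList<int> items, int batchSize, int maxRetries,
        Action<IReadOnlyList<int>> runBatchInItsOwnTransaction)
    {
        int attempts = 0;
        for (int start = 0; start < items.Count; start += batchSize)
        {
            var batch = items.Skip(start).Take(batchSize).ToList();
            for (int attempt = 0; ; attempt++)
            {
                attempts++;
                try
                {
                    runBatchInItsOwnTransaction(batch); // one small transaction
                    break;
                }
                catch (Exception) when (attempt < maxRetries)
                {
                    // reconnect here, then retry just this batch
                }
            }
        }
        return attempts;
    }
}
```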
Theoretically you could do exactly what you need with XA transactions... but MySQL's limitations are rather drastic and make XA transactions on MySQL a joke, to be honest: both resume and join on start, and end with suspend, do not work (and have not since 2006, when the feature was first released). So to answer your question: no, there is no chance with MySQL, forget it. Try increasing timeouts (both on client and server), tuning memory pools, optimizing the queries, etc., but MySQL won't help you here.

Entity Framework database connection question

We are using .Net Entity Framework to do our database related work. Our database is Sybase SQL Anywhere.
using (AndeDBEntities db = new AndeDBEntities(decryptConnStr()))
{
}
We use a lot of statements like the one above to access the database. My questions are: do we need to close the connection each time after the access is done, and if so, how?
At one point I saw a "Database server connection limit exceeded" error, so I wonder whether there is something wrong in our database connection code.
The connection should be closed automatically. It's possible that there is a resource leak in the Sybase EF supporting classes.
See Managing Connections for more information. Note that (by default) EF will open and dispose a database connection for each query or SaveChanges call. If Sybase's supporting classes do not handle this well (e.g., with a connection pool), then a resource leak may become noticeable when it would otherwise not be.
So actually the using statement does not close the EF connection (unless you have manually opened it); the connection should already have been disposed (released to the connection pool or closed) before the end of the using statement is reached.
The using statement will make sure that db will get disposed and the connection closed.
Grz, Kris.
No; you are wrapping the AndeDBEntities object in a using block, which means its Dispose() method will be called when it goes out of scope (as it implements IDisposable). This method will release all of the unmanaged resources acquired by the object (assuming it has been developed without a leak, which I think is a fair presumption).
I don't believe this is the root of your connection limit error. Do you have the developer edition? That is only licensed for 3 connections.
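The using/Dispose mechanics described above can be demonstrated without EF or Sybase; FakeContext here is a hypothetical stand-in for AndeDBEntities:

```csharp
using System;

class FakeContext : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true; // would release the connection
}

class UsingDemo
{
    // Shows that leaving a using block calls Dispose() automatically,
    // even if the block exits via return or an exception.
    public static bool DisposeIsCalled()
    {
        FakeContext ctx;
        using (ctx = new FakeContext())
        {
            // queries would run here; nothing has been disposed yet
            if (ctx.Disposed) return false;
        }
        return ctx.Disposed;
    }
}
```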

Working With ODP.NET Asynchronously

Hey,
My system needs to execute several large SQL statements (on an Oracle DB) asynchronously, using the same connection.
What's the best practice for this?
1. Open a single connection and execute every SQL statement on a different thread (is that thread-safe?)
2. Create a new connection and open + close it for every SQL statement
Thanks,
Hec
We've been calling Oracle SQL statements on multiple threads, and this is probably best, provided your DB can handle the load and won't become the bottleneck anyway. HOWEVER, I think you need to create the connection on the thread that will be issuing the SQL command. You can (and probably should) also use connection pooling so your connections will be reused rather than re-established (and Oracle seems to be fine with reusing them from one thread to another).
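The connection-per-task pattern can be sketched with the provider abstracted away (connectionFactory is a hypothetical stand-in for creating an OracleConnection; with pooling enabled, disposing simply returns the underlying connection to the pool):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PerTaskConnectionDemo
{
    // Runs one unit of work per task; each task creates and disposes its own
    // connection via the factory, so no connection is shared across threads.
    // Returns the number of connection objects created (one per task).
    public static int RunAll(int taskCount, Func<IDisposable> connectionFactory)
    {
        var used = new ConcurrentBag<IDisposable>();
        var tasks = new Task[taskCount];
        for (int i = 0; i < taskCount; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                using (var conn = connectionFactory())
                {
                    used.Add(conn);
                    // execute the SQL statement on this task's own connection
                }
            });
        }
        Task.WaitAll(tasks);
        return used.Count;
    }
}
```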
