In my application I have the following pattern:
using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required))
{
    Function1();
    Function2();
    Function3();
}
My problem is that Function2 calls another function that connects to a second database, so the transaction becomes distributed and I get an exception.
Is there any way, in code, to make a DB call that is not part of the current transaction? My code in Function2 only performs a read, so I don't want it to be part of the current transaction.
Thanks, Radu
Around Function2 you could create a new TransactionScope with TransactionScopeOption.RequiresNew, forcing it into its own separate transaction. Since only one resource (the other database) is used in that transaction, it shouldn't become distributed.
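For example, a minimal sketch based on the pattern above (since Function2 only reads, TransactionScopeOption.Suppress would work as well, running the call outside any transaction):

using (TransactionScope outer = new TransactionScope(TransactionScopeOption.Required))
{
    Function1();

    // Function2 gets its own transaction, separate from the ambient one,
    // so the connection to the second database never joins the outer transaction.
    using (TransactionScope inner = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        Function2();
        inner.Complete();
    }

    Function3();
    outer.Complete();
}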
I want to use an Oracle transaction across two different methods in C#, but I am unable to figure out how to do it. The code follows a 3-layer architecture.
Example:
In the code-behind file, on the Cancel button click:
On_Cancel_Button_Click()
{
    CancelMethod();        // cancelling the booked ticket
    if (some booked ticket is in the waiting list)
        ConfirmMethod();   // to confirm the waiting-list ticket
}
Cancel & Confirm both method is defined in Aspx.cs file and both are calling BAL class method then DAL Class Method, In BAL, DAL both are calling different Method (In DAL all ADO.Net Code written).
So how to implement Transaction for this scenario.
The simplest way is to handle the transaction in the outer scope.
E.g.:
void CancelAndConfirmTicket()
{
    using (var con = new OracleConnection(...))
    {
        con.Open();
        using (var tran = con.BeginTransaction())
        {
            Cancel(con);
            Confirm(con);
            tran.Commit();
        }
    }
}
There are other patterns for sharing a connection between methods (like Dependency Injection), or sharing a transaction (like TransactionScope), but the idea is the same: you define the scope and logic of the transaction in an outer layer of the application and "enlist" the inner layers in the transaction.
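As a hypothetical illustration (the table, columns, and the extra ticketId parameter are assumed, not taken from the question), a DAL method receiving the shared connection might look like this; with ODP.NET, a command automatically participates in the active local transaction on its connection:

void Cancel(OracleConnection con, int ticketId)
{
    using (var cmd = con.CreateCommand())
    {
        // Runs on the caller's connection, inside the caller's transaction.
        cmd.CommandText = "UPDATE Tickets SET Status = 'Cancelled' WHERE TicketId = :id";
        cmd.Parameters.Add(new OracleParameter("id", ticketId));
        cmd.ExecuteNonQuery();
    }
}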
I have an 'M' manager (conductor) instance, which controls two other instances, 'A' and 'B'. 'A' is an Entity Framework data saver; 'B' calls an external web service to do something. The code of 'M' looks like:
// code inside 'M'
void Save()
{
    B.Save();
    A.Save();
}
This is a kind of distributed transaction. When B.Save throws an exception, A.Save should not happen or should not be finalized. Now I have to change the code so that it works correctly. The problem is that 'M' does not know anything about how an EF transaction works or how to handle it, and A.Save cannot include the B.Save call. So I have to change it to something like:
Object transaction = A.PrepareSave();
try
{
    B.Save();
}
catch
{
    A.RollbackSave(transaction);
    throw;
}
A.FinishSave(transaction);
Where A.PrepareSave() would look something like (?):
TransactionScope scope = new TransactionScope();
var context = CreateContext();
// ... do something EF ...
context.SaveChanges(false);
return new MyCustomTuple(scope, context);
And where A.FinishSave(Object trans) would look something like (?):
MyCustomTuple tuple = (MyCustomTuple)trans;
TransactionScope scope = (TransactionScope)tuple.Scope;
EFContext context = (EFContext)tuple.Context;
scope.Complete();
context.AcceptAllChanges();
Question 1: Is this OK? Is this the way to handle such a situation? (I have no influence on B.Save(); it either saves or throws an exception.)
Question 2: How do I free the resources (scope, context) at the end? The 'M' manager does not know anything about the MyCustomTuple and its contents.
You can use the TransactionScope right in the M method; you don't need to handle it in different parts of your code.
using (var transaction = new TransactionScope())
{
    A.Save();
    B.Save();
    transaction.Complete();
}
This will complete only if both save methods complete; otherwise an exception is thrown, no call to Complete() is made, and there is no commit. The using block will free the TransactionScope. As for disposing other resources, you can do that the same way you're doing it now. You have not included any examples of this (I'd expect that the component that creates the context, maybe component A, handles the disposal of that context).
I am writing a Web API Rest service that does operations on a distinct set of entities. I have broken them down like this:
db = new DBEntities();
using (var dbContextTransaction = db.Database.BeginTransaction())
{
    try
    {
        ProcessClient();
        ProcessClientPerson();
        ProcessGuardian();
        ProcessAddress();
        ProcessEmail();
        ProcessPhones();
        ProcessChildren();
        dbContextTransaction.Commit();
    }
    catch (Exception ex)
    {
        dbContextTransaction.Rollback();
        etc.
Following the advice that data contexts should be as short-lived as possible, each of the methods creates its own data context, calls SaveChanges(), and disposes of it at the end:
private void ProcessClient()
{
    db = new DBEntities();
    ....
This obviously does not work - a transaction context created this way is tied to the data context. If something goes wrong in one of the entity operations, only that operation is rolled back (implicitly), but the overarching transaction is not.
I found this approach for creating a transaction outside of EF, but I am wondering whether I should follow it, or whether I should just let my data context live for the duration of the transaction and keep the transaction inside EF.
I am not looking for an opinion, but for data around stability, performance, etc.
There is no immediate need to keep contexts short-lived. You can do that but you don't have to.
Over time entities will accumulate in a context. If you risk running out of memory it can be necessary to let go of a context.
Otherwise, the usual procedure is to keep the context alive for the duration of the logical unit of work. Here, that UOW is all those methods in their entirety.
This also makes transaction management easier (as you already found out).
dbContextTransaction.Rollback();
This is an anti-pattern. Simply don't commit in case of error: a transaction that is disposed without being committed is rolled back automatically.
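For example, a minimal sketch of that shape, assuming the Process* methods are changed to accept the shared context:

using (var db = new DBEntities())
using (var tran = db.Database.BeginTransaction())
{
    ProcessClient(db);
    ProcessClientPerson(db);
    ProcessGuardian(db);
    ProcessAddress(db);
    ProcessEmail(db);
    ProcessPhones(db);
    ProcessChildren(db);
    tran.Commit();
    // No explicit Rollback: if anything above throws, the transaction is
    // disposed without being committed and rolls back automatically.
}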
I have mixed feelings about this. I am working against a legacy database that has no foreign key constraints, and I am inserting, updating, and deleting between 20 and 30 objects in one of these service calls.
The problem is that I need to call SaveChanges() frequently to get the identity column values that will become foreign keys.
On the other hand, I have to be able to roll back everything if there is a problem three layers down, so a single large transaction is needed.
For some reason that I have not been able to determine, calling SaveChanges repeatedly on the same data context resulted in errors saying that the connection state was open. So I ended up giving each method its own data context anyway:
var scope = new TransactionScope(TransactionScopeOption.RequiresNew,
    new TransactionOptions() { IsolationLevel = IsolationLevel.ReadUncommitted });
using (scope)
{
    try
    {
        ProcessClient();
        ProcessClientPerson();
        ProcessGuardian();
        ProcessAddress();
        ProcessEmail();
        ProcessPhones();
        ProcessChildren();
        scope.Complete();
    }
    catch (System.Data.Entity.Validation.DbEntityValidationException ex)
    {
        [...] handle validation errors etc [...]
    }
}
with each section doing basically this, once stripped down to the bare essentials:
private void ProcessClient()
{
    using (MyDBEntities db = new MyDBEntities())
    {
        [...] doing client stuff [...]
        aClient.LastUpdated = DateTime.Now;
        db.AddOrUpdate(db, db.Clients, aClient, aClient.ClientID);
        db.SaveChanges();
        ClientId = aClient.ClientID; // now I can use this to form FKs
    }
}
I have mixed feelings about locking, because on my development VM the transaction runs for 1-2 seconds, and this is a production database with office staff and online customers doing CRUD transactions through web applications at the same time.
Unrelated, but this blog post was helpful for my AddOrUpdate method.
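For reference, a hypothetical sketch of what such an AddOrUpdate helper might look like (the signature is inferred from the call above; the blog post's actual implementation may differ):

// Hypothetical helper on the context (signature inferred from the call above).
public void AddOrUpdate<TEntity>(DbContext db, DbSet<TEntity> set, TEntity entity, int id)
    where TEntity : class
{
    if (id == 0)
        set.Add(entity);                               // no identity yet: INSERT
    else
        db.Entry(entity).State = EntityState.Modified; // existing row: attach and UPDATE
}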
After opening a connection, Do I have to explicitly close it? or does .net close it automatically? What about if an exception is thrown?
using (DataContext pContext = new DataContext())
{
    pContext.Connection.Open();
    using (var trx = pContext.Connection.BeginTransaction())
    {
        // make changes to objects
        // make calls to ExecuteStoreCommand("UPDATE table SET pee=1, poop=2 WHERE etc")
        pContext.SaveChanges();
        trx.Commit();
    }
}
Does pContext.Connection.Close() get called in all instances by the framework? I know that it does in a normal using statement, but what about when I manually open the connection?
I am using Connection.BeginTransaction because, in my tests (using SaveChanges() alone, without transactions), if this code executes and fails for some reason, the commands sent via ExecuteStoreCommand() are saved even though the changes to my objects aren't.
I have to call Connection.Open() before calling pContext.Connection.BeginTransaction(), otherwise I get the error:
Inner Exception:
The connection is not open.
Any help would be appreciated.
There is no need to call DataContext.Connection.Open(). In fact, it's bad practice to do so.
The context opens a connection only when it needs to. By calling Connection.Open, you open the connection for far longer than is needed, increasing the chance that locks will accumulate and lead to blocking. It is also a good way to exhaust the connection pool in high-traffic systems.
SaveChanges opens a connection and a transaction itself, so there is no need to manually open the connection or start the transaction. From the documentation:
SaveChanges operates within a transaction. SaveChanges will roll back that transaction and throw an exception if any of the dirty ObjectStateEntry objects cannot be persisted
Explicitly using connections and transactions makes sense only if you want to mix raw SQL commands inside the EF session. By itself this is not a good practice, but if you do, you don't need to close the connection because the DataContext will close the connection when it is disposed. There is a sample in the documentation, at How to: Manually Open the Connection from the Object Context
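For illustration, a minimal sketch along the lines of that MSDN sample (SomeTable and SomeColumn are assumed names), showing a manually opened connection shared by raw SQL and SaveChanges within one session:

using (var pContext = new DataContext())
{
    // Opening manually keeps one connection for both operations below.
    pContext.Connection.Open();
    pContext.ExecuteStoreCommand("UPDATE SomeTable SET SomeColumn = 1"); // raw SQL
    pContext.SaveChanges();
} // Dispose closes the manually opened connection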
You can also handle a transaction using a TransactionScope, as described in How to: Manage Transactions in the Entity Framework. In this case, you don't need to open the connection, as all changes are automatically enlisted in the current transaction. This is described in http://msdn.microsoft.com/en-us/library/bb738523(v=vs.100).aspx, and an example would be:
using (DataContext pContext = new DataContext())
{
    using (TransactionScope transaction = new TransactionScope())
    {
        // execute commands
        pContext.SaveChanges();
        transaction.Complete();
    }
}
In any case, manually handling connections and transactions is not trivial. You should read Managing Connections and Transactions for some of the things you need to be aware of.
This:
using (DataContext pContext = new DataContext())
means that pContext is disposed after control leaves the using block.
it's equivalent to:
DataContext pContext = new DataContext();
try
{
    // do stuff here
}
finally
{
    if (pContext != null)
        ((IDisposable)pContext).Dispose();
}
So the answer is: you don't need to call Close after your code, because
"Connections are released back into the pool when you call Close or Dispose on the Connection..."
from here
I'm having trouble suppressing part of a transaction using Sql Server CE 4 with Entity Framework and System.Transactions.TransactionScope.
The simplified code below is from a unit test demonstrating the problem.
The idea is to enable the innerScope block (with no transaction) to succeed or fail without affecting the outerScope block (the "ambient" transaction). This is the stated purpose of TransactionScopeOption.Suppress.
However, the code fails because it seems that the entire SomeTable table is locked by the first insert in outerScope. At the point indicated in the code, this error is thrown:
"SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops. The default lock timeout can be increased in the connection string using the ssce: default lock timeout property. [ Session id = 2,Thread id = 2248,Process id = 13516,Table name = SomeTable,Conflict type = x lock (x blocks),Resource = PAG (idx): 1046 ]"
[TestMethod()]
[DeploymentItem("MyLocalDb.sdf")]
public void MyLocalDb_TransactionSuppressed()
{
    int count = 0;

    // This is the ambient transaction
    using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.Required))
    {
        using (MyObjectContext outerContext = new MyObjectContext())
        {
            // Do something in the outer scope
            outerContext.Connection.Open();
            outerContext.AddToSomeTable(CreateSomeTableRow());
            outerContext.SaveChanges();

            try
            {
                // Ambient transaction is suppressed for the inner scope of SQLCE operations
                using (TransactionScope innerScope = new TransactionScope(TransactionScopeOption.Suppress))
                {
                    using (MyObjectContext innerContext = new MyObjectContext())
                    {
                        innerContext.Connection.Open();
                        // This insert will work
                        innerContext.AddToSomeTable(CreateSomeTableRow());
                        innerContext.SaveChanges(); // ====> EXCEPTION THROWN HERE
                        // There will be other, possibly failing operations here
                    }
                    innerScope.Complete();
                }
            }
            catch { }
        }
        outerScope.Complete();
    }

    count = GetCountFromSomeTable();
    // The insert in the outer scope should succeed, and so should the one from the inner scope
    Assert.AreEqual(2, count);
}
So, it seems that "a transaction in a transaction scope executes with isolation level set to Serializable", according to http://msdn.microsoft.com/en-us/library/ms172001
However, using the following code snippet to change the isolation level of the TransactionScope does not help:
public void MyLocalDb_TransactionSuppressed()
{
    TransactionOptions opts = new TransactionOptions();
    opts.IsolationLevel = IsolationLevel.ReadCommitted;

    int count = 0;

    // This is the ambient transaction
    using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.Required, opts))
    ...
The same exception is thrown at the same location.
It seems the only way to avoid this is to call outerScope.Complete() before entering the innerScope block. But this would defeat the purpose.
What am I missing here?
Thanks.
AFAIK, SQL Server Compact does not support nested transactions.
And why do you do it this way? Looking at your code, there is no difference between running the second transaction scope inside the first one and running the two in sequence.
IMHO this is not a problem with SQL Compact, TransactionScope, or the isolation level; it is a problem with your application logic.
Each SaveChanges runs in a transaction - either the outer transaction defined by the TransactionScope or an inner DbTransaction. Even if it did not create one, every database command has its own implicit transaction. By using Suppress on the inner code block, you create two concurrent transactions that try to insert into the same table; moreover, the first cannot complete without the second completing, and the second cannot complete without the first => a deadlock.
The reason is that an insert command always locks part of the table, not allowing new inserts until it is committed or rolled back. I'm not sure whether this can be avoided by changing the transaction isolation level - if it can, you will most probably need ReadUncommitted.
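A minimal sketch of the sequential alternative suggested above, reusing the helpers from the question (assuming the inner work really is independent of the outer transaction):

// First unit of work commits on its own.
using (TransactionScope firstScope = new TransactionScope(TransactionScopeOption.Required))
using (MyObjectContext context = new MyObjectContext())
{
    context.AddToSomeTable(CreateSomeTableRow());
    context.SaveChanges();
    firstScope.Complete();
}

// Second unit of work runs afterwards; if it fails, the first commit is unaffected.
using (TransactionScope secondScope = new TransactionScope(TransactionScopeOption.Required))
using (MyObjectContext context = new MyObjectContext())
{
    context.AddToSomeTable(CreateSomeTableRow());
    context.SaveChanges();
    secondScope.Complete();
}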