I want to use an Oracle transaction across two different methods in C#, but I can't figure out how to do it. The code follows a 3-layer architecture.
Example:
In the code-behind file, on the Cancel button click:
On_Cancel_Button_Click()
{
    CancelMethod();      // cancelling the booked ticket
    if (some booked ticket is in the waiting list)
        ConfirmMethod(); // to confirm the waiting-list ticket
}
Both the Cancel and Confirm methods are defined in the .aspx.cs file. Each calls a BAL-class method, which in turn calls a DAL-class method (all the ADO.NET code is written in the DAL), and the BAL and DAL call different methods for each operation.
So how do I implement a transaction for this scenario?
The simplest way is to handle the transaction in the outer scope. E.g.:
void CancelAndConfirmTicket()
{
    using (var con = new OracleConnection(...))
    {
        con.Open();
        using (var tran = con.BeginTransaction())
        {
            Cancel(con);
            Confirm(con);
            tran.Commit();
        }
    }
}
There are other patterns for sharing a connection between methods (like dependency injection), or sharing a transaction (like TransactionScope), but the idea is the same: you define the scope and logic of the transaction in an outer layer of the application, and "enlist" the inner layers in the transaction.
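For example, the same flow with TransactionScope looks roughly like this (a sketch; Cancel and Confirm stand in for the BAL/DAL calls, which open their own connections internally and auto-enlist in the ambient transaction):

```csharp
using System.Transactions;

void CancelAndConfirmTicket()
{
    // Any connection opened inside this block auto-enlists
    // in the ambient transaction.
    using (var scope = new TransactionScope())
    {
        Cancel();   // BAL -> DAL: opens its own OracleConnection internally
        Confirm();  // likewise

        // If either call throws, Complete() is never reached and
        // disposing the scope rolls everything back.
        scope.Complete();
    }
}
```

The advantage is that the BAL/DAL methods don't need a connection parameter; the enlistment happens implicitly.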
Related
We've got a web-based, multi-tiered eHealth system platform that uses a class/object hierarchy. As an example, a "patient clinic visit" object may consist of multiple diagnosis objects, multiple medication objects, multiple observation objects, laboratory test objects and so on. Database transactions are handled using the .NET System.Transactions.TransactionScope.
When persisting data into the database, each nested object in the hierarchy:
instantiates a TransactionScope (with the default Required transaction option)
instantiates an NpgsqlConnection connection
does its own SQL, and
calls transactionScope.Complete() if all went well
To simplify, objects in the object hierarchy are doing like the code sample below:
void RootMethod()
{
using(TransactionScope scope = new TransactionScope())
{
/* Perform transactional work here */
SomeMethod();
AnotherMethod();
scope.Complete();
}
}
void SomeMethod()
{
using(TransactionScope scope = new TransactionScope())
{
using(NpgsqlConnection connection = new NpgsqlConnection(connectionString))
{
connection.Open();
/* Perform transactional work here */
scope.Complete();
}
}
}
void AnotherMethod()
{
using(TransactionScope scope = new TransactionScope())
{
using(NpgsqlConnection connection = new NpgsqlConnection(connectionString))
{
connection.Open();
/* Perform transactional work here */
scope.Complete();
}
}
}
Program code that is encapsulated in "using(TransactionScope scope = new TransactionScope()) {...}" blocks gets enlisted in the same transaction fine on Oracle (with an Oracle DB driver), but with Npgsql on Postgres separate transactions appear to be generated instead of one transaction.
As a result, transactions on Postgres fail because of foreign key constraints: data cannot be persisted in child tables, because the separate transactions that get created don't see data inserted into parent tables (in a separate transaction) before that data is committed by the parent objects.
We've got Enlist=true in the Npgsql connection string, and the server parameter max_prepared_transactions is set to a value greater than 0 on our Postgres server.
The Npgsql driver versions we've tested are 4.0.7 and 4.1.2. The Postgres server version in our development environment is 10.
The Npgsql documentation https://www.npgsql.org/doc/transactions.html says System.Transactions.TransactionScope is supported (and has been since v3.3), as do other Npgsql-related answers we've searched for on Stack Overflow.
At first glance the Npgsql unit tests would appear to use one database connection in transactional unit tests.
QUESTIONS:
Are multiple TransactionScopes with multiple participating database connections, as per the TransactionScope implementation guidelines (e.g. https://learn.microsoft.com/en-us/dotnet/framework/data/transactions/implementing-an-implicit-transaction-using-transaction-scope), supported on Npgsql?
Is there anything obvious we are missing here?
As stated above, with Npgsql 4.x each database connection opened in a "using(TransactionScope scope = new TransactionScope()) {...}" block appears to generate a new transaction instead of enlisting in one and the same transaction.
One needs to close an NpgsqlConnection opened in an outer TransactionScope before opening a second NpgsqlConnection in an inner TransactionScope (to allow Npgsql to internally reuse the same physical connection, without escalation to a distributed transaction to occur).
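In other words, the outer method has to take roughly this shape (a sketch assuming a connectionString variable and the SomeMethod from the question, which opens its own connection inside its own inner TransactionScope):

```csharp
using (var scope = new TransactionScope())
{
    using (var connection = new NpgsqlConnection(connectionString))
    {
        connection.Open();
        // outer transactional work here
    } // closed *before* any inner scope opens another connection

    // SomeMethod opens its own NpgsqlConnection; because the outer
    // connection is already closed, Npgsql can reuse the same physical
    // connection instead of escalating to a distributed transaction.
    SomeMethod();

    scope.Complete();
}
```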
I have an 'M' manager (conductor) instance, which controls two other instances, 'A' and 'B'. 'A' is an Entity Framework data saver; 'B' calls an external web service to do something. The code of 'M' looks like:
// code inside 'M'
void Save()
B.Save()
A.Save()
This is a kind of distributed transaction. When B.Save throws an exception, A.Save should not happen or should not finish. Now I have to change the code so that it works correctly. The problem is that 'M' does not know anything about how an EF transaction works or how to handle one, and A.Save cannot include the B.Save call. So I have to change it to something like:
Object transaction = A.PrepareSave()
try {
B.Save()
}
catch {
A.RollbackSave(transaction)
throw
}
A.FinishSave(transaction)
Where the A.PrepareSave() looks like (?)
TransactionScope scope = new TransactionScope()
var context = CreateContext()
... do something EF ...
context.SaveChanges(false)
return new MyCustomTuple(scope,context)
And where A.FinishSave(Object trans) looks like (?)
MyCustomTuple tuple = (MyCustomTuple)trans
TransactionScope scope = (TransactionScope)tuple.Scope
EFContext context = (EFContext)tuple.Context
scope.Complete()
context.AcceptAllChanges()
Question 1: Is this OK? Is this the way to handle such a situation? (I have no influence on B.Save(); it either saves or throws an exception.)
Question 2: How do I free the resources (scope, context) at the end? The 'M' manager does not know anything about the MyCustomTuple and its contents.
You can use the TransactionScope right in the M method; you don't need to handle it in different parts of your code.
using (var transaction = new TransactionScope())
{
A.Save();
B.Save();
transaction.Complete();
}
This will complete if both save methods complete; otherwise an exception is thrown, no call to Complete() is made, and so there is no commit. The using block will free the TransactionScope. As for disposing other resources, you can just do it the same way you're doing it now. You have not included any examples of this (I'd expect that the component that creates the context, maybe component A, handles the disposal of that context).
I am writing a Web API Rest service that does operations on a distinct set of entities. I have broken them down like this:
db = new DBEntities();
using (var dbContextTransaction = db.Database.BeginTransaction())
{
try
{
ProcessClient();
ProcessClientPerson();
ProcessGuardian();
ProcessAddress();
ProcessEmail();
ProcessPhones();
ProcessChildren();
dbContextTransaction.Commit();
}
catch (Exception ex)
{
dbContextTransaction.Rollback();
etc.
Following the advice that data contexts should live as short as possible, each of the methods creates its own data context, calls SaveChanges(), and disposes of it at the end:
private void ProcessClient()
{
db = new DBEntities();
....
This obviously does not work - a transaction context created this way is tied to the data context. If something goes wrong in one of the entity operations, only that operation is rolled back (implicitly), but the overarching transaction is not.
I found this approach for creating a transaction outside of EF, but I am wondering if I should follow it or if I should just let my data context live for the duration of the transaction and keep the transaction inside of EF!?
I am not looking for an opinion, but for data around stability, performance, etc.
There is no immediate need to keep contexts short-lived. You can do that but you don't have to.
Over time entities will accumulate in a context. If you risk running out of memory it can be necessary to let go of a context.
Otherwise, the usual procedure is to keep the context alive for the duration of the logical unit of work. Here, that UOW is all those methods in their entirety.
This also makes transaction management easier (as you already found out).
dbContextTransaction.Rollback();
This is an anti-pattern. Simply don't commit in case of error.
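Combining both points, the batch from the question can use one long-lived context and rely on Dispose() for the rollback (a sketch; the Process* methods are assumed here to take the shared context instead of creating their own):

```csharp
using (var db = new DBEntities())
using (var tx = db.Database.BeginTransaction())
{
    // One context and one transaction span the whole unit of work.
    ProcessClient(db);
    ProcessClientPerson(db);
    ProcessGuardian(db);
    ProcessAddress(db);
    ProcessEmail(db);
    ProcessPhones(db);
    ProcessChildren(db);

    db.SaveChanges();
    tx.Commit();
} // if anything throws before Commit(), disposing tx rolls the transaction back
```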
I have mixed feelings about this. I am working against a legacy database that has no foreign key constraints, and I am inserting, updating, and deleting between 20 and 30 objects in one of these service calls.
The problem is that I need to call SaveChanges() frequently to get the identity column values that will become foreign keys.
On the other hand, I have to be able to roll back everything if there is a problem three layers down, so a single large transaction is needed.
For some reason that I have not been able to determine, calling SaveChanges repeatedly on the same data context resulted in errors that the connection state was open. So I ended up giving each method its own data context anyway:
var scope = new TransactionScope(TransactionScopeOption.RequiresNew,
    new TransactionOptions() { IsolationLevel = IsolationLevel.ReadUncommitted });
using (scope)
{
try
{
ProcessClient();
ProcessClientPerson();
ProcessGuardian();
ProcessAddress();
ProcessEmail();
ProcessPhones();
ProcessChildren();
scope.Complete();
}
catch (System.Data.Entity.Validation.
DbEntityValidationException ex)
{
[...] handle validation errors etc [...]
}
}
with each section doing basically this, once stripped down to the bare essentials:
private void ProcessClient()
{
using (MyDBEntities db = new MyDBEntities())
{
[...] doing client stuff [...]
aClient.LastUpdated = DateTime.Now;
db.AddOrUpdate(db, db.Clients, aClient, aClient.ClientID);
db.SaveChanges();
ClientId = aClient.ClientID; // now I can use this to form FKs
}
}
Mixed feelings about locking, because on my development VM the transaction runs for 1-2 seconds and this is a production database with office staff and online customers doing CRUD transactions through web applications at the same time.
Unrelated, but helpful for my AddOrUpdate method was this blog post.
Please check the code sample below. I want the type-A process and the type-B process to be done either both together or not at all. Does the code below succeed?
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TimeSpan(0, 30, 0)))
{
con.Open();
//do A type process
con.Close();
con.Open();
//do B type process
con.Close();
scope.Complete();
}
P.S.: Please don't suggest using one connection; the reason is that I use the 3-tier architecture at this link (http://geekswithblogs.net/edison/archive/2009/04/05/a-simple-3-tier-layers-application-in-asp.net.aspx), and the A/B type processes are called via a function (a generic data class) which opens and closes its connection automatically. So the code above is an interpretation of my actual code.
With DTC, the transaction acts as a layer between your DB layer and the database, which means any changes made to the database will not be applied until you call .Complete(). It really doesn't matter which connection you use or how many databases are involved in the transaction.
Make sure you call .Complete() at the end of the transaction. You can even have nested transaction scopes:
Scope1
    Scope2
        Scope3
In the nesting above, data is only committed to the database when Scope1.Complete() is called, even though the child scopes call Complete() as well.
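As a sketch, that nesting corresponds to code like the following (all scopes use the default Required option, so the inner scopes join the root transaction rather than starting their own):

```csharp
using (var scope1 = new TransactionScope())          // root: creates the transaction
{
    using (var scope2 = new TransactionScope())      // joins scope1's ambient transaction
    {
        using (var scope3 = new TransactionScope())  // joins it as well
        {
            // transactional work here
            scope3.Complete();
        }
        scope2.Complete();
    }
    // Nothing is committed until the *root* scope completes.
    scope1.Complete();
}
```

If any scope is disposed without Complete() being called, the whole transaction is rolled back, including work done in scopes that did call Complete().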
In my application I have the following pattern:
using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required))
{
Function1();
Function2();
Function3();
}
My problem is that Function2 calls another function that connects to another DB, so the transaction becomes distributed and I get an exception.
Is there any way in the code to make a DB call that is not part of the current transaction? My code in Function2 does just a read, so I don't want it to be part of the current transaction.
Thanks, Radu
Around Function2 you could create a new scope with TransactionScopeOption.RequiresNew, thus forcing it into its own separate transaction. Since only one resource (the other database) will be used in that transaction, it shouldn't become distributed.
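A sketch of that shape, using the function names from the question:

```csharp
using (var outer = new TransactionScope(TransactionScopeOption.Required))
{
    Function1();

    // RequiresNew suspends the ambient transaction and starts an
    // independent one, so the call to the other database doesn't force
    // the outer transaction to escalate to a distributed transaction.
    using (var inner = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        Function2();
        inner.Complete();
    }

    Function3();
    outer.Complete();
}
```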