I'm facing the following problem. I have a piece of code that goes like this:
void DoSomething()
{
    using (TransactionScope scope = new TransactionScope())
    {
        InsertSomething();
        InsertSomethingElse();
        InsertYetAnotherThing();
        ProcessDataRelatedWithThePreviousInserts();
        scope.Complete();
    }
}
In ProcessDataRelatedWithThePreviousInserts I check a condition and, if needed, redirect the rest of the workflow to a message queue on another server. On the other server, I pick up the message and continue the workflow (basically, making some more inserts that are related to the ones in the DoSomething method).
That is the theory; in practice I only manage to do this if I remove the TransactionScope from the DoSomething method. Is there a way to accomplish what I want, or will I need to change the way the transactions are handled?
Have you already tried
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    // ...
    using (TransactionScope innerScope = new TransactionScope(TransactionScopeOption.Suppress))
    {
        ProcessDataRelatedWithThePreviousInserts();
    }
    scope.Complete();
}
explicitly suppressing the outer transaction for your call to ProcessDataRelatedWithThePreviousInserts()?
I have an 'M' manager (conductor) instance, which controls two other instances, 'A' and 'B'. 'A' is an Entity Framework data saver; 'B' calls an external web service to do something. The code of 'M' looks like:
// code inside 'M'
void Save()
{
    B.Save();
    A.Save();
}
This is a kind of distributed transaction. When B.Save throws an exception, A.Save should not happen or should not finish. Now I have to change the code so that it works correctly. The problem is that 'M' does not know anything about how an EF transaction works or how to handle one, and A.Save cannot include the B.Save call. So I have to change it to something like:
Object transaction = A.PrepareSave();
try
{
    B.Save();
}
catch
{
    A.RollbackSave(transaction);
    throw;
}
A.FinishSave(transaction);
Where A.PrepareSave() would look something like this (?):
TransactionScope scope = new TransactionScope();
var context = CreateContext();
// ... do something EF ...
context.SaveChanges(false);
return new MyCustomTuple(scope, context);
And where A.FinishSave(Object trans) would look like this (?):
MyCustomTuple tuple = (MyCustomTuple)trans;
TransactionScope scope = (TransactionScope)tuple.Scope;
EFContext context = (EFContext)tuple.Context;
scope.Complete();
context.AcceptAllChanges();
Question 1: Is this OK? Is this the way to handle such a situation? (I have no influence over B.Save(); it either saves or throws an exception.)
Question 2: How do I free the resources (scope, context) at the end? The 'M' manager does not know anything about the MyCustomTuple and its contents.
You can use the TransactionScope directly in the M method; you don't need to handle it in different parts of your code.
using (var transaction = new TransactionScope())
{
A.Save();
B.Save();
transaction.Complete();
}
This will commit if both save methods complete; otherwise an exception is thrown, no call to Complete() is made, and so there is no commit. The using block will free the TransactionScope. As for disposing other resources, you can do it the same way you are doing it now. You have not included any examples of this, but I'd expect the component that creates the context (maybe component A) to handle disposing of that context.
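As a minimal sketch of that last point (A.Save and CreateContext are the names from the question; the ownership arrangement is an assumption), component A can create and dispose its own context inside each call, so M never has to know about it:

```csharp
// Sketch only: assumes A owns its EF context for the duration of one Save call.
class A
{
    public void Save()
    {
        // The context is created and disposed entirely inside A, so the
        // caller ('M') never sees it. A connection opened here automatically
        // enlists in the ambient TransactionScope created in M.
        using (var context = CreateContext())
        {
            // ... do something EF ...
            context.SaveChanges();
        }
    }
}
```

With this arrangement, the tuple-passing (MyCustomTuple) and the PrepareSave/FinishSave/RollbackSave protocol become unnecessary.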
I want to update two FormViews within a transaction. If one of them fails, the other should fail too. The FormViews have their own entity data sources.
button1_Click(..........)
{
    formview1.UpdateItem(true);
    formview2.UpdateItem(true);
}
OK, so this may not be the simplest thing in the world.
The basic answer is that yes, you can do it. The code would be similar to this, if the UpdateItem method opens the DB connection:
using (TransactionScope scope = new TransactionScope())
{
    formview1.UpdateItem(true);
    formview2.UpdateItem(true);
    scope.Complete();
}
If, on the other hand, the connections are already open by the time UpdateItem is called, then you will need to do something more like:
using (TransactionScope scope = new TransactionScope())
{
    formview1.Connection.EnlistTransaction(Transaction.Current);
    formview2.Connection.EnlistTransaction(Transaction.Current);
    formview1.UpdateItem(true);
    formview2.UpdateItem(true);
    scope.Complete();
}
I'm having trouble suppressing part of a transaction using Sql Server CE 4 with Entity Framework and System.Transactions.TransactionScope.
The simplified code below is from a unit test demonstrating the problem.
The idea is to enable the innerScope block (with no transaction) to succeed or fail without affecting the outerScope block (the "ambient" transaction). This is the stated purpose of TransactionScopeOption.Suppress.
However, the code fails because it seems that the entire SomeTable table is locked by the first insert in outerScope. At the point indicated in the code, this error is thrown:
"SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops. The default lock timeout can be increased in the connection string using the ssce: default lock timeout property. [ Session id = 2,Thread id = 2248,Process id = 13516,Table name = SomeTable,Conflict type = x lock (x blocks),Resource = PAG (idx): 1046 ]"
[TestMethod()]
[DeploymentItem("MyLocalDb.sdf")]
public void MyLocalDb_TransactionSuppressed()
{
    int count = 0;

    // This is the ambient transaction
    using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.Required))
    {
        using (MyObjectContext outerContext = new MyObjectContext())
        {
            // Do something in the outer scope
            outerContext.Connection.Open();
            outerContext.AddToSomeTable(CreateSomeTableRow());
            outerContext.SaveChanges();

            try
            {
                // Ambient transaction is suppressed for the inner scope of SQLCE operations
                using (TransactionScope innerScope = new TransactionScope(TransactionScopeOption.Suppress))
                {
                    using (MyObjectContext innerContext = new MyObjectContext())
                    {
                        innerContext.Connection.Open();

                        // This insert will work
                        innerContext.AddToSomeTable(CreateSomeTableRow());
                        innerContext.SaveChanges(); // ====> EXCEPTION THROWN HERE

                        // There will be other, possibly failing operations here
                    }
                    innerScope.Complete();
                }
            }
            catch { }
        }
        outerScope.Complete();
    }

    count = GetCountFromSomeTable();

    // The insert in the outer scope should succeed, and so should the one from the inner scope
    Assert.AreEqual(2, count);
}
So, it seems that "a transaction in a transaction scope executes with isolation level set to Serializable", according to http://msdn.microsoft.com/en-us/library/ms172001
However, using the following code snippet to change the isolation level of the TransactionScope does not help:
public void MyLocalDb_TransactionSuppressed()
{
    TransactionOptions opts = new TransactionOptions();
    opts.IsolationLevel = IsolationLevel.ReadCommitted;

    int count = 0;

    // This is the ambient transaction
    using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.Required, opts))
    ...
The same exception is thrown at the same location.
It seems the only way to avoid this is to call outerScope.Complete() before entering the innerScope block. But this would defeat the purpose.
What am I missing here?
Thanks.
AFAIK, SQL Server Compact does not support nested transactions.
And why do you do it this way? If I look at your code, there is no difference between running the second transaction scope inside the first one and running the two in sequence.
IMHO this is not a problem of SQL Compact, TransactionScope, or the isolation level. This is a problem of your application logic.
Each SaveChanges call runs in a transaction: either the outer transaction defined by the TransactionScope, or an inner DbTransaction. Even if no explicit transaction were created, every database command has its own implicit transaction. If you use Suppress on the inner code block, you are creating two concurrent transactions that are trying to insert into the same table; moreover, the first cannot complete without completing the second, and the second cannot complete without completing the first => deadlock.
The reason is that an insert command always locks part of the table, not allowing new inserts until it is committed or rolled back. I'm not sure whether this can be avoided by changing the transaction isolation level; if it can, you will most probably need ReadUncommitted.
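The sequential alternative mentioned above can be sketched as follows (MyObjectContext, AddToSomeTable, and CreateSomeTableRow are the names from the question's test code):

```csharp
// Sketch: run the two pieces of work one after the other instead of nesting
// a suppressed scope inside the ambient transaction. The first scope commits
// and releases its locks before the second block starts, so the two inserts
// can never block each other.
using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.Required))
{
    using (MyObjectContext outerContext = new MyObjectContext())
    {
        outerContext.AddToSomeTable(CreateSomeTableRow());
        outerContext.SaveChanges();
    }
    outerScope.Complete(); // locks from the first insert are released here
}

// No ambient transaction exists any more; this insert runs in its own
// implicit transaction, exactly as the suppressed block was meant to.
using (MyObjectContext innerContext = new MyObjectContext())
{
    innerContext.AddToSomeTable(CreateSomeTableRow());
    innerContext.SaveChanges();
}
```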
In my application I have the following pattern:
using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required))
{
    Function1();
    Function2();
    Function3();
}
My problem is that Function2 calls another function that connects to another DB, so the transaction becomes distributed and I get an exception.
Is there any way in code to make a DB call that is not part of the current transaction? My code in Function2 only does a read, so I don't want it to be part of the current transaction.
Thanks, Radu
Around Function2 you could create a new transaction with TransactionScopeOption.RequiresNew, thus forcing it into its own separate transaction. Since only one resource (the other database) will be used in that transaction, it shouldn't become distributed.
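A sketch of that suggestion, using the function names from the question (and assuming the outer scope is completed on success, which the question's snippet omits):

```csharp
using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required))
{
    Function1();

    // Give Function2 its own, independent transaction so that its connection
    // to the other database does not enlist in (and escalate) the outer one.
    using (TransactionScope readScope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        Function2();
        readScope.Complete();
    }

    Function3();
    transaction.Complete();
}
```

Since Function2 only reads, a scope created with TransactionScopeOption.Suppress in the same position would also keep its connection out of the outer transaction.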
Under what circumstances can code wrapped in a System.Transactions.TransactionScope still commit, even though an exception was thrown and the outermost scope never had commit called?
There is a top-level method wrapped in using (var tx = new TransactionScope()), and that calls methods that also use TransactionScope in the same way.
I'm using typed datasets with associated tableadapters. Could it be that the commands in the adapter aren't enlisting for some reason? Do any of you know how one might check whether they are enlisting in the ambient TransactionScope or not?
The answer turned out to be because I was creating the TransactionScope object after the SqlConnection object.
I moved from this:
using (new ConnectionScope())
using (var transaction = new TransactionScope())
{
    // Do something that modifies data
    transaction.Complete();
}
to this:
using (var transaction = new TransactionScope())
using (new ConnectionScope())
{
    // Do something that modifies data
    transaction.Complete();
}
and now it works!
So the moral of the story is to create your TransactionScope first.
The obvious scenario would be where a new (RequiresNew) or null (Suppress) transaction is explicitly specified, but there is also a timeout/unbinding glitch that can cause connections to miss the transaction. See this earlier post (the fix is just a connection-string change), or the full details.
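The RequiresNew case mentioned above can be sketched like this: the inner scope commits on its own, so its work survives even though the outer scope is never completed.

```csharp
using (var outerScope = new TransactionScope())
{
    // This inner scope runs in its OWN transaction, independent of outerScope.
    using (var innerScope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        // Any work done here is committed when innerScope.Complete() is
        // called, regardless of what later happens to outerScope.
        innerScope.Complete();
    }

    // outerScope.Complete() is never called, so the *outer* transaction
    // rolls back, but the inner scope's work has already been committed.
}
```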
Be aware of how TransactionScope works:
It sets the property System.Transactions.Transaction.Current at the beginning of the using scope and then sets it back to the previous value at the end of the using scope.
The previous value depends on where the given scope is declared; it can be inside another scope.
You can modify code like this:
using (var sqlConnection = new ConnectionScope())
using (var transaction = new TransactionScope())
{
    sqlConnection.EnlistTransaction(System.Transactions.Transaction.Current);
    // Do something that modifies data
    transaction.Complete();
}
I show this possibility for those whose code is more complicated and who cannot simply change it to open the DB connection first.
This example (C#, .NET Framework 4.7.1) shows how data can be persisted to the database even though the code is wrapped in a TransactionScope. The first insert will be rolled back, and the second insert will be committed without a transaction.
See my related post, where I ask for help in how to detect this.
using (var transactionScope = new TransactionScope())
{
    using (var connection = new SqlConnection("Server=localhost;Database=TestDB;Trusted_Connection=True"))
    {
        connection.Open();
        new SqlCommand("INSERT INTO TestTable VALUES('This will be rolled back later')", connection).ExecuteNonQuery();

        // Beginning and rolling back this nested transaction rolls back the
        // first insert and leaves the connection outside the ambient transaction.
        var someNestedTransaction = connection.BeginTransaction();
        someNestedTransaction.Rollback();

        new SqlCommand("INSERT INTO TestTable VALUES('This is not in a transaction, and will be committed')", connection).ExecuteNonQuery();
    }

    throw new Exception("Exiting.");

    // Never reached: the scope is disposed without Complete(), so the ambient
    // transaction rolls back -- yet the second insert persists anyway.
    transactionScope.Complete();
}