When I write the code below, an error is returned. I followed the advice from http://stackoverflow.com/questions/794364/how-do-i-use-transactionscope-in-c, but only the error changed: "The partner transaction manager has disabled its support for remote/network transactions (Exception from HRESULT: 0x8004D025)".
I am using Windows Server 2003.
using (var stockMovementCtx = new StockMovementCtxDataContext())
{
using (var scope = new TransactionScope())
{
// do something....
scope.Complete();
}
}
But if I change my code as follows, everything is OK:
using (var stockMovementCtx = new StockMovementCtxDataContext())
{
// do something....
}
How can I solve this error? This is really important. Please help me.
TransactionScope will elevate to DTC if necessary. It sounds like DTC is not correctly configured, usually due to firewall restrictions. Try a dtcping between the servers (important: in both directions).
DataContext by default wraps all operations in a transaction, so you don't need an explicit transaction while working with a DataContext. Read this:
http://weblogs.asp.net/scottgu/archive/2007/07/11/linq-to-sql-part-4-updating-our-database.aspx
using (var stockMovementCtx = new StockMovementCtxDataContext())
{
// do something....
//everything until this point is within the transaction.
stockMovementCtx.SubmitChanges();
}
Why do we need TransactionScope?
TransactionScope enables you to perform transactions beyond the database. Say you have the series of operations below, all of which need to be performed atomically within a single transaction.
1.) Add record in Table 1
2.) Add record in Table 2
3.) Write a NEW file on the Disk
4.) Rename file X on the disk
5.) Update file record in Table 3
If you use SqlTransaction, only operations 1, 2 and 5 can participate in the transaction; 3 and 4 cannot, because they do not involve the database at all. In such cases TransactionScope can help you. TransactionScope leverages MSDTC (the Microsoft Distributed Transaction Coordinator), which lets you perform transactions beyond the database context. You can wrap all five operations within a TransactionScope transaction and execute them atomically. It is also worth noting that TransactionScope supports nested transactions, meaning one transaction block can contain multiple sets of transactions.
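The pattern can be sketched roughly as follows. This is a minimal illustration, not code from the original post: the table names, column names, file paths and connection string are placeholders, and note that plain System.IO calls do not enlist in the transaction by themselves - they would need a transaction-aware resource manager to actually roll back.
using System.Data.SqlClient;
using System.IO;
using System.Transactions;

class AtomicOperationsSketch
{
    static void Run(string connectionString)
    {
        using (var scope = new TransactionScope())
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open(); // the connection enlists in the ambient transaction

                // 1.) and 2.) add the database records
                new SqlCommand("INSERT INTO Table1 (Name) VALUES ('a')", conn).ExecuteNonQuery();
                new SqlCommand("INSERT INTO Table2 (Name) VALUES ('b')", conn).ExecuteNonQuery();

                // 3.) and 4.) the file operations - plain System.IO calls do NOT enlist
                // in the transaction; they are shown here only to mark their place.
                File.WriteAllText(@"C:\data\new.txt", "content");
                File.Move(@"C:\data\X.txt", @"C:\data\X-renamed.txt");

                // 5.) update the file record
                new SqlCommand("UPDATE Table3 SET FileName = 'X-renamed.txt' WHERE Id = 1", conn)
                    .ExecuteNonQuery();
            }

            // Nothing enlisted in the transaction is committed until Complete() is called.
            scope.Complete();
        }
    }
}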
Related
Please, can someone help? I'm trying to use a transaction in EF6; it's my first attempt at transactions in .NET.
I have read Microsoft's guide on their site: http://msdn.microsoft.com/en-us/data/dn456843.aspx. I would like to use a transaction for user registration, but nothing is working. I'm using an EF ObjectContext.
Here is a small piece of my code:
using (var context = new MyModelContainer())
{
using(var dbContextTransaction = context.Database.BeginTransaction())
{
try
{
// code here to create user
context.SaveChanges();
// code here to add role
context.SaveChanges();
dbContextTransaction.Commit();
}
catch
{
dbContextTransaction.Rollback();
}
}
}
My problem is that Visual Studio does not even recognize Database.BeginTransaction(). Maybe that's because we are using an ObjectContext? I have never used transactions before. When I switch to another database where our model is a DbContext, it seems to work.
How can we use an ObjectContext with a transaction as well (I mean a local transaction, not a distributed one)?
Any tutorial, please?
I tried TransactionScope and it seems to work, but I read that it's for distributed systems, which means performance goes down. Any suggestions?
Thanks for your time!
Replace the line
using(var dbContextTransaction = context.Database.BeginTransaction())
with this:
using(var dbContextTransaction = new TransactionScope())
and remove the call to Rollback (and call Complete() on the scope where you previously called Commit()). TransactionScope will do what you need.
TransactionScope is compatible with distributed systems, but if you do not call out to other databases and use it as described, it is more than good enough for your needs. If you do try to save to another database (for example), it will require special privileges and an exception will be thrown.
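Applied to the code in the question, the suggestion looks roughly like this (a sketch only; MyModelContainer and the user/role steps come from the question above):
using (var context = new MyModelContainer())
{
    using (var dbContextTransaction = new TransactionScope())
    {
        // code here to create user
        context.SaveChanges();

        // code here to add role
        context.SaveChanges();

        // Complete() replaces Commit(); if an exception is thrown before this line,
        // disposing the scope rolls everything back, so no explicit Rollback is needed.
        dbContextTransaction.Complete();
    }
}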
I am using Parallel.ForEach to do work on multiple threads, using a new EF5 DbContext for each iteration, all wrapped within a TransactionScope, as follows:
using (var transaction = new TransactionScope())
{
int[] supplierIds;
using (var appContext = new AppContext())
{
supplierIds = appContext.Suppliers.Select(s => s.Id).ToArray();
}
Parallel.ForEach(
supplierIds,
supplierId =>
{
using (var appContext = new AppContext())
{
// Do some work...
appContext.SaveChanges();
}
});
transaction.Complete();
}
After running for a few minutes it is throwing an EntityException "The underlying provider failed on Open" with the following inner detail:
"The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions."
Does anyone know what's causing this or how it can be prevented? Thanks.
You could also try setting the maximum number of concurrent tasks in the Parallel.ForEach() call using new ParallelOptions { MaxDegreeOfParallelism = 8 } (replace 8 with whatever you want to limit it to).
See MSDN for more details
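A sketch of how that would plug into the loop from the question (8 is just an example limit):
var options = new ParallelOptions { MaxDegreeOfParallelism = 8 };

Parallel.ForEach(
    supplierIds,
    options,
    supplierId =>
    {
        using (var appContext = new AppContext())
        {
            // Do some work...
            appContext.SaveChanges();
        }
    });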
You should also find out why your app is taking such a huge number of locks. You have wrapped a TransactionScope around multiple DB connections; this probably escalates to a distributed transaction, which might have something to do with it. It certainly causes the locks to never be released until the very end. Change that.
You can only turn up locking limits so far. That does not scale to arbitrary numbers of supplier IDs. You need to find the cause of the locks, not mitigate the symptoms.
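For example, if each supplier's work can commit on its own, one possible restructuring (a sketch, not the only option) is to drop the outer TransactionScope and give each iteration its own scope, so locks are released as each unit of work finishes:
Parallel.ForEach(
    supplierIds,
    supplierId =>
    {
        using (var scope = new TransactionScope(TransactionScopeOption.Required))
        using (var appContext = new AppContext())
        {
            // Do some work...
            appContext.SaveChanges();

            // Commit this supplier's work independently of the others.
            scope.Complete();
        }
    });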
You are running into the maximum number of locks allowed by SQL Server, which by default is set automatically and governed by available memory.
You can:
1.) Set it manually - I forget exactly how, but Google is your friend.
2.) Add more memory to your SQL Server.
3.) Commit your transactions more frequently.
I'm having trouble suppressing part of a transaction using Sql Server CE 4 with Entity Framework and System.Transactions.TransactionScope.
The simplified code below is from a unit test demonstrating the problem.
The idea is to enable the innerScope block (with no transaction) to succeed or fail without affecting the outerScope block (the "ambient" transaction). This is the stated purpose of TransactionScopeOption.Suppress.
However, the code fails because it seems that the entire SomeTable table is locked by the first insert in outerScope. At the point indicated in the code, this error is thrown:
"SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops. The default lock timeout can be increased in the connection string using the ssce: default lock timeout property. [ Session id = 2,Thread id = 2248,Process id = 13516,Table name = SomeTable,Conflict type = x lock (x blocks),Resource = PAG (idx): 1046 ]"
[TestMethod()]
[DeploymentItem("MyLocalDb.sdf")]
public void MyLocalDb_TransactionSuppressed()
{
int count = 0;
// This is the ambient transaction
using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.Required))
{
using (MyObjectContext outerContext = new MyObjectContext())
{
// Do something in the outer scope
outerContext.Connection.Open();
outerContext.AddToSomeTable(CreateSomeTableRow());
outerContext.SaveChanges();
try
{
// Ambient transaction is suppressed for the inner scope of SQLCE operations
using (TransactionScope innerScope = new TransactionScope(TransactionScopeOption.Suppress))
{
using (MyObjectContext innerContext = new MyObjectContext())
{
innerContext.Connection.Open();
// This insert will work
innerContext.AddToSomeTable(CreateSomeTableRow());
innerContext.SaveChanges(); // ====> EXCEPTION THROWN HERE
// There will be other, possibly failing operations here
}
innerScope.Complete();
}
}
catch { }
}
outerScope.Complete();
}
count = GetCountFromSomeTable();
// The insert in the outer scope should succeed, and so should the one from the inner scope.
Assert.AreEqual(2, count);
}
So, it seems that "a transaction in a transaction scope executes with isolation level set to Serializable", according to http://msdn.microsoft.com/en-us/library/ms172001
However, using the following code snippet to change the isolation level of the TransactionScope does not help:
public void MyLocalDb_TransactionSuppressed()
{
TransactionOptions opts = new TransactionOptions();
opts.IsolationLevel = IsolationLevel.ReadCommitted;
int count = 0;
// This is the ambient transaction
using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.Required, opts))
...
The same exception is thrown at the same location.
It seems the only way to avoid this is to call outerScope.Complete() before entering the innerScope block. But this would defeat the purpose.
What am I missing here?
Thanks.
AFAIK SQL Server Compact does not support nested transactions.
And why do you do it this way? Looking at your code, there is no difference between running the second transaction scope inside the first one and running them in sequence.
IMHO this is not a problem with SQL Compact, TransactionScope or the isolation level. It is a problem with your application logic.
Each SaveChanges runs in a transaction - either the outer transaction defined by the TransactionScope or an inner DbTransaction. Even if it did not create one, every database command has its own implicit transaction. If you use Suppress on the inner code block you are creating two concurrent transactions that are trying to insert into the same table; moreover, the first cannot complete without the second completing, and the second cannot complete without the first completing => deadlock.
The reason is that an insert command always locks part of the table, not allowing new inserts until it is committed or rolled back. I'm not sure whether this can be avoided by changing the transaction isolation level - if it can, you will most probably need ReadUncommitted.
I have a problem with this piece of code. When I run it, I want it NOT to lock the table(s) used by the transaction. To achieve this I set the isolation level to ReadUncommitted.
The problem is that it still locks the table; it acts as if the isolation level were Serializable. I'm using SQL Server 2008.
Here is the code:
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions() { IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted }))
{
while (true)
{
using (SqlConnection connection = new SqlConnection(ConnectionString))
{
connection.Open();
Console.WriteLine(Transaction.Current.IsolationLevel);
SqlUtils.ExecuteNonQuery(connection, "INSERT INTO test4 (test) VALUES ('ASDASDASD')");
}
Thread.Sleep(1000);
}
scope.Complete();
}
The transaction isolation level Read Uncommitted only applies when you read data (as the name says). It will read data that's not been committed yet.
There's no way I know of to stop SQL Server from putting locks on tables when you INSERT or UPDATE data.
ReadUncommitted, like the name suggests, impacts readers - i.e. should read operations take read-locks and key-range-locks; should they respect existing locks, etc.
I wonder whether IsolationLevel.Chaos would offer anything here, but please don't use that. Please.
If your competing reader needs to see uncommitted data, then change the isolation-level of the reader. Also, it goes without saying, but long-running transactions (and DTC/LTM transactions in particular) are not recommended.
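For example, a reader that opts into ReadUncommitted could look roughly like this (a sketch only, reusing the ConnectionString and test4 table from the question):
var readOptions = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, readOptions))
using (var connection = new SqlConnection(ConnectionString))
{
    connection.Open();
    using (var command = new SqlCommand("SELECT COUNT(*) FROM test4", connection))
    {
        // Sees rows inserted by the writer even though they are not committed yet.
        Console.WriteLine((int)command.ExecuteScalar());
    }
    scope.Complete();
}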
I am getting the following error when I try to call a stored procedure that contains a SELECT Statement:
The operation is not valid for the state of the transaction
Here is the structure of my calls:
public void MyAddUpdateMethod()
{
using (TransactionScope Scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
using(SQLServer Sql = new SQLServer(this.m_connstring))
{
//do my first add update statement
//do my call to the select statement sp
bool DoesRecordExist = this.SelectStatementCall(id);
}
}
}
public bool SelectStatementCall(System.Guid id)
{
using(SQLServer Sql = new SQLServer(this.m_connstring)) //breaks on this line
{
//create parameters
//
}
}
Is the problem with me creating another connection to the same database within the transaction?
After doing some research, it seems I cannot have two connections open to the same database within the same TransactionScope block. I needed to modify my code to look like this:
public void MyAddUpdateMethod()
{
using (TransactionScope Scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
using(SQLServer Sql = new SQLServer(this.m_connstring))
{
//do my first add update statement
}
//removed the method call from the first sql server using statement
bool DoesRecordExist = this.SelectStatementCall(id);
}
}
public bool SelectStatementCall(System.Guid id)
{
using(SQLServer Sql = new SQLServer(this.m_connstring))
{
//create parameters
}
}
When I encountered this exception, there was an InnerException "Transaction Timeout". Since this was during a debug session, when I halted my code for some time inside the TransactionScope, I chose to ignore this issue.
When this specific exception with a timeout appears in deployed code, I think the following section in your .config file will help you out:
<system.transactions>
<machineSettings maxTimeout="00:05:00" />
</system.transactions>
I also came across the same problem; I changed the transaction timeout to 15 minutes and it works.
I hope this helps.
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
options.Timeout = new TimeSpan(0, 15, 0);
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required,options))
{
sp1();
sp2();
...
}
For any wanderer who comes across this in the future: if your application and database are on different machines and you are getting the above error, especially when using TransactionScope, enable network DTC access. The steps to do this are:
1.) Add firewall rules to allow your machines to talk to each other.
2.) Ensure the Distributed Transaction Coordinator service is running.
3.) Enable network DTC access: run dcomcnfg, go to Component Services > My Computer > Distributed Transaction Coordinator > Local DTC, right-click and select Properties.
4.) Under Security Settings, check "Network DTC Access" (and allow inbound/outbound communication as needed).
Important: do not edit or change the user account and password in the DTC Logon Account field; leave it as is. You will end up re-installing Windows if you do.
I've encountered this error when my Transaction is nested within another. Is it possible that the stored procedure declares its own transaction or that the calling function declares one?
For me, this error came up when I was trying to rollback a transaction block after encountering an exception, inside another transaction block.
All I had to do to fix it was to remove my inner transaction block.
Things can get quite messy when using nested transactions, best to avoid this and just restructure your code.
In my case, the solution was neither to increase the timeout of the TransactionScope nor to increase the maxTimeout in the "machineSettings" element of "system.transactions" in the machine.config file.
Something strange was going on here, because the error only happened when the volume of data was very high.
The problem turned out to be that the code inside the transaction contained many "foreach" loops that updated different tables (I had to fix this in code developed by other people). With only a few records in the tables the error did not appear, but as the number of records grew, the error showed up.
In the end the solution was to change from a single transaction to several separate ones, one for each of the "foreach" loops that had been inside the transaction.
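The shape of that change was roughly the following (the table names and update methods below are placeholders, not the actual code):
// Before: one TransactionScope wrapped around all of the foreach loops,
// holding every lock until the very end.

// After: each loop commits its own work.
foreach (var row in rowsForTableA)
{
    using (var scope = new TransactionScope())
    {
        UpdateTableA(row);   // placeholder for the original update logic
        scope.Complete();
    }
}

foreach (var row in rowsForTableB)
{
    using (var scope = new TransactionScope())
    {
        UpdateTableB(row);
        scope.Complete();
    }
}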
You can't have two transactions open at the same time. What I do is call transaction.Complete() before returning results that will be used by the second transaction.
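Roughly like this (a sketch with placeholder types and method names):
List<Record> results;

using (var scope1 = new TransactionScope())
{
    results = LoadRecords();   // first unit of work
    scope1.Complete();         // complete before the results are consumed elsewhere
}

using (var scope2 = new TransactionScope())
{
    ProcessRecords(results);   // second unit of work uses the results
    scope2.Complete();
}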
I updated some proprietary 3rd party libraries that handled part of the database process, and got this error on all calls to save. I had to change a value in the web.config transaction key section because (it turned out) the referenced transaction scope method had moved from one library to another.
There were other errors previously connected with that key, but I got rid of them by commenting the key out (I know I should have been suspicious at that point, but I assumed it was now redundant, as it wasn't in the shiny new library).