I have an application that reads data from one database, transforms it into a new form, and writes it into a new database. Some of the tables in the new database are built from multiple tables in the old database, so there is a large amount of reading and writing going on. Here is the basic concept of the system:
public void TransferData()
{
    OldEntities oldContext = new OldEntities();
    NewEntities newContext = new NewEntities();

    using (var transaction = newContext.Database.BeginTransaction())
    {
        try
        {
            TransferTable(oldContext, newContext);
        }
        catch (Exception e)
        {
            transaction.Rollback();
        }
    }
}
public void TransferTable(OldEntities oldContext, NewEntities newContext)
{
    List<Entity1> mainTable = oldContext.Where();

    Parallel.ForEach(mainTable, (row) =>
    {
        using (NewEntities anotherNewContext = new NewEntities())
        {
            anotherNewContext.Database.UseTransaction(newContext.Database.CurrentTransaction.UnderlyingTransaction);
            // Do Work
        }
    });
}
This causes the following exception:
The transaction passed in is not associated with the current connection. Only transactions associated with the current connection may be used.
How can I get around this? The transaction will always come from a different EF context, but I need all the contexts to share the same transaction. I couldn't find a way to create the new context as a "child" of the original, and I am trying to avoid creating a transaction entirely separate from the EF context. Any suggestions?
There is an excellent overview of transactions here which explains how to use transactions in a variety of contexts, some of which are similar to yours. Rather than trying to fix your code as is, it may be that a modified approach will help.
I assume you are using EF6.
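For reference, the EF6 pattern described there is to create one connection and one transaction yourself and hand both to every context. The sketch below assumes your generated contexts have (or can be given) a constructor that accepts an existing DbConnection plus a contextOwnsConnection flag; also note that a single connection and transaction cannot be shared by concurrent Parallel.ForEach threads, so the work has to be sequential (or each thread needs its own connection and transaction).

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var sqlTransaction = conn.BeginTransaction())
    {
        // First context: does not own the connection, reuses the existing transaction.
        using (var newContext = new NewEntities(conn, contextOwnsConnection: false))
        {
            newContext.Database.UseTransaction(sqlTransaction);
            // ... write some entities ...
            newContext.SaveChanges();
        }

        // Second context over the same connection and the same transaction.
        using (var anotherNewContext = new NewEntities(conn, contextOwnsConnection: false))
        {
            anotherNewContext.Database.UseTransaction(sqlTransaction);
            // ... write more entities ...
            anotherNewContext.SaveChanges();
        }

        // Nothing is visible to other connections until this commit.
        sqlTransaction.Commit();
    }
}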
Related
I'm stuck on a problem that will affect anyone mixing ADO.NET with Entity Framework contexts.
I've got a big procedure that handles and saves data to the database in several ways, such as ADO.NET data adapters and direct CRUD commands against the DB. The whole procedure is wrapped in two using() blocks that create and release a DbConnection/DbTransaction, and in a try/catch block that commits or rolls back the transaction. Unfortunately, in the middle of this routine, I have to call a saving procedure implemented with Entity Framework. This leads me to a problem:
According to the official documentation, Entity Framework 5 allows me to pass in a connection with an associated transaction (and it appears to work: in debug mode, when I call SaveChanges() I don't get a TimeoutException due to deadlocks, whereas I do if I pass a new connection), but unfortunately after SaveChanges() runs, the connection is closed and the associated transaction is committed! Even if I set the 'contextOwnsConnection' flag!
As far as I know, if I migrate from EF5 to EF6, things should work (am I right?), but unfortunately I can't, because the project I'm working on is very large, involves a lot of dependencies, and migrating would take a large amount of time.
How can I make this work with EF5? Is there any trick or pattern to achieve the desired result? Am I right about the behavior of EF6? Is the EF6 migration worth it?
Here is a simple example of what my code looks like.
For privacy reasons I can't post the original code, but imagine a situation like this with a lot more complexity:
using (DbConnection conn = DBProvider.CreateConnection())
{
    // Open the created connection
    conn.Open();

    // Create and begin a new transaction on that connection
    using (DbTransaction tr = conn.BeginTransaction())
    {
        bool saveOk;
        try
        {
            // Update customers by using a data adapter
            dataAdapterCustomers.InsertCommand.Transaction = tr;
            dataAdapterCustomers.UpdateCommand.Transaction = tr;
            dataAdapterCustomers.DeleteCommand.Transaction = tr;
            dataAdapterCustomers.Update();

            // Update stock items by using a data adapter
            stockAdapterCustomers.InsertCommand.Transaction = tr;
            stockAdapterCustomers.UpdateCommand.Transaction = tr;
            stockAdapterCustomers.DeleteCommand.Transaction = tr;
            stockAdapterCustomers.Update();

            // ...many other DB accesses here...

            // Update stock quantity by using a simple DbCommand
            quantityUpdateCmd.Transaction = tr;
            quantityUpdateCmd.ExecuteNonQuery();

            // Update stock statistics by using a simple DbCommand
            updateStockStatsCmd.Transaction = tr;
            updateStockStatsCmd.ExecuteNonQuery();

            // ...many other DB accesses here...

            // HERE:
            // Create a new activity and save it using EF.
            // I use a UnitOfWork and pass it my connection and 'false' as the contextOwnsConnection parameter
            // (it will be used by the DbContext contained in my unit of work).
            using (ActivityUoW uow = new ActivityUoW(conn, false))
            {
                Activity act = new Activity();
                act.Name = "Saving activity";
                act.Description = "Done by user";
                act.Date = DateTime.Now;
                uow.Activities.Add(act);
                uow.SaveChanges();
            }

            // Based on the activity result, launch a stored procedure that does other complex things.
            // UNFORTUNATELY THE CONNECTION HAS BEEN CLOSED AND THE TRANSACTION COMMITTED,
            // SO THE FOLLOWING INSTRUCTION WILL FAIL.
            launchActivityUpdateSpCmd.Transaction = tr;
            launchActivityUpdateSpCmd.ExecuteNonQuery();

            // ...many other DB accesses here...

            // Data saved correctly
            saveOk = true;
        }
        catch (Exception ex)
        {
            // There was an error during save
            saveOk = false;
        }

        // Commit or roll back the transaction according to the save result
        if (saveOk)
            tr.Commit();
        else
            tr.Rollback();
    }
}
I didn't quite follow your question(s) and wasn't sure if your issue was related to how to handle transactions or if you had a question about EF5 to EF6 migrations. That being said, you have an interesting mixture of data access code.
Regarding transactions - I would look into using TransactionScope, which is part of the System.Transactions namespace.
For example:
try
{
    using (var scope = new TransactionScope())
    {
        using (var conn = new SqlConnection("your connection string"))
        {
            conn.Open();
            // your EF and ADO.NET code
        }
        scope.Complete();
    }
}
catch (TransactionAbortedException ex)
{
}
catch (ApplicationException ex)
{
}
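To make the mixture a bit more concrete, here is a rough sketch (the table, entity, and context names are made up for illustration) of ADO.NET and EF work sharing one ambient transaction. Because both connections are opened inside the scope, they enlist automatically, and nothing is committed until Complete() is called. Be aware that enlisting more than one physical connection can escalate the transaction to MSDTC, depending on the provider and server version.

using System;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Transactions;

// Hypothetical entity and context, for illustration only.
public class Activity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Date { get; set; }
}

public class MyDbContext : DbContext
{
    public MyDbContext(string nameOrConnectionString) : base(nameOrConnectionString) { }
    public DbSet<Activity> Activities { get; set; }
}

public static class MixedSaveExample
{
    public static void SaveEverything(string connectionString)
    {
        using (var scope = new TransactionScope())
        {
            // Plain ADO.NET: the connection auto-enlists because it is opened inside the scope.
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var cmd = new SqlCommand("UPDATE Stock SET Quantity = Quantity - 1 WHERE Id = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", 42);
                    cmd.ExecuteNonQuery();
                }
            }

            // Entity Framework: its connection also enlists in the same ambient transaction.
            using (var ctx = new MyDbContext(connectionString))
            {
                ctx.Activities.Add(new Activity { Name = "Saving activity", Date = DateTime.Now });
                ctx.SaveChanges();
            }

            // Commit both pieces of work, or neither.
            scope.Complete();
        }
    }
}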
I need to make the code below atomic, i.e. it should fail or succeed as a single unit. How can I go about achieving that?
void Processor(Input input)
{
    var mapper = new Mapper(recordDetails);
    int remainingRecords = GetCountForRemainingRecords(recordDetails);
    try
    {
        while (remainingRecords > 0)
        {
            mapper.CreateRecords(dataset);
            Validate(dataset);

            // Save(dataset) uses SqlBulkCopy: it maps the tables, uses a transaction, and saves.
            Save(dataset);

            // I cannot perform the operation below on the dataset directly because the dataset
            // doesn't have the records that are already in the database.
            // The method below eventually calls a stored proc that sends a list of recently created users.
            OutdateDuplicateUsers(dataset.userTable);

            remainingRecords = MethodToGetUpdatedCount();
        }
    }
    catch (Exception exception)
    {
        // exception handler..
    }
}
Now if OutdateDuplicateUsers throws an exception, I still end up with the accounts that the Save method persisted. I do not want that to happen.
I want both the Save and OutdateDuplicateUsers methods to be atomic. I read a great article about TransactionScope, and it seemed to be exactly what I want. The implementation looks straightforward in the article, but I could not get it working myself.
What I tried:
void Processor(Input input)
{
    var mapper = new Mapper(recordDetails);
    int remainingRecords = GetCountForRemainingRecords(recordDetails);
    try
    {
        while (remainingRecords > 0)
        {
            using (var scope = new TransactionScope())
            {
                try
                {
                    mapper.CreateRecords(dataset);
                    Validate(dataset);

                    // Save(dataset) uses SqlBulkCopy: it maps the tables, uses a transaction, and saves.
                    Save(dataset);

                    // I cannot perform this operation on the dataset directly because the dataset
                    // doesn't have the records that are already in the database.
                    // The method below eventually calls a stored proc that sends a list of recently created users.
                    OutdateDuplicateUsers(dataset.userTable);

                    remainingRecords = MethodToGetUpdatedCount();
                    scope.Complete();
                }
                catch (Exception)
                {
                    // Not both at the same time; I tried each of these, one at a time.
                    scope.Dispose();
                    Transaction.Current.Rollback();
                    // exception handler
                }
            }
        }
    }
    catch (Exception exception)
    {
        // exception handler..
    }
}
Update:
The dataset is a strongly typed dataset and is schema-only. The CreateRecords and Validate methods populate the data based on the business logic. The 'mapper' takes in recordDetails, which is, for instance, a list of users (updated the snippet).
What I mean by "doesn't work" is that if the OutdateDuplicateUsers() method throws an exception and cannot complete the outdating operation, I can still see that the records from the Save(dataset) method have been persisted in the database, which is exactly what I am trying to prevent.
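A detail that often trips this pattern up (a guess, since Save isn't shown) is that SqlBulkCopy only participates in the ambient transaction if its connection is opened inside the TransactionScope and it is not created with SqlBulkCopyOptions.UseInternalTransaction. A minimal sketch of a Save that would enlist, with a hypothetical connection string and destination table:

using System.Data;
using System.Data.SqlClient;

static void Save(DataSet dataset, string connectionString)
{
    // The connection is opened inside the ambient TransactionScope created by the caller,
    // so it auto-enlists and the bulk copy joins the same transaction as OutdateDuplicateUsers.
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        // Do NOT pass SqlBulkCopyOptions.UseInternalTransaction here; that option
        // conflicts with an enlisted connection.
        using (var bulkCopy = new SqlBulkCopy(conn))
        {
            bulkCopy.DestinationTableName = "dbo.Users";              // assumed table name
            bulkCopy.WriteToServer(dataset.Tables["userTable"]);      // assumed table key
        }
    }
}

If OutdateDuplicateUsers then opens its own connection inside the same scope, both pieces of work commit or roll back together (possibly escalating to a distributed transaction when two physical connections are involved).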
I need to insert some objects (about 4 million) into the database using C# and EF (using .NET 3.5). My method that adds the objects is called in a for loop:
private DBModelContainer AddToContext(DBModelContainer db, tblMyTable item, int count)
{
    db.AddTottblMyTable(item);

    if ((count % 10000 == 0) || (count == this.toGenerate))
    {
        try
        {
            db.SaveChanges();
        }
        catch (Exception e)
        {
            Console.WriteLine(e.StackTrace);
        }
    }
    return db;
}
How do I detach the added objects (of type tblMyTable) from the context? I don't need them later, and when more than 300,000 objects have been added, the execution time between saves (db.SaveChanges()) increases considerably.
Regards
Entity Framework may not be the best tool for this type of operation. You may be better off with plain ADO.NET or some stored procedures... But if you have to use it, here are a number of suggestions:
Keep the active context graph small by using a new context for each unit of work
Turn off AutoDetectChangesEnabled - context.Configuration.AutoDetectChangesEnabled = false;
Batch: in your loop, call SaveChanges periodically
EDIT
using (var db = new DBModelContainer())
{
    db.tblMyTable.MergeOption = MergeOption.NoTracking;

    // Narrow the scope of your db context
    db.AddTottblMyTable(item);
    db.SaveChanges();
}
Keeping a long-running db context around is not advisable, so consider refactoring your Add method so that it doesn't keep trying to reuse the same context.
See Rick Strahl's post on bulk inserts for more details
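As a rough illustration of the batching suggestion (it reuses DBModelContainer, tblMyTable and AddTottblMyTable from the question; the batch size and the itemsToGenerate list are placeholders), the loop can save periodically and swap in a fresh context so the change tracker never grows large:

const int batchSize = 10000; // arbitrary; tune for your workload

DBModelContainer db = new DBModelContainer();
try
{
    for (int count = 1; count <= itemsToGenerate.Count; count++)
    {
        db.AddTottblMyTable(itemsToGenerate[count - 1]);

        if (count % batchSize == 0)
        {
            db.SaveChanges();

            // Throw away the old context (and everything it tracks)
            // and start the next batch with a fresh, empty one.
            db.Dispose();
            db = new DBModelContainer();
        }
    }

    db.SaveChanges(); // flush the final partial batch
}
finally
{
    db.Dispose();
}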
AFAIK, EF does not directly support bulk insert, so doing such a thing manually will be tedious.
Try considering EntityFramework.BulkInsert:
using (var ctx = GetContext())
{
    using (var transactionScope = new TransactionScope())
    {
        // some stuff in dbcontext
        ctx.BulkInsert(entities);
        ctx.SaveChanges();
        transactionScope.Complete();
    }
}
You may also try the Unit of Work pattern and not save the context (SaveChanges) on every record insert, but save it once at the end.
I got this error when I was trying to call the same method from multiple threads: The connection is already in a transaction and cannot participate in another transaction. EntityClient does not support parallel transactions.
And I found that my issue somehow is similar to this: SqlException from Entity Framework - New transaction is not allowed because there are other threads running in the session
My scenario:
I have a class that is instantiated by multiple threads, each thread getting a new instance:
public MarketLogic()
{
    var dbContext = new FinancialContext();

    AccountBalanceRepository = new AccountBalanceRepository(dbContext);

    CompositeTradeRepository = new CompositeTradeRepository(
        new OrderRepository(dbContext)
        , new PositionRepository(dbContext)
        , new TradeRepository(dbContext));

    CompositeRepository = new CompositeRepository(
        new LookupValueRepository(dbContext)
        , new SecurityRepository(dbContext)
        , new TransactionRepository(dbContext)
        , new FinancialMarketRepository(dbContext)
        , new FinancialMarketSessionRepository(dbContext)
        );
}
In the MarketLogic class, SavePosition() is used to save information into the database using the Entity Framework DbContext (via the SaveChanges() method).
private void SavePosition()
{
    using (DbContextTransaction transaction = CompositeTradeRepository.OrderRepository.DbContext.Database.BeginTransaction())
    {
        try
        {
            // business logic code, **this takes some time to complete**
            position = EntityExistsSpecification.Not().IsSatisfiedBy(position)
                ? CompositeTradeRepository.PositionRepository.Add(position)
                : CompositeTradeRepository.PositionRepository.Update(position);

            transaction.Commit();
        }
        catch (Exception exception)
        {
            // some code
            transaction.Rollback();
        }
    }
}

public Position Add(Position position)
{
    // some code

    // context is an instance of FinancialContext; this class is generated by Entity Framework 6
    context.SaveChanges();
    return position;
}
In my scenario, the issue happens when 2 or more threads try to call new MarketLogic().SavePosition().
I can see that while the first transaction has not completed yet, the second thread comes in and starts a new transaction.
But I don't understand why the error still happens when the 2 threads use different DbContext objects.
So what is wrong? Or did I miss something?
My fault: I had left the repositories static, so all threads shared the same repositories, which means they shared the same DbContext. That caused the issue: while EF had not yet finished persisting changes, another call to SaveChanges() was made, so EF threw the exception.
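For anyone hitting the same thing, the fix is simply to make sure every thread works against its own context, i.e. the repositories are instance fields created in the constructor rather than static ones. A stripped-down sketch of the difference, reusing the names from the question:

public class MarketLogic
{
    // BAD: a static repository means every MarketLogic instance (and therefore every
    // thread) shares one DbContext, so their transactions collide.
    // private static AccountBalanceRepository AccountBalanceRepository = new AccountBalanceRepository(new FinancialContext());

    // GOOD: instance fields, so each MarketLogic (one per thread) gets its own DbContext.
    private readonly AccountBalanceRepository AccountBalanceRepository;

    public MarketLogic()
    {
        var dbContext = new FinancialContext();
        AccountBalanceRepository = new AccountBalanceRepository(dbContext);
        // ... the other repositories are built over the same per-instance dbContext ...
    }
}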
I'm learning some Fluent NHibernate and I've run across the semi-awesome PersistenceSpecification class.
I've set it up in a unit test to verify my mappings and it works great. However, it leaves the record in the database when it's done. I tried wrapping it in a transaction so I could roll back the changes, but I get an error:
System.ObjectDisposedException: Cannot access a disposed object.
Object name: 'AdoTransaction'..
Without a transaction I have to figure out the IDs of the records, retrieve them, and delete them, which doesn't seem very elegant.
Any thoughts?
EDIT:
Here is the code snippet:
var factory = GetSessionFactory();

using (var session = factory.OpenSession())
using (var transaction = session.BeginTransaction())
{
    new PersistenceSpecification<TimePeriod>(session)
        .CheckProperty(x => x.EndDate, DateTime.Today)
        .VerifyTheMappings();

    transaction.Rollback();
}
Try setting the IsolationLevel on the transaction. This snippet worked for me:
using (var trans = _session.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    new PersistenceSpecification<Event>(_session)
        .CheckProperty(p => p.StartTime, new DateTime(2010, 1, 1))
        .VerifyTheMappings();

    trans.Rollback();
}
The PersistenceSpecification is usually used with an in-memory database like SQLite; that's why it doesn't roll anything back. I believe there's a constructor overload that takes an ISession instance - have you tried getting a transaction from that session and then rolling it back afterwards?
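For reference, here is a rough sketch of the in-memory SQLite setup that PersistenceSpecification is typically paired with (the TimePeriodMap mapping class is assumed, and the exact SchemaExport overload may differ between NHibernate versions):

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

public static class InMemoryTestSession
{
    private static Configuration _config;

    // Opens a session over an in-memory SQLite database. The schema is created on the
    // session's own connection, because the in-memory database only exists as long as
    // that connection stays open, so nothing persists after the test.
    public static ISession Open()
    {
        var factory = Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory())
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<TimePeriodMap>()) // assumed mapping class
            .ExposeConfiguration(cfg => _config = cfg)
            .BuildSessionFactory();

        var session = factory.OpenSession();
        new SchemaExport(_config).Execute(false, true, false, session.Connection, null);
        return session;
    }
}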
I think the issue here is that VerifyTheMappings() calls TransactionalSave(), which does a tx.Commit() to the database. As James indicated, this technique seems to work great for throwaway in-memory testing, but it would not work in the case of testing mappings against a legacy database.
Setting IsolationLevel.ReadUncommitted will work, but only incidentally, since all it is doing is telling the session that it can read without needing a new transaction (a dirty read, in DBMS parlance) - so Session.Transaction.Commit () doesn't have to commit a database transaction before the verification reads. This also means that it is not necessarily testing what you think it is testing! (I also think this has probably-questionable support amongst non-MS SQL databases). The answer from leebrandt works because of the explicit rollback, not the isolation level (nb. At the time of the answer this helped more than it does now, see note below).
The good news is that the correct way to do this is to just rollback the transaction manually. Session.Transaction is automatically replaced whenever the transaction is committed, so you'll need to hold a reference to it, and you'll have to open one explicitly anyways, since TransactionalSave () checks if the current transaction is active and creates (and disposes!) its own if not. I typically test all of my mappings in the same fixture, where I also verify the factory creation and a few other infrastructural persistence things, so I like the following pattern for this to keep the plumbing down:
class TestFixture {
    static ISessionFactory factory = CreateMyFactorySomehowHere();

    ISession session;
    ITransaction tx;

    public void Setup ()
    {
        session = factory.OpenSession ();
        tx = session.BeginTransaction ();
    }

    public void Cleanup ()
    {
        tx.Rollback ();
        tx.Dispose ();
        session.Close ();
    }

    public void TestAMappingForSomething ()
    {
        var spec = new PersistenceSpecification<Something> (session);
        spec.VerifyTheMappings ();
    }
}
Obviously, insert your own test-framework-specific terminology and attributes/annotations wherever, but you get the idea.
I've just now noticed how old this question is: this behavior was fixed in this commit in July 09, to handle existing transactions nicely so that the above works! Clearly this is what you were doing originally anyways.
I think it's very important to do this testing against your real DB, to see that its table definitions are OK, so I've developed a very simple class that performs a CRUD test on a mapped entity and rolls back at the end:
internal class GenericMappingTesterWithRealDB<T> where T : IIdentifiable
{
    public T EntityToTest { get; set; }

    public Func<T, object> PerformEntityManipulationBeforeUpdate { get; set; }

    public GenericMappingTesterWithRealDB()
    {
        Assume.That(SessionFactoryProvider.NewSession, Is.Not.Null);
    }

    public void RunTest()
    {
        using (ISession session = SessionFactoryProvider.NewSession)
        using (ITransaction transaction = session.BeginTransaction())
        {
            try
            {
                session.Save(EntityToTest);

                var item = session.Get<T>(EntityToTest.ID);
                Assert.IsNotNull(item);

                if (PerformEntityManipulationBeforeUpdate != null)
                {
                    PerformEntityManipulationBeforeUpdate.Invoke(EntityToTest);
                }

                session.Update(EntityToTest);
                session.Delete(EntityToTest);
                session.Save(EntityToTest);
            }
            catch (Exception e)
            {
                Assert.Fail(e.Message, e.StackTrace);
            }
            finally
            {
                transaction.Rollback();
            }
        }
    }
}
IIdentifiable is the most basic interface of my entities in my project.
The class uses the NUnit framework, but you can do it with any testing framework you want.
SessionFactoryProvider needs to supply the ISession object.
Here is a sample of its use:
/// <summary>
/// Testing the mapping of our entities.
/// There must be a server connection for this kind of test.
/// </summary>
[TestFixture]
internal class someMappingTest
{
    [Test(Description = "Check the Encoding Profile FluentNHibernate Mapping")]
    [Timeout(20000)]
    public void checkthatMappingWorks()
    {
        // create the new entity
        TestedType testOn = new TestedType();

        // set the initialization values
        testOn.Name = "TestProfileExecution";

        // create the test object
        new GenericMappingTesterWithRealDB<TestedType>
        {
            // assign an entity
            EntityToTest = testOn,

            // assign new values for the update check
            PerformEntityManipulationBeforeUpdate =
                delegate(TestedType testedTypeBeingTested)
                {
                    return testedTypeBeingTested.Name = "Updateing Test";
                }
        }.
        // call RunTest to perform the mapping test
        RunTest();
    }
}