I am new to EF 4.1. I am trying to add a list of objects to a SQL table using EF with the database-first approach.
1) Should I be calling context.SaveChanges() inside the loop after AddObject(Trans), or is it okay to call it outside the loop?
2) Also, would all the inserts be rolled back if there is an error?
I have read that Entity Framework already supports native database transactions: if any dirty ObjectStateEntry object cannot be persisted, ObjectContext.SaveChanges will roll back the transaction and throw an exception.
3) What else can be done to increase performance? I will be inserting around 1,200 records in one go.
try
{
    using (TransactionModel context = new TransactionModel())
    {
        foreach (var item in lstUnAllocatedTransaction)
        {
            Transaction Trans = new Transaction();
            Trans.Amount = item.Amount;
            Trans.Date = DateTime.Now;
            Trans.DateAllocated = null;
            context.Transactions.AddObject(Trans);
        }
        // One SaveChanges call persists all the inserts in a single transaction.
        context.SaveChanges();
    }
}
catch (Exception ex)
{
    // Log the exception here; an empty catch block silently hides failures.
}
1) Should I be calling context.SaveChanges() inside the loop after AddObject(Trans), or is it okay to call it outside the loop?
It's okay (and preferable) to call it outside the loop, provided it's okay for the whole set of adds to be one transaction. If you need one transaction per item, then obviously you need to call it inside the loop.
2) Also, would all the inserts be rolled back if there is an error? I have read that Entity Framework already supports native database transactions: if any dirty ObjectStateEntry object cannot be persisted, ObjectContext.SaveChanges will roll back the transaction and throw an exception.
Only the transaction in progress is rolled back. That's why 1) makes a difference: if you want all-or-nothing inserts, you need to add everything first and then call SaveChanges once.
3) What else can be done to increase performance? I will be inserting around 1,200 records in one go.
You could call AddRange instead of Add; otherwise, I have never had problems with a small number of records like that. If it is still too slow, look into bulk inserts. There is nothing built into EF to do that, so you will need an extension.
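If you do go the bulk-insert route, here is a minimal sketch using ADO.NET's SqlBulkCopy, which bypasses EF entirely; the table name, column names, and connectionString variable are assumptions based on the model in the question:
// Sketch: bulk insert with SqlBulkCopy instead of EF.
// Assumed: a "Transactions" table with Amount, Date, and DateAllocated columns,
// and a connectionString variable pointing at the target database.
var table = new System.Data.DataTable();
table.Columns.Add("Amount", typeof(decimal));
table.Columns.Add("Date", typeof(DateTime));
table.Columns.Add("DateAllocated", typeof(DateTime));

foreach (var item in lstUnAllocatedTransaction)
{
    // DBNull.Value stands in for the NULL DateAllocated.
    table.Rows.Add(item.Amount, DateTime.Now, DBNull.Value);
}

using (var bulk = new System.Data.SqlClient.SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "Transactions";
    bulk.WriteToServer(table);
}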
Related
I have a problem handling the delete operation in C# / EF.
This is my code. I can delete records without a problem. I only have problems with records that are referenced by other records.
In that case an exception is thrown and the user sees a message box.
But the record is still marked as deleted, so when I call SaveChanges again, I get an exception.
Is it possible to change the state of the record back to not deleted?
public void Remove()
{
    try
    {
        Klant deleteKlant = dbContext.Klanten.First(i => i.KlantId == Klant.KlantId);
        dbContext.Klanten.Remove(deleteKlant);
        dbContext.SaveChanges();
    }
    catch (Exception)
    {
        // "Wissen mislukt" is Dutch for "delete failed".
        throw new ArgumentException("Wissen mislukt");
    }
}
First, you should keep your dbContext objects short-lived if at all possible. You will run into huge problems if you keep them around, since a context holds a strong reference to every object created through it.
Second, you should check whether it is valid to delete a record before you try to delete it; that way you never get the exception in the first place.
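To also answer the original question (changing the record's state back to "not deleted"), a minimal sketch, assuming the DbContext API from EF 4.1 or later:
public void Remove()
{
    // Look the entity up first so it is still in scope in the catch block.
    Klant deleteKlant = dbContext.Klanten.First(i => i.KlantId == Klant.KlantId);
    try
    {
        dbContext.Klanten.Remove(deleteKlant);
        dbContext.SaveChanges();
    }
    catch (Exception)
    {
        // Reset the state so the failed delete is not retried
        // by the next SaveChanges call.
        dbContext.Entry(deleteKlant).State = EntityState.Unchanged;
        throw new ArgumentException("Wissen mislukt");
    }
}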
Sometimes I am importing 5000+ rows (of different entity types) into my DB. I use Entity Framework, and I want the context to save the rows that fit my model; any invalid ones should be discarded.
I am inside a transaction, so:
I need to decide at the end of the iteration whether to Complete() the transaction or Dispose() it, depending on which kind of entity throws the exception.
I think there are 2 possibilities:
Check if an entity object is valid before context.Add(object)
Add the 5000 items and somehow make context.SaveChanges() save the valid ones and discard the invalid ones (right now it throws an exception and discards all added items).
I am asking for a snippet of code, or some indications I can follow, to solve this situation. I only want to import the valid items and generate a log of the invalid ones.
Thanks. I will mark it as the answer if you solve it.
Option 1 is the more sensible approach. Adding 5000 objects and then saving changes is very non-performant. A better approach is:
while (null != (entity = GetNextEntity()))
{
    if (entity.IsValid())
    {
        context.Add(entity);
        context.SaveChanges();
    }
}
Update -- Ignore SQL errors
while (null != (entity = GetNextEntity()))
{
    try
    {
        context.Add(entity);
        context.SaveChanges();
    }
    catch (Exception) { /* eat it */ }
}
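One caveat with the snippet above: after SaveChanges fails, the bad entity is still attached to the context in the Added state, so every later SaveChanges call will retry (and re-fail) it. A minimal sketch of the fix, assuming a DbContext (with an ObjectContext you would call context.Detach(entity) instead); LogInvalidEntity is a hypothetical helper for the log of invalid items the question asks for:
catch (Exception ex)
{
    // Detach the failed entity so the next SaveChanges call
    // does not retry the same bad row.
    context.Entry(entity).State = EntityState.Detached;
    LogInvalidEntity(entity, ex); // hypothetical logging helper
}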
I have a loop that looks a bit like this:
CountsToReport = rep.Counts_Get().Where(x => x.Status == "Completed");
foreach (var count in CountsToReport.ToList())
{
    //do some stuff
    //send an email
    count.Status = "Reported";
    rep.SaveChanges();
}
Where "rep" is a repository wrapper around an EF context.
When this runs, the unfortunate email recipient gets deluged with spam because the SaveChanges call doesn't actually commit the changes - the loop keeps getting the same counts, emailing them, and marking them as "Reported", but never actually saves the change.
If you stop the loop and re-start the code, the change saves successfully. You can confirm this scenario by stepping through the code: the EF object in C# changes its Status, but the underlying data in SQL doesn't change.
I'm presuming this is because SaveChanges doesn't actually commit the transaction - it just marks the data as having changed, ready for the end of the transaction. But we're not using transactions anywhere else in the DB, and it'd be a bit of a pain to change the repository for this one use case.
Is there any other way I can force EF to commit this change and escape my endless loop of doom? Or am I mistaken about the cause?
EDIT: Putting this in the repository and calling it instead of SaveChanges fixes it:
public void SaveWithTransaction()
{
    using (var transaction = new System.Transactions.TransactionScope())
    {
        db.SaveChanges();
        transaction.Complete();
    }
}
But it seems ugly. Still interested to know if there's another way round.
EDIT: This is deceptive. It looks like it's just the old added/modified state problem again. Marking the object as modified seems to help.
It is quite possible that the entities in your CountsToReport are detached from the context.
In that case, not only does the ChangeTracker not see any change made to a Count, it doesn't know anything about the entity at all.
To fix the issue:
iterate through the collection of CountsToReport
attach each entity back to the context, something like db.Counts.Attach(count)
modify count.Status and call db.SaveChanges();
like this:
foreach (var count in CountsToReport.ToList())
{
    //do some stuff
    //send an email
    rep.Counts.Attach(count);
    count.Status = "Reported";
    rep.SaveChanges();
}
Have a look at the following link for more information about entity states and how to deal with them:
http://msdn.microsoft.com/en-us/data/jj592676.aspx
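Since the question's edit notes that marking the object as modified helped: setting the entry's state attaches the entity and flags every property as dirty in one step. A sketch, assuming the repository exposes the underlying DbContext's Entry method (the repository API is not shown in the question):
foreach (var count in CountsToReport.ToList())
{
    //do some stuff
    //send an email
    // Attaches the entity if it is detached and marks it Modified,
    // so SaveChanges issues an UPDATE for it.
    rep.Entry(count).State = EntityState.Modified;
    count.Status = "Reported";
    rep.SaveChanges();
}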
I'm doing a multi-layer MVVM WPF application, which is connected to an Oracle database. I need some explanation of TransactionScope. Consider the following example:
using (TransactionScope ts = new TransactionScope())
{
    ...
    bank.setID(BankName, Branch);
    check.addCheck(check);
    ...
    ts.Complete();
}
This code is for explanation only: bank.setID() updates a record, while addCheck() actually inserts one. I couldn't figure out how to test this. I wanted to execute the update, shut down the database before inserting with the second method, and then check whether the update is rolled back. Is this right? Am I on the right track? Is this the purpose of TransactionScope?
Thanks in advance
Edit: I wasn't sure at first if you understood DB transactions, so here's a very short description:
TransactionScope is designed to wrap a database transaction with a mechanism that is exception safe.
You use it to wrap a set of operations that should be atomic with a transaction, so if you fail on one update, all of the DB actions within that transaction get rolled back.
You use it in a using block so that if your code throws an exception, the transaction gets rolled back, instead of committed to the database.
I wanted to execute the update and shut down the database before inserting with the second method, and then check to see if the update is rolled back ... Am I on the right track?
Yep. Your code should already handle it:
If the DB is shut down before addCheck does an insert, then the insert should throw an exception
Since an exception gets thrown, Complete should never get called
When the using block's hidden finally block is reached and Complete hasn't been called, Dispose automatically rolls back the transaction
I couldn't figure out how to test this
If you would like to write a unit test for your code, then I wouldn't suggest the "take the DB offline in the middle of execution" approach.
Instead, I'd suggest making your DB logic flexible enough to point at different DBs. Then either remove the table or some of the columns that addCheck inserts into. Try to set up your DB so that setID succeeds but addCheck fails.
TransactionScope is well documented. MSDN it.
To test how it works, there is no need to take the database offline. Use this snippet:
using (TransactionScope ts = new TransactionScope())
{
    try
    {
        ...
        bank.setID(BankName, Branch);
        throw new System.InvalidOperationException("sht happens");
        check.addCheck(check); // never reached: the throw above skips it
        ...
        ts.Complete();
    }
    catch
    {
        // catch the exception;
        // ts.Complete() is not called, so the update/insert rolls back
    }
}
Which has better performance:
using (GADEEntities context = new GADEEntities(_connectionString))
{
    using (TransactionScope transaction = new TransactionScope())
    {
        AddToContext1(context);
        AddToContext2(context);
        AddToContext3(context);
        ...
        context.SaveChanges();
        transaction.Complete();
    }
}
or
using (GADEEntities context = new GADEEntities(_connectionString))
{
    using (TransactionScope transaction = new TransactionScope())
    {
        AddToContext1(context);
        context.SaveChanges();
        AddToContext2(context);
        context.SaveChanges();
        AddToContext3(context);
        context.SaveChanges();
        ...
        transaction.Complete();
    }
}
At any time, this could translate into 5000+ inserts into a DB on a client's machine. Is either way any different?
It's very likely that your first version will always be faster, depending on what exactly AddToContext does. If your AddToContext method adds only a single object or a few new objects to the context, the first version will definitely be much faster. Calling SaveChanges after each insert (and probably also after each update and delete) slows performance down dramatically.
Here are a few measurements in a similar question:
Fastest Way of Inserting in Entity Framework
The way you have it set up, I do not think there is any significant difference. The data will be transmitted either way, and that's the real bottleneck.
There is a very big difference, because the second version is terribly wrong.
What are you doing with this code:
AddToContext1(context);
context.SaveChanges(false);
You add a record to the context in the Added state and let the context insert it into the database, but at the same time you are saying: "Leave the data in the Added state."
What happens if you then call this:
AddToContext2(context);
context.SaveChanges(false);
You add another record to the context in the Added state and let the context insert all records in the Added state into the database = the first record will be inserted again.
It doesn't matter whether AddToContext actually performs an update, because it will simply run the DB command again. So if you have 5,000 records, you will insert or update the first one 5,000 times!
If you want to use the second version, you still have to accept changes during each save.
Btw. the SaveChanges overload accepting a bool is obsolete in EFv4.
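For reference, a minimal sketch of the EFv4 replacement for that obsolete overload, using the SaveOptions enum (GADEEntities and AddToContextN are the question's own placeholders):
using (GADEEntities context = new GADEEntities(_connectionString))
using (TransactionScope transaction = new TransactionScope())
{
    AddToContext1(context);
    // Accept changes at each save so already-saved entities leave the
    // Added state and are not inserted again by the next SaveChanges.
    context.SaveChanges(SaveOptions.AcceptAllChangesAfterSave);

    AddToContext2(context);
    context.SaveChanges(SaveOptions.AcceptAllChangesAfterSave);

    transaction.Complete();
}
The parameterless SaveChanges() already behaves this way; keep in mind that if the TransactionScope rolls back after an accepted save, the context's in-memory state will no longer match the database.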