DbContext.SaveChanges() memory exception with large record sets in EF - C#

I am getting an out-of-memory exception when executing the dbContext.SaveChanges() method.
I need to insert/update 10,000+ records into several tables, i.e.:

// insert into table 1          // 10,000 records
// insert or update table 2     // 25,000 records
// insert into table 3          //  1,000 records
// ...

dbContext.SaveChanges(); // exception thrown on this line
I got the error "Exception of type 'System.OutOfMemoryException' was thrown". I tried disabling AutoDetectChangesEnabled, but it did not help:
myDbContext.Configuration.AutoDetectChangesEnabled = false;
myDbContext.SaveChanges();

"Exception of type 'System.OutOfMemoryException' was thrown" - this usually happens to me when I don't dispose my database context. The issue probably isn't the number of records you are trying to update; it's that all of those tracked records are kept in memory.
Solution 1 (recommended)
Wrap your database operations in 'using' statements, which automatically dispose the database context and free up memory:

public void CreateRange(List<MyModel> modelList)
{
    using (DbContext db = new DbContext())
    {
        db.MyTable.AddRange(modelList);
        db.SaveChanges();
    }
}
Solution 2
Call db.Dispose() before the end of each method:

public void CreateRange(List<MyModel> modelList)
{
    db.MyTable.AddRange(modelList);
    db.SaveChanges();
    db.Dispose();
}

Another option is a bulk insert:

context.BulkInsert(entitiesList);

For BulkInsert, follow the link: https://github.com/borisdj/EFCore.BulkExtensions
Another option is SqlBulkCopy; see "Fastest Way of Inserting in Entity Framework".
If you want to call SaveChanges once, outside the loop, look at the code below:
try
{
    context.Configuration.AutoDetectChangesEnabled = false;
    foreach (var book in bookList)
    {
        context.Books.Add(book);
    }
    context.SaveChanges();
}
finally
{
    context.Configuration.AutoDetectChangesEnabled = true;
}
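Beyond these fixes, another approach (a sketch, not from the answers above) is to commit in fixed-size batches and recreate the context between batches, so the change tracker never holds all 10,000+ entities at once. ChunkBy is a plain helper; MyDbContext and MyTable are hypothetical names standing in for your own context and set.

```csharp
using System;
using System.Collections.Generic;

// Split a list into consecutive batches of at most 'size' items.
static List<List<T>> ChunkBy<T>(List<T> source, int size)
{
    var chunks = new List<List<T>>();
    for (int i = 0; i < source.Count; i += size)
    {
        chunks.Add(source.GetRange(i, Math.Min(size, source.Count - i)));
    }
    return chunks;
}

// Usage against EF (MyDbContext and MyTable are hypothetical names):
// foreach (var batch in ChunkBy(modelList, 1000))
// {
//     using (var db = new MyDbContext())
//     {
//         db.Configuration.AutoDetectChangesEnabled = false;
//         db.MyTable.AddRange(batch);
//         db.SaveChanges();
//     } // disposing the context releases the tracked entities
// }
```

Disposing and recreating the context per batch keeps memory flat at the cost of one connection cycle per batch; 500-2000 rows per batch is a common starting point.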

Related

Ef Core 2.0 - Adding entities after deleting server side data causes exceptions and duplicate data

I found an interesting problem with EF Core change tracking and how it adds unwanted duplicate data to my database.
Order of operations:
1. Add a new "Order" entity.
2. Delete the order using a manual query on the database.
3. Add another new "Order" entity. An exception is thrown: "The instance of entity type 'Order' cannot be tracked because another instance with the same key value for {'OrderId'} is already being tracked." (The new entity still gets added to the ChangeTracker.)
4. Add another new "Order" entity. It works this time, but it inserts two new Orders, and now there is duplicate data.
I'm not using this in a web application; it's a service that keeps a single long-lived context because it is constantly processing a queue of data. But I must accept that data can be deleted outside of this context/service.
Example:
public async Task TestException()
{
    var dirtyOrders = _allOrders.Where(x => x.IsDirty).ToList();
    foreach (var order in dirtyOrders)
    {
        await CreateAsync(order);
        order.IsDirty = false;
    }
    Sql("TRUNCATE TABLE dbo.Orders;");
    dirtyOrders = _allOrders.Where(x => x.IsDirty).ToList();
    foreach (var order in dirtyOrders)
    {
        await CreateAsync(order); // exception thrown here
        order.IsDirty = false;
    }
}

public async Task<Order> CreateAsync(Order order)
{
    _dbContext.OrderBook.Add(order);
    await _dbContext.SaveChangesAsync();
    return order;
}
You can create a separate method that deletes the row with raw SQL, sidestepping EF's entity lifecycle and tracking. Hope it'll work:
private bool DeleteModel(Order order)
{
    try
    {
        // ExecuteSqlCommand runs immediately and passes the interpolated
        // value as a parameter; no SaveChanges call is needed here.
        _context.Database.ExecuteSqlCommand($"DELETE FROM Orders WHERE Id = {order.Id}");
        return true;
    }
    catch (Exception)
    {
        return false;
    }
}

public bool SaveOrder(Order order)
{
    try
    {
        // delete the existing row first
        if (DeleteModel(order))
        {
            _context.Orders.Add(order);
            _context.SaveChanges();
            return true;
        }
        else
        {
            return false;
        }
    }
    catch (Exception)
    {
        return false;
    }
}
I realized what the problem was. Because I was using TRUNCATE to wipe the data, it was resetting the auto-increment sequence to 1. Order #1 was still in the ChangeTracker after the truncate, and when EF went to insert another order, the database returned a primary key of 1, which was already present in the ChangeTracker. Subsequent calls may refresh the ChangeTracker, but the entity still sits in there in the "Added" state despite the exception, and a future call to Add that same record followed by SaveChanges will insert two duplicate records.
While I can avoid this condition by not resetting sequences, there isn't an easy way to prevent EF from running into it. I would think the behavior should be to overwrite the entry in the ChangeTracker when it is in the "Added" state, not to throw an exception. The exception does not contain the conflicting key, because the conflict came from the server, so there is no good way to deal with it programmatically.
At the very least, I can remove the entity from change tracking if SaveChangesAsync() fails, to prevent duplicate rows from showing up in the future:
public async Task<Order> CreateAsync(Order order)
{
    _dbContext.OrderBook.Add(order);
    try
    {
        await _dbContext.SaveChangesAsync();
    }
    catch (DbUpdateException)
    {
        _dbContext.OrderBook.Remove(order);
        throw; // rethrow without resetting the stack trace
    }
    return order;
}
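A variation on the same idea (a sketch, not from the original post) is to detach the failed entity explicitly instead of calling Remove: setting its state to Detached drops it from the change tracker without scheduling any further operation on it.

```csharp
// Assumes EF Core and the same _dbContext/OrderBook members as above.
public async Task<Order> CreateAsync(Order order)
{
    _dbContext.OrderBook.Add(order);
    try
    {
        await _dbContext.SaveChangesAsync();
    }
    catch (DbUpdateException)
    {
        // Drop the entity from the change tracker entirely so a later
        // Add + SaveChanges cannot insert it twice.
        _dbContext.Entry(order).State = EntityState.Detached;
        throw;
    }
    return order;
}
```

For an entity still in the Added state, Remove and detaching are equivalent; detaching makes the intent more explicit.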

intermittent System.Data.Entity.Infrastructure.DbUpdateConcurrencyException

The following code is causing an intermittent exception:
public int UnblockJob(int jobId)
{
    using (var connect = MakeConnect())
    {
        var tag = connect.JobTag.SingleOrDefault(jt => jt.JobId == jobId && jt.Name == Metrics.TagNameItemBlockCaller);
        if (tag == null)
        {
            return 0;
        }
        connect.JobTag.Remove(tag);
        return connect.SaveChanges();
    }
}
How can I correct or troubleshoot it?
From the documentation for DbUpdateConcurrencyException:
Exception thrown by DbContext when it was expected that SaveChanges for an entity would result in a database update but in fact no rows in the database were affected.
This means that the record you are attempting to delete has since been removed from the database. It would appear that you have another process that is deleting records or this function is able to be called concurrently.
There are several solutions; here are a couple:
Fix the source problem: stop other processes from affecting the data.
Catch the error: wrap this method in a try/catch block; after all, you may only care that the record has been deleted:
try
{
    // existing code here
}
catch (DbUpdateConcurrencyException)
{
    // safely ignore this exception
}
catch (Exception)
{
    // something else has occurred
    throw;
}
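Putting that together with the original method, a complete version might look like this (a sketch reusing the question's MakeConnect, JobTag and Metrics members; returning 0 on the concurrency exception treats an already-deleted row as nothing to do):

```csharp
public int UnblockJob(int jobId)
{
    try
    {
        using (var connect = MakeConnect())
        {
            var tag = connect.JobTag.SingleOrDefault(
                jt => jt.JobId == jobId && jt.Name == Metrics.TagNameItemBlockCaller);
            if (tag == null)
            {
                return 0; // nothing to unblock
            }
            connect.JobTag.Remove(tag);
            return connect.SaveChanges();
        }
    }
    catch (DbUpdateConcurrencyException)
    {
        // Another process deleted the row first; the job is already unblocked.
        return 0;
    }
}
```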

NHibernate: Updating database multiple times in a transaction

Below is a simplified code of what I currently have.
I noticed that "status" remains 0 instead of 1 or 2 when Point 1/2 throws an exception.
I initially thought Update() would do the trick, but it seems I have to call Commit() for the changes to reach the DB.
What could be a good way for me to do this?
(showing status of 1 and 2 in DB upon returning/exception).
Any help is much appreciated.
using (var tx = session.BeginTransaction())
{
    Monitor monitor = monitorDao.Get(id);
    if (someStatus)
    {
        monitor.status = 1; // initially monitor.status == 0 in DB
        // Point 1: some code that might return or throw an exception
    }
    else
    {
        monitor.status = 2;
        // Point 2: some code that might return or throw an exception
    }
    monitor.status = 3;
    tx.Commit();
}
It seems you have to refactor your code to get the behavior you want:
Monitor monitor = monitorDao.Get(id);
if (someStatus)
{
    using (var tx = session.BeginTransaction())
    {
        monitor.status = 1; // initially monitor.status == 0 in DB
        tx.Commit();
    }
    // Point 1: some code that might return or throw an exception
}
else
{
    using (var tx = session.BeginTransaction())
    {
        monitor.status = 2; // initially monitor.status == 0 in DB
        tx.Commit();
    }
    // Point 2: some code that might return or throw an exception
}
using (var tx = session.BeginTransaction())
{
    monitor.status = 3;
    tx.Commit();
}
When you hit an exception, the first status update is never sent to your DB. By giving it a separate transaction, you first update the DB by committing that transaction; then you run your logic, which can fail. If anything at Point 1/2 writes to the DB and has to be in the same transaction as status = 3, you need to refactor your code so that logic lives in the second transaction.
This behavior is controlled by the FlushMode on ISession. This link gives more details on the FlushMode options: http://weblogs.asp.net/ricardoperes/nhibernate-pitfalls-flush-mode.
Never: changes are never flushed automatically; you must call ISession.Flush() explicitly.
Commit: changes are sent as soon as the current ITransaction is committed; no need to call Flush().
Auto: the session is flushed when a query is issued for some entity type and there are dirty local entity instances; no need to call Flush(). This is the default.
Always: the session is flushed before any query is executed; again, no need to call Flush().
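For completeness, a minimal sketch of setting the flush mode (assuming the same session, monitorDao and Monitor objects as the question's code):

```csharp
// With FlushMode.Commit, pending in-memory changes reach the database
// only when the transaction commits, which matches the behavior
// described above.
session.FlushMode = FlushMode.Commit;
using (var tx = session.BeginTransaction())
{
    Monitor monitor = monitorDao.Get(id);
    monitor.status = 1;
    tx.Commit(); // changes are flushed here
}
```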

Entity Framework; How to handle an exception in foreach loop and keep iterating

When I iterate through a foreach with the following code, it successfully catches the first exception that occurs and adds the id to my error list. On every subsequent iteration of the loop, it catches that same previous exception again.
How can I appropriately catch the exception and undo or clear the failed DeleteObject request so that subsequent deletes can be performed.
public ActionResult Delete(int[] ListData)
{
    List<int> removed = new List<int>();
    List<int> error = new List<int>();
    Item deleteMe;
    foreach (var id in ListData)
    {
        deleteMe = this.getValidObject(id);
        if (deleteMe == null)
        {
            error.Add(id);
            continue;
        }
        try
        {
            this.DB.Items.DeleteObject(deleteMe);
            this.DB.SaveChanges();
            removed.Add(id);
        }
        catch (DataException)
        {
            // revert change to this.DB.Items?
            error.Add(id);
        }
    }
    if (error.Count > 0)
    {
        return Json(new { Success = false, Removed = removed, Error = error });
    }
    return Json(new { Success = true, Removed = removed });
}
I have searched SO and Google, and most answers process all the deletes first and then call SaveChanges once, so it is a single transaction. But I need each delete processed individually so that a single failure does not stop the rest.
I am using Entity Framework 4.
The exception I get in this specific example is caused by foreign keys associated with the item being removed. While I will handle that scenario in production, the loop should be able to continue no matter what the exception is.
I assume the same context, this.DB, is being used in this.getValidObject(id) to retrieve the entity. If that is the case, call this.DB.Detach(deleteMe) in the exception block. That should prevent SaveChanges() from trying to delete the problematic entity on the next iteration.
The code you present looks good. What is the error you see? As you've noted, you may need to un-tag something in this.DB.Items, though I don't think so. You could also try creating a new DataContext for each iteration so that the old, failed DataContext's state is irrelevant.
If I understood correctly, you cannot remove the entity (Item) because it has a foreign-key association (child) pointing at it.
You will first have to deal with all child (related) entities of the parent (Item) you want to delete: remove the relationship, re-point them at an alternative parent (Item), or delete the child entities, and then finally remove the parent (Item) entity.
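The detach suggestion dropped into the question's loop might look like this (a sketch; Detach is the ObjectContext method available in EF4, and deleteMe, this.DB, removed and error come from the question's own code):

```csharp
try
{
    this.DB.Items.DeleteObject(deleteMe);
    this.DB.SaveChanges();
    removed.Add(id);
}
catch (DataException)
{
    // Detach the failed entity so the pending delete does not
    // resurface on the next SaveChanges call.
    this.DB.Detach(deleteMe);
    error.Add(id);
}
```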

In my typed dataset, will the Update method run as a transaction?

I have a typed dataset for a table called People. When you call the update method of a table adapter and pass in the table, is it run as a transaction?
I'm concerned that at some point the constraints set in the xsd will pass but the database will reject this item for one reason or another. I want to make sure that the entire update is rejected and I'm not sure that it just accepts what it can until that error occurs.
If it runs as a transaction, I can simply do this:
Auth_TestDataSetTableAdapters.PeopleTableAdapter tableAdapter = new Auth_TestDataSetTableAdapters.PeopleTableAdapter();
Auth_TestDataSet.PeopleDataTable table = tableAdapter.GetDataByID(1);
table.AddPeopleRow("Test Item", 5.015);
tableAdapter.Update(table);
But if I have to trap this manually in a transaction, I wind up with this:
Auth_TestDataSetTableAdapters.PeopleTableAdapter tableAdapter = new Auth_TestDataSetTableAdapters.PeopleTableAdapter();
Auth_TestDataSet.PeopleDataTable table = tableAdapter.GetDataByID(1);
tableAdapter.Connection.Open();
tableAdapter.Transaction = tableAdapter.Connection.BeginTransaction();
table.AddPeopleRow("Test Item", 5.015);
try
{
    tableAdapter.Update(table);
    tableAdapter.Transaction.Commit();
}
catch
{
    tableAdapter.Transaction.Rollback();
}
finally
{
    tableAdapter.Connection.Close();
}
Either way works but I am interested in the inner workings. Any other issues with the way I've decided to handle this type of row addition?
-- EDIT --
Determined that it does not run as a transaction and will commit however many records succeed until the error occurs. Thanks to the helpful answer below, the transactional code has been condensed to make controlling the transaction easier on the eyes:
Auth_TestDataSetTableAdapters.PeopleTableAdapter tableAdapter = new Auth_TestDataSetTableAdapters.PeopleTableAdapter();
Auth_TestDataSet.PeopleDataTable table = tableAdapter.GetDataByID(1);
try
{
    using (TransactionScope ts = new TransactionScope())
    {
        table.AddPeopleRow("Test Item", (decimal)5.015);
        table.AddPeopleRow("Test Item", (decimal)50.015);
        tableAdapter.Update(table);
        ts.Complete();
    }
}
catch (SqlException ex)
{ /* ... */ }
Your approach should work.
You can simplify it a little though:

using (TransactionScope ts = new TransactionScope())
{
    // your old code here
    ts.Complete();
}
