NHibernate: Updating database multiple times in a transaction - c#

Below is a simplified version of the code I currently have.
I noticed that status remains 0 instead of 1 or 2 when Point 1/2 returns or throws an exception.
I initially thought Update() would do the trick, but it seems I have to call Commit() for the changes to reach the DB.
What would be a good way for me to do this
(i.e., have a status of 1 or 2 show in the DB upon returning/exception)?
Any help is much appreciated.
using (var tx = session.BeginTransaction())
{
    Monitor monitor = monitorDao.Get(id);
    if (someStatus)
    {
        monitor.status = 1; // initially monitor.status == 0 in DB
        // Point 1: some code that might return or throw an exception
    }
    else
    {
        monitor.status = 2;
        // Point 2: some code that might return or throw an exception
    }
    monitor.status = 3;
    tx.Commit();
}

It seems you have to refactor your code to get the behavior you want:
Monitor monitor = monitorDao.Get(id);
if (someStatus)
{
    using (var tx = session.BeginTransaction())
    {
        monitor.status = 1; // initially monitor.status == 0 in DB
        tx.Commit();
    }
    // Point 1: some code that might return or throw an exception
}
else
{
    using (var tx = session.BeginTransaction())
    {
        monitor.status = 2; // initially monitor.status == 0 in DB
        tx.Commit();
    }
    // Point 2: some code that might return or throw an exception
}
using (var tx = session.BeginTransaction())
{
    monitor.status = 3;
    tx.Commit();
}
When an exception was thrown, the first status update was never sent to your DB. By giving it a separate transaction, you first update your DB by committing that transaction; only then do you run the logic that can fail. If you are doing anything at Point 1/2, such as saving to the DB, that has to be in the same transaction as status = 3, then you need to refactor your code so that logic is in the second transaction.
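For example, if the Point 1/2 work must commit atomically with status = 3, a minimal sketch of that refactoring (assuming the same session and monitor as above; DoPointOneOrTwoWork is a hypothetical helper standing in for the Point 1/2 logic) could look like this:
// First transaction: commit the intermediate status so it is
// visible in the DB even if the later work fails.
using (var tx = session.BeginTransaction())
{
    monitor.status = someStatus ? 1 : 2;
    tx.Commit();
}

// Second transaction: the fallible logic and the final status
// either commit together or roll back together.
using (var tx = session.BeginTransaction())
{
    DoPointOneOrTwoWork(monitor); // hypothetical helper for the Point 1/2 logic
    monitor.status = 3;
    tx.Commit();
}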

This behavior is controlled by FlushMode on ISession. You should google this topic; for example, this link gives some more details on the FlushMode options: http://weblogs.asp.net/ricardoperes/nhibernate-pitfalls-flush-mode.
Never: changes are never flushed automatically; you must call ISession.Flush() explicitly.
Commit: changes are sent as soon as the current ITransaction is committed; no need to call Flush().
Auto: the session is flushed if a query is requested for some entity type and there are dirty local entity instances; no need to call Flush(). This is the default.
Always: the session is flushed before any query is executed; also no need to call Flush().
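A minimal sketch of setting the flush mode explicitly (assuming an NHibernate ISession named session and the Monitor entity from the question):
// Hold changes in the session until the transaction commits,
// matching the Commit flush mode described above.
session.FlushMode = FlushMode.Commit;

using (var tx = session.BeginTransaction())
{
    var monitor = session.Get<Monitor>(id);
    monitor.status = 1;
    tx.Commit(); // the UPDATE is sent to the DB here
}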

Related

dbcontext.SaveChanges() memory exception in large records EF

I am facing an out of memory exception while executing the dbcontext.SaveChanges() method.
I need to insert/update 10000+ records into some tables,
i.e.:
// insert into table 1: 10000 records
// insert or update records in table 2: 25000 records
// insert into table 3: 1000 records
....
.....
.....
db.context.SaveChanges(); // Exception happening on this line.
I got an "Exception of type 'System.OutOfMemoryException' was thrown" error.
I tried AutoDetectChangesEnabled but it did not help:
mydbcontext.Configuration.AutoDetectChangesEnabled = false;
mydbcontext.SaveChanges();
"Exception of type 'System.OutOfMemoryException' was thrown" - This normally happens to me when I don't dispose my database. The issue probably isn't the amount of records you are trying to update, its probably because those records are kept in the System Memory.
Solution 1 (Recommended)
Wrap your database functions in 'using' statements, which will auto-dispose your database context and free up memory:
public void CreateRange(List<MyModel> modelList)
{
    using (DbContext db = new DbContext())
    {
        db.MyTable.AddRange(modelList);
        db.SaveChanges();
    }
}
Solution 2
Call 'db.Dispose();' before the end of each method:
public void CreateRange(List<MyModel> modelList)
{
    db.MyTable.AddRange(modelList);
    db.SaveChanges();
    db.Dispose();
}
Another option is a bulk insert:
context.BulkInsert(entitiesList);
For BulkInsert, follow the link:
https://github.com/borisdj/EFCore.BulkExtensions
Another is SqlBulkCopy; see Fastest Way of Inserting in Entity Framework.
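A minimal SqlBulkCopy sketch (assuming a connectionString, and a DataTable named table whose columns match a hypothetical destination table dbo.MyTable):
using System.Data;
using System.Data.SqlClient;

// SqlBulkCopy streams rows straight to SQL Server, bypassing the
// EF change tracker that is blowing up memory here.
using (var bulkCopy = new SqlBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "dbo.MyTable"; // hypothetical table name
    bulkCopy.BatchSize = 5000;                     // send rows in batches
    bulkCopy.WriteToServer(table);                 // table is a DataTable
}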
If you want to call SaveChanges outside the loop, see the code below:
try
{
    context.Configuration.AutoDetectChangesEnabled = false;
    foreach (var book in bookList)
    {
        context.Books.Add(book);
    }
    context.SaveChanges();
}
finally
{
    context.Configuration.AutoDetectChangesEnabled = true;
}
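If memory is still an issue, a common variation (a sketch of the batching approach from the linked answer; MyDbContext and the batch size are assumptions) is to save and recreate the context every N records so the change tracker never grows unbounded:
const int batchSize = 1000; // assumption: tune for your workload
MyDbContext context = null;
try
{
    context = new MyDbContext();
    context.Configuration.AutoDetectChangesEnabled = false;
    int count = 0;
    foreach (var book in bookList)
    {
        context.Books.Add(book);
        if (++count % batchSize == 0)
        {
            context.SaveChanges();
            context.Dispose();           // release the tracked entities
            context = new MyDbContext(); // fresh, empty change tracker
            context.Configuration.AutoDetectChangesEnabled = false;
        }
    }
    context.SaveChanges(); // flush the final partial batch
}
finally
{
    if (context != null)
        context.Dispose();
}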

DB ConnectionState = Open but context.SaveChanges throws "connection broken" exception

In my service I have a background thread that does best-effort saving of a stream of objects of a certain entity type. The code is roughly the following:
while (AllowRun)
{
    try
    {
        using (DbContext context = GetNewDbContext())
        {
            while (AllowRun && context.GetConnection().State == ConnectionState.Open)
            {
                TEntity entity = null;
                try
                {
                    while (pendingLogs.Count > 0)
                    {
                        lock (pendingLogs)
                        {
                            entity = null;
                            if (pendingLogs.Count > 0)
                            {
                                entity = pendingLogs[0];
                                pendingLogs.RemoveAt(0);
                            }
                        }
                        if (entity != null)
                        {
                            context.Entities.Add(entity);
                        }
                    }
                    context.SaveChanges();
                }
                catch (Exception e)
                {
                    // (1)
                    // Log exception and continue execution
                }
            }
        }
    }
    catch (Exception e)
    {
        // Log context initialization failure and continue execution
    }
}
(This is mostly the actual code; I omitted a few non-relevant parts that attempt to keep popped objects in memory until we are able to save stuff to the DB again, when an exception is caught in block (1).)
So, essentially, there is an endless loop trying to read items from some list and save them to the DB. If we detect that the connection to the DB failed for some reason, it just attempts to reopen it and continue. The issue is that sometimes (I have failed to figure out how to reproduce it so far), the code above starts to produce the following exception when context.SaveChanges() is called (caught in block (1)):
System.Data.EntityException: An error occurred while starting a transaction on the provider connection. See the inner exception for details. --->
System.InvalidOperationException: The requested operation cannot be completed because the connection has been broken.
The error is logged, but when execution returns to the context.GetConnection().State == ConnectionState.Open check, it evaluates to true. So we are in a state where the context reports that its DB connection is open, but we can't run queries against that context. Restarting the service removes the issue (as does messing with the AllowRun variable in the debugger to force recreation of the context). So the question is: since I can't trust the context's connection state, how do I verify that I can run queries against the DB?
Also, is there a clean way to figure out that the connection is not in a "healthy" state? I mean, the EntityException by itself is not an indication that I should reset the connection; only if its InnerException is an InvalidOperationException with some specific Message is it time to reset it. But I guess there would be other situations where ConnectionState indicates that everything is fine, yet I can't query the DB. Can I catch those proactively, instead of waiting until they start to bite me?
What is the log frequency?
If this loop takes longer than the connection timeout, the connection will have been closed by the time SaveChanges executes:
while (pendingLogs.Count > 0)
{
    lock (pendingLogs)
    {
        entity = null;
        if (pendingLogs.Count > 0)
        {
            entity = pendingLogs[0];
            pendingLogs.RemoveAt(0);
        }
    }
    if (entity != null)
    {
        context.Entities.Add(entity);
    }
}
context.SaveChanges();
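A sketch of one way around that (assuming the same pendingLogs list and context as above; the batch size is an assumption) is to flush in small batches inside the loop, so each SaveChanges runs well within the connection timeout:
const int batchSize = 100; // assumption: tune for your workload
int added = 0;
while (pendingLogs.Count > 0)
{
    TEntity entity = null;
    lock (pendingLogs)
    {
        if (pendingLogs.Count > 0)
        {
            entity = pendingLogs[0];
            pendingLogs.RemoveAt(0);
        }
    }
    if (entity != null)
    {
        context.Entities.Add(entity);
        if (++added % batchSize == 0)
        {
            context.SaveChanges(); // flush periodically instead of once at the end
        }
    }
}
context.SaveChanges(); // flush whatever is left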
From my experience working on similar services, garbage collection won't occur until the end of the using block.
If there are a lot of pending logs to write, this could use a lot of memory, but I also suspect it might starve the DB connection pool.
You can analyse memory usage using RedGate ANTS or a similar tool, and check which DB connections are open using the following script from this StackOverflow question: how to see active SQL Server connections?
SELECT
    DB_NAME(dbid) AS DBName,
    COUNT(dbid) AS NumberOfConnections,
    loginame AS LoginName
FROM
    sys.sysprocesses
WHERE
    dbid > 0
GROUP BY
    dbid, loginame;
I think it's good practice to free up the context as often as you can in order to give the GC a chance to clean up, so you could rewrite the loop as:
while (AllowRun)
{
    try
    {
        while (pendingLogs.Count > 0)
        {
            using (DbContext context = GetNewDbContext())
            {
                while (AllowRun && context.GetConnection().State == ConnectionState.Open)
                {
                    TEntity entity = null;
                    try
                    {
                        lock (pendingLogs)
                        {
                            entity = null;
                            if (pendingLogs.Count > 0)
                            {
                                entity = pendingLogs[0];
                                pendingLogs.RemoveAt(0);
                            }
                        }
                        if (entity != null)
                        {
                            context.Entities.Add(entity);
                            context.SaveChanges();
                        }
                    }
                    catch (Exception e)
                    {
                        // (1)
                        // Log exception and continue execution
                    }
                }
            }
        }
    }
    catch (Exception e)
    {
        // Log context initialization failure and continue execution
    }
}
I recommend going through the URL below.
"Timeout expired" is usually thrown when a SQL query takes too long to run.
It sounds like a SQL job is running, a backup perhaps? That might be locking tables or restarting the service.
ADONET async execution - connection broken error

Deadlock when previous query threw an exception

Using Entity Framework, I have a function that basically goes something like this:
using (var ctx = new Dal.MyEntities())
{
    Dal.Temp temp = null;
    try
    {
        //...
        // create a temp entity
        temp = new Dal.Temp();
        // populate its children
        // note that temp is set to cascade deletes down to its children
        temp.Children = (from foo in foos
                         select new Dal.Children()
                         {
                             // set some properties...
                             Field1 = foo.field1,
                             Field2 = foo.field2
                         }).ToList();
        //...
        // add temp row to temp table
        ctx.Temp.Add(temp);
        ctx.SaveChanges();
        // some query that joins on the temp table...
        var results = from d in ctx.SomeOtherTable
                      join t in temp.Children
                          on new { d.Field1, d.Field2 } equals new { t.Field1, t.Field2 }
                      select d;
        if (results.Count() == 0)
        {
            throw new Exception("no results");
        }
        // Normal processing and return result
        return results;
    }
    finally
    {
        if (temp != null && temp.ID != 0)
        {
            ctx.Temp.Remove(temp);
            ctx.SaveChanges();
        }
    }
}
The idea is that as part of the processing of a request I need to build a temporary table with some data that then gets joined to the main query to filter the results. Once the query has been processed, the temp table should be deleted. I put the deletion part in the finally clause so that if there is a problem with the query (an exception is thrown), the temporary table always gets cleaned up.
This seems to work fine, except that intermittently the SaveChanges in the finally block throws a deadlock exception with an error message along the lines of:
Transaction (Process ID 89) was deadlocked on lock resources with another process and
has been chosen as the deadlock victim. Rerun the transaction.
I can't reliably reproduce it, but it seems to happen most often if the previous query threw the "no results" exception. Note that, due to an error that was discovered on the front end, two identical requests were being submitted under certain circumstances; nevertheless, the code should be able to handle that.
Does anybody have any clues as to what might be happening here? Is throwing an exception inside the using block a problem? Should I handle that differently?
Update: so the exception might be a red herring. I removed it altogether (instead returning an empty result) and I still have the problem. I've tried a bunch of variations on:
using (new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
using (var ctx = new Dal.MyEntities())
{
}
But despite what I've read, it doesn't seem to make any difference. I still get intermittent deadlocks on the second SaveChanges that removes the temp table.
How about adding a catch block that removes the temp row when the query fails:
using (var ctx = new Dal.MyEntities())
{
    Dal.TempTable temp = null;
    try
    {
        //...
        temp = new Dal.TempTable();
        //...
        ctx.TempTables.Add(temp);
        // some query that joins on the temp table...
        if (noResultsReturned) // pseudocode
        {
            throw new Exception("no results");
        }
        // Normal processing and return result
    }
    catch
    {
        if (temp != null)
        {
            ctx.TempTables.Remove(temp);
            ctx.SaveChanges();
            temp = null; // prevent the finally block from removing it twice
        }
        throw;
    }
    finally
    {
        if (temp != null && temp.ID != 0)
        {
            ctx.TempTables.Remove(temp);
            ctx.SaveChanges();
        }
    }
}
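Alternatively, if EF6's Database.BeginTransaction() is available (an assumption; the EF version is not stated in the question), the insert, the query, and the cleanup could run in one explicit transaction, so the temp rows are never visible to competing requests and the cleanup delete does not contend with another request's insert. A sketch:
using (var ctx = new Dal.MyEntities())
using (var tx = ctx.Database.BeginTransaction())
{
    try
    {
        var temp = new Dal.Temp();
        // ... populate temp.Children as before ...
        ctx.Temp.Add(temp);
        ctx.SaveChanges();

        // ... run the join query and process the results ...

        ctx.Temp.Remove(temp);
        ctx.SaveChanges();
        tx.Commit(); // insert, query, and cleanup commit atomically
    }
    catch
    {
        tx.Rollback(); // the temp rows are rolled back; no separate delete needed
        throw;
    }
}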

Dispose not working, many dead connections

I've been seeing strange things since updating to EF6. I'm not sure whether this is related or not, but it used to work fine.
I'm doing a set of work, saving it to the DB, then doing another set and saving that.
After a while, I checked SQL Server with sp_who2 and found many dead connections from my computer.
The job is huge, so it gets up to 700 connections,
and I have to kill them all manually in a cycle.
The program looks like:
while (jobDone == false)
{
    var returnData = doOneSetJob();
    myEntity dbconn = new myEntity();
    foreach (var one in returnData)
    {
        dbconn.TargetTable.Add(one);
        try
        {
            dbconn.SaveChanges();
            // even if I put a Dispose() here, there are still lots of dead connections
        }
        catch
        {
            Console.WriteLine("DB Insertion Fail.");
            dbconn.Dispose();
            dbconn = new myEntity();
        }
    }
    dbconn.Dispose();
}
You should consider refactoring your code so that your connection is cleaned up after your job is complete. For example:
using (var context = new DbContext())
{
    while (!jobDone)
    {
        // Execute job and get data
        var returnData = doOneSetJob();
        // Process job results
        foreach (var one in returnData)
        {
            try
            {
                context.TargetTable.Add(one);
                context.SaveChanges();
            }
            catch (Exception ex)
            {
                // Log the error
            }
        }
    }
}
The using statement will guarantee that your context is cleaned up properly, even if an error occurs while you are looping through the results.
In this case you should use a using statement. Taken from MSDN:
The using statement ensures that Dispose is called even if an exception occurs while you are calling methods on the object. You can achieve the same result by putting the object inside a try block and then calling Dispose in a finally block; in fact, this is how the using statement is translated by the compiler.
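To illustrate the equivalence the quote describes, here is a minimal sketch (with a hypothetical MyEntity context) of what the compiler effectively generates for a using block:
// using (var context = new MyEntity()) { ... } compiles to roughly:
MyEntity context = new MyEntity();
try
{
    // ... work with the context ...
}
finally
{
    if (context != null)
    {
        ((IDisposable)context).Dispose(); // always runs, even on exception
    }
}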
So, your code would look better like this:
using (var dbconn = new DbContext())
{
    while (!jobDone)
    {
        foreach (var one in returnData)
        {
            try
            {
                TargetTable row = new TargetTable(); // map the fields of 'one' onto row here
                dbconn.TargetTable.Add(row);
                dbconn.SaveChanges();
            }
            catch (Exception ex)
            {
                Console.WriteLine("DB Insertion Fail.");
            }
        }
    }
}
This way, even if your code fails at some point, the Context, resources and connections will be properly disposed.

Catch-all better alternative

I'm developing using Asp.net MVC 4, NHibernate and Session-per-request.
I have a service method which updates multiple databases so the work is wrapped in a TransactionScope. I have discovered that the NHibernate Session is not usable outside the TransactionScope due to it not being thread safe.
The code is similar to this:
public void ProcessItems()
{
    var items = itemService.GetAll();
    var mailMessages = new List<MailMessage>();
    using (var scope = new TransactionScope())
    {
        foreach (var item in items)
        {
            itemService.UpdateOne(item);
            itemService.UpdateTwo(item);
            try
            {
                mailMessages.Add(itemService.GenerateMailMessage(item));
            }
            catch (Exception ex)
            {
                // we don't want exceptions caused by generating email to prevent DB work
                if (ex is InvalidOperationException
                    || ex is NullReferenceException
                    || ex is FormatException
                    || ex is ArgumentException
                    || ex is ItemNotFoundException)
                {
                    LogError(String.Format("Unable to generate email alert for item.Id:{0} - {1}", item.Id, ex.Message), log);
                }
                else
                {
                    // for exception types we don't know, rethrow
                    throw;
                }
            }
        }
        scope.Complete();
    }
    mailService.SendMail(mailMessages);
}
The database updates are critical to the success of the method. The email alerts are not. I don't want problems with the generation of the email alerts to prevent the database updates taking place.
My questions are:
Given the constraints, does this look like a reasonable approach?
I'm worried that an exception I haven't handled may be thrown when generating the email message. This will cause the entire TransactionScope to be rolled back. It feels like I want any exception to be ignored if it happens in that try block of code. However, I appreciate that a catch-all is a no-no, so any other suggestions for making this more robust are welcome.
EDIT
Just to clarify my question:
I know it would be better to generate and send the email after the TransactionScope. However I am unable to do this as GenerateMailMessage() makes use of the NHibernate Session which is not safe to use outside of the TransactionScope block.
I guess what I was really asking is: would it be defensible to change the catch statement above to a genuine catch-all (still with logging taking place) in order to provide as much protection as possible to the critical UpdateOne() and UpdateTwo() calls?
Update
My advice would be to try to prevent the exception from occurring. Failing that, a catch-all is likely the only option you have remaining. Logging all exceptions is going to be critical here.
1st question: Your case isn't really a catch-all, you are catching all exceptions to query the type. My only advice is to log details for the exceptions you choose to consume.
2nd question: I would completely remove the generation of email from the scope if it is liable to fail. Once the transaction rolls back, all items will be rolled back too. Create and send all the emails on successful commit.
public void ProcessItems()
{
    var items = itemService.GetAll();
    var mailMessages = new List<MailMessage>();
    bool committed = false;
    using (var scope = new TransactionScope())
    {
        foreach (var item in items)
        {
            itemService.UpdateOne(item);
            itemService.UpdateTwo(item);
        }
        scope.Complete();
        committed = true;
    }
    if (committed)
    {
        // Embed creation code and exception handling here.
        mailService.SendMail(mailMessages);
    }
}
I'd suggest changing this around. Instead of generating the email there and then... keep a list of the successfully processed items in a local List and then do all the mail sends at the end after you've committed.
public void ProcessItems()
{
    var items = itemService.GetAll();
    var successItems = new List<Item>();
    var mailMessages = new List<MailMessage>();
    using (var scope = new TransactionScope())
    {
        foreach (var item in items)
        {
            itemService.UpdateOne(item);
            itemService.UpdateTwo(item);
            successItems.Add(item);
            // you still need try/catch handling for DB updates that fail... or maybe you want it all to fail.
        }
        scope.Complete();
    }
    mailMessages = successItems.Select(i => itemService.GenerateMailMessage(i)).ToList();
    // Do stuff with mail messages
}
