intermittent System.Data.Entity.Infrastructure.DbUpdateConcurrencyException - c#

The following code is causing an intermittent exception:
public int UnblockJob(int jobId)
{
    using (var connect = MakeConnect())
    {
        var tag = connect.JobTag.SingleOrDefault(jt => jt.JobId == jobId && jt.Name == Metrics.TagNameItemBlockCaller);
        if (tag == null)
        {
            return 0;
        }
        connect.JobTag.Remove(tag);
        return connect.SaveChanges();
    }
}
How can I correct or troubleshoot it?

From the documentation for DbUpdateConcurrencyException:
Exception thrown by DbContext when it was expected that SaveChanges for an entity would result in a database update but in fact no rows in the database were affected.
This means that the record you are attempting to delete has since been removed from the database. It would appear that you either have another process deleting records or that this function can be called concurrently.
There are several solutions; here are a couple:
Fix the source problem: stop other processes from affecting the data.
Catch the error: wrap this method in a try/catch block; after all, you may only care that the record has been deleted:
try
{
    // Existing code here
}
catch (DbUpdateConcurrencyException)
{
    // Safely ignore this exception
}
catch (Exception e)
{
    // Something else has occurred
    throw;
}
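For example, applied to the original method, a minimal sketch could look like this (assuming that a row another process has already deleted can be treated the same as "tag not found"):
public int UnblockJob(int jobId)
{
    using (var connect = MakeConnect())
    {
        var tag = connect.JobTag.SingleOrDefault(jt => jt.JobId == jobId && jt.Name == Metrics.TagNameItemBlockCaller);
        if (tag == null)
        {
            return 0;
        }
        connect.JobTag.Remove(tag);
        try
        {
            return connect.SaveChanges();
        }
        catch (DbUpdateConcurrencyException)
        {
            // Another process removed the tag between the query and SaveChanges;
            // report it the same way as "tag not found".
            return 0;
        }
    }
}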

DB ConnectionState = Open but context.SaveChanges throws "connection broken" exception

In my service I have a background thread that does best-effort saving of a stream of objects of a certain entity type. The code is roughly the following:
while (AllowRun)
{
    try
    {
        using (DbContext context = GetNewDbContext())
        {
            while (AllowRun && context.GetConnection().State == ConnectionState.Open)
            {
                TEntity entity = null;
                try
                {
                    while (pendingLogs.Count > 0)
                    {
                        lock (pendingLogs)
                        {
                            entity = null;
                            if (pendingLogs.Count > 0)
                            {
                                entity = pendingLogs[0];
                                pendingLogs.RemoveAt(0);
                            }
                        }
                        if (entity != null)
                        {
                            context.Entities.Add(entity);
                        }
                    }
                    context.SaveChanges();
                }
                catch (Exception e)
                {
                    // (1)
                    // Log exception and continue execution
                }
            }
        }
    }
    catch (Exception e)
    {
        // Log context initialization failure and continue execution
    }
}
(This is mostly the actual code; I omitted a few non-relevant parts that attempt to keep popped objects in memory until we are able to save to the DB again after an exception is caught at block (1).)
So, essentially, there is an endless loop trying to read items from a list and save them to the DB. If we detect that the connection to the DB has failed for some reason, it just attempts to reopen it and continue. The issue is that sometimes (I have not figured out how to reproduce it so far), the code above starts to produce the following exception when context.SaveChanges() is called (caught in block (1)):
System.Data.EntityException: An error occurred while starting a transaction on the provider connection. See the inner exception for details. --->
System.InvalidOperationException: The requested operation cannot be completed because the connection has been broken.
The error is logged, but when execution returns to the context.GetConnection().State == ConnectionState.Open check, it evaluates to true. So we are in a state where the context reports that its DB connection is open, but we can't run queries against that context. Restarting the service removes the issue (as does messing with the AllowRun variable in the debugger to force recreation of the context). So the question is: since I can't trust the context's connection state, how do I verify that I can run queries against the DB?
Also, is there a clean way to figure out that the connection is not in a "healthy" state? I mean, the EntityException by itself is not an indication that I should reset the connection; only if its InnerException is an InvalidOperationException with some specific Message is it time to reset it. But I guess there could be other situations where ConnectionState indicates that everything is fine, yet I can't query the DB. Can I catch those proactively, instead of waiting until it starts to bite me?
What is the log frequency? If this loop takes longer than the connection timeout, the connection may be closed while SaveChanges is executing.
while (pendingLogs.Count > 0)
{
    lock (pendingLogs)
    {
        entity = null;
        if (pendingLogs.Count > 0)
        {
            entity = pendingLogs[0];
            pendingLogs.RemoveAt(0);
        }
    }
    if (entity != null)
    {
        context.Entities.Add(entity);
    }
}
context.SaveChanges();
From my experience working on similar services, garbage collection won't occur until the end of the using block.
If there are a lot of pending logs to write, this could use a lot of memory, and I also suspect it might starve the connection pool.
You can analyse memory usage using RedGate ANTS or a similar tool, and check which connections are open using the following script from this Stack Overflow question: how to see active SQL Server connections?
SELECT
DB_NAME(dbid) as DBName,
COUNT(dbid) as NumberOfConnections,
loginame as LoginName
FROM
sys.sysprocesses
WHERE
dbid > 0
GROUP BY
dbid, loginame
;
I think it's good practice to free up the context as often as you can in order to give the GC a chance of cleaning up, so you could rewrite the loop as:
while (AllowRun)
{
    try
    {
        while (pendingLogs.Count > 0)
        {
            using (DbContext context = GetNewDbContext())
            {
                while (AllowRun && context.GetConnection().State == ConnectionState.Open)
                {
                    TEntity entity = null;
                    try
                    {
                        lock (pendingLogs)
                        {
                            entity = null;
                            if (pendingLogs.Count > 0)
                            {
                                entity = pendingLogs[0];
                                pendingLogs.RemoveAt(0);
                            }
                        }
                        if (entity != null)
                        {
                            context.Entities.Add(entity);
                            context.SaveChanges();
                        }
                    }
                    catch (Exception e)
                    {
                        // (1)
                        // Log exception and continue execution
                    }
                }
            }
        }
    }
    catch (Exception e)
    {
        // Log context initialization failure and continue execution
    }
}
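If you also want to check proactively whether the connection is actually usable (the second part of the question), one option is to issue a trivial command before entering the inner loop and treat any failure as a signal to dispose and recreate the context. This is only a sketch; it assumes GetConnection() returns a plain DbConnection:
// ConnectionState.Open alone is not reliable: a broken connection can keep
// reporting Open until the next command actually fails.
private static bool CanQuery(System.Data.Common.DbConnection connection)
{
    try
    {
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = "SELECT 1";
            cmd.CommandTimeout = 5;
            cmd.ExecuteScalar();
        }
        return true;
    }
    catch (Exception)
    {
        // Any failure here means the connection should not be trusted.
        return false;
    }
}
The inner while condition could then become while (AllowRun && CanQuery(context.GetConnection())), so a dead connection drops you back out to the using block and a fresh context is created.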
I recommend going through the question linked below.
Timeout Expired is usually thrown when a SQL query takes too long to run.
It sounds like a SQL job is running, perhaps a backup? That might be locking tables or restarting the service.
ADONET async execution - connection broken error

Why is a duplicate key error being thrown and value still inserted

A document is inserted into a collection using C#:
{
"_id" : UUID("some_guid")
}
Via
db.collection.insert(new { id = a_guid });
We rely on the uniqueness of the guid/uuid by specifying the id in the document ourselves, meaning the MongoDB driver is spared from generating it.
Now, all of this is wrapped in a try..catch where a duplicate key exception is caught. Calling code uses this routine for conflict checking. That is, if a guid hasn't been encountered before, insert it; next time around, on trying to insert the same value again, the exception lets us know there's a duplicate.
We appear to be getting into a situation where values are written but an exception is STILL thrown, indicating a conflict where there isn't one.
We have had this working in a 3-node replica set.
It is NOT working in a 5-node replica set that purports to be healthy. The write concern is set to 1, indicating acknowledgement when the master is written to (but not the journal), just like the 3-node set.
Where should I dig deeper? The duplicate key exception derives from a write concern exception; is something screwy going on here? Is the Mongo driver correctly interpreting the error and raising the right exception?
Any leads would be great!
EDIT:
var database = this.client.GetServer().GetDatabase("A_Database");
var collection = database.GetCollection<object>("A_Collection");
try
{
    collection.Insert(new { Id = paymentReference.ToGuid() });
}
catch (MongoDuplicateKeyException)
{
    return false;
}
return true;
This is NOT called in a loop.
You can catch MongoWriteException and filter by its Category with a when clause. Example code:
var database = this.client.GetServer().GetDatabase("A_Database");
var collection = database.GetCollection<object>("A_Collection");
try
{
    collection.Insert(new { Id = paymentReference.ToGuid() });
}
catch (MongoWriteException ex) when (ex.WriteError.Category == ServerErrorCategory.DuplicateKey)
{
    return false;
}
return true;
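Note that MongoWriteException comes from the 2.x driver API; if you are on that driver, the non-legacy form of the same check looks roughly like this (a sketch, assuming this.client is a MongoClient and the MongoDB.Driver/MongoDB.Bson namespaces are available):
var database = this.client.GetDatabase("A_Database");
var collection = database.GetCollection<BsonDocument>("A_Collection");
try
{
    // The document supplies its own _id, so inserting the same guid twice
    // violates the unique index on _id. Depending on driver version you may
    // need an explicit BsonBinaryData(guid, GuidRepresentation.Standard).
    collection.InsertOne(new BsonDocument("_id", paymentReference.ToGuid()));
}
catch (MongoWriteException ex) when (ex.WriteError.Category == ServerErrorCategory.DuplicateKey)
{
    return false;
}
return true;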
Here's a fixed version of your code:
var database = this.client.GetServer().GetDatabase("A_Database");
var collection = database.GetCollection<object>("A_Collection");
try
{
    collection.Insert(new { Id = paymentReference.ToGuid() });
}
catch (Exception)
{
    collection.Insert(new { Id = Guid.NewGuid() });
    return true;
}
return true;

delete azure table storage row without checking for existence

I've been using azure table storage for years, and I'm not sure what the "proper" way to do this is with the newest WindowsAzure.Storage library, version 5.0.1-preview (for use in a new ASP.NET 5 application):
Problem:
Given a partition key and row key, delete the row without checking for existence first, and without failing if it does not exist.
Current Solution: This code works... but the exception handling is confusing:
public async Task DeleteRowAsync(CloudTable table, string partition, string row)
{
    var entity = new DynamicTableEntity(partition, row);
    entity.ETag = "*";
    var op = TableOperation.Delete(entity);
    try
    {
        await table.ExecuteAsync(op);
    }
    catch (Exception ex)
    {
        var result = RequestResult.TranslateFromExceptionMessage(ex.Message);
        if (result == null || result.HttpStatusCode != 404)
            throw ex;
    }
}
Questions:
The exception itself pointed me to this TranslateFromExceptionMessage method... I can't find a whole lot of information on that and WrappedStorageException (the type of the exception that is thrown). Is this some kind of new/preferred way to check for 404 errors on storage exceptions? Does anyone know if all storage exceptions will now use this, or do I need to write code to test and figure it out?
There is an InnerException of type StorageException. Presumably our older code that used StorageException.RequestInformation.HttpStatusCode could access this inner exception in the same way. Is that "OK", or is parsing these new XML error messages better or more robust somehow?
Is there a different approach altogether that I should be considering for this case?
If you are using the latest client (Azure.Data.Tables), the delete method automatically swallows 404 responses and does not throw. This avoids the need to write code that introduces race conditions (checking first before performing an operation) or to handle this condition with a try/catch block.
If you want to know whether the operation actually deleted an entity or it didn't exist, you can inspect the Status property of the response.
Response response = await tableClient.DeleteEntityAsync(partition, row);
if (response.Status == (int)HttpStatusCode.NotFound)
{
    // entity didn't exist
}
The RequestResult.TranslateFromExceptionMessage method is now marked [Obsolete] and I wanted a way to ignore 404's myself.
Based on your tip to check out the RequestInformation.HttpStatusCode I came up with the following:
try
{
    await table.ExecuteAsync(op);
}
catch (StorageException storEx)
{
    if (storEx.RequestInformation.HttpStatusCode != 404)
    {
        throw;
    }
}
There is a similar approach found in the AspNet WebHooks project when configured to use Azure Table Storage. Take a look at the Microsoft.Aspnet.WebHooks.custom.AzureStorage StorageManager class.
I'm not sure this adds much on top of what you'd already found, but they handle everything without throwing an exception and always return a status code so you can react to that as necessary.
One difference here is they pass in the table and the operation to a multi-purpose ExecuteAsync method, rather than having one specifically for delete, but that's just an implementation detail.
Relevant code from their example:
public async Task<TableResult> ExecuteAsync(CloudTable table, TableOperation operation)
{
    if (table == null)
    {
        throw new ArgumentNullException(nameof(table));
    }
    if (operation == null)
    {
        throw new ArgumentNullException(nameof(operation));
    }
    try
    {
        var result = await table.ExecuteAsync(operation);
        return result;
    }
    catch (Exception ex)
    {
        var errorMessage = GetStorageErrorMessage(ex);
        var statusCode = GetStorageStatusCode(ex);
        var message = string.Format(CultureInfo.CurrentCulture, AzureStorageResources.StorageManager_OperationFailed, statusCode, errorMessage);
        _logger.Error(message, ex);
        return new TableResult { HttpStatusCode = statusCode };
    }
}

public string GetStorageErrorMessage(Exception ex)
{
    if (ex is StorageException storageException && storageException.RequestInformation != null)
    {
        var status = storageException.RequestInformation.HttpStatusMessage != null ?
            storageException.RequestInformation.HttpStatusMessage + " " :
            string.Empty;
        var errorCode = storageException.RequestInformation.ExtendedErrorInformation != null ?
            "(" + storageException.RequestInformation.ExtendedErrorInformation.ErrorMessage + ")" :
            string.Empty;
        return status + errorCode;
    }
    else if (ex != null)
    {
        return ex.Message;
    }
    return string.Empty;
}

public int GetStorageStatusCode(Exception ex)
{
    return ex is StorageException se && se.RequestInformation != null ? se.RequestInformation.HttpStatusCode : 500;
}
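For completeness, a sketch of how the original DeleteRowAsync could sit on top of that non-throwing ExecuteAsync helper (the wildcard ETag and DynamicTableEntity come from the question; the status handling is only an example and assumes the method lives next to ExecuteAsync):
public async Task DeleteRowAsync(CloudTable table, string partition, string row)
{
    // "*" means delete unconditionally, without reading the entity first.
    var entity = new DynamicTableEntity(partition, row) { ETag = "*" };

    var result = await ExecuteAsync(table, TableOperation.Delete(entity));

    // 404 just means the row was already gone; anything else >= 400 is a real failure.
    if (result.HttpStatusCode >= 400 && result.HttpStatusCode != (int)HttpStatusCode.NotFound)
    {
        // React however is appropriate for your application, e.g. log or rethrow.
    }
}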

Deadlock when previous query threw an exception

Using entity framework, I have a function that basically goes something like this:
using (var ctx = new Dal.MyEntities())
{
    Dal.Temp temp = null;
    try
    {
        //...
        // create a temp entity
        temp = new Dal.Temp();
        // populate its children
        // note that temp is set to cascade deletes down to its children
        temp.Children = (from foo in foos
                         select new Dal.Children()
                         {
                             // set some properties...
                             Field1 = foo.field1,
                             Field2 = foo.field2
                         }).ToList();
        //...
        // add temp row to temp table
        ctx.TempTables.Add(temp);
        ctx.SaveChanges();
        // some query that joins on the temp table...
        var results = from d in ctx.SomeOtherTable
                      join t in temp.Children
                          on new { d.Field1, d.Field2 } equals new { t.Field1, t.Field2 }
                      select d;
        if (results.Count() == 0)
        {
            throw new Exception("no results");
        }
        // Normal processing and return result
        return results;
    }
    finally
    {
        if (temp != null && temp.ID != 0)
        {
            ctx.TempTables.Remove(temp);
            ctx.SaveChanges();
        }
    }
}
The idea is that as part of the processing of a request I need to build a temporary table with some data that then gets used to join the main query and filter the results. Once the query has been processed, the temp table should be deleted. I put the deletion part in the finally clause so that if there is a problem with the query (an exception thrown), the temporary table will always get cleaned up.
This seems to work fine, except that intermittently I have a problem where the SaveChanges in the finally block throws a deadlock exception with an error message along the lines of:
Transaction (Process ID 89) was deadlocked on lock resources with another process and
has been chosen as the deadlock victim. Rerun the transaction.
I can't reliably reproduce it, but it seems to happen most often if the previous query threw the "no results" exception. Note that, due to an error that was discovered on the front end, two identical requests were being submitted under certain circumstances, but nevertheless, the code should be able to handle that.
Does anybody have any clues as to what might be happening here? Is throwing an exception inside the using block a problem? Should I handle that differently?
Update: the exception might be a red herring. I removed it altogether (returning an empty result instead) and I still have the problem. I've tried a bunch of variations on:
using (new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
using (var ctx = new Dal.MyEntities())
{
}
But despite what I've read, it doesn't seem to make any difference. I still get intermittent deadlocks on the second SaveChanges to remove the temp table.
How about adding a catch block:
using (var ctx = new Dal.MyEntities())
{
    Dal.TempTable temp = null;
    try
    {
        //...
        temp = new Dal.TempTable();
        //...
        ctx.TempTables.Add(temp);
        // some query that joins on the temp table...
        if (no results are returned)
        {
            throw new Exception("no results");
        }
        // Normal processing and return result
    }
    catch
    {
        ctx.TempTables.Remove(temp);
        ctx.SaveChanges();
    }
    finally
    {
        if (temp != null && temp.ID != 0)
        {
            ctx.TempTables.Remove(temp);
            ctx.SaveChanges();
        }
    }
}

Entity Framework; How to handle an exception in foreach loop and keep iterating

When I iterate through a foreach with the following code, it successfully catches the first exception that occurs and adds the id to my error list. On all subsequent iterations of the loop, it continues to catch that previous exception.
How can I appropriately catch the exception and undo or clear the failed DeleteObject request so that subsequent deletes can be performed?
public ActionResult Delete(int[] ListData)
{
    List<int> removed = new List<int>();
    List<int> error = new List<int>();
    Item deleteMe;
    foreach (var id in ListData)
    {
        deleteMe = this.getValidObject(id);
        if (deleteMe == null)
        {
            error.Add(id);
            continue;
        }
        try
        {
            this.DB.Items.DeleteObject(deleteMe);
            this.DB.SaveChanges();
            removed.Add(id);
        }
        catch (DataException ex)
        {
            // revert change to this.DB.Items?
            error.Add(id);
        }
    }
    if (error.Count > 0)
    {
        return Json(new { Success = false, Removed = removed, Error = error });
    }
    return Json(new { Success = true, Removed = removed });
}
I have searched SO and Google, and most people process all the deletes first and then save changes so that it is one transaction. But I need each delete to be processed individually, so a single failure does not stop the rest of the transactions.
I am using Entity Framework 4.
The exception I get in this specific example is caused by foreign keys associated with the item being removed. While I will be handling this scenario in production, the loop should be able to continue no matter what the exception is.
I assume that the same context, this.DB, is being used in this.getValidObject(id) to retrieve the entity. If that is the case, call this.DB.Detach(deleteMe) in the exception block. That should prevent SaveChanges() from trying to delete the problematic entity on the next iteration.
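In the posted action that would look roughly like this (a sketch; it assumes this.DB is the EF 4 ObjectContext, where Detach is available directly on the context):
try
{
    this.DB.Items.DeleteObject(deleteMe);
    this.DB.SaveChanges();
    removed.Add(id);
}
catch (DataException)
{
    // Detach the entity whose delete failed so the pending delete is not
    // replayed by the next SaveChanges call.
    this.DB.Detach(deleteMe);
    error.Add(id);
}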
The code you present looks good. What is the error you see? As you've noted, maybe you need to un-tag something in this.DB.Items, though I don't think so. You could also try creating a new DataContext for each iteration so that the old, failed DataContext's state is irrelevant.
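A sketch of that per-iteration context idea (the ItemsEntities type name and the Id key property are placeholders, since the real names aren't shown in the question):
foreach (var id in ListData)
{
    // A fresh context per iteration, so a failed delete cannot poison later ones.
    using (var db = new ItemsEntities())
    {
        var deleteMe = db.Items.SingleOrDefault(i => i.Id == id);
        if (deleteMe == null)
        {
            error.Add(id);
            continue;
        }
        try
        {
            db.Items.DeleteObject(deleteMe);
            db.SaveChanges();
            removed.Add(id);
        }
        catch (DataException)
        {
            error.Add(id);
        }
    }
}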
If I understood correctly, you cannot remove the entity (Item) because it has a foreign key association (child) pointing to it.
You will first have to deal with all child (related) entities that reference the parent (Item) you want to delete, either by removing the relationship, updating them to point to an alternative parent (Item), or deleting the child entities, and only then remove the parent (Item) entity.
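A sketch of the delete-the-children-first variant (the Children navigation property and DB.Children set are placeholder names; adjust to the real model):
try
{
    // Remove dependent rows first so the foreign key constraint is satisfied.
    foreach (var child in deleteMe.Children.ToList())
    {
        this.DB.Children.DeleteObject(child);
    }
    this.DB.Items.DeleteObject(deleteMe);
    this.DB.SaveChanges();
    removed.Add(id);
}
catch (DataException)
{
    error.Add(id);
}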
