I am trying to perform a simple insert-or-update in EF 6, but because the method is called twice (I want to update different fields each time), the first call's INSERT hasn't committed by the time the second call checks, so the second call also decides it needs to perform an INSERT.
Below is how I attempted to catch the error and retry with an update.
Simplified:
try
{
    var record = new DBRecord()
    {
        id = 1234,
        token1 = shouldUpdateToken1 ? "OK" : null,
        token2 = shouldUpdateToken1 ? null : "OK"
    };
    dbContext.entityTable.Attach(record).State =
        dbContext.entityTable.Any(x => x.id == 1234) ? EntityState.Modified : EntityState.Added;
    dbContext.SaveChanges();
}
catch (Exception ex)
{
    // Must require an update instead
    dbContext.ChangeTracker.Clear();
    // Only set the fields to change, otherwise existing values would be overwritten
    var newRecord = new DBRecord()
    {
        id = 1234
    };
    if (shouldUpdateToken1)
    {
        newRecord.token1 = "OK";
        dbContext.entityTable.Attach(newRecord).Property(x => x.token1).IsModified = true;
    }
    else
    {
        newRecord.token2 = "OK";
        dbContext.entityTable.Attach(newRecord).Property(x => x.token2).IsModified = true;
    }
    dbContext.SaveChanges();
}
Is this really the best solution? The try/catch block lets me react to the primary key violation (id is the PK), which errors with:
Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while saving the entity changes. See the inner exception for details.
---> MySqlConnector.MySqlException (0x80004005): Duplicate entry '1234' for key 'PRIMARY'
This is obviously because the insert is being attempted on both runs of this method.
I believe EF 7 has some improvements for upsert commands, but I'm locked to 6.12.
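Since the error comes from MySqlConnector, one race-free alternative is to let MySQL do the insert-or-update atomically with INSERT ... ON DUPLICATE KEY UPDATE. A minimal sketch, assuming EF Core's ExecuteSqlInterpolated is available; the table and column names here are illustrative, not taken from a real schema:

```csharp
// Assumes EF Core on MySQL; entity_table/token1/token2 mirror the model above.
// ON DUPLICATE KEY UPDATE makes the whole operation atomic, so two concurrent
// calls can no longer both decide they need to INSERT.
var token1 = shouldUpdateToken1 ? "OK" : null;
var token2 = shouldUpdateToken1 ? null : "OK";

dbContext.Database.ExecuteSqlInterpolated($@"
    INSERT INTO entity_table (id, token1, token2)
    VALUES (1234, {token1}, {token2})
    ON DUPLICATE KEY UPDATE
        token1 = COALESCE({token1}, token1),
        token2 = COALESCE({token2}, token2)");
```

The COALESCE calls keep the existing column value when the incoming one is null, matching the "only update the field for this call" intent.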
Related
A document is inserted into a collection using C#:
{
"_id" : UUID("some_guid")
}
Via
db.collection.insert(new { id = a_guid });
We rely upon the uniqueness of the guid/uuid by specifying the id in the document ourselves, meaning the MongoDB driver is spared from generating it.
Now, all of this is wrapped in a try..catch where a duplicate key exception is caught. Calling code uses this routine for conflict checking. That is, if a guid hasn't been encountered before, insert it; next time around, on trying to insert the same value again, the exception lets us know there's a duplicate.
We appear to be getting into a situation where values are written but an exception is STILL thrown, indicating a conflict where there isn't one.
We have had this working in a 3 node replica set.
It is NOT working in a 5 node replica set, which purports to be healthy. The write concern is set to 1, meaning the write is acknowledged once the master has applied it (but not journaled it), just like in the 3 node set.
Where should I dig deeper? The duplicate exception derives from a write concern exception; is something screwy going on here? Is the Mongo driver correctly interpreting the error and raising the right exception?
Any leads would be great!
EDIT:
var database = this.client.GetServer().GetDatabase("A_Database");
var collection = database.GetCollection<object>("A_Collection");
try
{
collection.Insert(new { Id = paymentReference.ToGuid() });
}
catch (MongoDuplicateKeyException)
{
return false;
}
return true;
This is NOT called in a loop.
You can catch the base exception MongoWriteException and filter with when on the Category; example code:
var database = this.client.GetServer().GetDatabase("A_Database");
var collection = database.GetCollection<object>("A_Collection");
try
{
collection.Insert(new { Id = paymentReference.ToGuid() });
}
catch (MongoWriteException ex) when(ex.WriteError.Category == ServerErrorCategory.DuplicateKey)
{
return false;
}
return true;
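If the 5 node set is losing or retrying acknowledgements, it may also be worth testing with a stronger write concern than w:1 to rule out acknowledgement races. A sketch, assuming the 2.x driver API rather than the legacy GetServer() API used above:

```csharp
// Assumes the MongoDB 2.x C# driver. With WMajority the insert is only
// acknowledged once a majority of the replica set has applied it, which
// behaves more predictably on a 5 node set than a bare w:1.
var collection = client.GetDatabase("A_Database")
    .GetCollection<MongoDB.Bson.BsonDocument>("A_Collection")
    .WithWriteConcern(MongoDB.Driver.WriteConcern.WMajority);

collection.InsertOne(new MongoDB.Bson.BsonDocument("_id", paymentReference.ToGuid()));
```

This doesn't change the duplicate-key semantics; it only tightens when "written" is reported back to the client.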
Here's a fixed version of your code:
var database = this.client.GetServer().GetDatabase("A_Database");
var collection = database.GetCollection<object>("A_Collection");
try
{
collection.Insert(new { Id = paymentReference.ToGuid() });
}
catch (Exception)
{
collection.Insert(new { Id = Guid.NewGuid() });
return true;
}
return true;
Using entity framework, I have a function that basically goes something like this:
using (var ctx = new Dal.MyEntities())
{
    Dal.TempTable temp = null;
    try
    {
        //...
        // create a temp entity
        temp = new Dal.TempTable();
        // populate its children
        // note that temp is set to cascade deletes down to its children
        temp.Children = (from foo in foos
                         select new Dal.Children()
                         {
                             // set some properties...
                             Field1 = foo.field1,
                             Field2 = foo.field2
                         }).ToList();
        //...
        // add temp row to temp table
        ctx.TempTables.Add(temp);
        ctx.SaveChanges();
        // some query that joins on the temp table...
        var results = from d in ctx.SomeOtherTable
                      join t in temp.Children
                      on new { d.Field1, d.Field2 } equals new { t.Field1, t.Field2 }
                      select d;
        if (results.Count() == 0)
        {
            throw new Exception("no results");
        }
        // Normal processing and return result
        return results;
    }
    finally
    {
        if (temp != null && temp.ID != 0)
        {
            ctx.TempTables.Remove(temp);
            ctx.SaveChanges();
        }
    }
}
The idea is that as part of the processing of a request I need to build a temporary table with some data that then gets used to join the main query and filter the results. Once the query has been processed, the temp table should be deleted. I put the deletion part in the finally clause so that if there is a problem with the query (an exception thrown), the temporary table will always get cleaned up.
This seems to work fine, except intermittently I have a problem where the SaveChanges in the finally block throws a deadlock exception with an error message along the lines of:
Transaction (Process ID 89) was deadlocked on lock resources with another process and
has been chosen as the deadlock victim. Rerun the transaction.
I can't reliably reproduce it, but it seems to happen most often if the previous query threw the "no results" exception. Note that, due to an error that was discovered on the front end, two identical requests were being submitted under certain circumstances, but nevertheless, the code should be able to handle that.
Does anybody have any clues as to what might be happening here? Is throwing an exception inside the using block a problem? Should I handle that differently?
Update: the exception might be a red herring. I removed it altogether (instead returning an empty result) and I still have the problem. I've tried a bunch of variations on:
using (new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
using (var ctx = new Dal.MyEntities())
{
}
But despite what I've read, it doesn't seem to make any difference. I still get intermittent deadlocks on the second SaveChanges to remove the temp table.
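Another variation worth sketching, assuming EF6's Database.BeginTransaction is available (names follow the code above), is one explicit transaction spanning the insert, the query, and the delete, so the temp row's locks are taken and released as a single unit:

```csharp
using (var ctx = new Dal.MyEntities())
using (var txn = ctx.Database.BeginTransaction())
{
    try
    {
        var temp = new Dal.TempTable();
        ctx.TempTables.Add(temp);
        ctx.SaveChanges();

        // ... run the query that joins on the temp table ...

        ctx.TempTables.Remove(temp);
        ctx.SaveChanges();
        txn.Commit();   // insert and delete commit (or fail) together
    }
    catch
    {
        txn.Rollback(); // rolling back removes the temp row automatically,
                        // so no second SaveChanges is needed for cleanup
        throw;
    }
}
```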
How about adding a catch block:
using (var ctx = new Dal.MyEntities())
{
    Dal.TempTable temp = null;
    try
    {
        //...
        temp = new Dal.TempTable();
        //...
        ctx.TempTables.Add(temp);
        ctx.SaveChanges();
        // some query that joins on the temp table...
        if (no results are returned)
        {
            throw new Exception("no results");
        }
        // Normal processing and return result
    }
    catch
    {
        // clean up immediately, then clear temp so the finally block
        // doesn't try to remove the same row a second time
        if (temp != null && temp.ID != 0)
        {
            ctx.TempTables.Remove(temp);
            ctx.SaveChanges();
            temp = null;
        }
        throw;
    }
    finally
    {
        if (temp != null && temp.ID != 0)
        {
            ctx.TempTables.Remove(temp);
            ctx.SaveChanges();
        }
    }
}
The following code is causing an intermittent exception:
public int UnblockJob(int jobId)
{
using (var connect = MakeConnect())
{
var tag = connect.JobTag.SingleOrDefault(jt => jt.JobId == jobId && jt.Name == Metrics.TagNameItemBlockCaller);
if (tag == null)
{
return 0;
}
connect.JobTag.Remove(tag);
return connect.SaveChanges();
}
}
How can I correct or troubleshoot it?
From the documentation for DbUpdateConcurrencyException:
Exception thrown by DbContext when it was expected that SaveChanges for an entity would result in a database update but in fact no rows in the database were affected.
This means that the record you are attempting to delete has since been removed from the database. It would appear that either another process is deleting records, or this function can be called concurrently.
There are several solutions, here are a couple:
Fix the source problem: stop other processes affecting the data.
Catch the error: wrap this method in a try/catch block; after all, you may only care that the record has been deleted:
try
{
//Existing code here
}
catch(DbUpdateConcurrencyException)
{
//Safely ignore this exception
}
catch(Exception e)
{
//Something else has occurred
throw;
}
I want to add data into the database and I get this error.
An exception of type 'System.Data.Entity.Core.EntityException' occurred in EntityFramework.SqlServer.dll but was not handled in user code
Additional information: An error occurred while starting a transaction on the provider connection. See the inner exception for details.
Here is my code.
public void alarmlog(int id)
{
AlarmLog log = new AlarmLog();
if (id == 1)
{
log.SectorID = "LRD";
log.LineNo = "L01";
log.WorkStation = "02";
log.LMQS = 1;
log.StaffID = 6;
log.DateTime = DateTime.Now;
}
db.AlarmLogs.Add(log);
db.SaveChanges();
}
I have a hunch that your error is located at your if (id == 1) statement. My guess is that you pass in an id which is not 1: a new AlarmLog is created, the if statement does not apply, and you then attempt to add an empty item to the database.
If any of those fields, even one for that matter, may not be null, an exception gets thrown.
Remove your if-block and see if the error has vanished:
public void alarmlog(int id)
{
AlarmLog log = new AlarmLog();
log.SectorID = "LRD";
log.LineNo = "L01";
log.WorkStation = "02";
log.LMQS = 1;
log.StaffID = 6;
log.DateTime = DateTime.Now;
db.AlarmLogs.Add(log);
db.SaveChanges();
}
If you want us to stop guessing, please do what the commenters said: wrap your code in a try { ... } catch (Exception ex) { ... } block to see what the error is.
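For example, since the EntityException message only says to see the inner exception, a minimal way to surface the real cause is to walk the InnerException chain (db here is the same context as in the question):

```csharp
try
{
    db.SaveChanges();
}
catch (Exception ex)
{
    // The root cause (e.g. a connection or login failure when starting
    // the transaction) is usually the innermost exception in the chain.
    for (var e = ex; e != null; e = e.InnerException)
    {
        Console.WriteLine($"{e.GetType().Name}: {e.Message}");
    }
    throw;
}
```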
User Table structure
Id
Username (unique constraint)
I have a problem with NHibernate and SQL Server, like this.
There are two concurrent transactions trying to insert data into the User table.
Both transactions query the table to check that the new username to insert does not already appear in it.
The problem is, let's say:
Transaction1 and Transaction2 read the User table and find that there is no username embarus in it.
Then Transaction2 tries to insert embarus into the User table, while Transaction1 has already inserted and committed embarus.
Therefore Transaction2 gets an exception for the unique constraint.
Please help me to solve this problem; any ideas or articles would be useful.
I found that SQL Server 2008 uses ReadCommitted as the default transaction isolation level.
Thank you so much.
You need to catch and handle the unique constraint violation. The best way to do that is to create an ISqlExceptionConverter implementation to translate the RDBMS specific exception to a custom exception in your application.
public class SqlServerExceptionConverter : ISQLExceptionConverter
{
public Exception Convert(AdoExceptionContextInfo adoExceptionContextInfo)
{
var sqlException = adoExceptionContextInfo.SqlException as SqlException;
if (sqlException != null)
{
// 2601 is unique key, 2627 is unique index; same thing:
// http://blog.sqlauthority.com/2007/04/26/sql-server-difference-between-unique-index-vs-unique-constraint/
if (sqlException.Number == 2601 || sqlException.Number == 2627)
{
return new UniqueKeyException(sqlException.Message, sqlException);
}
}
return adoExceptionContextInfo.SqlException;
}
}
public class UniqueKeyException : Exception
{
public UniqueKeyException(string message, Exception innerException)
: base(message, innerException)
{ }
}
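For the converter to take effect, NHibernate has to be told about it at configuration time. Assuming programmatic configuration (rather than XML), the registration looks something like:

```csharp
// Assumes NHibernate's programmatic configuration API; the property key
// constant is NHibernate.Cfg.Environment.SqlExceptionConverter.
var cfg = new NHibernate.Cfg.Configuration();
cfg.SetProperty(
    NHibernate.Cfg.Environment.SqlExceptionConverter,
    typeof(SqlServerExceptionConverter).AssemblyQualifiedName);
```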
Usage:
using (var txn = _session.BeginTransaction())
{
try
{
var user= new User
{
Name = "embarus"
};
_session.Save(user);
txn.Commit();
}
catch (UniqueKeyException)
{
txn.Rollback();
var msg = string.Format("A user named '{0}' already exists, please enter a different name or cancel.", "embarus");
// Do something useful
}
catch (Exception ex)
{
if (txn.IsActive)
{
txn.Rollback();
}
throw;
}
}
Note that you should not reuse the session after the exception occurs.