If I have the following Linq code:
context.Table1s.InsertOnSubmit(t);
context.Table1s.InsertOnSubmit(t2);
context.Table1s.InsertOnSubmit(t3);
context.SubmitChanges();
And I get a database error due to the 2nd insert, LINQ throws an exception saying there was an error. But is there a way to find out that it was the 2nd insert that had the problem, and not the 1st or 3rd?
To clarify, there are business reasons that I would expect the 2nd to fail (I am using a stored procedure to do the insert and am also doing some validation and raising an error if it fails). I want to be able to tell the user which one failed and why. I know this validation would be better done in the C# code and not in the database, but that is currently not an option.
You can explicitly specify a conflict mode, like this:
context.SubmitChanges(ConflictMode.ContinueOnConflict);
if you want to insert what is valid and not fail on the first conflict. Then use the context.ChangeConflicts collection to find out which objects conflicted during the insertion.
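For example (a minimal sketch of that approach, assuming Table1 is the entity type from the question):
try
{
    context.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    foreach (ObjectChangeConflict conflict in context.ChangeConflicts)
    {
        // conflict.Object is the entity instance that did not go in;
        // cast it back to Table1 to tell the user which record failed
        Table1 failed = (Table1)conflict.Object;
        conflict.Resolve(RefreshMode.KeepCurrentValues);
    }
}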
Comment out the first and third inserts to eliminate them as suspects.
My first thought is that the second insert has the same ID as the first, but it's tough to diagnose your problem without more details about the error.
I am getting an InvalidOperationException when trying to add a row using LinqToSql. We cannot duplicate it in house, and it happens about 0.06% of the time, for only one of our customers, always on a relatively simple change to the database (a single-row insert or a single-field update).
Message:
This SqlTransaction has completed; it is no longer usable.
Stack Trace:
at System.Data.SqlClient.SqlTransaction.ZombieCheck()
at System.Data.SqlClient.SqlTransaction.Rollback()
at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
Here is a sample piece of code (the database autogenerates the primary key)
TableName row = new TableName();
row.Description = "something";
row.Action = "action";
Context.TableName.InsertOnSubmit(row);
Context.SubmitChanges();
We use SQL Server 2008 R2. The inserts and updates do go through on the server. But we still get the exception. There is nothing that should ever prevent these updates and inserts from taking place. No dependencies or other stuff.
How do we stop these exceptions / zombie checks / rollbacks from happening, or what is causing them in the first place?
EDIT:
After further inspection, the database update being done by SubmitChanges() is actually occurring. The exception is thrown after the transaction has successfully completed, and the database row is updated to the new value.
One thing to be aware of is that LinqToSql (and EntityFramework) will leave a DateTime field in your data object at its default value (DateTime.MinValue, i.e. 01/01/0001) if you never assign it, so if your table has a datetime column the insert will throw, because that value is outside the range that SQL Server's datetime type accepts.
You can get around this error either by using the datetime2 type in MSSQL (which can store the default 01/01/0001 value of a DateTime object) or by manually assigning a valid date to the data object's DateTime field(s) prior to insert/update.
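For example, taking the insert from the question (CreatedOn here is a hypothetical datetime column):
TableName row = new TableName();
row.Description = "something";
row.Action = "action";
// without this line the column would be sent as DateTime.MinValue (01/01/0001),
// which the SQL Server datetime type rejects
row.CreatedOn = DateTime.Now;
Context.TableName.InsertOnSubmit(row);
Context.SubmitChanges();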
Without a more detailed stack trace, this is the only obvious problem that comes to mind. HTH.
EDIT:
Looks like this isn't entirely uncommon: http://connect.microsoft.com/VisualStudio/feedback/details/588676/system-data-linq-datacontext-submitchanges-causes-invalidoperationexception-during-rollback#details
The root problem seems to be that the internal ADO logic that LinqToSql uses isn't really configured properly for handling transactional rollbacks. From what I can tell, the only real solution is to provide a transaction object to LinqToSql and manage rollbacks yourself, which doesn't really seem all that appealing.
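If you do go down that road, the shape of it would be roughly this (a sketch; it assumes you are willing to open the context's connection yourself):
Context.Connection.Open();
using (var transaction = Context.Connection.BeginTransaction())
{
    Context.Transaction = transaction;
    try
    {
        Context.SubmitChanges();
        transaction.Commit();
    }
    catch
    {
        // the rollback is now under your control instead of LINQ to SQL's
        transaction.Rollback();
        throw;
    }
}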
Imagine an object with a field that can't have a duplicate value in the database. My first instinct was to create a unique attribute that I could apply as a data annotation to a property. This unique attribute would hit the database and check if the value already exists. This would work when executing a create method, but would fail on an update. On an update, I would get a duplicate value error for every unique field of my entity whose value I don't want to change. What would be a good way, or an established practice, to accomplish this on ASP.NET MVC 2 in a way that fits nicely with the ModelState? Passing the id of my object to the attribute validator could work by checking if the duplicate value that is found is of the same entity that I am updating but I don't know how to get that data from inside of the validator.
Please forgive me if this is a stupid question or if it is phrased incoherently. It's almost 3 in the morning and I've been coding since the morning of yesterday.
For this kind of validation, I would let the database do what it already does so well. Make sure your database has the unique constraint and let it report back an error if you violate it. You can then add the error to the model errors (with a nice friendly bit of text, rather than just plonking the SQL error).
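In the controller that might look roughly like this (a sketch; _db is a hypothetical LINQ to SQL data context, "UniqueValue" and model are placeholders, and 2627/2601 are the SQL Server error numbers for unique constraint/index violations):
try
{
    _db.SubmitChanges();
    return RedirectToAction("Index");
}
catch (SqlException ex)
{
    if (ex.Number == 2627 || ex.Number == 2601)
    {
        // translate the raw SQL error into a friendly model error
        ModelState.AddModelError("UniqueValue", "That value is already in use.");
        return View(model);
    }
    throw;
}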
If you are determined to perform a check yourself, you can get around the UPDATE problem by excluding the current record...
SELECT COUNT(*)
FROM myTable
WHERE myTable.UniqueValue = 'ShouldBeUnique'
AND myTable.Id <> 5
In this example, you use the id of the record you are updating to avoid checking it, which means you just check other records to see if they contain the unique value.
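The same check done from LINQ to SQL might look like this (a sketch with hypothetical names):
// true only if some OTHER record already holds the value
bool duplicateExists = db.MyTables.Any(t => t.UniqueValue == valueToCheck && t.Id != currentId);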
I am updating an object of type X and its children Y using LINQ to SQL, then submitting changes, and I am getting this error.
Example Code
X objX = _context.X.ToList().Where(x => x.DeletedOn == null).First();
objX.DeletedOn = DateTime.Now;
EntitySet<Y> objYs = objX.Ys;
Y objY = objYs[0];
objY.DeletedOn = DateTime.Now;
_context.SubmitChanges();
On SubmitChanges() I get an exception, "1 of 2 Updates failed", with no other information as to why that happened. Any ideas?
Also, the exception type is ChangeConflictException.
Sooo, what was the cause of the problem? A trigger.
I ran SQL Profiler and saw that when objY's DeletedOn property got updated, a trigger updated a column on objX's table called CountOfX, which led to an error because the UPDATE statement generated by LINQ to SQL still had the old CountOfX value in its WHERE clause (the optimistic concurrency check). Hence the conflict.
If you ever get this error, SQL Profiler is the best place to start your investigation.
ALSO NOT RELATED TO THE QUESTION
I am testing both LINQ to SQL and the ADO.NET Entity Framework; weirdly, this error happened in LINQ to SQL but not in the Entity Framework. But I like LINQ to SQL for its lazy loading. Waiting for EF to get out of beta.
I'm not sure what the cause of the error may be exactly, but there seem to be a number of problems with the example you've provided.
Using ToList() before the Where() method causes your context to read the entire table from the DB into memory and materialize it as a list; then, on the same line, you immediately call Where, which discards the rows you've loaded but don't need. Why not just:
_context.X.Where(...
The Where method will return multiple items, but the second line in the example doesn't appear to be iterating through each item individually. It appears to be setting the DeletedOn property for the collection itself, but the collection wouldn't have such a property. It should fail right there.
You are using DateTime.Now twice in the code. Not a problem, except that this will produce ever so slightly different date values each time it is called. You should call DateTime.Now once and assign the result to a variable so that everything you use it on gets identical values.
At the point where you have "Y objY = objYs[0]", it will fail if there are no items in the Y collection for that X; you'd get an index-out-of-range exception.
So given this example, I'm not sure if anyone could speculate as to why code modeled after this example might be breaking.
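For what it's worth, a version of the example that addresses those points might look like this (just a sketch; it assumes DeletedOn is nullable and that an X may legitimately have no Ys):
DateTime deletedOn = DateTime.Now;   // one timestamp shared by both updates

// let the database do the filtering instead of pulling the whole table with ToList()
X objX = _context.X.First(x => x.DeletedOn == null);
objX.DeletedOn = deletedOn;

// guard against an X that has no child Ys
Y objY = objX.Ys.FirstOrDefault();
if (objY != null)
{
    objY.DeletedOn = deletedOn;
}

_context.SubmitChanges();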
In the LINQ2SQL data context designer, select the entity and the field where the count is stored (a denormalized figure).
Now set UpdateCheck = Never on that field.
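If you map your classes by hand rather than through the designer, the equivalent is the UpdateCheck setting on the column attribute (hypothetical mapping details):
// the denormalized count column maintained by the trigger
[Column(Name = "CountOfX", UpdateCheck = UpdateCheck.Never)]
public int CountOfX { get; set; }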
I had this kind of issue. I was debugging, stepping through single lines at a time, and it turned out another process was modifying the record.
My manual debugging was slowing down the normal speed of the function. When I ran it all the way through to a line after the SubmitChanges call, it succeeded.
My scenario is probably less common, but the nature of this error is that the record has been superseded by another function or process. In my case it was another process.
I have not been working in SQL too long, but I thought I understood that by wrapping SQL statements inside a transaction, either all the statements complete, or none of them do. Here is my problem. I have an order object that has a line item collection. The line items are related on order.OrderId. I have verified that all the Ids are set and are correct, but when I try to save (insert) the order I get: "The INSERT statement conflicted with the FOREIGN KEY constraint "FK_OrderItemDetail_Order". The conflict occurred in database "MyData", table "dbo.Order", column 'OrderId'."
pseudo code:
create a transaction
transaction.Begin()
Insert order
Insert order.LineItems <-- error occurs here
transaction.Commit
actual code:
...
entity.Validate();
if (entity.IsValid)
{
    SetChangedProperties(entity);
    entity.Install.NagsInstallHours = entity.TotalNagsHours;
    foreach (OrderItemDetail orderItemDetail in entity.OrderItemDetailCollection)
    {
        SetChangedOrderItemDetailProperties(orderItemDetail);
    }
    ValidateRequiredProperties(entity);

    TransactionManager transactionManager = DataRepository.Provider.CreateTransaction();
    EntityState originalEntityState = entity.EntityState;
    try
    {
        entity.OrderVehicle.OrderId = entity.OrderId;
        entity.Install.OrderId = entity.OrderId;

        transactionManager.BeginTransaction();

        SaveInsuranceInformation(transactionManager, entity);
        DataRepository.OrderProvider.Save(transactionManager, entity);
        DataRepository.OrderItemDetailProvider.Save(transactionManager, entity.OrderItemDetailCollection);
        if (!entity.OrderVehicle.IsEmpty)
        {
            DataRepository.OrderVehicleProvider.Save(transactionManager, entity.OrderVehicle);
        }
        transactionManager.Commit();
    }
    catch
    {
        if (transactionManager.IsOpen)
        {
            transactionManager.Rollback();
        }
        entity.EntityState = originalEntityState;
    }
}
...
Someone suggested I need to use two transactions, one for the order and one for the line items, but I am reasonably sure that is wrong. But I've been fighting this for over a day now and I need to resolve it so I can move on, even if that means using a bad workaround. Am I maybe just doing something stupid?
I noticed that you said you were using NetTiers for your code generation.
I've used NetTiers myself and have found that deleting the foreign key constraint from your table, adding it back to the same table, and then re-running the NetTiers build scripts after making your changes in the database can help reset the data access layer. I've tried this on occasion with positive results.
Good luck with your issue.
Without seeing your code, it is hard to say what the problem is. It could be any number of things, but look at these:
This is obvious, but are your two insert commands on the same connection that owns the transaction (and does the connection stay open the whole time)?
Are you retrieving the ID related to the constraint after the first insert and writing it back into the data for the second insert before executing that command?
The constraint could be set up wrong in the DB.
You definitely do not want to use two transactions.
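To illustrate the point about writing the generated ID back before the second insert, in plain ADO.NET terms it would look something like this (a sketch; connectionString and the non-key column names are made up, and it requires System.Data.SqlClient):
// both inserts share one open connection and one transaction
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        // insert the parent and capture the generated OrderId
        SqlCommand insertOrder = new SqlCommand(
            "INSERT INTO [Order] (CustomerName) OUTPUT INSERTED.OrderId VALUES (@name)",
            connection, transaction);
        insertOrder.Parameters.AddWithValue("@name", "test");
        int orderId = (int)insertOrder.ExecuteScalar();

        // write that id into the child row before inserting it
        SqlCommand insertDetail = new SqlCommand(
            "INSERT INTO OrderItemDetail (OrderId, Description) VALUES (@orderId, @desc)",
            connection, transaction);
        insertDetail.Parameters.AddWithValue("@orderId", orderId);
        insertDetail.Parameters.AddWithValue("@desc", "line item");
        insertDetail.ExecuteNonQuery();

        transaction.Commit();
    }
}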
Looks like your insert statement for the line items is not correctly setting the order's ID, which should come from the "Insert order" step. Have you looked at (and tested) the individual SQL statements?
I do not think your problem has anything to do with transaction control.
I have no experience with this, but it looks like you might have specified a key value that is not available in the parent table. Sorry, but I cannot help you more than this.
The problem is how you handle the error. When an error occurs, a transaction is not automatically rolled back. You can certainly (and probably should) choose to do that, but depending on your app or where you are, you may still want to commit it, and in this case that's exactly what you're doing. You need to wrap some error-handling code around that spot to roll the transaction back when the error occurs.
The error looks like the LineItems are not being given the proper FK OrderId that was autogenerated by the insert of the Order into the Order table. You say you have checked the Ids, but have you checked the FKs in the order details as well?
After calling method
_membershipProvider.DeleteUser(user.UserName, false);
where the second parameter (false) is deleteAllRelatedData, orphaned entries are left in the database (in the aspnet_Users table and probably more). What is the best practice for cleaning these up?
EDIT: The user management code has already been changed to use true as the second param, but it has left a db full of junk entries. I'm wondering how best to clean these up. I'm currently looking at the stored procedure provided with the database, dbo.aspnet_Users_DeleteUser, puzzling over the parameter @TablesToDeleteFrom int and wondering exactly what it means. Looks like some sort of bitmask.
I guess you have a choice of a cascade delete, or writing something that runs periodically as a job.
Or better yet, do as stated in Bob's comment!
Update: since it sounds like you have now stopped this from occurring, just write a SQL script to detect the orphaned records, then turn it into a DELETE statement.
If you want to leave no orphan entries then you should set the second parameter (deleteAllRelatedData) to true. It will remove all related and child data.
http://msdn.microsoft.com/en-us/library/system.web.security.membershipprovider.deleteuser.aspx
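In other words, the call from the question becomes:
// true for deleteAllRelatedData removes the user's related data (roles, profile, personalization) as well
_membershipProvider.DeleteUser(user.UserName, true);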