Hello community, I am trying to update the child data within a main class. The main class is Cotizacion (quotation) and within it is the Carrito (cart) class, which holds references to two other classes: ArticuloCarrito (cart article) and Producto (product).
I can update the properties of the Cotizacion class, and I managed to insert a new article into the quotation, but when inserting the new product the cart ID becomes null. In the cart article table of the database I store the product ID and the cart ID.
How can I update the products within the quotation?
How can I prevent the CarritoId from becoming null?
This is my code:
[HttpPut]
public async Task<ActionResult> Put(Cotizacion cotizacion)
{
    var cotizacionoriginal = context.Cotizaciones
        .Where(x => x.CotizacionId == cotizacion.CotizacionId)
        .FirstOrDefault();
    if (cotizacionoriginal != null)
    {
        context.Entry(cotizacionoriginal).State = EntityState.Detached;
    }

    context.Entry(cotizacion).State = EntityState.Modified;

    foreach (ArticuloCarrito articulo in cotizacion.Carrito.Articulos)
    {
        articulo.ProductoId = articulo.Producto.ProductoId;
        articulo.Producto = null;
        //articulo.Carrito.CarritoId = cotizacion.Carrito.CarritoId;
        context.ArticuloCarritos.Update(articulo);
        context.SaveChanges();
    }

    await context.SaveChangesAsync();
    return NoContent();
}
This is what was inserted in the cart article table: the CarritoId became null, and I don't want that to happen. How can I prevent the CarritoId from becoming null?
Ultimately, by not passing entities between the controller, the view, and back. It may look like you are sending an entity to the view and getting that entity back in the Put, but what is actually sent is a serialized JSON block cast to look like an entity. Doing this leads to issues like this one, where your model becomes a serialization of an entity graph that may have missing pieces and gets mangled when you attach it to a DbContext to track as an entity. That is likely what you're seeing: either the data that was sent to the view was incomplete, or by attaching the top level you're expecting all the related entities to be attached and tracked, which is often not the case. Altering FK references directly can also lead to unexpected results compared to updating the available navigation properties.

It also makes your application vulnerable to tampering, since a malicious client or browser add-in can modify the data being sent to your server to adjust values your UI doesn't even present; modern browser debug tools make this a pretty simple task.
Ideally the controller and view should communicate with view models based on loaded entities. This can also streamline the amount of data being shuttled around to reduce the payload size.
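As a rough sketch of what a view model pair for this screen might look like (the property names are assumptions based on your entities, not a prescription):

using System.Collections.Generic;

// Hypothetical view models carrying only what the quotation screen needs.
public class CotizacionViewModel
{
    public int CotizacionId { get; set; }
    public List<ArticuloViewModel> Articulos { get; set; } = new List<ArticuloViewModel>();
}

public class ArticuloViewModel
{
    public int ArticuloId { get; set; }
    public int ProductoId { get; set; }   // just the ID; the server resolves the Producto entity
}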
To help mitigate this, approach it as if the method did not receive an entity back. You are loading the entity again anyway, but in your case you do nothing with it except detach it again so you can attach your passed-in JSON object. For example, this code:
var cotizacionoriginal = context.Cotizaciones
    .Where(x => x.CotizacionId == cotizacion.CotizacionId)
    .FirstOrDefault();
if (cotizacionoriginal != null)
{
    context.Entry(cotizacionoriginal).State = EntityState.Detached;
}
... does absolutely nothing for you. It says "Attempt to find an object with this ID in the local cache or database, and if you find it, stop tracking it."
You're effectively going to the database for no reason. This doesn't even assert that the row exists because you're using an "OrDefault" variation.
For a start, this should read more like:
var cotizacionoriginal = context.Cotizaciones
.Where(x => x.CotizacionId == cotizacion.CotizacionId)
.Single();
This says "Load the one row from the local cache or database that has this CotizacionId". This asserts that there is actually a matching row in the database. If the record passed into the Put has an invalid ID this will throw an exception. We don't want to detach it.
One further detail. Since we are going to want to manipulate child collections and references in this object, we should eager-load them:
var cotizacionoriginal = context.Cotizaciones
    .Include(x => x.Carrito)
        .ThenInclude(c => c.Articulos)
            .ThenInclude(a => a.Producto)
    .Where(x => x.CotizacionId == cotizacion.CotizacionId)
    .Single();
With larger object graphs this can get rather wordy, as you have to drill down each chain of related entities. A better approach is, rather than updating a "whole" object graph at once, to break it up into smaller, legal operations where one entity relationship is dealt with at a time.
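For example, a dedicated action that only swaps the product on a single cart article keeps the graph small; the route and parameter names below are illustrative, not part of your current API:

// Hypothetical, narrowly-scoped endpoint: change the product on one cart article.
[HttpPut("articulos/{articuloId}/producto/{productoId}")]
public async Task<ActionResult> PutArticuloProducto(int articuloId, int productoId)
{
    var articulo = await context.ArticuloCarritos
        .Include(a => a.Producto)
        .SingleAsync(a => a.ArticuloId == articuloId);   // throws if it doesn't exist

    var producto = await context.Productos
        .SingleAsync(p => p.ProductoId == productoId);

    articulo.Producto = producto;   // the tracked row's CarritoId is never touched
    await context.SaveChangesAsync();
    return NoContent();
}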
The next step would be to validate the values passed into your Put. Does the object appear to be complete, or is anything out of place? At a minimum we should check the current session user ID to verify that they have access to this loaded Cotizacion row and have permission to edit it. If not, throw an exception. The web site's exception handling should ensure that any serious exception, like attempting to access rows that don't exist or that the user doesn't have permission for, is logged for admin review, and the current session should be ended. Either someone is tampering with the system or you have a bug that could result in data corruption; either way it should be detected, reported, and fixed, with the current session terminated as a precaution.
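A minimal sketch of that ownership check, assuming the Cotizacion row carries a UsuarioId column and the current user ID is available from the session (both names are assumptions):

// Hypothetical ownership/permission check; UsuarioId and currentUserId are assumed names.
if (cotizacionoriginal.UsuarioId != currentUserId)
    throw new UnauthorizedAccessException(
        $"User {currentUserId} attempted to edit Cotizacion {cotizacionoriginal.CotizacionId}.");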
The last step would be to go through the passed in object graph and alter your "original" data to match. The important thing here is again, you cannot trust/treat the passed in parameters as "entities", only deserialized data that looks like an entity. So, if the Product changed in one of the referenced items, we will fetch a reference and update it.
foreach (ArticuloCarrito articulo in cotizacion.Carrito.Articulos)
{
    if (articulo.ArticuloId == 0)
    {
        // TODO: Handle adding a new article if that is supported.
    }
    else
    {
        var existingArticulo = cotizacionoriginal.Carrito.Articulos
            .Single(x => x.ArticuloId == articulo.ArticuloId);
        if (existingArticulo.ProductoId != articulo.Producto.ProductoId)
        {
            var producto = context.Productos.Single(x => x.ProductoId == articulo.Producto.ProductoId);
            existingArticulo.Producto = producto;
        }
    }
}
await context.SaveChangesAsync();
Above, we first check the Articulo to see whether it is a newly added row (no ID yet). If we do have an ID, we check the existing Carrito's articles for a matching item; if one is not found, this results in an exception. Once we have it, we check whether the Product ID has changed. If it changed, we don't use the "Producto" passed in, since that is a deserialized JSON object; instead we go to the context to load a reference and set it on our existing row.
context.SaveChanges should only be called once per operation rather than inside the loop.
When copying across values from a detached, deserialized entity to a tracked entity, you can use:
context.Entry(existingArticulo).CurrentValues.SetValues(articulo);
However, this should only be done after the values in the passed-in object have been validated. Note that SetValues copies simple property values (including exposed FK columns) but does not update navigation/object references.
Hopefully this gives you some ideas on things to try to streamline the update process.
Related
I have a prepare function for saving records to the database that is probably a little overkill, but the idea was that I could add on to it at a later date.
public void Prepare<T>(T model) where T : class
{
    var key = ReflectionHelper.GetAttribute<T, KeyAttribute>();
    if (null == key) { return; }

    SetContext<T>();
    var set = DbManager.Context.Set<T>();

    object id = key.GetValue(model);
    object def = key.PropertyType.GetDefaultValue();
    if (id == def) { set.Add(model); }
}
The current implementation just checks whether the primary key of the record is a default value (typically 0) and, if so, adds it to the DbSet. This works for the 90% of cases where tables are built with an auto-incrementing key; however, I'm running into an issue with a table whose key is generated manually for each record, which means it is set before the record is inserted into the DB.
This is obviously not ideal with the above function, which fails the check and never actually saves the record to the DB. I know Entity Framework must have some internal test for whether a record is new, to determine whether it needs to do an UPDATE or an INSERT, and AFAIK it doesn't rely on the ID being set beforehand, or I'd be running into the same issue with EF's code that I am with the above function. Is there a way I can pull the result of that check instead of doing it the way I currently am?
This is where Generic "one size fits all" approaches start to fall down. They work efficiently so long as the implementations are identical. As soon as you have an exceptional case it means introducing complexity.
In situations where the key cannot reflect whether an entity is new or existing (I.e. 0 / null = new) then the typical approach would be to attempt to load the entity to perform the Update, otherwise insert.
var existingEntity = set.SingleOrDefault(x => x.Id == id);
if (existingEntity != null)
{
    Mapper.Map(model, existingEntity);
}
else
{
    existingEntity = set.Add(model);
}
The issue that can come up with "upsert" implementations is that the application can start accidentally inserting records that you expect to already exist and should probably have handled when they don't (stale data, tampering, etc.). My recommendation is to be explicit, with dedicated Add/Insert vs. Update method call chains.
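For example (a rough sketch; Person, PersonDto, and the property names are placeholders, not from the question):

// Separate, explicit operations instead of a generic upsert.
public void AddPerson(PersonDto dto)
{
    var person = new Person { Name = dto.Name };
    DbManager.Context.Set<Person>().Add(person);
    DbManager.Context.SaveChanges();
}

public void UpdatePerson(PersonDto dto)
{
    // Single() asserts the row exists; a missing row is an error, not a silent insert.
    var person = DbManager.Context.Set<Person>().Single(p => p.Id == dto.Id);
    person.Name = dto.Name;
    DbManager.Context.SaveChanges();
}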
DbSet.Update can also manage update-or-insert scenarios, but it is less optimal than using EF's change tracker because it generates an UPDATE statement for all columns whether they changed or not. If you manually copy across the columns, or use Automapper's Map method to copy the values, the change tracker will only generate a statement for the columns that actually changed. This also gives you control over ensuring that in update scenarios only allowed values can be changed. For instance, if the UI is only expected to change some fields, then even in the worst case where full entities are passed back from the client, the other values in the model cannot be tampered with, because your manual copy or Automapper mapping only transfers the expected fields.
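To illustrate the difference, here are the two alternatives side by side (placeholder names again; pick one, they are not meant to run together):

// Option 1: DbSet.Update marks every column as modified, so the UPDATE sets all of them.
context.People.Update(detachedPerson);
context.SaveChanges();

// Option 2 (preferred): copy onto a tracked entity and let the change tracker diff the values.
// Only columns that actually changed appear in the UPDATE, and fields you choose not to copy
// (e.g. CreatedOn) cannot be tampered with from the client.
var tracked = context.People.Single(p => p.Id == detachedPerson.Id);
tracked.Name = detachedPerson.Name;
context.SaveChanges();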
How would you upsert without a select? The upsert receives a collection of DTOs, some of which may not exist in the database, so you cannot simply use AttachRange, for example.
One theoretical way is to load the existing data partially with a projection like dbContext.People.Where(x => requestedIds.Contains(x.Id)).Select(x => new Person { Id = x.Id, State = x.State }).ToList(), which loads only part of the entity and skips the heavy columns. But if you then update one of the returned items, the change will not be saved, because the projected new Person instances are not being tracked, and you also cannot set dbContext.Entry<Person>(person).State = EntityState.Modified, because that throws an error telling you EF Core is already tracking it.
So what can you do?
One way would be to detach everything from the ChangeTracker and then change the state. That will perform the update, but not on just one field, even if you write dbContext.Entry<Person>(person).Property(x => x.State).IsModified = true: it will overwrite every field you haven't read from the database with its default value and make a mess of the database.
The other way would be to read the ChangeTracker entries and update those, but that also overwrites everything, because it considers every field changed.
So, technically, I don't know how EF Core can produce the following SQL
update People set state = 'Approved' where state != 'Approved'
without updating anything else, and without completely loading the Person rows first.
The reason for not loading the full entities is that you may need to update something like 14,000 records, and those records are heavy to load because they contain byte[] columns with images stored in them, for example.
By the way, the lack of friendly documentation on EF Core is a disaster compared to Laravel. It recently cost us a huge amount of data.
Also, examples like the code below will NOT work for us, because they update one field of a record that is known to exist in the database, whereas we are trying to upsert a collection in which some of the DTOs may not exist in the database.
try
{
    using (var db = new dbContext())
    {
        // Create new stub with correct id and attach to context.
        var entity = new myEntity { PageID = pageid };
        db.Pages.Attach(entity);

        // Now the entity is being tracked by EF, update required properties.
        entity.Title = "new title";
        entity.Url = "new-url";

        // EF knows only to update the properties specified above.
        db.SaveChanges();
    }
}
catch (DataException)
{
    // process exception
}
Edit: the EF Core version used is 3.1.9.
Fantastic, I found the solution (you also need to take care with your unit tests).
Entity Framework is actually working fine; it was just a lack of experience, which I'm documenting here in case anyone else runs into the same issue.
Consider a Person entity that has a profile picture saved as a blob on it. If you do something like the following for, let's say, 20k people, the query becomes slow even when you have the right indexes on the table.
You run this query in order to update those entities based on a request:
var entityIdsToUpdate = request.PeopleDtos.Select(p => p.Id);
var people = dbContext.People.Where(x => entityIdsToUpdate.Contains(x.Id)).ToList();
This is fine and works perfectly: you get the People collection and can then update the records based on the given data.
In this kind of update you normally will not need to update the images (and if you do, you will need to increase the command timeout on your client), but in our case we did not need to update the images.
So the above code changes to this:
var entityIdsToUpdate = request.PeopleDtos.Select(p => p.Id);
var people = dbContext.People
.Select(p => new Person {
Id = p.Id,
Firstname = p.Firstname,
Lastname = p.Lastname,
//But no images to load
})
.Where(p => entityIdsToUpdate.Contains(p.Id)).ToList();
But with this approach, Entity Framework loses track of your entities.
So you need to attach them, and I will also tell you how NOT to attach them.
This is the correct way for a collection:
dbContext.People.AttachRange(people); //These are the people you've already queried
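From here the update itself is ordinary; a small sketch, assuming the DTOs carry the new State value (the names follow the example above):

// The attached entities are tracked as Unchanged, so only the properties you actually
// modify end up in the UPDATE statement; the unloaded image column is never written.
foreach (var person in people)
{
    var dto = request.PeopleDtos.Single(d => d.Id == person.Id);
    person.State = dto.State;
}
dbContext.SaveChanges();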
Now DO NOT do the following. You may be tempted to, because the approach above gives you an error from Entity Framework saying the entity is already being tracked; trust it, because it already is. I will explain after the code.
//Do not do this
foreach (var entry in dbContext.ChangeTracker.Entries())
{
    entry.State = EntityState.Detached;
}

//and then on updating a record you may write the following to attach it back
dbContext.Entry(person).State = EntityState.Modified;
The above code causes Entity Framework to stop following changes on the entities, and with the last line you tell it that literally everything, edited or not, has changed. That will make you LOSE your unedited properties, like the image.
Note: here is a mistake that can mess up even the correct approach.
Since you are not loading the whole entity, you may assume it is still fine to assign values to the unloaded properties, even if the value is no different from the one in the database. This causes Entity Framework to assume something changed (the tracked snapshot holds the default value, not the database value), and if you keep a ModifiedOn column on your records it will be updated for no good reason.
And now about testing:
While testing, you may fetch something from the database, create a DTO from it, and pass the DTO to your system under test using the same dbContext. The Attach call will then throw an error saying the entity is already being tracked, because of that earlier query in your test method. The best approach is to create a new dbContext for each process and dispose of it once you are done with it.
Also, in testing it may happen that with the same dbContext you update an entity and after the test you want to fetch it from the database. Note that what is returned to you is the one cached by Entity Framework, and if you originally fetched it only partially (with a Select(x => ...)), some fields will come back null or with default values.
In that case you should call DbContext.Entry(YOUR_ENTRY).Reload().
This is a fairly complete answer; it may not all be directly related to the question, but if you don't notice the things mentioned above they can cause a disaster.
How do I update tables with Includes in Entity Framework Core? The following seems to update the CustomerTransaction, but not the ProductId.
Both Customer and Product have attributes which change, but CustomerId and ProductId stay the same.
public void ModifyTransaction(IEnumerable<CustomerTransaction> customerTransactionList)
{
    foreach (var modifyItem in customerTransactionList)
    {
        var existingItem = _dbContext.Set<CustomerTransaction>()
            .Include(x => x.Product)
            .FirstOrDefault(x => x.CustomerTransactionId == modifyItem.CustomerTransactionId);
        if (existingItem == null)
        {
            _dbContext.Add(existingItem);
        }
        else
        {
            _dbContext.Entry(existingItem).State = EntityState.Modified;
        }
    }
    _dbContext.SaveChanges();
}
Using .NET Core 3.1.
This code is prone to errors that will creep in depending on the scenario. When passing entity classes between client and server, it is important to understand that the objects passed back to the server are merely serialized copies, not tracked entities. Because of the way serialization works, where a DbContext fetching two Transaction records that both reference the Product with ID 14 would give both the same entity instance, that same pair of transactions, once deserialized, will hold two separate object references, each with its own Product with ID 14.
Given your example, at a minimum you would need to do something like:
foreach (var modifyItem in customerTransactionList)
{
    var existingItem = _dbContext.CustomerTransactions
        .Include(x => x.Product)
        .SingleOrDefault(x => x.CustomerTransactionId == modifyItem.CustomerTransactionId);

    // Check whether the DbContext is already tracking this Product.
    var trackedProduct = _dbContext.Products.Local
        .SingleOrDefault(x => x.ProductId == modifyItem.Product.ProductId);
    if (trackedProduct != null)
        modifyItem.Product = trackedProduct;
    else
        _dbContext.Products.Attach(modifyItem.Product);

    if (existingItem == null)
        _dbContext.Add(modifyItem);
    else
    {
        _dbContext.Entry(existingItem).CurrentValues.SetValues(modifyItem);
        if (existingItem.Product.ProductId != modifyItem.Product.ProductId)
            existingItem.Product = modifyItem.Product; // tracked reference.
    }
}
_dbContext.SaveChanges();
What this amounts to is checking for an existing transaction, like you were doing. However, we also have to check for any cached copies of related entities (Product) that the DbContext may be tracking. If we don't, and we try to add a transaction that has a Product with an ID matching one the context is already tracking, we will either get a PK violation or a duplicate Product record created with a new Product ID, depending on how your Product PK is configured (i.e. a DatabaseGenerated.Identity column). We update the Product reference with the local cache instance if one is found, otherwise we tell the DbContext to start tracking the product instance. This assumes that this method cannot accept new products as part of this call, and that the Product record received should legally exist; handling new products and validating the product passed in would require additional conditional code and DB checks.

From there we determine whether the Transaction is an update or an insert. In the case of an update we can use CurrentValues.SetValues to copy across values (as above), or Automapper, or manually copy the relevant values across. Assuming it's possible that a transaction could change the product, we also check the Product ID against the modified one, and if it's different we update the Product reference. modifyItem.Product at this point will be pointing at the DbContext-tracked reference.
Updating entities with methods like this can be quite involved, since you have to account not only for detecting new vs. existing records but also for potentially updating references to entities that the DbContext is already tracking. My strong recommendation is to adopt view models for explicit Add vs. Update operations and to deal with operations as atomically as possible. I.e. rather than passing a collection of transactions that might contain updates or inserts to work through, make more granular calls for each single type of change (simpler, faster operations and fewer places for mistakes to occur).
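As a sketch of what a more granular call might look like (the method name is illustrative; the entity and property names come from the code above):

// Hypothetical narrowly-scoped operation: change the product on a single transaction.
public void ChangeTransactionProduct(int customerTransactionId, int productId)
{
    var transaction = _dbContext.CustomerTransactions
        .Include(t => t.Product)
        .Single(t => t.CustomerTransactionId == customerTransactionId); // throws if missing

    if (transaction.Product != null && transaction.Product.ProductId == productId)
        return; // nothing to change

    transaction.Product = _dbContext.Products.Single(p => p.ProductId == productId);
    _dbContext.SaveChanges();
}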
I'm currently developing an MVVM app using a model-designer-based code-first design. So far I've been able to get all the basic CRUD operations working on single entities; however, I can't seem to change the properties of objects in child collections at all using SaveChanges(). I've used a SQL profiler and can see that it attempts to UPDATE with the old value, and one step after SaveChanges() my changes get reverted to their old values!
Some other info:
My dbContext is loaded using DI from PRISM/Unity and kept as a unit of work for a "details" page the user will edit and then save.
My WPF UI is bound correctly and can modify the changes on an object-level.
I can successfully use Add() to insert entities.
I've verified the entity state of the entity in the child collection is Modified, both by setting it and simply by debugging.
I've attempted to manually Attach() and AddOrUpdate() on any or all items.
I've turned off all Lazy Loading and instead manually included all collections.
I've manually set the Entry() properties of IsModified and CurrentValue to their desired settings.
I've tried binding my VM properties to their data by either
dbContext.Classes.Local.ToBindingList() or new ObservableCollection<Class>(Entity.Property).
Is there anything that I could be missing here? Here's one attempt I've tried:
// Assigning an Index object that contains relationships
Index = await _model.PersonIndexes.Include(i => i.PersonAddresses).FirstAsync(i => i.Id == IndexId);
// Grabbing a filtered set of Addresses based on their data
var query = Index.PersonAddresses.Where(e => e.Condition == true );
Addresses = new ObservableCollection<PersonAddress>(await query.ToListAsync());
// Ensure state is tracked (I've tried with and without all combinations of these)
foreach (PersonAddress addr in Addresses)
{
    //_model.PersonAddresses.AddOrUpdate(a => a.Id, addr);
    if (addr.PendingRemoval)
    {
        _model.PersonAddresses.Attach(addr);
        _model.Entry(addr).Property(a => a.PendingRemoval).CurrentValue = true;
        _model.Entry(addr).Property(a => a.PendingRemoval).IsModified = true;
    }
}

// Saving (after this line the properties of the entities in the related collection get reverted
// to their old values - i.e. if I change a phone number in the UI, the save below will replace
// the new values with the previous number.)
await _model.SaveChangesAsync();
So it turns out this was an unfortunate error of configuration and a bad coincidence:
1) Check your models and server schema to ensure they are in sync (especially if using generated code from EF). In my case they were not, which led to...
2) SaveChanges() was overwriting the properties in question because I had not noticed they were incorrectly set with their StoreGeneratedPattern set to Computed in my model code. This caused the changed values to be ignored and overwritten (see the sketch after this list).
3) The test case surrounding this had only exercised the same properties that were marked incorrectly, making it appear that all changes were being reverted, which caused the confusion about where the problem code actually was. If another column had been modified and watched along with the others, this might have been caught sooner.
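For reference, a minimal sketch of the code-first equivalent of that setting; PersonAddress and PendingRemoval are taken from the snippet above, and whether that was the offending column is an assumption on my part:

using System.ComponentModel.DataAnnotations.Schema;

public class PersonAddress
{
    public int Id { get; set; }

    // With DatabaseGeneratedOption.Computed the store owns the value: EF excludes the column
    // from INSERT/UPDATE and re-reads it after SaveChanges(), which looks like the change
    // being "reverted". A user-editable flag should be None (the default).
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public bool PendingRemoval { get; set; }
}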
(NOTE: This is not a duplicate of this question even though it has the same exception.)
I have a poor man's transaction in place where the strategy is:
Insert a parent and child record.
Perform a long-running operation.
If long-running operation fails, go delete the previously-inserted parent and child records.
When I attempt step 3, I get the following message:
The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted.
I understand generally what this means, but I thought I was playing by the rules and no matter how hard I try to play by the rules, I'm unsure why I'm getting this message.
We use self-tracking entities and my code is effectively this:
var parent = new Parent(1,2,3);
var child = new Child(4,5,6);
parent.Children.Add(child);

MyContext.Parents.ApplyChanges(parent);
MyContext.SaveChanges(SaveOptions.AcceptAllChangesAfterSave);

// At this point, inserts were successful and entities are in an Unchanged state.
// Also at this point, I see that parent.Children.Count == 1

var shouldDeleteEntities = false;

try
{
    // This is not database-related. This process does some
    // encryption/decryption and uploads some files up to
    // Azure blob storage. It doesn't touch the DB.
    SomeLongRunningProcess();
}
catch
{
    // Oops, something bad happened. Let's delete the entities!
    shouldDeleteEntities = true;
}

// At this point, both entities are in an Unchanged state, child still
// appears in parent.Children, nothing is wrong that I can see.
parent.MarkAsDeleted();
child.MarkAsDeleted();

// I've tried MyContext.ApplyChanges here for both entities, no change.

// At this point, everything appears to be in the state that
// they're supposed to be!
try
{
    MyContext.SaveChanges(SaveOptions.AcceptAllChangesAfterSave);
}
catch
{
    // This exception was thrown and I can't figure out why!
}
What's wrong with this logic? Why am I not able to simply delete these two records? I've tried calling MyContext.ApplyChanges after I call MarkAsDeleted. I've tried all sorts of things and no matter what, no matter how hard I try to tell the Context that I want both of them deleted, it keeps throwing this exception.
@Slauma provided this answer in the comments above but asked me to post it as an answer.
The problem is that there is effectively a "bug" in the Self-Tracking Entities templates for Entity Framework (something Microsoft no longer recommends you use). A blog post specifically on this topic can be found here.
Specifically, the problem is that the context's ObjectStateManager gets out of sync with the (attached) entities' ChangeTracker.State, and you end up with objects where entity.ChangeTracker.State == ObjectState.Deleted while the context's ObjectStateManager thinks the state is EntityState.Unchanged. These two are clearly very different. So this fix looks for any object attached to the context as EntityState.Unchanged, but digs deeper and also checks each object's ChangeTracker.State for ObjectState.Deleted in order to fix things up.
An easy and, for us, thoroughly functional workaround for this problem can be made in the context's T4 template by replacing the #region Handle Initial Entity State block with the following code:
#region Handle Initial Entity State

var existingEntities = context
    .ObjectStateManager
    .GetObjectStateEntries(System.Data.EntityState.Unchanged)
    .Select(x => x.Entity as IObjectWithChangeTracker)
    .Where(x => x != null);

var deletes = entityIndex.AllEntities
    .Where(x => x.ChangeTracker.State == ObjectState.Deleted)
    .Union(existingEntities
        .Where(x => x.ChangeTracker.State == ObjectState.Deleted));

var notDeleted = entityIndex.AllEntities
    .Where(x => x.ChangeTracker.State != ObjectState.Deleted)
    .Union(existingEntities
        .Where(x => x.ChangeTracker.State != ObjectState.Deleted));

foreach (IObjectWithChangeTracker changedEntity in deletes)
{
    HandleDeletedEntity(context, entityIndex, allRelationships, changedEntity);
}

foreach (IObjectWithChangeTracker changedEntity in notDeleted)
{
    HandleEntity(context, entityIndex, allRelationships, changedEntity);
}

#endregion