Entity Framework updating many-to-many - C#

I am trying to find a suitable way of updating a many-to-many relationship, but I am running into some issues.
The application is an ASP.NET MVC app with Simple Injector (set up per context).
I have an entity People which has an IEnumerable<Team>, and an entity Team which has an IEnumerable<People>.
The People entity has some other fields like Description, Email, etc., and in its View there are checkboxes so the user can choose the Teams.
I searched for the best approach to updating a many-to-many relationship, and everything I found amounted to deleting all the rows in the join table and then adding the Teams again.
Below is what I am trying to do, but I am getting an error saying the PK already exists. I know it is happening because I first load the People entity with the Find method (to remove its list of Teams inside a foreach) and then try to Attach (which is when the error happens) the modified object to set its State to Modified.
public override void Modify(People obj)
{
    var ppl = SearchById(obj.Id);
    if (ppl.Teams.Count > 0)
    {
        foreach (var team in ppl.Teams.ToList())
        {
            ppl.Teams.Remove(team);
        }
    }

    var entry = lpcContext.Entry(obj);
    if (lpcContext.Entry(obj).State == EntityState.Detached)
        dbSet.Attach(obj);
    entry.State = EntityState.Modified;
}
To clear some things up: I am using the Unit of Work pattern, so I call SaveChanges later.
Is there any other approach, or do I have to remove the Teams one by one, call SaveChanges, and after that update the object and call SaveChanges again?

Unfortunately, working with detached entities isn't that straightforward in EF (yet). Attach() in EF works for connected entities only. That means if you load an object from the DB and pass it on to a view (or page in ASP.NET), EF will no longer be tracking that object when you read it back from that view/page. If you now try to use Attach(), you will get an error that the key already exists in the DbContext. To work around this, you need to find the tracked entry and apply the changes to it using SetValues(). Something like this:
public virtual void Update(T entity)
{
    DbEntityEntry dbEntityEntry = DbContext.Entry(entity);
    if (dbEntityEntry.State == EntityState.Detached)
    {
        var pkey = _dbset.Create().GetType().GetProperty("Id").GetValue(entity); // assuming Id is the key column
        var set = DbContext.Set<T>();
        T attachedEntity = set.Find(pkey);
        if (attachedEntity != null)
        {
            var attachedEntry = DbContext.Entry(attachedEntity);
            attachedEntry.CurrentValues.SetValues(entity);
        }
    }
}
Please note that this will ignore any nested objects. Hence, you should make a DB trip and compare the object returned from the DB to decide whether you should invoke Add, Update or Delete on each child object. This is the best workaround I could find when working with disconnected objects in EF. I guess NHibernate doesn't have this bug. The last I read about this, Microsoft was going to work on it after EF 6.x, so we'll have to wait for it, I guess. Please go through the article below to understand the issue (and possible solutions) at length:
http://blog.maskalik.com/entity-framework/2013/12/23/entity-framework-updating-database-from-detached-objects/
To talk about your specific scenario: you should make a DB hit and find out whether any new Teams were selected or any existing Team was dropped, and call add or delete as appropriate by comparing the Teams collection of the People object returned by the DB against the People object returned from the view/page. To update the People object itself, you can use the Update() given above.
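A rough sketch of that comparison, using the names from the question (SearchById, lpcContext, Teams); it assumes obj.Teams carries the Teams checked in the view (at least their Ids) and that the context exposes a Teams set:
public override void Modify(People obj)
{
    // Load the tracked entity; its Teams collection is needed so EF can detect association changes.
    var ppl = SearchById(obj.Id);

    // Copy the scalar properties (Description, Email, ...) onto the tracked instance.
    lpcContext.Entry(ppl).CurrentValues.SetValues(obj);

    var selectedIds = obj.Teams.Select(t => t.Id).ToList();

    // Remove associations that were unchecked in the view.
    foreach (var team in ppl.Teams.Where(t => !selectedIds.Contains(t.Id)).ToList())
        ppl.Teams.Remove(team);

    // Add associations that were newly checked, using Team instances loaded from the
    // context so EF does not try to insert new Team rows.
    var existingIds = ppl.Teams.Select(t => t.Id).ToList();
    var teamsToAdd = lpcContext.Teams
        .Where(t => selectedIds.Contains(t.Id) && !existingIds.Contains(t.Id))
        .ToList();
    foreach (var team in teamsToAdd)
        ppl.Teams.Add(team);

    // SaveChanges is called later by the Unit of Work.
}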

Related

Entity Framework Instance tracking error with mapping sub-objects - is there an elegant solution?

Some 2+ years ago I asked this question, which was kindly solved by Steve Py.
I am having a similar but different problem now when mapping with sub-objects. I have had this issue a few times and worked around it, but facing doing so again, I can't help thinking there must be a more elegant solution. I am coding a membership system in Blazor WASM and want to update membership details via a web API. All very normal.
I have a library function to update the membership:
public async Task<MembershipLTDTO> UpdateMembershipAsync(APDbContext context, MembershipLTDTO sentmembership)
{
    Membership? foundmembership = context.Memberships.Where(x => x.Id == sentmembership.Id)
        .Include(x => x.MembershipTypes)
        .FirstOrDefault();
    if (foundmembership == null)
    {
        return new MembershipLTDTO { Status = new InfoBool(false, "Error: Membership not found", InfoBool.ReasonCode.Not_Found) };
    }
    try
    {
        _mapper.Map(sentmembership, foundmembership, typeof(MembershipLTDTO), typeof(Membership));
        //context.Entry(foundmembership).State = EntityState.Modified; <- This was a 'try-out'
        context.Memberships.Update(foundmembership);
        await context.SaveChangesAsync();
        sentmembership.Status = new InfoBool(true, "Membership successfully updated");
        return sentmembership;
    }
    catch (Exception ex)
    {
        return new MembershipLTDTO { Status = new InfoBool(false, $"{ex.Message}", InfoBool.ReasonCode.Not_Found) };
    }
}
The Membership object is an EF DB object and references a many-to-many list of MembershipTypes:
public class Membership
{
    [Key]
    public int Id { get; set; }
    ...more stuff...
    public List<MembershipType>? MembershipTypes { get; set; } // The user's membership can be several types, e.g. Employee + Director + etc.
}
The MembershipLTDTO is a lightweight DTO with a few heavy objects removed.
Executing the code, I get an EF exception:
The instance of entity type 'MembershipType' cannot be tracked because another instance with the same key value for {'Id'} is already being tracked. When attaching existing entities, ensure that only one entity instance with a given key value is attached.
I think (from the previous question I asked some time ago) that I understand what is happening. Previously I have worked around this by having a separate function that would, in this case, update the membership types, and then stripping them out of the 'found' and 'sent' objects to allow Mapper to do the rest.
In my mapping profile I have the mappings defined as follows for these object types:
CreateMap<Membership, MembershipLTDTO>();
CreateMap<MembershipLTDTO, Membership>();
CreateMap<MembershipTypeDTO, MembershipType>();
CreateMap<MembershipType, MembershipTypeDTO>();
As I was about to go and do that very thing again, I was wondering if I am missing a trick with my use of Mapper, or Entity Framework that would allow it to happen more seamlessly?
A couple of things come to mind. The first is that the call to context.Memberships.Update(foundmembership); isn't required here, as long as you haven't disabled tracking in the DbContext. Calling SaveChanges will build an UPDATE SQL statement for whatever values changed (if any), whereas Update will attempt to overwrite the entity(ies).
The issue you are likely encountering is common when dealing with references, and I would recommend a different approach because of it. To outline this, let's look at Membership Types. These would typically be a known list that we want to associate with new and existing memberships. We would never expect to create a new membership type as part of an operation that creates or updates a membership, just add or remove associations to existing memberships.
The problem with using Automapper for this arises when we want to associate another membership type via the passed-in DTO. Say we have existing data where a membership is associated with Membership Type #1, and we want to add Membership Type #2. We load the original entity to copy values across, eager loading membership types, so we get the membership and Type #1; so far so good. However, when we call Mapper.Map() it sees a Membership Type #2 in the DTO, so it adds a new entity with ID #2 into our loaded membership's Types collection. From here, one of three things can happen:
1) The DbContext was already tracking an instance with ID #2 and will complain when Update tries to associate another entity reference with ID #2.
2) The DbContext isn't tracking an instance, and attempts to add #2 as a new entity.
2.1) The database is set up with an identity column, and the new membership type gets inserted with the next available ID (i.e. #16).
2.2) The database is not set up with an identity column, and the `SaveChanges` raises a duplicate constraint error.
The issue here is that Automapper doesn't have knowledge that any new Membership Type should be retrieved from the DbContext.
Automapper's Map method can be used to update child collections, though it should only be used for references that are actual children of the top-level entity. For instance, if you have a Customer and a collection of Contacts, then when updating the customer you may want to update, add, or remove contact detail records, because those child records are owned by, and explicitly associated with, their customer. Automapper can add to or remove from the collection, and update existing items. For references like many-to-many/many-to-one we cannot rely on that, since we want to associate existing entities, not add/remove them.
In this case, the recommendation would be to tell Automapper to ignore the Membership Types collection, then handle these afterwards.
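The profile change this implies might look something like the following sketch; it assumes the DTO and entity both expose a MembershipTypes collection, as in the question:
CreateMap<MembershipLTDTO, Membership>()
    .ForMember(dest => dest.MembershipTypes, opt => opt.Ignore());
With the collection ignored by the mapper, the Map call copies the scalar values and the association is reconciled by hand afterwards: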
_mapper.Map(sentmembership, foundmembership, typeof(MembershipLTDTO), typeof(Membership));

var membershipTypeIds = sentmembership.MembershipTypes.Select(x => x.MembershipTypeId).ToList();
var existingMembershipTypeIds = foundmembership.MembershipTypes.Select(x => x.MembershipTypeId).ToList();
var idsToAdd = membershipTypeIds.Except(existingMembershipTypeIds).ToList();
var idsToRemove = existingMembershipTypeIds.Except(membershipTypeIds).ToList();

if (idsToRemove.Any())
{
    var membershipTypesToRemove = foundmembership.MembershipTypes.Where(x => idsToRemove.Contains(x.MembershipTypeId)).ToList();
    foreach (var membershipType in membershipTypesToRemove)
        foundmembership.MembershipTypes.Remove(membershipType);
}
if (idsToAdd.Any())
{
    var membershipTypesToAdd = context.MembershipTypes.Where(x => idsToAdd.Contains(x.MembershipTypeId)).ToList();
    foundmembership.MembershipTypes.AddRange(membershipTypesToAdd); // if declared as List, otherwise foreach and add them.
}
context.SaveChanges();
For items being removed, we find those entities in the loaded data state and remove them from the collection. For new items being added, we go to the context, fetch them all, and add them to the loaded data state's collection.
Notwithstanding marking Steve Py's solution as the answer (because it is a solution that works, though not as 'elegant' as I would have liked), I was pointed in another direction by the comment from Lucian Bargaoanu, which, though a little cryptic, I found after some digging could be made to work.
To do this I had to add 'AutoMapper.Collection' and 'AutoMapper.Collection.EntityFrameworkCore' to my solution. There was a bit of jiggery-pokery around setting it up, as the linked example didn't match up with my setup. I used this in my Program.cs:
// Auto Mapper Configurations
var mappingConfig = new MapperConfiguration(mc =>
{
    mc.AddProfile(new MappingProfile());
    mc.AddCollectionMappers();
});
I also had to modify my mapping profile for the object-to-DTO mapping to this:
//Membership Types
CreateMap<MembershipTypeDTO, MembershipType>().EqualityComparison((mtdto, mt) => mtdto.Id == mt.Id);
This tells AutoMapper which fields to use for the equality comparison.
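With the collection mappers registered and the equality comparison above, the library function from the question reduces to roughly the following sketch; the Map call updates matching MembershipTypes by Id, removes unmatched ones and adds new ones, so no explicit Update() call is needed:
Membership? foundmembership = context.Memberships
    .Where(x => x.Id == sentmembership.Id)
    .Include(x => x.MembershipTypes)
    .FirstOrDefault();
if (foundmembership == null)
    return new MembershipLTDTO { Status = new InfoBool(false, "Error: Membership not found", InfoBool.ReasonCode.Not_Found) };

// AutoMapper.Collection reconciles the MembershipTypes collection in place.
_mapper.Map(sentmembership, foundmembership, typeof(MembershipLTDTO), typeof(Membership));
await context.SaveChangesAsync();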
I took out the context.Memberships.Update as recommended by Steve Py and it works.
Posted on behalf of the question asker

What's the real difference between EntityState.Deleted and Remove() method? When to use each of them?

I'm kind of confused about recognizing a disconnected scenario versus a connected scenario. I've searched the internet but couldn't find a real answer to my questions. I'm confused about the entity tracking system, connected and disconnected scenarios, when I should use the Attach method, and also about the differences between using Entry(entity).State = EntityState.Deleted and the Remove(entity) method. While searching about the latter, most of the time the two were treated as identical, but that didn't match the test that I did or what I expected.
I made a simple console app to test the differences. What it does is create a person completely outside of the context instantiation scope and then pass it to the AddPerson method, because I think this makes a disconnected scenario, right? The Remove method will complain that I haven't attached the entity first, so I think that tells us we're in a disconnected scenario, though I'm not sure.
This is the app:
class Program
{
    static void Main(string[] args)
    {
        Person person = new Person()
        {
            PersonID = 1,
            Name = "John",
            Family = "Doe"
        };

        using (var context = new MyContext())
        {
            // Why this one requires attaching but the code below doesn't
            context.Person.Attach(person);
            context.Person.Remove(person);
            context.SaveChanges();

            // This method of deleting works fine without the entity being attached
            context.Entry(person).State = EntityState.Deleted;
            context.SaveChanges();

            var people = context.Person.ToList();
            foreach (var p in people)
            {
                Console.WriteLine($"PersonID: {p.PersonID} | Name: {p.Name} | Family: {p.Family}");
            }
        }
        Console.ReadKey();
    }
}
So for the Remove method I have to Attach the entity first, otherwise it throws an exception, BUT when I use Entry(person).State = EntityState.Deleted without attaching, it works fine and deletes the person. Now why is that? Isn't this a big difference? Why isn't it mentioned anywhere? I've read some websites and some similar questions on Stack Overflow too, but this wasn't said anywhere, and for the most part these two were presumed to be the same and to do the same thing. Yes, they both delete the entity, but how can we describe what happened in this test? Isn't this a difference between the two?
I have two questions but I think they're related to each other, so I'm just going to ask both of them here:
When exactly does a disconnected scenario happen, and how can I recognize it? Does it depend on the scope of the context instantiation, or on retrieving the entity directly from the context and then modifying it (with no need to attach it), or on using an entity from outside of the context (like passing it from another scope to our context as a parameter, as I did in my test)?
Why does the Remove method require attaching but EntityState.Deleted doesn't, even though they're presumed identical? Why should I even bother to attach the entity first, when setting the state to Deleted works without attaching? When should I use each of them?
Basically, the way I assume all of this works (with my current understanding of Entity Framework, which is probably wrong) is that when you're in a disconnected scenario you have to attach your entity first; but setting the state to EntityState.Deleted doesn't need attaching, so why does the Remove method exist at all, when we could use the other way of deleting all the time?
EDIT:
Based on the second code block in the accepted answer, I wrote this test to figure out how it's working. You said that the otherPersonReference line is equivalent to having an Attach(person), but when I first attach the person and then try to use EntityState.Deleted it works too, and it deletes it, even though you said it would fail; I'm a little confused :s
class Program
{
    static void Main(string[] args)
    {
        Person person = new Person()
        {
            PersonID = 3,
            Name = "John",
            Family = "Doe"
        };

        using (var context = new MyContext())
        {
            //var pr = context.Person.Single(p => p.PersonID == 3);
            context.Person.Attach(person);
            context.Entry(person).State = EntityState.Deleted;
            context.SaveChanges();
        }
        Console.ReadKey();
    }
}
If I uncomment the pr variable line and comment out context.Person.Attach(person), then setting the EntityState to Deleted fails and throws an exception, as expected.
Setting context.Entry(person).State tells EF to start tracking the "person" instance if it isn't already tracking it. You would get an error if the DbContext was already tracking an instance for the same record.
For example, you can try the following:
var person = new Person { Id = 100 }; // assume an existing record with ID = 100
using (var context = new AppDbContext())
{
    context.Entry(person).State = EntityState.Deleted;
    context.SaveChanges();
}
This works as you expect... However, if you were to have code that did this:
var person = new Person { Id = 100 }; // assume an existing record with ID = 100
using (var context = new AppDbContext())
{
    var otherPersonReference = context.Persons.Single(x => x.Id == 100);
    context.Entry(person).State = EntityState.Deleted;
    context.SaveChanges();
}
Your attempt to use context.Entry(person).State = EntityState.Deleted; would fail because the context is now already tracking an entity with that ID. It's the same behaviour as if you were to try and call Attach(person).
When dealing with short-lived DbContexts (such as when using using() blocks) and single entity operations, it can be reasonably safe to work with detached entity references, but this will get a lot more "iffy" once you start dealing with multiple possible entity references (I.e. working with lists or objects sharing references etc.) and/or calls across a DbContext which may already be tracking entity references from previous operations / iterations.
Edit: Working with detached references can be problematic and you need to take extra care when doing so. My general recommendation is to avoid it wherever possible. The approach I recommend when dealing with entities is to never pass an entity outside of the scope of the DbContext that read it. This means leveraging a ViewModel or DTO to represent entity-sourced details outside the scope of the DbContext. A detached EF entity can certainly work, but with a DTO it is explicitly clear that the data cannot be confused with a tracked entity. When it comes to performing operations like a delete, you only really need to pass the ID.
For example, leveraging Automapper to help translate between DTOs and entities:
PersonDTO AddPerson(PersonDTO details)
{
    if (details == null)
        throw new ArgumentNullException("details");

    using (var context = new AppDbContext())
    {
        // TODO: Add validations such as verifying unique name/dob etc.
        var person = Mapper.Map<Person>(details); // Creates a new Person.
        context.Persons.Add(person);
        context.SaveChanges();
        details.PersonId = person.PersonId; // After SaveChanges we can retrieve the new row's ID.
        return details;
    }
}

PersonDTO UpdatePerson(PersonDTO details)
{
    if (details == null)
        throw new ArgumentNullException("details");

    using (var context = new AppDbContext())
    {
        var existingPerson = context.Persons.Single(x => x.PersonId == details.PersonId); // Throws if we pass an invalid PersonId.
        Mapper.Map(details, existingPerson); // Copies values from our DTO into Person. Mapping is configured to only copy across allowed values.
        context.SaveChanges();
        return Mapper.Map<PersonDTO>(existingPerson); // Return a fresh, up-to-date DTO of our data record.
    }
}

void DeletePerson(int personId)
{
    using (var context = new AppDbContext())
    {
        var existingPerson = context.Persons.SingleOrDefault(x => x.PersonId == personId);
        if (existingPerson == null)
            return; // Nothing to do.

        // TODO: Verify whether the current user should be able to delete this person or not. (I.e. based on the state of the person, is it in use, etc.)
        context.Persons.Remove(existingPerson);
        context.SaveChanges();
    }
}
In this example a Person entity never leaves the scope of a DbContext. The trouble with detached entities is that whenever an entity is passed around to other methods, those methods might assume they are working with attached, complete, or complete-able (i.e. through lazy loading) entities. Was the entity loaded from a DbContext that is still "alive", so that if the code wants to check person.Address that data is either eager-loaded and available, or lazy-loadable? Versus #null, which could mean the person does not have an address, or that without a DbContext or lazy loading we cannot determine whether it does or not. As a general rule, if a method is written to accept an entity, it should always expect to receive a complete, or complete-able, version of that entity: not a detached "maybe complete, maybe not" instance, not a "new"ed-up instance of a class with some arbitrary values populated (rather than an entity representing a data row), and not a deserialized block of JSON coming from a web client. All of those can be typed as a Person entity, but are not a Person entity.
Edit 2: "Complete" vs. "Complete-able"
A Complete entity is an entity that has all related entities eager loaded. Any method that accepts a Person should be able to access any property, including navigation properties, and receive the true value. If the Person has an Address, then a #null address should only ever mean that person does not have an address (if that is valid), not "that person does not have an address, or it just wasn't loaded." This also goes for cases where you might have a method that accepts an entity which you haven't loaded, but want to substitute with an entity class populated with an ID and whatever data you might have on hand. That incomplete "entity" could find itself sent to other methods that expect a more complete entity. Methods should never need to guess at what they receive.
A Complete-able entity is an entity where any related entities within it can be lazy loaded if accessed. The consuming method doesn't need to determine whether properties are available or not; it can access Person.Address and will always get an Address if that person is supposed to have one, whether the caller remembered to eager load it or not.
Where methods use tightly scoped DbContexts (using() blocks), if you return an entity there is no way to guarantee further down the call chain that the entity is complete-able. Today you can ensure that all properties are eager-loaded, but tomorrow a new relationship could be added, leaving a navigation property somewhere within the object graph that nobody remembers to eager-load.
Eager loading is also expensive: to ensure an entity is "complete", everything needs to be loaded, whether the consumers ever need it or not. Lazy loading was introduced to mitigate this, but in many cases it is extremely expensive itself, leading to a LOT of chatter with the database and performance costs as the model evolves. Features like serialization (a common concern in web applications) touch every property by default, leading to numerous lazy-load calls for every entity sent.
DTOs/ViewModels are highly recommended when data needs to leave the scope of a DbContext, as they ensure only the data a consumer needs is loaded and, equally importantly, help you avoid lazy-loading pitfalls as the model evolves. Serializing a DTO rather than an entity ensures those new relationships don't come into play until the DTO is updated to actually need that data.
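As an illustration, a query can project straight into a DTO so only the needed columns are read and no entity ever leaves the DbContext; the PersonSummaryDTO type and its fields here are purely illustrative, not from the original post:
public PersonSummaryDTO GetPersonSummary(int personId)
{
    using (var context = new AppDbContext())
    {
        return context.Persons
            .Where(p => p.PersonId == personId)
            .Select(p => new PersonSummaryDTO
            {
                PersonId = p.PersonId,
                Name = p.Name,
                City = p.Address != null ? p.Address.City : null // projected in SQL, no lazy loading involved
            })
            .Single();
    }
}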

Problem with EF Core updating nested entities when using automapper

I am maintaining an application which uses EF Core to persist data to a SQL database.
I am trying to implement a new feature which requires me to retrieve an object from the database (let's pretend it's an order), manipulate it and some of the order lines attached to it, and save it back into the database. This wouldn't be a problem, but I have inherited some of this code, so I need to try to stick to the existing way of doing things.
The basic process for data access is:
UI -> API -> Service -> Repository -> DataContext
The methods in the repo follow this pattern (Though I have simplified it for the purposes of this question)
public Order GetOrder(int id)
{
    return _context.Orders.Include(o => o.OrderLines).FirstOrDefault(x => x.Id == id);
}
The service is where business logic and mapping to DTOs are applied; this is what the GetOrder method looks like:
public OrderDTO GetOrder(int id)
{
    var ord = _repo.GetOrder(id);
    return _mapper.Map<OrderDto>(ord);
}
So to retrieve and manipulate an order my code would look something like this
public void ManipulateAnOrder()
{
    // Get the order DTO from the service
    var order = _service.GetOrder(3);

    // Manipulate the order
    order.UpdatedBy = "Daneel Olivaw";
    order.OrderLines.ForEach(ol => ol.UpdatedBy = "Daneel Olivaw");

    _service.SaveOrder(order);
}
And the method in the service which allows this to be saved back to the DB would look something like this:
public void SaveOrder(OrderDTO order)
{
    // Get the original item from the database
    var original = _repo.GetOrder(order.Id);

    // Merge the original and the new DTO together
    _mapper.Map(order, original);

    _repo.Save(original);
}
Finally, the repository's Save method looks like this:
public void Save(Order order)
{
    _context.Update(order);
    _context.SaveChanges();
}
The problem I am encountering is that using this method of mapping the entities from the context into DTOs and back again causes the nested objects (in this instance the OrderLines) to be changed (or recreated) by AutoMapper in such a way that EF no longer recognises them as the entities it has just given to us.
This results in errors when updating, along the lines of:
InvalidOperationException the instance of ProductLine cannot be tracked because another instance with the same key value for {'Id'} is already being tracked.
Now to me, it's not that there is ANOTHER instance of the object being tracked, it's the same one; but I understand that the mapping process has broken that link and EF can no longer determine that they are the same object.
So I have been looking for ways to rectify this. Two approaches have jumped out at me as promising:
the answer mentioned here: EF & Automapper. Update nested collections
AutoMapper.Collection
AutoMapper.Collection seems to be the better route, but I can't find a good working example of it in use, and the implementation that I have done doesn't seem to work.
So I'm looking for advice from anyone who has used AutoMapper.Collection successfully, or who has any suggestions as to how best to approach this.
Edit: I have knocked up a quick console app as an example. Note that when I say quick I mean... horrible; there is no DI or anything like that, and I have done away with the repositories and services to keep it simple.
I have also left in a commented-out mapper profile which does work but isn't ideal... You will see what I mean when you look at it.
Repo is here https://github.com/DavidDBD/AutomapperExample
OK, after examining every scenario (and given that I did what you're trying to do in a previous project and it worked out of the box):
updating your Entity Framework Core NuGet packages to the latest stable version (3.1.8) solved the issue without modifying your code.
AutoMapper in fact "has broken that link", and the mapped entities you are trying to save are a set of new objects, not previously tracked by your DbContext. If the mapped entities were the same objects, you wouldn't have gotten this error.
In fact, it has nothing to do with AutoMapper and the mapping process, but how the DbContext is being used and how the entity states are being managed.
In your ManipulateAnOrder method after getting the mapped entities -
var order = _service.GetOrder(3);
your DbContext instance is still alive and at the repository layer it is tracking the entities you just retrieved, while you are modifying the mapped entities -
order.UpdatedBy = "Daneel Olivaw";
order.OrderLines.ForEach(ol=>ol.UpdatedBy = "Daneel Olivaw");
Then, when you are trying to save the modified entities -
_service.SaveOrder(order);
these mapped entities reach the repository layer and the DbContext tries to add them to its tracking list, but finds that it already has entities of the same type with the same IDs in the list (the previously fetched ones). EF can track only one instance of a specific type with a specific key; hence the complaint.
One way to solve this is, when fetching the Order, to tell EF not to track it, for example at your repository layer -
public Order GetOrder(int id, bool tracking = true) // optional parameter
{
    if (!tracking)
    {
        return _context.Orders.Include(o => o.OrderLines).AsNoTracking().FirstOrDefault(x => x.Id == id);
    }
    return _context.Orders.Include(o => o.OrderLines).FirstOrDefault(x => x.Id == id);
}
(or you can add a separate method for handling NoTracking calls) and then at your Service layer -
var order = _repo.GetOrder(id, false); // for this operation tracking is false
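With the order fetched untracked for this operation, the service's SaveOrder from the question can stay essentially as it was. A sketch, assuming the repository's existing Save (which calls _context.Update and SaveChanges) is kept:
public void SaveOrder(OrderDTO order)
{
    // Fetch the original without tracking so the mapped graph can be attached later.
    var original = _repo.GetOrder(order.Id, tracking: false);

    // Merge the DTO onto the untracked entity graph.
    _mapper.Map(order, original);

    // Save calls _context.Update, which attaches the graph without colliding with tracked instances.
    _repo.Save(original);
}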

Reattaching an entity graph and detecting collection changes

I'm using Entity Framework Code First and exposing the Northwind database through a WCF REST HTTP interface.
I've not exposed the OrderDetails table (order items), as it doesn't make sense to create an order and then add each required OrderDetail separately through another service. To my mind it needs to be an atomic transaction that either succeeds or fails as one. Therefore I include the Order.OrderDetails collection when passing the order to the client and assume I'm going to get one back when an order is created or updated.
The problem however seems to be detecting changes to the OrderDetails collection when reattaching the Order entity for an update. The order itself can be set as modified to update those properties but this doesn't cascade to the OrderDetail items. So I can manually go through and set updated ones to modified but the problem lies in figuring out which ones are updated in the first place. Setting a new OrderDetail to modified will cause an error when trying to save.
I read a recommendation to set the Id of new collection items to 0 and, on the server, use that to decide whether each item is new or existing. Northwind, however, uses a composite key between OrderID and ProductID for OrderDetails. These will both have to be set by the client, so I can't find a way to detect what's new. Furthermore, a deleted OrderDetail won't exist in the detached graph, so I will need to figure out what has been deleted and explicitly remove it.
Any advice would be much appreciated.
public override Order Update(Order entity)
{
    dbset.Attach(entity);
    DataContext.Entry(entity).State = EntityState.Modified;
    foreach (var orderDetail in entity.OrderDetails)
    {
        DataContext.Entry(orderDetail).State = EntityState.Modified;
    }
    return entity;
}
I've recently been allowed to open source some work I did for my employer a while ago (with some changes, of course). I actually wrote an extension method to solve this problem; you can get it at http://refactorthis.wordpress.com/2012/12/11/introducing-graphdiff-for-entity-framework-code-first-allowing-automated-updates-of-a-graph-of-detached-entities/
Hope it helps!
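For reference, with GraphDiff the Update from the question might look roughly like this. This is only a sketch; it assumes the RefactorThis.GraphDiff package and that OrderDetails is an owned collection of Order:
using RefactorThis.GraphDiff;

public override Order Update(Order entity)
{
    // UpdateGraph loads the current database state, diffs the detached graph against it
    // and applies the resulting adds/updates/deletes to the owned OrderDetails collection.
    var attached = DataContext.UpdateGraph(entity, map => map.OwnedCollection(o => o.OrderDetails));
    DataContext.SaveChanges();
    return attached;
}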
This is a common and complex issue and there is no magic that will do it for you. My solution (and the only one which works in all scenarios) was to load the Order again in the Update method and manually merge the changes:
public override Order Update(Order entity)
{
    // No attach of entity
    var attached = DataContext.Orders.Include(o => o.OrderDetails).SingleOrDefault(...);
    if (attached == null) ...

    // Merge changes from entity to attached - if you change any property
    // it will be marked as modified automatically
    foreach (var detail in attached.OrderDetails.ToList())
    {
        // ToList is necessary because you will remove details from the collection.
        // If the detail exists in entity, check whether it must be updated and set its state.
        // If the detail doesn't exist in entity, remove it from the collection - if it is
        // an aggregation (a detail cannot exist without its Order) you must also delete it
        // from the context to ensure it is deleted from the database.
    }

    foreach (var detail in entity.OrderDetails)
    {
        // If it doesn't exist in attached, create a new detail instance,
        // fill it from the detail in entity and add it to the attached entity -
        // you must not use the same instance you got from entity.
    }

    DataContext.SaveChanges();
    return entity;
}
You may also need to manually check timestamps, if you use them.
The alternative is the scenario you described, with 0 used for new details and negative IDs for deleted details, but that is logic which must be implemented on the client, and it only works in some cases.
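A possible fleshed-out version of the merge outlined above; this is only a sketch, assuming Northwind's composite (OrderID, ProductID) key on OrderDetail, the usual Northwind detail columns, and that details are owned by the order:
public override Order Update(Order entity)
{
    var attached = DataContext.Orders
        .Include(o => o.OrderDetails)
        .SingleOrDefault(o => o.OrderID == entity.OrderID);
    if (attached == null)
        throw new InvalidOperationException("Order not found");

    // Copy scalar order properties onto the tracked instance.
    DataContext.Entry(attached).CurrentValues.SetValues(entity);

    // Update or delete existing details.
    foreach (var detail in attached.OrderDetails.ToList())
    {
        var incoming = entity.OrderDetails.SingleOrDefault(d => d.ProductID == detail.ProductID);
        if (incoming == null)
            DataContext.Set<OrderDetail>().Remove(detail);                // deleted on the client
        else
            DataContext.Entry(detail).CurrentValues.SetValues(incoming);  // possibly modified
    }

    // Add details that only exist in the detached graph (new on the client).
    foreach (var incoming in entity.OrderDetails)
    {
        if (!attached.OrderDetails.Any(d => d.ProductID == incoming.ProductID))
        {
            attached.OrderDetails.Add(new OrderDetail
            {
                OrderID = attached.OrderID,
                ProductID = incoming.ProductID,
                UnitPrice = incoming.UnitPrice,
                Quantity = incoming.Quantity,
                Discount = incoming.Discount
            });
        }
    }

    DataContext.SaveChanges();
    return attached;
}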

updating related data in entity framework

I have an update function in my repository which updates the TerminalCertification entity. But this entity has a many-to-many relation to another class (GomrokJustification).
My update function updates the entity correctly but does nothing to the related entity.
My update function is below:
public void UpdateTerminalCertification(TerminalCertification terminalCertification)
{
    var lastCertification = db.terminalCertifications
        .Include("TimeInfo")
        .Include("GomrokJustifications")
        .Where(item => item.TerminalCertificationID == terminalCertification.TerminalCertificationID)
        .ToList();
    if (lastCertification.Count == 0)
        throw new TerminalCertificationNotFoundException(terminalCertification);

    terminalCertification.TimeInfo = lastCertification[0].TimeInfo;

    ((IObjectContextAdapter)db).ObjectContext.Detach(lastCertification[0]);
    ((IObjectContextAdapter)db).ObjectContext.AttachTo("terminalCertifications", terminalCertification);

    foreach (var gomrokJustification in terminalCertification.GomrokJustifications)
    {
        ((IObjectContextAdapter)db).ObjectContext.AttachTo("gomrokJustifications", gomrokJustification);
        ((IObjectContextAdapter)db).ObjectContext.ObjectStateManager.ChangeObjectState(gomrokJustification, EntityState.Modified);
    }

    ((IObjectContextAdapter)db).ObjectContext.ObjectStateManager.ChangeObjectState(terminalCertification, EntityState.Modified);
}
My TerminalCertification has a list of GomrokJustifications which was previously filled with some entities. I want those old entities to be replaced by the new ones, but this does not happen.
Does anyone have any idea?
Instead of doing this:
var lastCertification = db.terminalCertifications
    .Include("TimeInfo")
    .Include("GomrokJustifications")
    .Where(item => item.TerminalCertificationID == terminalCertification.TerminalCertificationID)
    .ToList();
if (lastCertification.Count == 0)
    throw new TerminalCertificationNotFoundException(terminalCertification);
you could just do this:
var lastCertification = db.terminalCertifications
    .Include("TimeInfo")
    .Include("GomrokJustifications")
    .Where(item => item.TerminalCertificationID == terminalCertification.TerminalCertificationID)
    .FirstOrDefault();
if (lastCertification == null)
    throw new TerminalCertificationNotFoundException(terminalCertification);
First() throws an exception if there are no elements in the sequence, so if you don't care about the TerminalCertificationNotFoundException you could even drop that custom exception and use First(). Your logic also seems to assume that there will be only one element in the returned list, so you could even use Single(); that expresses what you want to achieve better than calling ToList() and then retrieving the first item.
After looking carefully at your code, I actually don't get what you are trying to achieve here. You have an existing TerminalCertification entity to start with, yet you retrieve it again in that first query; why? You then copy the TimeInfo from what is conceptually the same entity (since you did a get by ID) onto the one you received as an input parameter. Why not continue working with the one retrieved from the database? You then detach the entity you got from the database, why, and continue working with the input terminalCertification? I think you need to look a bit more carefully at the Entity Framework documentation about entity state etc. Take a look at ApplyCurrentValues and at attaching and detaching objects here: http://msdn.microsoft.com/en-us/library/bb896271.aspx
We'll need some more info to help you along.
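For what it's worth, a rough sketch of what using ApplyCurrentValues here could look like: keep working with the entity loaded from the database, copy the incoming scalar values onto it, and rebuild the association by hand. The set and key names used for the justifications (db.gomrokJustifications, GomrokJustificationID) are assumptions based on the question's naming:
public void UpdateTerminalCertification(TerminalCertification terminalCertification)
{
    var lastCertification = db.terminalCertifications
        .Include("TimeInfo")
        .Include("GomrokJustifications")
        .SingleOrDefault(item => item.TerminalCertificationID == terminalCertification.TerminalCertificationID);
    if (lastCertification == null)
        throw new TerminalCertificationNotFoundException(terminalCertification);

    // Copy scalar properties from the detached entity onto the tracked one.
    ((IObjectContextAdapter)db).ObjectContext
        .ApplyCurrentValues("terminalCertifications", terminalCertification);

    // Rebuild the many-to-many association using instances loaded from the context,
    // so nothing gets re-inserted as a new row.
    lastCertification.GomrokJustifications.Clear();
    foreach (var justification in terminalCertification.GomrokJustifications)
    {
        var attachedJustification = db.gomrokJustifications.Find(justification.GomrokJustificationID);
        if (attachedJustification != null)
            lastCertification.GomrokJustifications.Add(attachedJustification);
    }

    db.SaveChanges();
}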
