Duplicate Entity Record [duplicate] - C#

My model looks something like this:
Company
-Locations
Locations
-Stores
Stores
-Products
So I want to make a copy of a Company, and all of its associations should also be copied and saved to the database.
How can I do this if I have the Company loaded in memory?
Company company = DbContext.Companies.Find(123);
If it is tricky, I can loop through each association and create a new object myself. The IDs will be different, but everything else should be the same.
I am using EF 6.

Cloning object graphs with EF is a piece of cake:
var company = DbContext.Companies.AsNoTracking()
    .Include(c => c.Locations
        .Select(l => l.Stores
            .Select(s => s.Products)))
    .Where(c => c.Id == 123)
    .FirstOrDefault();
DbContext.Companies.Add(company);
DbContext.SaveChanges();
A few things to note here.
AsNoTracking() is vital, because the objects you add to the context shouldn't be tracked already.
Now if you Add() the company, all entities in its object graph will be marked as Added as well.
I assume that the database generates new primary key values (identity columns). If so, EF will ignore the current key values on the objects and the database will hand out new ones. If not, you'll have to traverse the object graph and assign new values yourself, as in the sketch below.
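A minimal sketch of that manual traversal, assuming the property names from the model above; replace the zeroing with your own key generation for the non-identity case:
company.Id = 0; // or assign a freshly generated key
foreach (var location in company.Locations)
{
    location.Id = 0;
    foreach (var store in location.Stores)
    {
        store.Id = 0;
        foreach (var product in store.Products)
            product.Id = 0;
    }
}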
One caveat: this only works well if the associations are 1:0..n. If there is a n:m association, identical entities may get inserted multiple times. If, for example, Store-Product is n:m and product A occurs at store 1 and store 2, product A will be inserted twice. If you want to prevent this, you should fetch the objects by one context, with tracking (i.e. without AsNoTracking), and Add() them in a new context. By enabling tracking, EF keeps track of identical entities and won't duplicate them. In this case, proxy creation should be disabled, otherwise the entities keep a reference to the context they came from.
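A minimal sketch of that two-context approach, assuming EF6 and a DbContext subclass named MyDbContext (the name is illustrative):
Company clone;
using (var source = new MyDbContext())
{
    // Disable proxies so the entities hold no reference back to this context.
    source.Configuration.ProxyCreationEnabled = false;
    clone = source.Companies
        .Include(c => c.Locations.Select(l => l.Stores.Select(s => s.Products)))
        .Single(c => c.Id == 123); // tracked read: shared entities resolve to a single instance
}
using (var target = new MyDbContext())
{
    target.Companies.Add(clone); // marks the entire graph as Added
    target.SaveChanges();
}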
More details here: Merge identical databases into one

I would add a method to each model that needs to be cloneable this way; I'd also recommend an interface for it.
It could be done something like this:
//Company.cs
public Company DeepClone()
{
    Company clone = new Company();
    clone.Name = this.Name;
    //...more properties (be careful when copying reference types)
    clone.Locations = new List<Location>(this.Locations.Select(l => l.DeepClone()));
    return clone;
}
You should repeat this basic pattern for every class and "child" class that needs to be copyable. This way each object knows how to create a deep clone of itself, and passes responsibility for child objects off to the child class, neatly encapsulating everything.
It could be used this way:
Company copyOfCompany123 = DbContext.Companies.Find(123).DeepClone();
My apologies if there are any errors in the above code; I don't have Visual Studio available at the moment to verify everything, I'm working from memory.
Another really simple and code-efficient way to deep clone an object using serialization can be found in this post: How do you do a deep copy of an object in .NET (C# specifically)?
public static T DeepClone<T>(T obj)
{
    // Requires: using System.IO; and using System.Runtime.Serialization.Formatters.Binary;
    using (var ms = new MemoryStream())
    {
        var formatter = new BinaryFormatter();
        formatter.Serialize(ms, obj);
        ms.Position = 0;
        return (T)formatter.Deserialize(ms);
    }
}
Just be aware that this can have some pretty serious resource and performance issues depending on your object structure, and every class you want to use it on must be marked with the [Serializable] attribute. Also note that BinaryFormatter is considered insecure and is obsolete in modern .NET, so avoid it in new code.
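A JSON-based variant is a common substitute; here is a sketch assuming the Newtonsoft.Json package is available:
using Newtonsoft.Json;

public static T DeepClone<T>(T obj)
{
    // Round-trip through JSON. Ignoring reference loops matters for EF graphs,
    // where child entities typically point back at their parents.
    var settings = new JsonSerializerSettings
    {
        ReferenceLoopHandling = ReferenceLoopHandling.Ignore
    };
    var json = JsonConvert.SerializeObject(obj, settings);
    return JsonConvert.DeserializeObject<T>(json, settings);
}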


Entity Framework Instance tracking error with mapping sub-objects - is there an elegant solution?

Some 2+ years ago I asked this question, which was kindly solved by Steve Py.
I am having a similar but different problem now when mapping with sub-objects. I have had this issue a few times and worked around it, but facing doing so again, I can't help thinking there must be a more elegant solution. I am coding a membership system in Blazor WASM and want to update membership details via a web API. All very normal.
I have a library function to update the membership:
public async Task<MembershipLTDTO> UpdateMembershipAsync(APDbContext context, MembershipLTDTO sentmembership)
{
    Membership? foundmembership = context.Memberships.Where(x => x.Id == sentmembership.Id)
        .Include(x => x.MembershipTypes)
        .FirstOrDefault();
    if (foundmembership == null)
    {
        return new MembershipLTDTO { Status = new InfoBool(false, "Error: Membership not found", InfoBool.ReasonCode.Not_Found) };
    }
    try
    {
        _mapper.Map(sentmembership, foundmembership, typeof(MembershipLTDTO), typeof(Membership));
        //context.Entry(foundmembership).State = EntityState.Modified; <- This was a 'try-out'
        context.Memberships.Update(foundmembership);
        await context.SaveChangesAsync();
        sentmembership.Status = new InfoBool(true, "Membership successfully updated");
        return sentmembership;
    }
    catch (Exception ex)
    {
        return new MembershipLTDTO { Status = new InfoBool(false, $"{ex.Message}", InfoBool.ReasonCode.Not_Found) };
    }
}
The Membership object is an EF DB entity that references a many-to-many list of MembershipTypes:
public class Membership
{
    [Key]
    public int Id { get; set; }
    ...more stuff...
    public List<MembershipType>? MembershipTypes { get; set; } // The user's membership can be several types, e.g. Employee + Director + etc.
}
The MembershipLTDTO is a lightweight DTO with a few heavy objects removed.
Executing the code, I get an EF exception:
The instance of entity type 'MembershipType' cannot be tracked because another instance with the same key value for {'Id'} is already being tracked. When attaching existing entities, ensure that only one entity instance with a given key value is attached.
I think (from the previous question I asked some time ago) that I understand what is happening. Previously I have worked around this by having a separate function that updates the membership types, then stripping them out of the 'found' and 'sent' objects to allow Mapper to do the rest.
In my mapping profile I have the mappings defined as follows for these object types:
CreateMap<Membership, MembershipLTDTO>();
CreateMap<MembershipLTDTO, Membership>();
CreateMap<MembershipTypeDTO, MembershipType>();
CreateMap<MembershipType, MembershipTypeDTO>();
As I was about to go and do that very thing again, I was wondering if I am missing a trick with my use of Mapper, or Entity Framework that would allow it to happen more seamlessly?
A couple of things come to mind. The first is that the call to context.Memberships.Update(foundmembership); isn't required here, so long as you haven't disabled tracking in the DbContext. Calling SaveChanges will build an UPDATE SQL statement for whatever values changed (if any), whereas Update will attempt to overwrite the entity(ies).
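A minimal sketch of relying on tracking alone (the scalar property Name is an assumption used for illustration):
var foundmembership = context.Memberships
    .Include(x => x.MembershipTypes)
    .Single(x => x.Id == sentmembership.Id);

foundmembership.Name = sentmembership.Name; // tracked change on a loaded entity
await context.SaveChangesAsync();           // the UPDATE covers only the changed columns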
The issue you are likely encountering is common when dealing with references, and I would recommend a different approach because of it. To outline the problem, let's look at membership types. These would typically be a known list that we want to associate with new and existing memberships. We don't ever expect to create a new membership type as part of an operation that creates or updates a membership; we only add or remove associations to existing memberships.
The problem with using Automapper for this arises when we want to associate another membership type via the passed-in DTO. Say we have existing data where a membership is associated with Membership Type #1, and we want to add Membership Type #2. We load the original entity to copy values across, eager loading membership types, so we get the membership and Type #1; so far so good. However, when we call Mapper.Map() it sees a Membership Type #2 in the DTO, so it adds a new entity with ID #2 into the loaded membership's Types collection. From here, one of three things can happen:
1) The DbContext was already tracking an instance with ID #2 and will complain when Update tries to associate another entity reference with ID #2.
2) The DbContext isn't tracking an instance, and attempts to add #2 as a new entity. Then either:
2.1) The database is set up with an identity column, and the new membership type gets inserted with the next available ID (e.g. #16), or
2.2) The database is not set up with an identity column, and SaveChanges raises a duplicate constraint error.
The issue here is that Automapper doesn't have knowledge that any new Membership Type should be retrieved from the DbContext.
Automapper's Map method can be used to update child collections, though it should only be used for references that are actual children of the top-level entity. For instance, if you have a Customer and a collection of Contacts, then when updating the customer you may want to update, add, or remove contact detail records, because those child records are owned by, and explicitly associated with, their customer. Automapper can add to or remove from the collection and update existing items. For many-to-many/many-to-one references we cannot rely on that, since we want to associate existing entities, not add or remove them.
In this case, the recommendation would be to tell Automapper to ignore the Membership Types collection, then handle those associations afterwards.
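The profile change for that might look like this (a sketch based on the mappings above):
CreateMap<MembershipLTDTO, Membership>()
    .ForMember(dest => dest.MembershipTypes, opt => opt.Ignore()); // associations handled manually below
With that in place, the association sync looks like: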
_mapper.Map(sentmembership, foundmembership, typeof(MembershipLTDTO), typeof(Membership));

var membershipTypeIds = sentmembership.MembershipTypes.Select(x => x.MembershipTypeId).ToList();
var existingMembershipTypeIds = foundmembership.MembershipTypes.Select(x => x.MembershipTypeId).ToList();
var idsToAdd = membershipTypeIds.Except(existingMembershipTypeIds).ToList();
var idsToRemove = existingMembershipTypeIds.Except(membershipTypeIds).ToList();

if (idsToRemove.Any())
{
    var membershipTypesToRemove = foundmembership.MembershipTypes.Where(x => idsToRemove.Contains(x.MembershipTypeId)).ToList();
    foreach (var membershipType in membershipTypesToRemove)
        foundmembership.MembershipTypes.Remove(membershipType);
}
if (idsToAdd.Any())
{
    var membershipTypesToAdd = context.MembershipTypes.Where(x => idsToAdd.Contains(x.MembershipTypeId)).ToList();
    foundmembership.MembershipTypes.AddRange(membershipTypesToAdd); // if declared as List; otherwise foreach and add them.
}
context.SaveChanges();
For items being removed, we find those entities in the loaded data state and remove them from the collection. For new items being added, we go to the context, fetch them all, and add them to the loaded data state's collection.
Notwithstanding marking Steve Py's solution as the answer, because it is a solution that works, it was not as 'elegant' as I would have liked.
I was pointed in another direction by the comment from Lucian Bargaoanu which, though a little cryptic, I found could be made to work after some digging.
To do this I had to add 'AutoMapper.Collection' and 'AutoMapper.Collection.EntityFrameworkCore' to my solution. There was a bit of jiggery-pokery around setting it up, as the example linked there didn't match up with my setup. I used this in my Program.cs:
// Auto Mapper Configurations
var mappingConfig = new MapperConfiguration(mc =>
{
mc.AddProfile(new MappingProfile());
mc.AddCollectionMappers();
});
I also had to modify my mapping profile for the DTO-to-object mapping to this:
//Membership Types
CreateMap<MembershipTypeDTO, MembershipType>().EqualityComparison((mtdto, mt) => mtdto.Id == mt.Id);
This tells AutoMapper which fields to use for equality when matching collection elements.
I took out context.Memberships.Update, as recommended by Steve Py, and it works.
Posted on behalf of the question asker

The instance of entity type cannot be tracked because another instance with the same key value for is already being tracked [closed]

The instance of entity type 'AssegnazioneLotto' cannot be tracked because another instance with the same key value for {'Id_AssegnazioneLotto'} is already being tracked. When attaching existing entities, ensure that only one entity instance with a given key value is attached. Consider using 'DbContextOptionsBuilder.EnableSensitiveDataLogging' to see the conflicting key values.
I encounter this error when I read data from a table and then update it.
I worked around it by querying a view that reads from the table.
Why does this happen?
How can I solve without creating additional views?
The simplest answer: Don't pass entities around outside of the scope they were read. Pass view models (POCO objects rather than entities) and fetch entities on update to copy expected values across.
The complex answer is that when updating entity references, including child collections and many-to-one references, you need to check whether the DbContext is already tracking a matching reference, and either replace your references with the tracked instances or tell the DbContext to dump the tracked reference before attaching.
For example, consider an update method that accepts a detached or deserialized "entity". It works sometimes, but craps out other times:
public void UpdateOrder(Order order)
{
    context.Update(order);
    // OR
    context.Attach(order);
    context.Entry(order).State = EntityState.Modified;

    context.SaveChanges();
}
Looks simple and clean, but craps out when the DbContext instance might already be tracking a matching Order instance. When it is, you get that exception.
The safety check:
public void UpdateOrder(Order order)
{
    var existingOrder = context.Orders.Local.SingleOrDefault(o => o.OrderId == order.OrderId);
    if (existingOrder != null)
        context.Entry(existingOrder).State = EntityState.Detached;

    context.Update(order);
    // OR
    context.Attach(order);
    context.Entry(order).State = EntityState.Modified;

    context.SaveChanges();
}
That example checks the local tracking cache for a matching order and dumps any tracked instance. The key is using the DbSet's .Local property, which searches the local tracking cache without hitting the database.
Where this gets more complex is where Order contains other entity references like OrderLines, or a reference to a Customer, etc. When dealing with detached entities you need to check over the entire object graph for tracked references.
public void UpdateOrder(Order order)
{
    var existingOrder = context.Orders.Local.SingleOrDefault(o => o.OrderId == order.OrderId);
    if (existingOrder != null)
        context.Entry(existingOrder).State = EntityState.Detached;

    var customer = context.Customers.Local.SingleOrDefault(c => c.CustomerId == order.Customer.CustomerId);
    if (customer != null)
        order.Customer = customer; // Replace our Customer reference with the tracked one.
    else
        context.Attach(order.Customer);

    context.Update(order);
    // OR
    context.Attach(order);
    context.Entry(order).State = EntityState.Modified;

    context.SaveChanges();
}
As you can see, this starts to get complex and cumbersome pretty quickly, because you need to check every reference. Hence, it's simpler to avoid passing detached or serialized entities around. Using a view model offers many benefits for performance and for simplifying issues like this, and coupled with AutoMapper or a similar mapper that supports projection, operations with view models become very simple:
Selecting Orders:
var orders = context.Orders.Where(/* suitable conditions */)
    .ProjectTo<OrderViewModel>(_mapperConfig)
    .ToList();
Where _mapperConfig is an AutoMapper configuration that tells AutoMapper how to convert an Order into an OrderViewModel. This can follow conventions or optionally contain mapping rules to build a flattened view model for an Order and its related details. ProjectTo works with EF's IQueryable to build an SQL SELECT statement across the entity graph that returns only the data needed to populate the view model. This is far more efficient than using Map, which would require all related entities to be eager loaded.
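For reference, a minimal configuration of that kind might look like this (a sketch; OrderViewModel and the convention-based mapping are assumptions):
var _mapperConfig = new MapperConfiguration(cfg =>
{
    // Convention-based member mapping; add ForMember rules to flatten related details as needed.
    cfg.CreateMap<Order, OrderViewModel>();
});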
When updating:
public void UpdateOrder(UpdateOrderViewModel orderVM)
{
    var order = context.Orders.Single(o => o.OrderId == orderVM.OrderId);
    if (orderVM.RowVersion != order.RowVersion)
        throw new StaleDataException(); // placeholder to handle the situation where the data has changed since our view got the order details.

    var mapper = _mapperConfig.CreateMapper();
    mapper.Map(orderVM, order);
    context.SaveChanges();
}
orderVM could be the OrderViewModel returned earlier, but typically I would recommend packaging just the fields that can be updated into a dedicated view model. The "magic" is in the AutoMapper configuration, which governs what fields get copied from the view model back into the entity. It can include child data such as OrderLines, in which case you would want to ensure those child entities are eager loaded with .Include in your DB fetch. AutoMapper's Map method here is the variant that copies mapped values from a source to a destination, so values are copied directly into the tracked entity instance. EF will build an SQL UPDATE statement based on what values actually change, rather than overwriting the entire record.
You can also use the same technique with detached entities to avoid your issue. The benefit of using AutoMapper is that you can configure which values can be legally copied over from the deserialized/detached entity provided into the real data:
public void UpdateOrder(Order updatedOrder)
{
    var order = context.Orders.Single(o => o.OrderId == updatedOrder.OrderId);
    if (updatedOrder.RowVersion != order.RowVersion)
        throw new StaleDataException(); // placeholder to handle the situation where the data has changed since our view got the order details.

    var mapper = _mapperConfig.CreateMapper();
    mapper.Map(updatedOrder, order);
    context.SaveChanges();
}
This ensures we only change what is allowed to change, and avoids the whole crapshoot of tracked references. In our mapper configuration we literally have an entry like:
cfg.CreateMap<Order, Order>(...)
which will hold explicit rules to skip fields and related entities that we don't want copied across on an update.
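Such an entry could look like the following sketch (the ignored members are assumptions carried over from the earlier examples):
cfg.CreateMap<Order, Order>()
    .ForMember(dest => dest.OrderId, opt => opt.Ignore())     // never overwrite the key
    .ForMember(dest => dest.Customer, opt => opt.Ignore())    // association handled separately
    .ForMember(dest => dest.RowVersion, opt => opt.Ignore()); // concurrency token stays server-owned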
The downside of doing this is the overhead of sending entire entities and potentially their related entities across the wire back and forth, plus to be "safe" from tampering, a lot more effort needs to go into the mapper configuration or copying across allowed values explicitly.
I had the same issue with EF Core and Blazor Server. Switching the DbContext registration in the service collection to Transient and using an IServiceScopeFactory for the queries/updates did the trick. You'll see below that I'm using the Blazor style of dependency injection, but constructor injection works the same way for an IServiceScopeFactory.
[Inject]
IServiceScopeFactory _serviceScopeFactory { get; set; }

private async Task UpdateItem(GridCommandEventArgs args)
{
    var utilityItem = (EntityModelSample)args.Item;
    using (var scope1 = _serviceScopeFactory.CreateScope())
    {
        var dbContext = scope1.ServiceProvider.GetService<SampleDbContext>();
        dbContext.Update(utilityItem);
        await dbContext.SaveChangesAsync();
    }
    LoadData();
}
In the startup code, register the DbContext with a transient lifetime; the ServiceLifetime.Transient argument is what fixes the problem:
builder.Services.AddDbContext<InternalUtilitiesDbContext>(option => option.UseSqlServer(connectionString), ServiceLifetime.Transient);

What's the real difference between EntityState.Deleted and Remove() method? When to use each of them?

I'm kinda confused about recognizing a disconnected scenario versus a connected scenario. I've searched the internet but couldn't find a real answer to my questions. I'm confused about the entity tracking system, about connected and disconnected scenarios, about when I should use the Attach method, and about the differences between setting Entry(entity).State = EntityState.Deleted and calling the Remove(entity) method. While researching the last point, the two were treated as identical most of the time, but that didn't match the test I did or what I expected.
I made a simple console app to test the differences. I create a person completely outside of the context instantiation scope and then pass it to the AddPerson method, because I think this makes a disconnected scenario, right? The Remove method will complain that I haven't attached the entity first, which I think tells us that we're in a disconnected scenario, but I'm not sure.
This is the app:
class Program
{
    static void Main(string[] args)
    {
        Person person = new Person()
        {
            PersonID = 1,
            Name = "John",
            Family = "Doe"
        };
        using (var context = new MyContext())
        {
            // Why does this one require attaching, but the code below doesn't?
            // (Run one delete approach at a time; after the first SaveChanges the row is gone.)
            context.Person.Attach(person);
            context.Person.Remove(person);
            context.SaveChanges();

            // This method of deleting works fine without the entity being attached
            context.Entry(person).State = EntityState.Deleted;
            context.SaveChanges();

            var people = context.Person.ToList();
            foreach (var p in people)
            {
                Console.WriteLine($"PersonID: {p.PersonID} | Name: {p.Name} | Family: {p.Family}");
            }
        }
        Console.ReadKey();
    }
}
So for the Remove method I have to Attach the entity first, otherwise it throws an exception. BUT when I use Entry(person).State = EntityState.Deleted without attaching, it works fine and deletes the person. Why is that? Isn't this a big difference? Why is it not mentioned anywhere? I've read some websites and similar questions on Stack Overflow, but this wasn't said anywhere, and for the most part these two were presumed to be the same. Yes, they both delete the entity, but how can we describe what happened in this test, if not as a difference between the two?
I have two questions but I think they're related to each other, so I'm just going to ask both of them here:
When exactly does a disconnected scenario happen, and how can I recognize it? Does it depend on the scope of the context instantiation, on retrieving the entity directly from the context and then modifying it (with no need to attach it), or on using an entity from outside the context (like passing it from another scope into our context as a parameter, as I did in my test)?
Why does the Remove method require attaching but EntityState.Deleted doesn't, when they're presumed identical? Why should I even bother attaching the entity first, while setting the state to Deleted works without attaching? When should I use each of them?
Basically, the way I assume all this works (with my current, probably wrong, understanding of Entity Framework) is that in a disconnected scenario you have to attach your entity first. But setting the state to EntityState.Deleted doesn't need attaching, so why does the Remove method exist at all, when we could use the other way of deleting all the time?
EDIT:
Based on the second code block in the accepted answer, I wrote this test to figure out how it works. You said that the otherPersonReference is equivalent to having called Attach(person), but when I first attach the person and then use EntityState.Deleted, it works and deletes it, even though you said it would fail. I'm a little confused :s
class Program
{
    static void Main(string[] args)
    {
        Person person = new Person()
        {
            PersonID = 3,
            Name = "John",
            Family = "Doe"
        };
        using (var context = new MyContext())
        {
            //var pr = context.Person.Single(p => p.PersonID == 3);
            context.Person.Attach(person);
            context.Entry(person).State = EntityState.Deleted;
            context.SaveChanges();
        }
        Console.ReadKey();
    }
}
If I uncomment the pr variable line and comment out context.Person.Attach(person), then setting the EntityState to Deleted fails and throws an exception, as expected.
Setting context.Entry(person).State tells EF to start tracking the "person" instance if it isn't already tracking it. You would get an error if the DbContext was already tracking an instance for the same record.
For example, you can try the following:
var person = new Person { Id = 100 }; // assume an existing record with ID = 100
using (var context = new AppDbContext())
{
    context.Entry(person).State = EntityState.Deleted;
    context.SaveChanges();
}
This works as you expect... However, if you were to have code that did this:
var person = new Person { Id = 100 }; // assume an existing record with ID = 100
using (var context = new AppDbContext())
{
    var otherPersonReference = context.Persons.Single(x => x.Id == 100);
    context.Entry(person).State = EntityState.Deleted;
    context.SaveChanges();
}
Your attempt to use context.Entry(person).State = EntityState.Deleted; would fail because the context is now already tracking an entity with that ID. It's the same behaviour as if you were to try and call Attach(person).
When dealing with short-lived DbContexts (such as when using using() blocks) and single entity operations, it can be reasonably safe to work with detached entity references, but this will get a lot more "iffy" once you start dealing with multiple possible entity references (I.e. working with lists or objects sharing references etc.) and/or calls across a DbContext which may already be tracking entity references from previous operations / iterations.
Edit: Working with detached references can be problematic, and you need to take extra care when doing so. My general recommendation is to avoid it wherever possible. The approach I recommend when dealing with entities is to never pass an entity outside the scope of the DbContext that read it. This means leveraging a ViewModel or DTO to represent entity-sourced details outside the scope of the DbContext. A detached EF entity can certainly work, but with a DTO it is explicitly clear that the data cannot be confused with a tracked entity. When it comes to performing operations like a delete, you only really need to pass the ID.
For example, leveraging Automapper to help translate between DTOs and entities:
PersonDTO AddPerson(PersonDTO details)
{
    if (details == null)
        throw new ArgumentNullException("details");
    using (var context = new AppDbContext())
    {
        // TODO: Add validations such as verifying unique name/dob etc.
        var person = Mapper.Map<Person>(details); // Creates a new Person.
        context.Persons.Add(person);
        context.SaveChanges();
        details.PersonId = person.PersonId; // After SaveChanges we can retrieve the new row's ID.
        return details;
    }
}

PersonDTO UpdatePerson(PersonDTO details)
{
    if (details == null)
        throw new ArgumentNullException("details");
    using (var context = new AppDbContext())
    {
        var existingPerson = context.Persons.Single(x => x.PersonId == details.PersonId); // Throws if we pass an invalid PersonId.
        Mapper.Map(details, existingPerson); // Copies values from our DTO into Person. Mapping is configured to only copy across allowed values.
        context.SaveChanges();
        return Mapper.Map<PersonDTO>(existingPerson); // Return a fresh, up-to-date DTO of our data record.
    }
}

void DeletePerson(int personId)
{
    using (var context = new AppDbContext())
    {
        var existingPerson = context.Persons.SingleOrDefault(x => x.PersonId == personId);
        if (existingPerson == null)
            return; // Nothing to do.
        // TODO: Verify whether the current user should be able to delete this person or not (i.e. based on the state of the person, whether it is in use, etc.)
        context.Persons.Remove(existingPerson);
        context.SaveChanges();
    }
}
In this example a Person entity never leaves the scope of a DbContext. The trouble with detached entities is that when an entity is passed around to other methods, those methods might assume they are working with attached, complete, or complete-able (i.e. through lazy loading) entities. Was the entity loaded from a DbContext that is still "alive", so that if the code wants to check person.Address, that data is either eager loaded and available, or lazy-loadable? Versus #null, which could mean the person does not have an address, or that without a DbContext or lazy loading we cannot determine whether it does or not. As a general rule, if a method is written to accept an entity, it should always expect to receive a complete, or complete-able, version of that entity: not a detached "maybe complete, maybe not" instance, not a newed-up instance of a class with some arbitrary values populated (rather than an entity representing a data row), and not a deserialized block of JSON coming from a web client. All of those can be typed as Person, but they are not a complete Person entity.
Edit 2: "Complete" vs. "Complete-able"
A Complete entity is an entity that has all related entities eager loaded. Any method that accepts a Person should be able to access any property, including navigation properties, and receive the true value. If the Person has an Address, then a #null address should only ever mean the person does not have an address (if that is valid), not "that person does not have an address, or it just wasn't loaded." This also goes for cases where you might have a method that accepts an entity you haven't loaded, but want to substitute with an entity class populated with an ID and whatever data you have on hand. That incomplete "entity" could find itself sent to other methods that expect a more complete entity. Methods should never need to guess at what they receive.
A Complete-able entity is an entity where any related entities within that entity can be lazy loaded if accessed. The consuming method doesn't need to determine whether properties are available or not, it can access Person.Address and it will always get an Address if that person is supposed to have one, whether the caller remembered to eager load it or not.
Where methods use tightly scoped DbContexts (using() blocks), if you return an entity then there is no way to guarantee further down the call-chain that the entity is complete-able. Today you can give the assurance that all properties are eager-loaded, but tomorrow a new relationship could be added, leaving a navigation property somewhere within the object graph that nobody remembers to eager-load.
Eager loading is also expensive: to ensure an entity is "complete", everything needs to be loaded, whether the consumers ever need it or not. Lazy loading was introduced to mitigate this; however, in many cases it is extremely expensive, leading to a LOT of chatter with the database and rising performance costs as the model evolves. Serialization (a common concern in web applications) touches every property by default, triggering numerous lazy-load calls for every entity sent.
DTOs/ViewModels are highly recommended when data needs to leave the scope of a DbContext as it ensures only the data a consumer needs is loaded, but equally importantly, as a model may evolve, you avoid lazy loading pitfalls. Serializing a DTO rather than an Entity will ensure those new relationships don't come into play until a DTO is updated to actually need that data.

C# Entity Framework Dynamic Proxy - Maintain Separate Object Instances of Same Record and Only Track Changes in One?

Long story short: I have some complicated objects (made up of tons of sub-objects [some generated database-first as EF objects], collections, and properties). During an edit operation I want to compare separate object instance values manually, or reuse parts of my object with values from the database and other values from, say, an Excel spreadsheet upload. The problem is that Entity Framework appears to share the same dynamic proxy between two supposedly separate object instances.
For example:
Car myCarOld = dbContext.Cars.Where(c => c.id == id).FirstOrDefault();
Car myCar = dbContext.Cars.Where(c => c.id == id).FirstOrDefault();
string oldMake = myCarOld.Make;
myCar.Make = "Toyota"; // Why is this line also updating myCarOld? Shouldn't they be separate object instances with their own unique values?
if (myCarOld.Make != myCar.Make)
{
    Console.WriteLine("Hey, they don't match, which is what I expect.");
}
else
{
    Console.WriteLine("Hey, they do match in value, huh?");
}
Outputs "Hey, they do match in value, huh?". How do I prevent this from happening? I want to track changes in the myCar object only without messing up the old original values in myCarOld. I could deep clone the object before making changes to the object, but that doesn't work in my case because some of the base MVC objects I use like SelectListItem aren't serializable.
I read something about detaching the entity from the context with context.Entry(personEntity).State = EntityState.Detached;, but that seems like a lot of work to do for all the EF objects in my custom object, and I'm not even sure it would help in my case. I'm not sure I'm describing what I'm looking to do properly either. I'm confused; please help clear this up. I appreciate any help.
This is similar to Entity Framework and maintaining two instances of entity, but is there a way to do it without altering the query? I just want to take the result and keep it as a separate object with its own set of unique values.
I have some complicated objects...
&
I could deep clone the object before making changes to the object, but that doesn't work in my case because some of the base MVC objects I use like SelectListItem aren't serializable.
Short answer: stuff like SelectListItem doesn't belong mixed into entity graphs. Provided the entity's relatives are eager loaded, deep-copy serialization would have been a good bet for capturing a record's initial state.
What you are seeing is by design. It's no different than if I had a collection of cars such as List<Car> cars, then did:
var car1 = cars[0];
var car2 = cars[0];
These point to the same car. EF checks whether it already knows about car ID "n"; if not, it loads it from the DB and returns it. From that point on it knows about it, so if you ask for "n" again, it returns a reference to the same tracked instance.
Barring a deep copy clone, use separate DbContext instances:
using (var context = new CarContext())
using (var originalContext = new CarContext())
{
    var originalCar = originalContext.Cars.Single(x => x.CarId == carId);
    var car = context.Cars.Single(x => x.CarId == carId);

    // Do your thing to car; reference originalCar for comparisons.
    context.SaveChanges();
    // Do not call originalContext.SaveChanges()
}
The cost is 2x reads from the database for your objects. Also, you cannot copy references from originalCar into car, i.e. anything like car.Engine = originalCar.Engine. The entity loaded by originalContext is tracked by originalContext, not context; attempting to do so will result in errors stating that the entity is already tracked. Attempting to detach and re-attach will also result in errors or wonky behaviour such as duplicate rows or key violations.
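An alternative that avoids the second context, at the cost of the snapshot being read-only, is to load the comparison copy with AsNoTracking(); EF then bypasses its identity map and materializes a distinct instance (a sketch using the names from the question):
var car = dbContext.Cars.Single(c => c.Id == id);                        // tracked; edits flow to SaveChanges()
var originalCar = dbContext.Cars.AsNoTracking().Single(c => c.Id == id); // detached snapshot, separate instance

car.Make = "Toyota";
Console.WriteLine(originalCar.Make != car.Make
    ? "Hey, they don't match, which is what I expect."
    : "Hey, they do match in value, huh?");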

Updating whole graph before disposing context in Entity Framework

Let's say I have two simple entities:
class Cat
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<CatRelation> ToRelations { get; set; }
    public ICollection<CatRelation> FromRelations { get; set; }
}

class CatRelation
{
    public int FromCatId { get; set; }
    public int ToCatId { get; set; }
    public Cat FromCat { get; set; }
    public Cat ToCat { get; set; }
    public string RelationType { get; set; }
}
What I would like to do is load all the Cats and their relations, and have the navigation properties work throughout the whole graph. So far I have something like this:
context.Cats.Include(cat => cat.ToRelations)
    .Include(cat => cat.FromRelations)
    .ToList();
After this the context is disposed. Further down the line the list is iterated. This works fine for getting to the relation entities, but if I, for example, iterate over the Cats and then try to iterate over all of their relations, the other end of each CatRelation is there, but its navigation properties won't work because the context is disposed. That is, given var cat1 = cats.First().ToRelations.First().ToCat, if I try to access cat1.ToRelations, I get an ObjectDisposedException.
So is there a way for me to ask the context to fix up all these navigation properties (because I know I have loaded all the Cats for all the CatRelations) before disposing of the context?
For a graph like this, I think it would be better to load the entire table and then construct the graph yourself. Even if you could get EF to recursively pull all of the data from the database, it wouldn't reuse the existing objects for relations (if they exist in memory) but rather construct new instances with the same data. That's likely not what you want, and it would result in a lot more data being transferred to boot.
In any event, I don't think it's possible to get EF to pull data that is nested arbitrarily deep or that might have cycles in its relationship graph. A sketch of the load-everything approach follows.
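As a sketch, assuming EF6-style configuration, a DbContext subclass named CatContext with DbSets Cats/CatRelations, and initialized collection navigation properties: loading both tables in one context lets EF's relationship fix-up wire the navigations before the context goes away.
List<Cat> cats;
using (var context = new CatContext())
{
    // Disable proxies/lazy loading so entities are plain POCOs that stay usable after disposal.
    context.Configuration.ProxyCreationEnabled = false;
    context.Configuration.LazyLoadingEnabled = false;

    var relations = context.CatRelations.ToList(); // materialize every edge
    cats = context.Cats.ToList();                  // relationship fix-up links Cats and CatRelations
}

// After disposal the graph is connected in memory:
var cat1 = cats.First().ToRelations.First().ToCat;
var deeper = cat1.ToRelations; // populated, no disposed-context exception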
