Below is example code that demonstrates what I'm trying to achieve:
public static void addRecord(CargoShip ship, DbContext db)
{
    // Realistically, in my application this is decided by business logic - omitted for brevity
    long containerTypeIDNewContainer = 5;

    Container containerToAdd = new Container()
    {
        cargoShipBelongingToID = ship.cargoShipID,
        containerTypeID = containerTypeIDNewContainer
    };
    ship.Containers.Add(containerToAdd);

    // This line of code is what I want to be able to do
    // WITHOUT doing a db.SaveChanges.
    // ContainerType is a lookup table in the database; each container type has pre-defined
    // Height/Width etc.
    // This throws a null exception: the ContainerType object does not exist.
    string containerHeight = containerToAdd.ContainerType.Height;
}
I basically want to be able to create a new Container, fill in a foreign key on that object, and then access the child of the foreign key as an object in code.
If I call SaveChanges, EF will naturally go and fetch the values for the Container object, but I want to fill in those values/objects without having to call SaveChanges.
edit:
Forgot to mention in case it is not clear: the relationship is CargoShip has many Containers; a Container has ONE ContainerType, which is an FK and is non-nullable.
Can you help with this?
Thanks
EF will not fill in all your FK properties for you just because you set an ID in your entity. Those properties will only be set after you call SaveChanges.
For that reason, there's not much point in calling Add unless you do actually intend to call SaveChanges at some point.
If you simply want to get the appropriate ContainerType, then you may as well do a Select (or Find) - which ultimately boils down to much the same SQL query that EF would have to use behind the scenes anyway:
var containerType = db.ContainerTypes.Find(containerTypeIDNewContainer);
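If the goal is to read the type's properties through the new Container before calling SaveChanges, that fetched entity can also be assigned directly to the navigation property (a sketch reusing the question's names):
containerToAdd.ContainerType = containerType; // navigation property is now populated without SaveChanges
string containerHeight = containerToAdd.ContainerType.Height; // no longer throws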
Note that the Find method will only go to the DB if the particular containerType record does not already exist in the local cache. However, the lifetime of the local cache is tied to the lifetime of the DbContext, so may be very short.
It may make sense to maintain your own cache of commonly-used entities like this if they're relatively stable:
var containerTypes = db.ContainerTypes.ToDictionary(x => x.Id, x => x);
...
var containerType = containerTypes[containerTypeIDNewContainer];
Assuming you can access your DbContext (I'm not familiar with the EntityFrameworkConnection class), you can simply use an IQueryable:
IQueryable<ContainerType> query = db.Set<ContainerType>();
var containerType = query.FirstOrDefault(x => x.containerTypeID == containerTypeIDNewContainer);
string containerHeight = containerType.Height;
Hope I understood your question properly
Some two-plus years ago I asked this question, which was kindly solved by Steve Py.
I am having a similar but different problem now when mapping with sub-objects. I have had this issue a few times and worked around it, but facing doing so again, I can't help thinking there must be a more elegant solution. I am coding a membership system in Blazor WASM and want to update membership details via a web API. All very normal.
I have a library function to update the membership:
public async Task<MembershipLTDTO> UpdateMembershipAsync(APDbContext context, MembershipLTDTO sentmembership)
{
Membership? foundmembership = context.Memberships.Where(x => x.Id == sentmembership.Id)
.Include(x => x.MembershipTypes)
.FirstOrDefault();
if (foundmembership == null)
{
return new MembershipLTDTO { Status = new InfoBool(false, "Error: Membership not found", InfoBool.ReasonCode.Not_Found) };
}
try
{
_mapper.Map(sentmembership, foundmembership, typeof(MembershipLTDTO), typeof(Membership));
//context.Entry(foundmembership).State = EntityState.Modified; <-This was a 'try-out'
context.Memberships.Update(foundmembership);
await context.SaveChangesAsync();
sentmembership.Status = new InfoBool(true, "Membership successfully updated");
return sentmembership;
}
catch (Exception ex)
{
return new MembershipLTDTO { Status = new InfoBool(false, $"{ex.Message}", InfoBool.ReasonCode.Not_Found) };
}
}
The Membership object is an EF DB object and references a many-to-many list of MembershipTypes:
public class Membership
{
[Key]
public int Id { get; set; }
...more stuff...
public List<MembershipType>? MembershipTypes { get; set; } // The user's membership can be of several types, e.g. Employee + Director + etc.
}
The MembershipLTDTO is a lightweight DTO with a few heavy objects removed.
Executing the code, I get an EF exception:
The instance of entity type 'MembershipType' cannot be tracked because another instance with the same key value for {'Id'} is already being tracked. When attaching existing entities, ensure that only one entity instance with a given key value is attached.
I think (from the previous question I asked some time ago) that I understand what is happening. Previously, I have worked around this by having a separate function that would, in this case, update the membership types, then stripping them out of the 'found' and 'sent' objects to allow Mapper to do the rest.
In my mapping profile I have the mappings defined as follows for these object types:
CreateMap<Membership, MembershipLTDTO>();
CreateMap<MembershipLTDTO, Membership>();
CreateMap<MembershipTypeDTO, MembershipType>();
CreateMap<MembershipType, MembershipTypeDTO>();
As I was about to go and do that very thing again, I was wondering if I am missing a trick with my use of Mapper, or Entity Framework that would allow it to happen more seamlessly?
A couple of things come to mind. The first is that the call to context.Memberships.Update(foundmembership); isn't required here, so long as you haven't disabled tracking in the DbContext. Calling SaveChanges will build an UPDATE SQL statement for whatever values changed (if any), whereas Update will attempt to overwrite the entity(ies).
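In other words, with tracking enabled, modifying the loaded entity and saving is enough. A minimal sketch, assuming the entities from the question (SomeProperty stands in for any real mapped column):
Membership? foundmembership = context.Memberships
    .Include(x => x.MembershipTypes)
    .FirstOrDefault(x => x.Id == sentmembership.Id);

foundmembership.SomeProperty = "new value"; // hypothetical column; the change is tracked automatically
await context.SaveChangesAsync();           // UPDATE contains only the changed columns; no Update() call needed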
The issue you are likely encountering is common when dealing with references, and I would recommend a different approach because of it. To outline this, let's look at membership types. These would typically be a known list that we want to associate to new and existing memberships. We're not ever going to create a new membership type as part of an operation that creates or updates a membership; we just add or remove associations to existing memberships.
The problem with using Automapper for this arises when we want to associate another membership type in our passed-in DTO. Say we have existing data where a membership is associated with Membership Type #1, and we want to add Membership Type #2. We load the original entity to copy values across, eager loading membership types, so we get the membership and Type #1; so far so good. However, when we call Mapper.Map() it sees a Membership Type #2 in the DTO, so it will add a new entity with ID #2 into our loaded membership's Types collection. From here, one of a few things can happen:
1) The DbContext was already tracking an instance with ID #2 and will complain when Update tries to associate another entity reference with ID #2.
2) The DbContext isn't tracking an instance, and attempts to add #2 as a new entity.
2.1) The database is set up with an identity column, and the new membership type gets inserted with the next available ID (e.g. #16).
2.2) The database is not set up with an identity column, and SaveChanges raises a duplicate constraint error.
The issue here is that Automapper doesn't have knowledge that any new Membership Type should be retrieved from the DbContext.
Automapper's Map method can be used to update child collections, though it should only be used for references that are actual children of the top-level entity. For instance, if you have a Customer and a collection of Contacts, then when updating the customer you may want to update, add, or remove contact detail records, because those child records are owned by, and explicitly associated to, their customer. Automapper can add to or remove from the collection, and update existing items. For references like many-to-many/many-to-one we cannot rely on that, since we want to associate existing entities, not add/remove them.
In this case, the recommendation would be to tell Automapper to ignore the Membership Types collection, then handle these afterwards.
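In the mapping profile, that could look something like this sketch (a standard AutoMapper ForMember/Ignore call, applied to the question's types):
CreateMap<MembershipLTDTO, Membership>()
    .ForMember(dest => dest.MembershipTypes, opt => opt.Ignore()); // handled manually after Map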
_mapper.Map(sentmembership, foundmembership, typeof(MembershipLTDTO), typeof(Membership)); // MembershipTypes ignored by the mapping

var membershipTypeIds = sentmembership.MembershipTypes.Select(x => x.MembershipTypeId).ToList();
var existingMembershipTypeIds = foundmembership.MembershipTypes.Select(x => x.MembershipTypeId).ToList();
var idsToAdd = membershipTypeIds.Except(existingMembershipTypeIds).ToList();
var idsToRemove = existingMembershipTypeIds.Except(membershipTypeIds).ToList();
if (idsToRemove.Any())
{
    var membershipTypesToRemove = foundmembership.MembershipTypes.Where(x => idsToRemove.Contains(x.MembershipTypeId)).ToList();
    foreach (var membershipType in membershipTypesToRemove)
        foundmembership.MembershipTypes.Remove(membershipType);
}
if (idsToAdd.Any())
{
    var membershipTypesToAdd = context.MembershipTypes.Where(x => idsToAdd.Contains(x.MembershipTypeId)).ToList();
    foundmembership.MembershipTypes.AddRange(membershipTypesToAdd); // if declared as List, otherwise foreach and add them.
}
context.SaveChanges();
For items being removed, we find those entities in the loaded data state and remove them from the collection. For new items being added, we go to the context, fetch them all, and add them to the loaded data state's collection.
I marked Steve Py's solution as the answer because it works, though it's not as 'elegant' as I would have liked.
I was pointed in another direction, however, by the comment from Lucian Bargaoanu which, though a little cryptic, I found after some digging could be made to work.
To do this I had to add 'AutoMapper.Collection' and 'AutoMapper.Collection.EntityFrameworkCore' to my solution. There was a bit of jiggery pokery around setting it up, as the example [here][2] didn't match up with my setup. I used this in my program.cs:
// Auto Mapper Configurations
var mappingConfig = new MapperConfiguration(mc =>
{
mc.AddProfile(new MappingProfile());
mc.AddCollectionMappers();
});
I also had to modify my mapping profile for the object - DTO mapping to this:
//Membership Types
CreateMap<MembershipTypeDTO, MembershipType>().EqualityComparison((mtdto, mt) => mtdto.Id == mt.Id);
This tells AutoMapper which fields to use for the equality comparison.
I took out the context.Memberships.Update as recommended by Steve Py and it works.
Posted on behalf of the question asker
The instance of entity type 'AssegnazioneLotto' cannot be tracked because another instance with the same key value for {'Id_AssegnazioneLotto'} is already being tracked. When attaching existing entities, ensure that only one entity instance with a given key value is attached. Consider using 'DbContextOptionsBuilder.EnableSensitiveDataLogging' to see the conflicting key values.
I encounter this error when I read data from a table and then update it.
I worked around it by querying a view that selects from the table.
Why does this happen?
How can I solve this without creating additional views?
The simplest answer: Don't pass entities around outside of the scope they were read. Pass view models (POCO objects rather than entities) and fetch entities on update to copy expected values across.
The complex answer is that when updating entity references (including child collections and many-to-one references), you need to check whether the DbContext is already tracking a matching reference, and either replace your references with the tracked entity or tell the DbContext to dump the tracked reference before attaching.
For example, consider an update method that accepts a detached or deserialized "entity". This works sometimes, but craps out other times:
public void UpdateOrder(Order order)
{
context.Update(order);
// OR
context.Attach(order);
context.Entry(order).State = EntityState.Modified;
context.SaveChanges();
}
Looks simple and clean, but craps out when the DbContext instance might already be tracking a matching Order instance. When it is, you get that exception.
The safety check:
public void UpdateOrder(Order order)
{
var existingOrder = context.Orders.Local.SingleOrDefault(o => o.OrderId == order.OrderId);
if (existingOrder != null)
context.Entry(existingOrder).State = EntityState.Detached;
context.Update(order);
// OR
context.Attach(order);
context.Entry(order).State = EntityState.Modified;
context.SaveChanges();
}
That example checks the local tracking cache for a matching order and dumps any tracked instance. The key here is searching the .Local with the DbSet to search the local tracking cache, not hitting the DB.
Where this gets more complex is where Order contains other entity references like OrderLines, or a reference to a Customer, etc. When dealing with detached entities you need to check over the entire object graph for tracked references.
public void UpdateOrder(Order order)
{
var existingOrder = context.Orders.Local.SingleOrDefault(o => o.OrderId == order.OrderId);
if (existingOrder != null)
context.Entry(existingOrder).State = EntityState.Detached;
var customer = context.Customers.Local.SingleOrDefault(c => c.CustomerId == order.Customer.CustomerId);
if (customer != null)
order.Customer = customer; // Replace our Customer reference with the tracked one.
else
context.Attach(order.Customer);
context.Update(order);
// OR
context.Attach(order);
context.Entry(order).State = EntityState.Modified;
context.SaveChanges();
}
As you can see, this starts to get complex and cumbersome pretty quickly, as you need to check every reference. Hence, it's simpler to avoid passing detached or serialized entities around. Using a view model offers many benefits for performance and for simplifying issues like this, and coupled with AutoMapper or a similar mapper that supports projection, it can make operations with view models very simple:
Selecting Orders:
var orders = context.Orders.Where(/* suitable conditions */)
.ProjectTo<OrderViewModel>(_mapperConfig)
.ToList();
Where _mapperConfig is an AutoMapper configuration that tells AutoMapper how to convert an Order into an OrderViewModel. This can follow conventions or optionally contain mapping rules to build a flattened view model for an Order and its related details. ProjectTo works with EF's IQueryable to build an SQL SELECT statement across the entity graph, returning only the data needed to populate the view model. This is far more efficient than using Map, which would require all related entities to be eager loaded.
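For illustration, a minimal configuration sketch (class and member names assumed):
var _mapperConfig = new MapperConfiguration(cfg =>
{
    // Convention-based mapping; add ForMember rules here to flatten
    // related details into the view model as needed.
    cfg.CreateMap<Order, OrderViewModel>();
});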
When updating:
public void UpdateOrder(UpdateOrderViewModel orderVM)
{
var order = context.Orders.Single(o => o.OrderId == orderVM.OrderId);
if (orderVM.RowVersion != order.RowVersion)
throw new StaleDataException(); // placeholder to handle the situation where the data has changed since our view got the order details.
var mapper = _mapperConfig.CreateMapper();
mapper.Map(orderVM, order);
context.SaveChanges();
}
orderVM could be the OrderViewModel returned earlier, but typically I would recommend packaging just the fields that can be updated into a dedicated view model. The "magic" is in the AutoMapper configuration, which governs what fields get copied from the view model back into the entity. It can include child data such as OrderLines, in which case you would want to ensure those child entities are eager loaded with .Include in your DB fetch. AutoMapper's Map method in this case is the variant that copies mapped values from a source to a destination, so values are copied directly into the tracked entity instance. EF will build an SQL UPDATE statement based on what values actually change rather than overwriting the entire record.
You can also use the same technique with detached entities to avoid your issue. The benefit of using AutoMapper is that you can configure which values can be legally copied over from the deserialized/detached entity provided into the real data:
public void UpdateOrder(Order updatedOrder)
{
var order = context.Orders.Single(o => o.OrderId == updatedOrder.OrderId);
if (updatedOrder.RowVersion != order.RowVersion)
throw new StaleDataException(); // placeholder to handle the situation where the data has changed since our view got the order details.
var mapper = _mapperConfig.CreateMapper();
mapper.Map(updatedOrder, order);
context.SaveChanges();
}
This ensures we only change what is allowed to change, and avoids the whole crapshoot of tracked references. In our mapper configuration we literally have an entry like:
cfg.CreateMap<Order, Order>(...)
which will hold explicit rules to ignore copying across fields and related entities we don't want copied across on an Update.
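For illustration, such an entry might look like this sketch (the member names are assumptions):
cfg.CreateMap<Order, Order>()
    .ForMember(dest => dest.OrderId, opt => opt.Ignore())      // never overwrite the key
    .ForMember(dest => dest.Customer, opt => opt.Ignore())     // don't copy related entity references
    .ForMember(dest => dest.CreatedDate, opt => opt.Ignore()); // hypothetical audit column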
The downside of doing this is the overhead of sending entire entities and potentially their related entities across the wire back and forth, plus to be "safe" from tampering, a lot more effort needs to go into the mapper configuration or copying across allowed values explicitly.
I had the same issue with EF Core and Blazor Server. Switching the scope in the service collection to "Transient" and using a ServiceScopeFactory for the queries/updates did the trick. You'll see below I'm using the Blazor style dependency injection, but constructor injection will still work the same way for an IServiceScopeFactory
[Inject]
IServiceScopeFactory _serviceScopeFactory { get; set; }
private async Task UpdateItem(GridCommandEventArgs args)
{
var utilityItem = (EntityModelSample)args.Item;
using (var scope1 = _serviceScopeFactory.CreateScope())
{
var dbContext = scope1.ServiceProvider.GetService<SampleDbContext>();
dbContext.Update(utilityItem);
await dbContext.SaveChangesAsync();
}
LoadData();
}
In the startup code:
builder.Services.AddDbContext<InternalUtilitiesDbContext>(option => option.UseSqlServer(connectionString), ServiceLifetime.Transient);
This code fixes the problem:
builder.Services.AddDbContext<InternalUtilitiesDbContext>(option => option.UseSqlServer(connectionString), ServiceLifetime.Transient);
The key is ServiceLifetime.Transient.
I'm confused about recognizing a disconnected scenario versus a connected scenario. I've searched the internet but couldn't find a real answer to my questions. I'm confused about the entity tracking system, connected and disconnected scenarios, when I should use the Attach method, and the differences between using Entry(entity).State = EntityState.Deleted and the Remove(entity) method. While researching the latter, I found they were mostly treated as identical, but that didn't match the test I did or what I expected.
I made a simple console app to test the differences. I create a person completely outside of the context instantiation scope and then pass it to the AddPerson method, because I think this makes a disconnected scenario, right? The Remove method will complain if I haven't attached the entity first, so I think that tells us we're in a disconnected scenario, though I'm not sure.
This is the app:
class Program
{
    static void Main(string[] args)
    {
        Person person = new Person()
        {
            PersonID = 1,
            Name = "John",
            Family = "Doe"
        };
        using (var context = new MyContext())
        {
            // Approach 1: why does this one require attaching...
            context.Person.Attach(person);
            context.Person.Remove(person);
            context.SaveChanges();
            // Approach 2: ...while this method of deleting works fine without the entity being attached?
            // (Run one approach at a time; the row is already gone after the first SaveChanges.)
            context.Entry(person).State = EntityState.Deleted;
            context.SaveChanges();
            var people = context.Person.ToList();
            foreach (var p in people)
            {
                Console.WriteLine($"PersonID: {p.PersonID} | Name: {p.Name} | Family: {p.Family}");
            }
        }
        Console.ReadKey();
    }
}
So for the Remove method I have to Attach the entity first, otherwise it will throw an exception. BUT when I use Entry(person).State = EntityState.Deleted without attaching, it works fine and deletes the person. Why is that? Isn't this a big difference? Why is it not mentioned anywhere? I've read some websites and similar questions on Stack Overflow, but this wasn't said anywhere, and for the most part these two were presumed to be the same and do the same thing. Yes, they both delete the entity, but how can we describe what happened in this test? Isn't this a difference between the two?
I have two questions but I think they're related to each other, so I'm just going to ask both of them here:
When exactly does a disconnected scenario happen, and how can I recognize it? Does it depend on the scope of the context instantiation, on retrieving the entity directly from the context and then modifying it (with no need to attach it), or on using an entity from outside the context (like passing it from another scope to our context as a parameter, as I did in my test)?
Why does the Remove method require attaching but EntityState.Deleted doesn't, even though they're presumed identical? Why should I even bother to attach the entity first, while setting the state to Deleted works without attaching? When should I use each of them?
Basically, the way I assume all this works (with my current understanding of Entity Framework, which is probably wrong) is that when you're in a disconnected scenario you have to attach your entity first. But setting the state to EntityState.Deleted doesn't need attaching, so why does the Remove method exist at all, if we could use the other way of deleting all the time?
EDIT:
Based on the second code block in the accepted answer, I wrote this test to figure out how it works. You said that fetching otherPersonReference is equivalent to calling Attach(person), but when I first attach the person and then use EntityState.Deleted, it works and deletes the record. You said it would fail, so I'm a little confused :s
class Program
{
static void Main(string[] args)
{
Person person = new Person()
{
PersonID = 3,
Name = "John",
Family = "Doe"
};
using (var context = new MyContext())
{
//var pr = context.Person.Single(p => p.PersonID == 3);
context.Person.Attach(person);
context.Entry(person).State = EntityState.Deleted;
context.SaveChanges();
}
Console.ReadKey();
}
}
If I uncomment the pr variable line and comment out context.Person.Attach(person), then setting the EntityState to Deleted fails and throws an exception, as expected.
Setting context.Entry(person).State tells EF to start tracking the "person" instance if it isn't already tracking it. You would get an error if the DbContext was already tracking an instance for the same record.
For example, you can try the following:
var person = new Person { Id = 100 }; // assume an existing record with ID = 100;
using (var context = new AppDbContext())
{
context.Entry(person).State = EntityState.Deleted;
context.SaveChanges();
}
This works as you expect... However, if you were to have code that did this:
var person = new Person { Id = 100 }; // assume an existing record with ID = 100;
using (var context = new AppDbContext())
{
var otherPersonReference = context.Persons.Single(x => x.Id == 100);
context.Entry(person).State = EntityState.Deleted;
context.SaveChanges();
}
Your attempt to use context.Entry(person).State = EntityState.Deleted; would fail because the context is now already tracking an entity with that ID. It's the same behaviour as if you were to try and call Attach(person).
When dealing with short-lived DbContexts (such as when using using() blocks) and single entity operations, it can be reasonably safe to work with detached entity references, but this will get a lot more "iffy" once you start dealing with multiple possible entity references (I.e. working with lists or objects sharing references etc.) and/or calls across a DbContext which may already be tracking entity references from previous operations / iterations.
Edit: Working with detached references can be problematic, and you need to take extra care when doing so. My general recommendation is to avoid it wherever possible. The approach I recommend when dealing with entities is that you should never pass an entity outside of the scope of the DbContext that read it. This means leveraging a ViewModel or DTO to represent entity-sourced details outside the scope of the DbContext. A detached EF entity can certainly work, but with a DTO it is explicitly clear that the data cannot be confused with a tracked entity. When it comes to performing operations like a delete, you only really need to pass the ID.
For example, leveraging Automapper to help translate between DTOs and entities:
PersonDTO AddPerson(PersonDTO details)
{
if(details == null)
throw new ArgumentNullException("details");
using (var context = new AppDbContext())
{
// TODO: Add validations such as verifying unique name/dob etc.
var person = Mapper.Map<Person>(details); // Creates a new Person.
context.Persons.Add(person);
context.SaveChanges();
details.PersonId = person.PersonId; // After SaveChanges we can retrieve the new row's ID.
return details;
}
}
PersonDTO UpdatePerson(PersonDTO details)
{
if(details == null)
throw new ArgumentNullException("details");
using (var context = new AppDbContext())
{
var existingPerson = context.Persons.Single(x => x.PersonId == details.PersonId); // Throws if we pass an invalid PersonId.
Mapper.Map(details, existingPerson); // copies values from our DTO into Person. Mapping is configured to only copy across allowed values.
context.SaveChanges();
return Mapper.Map<PersonDTO>(existingPerson); // Return a fresh, up to date DTO of our data record.
}
}
void DeletePerson(int personId)
{
using (var context = new AppDbContext())
{
var existingPerson = context.Persons.SingleOrDefault(x => x.PersonId == personId);
if (existingPerson == null)
return; // Nothing to do.
// TODO: Verify whether the current user should be able to delete this person or not. (I.e. based on the state of the person, is it in use, etc.)
context.Persons.Remove(existingPerson);
context.SaveChanges();
}
}
In this example a Person entity never leaves the scope of a DbContext. The trouble with detached entities is that whenever you pass an entity around to other methods, those methods might assume they are working with attached, complete, or complete-able (i.e. through lazy loading) entities. Was the entity loaded from a DbContext that is still "alive", so that if the code wants to check person.Address, that data is either eager loaded and available, or lazy-loadable? Versus #null, which could mean the person does not have an address, or that without a DbContext or lazy loading we cannot determine whether it does or not. As a general rule, if a method is written to accept an entity, it should always expect to receive a complete, or complete-able, version of that entity. Not a detached "maybe complete, maybe not" instance, not a newed-up instance of a class that has some arbitrary values populated (rather than an entity representing a data row), and not a deserialized block of JSON coming from a web client. All of those can be typed as a "Person" entity, but they are not a Person entity.
Edit 2: "Complete" vs. "Complete-able"
A complete entity is an entity that has all related entities eager loaded. Any method that accepts a Person should be able to access any property, including navigation properties, and receive the true value. If the Person has an Address, then a #null address should only ever mean that person does not have an address (if that is valid), not "that person does not have an address, or it just wasn't loaded." This also goes for cases where you might have a method that accepts an entity which you haven't loaded, and you want to substitute an entity class populated with an ID and whatever data you have on hand. That incomplete "entity" could find itself passed to other methods that expect a more complete entity. Methods should never need to guess at what they receive.
A Complete-able entity is an entity where any related entities within that entity can be lazy loaded if accessed. The consuming method doesn't need to determine whether properties are available or not, it can access Person.Address and it will always get an Address if that person is supposed to have one, whether the caller remembered to eager load it or not.
Where methods use tightly scoped DbContexts (using() blocks), if you return an entity there is no way to guarantee further down the call chain that the entity is complete-able. Today you can make the assurance that all properties are eager-loaded, but tomorrow a new relationship could be added, leaving a navigation property somewhere within the object graph that nobody remembers to eager-load.
Eager loading is also expensive: to ensure an entity is "complete", everything needs to be loaded, whether the consumers ever need it or not. Lazy loading was introduced to mitigate this; however, in many cases it is extremely expensive, leading to a LOT of chatter with the database and performance costs as the model evolves. Operations like serialization (a common concern in web applications) touch every property by default, triggering numerous lazy-load calls for every entity sent.
DTOs/ViewModels are highly recommended when data needs to leave the scope of a DbContext as it ensures only the data a consumer needs is loaded, but equally importantly, as a model may evolve, you avoid lazy loading pitfalls. Serializing a DTO rather than an Entity will ensure those new relationships don't come into play until a DTO is updated to actually need that data.
Background: I'm using EF6 and Database First.
I've run into a scenario that has me perplexed. After creating a new object, populating the navigation properties with new objects, and calling SaveChanges, the navigation properties are reset. The first line of code that references a navigation property after the SaveChanges call ends up re-fetching the data from the database. Is this expected behavior, and can someone explain why it behaves this way? Here is a sample code block of my scenario:
using (DbContext context = new DbContext()) {
Foo foo = context.Foos.Create();
context.Foos.Add(foo);
...
Bar bar = context.Bars.Create();
context.Bars.Add(bar);
...
FooBar foobar = context.FooBars.Create();
context.FooBars.Add(foobar);
foobar.Foo = foo;
foobar.Bar = bar;
//foo.FooBars is already populated, so 1 is returned and no database query is executed.
int count = foo.FooBars.Count;
context.SaveChanges();
//This causes a new query against the database - Why?
count = foo.FooBars.Count;
}
I can't say 100%, but I doubt this behavior was designed intentionally.
As for why it behaves this way, the root of the issue is that the DbCollectionEntry.IsLoaded property is false until the navigation property is either implicitly lazy loaded or explicitly loaded using the Load method.
It also seems that lazy loading is suppressed while the entity is in the Added state, which is why the first call does not trigger a reload. But once you call SaveChanges, the entity state becomes Unchanged, and although the collection is not really set to `null` or cleared, the next attempt to access the collection property will trigger a lazy reload.
// ...
Console.WriteLine(context.Entry(foo).Collection(e => e.FooBars).IsLoaded); // false
//foo.FooBars is already populated, so 1 is returned and no database query is executed.
int count = foo.FooBars.Count;
// Cache the collection property into a variable
var foo_FooBars = foo.FooBars;
context.SaveChanges();
Console.WriteLine(context.Entry(foo).State); // Unchanged!
Console.WriteLine(context.Entry(foo).Collection(e => e.FooBars).IsLoaded); // false
// The collection is still there, this does not trigger database query
count = foo_FooBars.Count;
// This causes a new query against the database
count = foo.FooBars.Count;
If you want to avoid the reload, the workaround is to explicitly set IsLoaded property to true.
// ...
context.SaveChanges();
context.Entry(foo).Collection(e => e.FooBars).IsLoaded = true;
context.Entry(bar).Collection(e => e.FooBars).IsLoaded = true;
// No new query against the database :)
count = foo.FooBars.Count;
Why would you expect it not to re-query the database? EF isn't holding a complete cached copy of your database. What if you have some trigger(s) inserting rows after something you changed, or some column whose value is computed in a way EF doesn't know about? It needs to get the most recent data, otherwise your Count won't be correct.
I think the concept of 'context' is being confused here. You are in a 'connected' state when you are inside the context of EF. It doesn't always care to reuse your data smartly. It knows: "I exist as an extension of the database, through a context set up to talk to it." If you have an object you created, or an existing one, say 'FooBars', and you do this:
foo.FooBars.SomeOperation()
If foo is your context and FooBars is some object off of it, it is referenced as an extension of the context. If you want to realize reuse without incurring a round trip to the database, collect an object or objects outside the scope of the context, like:
List<FooBar> foobars;
using (var foo = new DbContext())
{
    foobars = foo.FooBars.ToList(); // materialize while the context is alive
}
var cnt = foobars.Count;
Generally speaking with EF, I see a lot of people wrap an entire long method in 'using (var context = new EfContext())' and repeat this pattern everywhere. It is not bad out of the box per se, but before doing it everywhere I would simply ask: "Do you need to keep a DB connection open over and over again?" Sometimes you do; most of the time you do not.
I have been exploring different methods of editing/updating a record within Entity Framework 5 in an ASP.NET MVC3 environment, but so far none of them tick all of the boxes I need. I'll explain why.
I have found three methods to which I'll mention the pros and cons:
Method 1 - Load original record, update each property
var original = db.Users.Find(updatedUser.UserId);
if (original != null)
{
original.BusinessEntityId = updatedUser.BusinessEntityId;
original.Email = updatedUser.Email;
original.EmployeeId = updatedUser.EmployeeId;
original.Forename = updatedUser.Forename;
original.Surname = updatedUser.Surname;
original.Telephone = updatedUser.Telephone;
original.Title = updatedUser.Title;
original.Fax = updatedUser.Fax;
original.ASPNetUserId = updatedUser.ASPNetUserId;
db.SaveChanges();
}
Pros
Can specify which properties change
Views don't need to contain every property
Cons
2 x queries on database to load original then update it
Method 2 - Load original record, set changed values
var original = db.Users.Find(updatedUser.UserId);
if (original != null)
{
db.Entry(original).CurrentValues.SetValues(updatedUser);
db.SaveChanges();
}
Pros
Only modified properties are sent to database
Cons
Views need to contain every property
2 x queries on database to load original then update it
Method 3 - Attach updated record and set state to EntityState.Modified
db.Users.Attach(updatedUser);
db.Entry(updatedUser).State = EntityState.Modified;
db.SaveChanges();
Pros
1 x query on database to update
Cons
Can't specify which properties change
Views must contain every property
Question
My question to you guys; is there a clean way that I can achieve this set of goals?
Can specify which properties change
Views don't need to contain every property (such as password!)
1 x query on database to update
I understand this is quite a minor thing to point out, but I may be missing a simple solution to it. If not, method one will prevail ;-)
You are looking for:
db.Users.Attach(updatedUser);
var entry = db.Entry(updatedUser);
entry.Property(e => e.Email).IsModified = true;
// other changed properties
db.SaveChanges();
I really like the accepted answer. I believe there is yet another way to approach this as well. Let's say you have a very short list of properties that you wouldn't want to ever include in a View, so when updating the entity, those would be omitted. Let's say that those two fields are Password and SSN.
db.Users.Attach(updatedUser);
var entry = db.Entry(updatedUser);
entry.State = EntityState.Modified;
entry.Property(e => e.Password).IsModified = false;
entry.Property(e => e.SSN).IsModified = false;
db.SaveChanges();
This example allows you to essentially leave your business logic alone after adding a new field to your Users table and to your View.
foreach (PropertyInfo propertyInfo in original.GetType().GetProperties())
{
    // For any property the caller left null, copy the original value across
    // so SetValues below doesn't overwrite unset fields with null.
    if (propertyInfo.GetValue(updatedUser, null) == null)
        propertyInfo.SetValue(updatedUser, propertyInfo.GetValue(original, null), null);
}
db.Entry(original).CurrentValues.SetValues(updatedUser);
db.SaveChanges();
I have added an extra update method onto my repository base class that's similar to the update method generated by Scaffolding. Instead of setting the entire object to "modified", it sets a set of individual properties. (T is a class generic parameter.)
public void Update(T obj, params Expression<Func<T, object>>[] propertiesToUpdate)
{
Context.Set<T>().Attach(obj);
foreach (var p in propertiesToUpdate)
{
Context.Entry(obj).Property(p).IsModified = true;
}
}
And then to call, for example:
public void UpdatePasswordAndEmail(long userId, string password, string email)
{
var user = new User {UserId = userId, Password = password, Email = email};
Update(user, u => u.Password, u => u.Email);
Save();
}
I like one trip to the database. It's probably better to do this with view models, though, in order to avoid repeating sets of properties. I haven't done that yet because I don't know how to avoid bringing the validation messages on my view model validators into my domain project.
public interface IRepository
{
void Update<T>(T obj, params Expression<Func<T, object>>[] propertiesToUpdate) where T : class;
}
public class Repository : DbContext, IRepository
{
public void Update<T>(T obj, params Expression<Func<T, object>>[] propertiesToUpdate) where T : class
{
Set<T>().Attach(obj);
propertiesToUpdate.ToList().ForEach(p => Entry(obj).Property(p).IsModified = true);
SaveChanges();
}
}
Just to add to the list of options: you can also grab the object from the database and use an auto-mapping tool like AutoMapper to update the parts of the record you want to change.
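A minimal sketch of that idea, using the older static Mapper API and assuming a mapping profile that only covers the editable fields (names reused from the question above):
var original = db.Users.Find(updatedUser.UserId);
if (original != null)
{
    Mapper.Map(updatedUser, original); // copies only the mapped (editable) properties onto the tracked entity
    db.SaveChanges();                  // the UPDATE contains just the columns that changed
}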
Depending on your use case, all the above solutions apply. This is how I usually do it, however:
For server-side code (e.g. a batch process) I usually load the entities and work with dynamic proxies. Usually in batch processes you need to load the data anyway at the time the service runs. I try to batch-load the data instead of using the Find method, to save some time. Depending on the process, I use optimistic or pessimistic concurrency control (I always use optimistic, except for parallel execution scenarios where I need to lock some records with plain SQL statements, though this is rare). Depending on the code and scenario, the impact can be reduced to almost zero.
For client-side scenarios, you have a few options:
1. Use view models. The models should have an UpdateStatus property (unmodified / inserted / updated / deleted). It is the client's responsibility to set the correct value for this property depending on the user's actions (insert, update, delete). The server can either query the DB for the original values, or the client should send the original values along with the changed rows. The server should attach the original values and use the UpdateStatus property on each row to decide how to handle the new values; a rough sketch follows this list. In this scenario I always use optimistic concurrency. This issues only the insert/update/delete statements and no selects, but it may need some clever code to walk the graph and update the entities (this depends on your scenario/application). A mapper can help, but it does not handle the CRUD logic.
2. Use a library like breeze.js that hides most of this complexity (as described in 1) and try to fit it to your use case.
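A rough sketch of option 1 (all names are assumptions): the client marks each row with an UpdateStatus, and the server translates that into an EntityState after attaching:
public enum UpdateStatus { Unmodified, Inserted, Updated, Deleted }

public class OrderLineViewModel
{
    public int Id { get; set; }
    public int Quantity { get; set; }
    public UpdateStatus UpdateStatus { get; set; } // set by the client
}

// Server side, after attaching the mapped entity:
static EntityState ToEntityState(UpdateStatus status)
{
    switch (status)
    {
        case UpdateStatus.Inserted: return EntityState.Added;
        case UpdateStatus.Updated: return EntityState.Modified;
        case UpdateStatus.Deleted: return EntityState.Deleted;
        default: return EntityState.Unchanged;
    }
}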
Hope it helps
EF Core 7.0 new feature: ExecuteUpdate
Finally! After a long wait, EF Core 7.0 now has a natively supported way to run UPDATE (and also DELETE) statements while also allowing you to use arbitrary LINQ queries (.Where(u => ...)), without having to first retrieve the relevant entities from the database: The new built-in method called ExecuteUpdate — see "What's new in EF Core 7.0?".
ExecuteUpdate is precisely meant for these kinds of scenarios, it can operate on any IQueryable instance, and lets you update specific columns on any number of rows, while always issuing a single UPDATE statement behind the scenes, making it as efficient as possible.
Usage:
Imagine you want to update a specific user's email and display name:
dbContext.Users
.Where(u => u.Id == someId)
.ExecuteUpdate(b => b
.SetProperty(u => u.Email, "NewEmail@gmail.com")
.SetProperty(u => u.DisplayName, "New Display Name")
);
As you can see, ExecuteUpdate requires you to make one or more calls to the SetProperty method, to specify which property to update, and also what new value to assign to it.
EF Core will translate this into the following UPDATE statement:
UPDATE [u]
SET [u].[Email] = "NewEmail@gmail.com",
[u].[DisplayName] = "New Display Name"
FROM [Users] AS [u]
WHERE [u].[Id] = someId
Also, ExecuteDelete for deleting rows:
There's also a counterpart to ExecuteUpdate called ExecuteDelete, which, as the name implies, can be used to delete a single or multiple rows at once without first fetching them.
Usage:
// Delete users that haven't been active in 2022:
dbContext.Users
.Where(u => u.LastActiveAt.Year < 2022)
.ExecuteDelete();
Similar to ExecuteUpdate, ExecuteDelete will generate DELETE SQL statements behind the scenes — in this case, the following one:
DELETE FROM [u]
FROM [Users] AS [u]
WHERE DATEPART(year, [u].[LastActiveAt]) < 2022
Other notes:
Keep in mind that both ExecuteUpdate and ExecuteDelete are "terminating", meaning that the update/delete operation will take place as soon as you call the method. You're not supposed to call dbContext.SaveChanges() afterwards.
If you're curious about the SetProperty method, and you're confused as to why ExecuteUpdate doesn't instead receive a member initialization expression (e.g. .ExecuteUpdate(new User { Email = "..." })), then refer to this comment (and the surrounding ones) on the GitHub issue for this feature.
Furthermore, if you're curious about the rationale behind the naming, and why the prefix Execute was picked (there were also other candidates), refer to this comment, and the preceding (rather long) conversation.
Both methods also have async equivalents, named ExecuteUpdateAsync, and ExecuteDeleteAsync respectively.
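For example, the delete from above becomes the following with the async variant (same assumed model):
// Returns the number of rows affected.
int rows = await dbContext.Users
    .Where(u => u.LastActiveAt.Year < 2022)
    .ExecuteDeleteAsync();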
There are some really good answers given already, but I wanted to throw in my two cents. Here is a very simple way to convert a view object into an entity. The simple idea is that only the properties that exist in the view model get written to the entity. This is similar to @Anik Islam Abhi's answer, but has null propagation.
public static T MapVMUpdate<T>(object updatedVM, T original)
{
PropertyInfo[] originalProps = original.GetType().GetProperties();
PropertyInfo[] vmProps = updatedVM.GetType().GetProperties();
foreach (PropertyInfo prop in vmProps)
{
PropertyInfo projectProp = originalProps.FirstOrDefault(x => x.Name == prop.Name);
if (projectProp != null)
{
projectProp.SetValue(original, prop.GetValue(updatedVM));
}
}
return original;
}
Pros
Views don't need to have all the properties of the entity.
You never have to update code when you add or remove a property from a view.
Completely generic
Cons
2 hits on the database, one to load the original entity, and one to save it.
To me the simplicity and low maintenance requirements of this approach outweigh the added database call.