In .NET Core 2.2 I'm stuck with filtering an IQueryable built as:
_context.Ports.Include(p => p.VesselsPorts)
.ThenInclude(p => p.Arrival)
.Include(p => p.VesselsPorts)
.ThenInclude(p => p.Departure)
.OrderBy(p => p.PortLocode);
in a many-to-many relation. The entity models are:
public class PortModel
{
[Key]
public string PortLocode { get; set; }
public double? MaxKnownLOA { get; set; }
public double? MaxKnownBreadth { get; set; }
public double? MaxKnownDraught { get; set; }
public virtual ICollection<VesselPort> VesselsPorts { get; set; }
}
public class VesselPort
{
public int IMO { get; set; }
public string PortLocode { get; set; }
public DateTime? Departure { get; set; }
public DateTime? Arrival { get; set; }
public VesselModel VesselModel { get; set; }
public PortModel PortModel { get; set; }
}
Based on this SO answer I managed to create LINQ like that:
_context.Ports.Include(p => p.VesselsPorts).ThenInclude(p => p.Arrival).OrderBy(p => p.PortLocode)
.Select(
p => new PortModel
{
PortLocode = p.PortLocode,
MaxKnownBreadth = p.MaxKnownBreadth,
MaxKnownDraught = p.MaxKnownDraught,
MaxKnownLOA = p.MaxKnownLOA,
VesselsPorts = p.VesselsPorts.Where(vp => vp.Arrival > DateTime.UtcNow.AddDays(-1)).ToList()
}).AsQueryable();
BUT what I need is to find all port records where the number of VesselsPorts entries with Arrival > DateTime.UtcNow.AddDays(-1) is greater than some value, e.g. int x = 5. And I have no clue how to do it :/
Thanks to @GertArnold's comment, I ended up with this query:
ports = ports.Where(p => p.VesselsPorts.Where(vp => vp.Arrival > DateTime.UtcNow.AddDays(-1)).Count() > x);
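For completeness, a minimal sketch of how that count filter can be combined with the earlier projection (names follow the models above; the cutoff variable and the x threshold are illustrative):
int x = 5;
var cutoff = DateTime.UtcNow.AddDays(-1);

// Ports with more than x arrivals in the last day, projected together
// with only those recent VesselsPorts entries.
var busyPorts = _context.Ports
    .Where(p => p.VesselsPorts.Count(vp => vp.Arrival > cutoff) > x)
    .OrderBy(p => p.PortLocode)
    .Select(p => new
    {
        p.PortLocode,
        p.MaxKnownLOA,
        p.MaxKnownBreadth,
        p.MaxKnownDraught,
        RecentArrivals = p.VesselsPorts
            .Where(vp => vp.Arrival > cutoff)
            .Select(vp => new { vp.IMO, vp.Arrival, vp.Departure })
            .ToList()
    });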
When using Entity Framework, people tend to use Include instead of Select to save themselves some typing. It is seldom wise to do so.
The DbContext holds a ChangeTracker. Every complete row from any table that you fetch during the lifetime of the DbContext is stored in the ChangeTracker, together with a clone of its original values. You get a reference to one of these copies. If you change properties of the data you got, they are changed only in that copy; during SaveChanges, the copy is compared to the original to see whether the data must be saved.
So if you fetch quite a lot of data and use Include, every fetched item is cloned. This can slow down your queries considerably.
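As a small illustration (a sketch, not part of the original answer): a tracked query clones everything it fetches into the ChangeTracker, while AsNoTracking, or a projection, skips that work:
// Tracked: every Port and every included VesselPort is cloned into the ChangeTracker.
var tracked = _context.Ports
    .Include(p => p.VesselsPorts)
    .ToList();

// Untracked: same rows, but nothing is registered in the ChangeTracker,
// which is cheaper when you only read the data. Projections with Select
// are likewise not tracked.
var untracked = _context.Ports
    .Include(p => p.VesselsPorts)
    .AsNoTracking()
    .ToList();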
Apart from this cloning, you will probably fetch more properties than you actually plan to use. Database management systems are extremely optimized in combining tables, and searching rows within tables. One of the slower parts is the transfer of the selected data to your local process.
For example, if you have a database with Schools and Students, with the obvious one to many-relation, then every Student will have a foreign key to the School he attends.
So if you ask for School [10] with its 2000 Students, then every Student will have a foreign key value of [10]. If you use Include, you will transfer this same value [10] 2000 times. What a waste of processing power!
In Entity Framework, when querying data, always use Select, and select only the properties that you actually plan to use. Only use Include if you plan to change the fetched items.
Certainly don't use Include to save you some typing!
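For the one case where Include is appropriate, namely when you intend to change the fetched items, a sketch might look like this (someLocode is an illustrative variable):
// Fetch tracked entities because we are going to modify them:
var port = dbContext.Ports
    .Include(p => p.VesselsPorts)
    .Single(p => p.PortLocode == someLocode);

port.MaxKnownDraught = 12.5;
dbContext.SaveChanges(); // the ChangeTracker detects the modification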
Requirement: Give me the Ports with their Vessels
var portsWithTheirVessels = dbContext.Ports
.Where(port => ...) // if you don't want all Ports
.Select(port => new
{
// only select the properties that you want:
PortLocode = port.PortLocode,
MaxKnownLOA = port.MaxKnownLOA,
MaxKnownBreadth = port.MaxKnownBreadth,
MaxKnownDraught = port.MaxKnownDraught,
// The Vessels in this port:
Vessels = port.VesselsPorts.Select(vessel => new
{
// again: only the properties that you plan to use
IMO = vessel.IMO,
...
// do not select the foreign key, you already know the value!
// PortLocode = vessel.PortLocode,
})
.ToList(),
});
Entity Framework knows your one-to-many relation, and knows that if you use the virtual ICollection it should do a (Group-)Join.
Some people prefer to do the Group-Join themselves, or they use a version of entity framework that does not support using the ICollection.
var portsWithTheirVessels = dbContext.Ports.GroupJoin(dbContext.VesselPorts,
port => port.PortLocode, // from every Port take the primary key
vessel => vessel.PortLocode, // from every Vessel take the foreign key to Port
// parameter resultSelector: take every Port with its zero or more Vessels to make one new
(port, vesselsInThisPort) => new
{
PortLocode = port.PortLocode,
...
Vessels = vesselsInThisPort.Select(vessel => new
{
...
})
.ToList(),
});
Alternative:
var portsWithTheirVessels = dbContext.Ports.Select(port => new
{
PortLocode = port.PortLocode,
...
Vessels = dbContext.VesselPorts.Where(vessel => vessel.PortLocode == port.PortLocode)
.Select(vessel => new
{
...
})
.ToList(),
});
Entity framework will translate this also to a GroupJoin.
Related
As an example let's say my database has a table with thousands of ships with every ship potentially having thousands of passengers as a navigation property:
public DbSet<Ship> Ship { get; set; }
public DbSet<Passenger> Passenger { get; set; }
public class Ship
{
public List<Passenger> passengers { get; set; }
//properties omitted for example
}
public class Passenger
{
//properties omitted for example
}
The example use case is that someone is fetching all ships via the API and would like to know for each ship whether it is empty (0 passengers), so the returned JSON will contain a list of ships, each with a bool indicating whether it is empty.
My current code seems very inefficient (including all passengers just to determine if a ship is empty):
List<Ship> ships = dbContext.Ship
.Include(x => x.passengers)
.ToList();
and later when the ships are serialized to JSON:
jsonShip.isEmpty = !ship.passengers.Any();
I would like a more performant (and not bloated) alternative to including all passengers. What options do I have?
I have looked at computed columns, but they only seem to support SQL as a string. If possible I would like to stay in the C# code world, so for example a property which is set correctly by being automatically woven into the SQL query would be optimal.
Create a Data Transfer Object for Ship that reflects the shape of your JSON result, like -
public class ShipDto
{
public int Id { get; set; }
public string Name { get; set; }
public bool IsEmpty { get; set; }
}
Then use projection in your query -
var ships = dbCtx.Ships
.Select(p => new ShipDto
{
Id = p.Id,
Name = p.Name,
IsEmpty = !p.Passengers.Any()
})
.ToList();
Usually, APIs need to produce responses of various shapes and DTOs give you well defined models to represent the shape of your API response. Domain entities are not always suitable for this.
If your domain entity (Ship) has a lot of properties, then copying all those properties in the .Select() method might be cumbersome. You can use AutoMapper to map them for you. AutoMapper has a ProjectTo<T>() method that can generate the SQL and return the projected result. For example, you can achieve the above result with a mapping configuration -
CreateMap<Ship, ShipDto>()
.ForMember(d => d.IsEmpty, opt => opt.MapFrom(s => !s.Passengers.Any()));
and a query -
var ships = Mapper.ProjectTo<ShipDto>(dbCtx.Ships).ToList();
assuming all other properties in ShipDto are named similarly to those in the Ship entity.
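For reference, a minimal sketch of wiring that up with an instance-based configuration and the ProjectTo extension from AutoMapper.QueryableExtensions; the exact setup in your project may differ:
using System.Linq;
using AutoMapper;
using AutoMapper.QueryableExtensions;

var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<Ship, ShipDto>()
        // "passengers" is the navigation property as declared in the question's Ship class
        .ForMember(d => d.IsEmpty, opt => opt.MapFrom(s => !s.passengers.Any())));

// ProjectTo translates the mapping into the SQL projection, so passengers
// are only counted in the database, never loaded into memory.
var ships = dbCtx.Ships.ProjectTo<ShipDto>(config).ToList();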
EDIT :
If you don't want a DTO model -
you can add a NotMapped property in Ship model -
public class Ship
{
public int Id { get; set; }
public string Name { get; set; }
[NotMapped]
public bool IsEmpty { get; set; }
public List<Passenger> passengers { get; set; }
}
and then do the query like -
var ships = dbCtx.Ships
.Select(p => new Ship
{
Id = p.Id,
Name = p.Name,
IsEmpty = !p.Passengers.Any()
})
.ToList();
You can return an anonymous type -
var ships = dbCtx.Ships
.Select(p => new
{
Id = p.Id,
Name = p.Name,
IsEmpty = !p.Passengers.Any()
})
.ToList();
If I understand your intention correctly...
One way is to store the number of passengers inside each Ship entity. This can work well if you use Domain Driven Design, treat the Ship as an aggregate root, and only add or remove passengers through methods exposed on the given Ship entity, e.g. RegisterPassenger() / RemovePassenger(). Inside these methods, increment or decrement the passenger number along with adding or removing the passenger.
Then, obviously, you can query the Ships DbSet with a PassengerCount == 0 projection to the bool you need. And, again obviously, it won't even touch the Passengers table.
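A minimal sketch of that idea (the method names and the PassengerCount property are illustrative, not from the original post):
public class Ship
{
    public int Id { get; set; }
    public int PassengerCount { get; private set; }
    public List<Passenger> passengers { get; set; } = new List<Passenger>();

    public void RegisterPassenger(Passenger passenger)
    {
        passengers.Add(passenger);
        PassengerCount++; // keep the redundant count in sync
    }

    public void RemovePassenger(Passenger passenger)
    {
        if (passengers.Remove(passenger))
            PassengerCount--;
    }
}

// The "is it empty?" query then never touches the Passengers table:
var ships = dbContext.Ship
    .Select(s => new { s.Id, IsEmpty = s.PassengerCount == 0 })
    .ToList();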
In traditional anemic domain ASP.NET systems this sort of data redundancy might be a bit more risky, because properties are usually publicly mutable, and you have multiple services that 'massage' the entities, which is a potential source of data integrity loss.
I am trying to use anonymous types in Entity Framework, but I am getting an error about
Unable to create a constant value
MinQty and MaxQty are int, so I don't know whether I need to add Convert.ToInt32 somewhere?
Unable to create a constant value of type 'Anonymous type'. Only primitive types or enumeration types are supported in this context.
This builds a list object
var listOfLicense = (from l in db.License
select new
{
l.ProductId,
l.MinLicense,
l.MaxLicense
}).ToList();
This is the larger EF query where I am getting the error. Am I missing a cast?
var ShoppingCart = (from sc in db.ShoppingCarts
select new model.ShoppingCart {
ShoppingCartId= sc.Id,
MinQty = (int)listOfLicense
.Where(mt => (int)mt.ProductId == sc.ProductId)
.Select(mt => (int)mt.MinLicense)
.Min(mt => mt.Value),
MaxQty = (int)listOfLicense
.Where(mt => (int)mt.ProductId == p.ProductId)
.Select(mt =>(int) mt.MaxQty)
.Max(mt => mt.Value)}).ToList();
This builds a list object
var listOfLicense = (from l in db.License
select new
{
l.ProductId,
l.MinLicense,
l.MaxLicense
})
The above example does not build a list of objects. It builds a query to return objects of that anonymous type.
This builds an in-memory list of objects of that type:
var listOfLicense = (from l in db.License
select new
{
l.ProductId,
l.MinLicense,
l.MaxLicense
}).ToList();
Using .ToList() here will execute the query and return a materialized list of the anonymous types. From there, your code may work as expected without the exception. However, this is effectively loading the 3 columns from all rows in your database table, which may be a problem as the system matures and rows are added.
The error you are getting isn't a casting issue, it is a translation issue. Because your initial query is still just an EF query (IQueryable), any further querying against it needs to conform to EF's limitations. EF has to be able to translate what your expressions are trying to select back into SQL. In your case, what your real code is trying to do is breaking those rules.
Generally it is better to let EF work with the IQueryable rather than materializing an entire list into memory. Though to accomplish that, we'd need to see either the real code or a minimal reproducible example.
This code:
MinQty = (int)listOfLicense
.Where(mt => (int)mt.ParentProductId == p.ProductId)
.Select(mt => (int)mt.MinLicense)
.Min(mt => mt.Value),
... does not fit the above anonymous type, as the anonymous type has no ParentProductId member. (p seems to be associated with that type, not mt, so there looks to be a lot of query code missing from your example.)
Edit: based on your updated example:
var ShoppingCart = (from sc in db.ShoppingCarts
select new model.ShoppingCart {
ShoppingCartId= sc.Id,
MinQty = (int)listOfLicense
.Where(mt => (int)mt.ProductId == sc.ProductId)
.Select(mt => (int)mt.MinLicense)
.Min(mt => mt.Value),
MaxQty = (int)listOfLicense
.Where(mt => (int)mt.ProductId == p.ProductId)
.Select(mt =>(int) mt.MaxQty)
.Max(mt => mt.Value)}).ToList();
It may be possible to build something like this into a single query expression depending on the relationships between ShoppingCart, Product, and Licence. It almost looks like "Licence" really refers to a "Product" which contains a min and max quantity that you're interested in.
Assuming a structure like:
public class Product
{
[Key]
public int ProductId { get; set; }
public int MinQuantity { get; set; }
public int MaxQuantity { get; set; }
// ...
}
// Here lies a question on how your shopping cart to product relationship is mapped. I've laid out a many-to-many relationship using ShoppingCartItems
public class ShoppingCart
{
[Key]
public int ShoppingCartId { get; set; }
// ...
public virtual ICollection<ShoppingCartItem> ShoppingCartItems { get; set; } = new List<ShoppingCartItem>();
}
public class ShoppingCartItem
{
[Key, Column(Order = 0), ForeignKey("ShoppingCart")]
public int ShoppingCartId { get; set; }
public virtual ShoppingCart ShoppingCart{ get; set; }
[Key, Column(Order = 1), ForeignKey("Product")]
public int ProductId { get; set; }
public virtual Product Product { get; set; }
}
With something like this, to get shopping carts with their product min and max quantities:
var shoppingCarts = db.ShoppingCarts
.Select(sc => new model.ShoppingCart
{
ShoppingCartId = sc.Id,
Products = sc.ShoppingCartItems
.Select(sci => new model.Product
{
ProductId = sci.ProductId,
MinQuantity = sci.Product.MinQuantity,
MaxQuantity = sci.Product.MaxQuantity
}).ToList()
}).ToList();
This would provide a list of Shopping Carts with each containing a list of products with their respective min/max quantities.
If you also wanted a Lowest min quantity and highest max quantity across all products in a cart:
var shoppingCarts = db.ShoppingCarts
.Select(sc => new model.ShoppingCart
{
ShoppingCartId = sc.Id,
Products = sc.ShoppingCartItems
.Select(sci => new model.Product
{
ProductId = sci.ProductId,
MinQuantity = sci.Product.MinQuantity,
MaxQuantity = sci.Product.MaxQuantity
}).ToList(),
OverallMinQuantity = sc.ShoppingCartItems
.Min(sci => sci.Product.MinQuantity),
OverallMaxQuantity = sc.ShoppingCartItems
.Max(sci => sci.Product.MaxQuantity),
}).ToList();
Though I'm not sure how practical a figure like that might be in relation to a shopping cart structure. In any case, with navigation properties set up for the relationships between your entities, EF should be perfectly capable of building an IQueryable query for the data you want to retrieve without resorting to pre-fetching lists. One issue with pre-fetching and re-introducing those lists into further queries is that there is a maximum number of rows EF can handle; like with SQL IN clauses, there is a maximum number of items that can be passed in a set.
In any case, hopefully this has provided you with some ideas to try in order to get to the figures you want.
I have two PostgreSQL tables, "stock_appreciation_rights" and "sar_vest_schedule", which are mapped to the classes "StockAppreciationRights" and "SarVestingUnit". The relationship is this: one SarVestingUnit is associated with one StockAppreciationRights, and one StockAppreciationRights is associated with many SarVestingUnits. Here is the SarVestingUnit class:
public class SarVestingUnit
{
public string UbsId { get; set; }
public short Units { get; set; }
public DateTime VestDate { get; set; }
public virtual StockAppreciationRights Sar { get; set; }
}
Here is the StockAppreciationRights class:
public class StockAppreciationRights
{
public StockAppreciationRights()
{
SarVestingUnits = new HashSet<SarVestingUnit>();
}
public string UbsId { get; set; }
public DateTime GrantDate { get; set; }
public DateTime ExpirationDate { get; set; }
public short UnitsGranted { get; set; }
public decimal GrantPrice { get; set; }
public short UnitsExercised { get; set; }
public virtual ICollection<SarVestingUnit> SarVestingUnits { get; set; }
}
And a snippet from my dbContext:
modelBuilder.Entity<SarVestingUnit>(entity =>
{
entity.HasKey(e => new { e.UbsId, e.VestDate })
.HasName("sar_vest_schedule_pkey");
entity.ToTable("sar_vest_schedule");
entity.Property(e => e.UbsId)
.HasColumnName("ubs_id")
.HasColumnType("character varying");
entity.Property(e => e.VestDate)
.HasColumnName("vest_date")
.HasColumnType("date");
entity.Property(e => e.Units).HasColumnName("units");
entity.HasOne(d => d.Sar)
.WithMany(p => p.SarVestingUnits)
.HasForeignKey(d => d.UbsId)
.OnDelete(DeleteBehavior.ClientSetNull)
.HasConstraintName("sar_vest_schedule_ubs_id_fkey")
.IsRequired();
});
Here is the behavior I don't understand. If I get just the StockAppreciationRights from my dbcontext like this:
var x = new personalfinanceContext();
//var test = x.SarVestingUnit.ToList();
var pgSARS = x.StockAppreciationRights.ToList();
The SarVestingUnits collection is empty in my StockAppreciationRights objects.
Likewise, if I just get the SarVestingUnits and not the StockAppreciationRights like this:
var x = new personalfinanceContext();
var test = x.SarVestingUnit.ToList();
//var pgSARS = x.StockAppreciationRights.ToList();
The Sar property in my SarVestingUnit objects is null.
However, if I get them both like this:
var x = new personalfinanceContext();
var test = x.SarVestingUnit.ToList();
var pgSARS = x.StockAppreciationRights.ToList();
Then everything is populated as it should be. Can someone explain this behavior? Obviously I'm new to entity framework and PostgreSQL.
From your described behaviour it seems that lazy loading is either disabled or not available (i.e. EF Core before 2.1, or the lazy-loading proxies are not set up).
Lazy loading would assign proxies for related references that it hasn't loaded so that if those are later accessed, a DB query against the DbContext would be made to retrieve them. This allows data to be retrieved "as required" but can result in a significant performance penalty.
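If you do want lazy loading on EF Core 2.1 or later, it has to be opted into explicitly. A sketch (it assumes the Microsoft.EntityFrameworkCore.Proxies package; your navigation properties are already virtual, which is required):
// In personalfinanceContext:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseLazyLoadingProxies()       // from Microsoft.EntityFrameworkCore.Proxies
        .UseNpgsql(connectionString);  // provider call assumed from your PostgreSQL setup; connectionString defined elsewhere
}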
Alternatively you can eager-load related data. For example:
var pgSARS = context.StockAppreciationRights.Include(x => x.SarVestingUnits).ToList();
would tell EF to load the StockAppreciation rights and pre-fetch any vesting units for each of those entries. You can use Include and ThenInclude to drill down and pre-fetch any number of dependencies.
The reason your final example appears to work is that EF will auto-populate relations that the context knows about. When you load the first set, it won't resolve any of the related entities, however, loading the second set and EF already knows about the related other entities and will auto-set those references for you.
The real power of EF though is not dealing with entities as a 1-to-1 mapping to tables (outside of editing/inserting) but rather leveraging projection to pull relational data to populate what you need. Leveraging Select or AutoMapper's ProjectTo methods means you can extract whatever data you want through the mapped EF relationships, and EF can build an efficient query to fetch it rather than you worrying about lazy or eager loading. For instance, if you want to get a list of vesting units with their associated right details:
var sars = context.SarVestingUnit.Select(x => new SARSummary
{
UbsId = x.UbsId,
Units = x.Units,
VestDate = x.VestDate,
GrantDate = x.Sar.GrantDate,
ExpirationDate = x.Sar.ExpirationDate,
UnitsGranted = x.Sar.UnitsGranted,
GrantPrice = x.Sar.GrantPrice
}).ToList();
From within the Select statement you can access any of the related details to flatten into a view model that serves your immediate needs, or even compose a hierarchy of view models simplified for what the view/consumer needs.
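The SARSummary view model referenced above isn't defined in the answer; a sketch matching the projected properties might look like:
public class SARSummary
{
    public string UbsId { get; set; }
    public short Units { get; set; }
    public DateTime VestDate { get; set; }
    public DateTime GrantDate { get; set; }
    public DateTime ExpirationDate { get; set; }
    public short UnitsGranted { get; set; }
    public decimal GrantPrice { get; set; }
}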
I am trying to add a new record to a SQL database using EF. The code looks like:
public void Add(QueueItem queueItem)
{
var entity = queueItem.ApiEntity;
var statistic = new Statistic
{
Ip = entity.Ip,
Process = entity.ProcessId,
ApiId = entity.ApiId,
Result = entity.Result,
Error = entity.Error,
Source = entity.Source,
DateStamp = DateTime.UtcNow,
UserId = int.Parse(entity.ApiKey),
};
_statisticRepository.Add(statistic);
unitOfWork.Commit();
}
There are Api and User navigation properties in the Statistic entity which I want to load into the new Statistic entity. I have tried to load the navigation properties using the code below, but it produces large queries and decreases performance. Any suggestion how to load the navigation properties in another way?
public Statistic Add(Statistic statistic)
{
_context.Statistic.Include(p => p.Api).Load();
_context.Statistic.Include(w => w.User).Load();
_context.Statistic.Add(statistic);
return statistic;
}
Some of you may question why I want to load navigation properties while adding a new entity; it's because I perform some calculations in DbContext.SaveChanges() before the entity is moved to the database. The code looks like:
public override int SaveChanges()
{
var addedStatistics = ChangeTracker.Entries<Statistic>().Where(e => e.State == EntityState.Added).ToList().Select(p => p.Entity).ToList();
var userCreditsGroup = addedStatistics
.Where(w => w.User != null)
.GroupBy(g => g.User )
.Select(s => new
{
User = s.Key,
Count = s.Sum(p=>p.Api.CreditCost)
})
.ToList();
//Skip code
}
So the LINQ above will not work without loading the navigation properties, because it uses them.
I am also adding the Statistic entity for the full picture:
public class Statistic : Entity
{
public Statistic()
{
DateStamp = DateTime.UtcNow;
}
public int Id { get; set; }
public string Process { get; set; }
public bool Result { get; set; }
[Required]
public DateTime DateStamp { get; set; }
[MaxLength(39)]
public string Ip { get; set; }
[MaxLength(2083)]
public string Source { get; set; }
[MaxLength(250)]
public string Error { get; set; }
public int UserId { get; set; }
[ForeignKey("UserId")]
public virtual User User { get; set; }
public int ApiId { get; set; }
[ForeignKey("ApiId")]
public virtual Api Api { get; set; }
}
As you say, the following operations against your context will generate large queries:
_context.Statistic.Include(p => p.Api).Load();
_context.Statistic.Include(w => w.User).Load();
These materialise the object graphs for all statistics and associated Api entities, and then all statistics and associated User entities, into the statistics context.
Just replacing this with a single call as follows will reduce this to a single round trip:
_context.Statistic.Include(p => p.Api).Include(w => w.User).Load();
Once these have been loaded, the entity framework change tracker will fixup the relationships on the new statistics entities, and hence populate the navigation properties for api and user for all new statistics in one go.
Depending on how many new statistics are being created in one go versus the number of existing statistics in the database I quite like this approach.
However, looking at the SaveChanges method it looks like the relationship fixup is happening once per new statistic. I.e. each time a new statistic is added you are querying the database for all statistics and associated api and user entities to trigger a relationship fixup for the new statistic.
In which case I would be more inclined to do the following:
_context.Statistics.Add(statistic);
_context.Entry(statistic).Reference(s => s.Api).Load();
_context.Entry(statistic).Reference(s => s.User).Load();
This will only query for the Api and User of the new statistic rather than for all statistics, i.e. you will generate 2 single-row database queries for each new statistic.
Alternatively, if you are adding a large number of statistics in one batch, you could make use of the Local cache on the context by preloading all users and api entities upfront. I.e. take the hit upfront to pre cache all user and api entities as 2 large queries.
// preload all api and user entities
_context.Apis.Load();
_context.Users.Load();
// batch add new statistics
foreach(var statistic in statisticsToAdd)
{
statistic.User = _context.Users.Local.Single(x => x.Id == statistic.UserId);
statistic.Api = _context.Apis.Local.Single(x => x.Id == statistic.ApiId);
_context.Statistics.Add(statistic);
}
Would be interested to find out if Entity Framework does relationship fixup from its local cache.
I.e. if the following would populate the navigation properties from the local cache on all the new statistics. Will have a play later.
_context.ChangeTracker.DetectChanges();
Disclaimer: all code entered directly into browser so beware of the typos.
Sorry, I don't have the time to test that, but EF maps entities to objects. Therefore shouldn't simply assigning the object work?
public void Add(QueueItem queueItem)
{
var entity = queueItem.ApiEntity;
var statistic = new Statistic
{
Ip = entity.Ip,
Process = entity.ProcessId,
//ApiId = entity.ApiId,
Api = _context.Apis.Single(a => a.Id == entity.ApiId),
Result = entity.Result,
Error = entity.Error,
Source = entity.Source,
DateStamp = DateTime.UtcNow,
//UserId = int.Parse(entity.ApiKey),
User = _context.Users.Single(u => u.Id == int.Parse(entity.ApiKey))
};
_statisticRepository.Add(statistic);
unitOfWork.Commit();
}
I did a little guessing at your naming; you should adjust it before testing.
How about making a lookup and loading only the necessary columns?
private readonly Dictionary<int, UserKeyType> _userKeyLookup = new Dictionary<int, UserKeyType>();
I'm not sure how you create the repository; you might need to clear the lookup once saving changes has completed, or at the beginning of the transaction.
_userKeyLookup.Clear();
First look in the lookup; if the key is not found, load it from the context.
public Statistic Add(Statistic statistic)
{
// _context.Statistic.Include(w => w.User).Load();
UserKeyType key;
if (_userKeyLookup.ContainsKey(statistic.UserId))
{
key = _userKeyLookup[statistic.UserId];
}
else
{
key = _context.Users.Where(u => u.Id == statistic.UserId).Select(u => u.Key).FirstOrDefault();
_userKeyLookup.Add(statistic.UserId, key);
}
statistic.User = new User { Id = statistic.UserId, Key = key };
// similar code for api..
// _context.Statistic.Include(p => p.Api).Load();
_context.Statistic.Add(statistic);
return statistic;
}
Then change the grouping a little.
var userCreditsGroup = addedStatistics
.Where(w => w.User != null)
.GroupBy(g => g.User.Id)
.Select(s => new
{
User = s.First().User,
Count = s.Sum(p=>p.Api.CreditCost)
})
.ToList();
I am using EF 4.1 "code first" to create my db and objects.
Given:
public class Order
{
public int Id { get; set; }
public string Name { get; set; }
public virtual OrderType OrderType { get; set; }
}
public class OrderType
{
public int Id { get; set; }
public string Name { get; set; }
}
An order has one OrderType. An order type is just a lookup table; the values don't change. Using the Fluent API:
//Order
ToTable("order");
HasKey(key => key.Id);
Property(item => item.Id).HasColumnName("order_id").HasColumnType("int");
Property(item => item.Name).HasColumnName("name").HasColumnType("nvarchar").HasMaxLength(10).IsRequired();
HasRequired(item => item.OrderType).WithMany().Map(x => x.MapKey("order_type_id")).WillCascadeOnDelete(false);
//OrderType
ToTable("order_type");
HasKey(key => key.Id);
Property(item => item.Id).HasColumnName("order_type_id").HasColumnType("int");
Property(item => item.Name).HasColumnName("name").HasColumnType("nvarchar").HasMaxLength(100).IsRequired();
Now in our App we load all our lookup data and cache it.
var order = new Order
{
Name = "Bob",
OrderType = GetFromOurCache(5) //Get order type for id 5
};
var db = _db.GetContext();
db.Order.Add(order);
db.SaveChanges();
Our you-beaut order is saved, but with a new order type, courtesy of EF. So now we have two identical order types in our database. What can I do to alter this behaviour?
TIA
With EF 4.1 you can do this before calling SaveChanges:
db.Entry(order.OrderType).State = EntityState.Unchanged;
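In context, a sketch based on the code from the question:
var order = new Order
{
    Name = "Bob",
    OrderType = GetFromOurCache(5) // cached, detached OrderType
};

var db = _db.GetContext();
db.Order.Add(order); // marks the whole graph, including OrderType, as Added
db.Entry(order.OrderType).State = EntityState.Unchanged; // keep the existing lookup row as-is
db.SaveChanges();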
As an alternative to Yakimych's solution, you can attach the OrderType to the context before you add the order, to let EF know that the OrderType already exists in the database:
var order = new Order
{
Name = "Bob",
OrderType = GetFromOurCache(5) //Get order type for id 5
};
var db = _db.GetContext();
db.OrderTypes.Attach(order.OrderType);
db.Order.Add(order);
db.SaveChanges();
Yakimych / Slauma - thanks for the answers. Interestingly, I tried both ways and neither worked, hence I asked the question. Your answers confirmed that I must be doing something wrong, and sure enough I wasn't managing my dbContext properly.
Still, it's a pain that EF automatically wants to insert lookup/static data even when you supply the full object (including the lookup's unique Id). It puts the onus on the developer to remember to set the state. To make things a little easier I do:
var properties = entry.GetType().GetProperties().Where(x => x.PropertyType.GetInterface(typeof(ISeedData).Name) != null);
foreach (var staticProperty in properties)
{
var n = staticProperty.GetValue(entry, null);
Entry(n).State = EntityState.Unchanged;
}
in SaveChanges override.
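For context, a sketch of how that loop might sit inside a SaveChanges override (the ISeedData marker interface and the way added entries are enumerated are assumptions based on the snippet above):
public override int SaveChanges()
{
    // For every entity about to be inserted, mark any referenced
    // lookup/seed objects (flagged via ISeedData) as Unchanged so
    // EF does not insert duplicates of them.
    var addedEntities = ChangeTracker.Entries()
        .Where(e => e.State == EntityState.Added)
        .Select(e => e.Entity)
        .ToList();

    foreach (var entry in addedEntities)
    {
        var properties = entry.GetType().GetProperties()
            .Where(x => x.PropertyType.GetInterface(typeof(ISeedData).Name) != null);

        foreach (var staticProperty in properties)
        {
            var n = staticProperty.GetValue(entry, null);
            if (n != null)
                Entry(n).State = EntityState.Unchanged;
        }
    }

    return base.SaveChanges();
}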
Again thanks for the help.