EF returning null, but entity exists - C#

I'm trying to load some user info in code, but EF returns null.
foreach (var user in allAutoUsers)
{
    Wallet wallet = db.CabinetUsers
        .Find(user.IdCabinetUser)?
        .Wallets
        .FirstOrDefault(x => x.TypeCurrency == currency);
}
The user variable reports 1 wallet, but when I try to get it in the code above, it returns null.
Is there a way to solve this problem?

Read up on LINQ expressions rather than relying on Find. You could be running into issues if the relationships between your entities are not declared as virtual, which would prevent EF from lazy loading them.
The issue with using .Find() is that it will return the entity if it exists; however, accessing any related property that the DbContext isn't already aware of will require a lazy load call. This can easily be missed if lazy loading is disabled or the member isn't virtual, and it can be a performance issue when it is enabled.
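For reference, here is a minimal sketch (the entity shape is assumed, not taken from your code) of what lazy loading after Find() would require:
public class CabinetUser
{
    public int IdCabinetUser { get; set; }

    // Must be virtual (with lazy loading enabled) for EF to populate it on access;
    // otherwise reading Wallets after Find() yields null/empty and the
    // FirstOrDefault above returns null even though the row exists.
    public virtual ICollection<Wallet> Wallets { get; set; } = new List<Wallet>();
}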
Instead, LINQ lets you query through the object graph directly to get what you want:
foreach (var user in allAutoUsers)
{
    Wallet wallet = db.CabinetUsers
        .Where(x => x.IdCabinetUser == user.IdCabinetUser)
        .SelectMany(x => x.Wallets)
        .SingleOrDefault(x => x.TypeCurrency == currency);

    // do stuff with wallet...
}
This assumes that there will be only one wallet for the specified currency per user. When expecting 0 or 1, use SingleOrDefault. Use FirstOrDefault only when you are expecting zero or more, want the "first" one, and have specified an OrderBy clause to ensure the first item is predictable.
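For example, a hedged sketch of the FirstOrDefault case (the ordering column is illustrative; pick whatever makes "first" meaningful for your data):
Wallet wallet = db.CabinetUsers
    .Where(x => x.IdCabinetUser == user.IdCabinetUser)
    .SelectMany(x => x.Wallets)
    .Where(w => w.TypeCurrency == currency)
    .OrderByDescending(w => w.IdWallet) // e.g. the most recently created wallet wins
    .FirstOrDefault();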
This will still result in a query per user. To accomplish the same with one query for all users:
var userIds = allAutoUsers.Select(x => x.IdCabinetUser).ToList();

var userWallets = db.CabinetUsers
    .Where(x => userIds.Contains(x.IdCabinetUser))
    .Select(x => new
    {
        x.IdCabinetUser,
        Wallet = x.Wallets
            .SingleOrDefault(w => w.TypeCurrency == currency)
    }).ToList();
From this I would consider expanding the wallet selection with a Select for the details in the Wallet you actually care about, rather than a reference to the entire wallet entity. This has the benefit of speeding up the query, reducing memory use, and avoiding potential lazy load calls tripping things up if Wallet references any other entities that get touched later on.
For example if you only need the IdWallet, WalletName, TypeCurrency, and Balance:
// replace this line from above...
Wallet = x.Wallets
    .SingleOrDefault(w => w.TypeCurrency == currency)
// with ...
Wallet = x.Wallets
    .Where(w => w.TypeCurrency == currency)
    .Select(w => new
    {
        w.IdWallet,
        w.WalletName,
        w.TypeCurrency,
        w.Balance
    }).SingleOrDefault() // ...
From there you can foreach to your heart's content without extra queries:
foreach (var userWallet in userWallets)
{
    // do stuff with userWallet.Wallet and userWallet.IdCabinetUser.
}
If you want to return the wallet details to a calling method or view, then you cannot use an anonymous type (new { }). Instead, define a simple class for the data you want to return and Select into that, i.e. new WalletDTO { IdWallet = w.IdWallet, /* ... */ }. Even if you use entities, it is recommended to reduce these into DTOs or view models rather than returning entities. Entities should not "live" longer than the DbContext that spawned them, otherwise you get all kinds of nasty behaviour popping up, like ObjectDisposedException and serialization exceptions.
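A minimal sketch of that DTO approach (property types are assumptions; the names mirror the fields used above):
public class WalletDTO
{
    public int IdWallet { get; set; }
    public string WalletName { get; set; }
    public string TypeCurrency { get; set; }
    public decimal Balance { get; set; }
}

// ... then project into it inside the Select:
Wallet = x.Wallets
    .Where(w => w.TypeCurrency == currency)
    .Select(w => new WalletDTO
    {
        IdWallet = w.IdWallet,
        WalletName = w.WalletName,
        TypeCurrency = w.TypeCurrency,
        Balance = w.Balance
    }).SingleOrDefault()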


Infinite loop in a circular reference with Entity Framework and .NET Core

My question is: why does Entity Framework Core automatically load nested navigation collections when using .Include? From my understanding I am supposed to specify .ThenInclude, but even without it, they get loaded.
The details
I have an unavoidable circular reference and specifically for that reason I avoided lazy loading, so that I could hard specify what should be included.
My data model - I have highlighted the circular reference:
A user should be able to subscribe to a bunch of categories.
A job should belong to one or more category
What I want to achieve
I built an endpoint that takes 3 optional parameters, a userId, JobId and a collection of categories.
I want to return a collection of jobs filtered by the above parameters
My attempt that is causing the infinite loop
var user = await UserAccessor.GetAppUser();

var result = DataContext.Jobs
    .Include(job => job.Customer)
    .Include(job => job.Categories)
    .AsNoTracking()
    .AsQueryable();

if (request.JobId != null)
{
    result = result.Where(job => job.Id == request.JobId);
}

if (request.UserEmail != null)
{
    result = result.Where(job => job.Customer.Email == request.UserEmail);
}

if (request.Categories != null)
{
    var categories = await CategoryService.ConvertStringToCategories(request.Categories);
    result = result.Where(job => job.Categories.Intersect(categories).Any());
}

return Mapper.Map<List<Job>, List<JobDto>>(await result.ToListAsync());
If I put a result.ToList() anywhere in the code and look at the actual values, it seems to go into an infinite loop even though I never specified .ThenInclude().
Have a look at the screenshot below; the infinite loop goes as follows:
Collection of jobs (this is the result variable)
Job has a collection of Categories (this I included with .Include(job => job.Categories))
Category has a collection of Jobs (this should be null as I didn't include it with a .ThenInclude(), but it is not, argh)

Updating nested list without AsNoTracking

Simplified model:
Profile {Guid Id, string Name, List<Property> Properties}
Property {Guid Id, string Name, List<Type> Types}
Type {Guid Id, string Key, string Value}
DbContext:
{
public DbSet<Profile> Profiles { get; set; }
}
I didn't include Properties and Types in the DbContext so I used the ModelBuilder:
modelBuilder.Entity<Property>().HasMany<Type>();
In the Update service:
public async Task<Response> Update([FromBody] Profile profile)
{
    var entity = await _context.Profiles
        .Include(x => x.Properties)
        .ThenInclude(x => x.Types)
        .FirstOrDefaultAsync(x => x.Id == profile.Id);

    foreach (var prop in profile.Properties)
    {
        var existingProp = entity.Properties.SingleOrDefault(a => a.Id == prop.Id);

        // Update
        if (existingProp != null)
        {
            var entry = _context.Entry(existingProp);
            entry.State = EntityState.Modified;
            existingProp.ChargeFrom(prop); // maps the new values to the db entity
            _context.SaveChanges();
        }
    }
}
But the above code throws this exception at SaveChanges:
The instance of entity type 'Type' cannot be tracked because another
instance with the same key value for {'Id'} is already being tracked.
When attaching existing entities, ensure that only one entity instance
with a given key value is attached. Consider using
'DbContextOptionsBuilder.EnableSensitiveDataLogging' to see the
conflicting key values.
I marked the Types entity AsNoTracking:
.ThenInclude(x => x.Types).AsNoTracking()
and the problem is solved, but I don't know why this exception is thrown. Another thread mentions that the DbContext might be used by another process, or might be registered as a singleton, but in my case it's registered as scoped.
I marked the Types entity AsNoTracking:
.ThenInclude(x => x.Types).AsNoTracking()
and the problem is solved, but I don't know why this exception is thrown
The reason for the error is that this line:
existingProp.ChargeFrom(prop); // maps the new values to the db entity
... will be attempting to copy the untracked Types from prop into existingProp. Using AsNoTracking removes the exception, but it will most likely result in duplicated data on SaveChanges where Type is set up with an identity key, or in duplicate row exceptions otherwise. If you received no exception, I would check the Types collection to see whether duplicate rows are appearing there.
When copying data across from an untracked entity to a tracked entity, you will want to ensure that only values, and not references, are copied across. Copying an untracked reference across, EF will treat that as a new entity by default. Even if you force its state over to Modified, the DbContext could already be tracking an entity with that ID.
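A hedged illustration of what "values, not references" means here, using the simplified model above (any extra scalar columns are hypothetical):
// Copy scalar values from the untracked prop onto the tracked existingProp...
existingProp.Name = prop.Name;
// existingProp.SomeOtherValue = prop.SomeOtherValue; // any other scalar columns

// ...but never copy prop.Types (untracked references) across directly;
// reconcile the Types association against tracked instances instead (see below).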
If Property.Types is a collection of references, such as an association to a lookup, and these could change where associations are added and removed, then to apply changes you need to load the associated types from the database then use that to remove associations that are no longer valid and add ones that aren't currently associated.
For example: given a Property (PropertyA) with Types (Type1) and (Type2), if we edit it to have (Type1) and (Type3), we need to fetch Type1 & Type3 from the DbContext (tracked), then compare against the tracked PropertyA to determine that Type2 should be removed and Type3 added.
var entity = await _context.Profiles
    .Include(x => x.Properties)
    .ThenInclude(x => x.Types)
    .SingleAsync(x => x.Id == profile.Id);

// Get the IDs for all Types we want to associate... In the above example this would
// ask for Type1 and Type3 if there were only the one property. We get a Distinct list
// because multiple properties might reference the same TypeId(s).
var referencedTypeIds = profile.Properties
    .SelectMany(x => x.Types.Select(t => t.Id))
    .Distinct()
    .ToList();

// Load references to all Types that will be needed. Where new types are associated, these will be referenced.
var referencedTypes = _context.Types
    .Where(x => referencedTypeIds.Contains(x.Id))
    .ToList();

foreach (var prop in profile.Properties)
{
    var existingProp = entity.Properties.SingleOrDefault(x => x.Id == prop.Id);
    if (existingProp == null)
        continue;

    var updatedTypeIds = prop.Types.Select(x => x.Id).ToList();
    var existingTypeIds = existingProp.Types.Select(x => x.Id).ToList();
    var addedTypeIds = updatedTypeIds.Except(existingTypeIds).ToList();
    var removedTypeIds = existingTypeIds.Except(updatedTypeIds).ToList();

    var addedTypes = referencedTypes
        .Where(x => addedTypeIds.Contains(x.Id))
        .ToList();
    var removedTypes = existingProp.Types
        .Where(x => removedTypeIds.Contains(x.Id))
        .ToList();

    foreach (var removedType in removedTypes)
        existingProp.Types.Remove(removedType);
    foreach (var addedType in addedTypes)
        existingProp.Types.Add(addedType);
}
If instead the type is a child row that contains properties that can be updated then these values should be copied across between the updated data and the existing data state. This adds a considerable amount of work, though tools like AutoMapper can be configured to help. You still need to manage cases where Types can be added, removed, or have contents changed. That would apply to Properties as well, as your example only handles cases where a property is updated, not potentially added or removed.
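As a hedged sketch (assuming AutoMapper is in play and these mappings suit your model), a profile like this copies editable values while leaving keys and navigation references alone:
public class UpdateMappingProfile : AutoMapper.Profile
{
    public UpdateMappingProfile()
    {
        // Copy editable values onto the tracked Property, but never its key or its
        // Types references; associations are reconciled separately as shown above.
        CreateMap<Property, Property>()
            .ForMember(d => d.Id, opt => opt.Ignore())
            .ForMember(d => d.Types, opt => opt.Ignore());

        CreateMap<Type, Type>()
            .ForMember(d => d.Id, opt => opt.Ignore());
    }
}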
Ultimately it can be beneficial to try and structure update scenarios to be as atomic as possible to avoid an update that is going to make changes to an entire object graph of entities, properties, and types but rather one update for just entity values, one for property values, and one for a single type update. This would apply also to adding a property, adding a type, removing a property, removing a type. While it may look like more code to break up operations like this, it keeps them very simple and straight forward rather than one big, complex method trying to compare the before and after to figure out what to add, remove, and update. Bugs hide in complex code, not simple methods. :)
When editing an object graph you should also avoid calling SaveChanges more than once. Instead of calling it within the loop of properties it should be done once when the loop is completed. The reason for this is that something like an exception on one of the properties would result in an incomplete/invalid data state being persisted. If you have 4 properties in the object being saved, and the 3rd one fails with an exception for any reason, the first 2 will be updated with the last two not persisting. Generally within an update operation, the update should follow an "all or nothing" approach to persistence.
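For example, a sketch of the earlier update loop adjusted so persistence happens once (assuming ChargeFrom copies scalar values only):
foreach (var prop in profile.Properties)
{
    var existingProp = entity.Properties.SingleOrDefault(a => a.Id == prop.Id);
    if (existingProp == null)
        continue;

    existingProp.ChargeFrom(prop); // copy values onto the tracked entity
}

// One commit for the whole object graph: either everything persists or nothing does.
await _context.SaveChangesAsync();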
Hopefully that helps explain the behavior you are seeing and gives you something to consider moving forward.

Entity Framework Include directive not getting all expected related rows

While debugging some performance issues I discovered that Entity Framework was loading a lot of records via lazy loading (900 extra query calls ain't fast!), even though I was sure I had the correct Include. I've managed to get this down to quite a small test case to demonstrate the confusion I'm having. The actual use case is more complex, so I don't have a lot of scope to rework the signature of what I'm doing, but hopefully this is a clear example of the issue.
Documents have many related MetaInfo rows. I want to get all documents grouped by MetaInfo rows with a specific value, but I want all the MetaInfo rows included so I don't have to fire off a new request for each Document's MetaInfo.
So I've got the following Query.
ctx.Configuration.LazyLoadingEnabled = false;

var DocsByCreator = ctx.Documents
    .Include(d => d.MetaInfo) // Load all the MetaInfo for each document
    .SelectMany(d => d.MetaInfo.Where(m => m.Name == "Author") // For each Author
        .Select(m => new { Doc = d, Creator = m })) // Create an object with the Author and the Document they authored.
    .ToList(); // Materialize the collection
I expected this to have all the Document/Author pairs, and to have each Document's MetaInfo property fully populated.
That's not what happens: I get the Document objects and the Authors just fine, but each Document's MetaInfo property ONLY has the MetaInfo objects with Name == "Author".
If I move the Where clause out of the SelectMany it does the same, unless I move it to after materialisation (which might not be a big deal here, but it is in the real application, as it means we're pulling far more data than we want to deal with).
After playing with a bunch of different ways to do this, it really looks like the issue appears when you do a Select(...new...) as well as the Where and the Include. Doing the Select or the Where clause after materialisation makes the data appear the way I expected it to.
I figured it was an issue with the MetaInfo property of Document being filtered, so I rewrote it as follows to test the theory, and was surprised to find that this also gives the same (I think wrong) result.
ctx.Configuration.LazyLoadingEnabled = false;

var DocsByCreator = ctx.Meta
    .Where(m => m.Name == "Author")
    .Include(m => m.Document.MetaInfo) // Load all the MetaInfo for the Document
    .Select(m => new { Doc = m.Document, Creator = m })
    .ToList(); // Materialize the collection
Since we're not putting the Where on the Document.MetaInfo property, I expected this to bypass the problem, but strangely it doesn't: the documents still only appear to have the "Author" MetaInfo objects.
I've created a simple test project with a bunch of test cases and uploaded it to GitHub. As far as I can tell they should all pass, but only the ones with premature materialisation pass.
https://github.com/Robert-Laverick/EFIncludeIssue
Anyone got any theories? Am I abusing EF / SQL in some way I'm missing? Is there anything I can do differently to get the same organisation of results? Is this a bug in EF that's just been hidden from view by the LazyLoad being on by default, and it being a bit of an odd group type operation?
This is a limitation in EF in that Includes will be ignored if the scope of the entities returned is changed from where the include was introduced.
I couldn't find the reference to this for EF6, but it is documented for EF Core (https://learn.microsoft.com/en-us/ef/core/querying/related-data, see "Ignored includes"). I suspect it is a limit in place to stop EF's SQL generation from going completely AWOL in certain scenarios.
So while var docs = context.Documents.Include(d => d.Metas) would return the Metas eager loaded against each document, as soon as you .SelectMany() you are changing what EF is supposed to return, so the Include statement is ignored.
If you want to return all documents, and include a property that is their author:
var DocsByCreator = ctx.Documents
    .Include(d => d.MetaInfo)
    .ToList() // Materialize the documents and their Metas.
    .SelectMany(d => d.MetaInfo.Where(m => m.Name == "Author") // For each Author
        .Select(m => new { Doc = d, Creator = m })) // Create an object with the Author and the Document they authored.
    .ToList(); // grab your collection of Doc and Author.
If you only want documents that have authors:
var DocsByCreator = ctx.Documents
    .Include(d => d.MetaInfo)
    .Where(d => d.MetaInfo.Any(m => m.Name == "Author"))
    .ToList() // Materialize the documents and their Metas.
    .SelectMany(d => d.MetaInfo.Where(m => m.Name == "Author") // For each Author
        .Select(m => new { Doc = d, Creator = m })) // Create an object with the Author and the Document they authored.
    .ToList(); // grab your collection of Doc and Author.
This means you will want to be sure that all of your filtering logic is done above that first .ToList() call. Alternatively, you can consider resolving the Author meta after the query, such as when view models are populated, or via an unmapped "Author" property on Document that resolves it. I generally avoid unmapped properties though, because if their use slips into an EF query you get a nasty error at runtime.
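A hedged sketch of that unmapped-property option (the entity and property names are assumed from the question):
public class Document
{
    public int DocumentId { get; set; }
    public virtual ICollection<Meta> MetaInfo { get; set; } = new List<Meta>();

    [NotMapped] // resolved in memory only; never reference this inside an EF query
    public Meta Author => MetaInfo.FirstOrDefault(m => m.Name == "Author");
}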
Edit: Based on the requirement to skip & take I would recommend utilizing view models to return data rather than returning entities. Using a view model you can instruct EF to return just the raw data you need, compose the view models with either simple filler code or utilizing Automapper which plays nicely with IQueryable and EF and can handle most deferred cases like this.
For example:
public class DocumentViewModel
{
    public int DocumentId { get; set; }
    public string Name { get; set; }
    public ICollection<MetaViewModel> Metas { get; set; } = new List<MetaViewModel>();

    [NotMapped]
    public string Author // This could be updated to be a Meta, or a specialized view model.
    {
        get { return Metas.SingleOrDefault(x => x.Name == "Author")?.Value; }
    }
}

public class MetaViewModel
{
    public int MetaId { get; set; }
    public string Name { get; set; }
    public string Value { get; set; }
}
Then the query:
var viewModels = context.Documents
    .Select(x => new DocumentViewModel
    {
        DocumentId = x.DocumentId,
        Name = x.Name,
        Metas = x.Metas.Select(m => new MetaViewModel
        {
            MetaId = m.MetaId,
            Name = m.Name,
            Value = m.Value
        }).ToList()
    })
    .OrderBy(x => x.DocumentId) // Skip/Take requires a deterministic order
    .Skip(pageNumber * pageSize)
    .Take(pageSize)
    .ToList();
The relationship of an "author" to a document is implied, not enforced, at the data level. This solution keeps the entity models "pure" to the data representation and lets the code handle transforming that implied relationship into exposing a document's author.
The .Select() population can be handled by Automapper using .ProjectTo<TViewModel>().
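A hedged sketch of that option, assuming an AutoMapper configuration mapping the entities to the view models above:
using AutoMapper;
using AutoMapper.QueryableExtensions;

var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Document, DocumentViewModel>();
    cfg.CreateMap<Meta, MetaViewModel>();
});

var viewModels = context.Documents
    .ProjectTo<DocumentViewModel>(config) // folded into the SQL projection; no Include needed
    .OrderBy(x => x.DocumentId)
    .Skip(pageNumber * pageSize)
    .Take(pageSize)
    .ToList();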
By returning view models rather than entities you avoid issues like this where .Include() operations get invalidated, avoid the temptation of detaching and reattaching entities between different contexts, and improve performance and resource usage by selecting and transmitting only the data needed. You also avoid lazy-load serialization issues if you forget to disable lazy loading, or unexpected null data when it is disabled.

How to Performance Test This and Suggestions to Make Faster?

I seem to have written a very slow piece of code which gets slower when I have to deal with EF Core.
Basically I have a list of items that store attributes in a JSON string in the database, as I am storing many different items with different attributes.
I then have another table that contains the display order for each attribute, so when I send the items to the client I order them based on that order.
It is kinda slow, taking about 18-30 seconds for 700 records (measured from where I start my timer, not the whole block of code).
var itemDtos = new List<InventoryItemDto>();

var inventoryItems = dbContext.InventoryItems
    .Where(x => x.InventoryCategoryId == categoryId);
var inventorySpecifications = dbContext.InventoryCategorySpecifications
    .Where(x => x.InventoryCategoryId == categoryId)
    .Select(x => x.InventorySpecification);

Stopwatch a = new Stopwatch();
a.Start();

foreach (var item in inventoryItems)
{
    var specs = JObject.Parse(item.Attributes);
    var specDtos = new List<SpecDto>();

    foreach (var inventorySpecification in inventorySpecifications.OrderBy(x => x.DisplayOrder))
    {
        if (specs.ContainsKey(inventorySpecification.JsonKey))
        {
            var value = specs.GetValue(inventorySpecification.JsonKey);
            var newSpecDto = new SpecDto()
            {
                Key = inventorySpecification.JsonKey,
                Value = value.ToString()
            };
            specDtos.Add(newSpecDto);
        }
    }

    var dto = new InventoryItemDto()
    {
        // create dto
    };
    itemDtos.Add(dto);
}
Now it goes crazy slow when I add some more EF columns that I need info from.
In the //create dto area I access some information from other tables
var dto = new InventoryItemDto()
{
// access brand columns
// access company columns
// access branch columns
// access country columns
// access state columns
};
Trying to access these columns in the loop takes 6 minutes to process 700 rows.
I don't understand why it is so slow, it's the only change I really made and I made sure to eager load everything in.
To me it almost makes me think eager loading is not working, but I don't know how to verify if it is or not.
var inventoryItems = dbContext.InventoryItems
    .Include(x => x.Branch).ThenInclude(x => x.Company)
    .Include(x => x.Branch).ThenInclude(x => x.Country)
    .Include(x => x.Branch).ThenInclude(x => x.State)
    .Include(x => x.Brand)
    .Where(x => x.InventoryCategoryId == categoryId)
    .ToList();
So I thought that, because of this, the speed would not be much different from the original 18-30 seconds.
I would like to speed up the original code too, but I am not really sure how to get rid of the dual foreach loops that are probably slowing it down.
First, loops inside loops are a very bad thing; you should refactor that out and make it a single loop. This should not be a problem, because inventorySpecifications is declared outside the loop.
Second, the line
var inventorySpecifications = dbContext.InventoryCategorySpecifications.Where(x => x.InventoryCategoryId == categoryId).Select(x => x.InventorySpecification);
should end with ToList(), because its enumeration happens within the inner foreach, which means the query is re-run for each of the inventoryItems.
That should save you a good amount of time.
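A sketch of that suggestion, materializing the specification query once before the loop (variable names match the question):
var inventorySpecifications = dbContext.InventoryCategorySpecifications
    .Where(x => x.InventoryCategoryId == categoryId)
    .Select(x => x.InventorySpecification)
    .ToList(); // executes once; the inner foreach now iterates an in-memory list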
I'm no expert but this part of your second foreach raises a red flag: inventorySpecifications.OrderBy(x => x.DisplayOrder). Because this is getting called inside another foreach it's doing the .OrderBy call every time you iterate over inventoryItems.
Before your first foreach loop, try this: var orderedInventorySpecs = inventorySpecifications.OrderBy(x => x.DisplayOrder); and then use foreach (var inventorySpec in orderedInventorySpecs) and see if it makes a difference.
To help you better understand what EF is running behind the scenes, add some logging to expose the SQL being run; this can help you see how and where your queries are going wrong, and whether they are hitting the DB too often. As a very general rule you want to hit the DB as few times as possible and retrieve only the information you need, via the use of .Select() to reduce what is being returned. The docs for logging are: https://learn.microsoft.com/en-us/ef/core/miscellaneous/logging
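As a hedged example (the LogTo overload assumes EF Core 5 or later; earlier versions wire up a LoggerFactory instead):
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .LogTo(Console.WriteLine, Microsoft.Extensions.Logging.LogLevel.Information)
        .EnableSensitiveDataLogging(); // shows parameter values; use in development only
}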
I obviously cannot test this and I am a little unsure where your specDto's go once you have them but I assume they become part of the InventoryItemDto?
var itemDtos = new List<ItemDto>();

var inventoryItems = dbContext.InventoryItems
    .Where(x => x.InventoryCategoryId == categoryId)
    .Select(x => new InventoryItemDto()
    {
        Attributes = x.Attributes,
        //.....
        // access brand columns
        // access company columns
        // access branch columns
        // access country columns
        // access state columns
    }).ToList();
var inventorySpecifications = dbContext.InventoryCategorySpecifications
.Where(x => x.InventoryCategoryId == categoryId)
.OrderBy(x => x.DisplayOrder)
.Select(x => x.InventorySpecification).ToList();
foreach (var item in inventoryItems)
{
    var specs = JObject.Parse(item.Attributes);

    // Assuming the specs become part of an inventory item?
    item.specs = inventorySpecifications
        .Where(x => specs.ContainsKey(x.JsonKey))
        .Select(x => new SpecDto()
        {
            Key = x.JsonKey,
            Value = specs.GetValue(x.JsonKey).ToString()
        }).ToList();
}
The first call to the DB for inventoryItems should produce one SQL query that will pull all the information you need at once to construct your InventoryItemDto and thus only hits the DB once. Then it pulls the specs out and uses OrderBy() before materialising which means the OrderBy will be run as part of the SQL query rather than in memory. Both those results are materialised via .ToList() which will cause EF to pull the results into memory in one go.
Finally, the loop goes over your constructed inventoryItems, parses the JSON, and then filters the specs based on that. I am unsure of where you were using the specDtos, so I assumed they were part of the model. I would recommend checking the performance of the JSON work you are doing, as that could be contributing to your slowdown.
A more integrated approach to using JSON as part of your EF models can be seen in this answer: https://stackoverflow.com/a/51613611/621524. However, you will still be unable to use those properties to offload execution to SQL, as accessing properties that are defined only in code will cause queries to fragment and run in several parts.

Entity Framework Include() is not working within complex query

Consider following LINQ query:
var item = (from obj in _db.SampleEntity.Include(s => s.NavProp1)
            select new
            {
                ItemProp1 = obj,
                ItemProp2 = obj.NavProp2.Any(n => n.Active)
            }).SingleOrDefault();
This runs as expected, but item.ItemProp1.NavProp1 is NULL.
As explained here, this is because the query actually changes after using Include(). But the question is: what is the solution for this situation?
Edit:
When I change the query like this, everything works fine:
var item = (from obj in _db.SampleEntity.Include(s => s.NavProp1)
            select obj).SingleOrDefault();
Going by this article, I guess I know what the problem is... but the solution provided by the author does not work in my situation (because the final select uses an anonymous type rather than the entity type).
As you mentioned, Include is only effective when the final result of the query consists of the entities that should include the Include-d navigation properties.
So in this case Include has effect:
var list = _db.SampleEntity.Include(s => s.NavProp1).ToList();
The SQL query will contain a JOIN and each SampleEntity will have its NavProp1 loaded.
In this case it has no effect:
var list = _db.SampleEntity.Include(s => s.NavProp1)
    .Select(s => new { s })
    .ToList();
The SQL query won't even contain a JOIN, EF completely ignores the Include.
If in the latter query you want the SampleEntitys to contain their NavProp1s you can do:
var list = _db.SampleEntity
    .Select(s => new { s, s.NavProp1 })
    .ToList();
Now Entity Framework has fetched SampleEntitys and NavProp1 entities from the database separately, but it glues them together by a process called relationship fixup. As you see, the Include is not necessary to make this happen.
However, if Navprop1 is a collection, you'll notice that...
var navprop1 = list.First().s.Navprop1;
...will still execute a query to fetch Navprop1 by lazy loading. Why is that?
While relationship fixup does fill Navprop1 properties, it doesn't mark them as loaded. This only happens when Include loaded the properties. So now we have SampleEntity all having their Navprop1s, but you can't access them without triggering lazy loading. The only thing you can do to prevent this is
_db.Configuration.LazyLoadingEnabled = false;
var navprop1 = list.First().s.Navprop1;
(or by preventing lazy loading by disabling proxy creation or by not making Navprop1 virtual.)
Now you'll get Navprop1 without a new query.
For reference navigation properties this doesn't apply; lazy loading isn't triggered for them when it's enabled.
In Entity Framework core, things have changed drastically in this area. A query like _db.SampleEntity.Include(s => s.NavProp1).Select(s => new { s }) will now include NavProp1 in the end result. EF-core is smarter in looking for "Includable" entities in the end result. Therefore, we won't feel inclined to shape a query like Select(s => new { s, s.NavProp1 }) in order to populate the navigation property. Be aware though, that if we use such a query without Include, lazy loading will still be triggered when s.NavProp1 is accessed.
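A short hedged sketch of that EF Core behaviour, using the same names as above:
// EF Core honours the Include even though the result is an anonymous projection,
// because the entity itself is still part of the result shape.
var list = _db.SampleEntity
    .Include(s => s.NavProp1)
    .Select(s => new { s })
    .ToList();

var navProp1 = list.First().s.NavProp1; // already populated; no lazy-load query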
I know this will probably get a few laughs, but don't forget the obvious like I just did. The row in the database didn't actually have a foreign key reference! I should have checked the damn data first before thinking EF Include wasn't working! Grrr. 30 minutes of my life I won't get back.
If your model is defined properly it should work without any problems.
using System.Data.Entity;

var item = _db.SampleEntity
    .Include(p => p.NavigationProperty)
    .Where(p => p.Something == true)
    .Select(p => new YourModel
    {
        PropertyOne = p.Something,
        PropertyTwo = p.NavigationProperty.Any(x => x.Active)
    })
    .SingleOrDefault();
How did you find that item.ItemProp1.NavProp1 is null? EF uses proxies to load required properties when you try to access them.
What about
var item = (from obj in _db.SampleEntity.Include(s => s.NavProp1)
            select obj).SingleOrDefault();

Assert.IsNotNull(item.NavProp1);
Assert.IsNotNull(item.NavProp2);
You can also try with
var item = (from obj in _db.SampleEntity.Include(s => s.NavProp1)
            select new
            {
                ItemProp1 = obj,
                NavProp1 = obj.NavProp1,
                ItemProp2 = obj.NavProp2.Any(n => n.Active)
            }).SingleOrDefault();

Assert.IsNotNull(item.NavProp1);
Of course I assume that you don't have any problems with EF navigation property mappings.
