Recently I've started learning Entity Framework Core and I'm curious whether it's fine to use a DbContext instance inside an entity class.
Sample code:
class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Order> Orders { get; set; }

    public void LoadOrders()
    {
        using (var context = new StoreContext())
        {
            Orders = context.Orders
                .Where(x => x.UserId == Id)
                .ToList();
        }
    }
}
The User entity has a relation with the Order class; both of them have corresponding tables in the database, created using Entity Framework migrations.
The purpose of the LoadOrders() method is simply to load related entities for the current user when necessary.
Now I'd like to know: is that a valid approach?
Or should I always load related entities at the same time as the parent object (e.g. with .Include().ThenInclude())?
Or should the code of the LoadOrders() method live in some additional class like UserHelper that would be used along with the User entity?
You should avoid an approach like this, because the User will be loaded by one DbContext while its Orders are associated with another, disposed context. When you go to update a User you would face errors, duplicate orders, or the messy business of re-associating the orders (and other child entities) with a context before saving. Down the road there will undoubtedly be confusion when orders are mapped to users and someone writes code to .Include(x => x.Orders). If you completely detach related entities from EF and rely on load-on-demand, you lose a lot of the capability that EF gives you.
Issues like this typically stem from mixing up the scope/lifespan of entities with the scope of the context they are loaded from: for example, loading entities in one method with a DbContext, returning them, then later deciding you want to access related entities after the DbContext has been disposed. The simplest remedy I can recommend is adopting POCO view models and ensuring that entities never exit the scope of their DbContext; only view models do. That way you can sculpt a view model structure to represent the data you need, then use entities and their references to populate those view models using .Select(), without worrying about lazy loading or eager loading.
For instance:
using (var context = new StoreContext())
{
    var userViewModel = context.Users
        .Where(x => x.UserId == userId)
        .Select(x => new UserViewModel
        {
            UserId = x.UserId,
            UserName = x.UserName,
            Orders = x.Orders
                .Where(o => o.IsActive)
                .Select(o => new OrderViewModel
                {
                    OrderId = o.OrderId,
                    OrderNumber = o.OrderNumber,
                    Price = o.OrderItems.Sum(i => i.Price)
                }).ToList()
        }).SingleOrDefault();
    return userViewModel;
}
AutoMapper can assist with mapping entities to view models. It's not a one-to-one tree-structure map; rather, you align the view model to represent the data the view needs, then fill it from the entity structure. You just need to be careful to only pull data and supported aggregate methods from the entities, because the expression will be translated to SQL, so no .NET or custom functions inside the .Select(). Let the view models accept raw values and provide alternate properties to perform formatting, or use .Select() to fetch anonymous types, get EF to materialize those into POCO instances with .ToList()/.Single()/etc., and then populate your view models from those using LINQ to Objects.
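For instance, a minimal sketch of that two-step approach; the DisplayName formatting property on the view model is a hypothetical example:

using (var context = new StoreContext())
{
    // Step 1: project raw values into an anonymous type; this part is translated to SQL.
    var rawData = context.Users
        .Where(x => x.UserId == userId)
        .Select(x => new
        {
            x.UserId,
            x.UserName,
            ActiveOrderCount = x.Orders.Count(o => o.IsActive)
        })
        .ToList(); // the query executes here

    // Step 2: everything below runs in memory (LINQ to Objects), so .NET formatting is safe.
    var viewModels = rawData
        .Select(x => new UserViewModel
        {
            UserId = x.UserId,
            UserName = x.UserName,
            DisplayName = string.Format("{0} ({1} active orders)", x.UserName, x.ActiveOrderCount)
        })
        .ToList();
}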
Working with entities on demand, and with view models / DTOs for the to-and-fro of data, avoids a lot of hassle with entities. Done right, EF can pull this data extremely fast, and it avoids performance pitfalls such as tripping lazy loads during serialization. It does mean that when you're done with the view model you will need to re-load the entity to apply changes. It may seem to make more sense to simply use entities and have EF magically re-attach them and persist changes, but your view model will have all the info needed to quickly fetch that entity by ID when needed, and you will need to consider cases where the data may have changed between the time you first retrieved the entity and the time you are prepared to alter it.
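For example, a rough sketch of applying view model changes on update; the RowVersion concurrency token is an assumption to illustrate the staleness check:

public void UpdateUser(UserViewModel viewModel)
{
    using (var context = new StoreContext())
    {
        // The view model carries the ID, so re-fetching the entity is cheap.
        var user = context.Users.Single(x => x.UserId == viewModel.UserId);

        // Guard against the row having changed since the view model was built.
        if (!user.RowVersion.SequenceEqual(viewModel.RowVersion))
            throw new InvalidOperationException("The user was modified by another process.");

        user.UserName = viewModel.UserName; // copy only the editable fields
        context.SaveChanges();
    }
}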
I have a simple class called Company and a corresponding service (CompanyService) to access an Azure SQL Database. A company contains Users.
public class Company
{
    [Key]
    public int Id { get; set; }

    [Required]
    [StringLength(64, MinimumLength = 5)]
    public string Name { get; set; }

    public ICollection<User> Users { get; set; } = new List<User>(); // removing the initialization does not make any difference
}
public class User
{
    [Key]
    public int Id { get; set; }

    [Required]
    [StringLength(64, MinimumLength = 5)]
    public string Name { get; set; }

    public Company Company { get; set; }
    public int CompanyId { get; set; }
}
Both of these models are added to my DbContext and I can perform simple CRUD operations. Anyway, I started to play around with the UI and wanted to display the users of a company. So basically, you have a table with all companies -> you select a company and click the "edit" button -> a new view opens where the properties of the company can be updated and all corresponding users are displayed.
After implementing this, I realized that my Company.Users list is completely empty, even though there should be 10 dummy users in it. I checked the view where all users are displayed and saw that all of them are there. I navigated to my companies page again, selected the same company, and what did I see? The company's users!
So what is the problem: the user data is only loaded from my CompanyService AFTER I have accessed the user data from my UserService. I have no clue why this is happening.
I access my data like this:
public async Task<Company?> GetCompanyById(int id)
{
    return await this.Context.Company
        .Include(c => c.Users)
        .FirstOrDefaultAsync(c => c.Id == id);
}
My DbContext factory is registered like this:
builder.Services.AddDbContextFactory<DatabaseContext>(options => options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));
You're not actually using lazy loading at all. For lazy loading to work, you need to declare the Company.Users collection as virtual. Even then, it will only work while the company is still within the scope of the DbContext it was read from; this can trip you up when passing entities around. When lazy loading is enabled and you attempt to access an unloaded collection after the DbContext is disposed, for instance, you will get an exception.
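For reference, a minimal sketch of what enabling lazy loading would look like here, assuming the Microsoft.EntityFrameworkCore.Proxies package is installed:

// In the service registration:
builder.Services.AddDbContextFactory<DatabaseContext>(options => options
    .UseLazyLoadingProxies() // requires Microsoft.EntityFrameworkCore.Proxies
    .UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));

// In the model: the collection must be virtual so EF can override it in a proxy.
public virtual ICollection<User> Users { get; set; } = new List<User>();

But as noted, lazy loading only works while the owning DbContext is alive, so it would not help if the context is disposed before the property is accessed.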
What you are seeing instead is EF's entity tracking and reference population behaviour. When you request an entity that has relationships to other entities, you can eager load those relationships to ensure the associated entities are loaded and referenced, or leave them for EF to work out. EF will automatically associate any related entities that it happens to already be tracking, regardless of whether you tell it to eager load or not.
So for example let's say I have a Parent/Child relationship. Parent ID P1 has 3 children, IDs C1, C2, and C3. With lazy loading disabled, if I use the following statement:
var parent = context.Parents.Single(p => p.Id == "P1");
var count = parent.Children.Count();
I will get "0". (Provided the Children collection is initialized, otherwise I'd get a NullReferenceException) If I turn on lazy loading, or I eager load like below:
var parent = context.Parents.Include(p => p.Children).Single(p => p.Id == "P1");
var count = parent.Children.Count();
in both cases I would get "3".
Now, here's where things get interesting. If lazy loading is disabled / Children is not virtual, and I do the following:
var child = context.Children.Single(c => c.Id == "C1");
var parent = context.Parents.Single(p => p.Id == "P1");
var count = parent.Children.Count();
I will get a count of "1"; not "0" because I didn't eager load, and not "3" because lazy loading isn't applied here. If 2 of the related children happened to be tracked, I'd get "2". Now, normally you won't do something so obvious, but any earlier code that might have loaded a tracked reference to data related to a row you later load will automatically be included in that new row's relations. This can lead to confusing bugs where related data sometimes seems to be available and sometimes not, or where only an incomplete set of data is available.
This is one underlying reason why you want DbContext lifespans to be as short as possible. The more entities a DbContext is tracking the longer it takes to "work out" possible relationships between tracked entities to populate references, and also the more potentially stale references might be used. (Rather than loading fresh, current data from the DB)
The best practices with EF when it comes to avoiding issues like this (a short sketch follows the list):
When reading information for display, use projection rather than loading entities, i.e. load view models using Select or AutoMapper's ProjectTo. This avoids loading tracked entities and ensures the data reflects the current persisted state. It also builds far more efficient queries than loading entities with eager-loaded relationships (which get slow, since EF builds Cartesian products behind the scenes).
If you don't need to update data but do need to load entities, use AsNoTracking. This avoids entities stuffing the tracking cache, slowing things down and causing these problems.
When updating entities, always eager load relationships that need to be updated. Ideally, design systems to update child relations independently of the parent, i.e. AddChild, RemoveChild, UpdateChild rather than making changes to a children collection and trying to update them along with every other change within a single UpdateParent.
Ensure that DbContexts are not alive any longer than they are absolutely needed. Long-lived DbContexts collect expensive baggage.
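A short sketch of the first two points, inside an async method and assuming a hypothetical CompanyViewModel:

// 1. Projection for display: nothing is tracked, and only the needed columns are queried.
var companyVM = await context.Company
    .Where(c => c.Id == id)
    .Select(c => new CompanyViewModel
    {
        Id = c.Id,
        Name = c.Name,
        UserNames = c.Users.Select(u => u.Name).ToList()
    })
    .FirstOrDefaultAsync();

// 2. AsNoTracking for read-only entity loads: nothing enters the tracking cache.
var company = await context.Company
    .AsNoTracking()
    .Include(c => c.Users)
    .FirstOrDefaultAsync(c => c.Id == id);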
The instance of entity type 'AssegnazioneLotto' cannot be tracked because another instance with the same key value for {'Id_AssegnazioneLotto'} is already being tracked. When attaching existing entities, ensure that only one entity instance with a given key value is attached. Consider using 'DbContextOptionsBuilder.EnableSensitiveDataLogging' to see the conflicting key values.
I encounter this error when I load data from a table and then update it. I worked around it by querying a view that selects from the table.
Why does this happen?
How can I solve it without creating additional views?
The simplest answer: don't pass entities around outside of the scope in which they were read. Pass view models (POCO objects rather than entities) and fetch entities on update to copy the expected values across.
The complex answer is that when updating entity references (all entity references, including child collections and many-to-one references), you need to check whether the DbContext is tracking a matching reference, and either replace the references with the tracked entity or tell the DbContext to dump the tracked reference before attaching.
For example, consider an update method that accepts a detached or deserialized "entity". This works sometimes, but craps out at other times:
public void UpdateOrder(Order order)
{
    context.Update(order);
    // OR
    context.Attach(order);
    context.Entry(order).State = EntityState.Modified;

    context.SaveChanges();
}
Looks simple and clean, but craps out when the DbContext instance might already be tracking a matching Order instance. When it is, you get that exception.
The safety check:
public void UpdateOrder(Order order)
{
    var existingOrder = context.Orders.Local.SingleOrDefault(o => o.OrderId == order.OrderId);
    if (existingOrder != null)
        context.Entry(existingOrder).State = EntityState.Detached;

    context.Update(order);
    // OR
    context.Attach(order);
    context.Entry(order).State = EntityState.Modified;

    context.SaveChanges();
}
That example checks the local tracking cache for a matching order and detaches any tracked instance. The key here is using .Local on the DbSet to search the local tracking cache rather than hitting the DB.
Where this gets more complex is where Order contains other entity references like OrderLines, or a reference to a Customer, etc. When dealing with detached entities you need to check over the entire object graph for tracked references.
public void UpdateOrder(Order order)
{
    var existingOrder = context.Orders.Local.SingleOrDefault(o => o.OrderId == order.OrderId);
    if (existingOrder != null)
        context.Entry(existingOrder).State = EntityState.Detached;

    var customer = context.Customers.Local.SingleOrDefault(c => c.CustomerId == order.Customer.CustomerId);
    if (customer != null)
        order.Customer = customer; // Replace our Customer reference with the tracked one.
    else
        context.Attach(order.Customer);

    context.Update(order);
    // OR
    context.Attach(order);
    context.Entry(order).State = EntityState.Modified;

    context.SaveChanges();
}
As you can see, this gets complex and cumbersome pretty quickly, as you need to check every reference. Hence, it's simpler to avoid passing detached or serialized entities around. Using a view model offers many benefits for performance and simplifies issues like this. Coupled with AutoMapper, or a similar mapper that supports projection, operations with view models become very simple:
Selecting Orders:
var orders = context.Orders.Where(/* suitable conditions */)
    .ProjectTo<OrderViewModel>(_mapperConfig)
    .ToList();
Where _mapperConfig is an AutoMapper configuration that tells AutoMapper how to convert an Order into an OrderViewModel. This can follow conventions or optionally contain mapping rules to build a flattened view model for an Order and its related details. ProjectTo works with EF's IQueryable to build an SQL SELECT statement across the entity graph that returns only the data needed to populate the view model. This is far more efficient than using Map, which would require all related entities to be eager loaded.
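For example, a rough sketch of such a configuration; OrderViewModel and its CustomerName property are placeholders:

var _mapperConfig = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Order, OrderViewModel>()
        // Flattening rule: pull the customer's name across into the view model.
        .ForMember(dest => dest.CustomerName, opt => opt.MapFrom(src => src.Customer.Name));
});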
When updating:
public void UpdateOrder(UpdateOrderViewModel orderVM)
{
    var order = context.Orders.Single(o => o.OrderId == orderVM.OrderId);
    if (orderVM.RowVersion != order.RowVersion)
        throw new StaleDataException(); // placeholder for handling rows that changed since the view got the order details

    var mapper = _mapperConfig.CreateMapper();
    mapper.Map(orderVM, order);
    context.SaveChanges();
}
orderVM could be the OrderViewModel that was returned, but typically I would recommend packaging just the fields that can be updated into a dedicated view model. The "magic" is in the AutoMapper configuration, which governs what fields get copied from the view model back into the entity. It can include child data such as OrderLines, in which case you would want to ensure those child entities are eager loaded with .Include in your DB fetch. AutoMapper's Map method in this case is the variant that copies mapped values from a source to a destination, so values are copied directly into the tracked entity instance. EF will build an SQL UPDATE statement based on what values actually change, rather than overwriting the entire record.
You can also use the same technique with detached entities to avoid your issue. The benefit of using AutoMapper is that you can configure which values can be legally copied over from the deserialized/detached entity provided into the real data:
public void UpdateOrder(Order updatedOrder)
{
    var order = context.Orders.Single(o => o.OrderId == updatedOrder.OrderId);
    if (updatedOrder.RowVersion != order.RowVersion)
        throw new StaleDataException(); // placeholder for handling rows that changed since the view got the order details

    var mapper = _mapperConfig.CreateMapper();
    mapper.Map(updatedOrder, order);
    context.SaveChanges();
}
This ensures we only change what is allowed to change, and avoids the whole crapshoot of tracked references. In our mapper configuration we literally have an entry like:
cfg.CreateMap<Order, Order>(...)
which will hold explicit rules to ignore copying across fields and related entities we don't want copied across on an Update.
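Sketched out with hypothetical member names, that entry might look like:

cfg.CreateMap<Order, Order>()
    .ForMember(dest => dest.Customer, opt => opt.Ignore())     // don't overwrite tracked references
    .ForMember(dest => dest.OrderLines, opt => opt.Ignore())   // children are updated via their own actions
    .ForMember(dest => dest.CreatedDate, opt => opt.Ignore()); // server-owned, never client-set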
The downside of doing this is the overhead of sending entire entities and potentially their related entities across the wire back and forth, plus to be "safe" from tampering, a lot more effort needs to go into the mapper configuration or copying across allowed values explicitly.
I had the same issue with EF Core and Blazor Server. Switching the scope in the service collection to Transient and using an IServiceScopeFactory for the queries/updates did the trick. You'll see below that I'm using Blazor-style dependency injection, but constructor injection works the same way for an IServiceScopeFactory.
[Inject]
IServiceScopeFactory _serviceScopeFactory { get; set; }

private async Task UpdateItem(GridCommandEventArgs args)
{
    var utilityItem = (EntityModelSample)args.Item;
    using (var scope1 = _serviceScopeFactory.CreateScope())
    {
        var dbContext = scope1.ServiceProvider.GetService<SampleDbContext>();
        dbContext.Update(utilityItem);
        await dbContext.SaveChangesAsync();
    }
    LoadData();
}
In the startup code:
builder.Services.AddDbContext<InternalUtilitiesDbContext>(option => option.UseSqlServer(connectionString), ServiceLifetime.Transient);
The ServiceLifetime.Transient argument is what fixes the problem.
Context
I am trying my own spin on DDD architecture. The key difference from other projects I've seen is that I am not using my Domain models as data entities; instead I have separate models I call Stores that map from the Domain models and represent the state of the database.
If you're not familiar with DDD, the idea is to completely decouple the core business logic from other elements of the application, such as the database. In order to achieve that, I have defined Domain models that contain the business logic and validation, and then Entity models, which represent the same state as the Domain models (stripped of business and validation logic) but also carry the EF-specific relationship properties.
Problem
The EF operations work for simpler cases. Let's say we have a Contest, which can contain several Trials.
Example in pseudo-code:
contest = new Contest
contest.Add(new Trial(1))
contest.Add(new Trial(2))
data.Save(contest) // performs mapping to ContestEntity and calls dbContext.Add
// So far so good
contestWithTrials = data.Get() // contest comes with 2 Included Trials
contestWithTrials.Add(new Trial(3))
data.Save(contestWithTrials) // performs mapping, calls dbContext.Update and tries to save but fails.
The error is:
The instance of entity type 'Trial' cannot be tracked because another instance with the key value '{Id: 1}' is already being tracked
or
Attempted to update or delete an entity that does not exist in the store
For some reason the mapping confuses EF and it tries to re-create the already existing Trial, but I cannot understand why; I can see that the entities are added correctly to DbSet.Local just before SaveChanges is called, but it still throws.
I've set up a PoC branch here. It's a console application with a minimal reproducible example, per Progrman's advice below. Since the setup requires several packages, I think it's better in a repo than in a single file.
It is a good idea to separate domain model classes containing business logic from infrastructure dependencies, in your case database concerns. But as you are utilizing EF Core, you can dismiss your Entity models altogether, as EF Core is already designed in a way that allows you to separate domain and database concerns.
Let's look at an example from the Microsoft powered EShopOnWeb project.
The domain model class Order (an aggregate root of the Ordering context) contains the domain logic and is structured so that business invariants can best be enforced.
When you look at the Order class you see that it has no database or other infrastructure dependencies. The domain model class is located in the ApplicationCore project of the solution:
https://github.com/dotnet-architecture/eShopOnWeb/blob/master/src/ApplicationCore/Entities/OrderAggregate/Order.cs
public class Order : BaseEntity, IAggregateRoot
{
    private Order()
    {
        // required by EF
    }

    public Order(string buyerId, Address shipToAddress, List<OrderItem> items)
    {
        Guard.Against.NullOrEmpty(buyerId, nameof(buyerId));
        Guard.Against.Null(shipToAddress, nameof(shipToAddress));
        Guard.Against.Null(items, nameof(items));

        BuyerId = buyerId;
        ShipToAddress = shipToAddress;
        _orderItems = items;
    }

    public string BuyerId { get; private set; }
    public DateTimeOffset OrderDate { get; private set; } = DateTimeOffset.Now;
    public Address ShipToAddress { get; private set; }

    private readonly List<OrderItem> _orderItems = new List<OrderItem>();
    public IReadOnlyCollection<OrderItem> OrderItems => _orderItems.AsReadOnly();

    public decimal Total()
    {
        var total = 0m;
        foreach (var item in _orderItems)
        {
            total += item.UnitPrice * item.Units;
        }
        return total;
    }
}
To map the business model to the database for persistence, built-in functionality from EF Core can be used by simply defining a corresponding configuration class, as shown below. To separate it from the business layer, it is located in the infrastructure layer (or data layer) of the project.
public class OrderConfiguration : IEntityTypeConfiguration<Order>
{
    public void Configure(EntityTypeBuilder<Order> builder)
    {
        var navigation = builder.Metadata.FindNavigation(nameof(Order.OrderItems));
        navigation.SetPropertyAccessMode(PropertyAccessMode.Field);

        builder.OwnsOne(o => o.ShipToAddress, a =>
        {
            a.WithOwner();
            a.Property(p => p.ZipCode)
                .HasMaxLength(18)
                .IsRequired();
            a.Property(p => p.Street)
                .HasMaxLength(180)
                .IsRequired();
            a.Property(p => p.State)
                .HasMaxLength(60);
            a.Property(p => p.Country)
                .HasMaxLength(90)
                .IsRequired();
            a.Property(p => p.City)
                .HasMaxLength(100)
                .IsRequired();
        });
    }
}
The only thing required by EF Core is the private parameterless constructor in the Order domain model class, which is, from my point of view, an acceptable trade-off considering you save the effort of writing database mapping classes.
If I am constrained by other frameworks that do not provide such capabilities, I often go a similar way to what you are doing now, but when the features of EF Core are at hand I would suggest reconsidering your approach and giving EF Core's configuration features a try.
I know this is not the exact answer to the technical problem you are facing but I wanted to show you an alternative approach.
Your problem is that as you load your entities from the database, EF Core starts tracking them in its change tracker so it can identify the changes you make to the loaded entities once SaveChanges() is called. This behaviour works fine as long as you modify the actual objects that were loaded by EF.
What you are doing is: loading a DatabaseTrial (let's say it has id 1), mapping it to DomainTrial, potentially modifying it, and then mapping it to a NEW instance of DatabaseTrial which also has id 1 and adding that to the context. This confuses EF because it now has two different objects (by reference) which both have id 1. This is not allowed, as ids have to be unique (if EF did not throw this exception, which DatabaseTrial object should be used to update the database entry?).
The solution is quite simple: just use AsNoTracking() when loading the entities from the database. This prevents the change tracker from keeping track of the originally loaded object, and as soon as Update() is called, only the new entity will be tracked, in the "Modified" state, and used to update the database entry. As the documentation states:
For entity types with generated keys if an entity has its primary key value set then it will be tracked in the Modified state. If the primary key value is not set then it will be tracked in the Added state. This helps ensure new entities will be inserted, while existing entities will be updated. An entity is considered to have its primary key value set if the primary key property is set to anything other than the CLR default for the property type.
This will also work for the Trial that is being added to your Contest, as its primary key is still set to the default value after creation, so EF will know that it must be inserted.
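Putting that together, a minimal sketch of what the data layer from the pseudo-code above might look like with this fix; the mapping calls are placeholders for your existing Store mapping:

public Contest Get()
{
    // AsNoTracking: the loaded entities never enter the change tracker, so a
    // mapped copy with the same key can be attached later without conflict.
    var entity = dbContext.Contests
        .AsNoTracking()
        .Include(c => c.Trials)
        .Single();
    return MapToDomain(entity);
}

public void Save(Contest contest)
{
    var entity = MapToEntity(contest);
    // Update(): entities with a set key are marked Modified, entities with a
    // default key (the new Trial) are marked Added.
    dbContext.Update(entity);
    dbContext.SaveChanges();
}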
I'm trying to figure out the best approach to architecting this project. Basically, it's a "band" profile site. I'm using ASP.NET 4, EF, and Automapper (structuremap too, but that's not important). I'm running into performance issues and need advice on whether my approach is right or not (my guess is not). I'll focus on specific sections and provide stripped down examples.
I have an Entity Framework repository class that operates directly on the EF objects using LINQ:
[Pluggable("Repository")]
public class EntityDataRepository : IRepository
{
static EntityDataRepository()
{
// other mappings removed
// Data. objects are EF objects, mapping to my DTO classes
Mapper.CreateMap<Data.Event, Models.EventModel>();
Mapper.CreateMap<Data.Genre, Models.GenreModel>();
Mapper.CreateMap<Data.Band, Models.BandModel>();
}
public IEnumerable<BandModel> GetBandsByUser(Guid userId)
{
using (var ctx = new DbContext())
{
var user = GetCurrentUserModel(ctx, userId);
var efBands = from r in user.BandRelations
orderby r.Date
select r.Band;
return Mapper.Map<IEnumerable<Data.Band>, IEnumerable<Models.BandModel>>(efBands);
}
}
}
Bands have genres and events. Note that the repository maps the EF objects to my DTO objects and returns a list of them. It acts as a proxy, enabling my controllers to invoke methods on it to obtain the data they need (actual logic altered to show what I need):
namespace OpenGrooves.Web.Areas.Edit.Controllers
{
    [Authorize]
    public class MyBandsController : BaseController
    {
        public ActionResult ShowBands()
        {
            IEnumerable<BandModel> bands = repository.GetBandsByUser(loggedUserGuid);
            return View(bands);
        }
    }
}
Finally, here's the BandModel class, which mirrors the Band entity in EF:
public class BandModel
{
    // fluff and scalar properties removed
    public IEnumerable<EventModel> Events { get; set; }
    public IEnumerable<GenreModel> Genres { get; set; }
}
Basically, am I doing this right? In my EF-to-DTO classes, the Band EF entity has navigation properties, such as Genres and Events. The problem is that during the mapping that takes place in AutoMapper, these list properties are being populated, especially if one of my proxy methods returns a list of BandModels. It seems to invoke the Genres and Events EF queries for each record, which is obviously a major performance killer (at least two queries for Events and Genres are run for each BandModel object returned).
Is it OK practice to use EF objects directly in my controllers, possibly even used as models for views?
Is there something I need to change in my mappings to enable lazy loading for these navigational properties (events, genres off a BandModel object)?
Thanks!!
Is it OK practice to use EF objects directly in my controllers, possibly even used as models for views?
Yes, Kinda.
This answer is subjective and depends on how you view your separation of concerns. Most MVC developers, including me, swear by view models. They decouple your data or domain classes from the presentation layer. This is awesome.
Some people don't like being awesome, including other languages and frameworks like every PHP MVC framework, Rails, and Django. Nobody can say these languages "do it wrong" but us .NET devs subscribe to a different paradigm.
Your second question is strange: you say "is there something to enable lazy loading" right after saying that lazy loading is happening. Care to explain?
Lazy loading is on by default in EF4.
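One way to avoid the per-record queries is to project inside a single LINQ-to-Entities query instead of mapping materialized entities afterwards. A sketch against the question's models; the BandRelations set and the property names are assumptions:

public IEnumerable<BandModel> GetBandsByUser(Guid userId)
{
    using (var ctx = new DbContext())
    {
        // One query: EF composes the nested selects into a single round-trip
        // instead of lazy loading Genres and Events per band during mapping.
        return (from r in ctx.BandRelations
                where r.UserId == userId
                orderby r.Date
                select new BandModel
                {
                    Genres = r.Band.Genres.Select(g => new GenreModel { Name = g.Name }),
                    Events = r.Band.Events.Select(e => new EventModel { Name = e.Name })
                }).ToList();
    }
}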
I'm new to Entity Framework. I've created a Multi Tier Application, in MVC, using Ninject.
I have three tables in the database. Say Table A, Table B and Table C.
Table A has a foreign key relating it to Table B
Table B has a foreign key relating it to Table C
Table A => Table B => Table C
My Application has a "service" which will be called from an MVC Controller.
The service acts as a repository for each entity (e.g. TableAService, TableBService) and is responsible for creating, reading, updating, and deleting entities from the EF data context, as well as possibly performing business logic on those entities.
Within my MVC controller I have a reference to the appropriate service. For example:
private TableAService _tableAService;

public TableAController(EFDataContext dataContext)
{
    _tableAService = new TableAService(dataContext);
}

public ActionResult Index()
{
    return View();
}
TableAService would look something like this:

private EFDataContext _dataContext;

public TableAService(EFDataContext dataContext)
{
    _dataContext = dataContext;
}

public TableA GetById(int tableAId)
{
    return _dataContext.TableA.SingleOrDefault(ta => ta.TableAId == tableAId);
}
I appreciate that the example's service is tightly coupled to the data source; my actual implementation differs slightly, but the concept is the same: I have a service with a dataContext from which I'd like to return entities.
QUESTION: When I'm in the GetById method in the TableAService, SingleOrDefault gives me a TableB navigation property, which allows me to access all of TableB's properties, including a TableC navigation property.
However, when I pass TableA back to the controller, I can't access any of TableB's properties.
Within the Service I've also tried:
private ObjectSet<TableA> _objSet = _dataContext.CreateObjectSet<TableA>();
and
return _objSet.SingleOrDefault(ta => ta.TableAId == tableAId);
This doesn't seem to make any difference to being able to access the TableC navigation property on TableB from the TableA entity.
Any help would be greatly appreciated!
Cheers,
James
The contents of a navigation property can only be loaded within the scope of an active ObjectContext.
This is required because the actual rows will be fetched through a round-trip to the database.
In your case, you are trying to access a navigation property after the ObjectContext that was used to retrieve the entities in the first place has been disposed and the connection lost.
Assuming you're using Entity Framework 4, there are two solutions to this problem:
Eager loading: the related entities are loaded up front; that is, you explicitly instruct Entity Framework to load all related entities while it's retrieving the results of the query. This can be done in a few different ways: by including the related entities in the query's projection, by invoking the ObjectQuery.Include method, or by invoking the Load method on the navigation property (alternatively, the ObjectContext.LoadProperty method).
Lazy loading: the related entities are loaded on demand, that is, when the navigation property's getter is accessed for the first time. Note that this must still happen within the scope of an ObjectContext.
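For completeness, the explicit Load/LoadProperty option mentioned above would look something like this, assuming EFDataContext derives from ObjectContext:

var tableA = _dataContext.TableA.SingleOrDefault(ta => ta.TableAId == tableAId);
_dataContext.LoadProperty(tableA, "TableB");        // one extra round-trip
_dataContext.LoadProperty(tableA.TableB, "TableC"); // and one more for the next level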
In your case eager loading using the Include method is probably the most appropriate solution:
public TableA GetById(int tableAId)
{
    return _dataContext.TableA
        .Include("TableB.TableC") // use dot notation to specify depth in the object graph
        .SingleOrDefault(ta => ta.TableAId == tableAId);
}
Related resources:
Loading Related Objects
Navigation Properties
Try changing your GetById method to:
public TableA GetById(int tableAId)
{
    return _dataContext.TableA
        .Include("TableB")
        .SingleOrDefault(ta => ta.TableAId == tableAId);
}
If you want to get the TableC record as well, use:
public TableA GetById(int tableAId)
{
    return _dataContext.TableA
        .Include("TableB.TableC")
        .SingleOrDefault(ta => ta.TableAId == tableAId);
}
The reason is that Entity Framework uses "lazy loading", which means that data isn't loaded from the database until you actually need it. The trouble is that once you're out of the service, the data context no longer exists to fetch the TableB record. The Include method instructs EF to load the data from the related tables straight away.
It took some digging, but the problem I was experiencing was that my application wasn't referencing System.Data.Entity.
Adding the reference to the application allowed me to access the properties of the related objects from within the controller.
Thanks for your answers guys, they did help me rule out possibilities. I upvoted each.