My scenario: I have an object FOO with a virtual List<Bar> property on it, auto-generated by EF.
After I load FOO I dispose of the data context and turn FOO into a business object through a DTO. For example:
var newFOO = FOO_Dto.change(FOO);
Inside FOO_Dto.change I want to check whether the virtual list property is empty/null. I understand that disposing the ObjectContext and then touching the navigation property will throw an error. In my data layer there are times when I return FOO with the list and times when I return FOO without it.
My question is: how do I check the navigation property to see whether the list has been populated, and avoid the ObjectContext error that is currently being thrown?
Thank you very much!!
EDIT
From the comments section, I purposely want the context closed before I check to see if I loaded the List<Bar> property.
No, you can't, other than the ugly way of trying and catching the exception. You can only determine whether a collection is loaded by getting the owner's DbEntityEntry, which you can only obtain through a context instance.
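For reference, this is what the check looks like while a context is still alive (a minimal sketch, assuming an EF6 DbContext; MyContext, Foos and Bars are hypothetical names based on the question):

using (var context = new MyContext()) // hypothetical context type
{
    var foo = context.Foos.First();

    // Ask the change tracker whether the navigation property was loaded.
    bool barsLoaded = context.Entry(foo)
                             .Collection(f => f.Bars)
                             .IsLoaded;
}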
But if you know up front that the collection may be addressed outside the scope of the context, you need to load it while the context is alive, OR not load it and prevent lazy loading. You should never allow lazy loading to occur outside the lifespan of a context.
In most cases this means you'll have to turn off lazy loading and eagerly load all data required by a consuming method.
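A minimal sketch of that pattern, again assuming an EF6 DbContext (MyContext, Foos and Bars are hypothetical; FOO_Dto.change is from the question):

using System.Data.Entity; // for the lambda-based Include extension

using (var context = new MyContext())
{
    // No lazy loading, no proxies: anything not loaded here stays null.
    context.Configuration.LazyLoadingEnabled = false;
    context.Configuration.ProxyCreationEnabled = false;

    // Eagerly load everything the consumer needs after the context is gone.
    var foo = context.Foos
                     .Include(f => f.Bars)
                     .First();

    var dto = FOO_Dto.change(foo);
}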
The more I work with EF in a disconnected fashion the less I allow lazy loading. I'm close to considering lazy loading an anti-pattern.
Related
In an EF 6 project, I am writing validation functions for entities. Some are static, while others are instance methods of the entities themselves.
Ignoring whether this is bad practice or not, I'd like to check whether the entities were created using a context and if so, whether they are still attached.
Please note that these functions do NOT have access to the context object, just the entity classes.
As an example, a method validates a Department entity and cascades validation to all associated Department.Employee instances.
If the hierarchy was created manually, validation will succeed.
If the hierarchy was created using a context which is still alive, validation will succeed albeit slower.
If the hierarchy was created using a context which has been disposed, validation will fail with an ObjectDisposedException (provided proxy-creation was enabled and .Include(***) was not used).
So the question: is it possible to detect the above scenarios without access to a DbContext instance? If not, how can we best validate entire hierarchies irrespective of how they were created?
var result = true;
var departments = ???; // Constructed manually or through a DbContext instance.

foreach (var department in departments)
{
    result &= department.Validate();

    foreach (var employee in department.Employees)
    {
        result &= employee.Validate();
    }
}
EDIT: Please note that this is for a desktop application that cannot have long-running DbContext instances. They are almost always disposed immediately after retrieving data. Re-querying the database does not seem like a viable option for validation, since validation is triggered by trivial user input and re-querying would slow down the entire user experience.
From your question
Please note that these functions do NOT have access to the context object, just the entity classes.
two solutions come to mind, none really palatable:
Build your own tracker and make it available to these methods somehow.
Add something to your entities, for example a WasLoaded property that gets set when you query your context (see the sketch after this list). That WasLoaded flag could be set by either:
Writing an EF interceptor that sets it.
Adding an artificial bit column with all values set to 1. Then map that to the property; the property will be false if you constructed it outside of the context, true if loaded from the context.
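A minimal sketch of the WasLoaded idea, setting the flag via EF6's ObjectMaterialized event (a slight variation on the interceptor option); ILoadedAware, WasLoaded and MyContext are hypothetical names, and WasLoaded should be excluded from the mapping (e.g. marked [NotMapped]):

using System.Data.Entity;
using System.Data.Entity.Infrastructure;

// Hypothetical marker interface for entities that carry the flag.
public interface ILoadedAware
{
    bool WasLoaded { get; set; }
}

public class MyContext : DbContext
{
    public MyContext()
    {
        // Fires once for every entity EF materializes from a query result,
        // so manually constructed instances never get the flag.
        ((IObjectContextAdapter)this).ObjectContext.ObjectMaterialized +=
            (sender, e) =>
            {
                var aware = e.Entity as ILoadedAware;
                if (aware != null)
                {
                    aware.WasLoaded = true;
                }
            };
    }
}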
The tracker seems to be the cleanest because it doesn't pollute your model. The interceptor is a decent alternative if you're not concerned about your model.
And while it doesn't answer your question directly, you could avoid the use of proxies, in which case your validation works the same way regardless, because you have your model in memory. There are the usual trade-offs to consider, though.
I'm not sure how you'd detect the last scenario. I suppose you could have your tracker track more than the entities... have it also track the context's state.
I attempted to load an object that has lazy properties in one session and then tried to update it in another session, but it failed
with an error: No persister for SecUserProxy (actual class is SecUser).
I'm using NHibernate 3.4. When I googled, I learned it's a bug which has been fixed.
I also came across this post, which says that if your proxy object implements INHibernateProxy, you can unproxy the object with NHibernate. Since NHibernate no longer supports pluggable proxy factories (like Castle, LinFu, etc.) and uses an internal one, I'm assuming the internal one implements INHibernateProxy.
So I did the following in the new session where I want to update my object:
object unprox_obj = Session
    .GetSessionImplementation()
    .PersistenceContext.Unproxy(secUserobj);
in anticipation of getting the same object but with the real type, i.e. SecUser, so that it can be updated without any error. But it still returns a proxy object.
I'm not able to understand what is going on.
Updated:
I have just realized that 'secUserobj' is not an INHibernateProxy. So how can I make it an INHibernateProxy in order to update my object within another session?
if (secUserobj is INHibernateProxy)
{
    unprox_obj = Session
        .GetSessionImplementation()
        .PersistenceContext.Unproxy(secUserobj);
}
Detached objects (loaded in one session and kept as a reference) can be re-attached. We can use session.Merge() or session.Lock().
9.4.2. Updating detached objects
Many applications need to retrieve an object in one transaction, send it to the UI layer for manipulation, then save the changes in a new transaction....
...
...The last case can be avoided by using Merge(Object o). This method copies the state of the given object onto the persistent object with the same identifier. If there is no persistent instance currently associated with the session, it will be loaded. The method returns the persistent instance. If the given instance is unsaved or does not exist in the database, NHibernate will save it and return it as a newly persistent instance. Otherwise, the given instance does not become associated with the session. In most applications with detached objects, you need both methods, SaveOrUpdate() and Merge().
19.1.4. Initializing collections and proxies
...
You may also attach a previously loaded object to a new ISession with Merge() or Lock() before accessing uninitialized collections (or other proxies). No, NHibernate does not, and certainly should not do this automatically, since it would introduce ad hoc transaction semantics!
...
So, we can pass the detached reference into .Merge(), and later work with the returned (brand new) object reference:
MyEntity reAttached = session.Merge<MyEntity>(detached);
Be careful: as stated above, this should be done before any of the detached collections are touched.
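Applied to the SecUser scenario from the question, it looks roughly like this (a sketch; sessionFactory and the updated property are hypothetical):

// secUserobj was loaded in an earlier, already closed session.
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // Merge copies the detached state onto a persistent instance and
    // returns that attached instance; keep working with the return value.
    SecUser attached = session.Merge(secUserobj);
    attached.DisplayName = "Updated name"; // hypothetical property

    tx.Commit();
}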
I'm experiencing something rather odd. I'm tinkering with NHibernate 3.2 mapping by code and have a very simple object model I'm using just to play.
None of my properties in the entire model are marked as virtual, because I don't want lazy loading. I'm mapping by code, and in each class mapping I'm setting Lazy(false);
However, when it comes to mapping collections, if I try to access a collection after the session has ended I get the error "failed to lazily initialize a collection of role...".
I have to explicitly set collectionMapping.Lazy(CollectionLazy.NoLazy); before it will eager-load the collection. It was my understanding that lazy loading was not possible unless the properties in your model were defined as virtual?
Have I fundamentally missed something?
virtual is needed for more than just lazy loading. NHibernate requires members to be virtual because it creates a run-time proxy of the class and injects behavior.
Virtual properties and methods are only needed for lazy associations (many-to-one or one-to-one), because NHibernate needs to set a proxy entity on the association property.
Collections (one-to-many and many-to-many) don't need any virtual properties because only the collection is lazy, not the entities in the collection. NHibernate will always use its own collection classes, even if you disable lazy loading.
You still need to use IList<T> instead of List<T>, because NH needs its own collection implementation.
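For completeness, a minimal mapping-by-code sketch with a non-lazy entity and a non-lazy collection (Order, OrderLine and the property names are hypothetical):

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class OrderMap : ClassMapping<Order>
{
    public OrderMap()
    {
        Lazy(false); // the entity itself will not be proxied
        Id(x => x.Id, m => m.Generator(Generators.Identity));

        Bag(x => x.Lines, c =>
        {
            c.Lazy(CollectionLazy.NoLazy); // eager-load the collection
            c.Key(k => k.Column("OrderId"));
            c.Cascade(Cascade.All);
        },
        r => r.OneToMany());
    }
}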
Consider:
You won't get very far in a complex model without lazy loading, unless your database fits into RAM or you don't mind cutting your OO model into pieces, which destroys both maintainability and performance.
You can have entities without virtual members when you use interfaces to create the proxies from. However, you should then only use these interfaces to reference entities, because any reference could be a proxy.
This question has been asked 500 different times in 50 different ways...but here it is again, since I can't seem to find the answer I'm looking for:
I am using EF4 with POCO proxies.
A.
I have a graph of objects I fetched from one instance of an ObjectContext. That ObjectContext is disposed.
B.
I have an object I fetched from another instance of an ObjectContext. That ObjectContext has also been disposed.
I want to set a related property on a bunch of things from A using the entity from B... something like:
foreach (var itemFromA in collectionFromA)
{
    itemFromA.RelatedProperty = itemFromB;
}
When I do that, I get the exception:
System.InvalidOperationException occurred
Message=The relationship between the two objects cannot be defined because they are attached to different ObjectContext objects.
Source=System.Data.Entity
StackTrace:
at System.Data.Objects.DataClasses.RelatedEnd.Add(IEntityWrapper wrappedTarget, Boolean applyConstraints, Boolean addRelationshipAsUnchanged, Boolean relationshipAlreadyExists, Boolean allowModifyingOtherEndOfRelationship, Boolean forceForeignKeyChanges)
at System.Data.Objects.DataClasses.RelatedEnd.Add(IEntityWrapper wrappedEntity, Boolean applyConstraints)
at System.Data.Objects.DataClasses.EntityReference`1.set_ReferenceValue(IEntityWrapper value)
at System.Data.Objects.DataClasses.EntityReference`1.set_Value(TEntity value)
at
I guess I need to detach these entities from the ObjectContexts when they are disposed in order for the above to work... The problem is, detaching all entities from my ObjectContext when it is disposed seems to destroy the graph. If I do something like:
objectContext.ObjectStateManager
    .GetObjectStateEntries(EntityState.Added | EntityState.Deleted | EntityState.Modified | EntityState.Unchanged)
    .Select(i => i.Entity).OfType<IEntityWithChangeTracker>().ToList()
    .ForEach(i => objectContext.Detach(i));
All the relations in the graph seem to get unset.
How can I go about solving this problem?
@Danny Varod is right. You should use one ObjectContext for the whole workflow. Moreover, because your workflow seems to be one logical feature spanning multiple windows, it should probably also use a single presenter. Then you would follow the recommended approach: a single context per presenter. You can call SaveChanges multiple times, so it should not break your logic.
The source of this issue is a well-known problem: a deficiency of the dynamic proxies generated on top of POCO entities, combined with the Fixup methods generated by the POCO T4 template. These proxies still hold a reference to the context after you dispose it. Because of that, they think they are still attached and cannot be attached to another context. The only way to force them to release the reference to the context is manual detaching. At the same time, once you detach an entity from the context it is removed from the related attached entities, because you can't have a mix of attached and detached entities in the same graph.
The issue actually does not occur in the code you call:
itemFromA.RelatedProperty = itemFromB;
but in the reverse operation triggered by the Fixup method:
itemFromB.RelatedAs.Add(itemFromA);
I think the ways to solve this are:
Don't do this; use a single context for the whole unit of work - that is the intended usage.
Remove the reverse navigation property so that the Fixup method doesn't trigger that code.
Don't use the POCO T4 template with Fixup methods, or modify the T4 template so that it doesn't generate them.
Turn off lazy loading and proxy creation for these operations. That will remove the dynamic proxies from your POCOs, and as a result they will be independent of the context.
To turn off proxy creation and lazy loading use:
var context = new MyContext();
context.ContextOptions.ProxyCreationEnabled = false;
context.ContextOptions.LazyLoadingEnabled = false;
You can actually try to write a custom method to detach the whole object graph, but as you said, it has been asked 500 times and I haven't seen a working solution yet - except serializing and deserializing into a new object graph.
I think you have a few different options here, 2 of them are:
Leave context alive until you are done with the process, use only 1 context, not 2.
a. Before disposing of context #1, create a deep clone of the graph, using BinaryStreamer or a tool such as ValueInjecter or AutoMapper.
b. Merge changes from context #2 into the cloned graph.
c. Upon saving, merge changes from the cloned graph into a graph created by a new ObjectContext.
For future reference, this MSDN blog link can help you decide what to do when:
http://blogs.msdn.com/b/dsimmons/archive/2008/02/17/context-lifetimes-dispose-or-reuse.aspx
I don't think you need to detach to solve the problem.
We do something like this:
public IList<Contact> GetContacts()
{
    using (myContext mc = new myContext())
    {
        return mc.Contacts.Where(c => c.City == "New York").ToList();
    }
}

public IList<Sale> GetSales()
{
    using (myContext mc = new myContext())
    {
        return mc.Sales.Where(s => s.City == "New York").ToList();
    }
}

public void SaveContact(Contact contact)
{
    using (myContext mc = new myContext())
    {
        // Attach the detached entity and mark it modified so SaveChanges issues an UPDATE.
        mc.Contacts.Attach(contact);
        mc.ObjectStateManager.ChangeObjectState(contact, EntityState.Modified);
        mc.SaveChanges();
    }
}

public void Link()
{
    var contacts = GetContacts();
    var sales = GetSales();

    foreach (var c in contacts)
    {
        c.AddSales(sales.Where(s => s.Seller == c.Name));
        SaveContact(c);
    }
}
This allows us to pull the data, pass it to another layer, let it do whatever it needs to do, and then pass it back so we can update or delete it. We do all of this with a separate context per method (effectively one per request).
The important thing to remember is that IEnumerables use deferred execution, meaning they don't actually pull the data until you count or iterate over them. So if you want to use the results outside your context, you have to call ToList() so that the query is executed and a list is created. Then you can work with that list.
EDIT: Updated to be clearer, thanks to @Nick's input.
OK, I get that your object context is long gone.
But let's look at it this way: Entity Framework implements the unit-of-work concept, in which it tracks the changes you make to your object graph so it can generate the SQL corresponding to those changes. Without being attached to a context, there is no way it can track changes.
If you have no control over the context, then I don't think there is anything you can do.
Otherwise, there are two options:
Keep your object context alive for a longer lifespan, such as the session of the logged-in user.
Try regenerating your entity classes using the self-tracking entities T4 template, which enables change tracking in a disconnected state.
But even with self-tracking entities, you might still run into minor issues.
If I have an object that lazy loads an association with very large objects, is there a way I can do processing at the time the lazy load occurs? I thought I could use AssociateWith or LoadWith from DataLoadOptions, but there are very, very specific restrictions on what you can do in those. Basically I need to be notified when an EntitySet<> decides it's time to load the associated object, so I can catch that event and do some processing on the loaded object. I don't want to simply walk through the EntitySet when I load the parent object, because that will force all the lazy loaded items to load (defeating the purpose of lazy loading entirely).
Subscribe to the ListChanged Event
EntitySet<T> exposes an event called ListChanged which you can use to detect if an item is being added. Evaluate the ListChangedType property of the ListChangedEventArgs. Here's a link to the values available in the ListChangedType enum.
There is no danger of forcing the load to execute as long as you avoid requesting the enumerator.
http://msdn.microsoft.com/en-us/library/system.componentmodel.listchangedtype.aspx
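A minimal sketch of that approach (db, Customer and its Orders EntitySet<Order> are hypothetical designer-generated names):

using System;
using System.ComponentModel;
using System.Data.Linq;
using System.Linq;

var customer = db.Customers.First();

// Subscribing does not enumerate the set, so it does not trigger the load.
customer.Orders.ListChanged += (sender, e) =>
{
    if (e.ListChangedType == ListChangedType.ItemAdded)
    {
        // e.NewIndex is the index of the item that was just added while
        // the deferred source was being enumerated.
        Console.WriteLine("Order loaded at index {0}", e.NewIndex);
    }
};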
I don't see any extensibility points available for this; the only thing I can see is that in the FK entity there is an OnCreated partial method on each individual object that gets called from the constructor...
So the constructor calls OnCreated, but personally I'm not 100% sure that loading the entity set constructs each individual object at that moment and fires that method...
HTH.
There are a bunch of built-in extensibility methods on the DataContext and data classes generated by LINQ to SQL.
http://msdn.microsoft.com/en-us/library/bb882671.aspx
http://csainty.blogspot.com/2008/01/linq-to-sql-extending-data-classes.html
Either of these may be able to serve the purpose you need.
You are definitely not bound to use the default EntitySet<>; you can use any IList<> collection instead. I did a little reflection on EntitySet<> but found no hook into the Load() method, which implements enumerating the lazy source of the entity set (it's where the EntitySet actually gets queried and materialized).
LINQ to SQL will use the Assign() method to assign an IEnumerable source (which is lazy by default) to your collection. Starting from there, you can implement your own lazy-loading collection with a custom hook at the point where you first enumerate the source collection (i.e., execute the query).
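A minimal sketch of that idea, independent of the LINQ to SQL mapping wiring: a collection that holds on to a deferred IEnumerable<T> and invokes a callback the first time the source is actually enumerated (LazyNotifyingList and its member names are hypothetical):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

public class LazyNotifyingList<T> : IList<T>
{
    private readonly IEnumerable<T> _source;     // deferred query
    private readonly Action<IList<T>> _onLoaded; // processing hook
    private List<T> _items;

    public LazyNotifyingList(IEnumerable<T> source, Action<IList<T>> onLoaded)
    {
        _source = source;
        _onLoaded = onLoaded;
    }

    private List<T> Items
    {
        get
        {
            if (_items == null)
            {
                _items = _source.ToList(); // the query executes here
                _onLoaded(_items);         // notify once, after materialization
            }
            return _items;
        }
    }

    public T this[int index] { get { return Items[index]; } set { Items[index] = value; } }
    public int Count { get { return Items.Count; } }
    public bool IsReadOnly { get { return false; } }
    public void Add(T item) { Items.Add(item); }
    public void Clear() { Items.Clear(); }
    public bool Contains(T item) { return Items.Contains(item); }
    public void CopyTo(T[] array, int arrayIndex) { Items.CopyTo(array, arrayIndex); }
    public IEnumerator<T> GetEnumerator() { return Items.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
    public int IndexOf(T item) { return Items.IndexOf(item); }
    public void Insert(int index, T item) { Items.Insert(index, item); }
    public bool Remove(T item) { return Items.Remove(item); }
    public void RemoveAt(int index) { Items.RemoveAt(index); }
}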