What is the difference between the following:
db.AcceptAllChanges();
// vs
db.SaveChanges();
db.AddToCustomer()
// vs
db.Customers.AddObject(Mycustomer);
And why is there a db.Customers.DeleteObject(Mycustomer); but no db.DeleteFromCustomer(Mycustomer);?
When should I use each one?
Also, is Entity Framework thread-safe? I mean, if two threads update an object in the same context at the same time, will it crash?
Thanks in advance.
db.AcceptAllChanges() assumes you are finished with the associated change history and discards it; if you run into any further problems, you cannot recover those changes. db.SaveChanges(false), by contrast, keeps those changes in memory in case there are problems.
See http://blogs.msdn.com/b/alexj/archive/2009/01/11/savechanges-false.aspx for a more in-depth answer.
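A minimal sketch of that pattern, assuming two EF ObjectContext instances named context1 and context2 (placeholder names):

using System.Transactions;

// Persist to the database but keep the change history until the whole
// transaction has succeeded (the pattern from the linked post).
using (var scope = new TransactionScope())
{
    context1.SaveChanges(false);   // false = don't discard tracked changes yet
    context2.SaveChanges(false);

    scope.Complete();              // commit both contexts atomically

    // Only now is it safe to throw the change history away.
    context1.AcceptAllChanges();
    context2.AcceptAllChanges();
}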
db.AddToCustomer() is a strongly typed wrapper around db.Customers.AddObject(). Look at its definition and you'll see what I mean. I would use the db.AddToCustomer() method purely because it is strongly typed and gives you compile-time type checking.
I imagine the only reason there's no DeleteFromCustomer() is that they didn't think the work would be necessary (people tend to add more than they delete). There's nothing to stop you from creating your own extension method to implement it yourself, however.
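For example, a sketch of such an extension method (MyEntities stands in for your generated ObjectContext type):

// Hypothetical extension method mirroring the generated AddToCustomer wrapper.
public static class ObjectContextExtensions
{
    public static void DeleteFromCustomer(this MyEntities db, Customer customer)
    {
        // Simply delegates to the standard ObjectSet method.
        db.Customers.DeleteObject(customer);
    }
}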
EF is not thread-safe; if you want to perform concurrent updates you'll need to manage the locking yourself. See http://blog.cincura.net/230902-multithreading-with-entity-framework/ for more :)
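If you really must share one context, a crude sketch is to serialize access to it yourself (a context per thread is usually the better design; _sync and the method shown are illustrative only):

private static readonly object _sync = new object();

public void UpdateCustomer(MyEntities db, Customer customer)
{
    lock (_sync)   // only one thread may touch the shared context at a time
    {
        customer.Name = "Updated name";
        db.SaveChanges();
    }
}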
AcceptAllChanges only sets all added and modified entities in the ObjectContext's ObjectStateManager to the Unchanged state and detaches all deleted entities; it does not execute any changes in the database. SaveChanges executes the changes in the database and by default also accepts them (it can be configured not to).
AddToCustomer is the same as Customers.AddObject; it is just a shortcut (and the same goes for DeleteObject). The first method is generated by the code generator (and I believe it calls the second one, which is a standard method of ObjectSet).
Entity Framework is not thread-safe. Moreover, you should be very careful when sharing an ObjectContext among several threads.
I'm running a process that will affect a lot of records within a database for a user. I want to apply either all of the changes or none of them, depending on the result of all of the changes (e.g. if one of the sub-processes fails then no changes overall should take place). I also want to save notifications to the database to alert users of the outcome of the processes (e.g. if a sub-process fails then a notification is raised to let the user know that no changes were made due to reason x).
The best way I can think of to do this is to detach all of the entries from the change tracker as they are added, then create notifications for whatever has succeeded or failed and save changes. Then, when it comes to applying all the changes, I can iterate through the change tracker, reset each entity's state, and save changes once more.
The issue I'm facing with this approach is that when it comes time to reset the entity state, I don't know whether the entity was Added or Modified. I could implement my own change tracker to store the previous state of each entity, but that would make EF's change tracker redundant.
I could also add all of the entities only when I come to save them, but that would require passing many objects down a chain of nested methods right until the end.
Does anyone have any better suggestions, or is it standard practice to use one of the mentioned hacks for this problem?
It sounds like you are trying to implement the Unit of Work pattern. The DbContext in Entity Framework makes this fairly easy to use, as the DbContext itself is the unit of work.
Just instantiate a new context and make the changes you need to it. You can pass the context around to any functions that make their changes. Once the "logical unit" of operations is complete, call SaveChanges. As long as the individual methods do not call SaveChanges, you can compose them together into a single unit, committed once the entire logical operation has finished. Everything will be committed atomically, within a single transaction. The data won't be left in an inconsistent state.
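A rough sketch of that shape (MyDbContext and the helper methods are illustrative names, not from the question):

// Each sub-operation mutates the same context but never calls SaveChanges
// itself; the single SaveChanges at the end commits everything atomically.
using (var context = new MyDbContext())
{
    ApplyUserChanges(context);       // sub-process 1: modify records
    RecalculateTotals(context);      // sub-process 2: more changes
    AddOutcomeNotification(context, "All changes applied.");

    context.SaveChanges();           // one transaction: all or nothing
}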
You mentioned transactions. Are you using explicit transactions, or SaveChanges(false) together with AcceptAllChanges()?
You could also implement versioning of the data in the database. To me that is an easier and more correct approach: you only ever insert data and never update it (a one-to-many history). In that case you can simply delete the latest records, or mark them as inactive.
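A sketch of what such an insert-only, versioned record might look like (all names are illustrative):

public class CustomerRecordVersion
{
    public int Id { get; set; }             // a new row for every change
    public int CustomerId { get; set; }     // logical key: one customer, many versions
    public string Name { get; set; }
    public bool IsActive { get; set; }      // superseded rows are marked inactive
    public DateTime CreatedUtc { get; set; }
}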
I have a method that will Remove a set of entities.
I have a method that will AddOrUpdate a set of entities.
These methods are independently useful, but they have issues working together in entity framework.
The problem is that after removing a set of entities, for example (A,B,C,D), subsequent queries that resolve to one or more of those records always return cached garbage copies whose property values were nulled during the removal process. An intermediate DbContext.SaveChanges solves the issue, but introduces additional overhead and leaves the operation in a half-complete state, so it would also have to be wrapped in another transaction.
What's the best way to handle this?
Should I avoid an API that has Remove and Add/Update operations altogether, instead opting for an up-front hybrid operation that determines which entities are actually being removed and which ones are sticking around to be updated? Or
Should I keep the two useful methods and just wrap the two steps in a transaction scope, so that I can save changes to the context immediately after the remove, allowing subsequent adds/updates to properly reflect the removal, while still having the ability to commit or roll back at the end (e.g. if the new permissions can/cannot be set)?
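A sketch of that second option (the helper methods are placeholders for the remove and add/update steps):

using System.Transactions;

using (var scope = new TransactionScope())
{
    RemoveRevokedPermissions(context, user);    // step 1: deletes
    context.SaveChanges();                      // flush so later queries see the removals

    AddOrUpdatePermissions(context, user, newPermissions);   // step 2
    context.SaveChanges();

    scope.Complete();   // commit both steps, or roll everything back on failure
}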
I don't want to use lower-level operations such as turning off tracking or attaching/detaching entities.
Suppose the entities are permissions. Business logic dictates that I should use a logical two-step process to reset a user's permissions by first deleting any that I have permission to delete, followed immediately by trying to add/update any new permissions that I am allowed to assign.
When approaching this as two separate steps, I run into the following problem. Immediately after removing a set of permission entities like (A,B,C,D), Entity Framework mangles the object properties, setting many of them to null (presumably to sever foreign key relationships). The problem is that because the entities still exist in the database, when trying to "add or update" a permission which still has a record in the database but has been removed in the context, EF always returns the cached/garbage copy of it. So although I've removed the entity... I can't actually determine, within that same context, whether I need to re-attach/update it or add a new entity outright. In other words, the framework returns an entity as though it exists (because it does still exist in the database) in spite of it being flagged as removed, but that entity object holds garbage/null data, so I can't even tell at that point whether it's safe to add a new one or whether I should try to "un-remove" the existing one.
It seems to me that such a remove/add-or-update pattern is simply not a good fit for Entity Framework (or ORMs in general). Instead, I'd have to determine, in a single up-front operation, whether any of the new permissions already exist, so I can selectively delete the ones that are going away while updating the ones that are just being reassigned (a new access level, for example).
I'm working on a WP7 Mango app that makes use of Linq2SQL for data access.
I have a Note entity with an auto-generated key of type int.
The first time I add a new note to the db, the operation works fine: the note saves, and if I then delete it, it is removed from the db as well. The first entity always has Id=0.
But if I add a new note after removing the first one, I get an exception saying that the entity already exists. I concluded that the first entity with Id=0 was not removed, even though I called SubmitChanges on my data context.
Also, I'm using the same data context for all data operations in my repository, and the same repository instance everywhere (a singleton, for performance reasons).
To confirm this behavior, I tried the following succession of calls, and it failed:
this.DbContext.Notes.DeleteOnSubmit(value);
this.DbContext.SubmitChanges();
this.DbContext.Notes.InsertOnSubmit(value);
this.DbContext.SubmitChanges();
It says that it cannot add an Entity that already exists.
Any explanation for this behavior?
Thanks in advance.
Note: when I use two different instances of the data context, this behavior disappears.
Well, you really answered your own question at the end. Let's step through this:
You get the DataContext from the database.
You delete an entry and submit the changes to the database (OK, fine).
Now, on the insertion, you are using an OLD instance of the data context.
Every time you call
SubmitChanges();
you have to refresh your reference, because it is stale.
So if you have a method that performs multiple transactions, you NEED to refresh your local variable.
ONE instance of a data context should make ONE change.
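In other words, something like this sketch (NotesDataContext and connectionString stand in for your generated Linq2SQL context and its connection):

// Delete with one short-lived context...
using (var db = new NotesDataContext(connectionString))
{
    var note = db.Notes.Single(n => n.Id == noteId);
    db.Notes.DeleteOnSubmit(note);
    db.SubmitChanges();
}

// ...and insert with a fresh one, so no stale tracked entity gets in the way.
using (var db = new NotesDataContext(connectionString))
{
    db.Notes.InsertOnSubmit(newNote);
    db.SubmitChanges();
}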
I'm using .NET Entity Framework 4.1 with the code-first approach to effectively solve the following problem, here simplified.
There's a database table with tens of thousands of entries.
Several users of my program need to be able to
View the (entire) table in a GridRow, which implies that the entire table has to be downloaded.
Modify values of any arbitrary row; changes are frequent but need not be persisted immediately. It's expected that different users will modify different rows, but this is not always true. Some loss of changes is permitted, as users will most likely update the same rows to the same values.
On occasion, add new rows.
Sounds simple enough. My initial approach was to use a long-running DbContext instance. This one DbContext was supposed to track changes to the entities, so that when SaveChanges() is called, most of the legwork is done automatically. However, many have pointed out that this is not an optimal solution in the long run, notably here. I'm still not sure I understand the reasons, and I don't see what a unit of work would be in my scenario either. The user chooses herself when to persist changes, and let's say for simplicity that the client always wins. It's also important to note that objects that have not been touched don't overwrite any data in the database.
Another approach would be to track changes manually, or to use objects that track changes for me; however, I'm not too familiar with such techniques, and I would welcome a nudge in the right direction.
What's the correct way to solve this problem?
I understand that this question is a bit wishy-washy, but think of it as more fundamental. I lack a fundamental understanding of how to solve this class of problems. It seems to me that a long-lived DbContext is the right way, but knowledgeable people tell me otherwise, which leads me to confusion and imprecise questions.
EDIT 1
Another point of confusion is the existence of the Local property on the DbSet<> object. It invites me to use a long-running context, as another user has posted here.
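For reference, the pattern that Local invites looks roughly like this (a sketch assuming EF 4.1, a WPF DataGrid named grid, and illustrative context/set names):

using System.Data.Entity;   // for the Load() extension method

var context = new MyDbContext();        // long-lived, kept for the view's lifetime
context.Rows.Load();                    // materialize the table into the context
grid.ItemsSource = context.Rows.Local;  // ObservableCollection<T>, stays in sync with the context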
The problem with a long-running context is that it doesn't refresh data; I discussed these problems in more detail here. So if your user opens the list and modifies data for half an hour, she doesn't know about changes made in the meantime. But in the case of WPF, if your business action is:
Open the list
Do as many actions as you want
Trigger saving changes
then this whole flow is a unit of work, and you can use a single context instance for it. If you have a scenario where the last edit wins, you should not have problems with this unless somebody else deletes the record which the current user is editing. Additionally, after saving or cancelling changes you should dispose of the current context and load the data again; this will ensure that you really have fresh data for the next unit of work.
The context offers some features to refresh data, but it only refreshes data previously loaded (without relations), so, for example, new unsaved records will still be included.
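In code, that unit of work could look roughly like this (MyContext and Rows are illustrative names):

// One context per "open list -> edit -> save" cycle.
using (var context = new MyContext())
{
    var rows = context.Rows.ToList();   // 1. open the list

    // 2. the user edits the tracked entities for as long as needed...

    context.SaveChanges();              // 3. persist; last edit wins
}
// Create a new context afterwards so the next unit of work sees fresh data.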
Perhaps you can also read about the MS Sync Framework and local data caching.
Sounds to me like your users could have a (cached) copy of the data for an indefinite period of time. The longer the users work with cached data, the greater the odds that they become disconnected from the database connection in the DbContext. My guess is that EF doesn't handle this well, and you probably want to deal with that (e.g. an occasionally connected architecture). I would expect that implementing this may solve many of your issues.
Sorry about the title - hopefully the question will make clear what I want to know, then maybe someone can suggest a better title and I'll edit it!
We have a domain model. Part of this model is a collection of "Assets" that the user currently has. The user can then create "Actions" that are possible future changes to the state of these "Assets". At present, these actions have an "Apply" method and a reference to their associated "Asset". This "Apply" method makes a modification to the "Asset" and returns it.
At various points in the code, we need to pull back a list of assets with any future dated actions applied. However, we often need to do this within the scope of an NHibernate transaction and therefore when the transaction is committed the changes to the "Asset" will be saved as well - but we don't want them to be.
We've been through various ways of doing this:
Cloning a version of the "Asset" (so that it is disconnected from NHibernate) and then applying the "Action" and returning this cloned copy.
Actually using NHibernate to disconnect the object before returning it.
Obviously these each have various (massive!) downsides.
Any ideas? Let me know if this question requires further explanation and what on earth I should change the title to!
It's been a while since I had any NHibernate fun, but could you retrieve the Assets using a second NHibernate session? Changes made to the Asset will then not be saved when the transaction on the first session commits.
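Something like this sketch, assuming access to the ISessionFactory (FutureActions and the method shape are illustrative):

using System.Linq;
using NHibernate.Linq;

// Load the assets in a second session that is never flushed, so the
// applied actions cannot leak into the first session's commit.
IList<Asset> GetProjectedAssets(ISessionFactory sessionFactory)
{
    using (var readSession = sessionFactory.OpenSession())
    {
        readSession.DefaultReadOnly = true;   // belt and braces: no dirty tracking
        var assets = readSession.Query<Asset>().ToList();
        foreach (var asset in assets)
            foreach (var action in asset.FutureActions)
                action.Apply(asset);
        return assets;
    }
}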
You could manage this with NHibernate using ISession.Evict(obj) or similar techniques, but honestly it sounds like you're missing a domain concept. I would model this as:
var asset = FetchAsset();
var proposedAsset = asset.ApplyActionsToClone();
The proposedAsset would be a clone of the original asset with the actions applied to it. This cloned object would be disconnected from NHibernate and therefore not persisted when the Unit of Work commits. If applying the actions is expensive, you could even do the following:
asset.ApplyProposedChanges(proposedAsset);
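ApplyActionsToClone might be sketched like this (DeepClone and FutureActions are assumed helpers on the domain model, not NHibernate APIs):

public Asset ApplyActionsToClone()
{
    // The clone is never attached to an NHibernate session.
    var clone = this.DeepClone();      // assumed deep-copy helper
    foreach (var action in this.FutureActions)
        action.Apply(clone);           // mutate only the disconnected copy
    return clone;
}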
I have been working around a similar problem where performance was also an issue, so it was not possible to re-load the aggregate using a secondary (perhaps stateless) session. And because the entities that needed to be changed "temporarily" were very complex, I could not easily clone them.
What I ended up with was "manually" rolling back the changes to what would be the assets in your case. It turned out to work well. We stored each action applied to the entity as a list of events (in memory, that is). After use, the events could be re-read and each change rolled back by a counter-action.
If only a small variety of actions can be applied, I would say it's easily manageable to create a counter-action for each; otherwise it might be possible to create a more generic mechanism.
We had only four actions, so we went with the manual approach.
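A sketch of that event/counter-action mechanism (IAssetEvent and the surrounding names are illustrative, not from the original code):

public interface IAssetEvent
{
    void Apply(Asset asset);    // the action itself
    void Revert(Asset asset);   // its counter-action
}

var applied = new List<IAssetEvent>();
foreach (var evt in pendingEvents)
{
    evt.Apply(asset);
    applied.Add(evt);           // remember what was applied, in order
}

// ... read the projected state of the asset ...

for (int i = applied.Count - 1; i >= 0; i--)
    applied[i].Revert(asset);   // undo in reverse order before the commit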
Sounds like you want to use a Unit of Work wrapper, so you can commit or revert changes as needed.