For example, I have this code for an event handler:
public void OnDataArrived(string data)
{
    // do some processing and save it to the DB using EF
    ctx.Add(x);
    ctx.SaveChanges();
}
Is there any chance that EF may error if this event fires a couple of times at the same time?
thanks
The context objects from Entity Framework are not thread-safe, so this will break. You will need to synchronize access to the context if events can be processed in parallel.
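For example, a minimal sketch of synchronizing the handler (the lock, the shared ctx field and the MyEntity type are illustrative, not from the question):
private static readonly object ContextLock = new object();

public void OnDataArrived(string data)
{
    var x = new MyEntity { Payload = data };  // hypothetical mapping from the event payload
    lock (ContextLock)
    {
        // only one thread at a time may touch the shared context
        ctx.Set<MyEntity>().Add(x);
        ctx.SaveChanges();
    }
}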
EF 5 can work in several different models, depending on how you want to use it. There are templates for using context-tracked entities, self-tracked entities, or POCOs. For your case, I would recommend not keeping your context object around. Self-tracking entities are probably what you're looking for - they store internally all of the information needed to update the database instead of relying on the context to track it.
If you go the self-tracked route, then your OnDataArrived method would just create a new context object and update the entity, which would also address the threading issue mentioned by weismat.
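A rough sketch of that approach, with a context created per call so concurrent events never share one (MyContext and MyEntity are placeholders, not from the question):
public void OnDataArrived(string data)
{
    using (var ctx = new MyContext())
    {
        var x = new MyEntity { Payload = data };  // hypothetical mapping from the event payload
        ctx.Set<MyEntity>().Add(x);
        ctx.SaveChanges();
    }
}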
EF Core 6 and .NET 6.
Suppose all my entities have a LastUpdateAt property, which is a DateTime that gets updated every time an entity is added or modified.
I get an entity from the context and show it to the user (web page, WPF window, whatever). At some point, the user clicks a Save button.
Before I save, I want to check if the entity has been updated by someone else since I got my copy. However, I'm struggling to see how to do this.
If I query the context, it just gives me back the entity I already have (including any changes my user has made).
If I refresh the entity, it overwrites the one in my context, losing my user's changes.
How do I check if the database version has a newer time stamp than the one in my context?
Thanks
Moving the discussion here since I need to paste longer text. In this article it says that, during SaveChanges(), if the database version was modified in the meantime, EF will throw a DbUpdateConcurrencyException. In that exception you have all three values, and you can decide how to resolve the conflict:
Resolving a concurrency conflict involves merging the pending changes from the current DbContext with the values in the database. What values get merged will vary based on the application and may be directed by user input.
There are three sets of values available to help resolve a concurrency conflict:
Current values are the values that the application was attempting to write to the database.
Original values are the values that were originally retrieved from the database, before any edits were made.
Database values are the values currently stored in the database.
If you are loading an entity, keeping a DbContext instance open, updating that entity, and then saving to the same DbContext instance, then by default you are relying on EF to manage concurrency, which follows a "last in wins" behavior. You can let EF manage the concurrency by adding a [ConcurrencyCheck] attribute to the LastUpdateAt property or by using a row version via [Timestamp]. This will cause EF to fail the update if the underlying data has changed in the meantime. From there you have to decide how you want to handle it.
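For illustration, a minimal sketch of the attribute approach and of catching the resulting exception in EF Core (the Foo entity mirrors the code further down; everything else here is a placeholder):
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

public class Foo
{
    public int FooId { get; set; }

    [ConcurrencyCheck]                     // or use a [Timestamp] byte[] RowVersion instead
    public DateTime LastUpdateAt { get; set; }
}

// when saving:
try
{
    context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
    foreach (var entry in ex.Entries)
    {
        var currentValues = entry.CurrentValues;        // what we tried to write
        var databaseValues = entry.GetDatabaseValues(); // what is in the database now
        // decide how to merge, then e.g. entry.OriginalValues.SetValues(databaseValues)
        // and retry SaveChanges, or surface the conflict to the user.
    }
}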
If you want to perform the concurrency check yourself then there are a couple of options.
Structure your code to shorten the lifespan of the DbContext, using either detached entities or projected view models. This generally has flow-on benefits for performance, since a longer-lived DbContext can easily cause bloat or accumulate "poisoned" entities if kept alive too long. AutoMapper is a great tool to assist here: use ProjectTo to get the view models, then Map(source, destination) to copy the values across afterward. In this way you load the data, including the last-modified-at value, and make your changes; then, when saving, you load the data again, validate the modified-at value, copy the values across, and save.
Scope a DbContext instance to check the data before saving.
private DateTime getFooLastUpdateAt(int fooId)
{
    using (var context = new AppDbContext())
    {
        var lastUpdateAt = context.Foos
            .Where(x => x.FooId == fooId)
            .Select(x => x.LastUpdateAt)
            .Single();
        return lastUpdateAt;
    }
}
This could use an injected DbContext factory or similar to create the DbContext instance.
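And a hedged sketch of using that check before saving (the _context field holding the tracked entity and the chosen exception type are illustrative only):
public void SaveFoo(Foo editedFoo)
{
    var databaseStamp = getFooLastUpdateAt(editedFoo.FooId);
    if (databaseStamp > editedFoo.LastUpdateAt)
    {
        // someone else saved after this copy was loaded; resolve before overwriting
        throw new InvalidOperationException("The record was modified by another user.");
    }

    editedFoo.LastUpdateAt = DateTime.UtcNow;
    _context.SaveChanges();   // _context is the context tracking editedFoo
}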
I have seen other questions about this same error, but I am unable to correct the error with those suggestions in my code; I think that this is a different problem and not a duplicate.
I have an app that makes a series of rules, of which the user can set properties in the GUI. There is a table of Rules in a connected database, with the primary key on the Rule.Id. When the user saves changes to a rule, the existing rule gets "IsActive=0" to hide it, then a new database record is made with the properties from the GUI written to the database. It looks to the user as though they have edited the rule, but the database actually sees a new rule reflecting the new properties (this allows for a history to be kept), connected to the old rule by another reference field.
In the C# code for the app, the View Model for each rule contains an EF Rule object property. When the user clicks "save", I use the parameters set in the view to build the ruleViewModel.Rule for each ruleViewModel they want to save, with properties matching the GUI. The MainViewModel contains the DbContext object called dbo, so I use the ruleViewModel.Rule to write to the mainViewModel.dbo.Entry, which I save to the Entity Framework. Here are the three basic steps performed for each saveable Rule View Model:
// get the rule from the GUI and use it to make sure we are updating the right rule in EF (which is connected to the mainViewModel)
var dboItem = ruleViewModel.MainViewModel.dbo.Rules.Single(r => r.Id == ruleViewModel.Rule.Id);
// set the values in the EF item to be those we got from the GUI
ruleViewModel.MainViewModel.dbo.Entry(dboItem).CurrentValues.SetValues(ruleViewModel.Rule);
// Save the differences
ruleViewModel.MainViewModel.dbo.SaveChanges();
If the user only saves a single rule, it all works fine, but if they subsequently try to save another, or if they save more than one at once, they get the following error, which is returned by the .SetValues(...) line:
Message = "The property 'Id' is part of the object's key information and cannot be modified. "
I see from other questions on this subject that there is a feature of EF that stops you from writing the same object twice to the database with a different Id, so this error often happens within a loop. I have tried using some of the suggestions, like adding
viewModel.MainViewModel.dbo.Rules.Add(dboItem);
and
viewModel.MainViewModel.dbo.Entry(dboItem).Property(x => x.Id).IsModified = false;
before the SaveChanges() command, but that has not helped with the problem (not to mention changing the function of the code). I see that some other suggestions say that the Entry should be created within the loop, but in this case, the entries are all existing rules in the database - it seems to me (perhaps erroneously) that I cannot create them inside the save loop, since they are the objects over which the loop is built - for each entity I find, I want to save changes.
I'm really confused about what to do and tying myself increasingly in knots trying to fix the error. It's been several days now and my sanity and self-esteem are beginning to wane! Any pointers to get me working in the right direction to stop the error appearing and allow me to set the database values would be really welcome, as I feel like I have hit a complete dead end! The first time around the loop, everything works perfectly.
Aside from the questionable location of the DbContext and view models containing entities, this looks like it would work as expected. I'm assuming from the MVVM tag that this is a Windows application rather than a web app. The only issue is that this assumes that the Rule entity in your ruleViewModel is detached from the DbContext. If the DbContext is still tracking that entity reference then getting the entity from the DbContext again would pass you back the same reference.
It would probably be worth testing this once in a debug session. If you add the following:
var dboItem = ruleViewModel.MainViewModel.dbo.Rules.Single(r => r.Id == ruleViewModel.Rule.Id);
bool isReferenceSame = Object.ReferenceEquals(dboItem, ruleViewModel.Rule);
Do you get an isReferenceSame value of True or False? If True, the DbContext in your main view model is still tracking the Rule entity and the whole get dboItem and SetValues isn't necessary. If False, then the ruleViewModel is detached.
If the entities are attached and being tracked then edits to the view model entities would be persisted when you call a SaveChanges on the DbContext. (No load & SetValues needed) This should apply to single or multiple entity edits.
If the entities are detached then normally the approach for updating an entity across DbContext instances would look more like:
var context = mainViewModel.dbo;
foreach (var ruleViewModel in updatedRuleViewModels)
{
    // This should associate the entity in the ruleViewModel with the DbContext and set its tracking state to Modified.
    context.Entry(ruleViewModel.Rule).State = EntityState.Modified;
}
context.SaveChanges();
There are a couple of potential issues with this approach that you should consider avoiding if possible. A DbContext should be kept relatively short-lived, so seeing a reference to a DbContext within a view model is a bit of a red flag. Overall I don't recommend putting entity references inside view models or passing them around outside the scope of the DbContext they were created in. EF certainly supports it, but it requires a bit more care and attention to assess whether entities are tracked or not, and in situations like web applications it opens the domain to invalid tampering (trusting the incoming entity, where any change gets attached or copied across, overwriting the data state).
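For what it's worth, a sketch of that alternative, keeping the view model as plain data and applying it with a short-lived context (AppDbContext and the Name property are placeholders; Rules, Id, and IsActive come from the question):
public class RuleViewModel
{
    public int RuleId { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

public void SaveRules(IEnumerable<RuleViewModel> editedRules)
{
    using (var context = new AppDbContext())
    {
        foreach (var vm in editedRules)
        {
            var rule = context.Rules.Single(r => r.Id == vm.RuleId);
            rule.Name = vm.Name;          // copy only the editable fields;
            rule.IsActive = vm.IsActive;  // the key is never touched
        }
        context.SaveChanges();
    }
}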
I have a situation where I am caching the contents of a table in the database. I am using Entity Framework 6 against an SQL Azure back-end. When the data in the table is updated, the process looks a little like this:
Receive data from UI
Insert/Update according to current state of store
Trigger cache rebuild (on a separate service)
Then on the cache service
Clear cache
Load all entities from the table
Add the collection to the cache
The code on the data service works along these lines - this is obviously a highly abstracted version, but it shows the steps we go through:
public void UpdateProperty(int newVal)
{
    SetNewPropertyVal(newVal);
    TriggerUpdateEvent(newVal);
}

private void SetNewPropertyVal(int newVal)
{
    using (var context = new MyContext())
    {
        var mySet = context.Set<MyEntityType>();
        var record = mySet.FindRecordToUpdate();
        record.UpdateableFieldValue = newVal;
        context.SaveChanges();
    }
}
The problem is that although context.SaveChanges() has been called before the TriggerUpdateEvent is raised, when the cache rebuild (running in a separate, fully independent thread against a separate instance of the DbContext) retrieves the collection of entities, it contains the old value for the updated property. This looks like a race condition: if I put a simple Thread.Sleep(1000) in the cache refresh it works consistently, but I can't believe that is a good solution to this problem.
How do I avoid triggering a cache rebuild until the Entity Framework has actually updated the data store? I thought a transaction might do the trick, but SQL Azure doesn't seem to offer them.
In this case @ivan-stoev correctly explained that there is no reason for this code to fail synchronously. That led me to explore the cache rebuild process in more detail, and there was a reliance on a second cache concealed away in an AutoMapper configuration that was causing the old value to show up in my searches.
So for anyone else who turns up with this problem, the bug isn't in this part of your code.
This question has been asked 500 different times in 50 different ways...but here it is again, since I can't seem to find the answer I'm looking for:
I am using EF4 with POCO proxies.
A.
I have a graph of objects I fetched from one instance of an ObjectContext. That ObjectContext is disposed.
B.
I have an object I fetched from another instance of an ObjectContext. That ObjectContext has also been disposed.
I want to set a related property on a bunch of things from A using the entity from B... something like:
foreach (var itemFromA in collectionFromA)
{
    itemFromA.RelatedProperty = itemFromB;
}
When I do that, I get the exception:
System.InvalidOperationException occurred
Message=The relationship between the two objects cannot be defined because they are attached to different ObjectContext objects.
Source=System.Data.Entity
StackTrace:
at System.Data.Objects.DataClasses.RelatedEnd.Add(IEntityWrapper wrappedTarget, Boolean applyConstraints, Boolean addRelationshipAsUnchanged, Boolean relationshipAlreadyExists, Boolean allowModifyingOtherEndOfRelationship, Boolean forceForeignKeyChanges)
at System.Data.Objects.DataClasses.RelatedEnd.Add(IEntityWrapper wrappedEntity, Boolean applyConstraints)
at System.Data.Objects.DataClasses.EntityReference`1.set_ReferenceValue(IEntityWrapper value)
at System.Data.Objects.DataClasses.EntityReference`1.set_Value(TEntity value)
at
I guess I need to detach these entities from the ObjectContexts when they dispose in order for the above to work... The problem is, detaching all entities from my ObjectContext when it disposes seems to destroy the graph. If I do something like:
objectContext.ObjectStateManager.GetObjectStateEntries(EntityState.Added | EntityState.Deleted | EntityState.Modified | EntityState.Unchanged)
.Select(i => i.Entity).OfType<IEntityWithChangeTracker>().ToList()
.ForEach(i => objectContext.Detach(i));
All the relations in the graph seem to get unset.
How can I go about solving this problem?
@Danny Varod is right. You should use one ObjectContext for the whole workflow. Moreover, because your workflow seems to be one logical feature spanning multiple windows, it should probably also use a single presenter. Then you would follow the recommended approach: a single context per presenter. You can call SaveChanges multiple times, so it should not break your logic.
The source of this issue is a well-known problem: a deficiency of the dynamic proxies generated on top of POCO entities, combined with the Fixup methods generated by the POCO T4 template. These proxies still hold a reference to the context after you dispose it. Because of that, they think they are still attached to that context and cannot be attached to another one. The only way to force them to release the reference is manual detaching. At the same time, once you detach an entity from the context, it is removed from the related attached entities, because you can't have a mix of attached and detached entities in the same graph.
The issue actually does not occur in the code you call:
itemFromA.RelatedProperty = itemFromB;
but in the reverse operation triggered by the Fixup method:
itemFromB.RelatedAs.Add(itemFromA);
I think the ways to solve this are:
Don't do this, and use a single context for the whole unit of work - that is the intended usage.
Remove the reverse navigation property so that the Fixup method doesn't trigger that code.
Don't use the POCO T4 template with Fixup methods, or modify the T4 template to not generate them.
Turn off lazy loading and proxy creation for these operations. That will remove the dynamic proxies from your POCOs, and because of that they will be independent of the context.
To turn off proxy creation and lazy loading use:
var context = new MyContext();
context.ContextOptions.ProxyCreationEnabled = false;
You can actually try to write a custom method to detach the whole object graph, but as you said, this has been asked 500 times and I haven't seen a working solution yet - except serialization and deserialization into a new object graph.
I think you have a few different options here; two of them are:
Leave context alive until you are done with the process, use only 1 context, not 2.
a. Before disposing of context #1, create a deep clone of the graph using BinaryStreamer or a tool such as ValueInjecter or AutoMapper.
b. Merge changes from context #2 into the cloned graph.
c. Upon saving, merge changes from the cloned graph into a graph created by a new ObjectContext.
For future reference, this MSDN blog post can help you decide what to do when:
http://blogs.msdn.com/b/dsimmons/archive/2008/02/17/context-lifetimes-dispose-or-reuse.aspx
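If you go with option 2a, a rough sketch of a serialization-based deep clone (this assumes proxy creation is turned off and that your POCO graph can be handled by the DataContractSerializer; it is just one way to clone, not the only one):
using System.IO;
using System.Runtime.Serialization;

public static T DeepClone<T>(T graph)
{
    // preserveObjectReferences = true so cyclic navigation properties don't recurse forever
    var serializer = new DataContractSerializer(
        typeof(T), null, int.MaxValue, false, true, null);

    using (var stream = new MemoryStream())
    {
        serializer.WriteObject(stream, graph);
        stream.Position = 0;
        return (T)serializer.ReadObject(stream);   // a fully detached copy of the graph
    }
}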
I don't think you need to detach to solve the problem.
We do something like this:
public IList<Contact> GetContacts()
{
    using (myContext mc = new myContext())
    {
        return mc.Contacts.Where(c => c.City == "New York").ToList();
    }
}

public IList<Sale> GetSales()
{
    using (myContext mc = new myContext())
    {
        return mc.Sales.Where(c => c.City == "New York").ToList();
    }
}

public void SaveContact(Contact contact)
{
    using (myContext mc = new myContext())
    {
        // attach the detached entity and mark it as modified so SaveChanges issues an UPDATE
        mc.Contacts.Attach(contact);
        mc.ObjectStateManager.ChangeObjectState(contact, EntityState.Modified);
        mc.SaveChanges();
    }
}

public void Link()
{
    var contacts = GetContacts();
    var sales = GetSales();
    foreach (var c in contacts)
    {
        c.AddSales(sales.Where(s => s.Seller == c.Name));
        SaveContact(c);
    }
}
This allows us to pull the data, pass it to another layer, let them do whatever they need to do, and then pass it back so we can update or delete it. We do all of this with a separate context per method (effectively one per request).
The important thing to remember is that IEnumerables use deferred execution, meaning they don't actually pull the information until you do a count or iterate over them. So if you want to use the results outside your context, you have to call ToList() so the query is executed and a list is created. Then you can work with that list.
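To illustrate the pitfall (sketch only): without the ToList(), the query is still deferred, so the caller would enumerate it after the context has been disposed and get an exception.
public IEnumerable<Contact> GetContactsBroken()
{
    using (myContext mc = new myContext())
    {
        // not materialized yet - enumeration happens after mc is disposed
        return mc.Contacts.Where(c => c.City == "New York");
    }
}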
EDIT Updated to be more clear, thanks to @Nick's input.
OK, I get that your object context is long gone.
But let's look at it this way: Entity Framework implements the unit-of-work concept, in which it tracks the changes you are making to your object graph so it can generate the SQL corresponding to those changes. Without being attached to a context, there is no way it can track changes.
If you have no control over context then I don't think there is anything you can do.
Otherwise there are two options,
Keep your object context alive for a longer lifespan, such as the session of the logged-in user.
Try to regenerate your classes using the self-tracking entities text template, which enables change tracking in a disconnected state.
But even with self-tracking, you might still run into small issues.
I need to reset a boolean field in a specific table before I run an update.
The table could have 1 million or so records, and I'd prefer not to have to do a select before the update, as that takes too much time.
Basically what I need in code is to produce the following in TSQL
update tablename
set flag = false
where flag = true
I have something close to what I need here: http://www.aneyfamily.com/terryandann/post/2008/04/Batch-Updates-and-Deletes-with-LINQ-to-SQL.aspx
but have yet to implement it, and I was wondering if there is a more standard way.
To keep within the restrictions we have for this project, we can't use SPROCs or directly write T-SQL in an ExecuteStoreCommand parameter on the context, which I believe you can do.
I'm aware that what I need to do may not be directly supported in EF4 and we may need to look at a SPROC for the job [in the total absence of any other way] but I just need to explore fully all possibilities first.
In an ideal EF world, the call above to update the flag would be possible; alternatively, it would be possible to get the entities with only the id and the boolean flag (minus the associated entities), loop through them setting the flag, and make a single SaveChanges call, but that may not be the way it works.
Any ideas?
Thanks in advance.
Liam
I would go to the stakeholder who introduced the restrictions about not using SQL or SProcs directly and present these facts:
Updates in an ORM (like Entity Framework) work this way: you load the object, you perform the modification, you save the object. That is the only valid way.
Obviously, in your case it would mean loading 1M entities and executing 1M updates separately (EF has no command batching - each command runs in its own round trip to the DB) - usually an absolutely useless solution.
The example you provided looks very interesting, but it is for LINQ to SQL, not for Entity Framework. Unless you implement it, you can't be sure it will work for EF, because the infrastructure in EF is much more complex. So you could spend several man-days on this without any result - that should be approved by the stakeholder.
A solution with an SProc or direct SQL will take you a few minutes and it will simply work.
In both solutions you will have to deal with another problem. If you already have materialized entities and you run such a command (via the mentioned extension or via SQL), these changes will not be mirrored in the already loaded entities - you will have to iterate over them and set the flag.
Both scenarios break the unit of work, because some data changes are executed before the unit of work is completed.
It is all about using the right tool for the right requirement.
Btw, loading related tables can be avoided. It is just about the query you run. Do not use Include and do not access navigation properties (in the case of lazy loading), and you will not load the relations.
It is possible to select only the Id (via projection), create a dummy entity (setting only the id and the flag to true), and execute updates of only the flag, but it will still execute up to 1M updates.
using (var myContext = new MyContext(connectionString))
{
    var query = from o in myContext.MyEntities
                where o.Flag == false
                select o.Id;

    foreach (var id in query)
    {
        var entity = new MyEntity
        {
            Id = id,
            Flag = true
        };
        // attach a stub entity and mark only the Flag property as modified
        myContext.MyEntities.Attach(entity);
        myContext.ObjectStateManager.GetObjectStateEntry(entity).SetModifiedProperty("Flag");
    }

    myContext.SaveChanges();
}
Moreover, it will only work with an empty object context (or at least no entity from the updated table can be attached to the context). So in some scenarios, running this before other updates will require two ObjectContext instances, which means manually sharing a DbConnection or using two database connections, and in the case of transactions, a distributed transaction and another performance hit.
Make a new EF model, and only add the one Table you need to make the update on. This way, all of the joins don't occur. This will greatly speed up your processing.
ObjectContext.ExecuteStoreCommand ( _
commandText As String, _
ParamArray parameters As Object() _
) As Integer
http://msdn.microsoft.com/en-us/library/system.data.objects.objectcontext.executestorecommand.aspx
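For example (a sketch only; MyEntities stands in for your generated context, and the SQL is the one from the question):
using (var context = new MyEntities())
{
    int rowsAffected = context.ExecuteStoreCommand(
        "UPDATE tablename SET flag = 0 WHERE flag = 1");
}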
Edit
Sorry, did not read the post all the way.