I am using SQL Server as my back-end database and Entity Framework 6 to access it.
I want to undo all the changes a method makes to the database. The method makes several calls to 4 different databases, and thus uses 4 different contexts, so I am unable to keep track of the changes in order to revert them at the end.
I am aware that context.ChangeTracker.Entries() keeps a record of DB changes, but I cannot use it here because the changes are lost as soon as a context goes out of scope, and I need to revert the changes at the end of the method, after accessing all 4 databases.
You should use TransactionScope or BeginTransaction. Here you can get some basic information on both topics to get you started, and here you can learn about the difference between them, which will help you choose the right one for you.
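A minimal sketch of the TransactionScope approach (the four context type names and the method are hypothetical): wrapping all four contexts in a single scope rolls everything back automatically unless Complete() is reached. Note that enlisting connections to multiple databases promotes the transaction to a distributed one, so MSDTC must be available.

using System.Transactions; // requires a reference to System.Transactions.dll

public void UpdateAllDatabases()
{
    using (var scope = new TransactionScope())
    {
        using (var db1 = new Db1Context())   // hypothetical context type
        {
            // ... make changes to the first database ...
            db1.SaveChanges();
        }

        using (var db2 = new Db2Context())   // hypothetical context type
        {
            // ... make changes to the second database ...
            db2.SaveChanges();
        }

        // ... repeat for the third and fourth databases ...

        // If Complete() is never reached (e.g. an exception is thrown),
        // all of the SaveChanges calls above are rolled back.
        scope.Complete();
    }
}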
I'm running a process that will affect a lot of records within a database for a user. I only want to apply all of the changes or none of them, depending on the result of all of the changes (e.g. if one of the sub-processes fails, then no changes overall should take place). I also want to save notifications to the database to alert users of the outcome of the processes (e.g. if a sub-process fails, then a notification is raised to let the user know that no changes were made due to reason x).
The best way I can think of to do this is to detach all of the entries within the change tracker as they are added, then create notifications if something has succeeded or failed and save changes, and then, when it comes to applying all the changes, iterate through the change tracker, reset each entity's state and save changes once more.
The issue I'm facing with this approach is that when it comes to resetting the entity state, I don't know whether the entity is Added or Modified. I could implement my own change tracker to store the previous state of each entity, but that would make EF's change tracker redundant.
I could also add all of the entities only when I come to save them, but that would require passing many objects down a chain of nested methods right until the end.
Does anyone have any better suggestions, or is it standard practice to use one of the hacks mentioned above for this problem?
It sounds like you are trying to implement the Unit of Work pattern. Entity Framework's DbContext makes this fairly easy to use, as the DbContext itself is the unit of work.
Just instantiate a new context and make the changes you need on it. You can pass the context around to any functions that make their own changes. Once the operations of the "logical unit" are complete, call SaveChanges. As long as the individual methods do not call SaveChanges, you can compose them together into a single unit, committed once the entire logical operation has finished. Everything will be committed atomically, within a single transaction, and the data won't be left in an inconsistent state.
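A minimal sketch of that composition, assuming a hypothetical MyContext, a Notifications set, notification helpers, and sub-process methods; only the outermost method calls SaveChanges:

public void ProcessRecords(int userId)
{
    try
    {
        using (var context = new MyContext())          // hypothetical context type
        {
            ApplySubProcessOne(context, userId);        // mutates tracked entities, no SaveChanges
            ApplySubProcessTwo(context, userId);
            context.Notifications.Add(Notification.Success(userId));   // hypothetical helper
            context.SaveChanges();                      // one atomic commit for the whole unit
        }
    }
    catch (Exception ex)
    {
        // Nothing was persisted; record the failure through a fresh context.
        using (var context = new MyContext())
        {
            context.Notifications.Add(Notification.Failure(userId, ex.Message));
            context.SaveChanges();
        }
    }
}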
You mentioned transactions. Have a look at Using Transactions or SaveChanges(false) and AcceptAllChanges()?
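For reference, a minimal sketch of that pattern with DbContext, which exposes it through the underlying ObjectContext (the two context types are hypothetical). Because the change trackers keep their state until AcceptAllChanges(), the whole operation can be retried if the transaction aborts:

using System.Data.Entity.Core.Objects;     // SaveOptions (EF6)
using System.Data.Entity.Infrastructure;   // IObjectContextAdapter
using System.Transactions;

var context1 = new Db1Context();           // hypothetical context types
var context2 = new Db2Context();

var oc1 = ((IObjectContextAdapter)context1).ObjectContext;
var oc2 = ((IObjectContextAdapter)context2).ObjectContext;

using (var scope = new TransactionScope())
{
    // Save without accepting, so the contexts still consider the changes pending.
    oc1.SaveChanges(SaveOptions.DetectChangesBeforeSave);
    oc2.SaveChanges(SaveOptions.DetectChangesBeforeSave);
    scope.Complete();
}

// Only after a successful commit do we reset the change trackers.
oc1.AcceptAllChanges();
oc2.AcceptAllChanges();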
You can also implement versioning of the data in the DB; to me that is an easier and more correct approach (you only ever insert data and never update; one-to-many). In that case you can simply delete the last records or mark them as inactive.
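A minimal sketch of such insert-only versioning (the PriceVersion entity, MyContext, and the field names are all hypothetical):

using System.Linq;

public class PriceVersion
{
    public int Id { get; set; }
    public int ProductId { get; set; }   // one product -> many versions (1-to-many)
    public decimal Value { get; set; }
    public bool IsActive { get; set; }
}

public void ChangePrice(MyContext db, int productId, decimal newValue)
{
    // Deactivate the current version instead of updating its data.
    var current = db.PriceVersions.Single(p => p.ProductId == productId && p.IsActive);
    current.IsActive = false;

    db.PriceVersions.Add(new PriceVersion { ProductId = productId, Value = newValue, IsActive = true });
    db.SaveChanges();
}

Undoing then simply means deleting the latest version, or reactivating the previous one.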
I'm using EF 4.0 with VS2010. I have 2 clients running my application.
When I save the changes in one client, I see them in the SQL server, but the 2nd client doesn't see them.
I need to restart the application to see the changes.
I'm using a data layer for all the DB stuff, and I leave my connection open all the time (as suggested in some post I read). Might that be the problem? Is there any workaround? I can't write the DL from scratch again.
Thanks
By default, if an entity is already loaded into the context, that same instance is returned whenever you query the database for a set of entities that includes it.
You need to set the MergeOption to OverwriteChanges to pick up the changes from the database.
// OverwriteChanges refreshes tracked entities with the current values from the database.
context.Products.MergeOption = MergeOption.OverwriteChanges;
var products = context.Products.Where(/**/);
It's better to create short-lived contexts to avoid such problems.
Entity Framework doesn't update already-loaded data when it changes on another connection. To get the new state you have to recreate the context and load the data again.
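A minimal sketch of that (MyEntities and Products are hypothetical names from the generated model):

using System.Collections.Generic;
using System.Linq;

public List<Product> LoadProducts()
{
    // A fresh context per call always queries the database
    // instead of returning stale tracked instances.
    using (var context = new MyEntities())
    {
        return context.Products.ToList();
    }
}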
I need the ability to sync multiple remote databases, upload and download, with my main database.
However, the problem lies in the fact that I need to sync the entire database, the database schema is going to be updated constantly, and I didn't see any way to grab the entire database schema without adding each individual table to the SyncScope.
This is problematic, as that scope will always be changing. I solved the initial problem by removing the existing scope and adding a new one, but I still cannot find any simple solution that doesn't involve querying system tables, parsing the results, and passing those results (for 150+ tables) back to my SyncScope.
The reasons I originally looked at Sync Framework are:
I need to be able to manage the direction of the sync (upload/download) when I do a sync programmatically from C# on a button click.
I need the ability to enable that button based on network connectivity.
There are additional tasks that need to be done on a sync download, such as changing the connection strings of the mobile units and storing information about their connection and unit in the database.
There are additional tasks that need to be run on a sync upload, such as verifying data against customer business rules through my OR/M, archiving the data to network storage, restarting the application, and changing connection strings again.
Eventually, I will need partial data sets, decided/chosen by the customer at run-time at the object level in an OR/M framework. These objects may correspond to one or more tables I won't know of at design-time, or that may not even exist at design-time.
Does anyone know whether another framework encompasses all my requirements, or whether there is a simpler way to do this in the Sync Framework?
For this task, especially with a changing schema, you could consider Merge Replication instead of the Sync framework.
I'm using .NET Entity Framework 4.1 with the code-first approach to solve the following problem, simplified here.
There's a database table with tens of thousands of entries.
Several users of my program need to be able to:
View the (entire) table in a GridRow, which implies that the entire table has to be downloaded.
Modify values of any random row; changes are frequent but need not be persisted immediately. It's expected that different users will modify different rows, but this is not always true. Some loss of changes is permitted, as users will most likely update the same rows to the same values.
On occasion add new rows.
Sounds simple enough. My initial approach was to use a long-running DbContext instance. This one DbContext was supposed to track changes to the entities, so that when SaveChanges() is called, most of the legwork is done automatically. However, many have pointed out that this is not an optimal solution in the long run, notably here. I'm still not sure I understand the reasons, and I don't see what a unit of work would be in my scenario either. The user chooses herself when to persist changes, and let's say for simplicity that the client always wins. It's also important to note that objects that have not been touched don't overwrite any data in the database.
Another approach would be to track changes manually, or to use objects that track changes for me; however, I'm not too familiar with such techniques and would welcome a nudge in the right direction.
What's the correct way to solve this problem?
I understand that this question is a bit wishy-washy, but think of it as more fundamental: I lack a fundamental understanding of how to solve this class of problems. It seems to me that a long-living DbContext is the right way, but knowledgeable people tell me otherwise, which leads me to confusion and imprecise questions.
EDIT1
Another point of confusion is the existence of the Local property on the DbSet<> object. It invites me to use a long-running context, as another user has posted here.
The problem with a long-running context is that it doesn't refresh data - I discussed these problems more here. So if your user opens the list and modifies data for half an hour, she doesn't know about changes made by others in the meantime. But in the case of WPF, if your business action is:
Open the list
Do as many actions as you want
Trigger saving changes
Then this whole flow is a unit of work, and you can use a single context instance for it. If you have a scenario where the last edit wins, you should not have problems with this unless somebody else deletes a record which the current user is editing. Additionally, after saving or cancelling changes you should dispose of the current context and load the data again - this will ensure that you really have fresh data for the next unit of work.
The context offers some features to refresh data, but it only refreshes data that was previously loaded (without relations), so, for example, new unsaved records will still be included.
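A minimal sketch of that flow in a WPF view model (MyContext and Order are hypothetical names); the context lives exactly as long as one unit of work:

using System.Collections.ObjectModel;
using System.Linq;

public class OrderListViewModel
{
    private MyContext _context;
    public ObservableCollection<Order> Orders { get; private set; }

    public void OpenList()
    {
        _context = new MyContext();        // the unit of work begins
        Orders = new ObservableCollection<Order>(_context.Orders.ToList());
    }

    public void Save()
    {
        _context.SaveChanges();            // commit all edits at once
        _context.Dispose();                // the unit of work ends
        OpenList();                        // reload so the next unit sees fresh data
    }
}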
Perhaps you can also read about the MS Sync Framework and local data caching.
It sounds to me like your users could have a (cached) copy of the data for an indefinite period of time. The longer the users work with cached data, the greater the odds that they become disconnected from the database connection in the DbContext. My guess is that EF doesn't handle this well, and you probably want to deal with that (e.g. an occasionally connected architecture). I would expect implementing that to solve many of your issues.
I am currently evaluating the Microsoft sync framework as a possible solution to sync data between two SQL databases. The examples I have seen so far rely on "tracking tables" containing the information used to track changes to be synced, with triggers on the main tables to keep them up to date.
My database already contains a lot of this information (for an existing feature of the software), so it would be good to make use of that instead of having to migrate it all to the new tracking tables. I also don't like the idea of doubling up each table into a data table and a tracking table, and adding three triggers to each table - that sounds like it is likely to be a performance issue.
Is there any way of customising the tracking mechanism used by the Sync Framework (i.e. the way in which changes are tracked)?
Yes, it is entirely possible to write your own logic to track changes and use it. For example, one of the DB sync providers I have used requires that you define a SelectIncrementalInsertsCommand. Which table(s) that data comes from and how you filter out the latest records is immaterial - you just need to define a query or a stored procedure that returns this data. The same applies to all the other incremental commands (which deal with the change tracking).
Along with that, you need an anchor value to define when the last sync happened. I think there is no point in trying to avoid this one, since it is used exclusively for synchronization and your existing tracking tables will not contain a replacement for it.
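As an illustration, a minimal sketch using the offline DbServerSyncProvider model; the table and stored procedure names are hypothetical, and the procedure would simply select from your existing tracking data:

using System.Data;
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

var serverProvider = new DbServerSyncProvider();

// Point the incremental-insert query at your existing tracking information.
var adapter = new SyncAdapter("Orders");
var selectInserts = new SqlCommand("usp_Orders_SelectIncrementalInserts");   // hypothetical sp
selectInserts.CommandType = CommandType.StoredProcedure;
selectInserts.Parameters.Add("@" + SyncSession.SyncLastReceivedAnchor, SqlDbType.Timestamp);
adapter.SelectIncrementalInsertsCommand = selectInserts;
serverProvider.SyncAdapters.Add(adapter);

// The anchor command defines "when the last sync happened".
var anchorCommand = new SqlCommand(
    "SELECT @" + SyncSession.SyncNewReceivedAnchor + " = min_active_rowversion() - 1");
anchorCommand.Parameters
    .Add("@" + SyncSession.SyncNewReceivedAnchor, SqlDbType.Timestamp)
    .Direction = ParameterDirection.Output;
serverProvider.SelectNewAnchorCommand = anchorCommand;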