How to prevent EF from retrieving certain objects - C#

Excuse me for my broken English.
In my application, all objects in the context have a property called ObsoleteFlag, which basically means if the object should still be used on the frontend. It's some sort of "soft-delete" flag without actually having to delete the data.
Now I want to prevent EF from returning any object where ObsoleteFlag is set to true (1)
If for example I retrieve object X, the navigational list property Y contains all the related objects of type Y, no matter what the ObsoleteFlag is set to.
Is there some general way of preventing EF from doing this? I don't want to check on the ObsoleteFlag property everywhere I access the context, and for every navigational property that may be loaded too.
Thanks and sorry for my broken English.

Two different approaches:
In your repository layer, have a GetAllWhatever() method that returns IQueryable<Whatever> and applies Where(x => !x.ObsoleteFlag), and use this whenever you retrieve objects of this type.
Create a view, e.g. CREATE VIEW ActiveWhatever AS SELECT * FROM Whatever WHERE ObsoleteFlag = 0, and bind to that rather than the table.
The first is essentially checking the flag every time, but doing so in one place, so you don't have to keep thinking about it.
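A minimal sketch of the first approach, assuming a code-first DbContext with a Whatevers set (the names are just placeholders for your own types):

public class WhateverRepository
{
    private readonly MyDbContext _context;

    public WhateverRepository(MyDbContext context)
    {
        _context = context;
    }

    // The soft-delete filter lives in exactly one place.
    public IQueryable<Whatever> GetAllWhatever()
    {
        return _context.Whatevers.Where(x => !x.ObsoleteFlag);
    }
}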
The second is much the same, but the work is pushed to the database instead of the .NET code. If you are going to modify the entities or add new entities you will have to make it a modifiable view, but just how that is done depends on the database in question (e.g. you can do it with triggers in SQL Server, and triggers or rules in PostgreSQL).
The second can also include having a rule or trigger for DELETE that sets your obsolete property instead of deleting, so that a normal delete as far as Entity Framework is concerned becomes one of your soft-deletes as far as the database is concerned.
I'd go for that approach unless you have a reason to object to a view existing just to support the application's implementation (that is, you feel strongly that the database should be concerned purely with the data rather than its use). Then again, if such a view is handy for one application it's likely handy for others, given the very meaning of this "obsolete" flag.
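If you go with the view, the change on the EF side can be as small as remapping the entity; a rough sketch assuming EF 6 code-first configuration and a view with the same columns as the table:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Point the entity at the filtered view instead of the underlying table.
    modelBuilder.Entity<Whatever>().ToTable("ActiveWhatever");
    base.OnModelCreating(modelBuilder);
}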

Related

C# EF 6 CurrentValues.SetValues cannot change Object's Key Information

I have seen other questions about this same error, but I am unable to correct the error with those suggestions in my code; I think that this is a different problem and not a duplicate.
I have an app that makes a series of rules, of which the user can set properties in the GUI. There is a table of Rules in a connected database, with the primary key on the Rule.Id. When the user saves changes to a rule, the existing rule gets "IsActive=0" to hide it, then a new database record is made with the properties from the GUI written to the database. It looks to the user as though they have edited the rule, but the database actually sees a new rule reflecting the new properties (this allows for a history to be kept), connected to the old rule by another reference field.
In the C# code for the app, the view model for each rule contains an EF Rule object property. When the user clicks "save", I use the parameters set in the view to build the ruleViewModel.Rule for each ruleViewModel they want to save, with properties matching the GUI. The MainViewModel contains the DbContext object called dbo, so I use the ruleViewModel.Rule to write to mainViewModel.dbo.Entry, which I then save to Entity Framework. Here are the three basic steps performed for each saveable rule view model:
// get the rule from the GUI and use it to make sure we are updating the right rule in EF (which is connected to the mainViewModel)
var dboItem = ruleViewModel.MainViewModel.dbo.Rules.Single(r => r.Id == ruleViewModel.Rule.Id);
// set the values in the EF item to be those we got from the GUI
ruleViewModel.MainViewModel.dbo.Entry(dboItem).CurrentValues.SetValues(ruleViewModel.Rule);
// Save the differences
ruleViewModel.MainViewModel.dbo.SaveChanges();
If the user only saves a single rule, it all works fine, but if they subsequently try to save another, or if they save more than one at once, they get the following error, which is returned by the ..SetValues(..) line:
Message = "The property 'Id' is part of the object's key information and cannot be modified. "
I see from other questions on this subject that there is a feature of EF that stops you from writing the same object twice to the database with a different Id, so this error often happens within a loop. I have tried using some of the suggestions, like adding
viewModel.MainViewModel.dbo.Rules.Add(dboItem);
and
viewModel.MainViewModel.dbo.Entry(dboItem).Property(x => x.Id).IsModified = false;
before the SaveChanges() command, but that has not helped with the problem (not to mention changing the function of the code). I see that some other suggestions say that the Entry should be created within the loop, but in this case, the entries are all existing rules in the database - it seems to me (perhaps erroneously) that I cannot create them inside the save loop, since they are the objects over which the loop is built - for each entity I find, I want to save changes.
I'm really confused about what to do and tying myself increasingly in knots trying to fix the error. It's been several days now and my sanity and self-esteem are beginning to wane! Any pointers to get me working in the right direction to stop the error appearing and allow me to set the database values would be really welcome, as I feel like I have hit a complete dead end. The first time around the loop, everything works perfectly.
Aside from the questionable location of the DbContext and view models containing entities, this looks like it would work as expected. I'm assuming from the MVVM tag that this is a Windows application rather than a web app. The only issue is that this assumes that the Rule entity in your ruleViewModel is detached from the DbContext. If the DbContext is still tracking that entity reference then getting the entity from the DbContext again would pass you back the same reference.
It would probably be worth testing this once in a debug session. If you add the following:
var dboItem = ruleViewModel.MainViewModel.dbo.Rules.Single(r => r.Id == ruleViewModel.Rule.Id);
bool isReferenceSame = Object.ReferenceEquals(dboItem, ruleViewModel.Rule);
Do you get an isReferenceSame value of True or False? If True, the DbContext in your main view model is still tracking the Rule entity and the whole get dboItem and SetValues isn't necessary. If False, then the ruleViewModel is detached.
If the entities are attached and being tracked, then edits to the view model entities would be persisted when you call SaveChanges on the DbContext (no load & SetValues needed). This applies to single or multiple entity edits.
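For example, if the entities turn out to be tracked, the whole save reduces to copying the GUI values onto the tracked Rule instances and calling SaveChanges once (the property names here are just placeholders):

var context = mainViewModel.dbo;
foreach (var ruleViewModel in updatedRuleViewModels)
{
    // The Rule is already tracked, so no Entry()/SetValues() round-trip is needed.
    ruleViewModel.Rule.Name = ruleViewModel.Name;         // hypothetical property
    ruleViewModel.Rule.IsActive = ruleViewModel.IsActive; // hypothetical property
}
context.SaveChanges();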
If the entities are detached then normally the approach for updating an entity across DbContext instances would look more like:
var context = mainViewModel.dbo;
foreach (var ruleViewModel in updatedRuleViewModels)
{
    // This associates the entity in the ruleViewModel with the DbContext and sets its tracking state to Modified.
    context.Entry(ruleViewModel.Rule).State = EntityState.Modified;
}
context.SaveChanges();
There are a couple of potential issues with this approach that you should consider avoiding if possible. A DbContext should be kept relatively short-lived, so seeing a reference to a DbContext inside a view model is a bit of a red flag. Overall I don't recommend putting entity references inside view models or passing them around outside the scope of the DbContext they were created in. EF certainly supports it, but it requires a bit more care and attention to assess whether entities are tracked or not, and in situations like web applications it opens the domain to tampering (you end up trusting the incoming entity, where any change gets attached or copied across, overwriting the stored data).
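As a rough illustration of a shorter-lived context (assuming you can construct one per save operation and the detached Rule entities carry their key values):

using (var db = new AppDbContext())   // hypothetical context type created just for this save
{
    foreach (var ruleViewModel in updatedRuleViewModels)
    {
        // Attach the detached entity and mark it Modified for this one unit of work.
        db.Entry(ruleViewModel.Rule).State = EntityState.Modified;
    }
    db.SaveChanges();
}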

Determine if the context of an entity proxy has been disposed

In an EF 6 project, I am writing validation functions for entities. Some are static, while others are instance methods of the entities themselves.
Ignoring whether this is bad practice or not, I'd like to check whether the entities were created using a context and if so, whether they are still attached.
Please note that these functions do NOT have access to the context object, just the entity classes.
As an example, a method validates a Department entity and cascades validation to all associated Department.Employee instances.
If the hierarchy was created manually, validation will succeed.
If the hierarchy was created using a context which is still alive, validation will succeed albeit slower.
If the hierarchy was created using a context which has been disposed, validation will fail with an ObjectDisposedException (provided proxy-creation was enabled and .Include(***) was not used).
So the question: is it possible to detect the above scenarios without access to a DbContext instance? If not, how can we best validate entire hierarchies irrespective of how they were created?
var result = true;
var departments = ???; // Constructed manually or through a DbContext instance.
foreach (var department in departments)
{
    result &= department.Validate();
    foreach (var employee in department.Employees)
    {
        result &= employee.Validate();
    }
}
EDIT: Please note that this is for a desktop application that cannot have long-running DbContext instances; they are almost always disposed immediately after retrieving data. Re-querying the database does not seem a viable option for validation, since it is triggered by trivial user input and would slow down the entire user experience.
From your question
Please note that these functions do NOT have access to the context object, just the entity classes.
two solutions come to mind, neither really palatable:
Build your own tracker and make it available to these methods somehow.
Add something to your entities, for example a WasLoaded property that gets set when you query your context. That WasLoaded could be set by either
Writing an EF interceptor that sets it.
Adding an artificial bit column with all values set to 1. Then map that to the property; the property will be false if you constructed it outside of the context, true if loaded from the context.
The tracker seems the cleanest because it doesn't pollute your model; the interceptor is a decent alternative if you don't mind the extra property on your model.
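A rough sketch of the bit-column idea in EF 6 code-first, assuming a WasLoaded bit column that the database always populates with 1 (e.g. a computed or defaulted column):

public class Department
{
    public int Id { get; set; }
    // False for instances newed up in memory; the column is always 1 in the database,
    // so it comes back true for anything materialized by the context.
    public bool WasLoaded { get; set; }
}

// In OnModelCreating:
modelBuilder.Entity<Department>()
    .Property(d => d.WasLoaded)
    .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed); // never written, always read back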
And while it doesn't answer your question directly, you could avoid the use of proxies, in which case your validation works the same way regardless, because you have your model in memory. There are the usual trade-offs to consider, though.
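For reference, turning proxies off in EF 6 and loading the graph eagerly would look something like this (context and set names are placeholders):

using (var ctx = new MyDbContext())
{
    ctx.Configuration.ProxyCreationEnabled = false;   // plain POCOs, no lazy loading
    var departments = ctx.Departments
                         .Include(d => d.Employees)   // requires using System.Data.Entity;
                         .ToList();
    // Validation later runs against in-memory objects, even after the context is disposed.
}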
I'm not sure how you'd detect the last scenario. I suppose you could have your tracker track more than the entities... have it also track the context's state.

Linq-To-Sql with WCF, Models, and POCO ViewModels Disconnected "DataContext" Timestamp/Rowversion

I have a Linq-To-Sql based repository class which I have been successfully using. I am adding some functionality to the solution, which will provide WCF based access to the database.
I have not exposed the generated Linq classes as DataContracts, I've instead created my own "ViewModel" as a POCO for each entity I am going to be returning.
My question is, in order to do updates and take advantage of some of the Linq-To-Sql features like cyclic references from within my Service, do I need to add a Rowversion/Timestamp field to each table in my database so I can use code like dc.Table.Attach(myDisconnectedObject)? The alternative seems ugly:
var updateModel = dc.Table.SingleOrDefault(t => t.ID == myDisconnectedObject.ID);
updateModel.PropertyA = myDisconnectedObject.PropertyA;
updateModel.PropertyB = myDisconnectedObject.PropertyB;
updateModel.PropertyC = myDisconnectedObject.PropertyC;
// and so on and so forth
dc.SubmitChanges();
I guess a RowVersion/TimeStamp column on each table might be the best and least intrusive option - just check for that one value, and you're sure whether or not your data might have been modified in the meantime. All other columns can be set to Update Check=Never. This will take care of handling the possible concurrency issues when updating your database from "returning" objects.
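With a timestamp column in place, the attach-based update becomes a one-liner per entity; a rough sketch with placeholder table and type names:

using (var dc = new MyDataContext())
{
    // Works because the entity carries the RowVersion/TimeStamp value it was read with;
    // 'true' marks it as modified so the mapped columns are written on SubmitChanges().
    dc.Orders.Attach(myDisconnectedObject, true);
    dc.SubmitChanges();
}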
However, the other thing you should definitely check out is AutoMapper - it's a great little component to ease those left-right-assignment orgies you have to go through when using ViewModels / Data Transfer Objects by making this mapping between two object types a snap. It's well used, well tested, used by many and very stable - a winner!
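For example (the exact API depends on your AutoMapper version; this is the older static-style usage, with hypothetical entity/view model names):

// One-time configuration, e.g. at application startup.
Mapper.CreateMap<Order, OrderViewModel>();

// Replaces the property-by-property copying.
OrderViewModel vm = Mapper.Map<Order, OrderViewModel>(orderEntity);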

Ensure LINQ to SQL Entities Delete On Submit

What is the best way to mark some entities for DeleteOnSubmit()? Is there a way to check and tell the context that this is for deletion?
Example: I have an entity which references an EntitySet<>, and I delete 4 of the 8 entities from the EntitySet<>. When submitting changes I want DeleteOnSubmit() to be called on those 4! This scenario should apply to a single EntityRef<> too.
Of course, the DataContext lives in another layer, so grabbing, changing, and sending back is the job.
Thank you.
This is pretty hard to answer based on the description of your architecture. Just because you're using a layered approach doesn't mean that you can't call DeleteOnSubmit... you'd just call your own method that wraps it, I presume.
Unless, of course, you're instantiating your DataContext object in the update routine. In that case you'd have to do something else. Your data layer could expose a method like MarkForDelete() which just adds the entity to a collection, then expose a separate SubmitChanges() that iterates over the collected items for deletion, attaches them to the DataContext and then does the actual DeleteAllOnSubmit() call.
That said I've never really bothered with the whole entity serialization/deserialization/reattach thing as it seems fraught with peril. I usually just collect the primary keys in a list, select out the entities and re-delete them. It's no more work, really.
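A rough sketch of that approach, with placeholder table and type names:

// The upper layer hands back just the keys of the children it removed.
public void DeleteItems(List<int> idsToDelete)
{
    using (var dc = new MyDataContext())
    {
        var doomed = dc.OrderItems.Where(i => idsToDelete.Contains(i.Id));
        dc.OrderItems.DeleteAllOnSubmit(doomed);
        dc.SubmitChanges();
    }
}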
Take a look at DeleteAllOnSubmit(). You pass this method a list of entities to be deleted.

Reconstituting domain objects from database: identity problem

We are using Linq to SQL to read and write our domain objects to a SQL Server database.
We are exposing a number of services (via WCF) to do various operations. Conceptually, the implementation of these operations consists of three steps: reconstitute the necessary domain objects from the database; execute the operation on the domain objects; persist the (now changed) domain objects back to the database.
The problem is that sometimes there are two or more instances of the same entity object, which can lead to inconsistencies when saving the objects back to the db. A little made-up example:
public void Move(string sourceLocationId, string destinationLocationId, string itemId);
which is supposed to move the item with the given id from the source to the destination location (actual services are more complicated, often involving many locations, items etc). Now, it could be that both source and destination location id are the same - a naive implementation would just reconstitute two instances of the entity object, which would lead to problems.
This issue is now "solved" by checking for it manually, i.e. we reconstitute the first location, check whether the id of the second is different from it, and if so reconstitute the second, and so on. This is obviously difficult and error-prone.
Anyway, I was actually surprised that there does not seem to be a "standard" solution for this in domain driven design. In particular, repositories or factories do not seem to solve this problem (unless they maintain their own cache, which then needs to be updated etc).
My idea would be to make a DomainContext object per operation, which tracks and caches the domain objects used in that particular method. Instead of reconstituting and saving individual domain objects, such an object would be reconstituted and saved as a whole (possibly using repositories), and it could act as a cache for the domain objects used in that particular operation.
Anyway, it seems that this is a common problem, so how is this usually dealt with? What do you think of the idea above?
The DataContext in Linq-To-Sql supports the Identity Map concept out of the box and should be caching the objects you retrieve. The objects will only be different if you are not using the same DataContext for each GetById() operation.
Linq to Sql objects aren't really valid outside of the lifetime of the DataContext. You may find Rick Strahl's Linq to SQL DataContext Lifetime Management a good background read.
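You can see the identity map at work with a quick check (the location id value here is made up):

using (var ctx = new DataContext())
{
    var a = ctx.Locations.First(o => o.LocationID == "LOC1");
    var b = ctx.Locations.First(o => o.LocationID == "LOC1");
    bool sameInstance = Object.ReferenceEquals(a, b);   // true: one instance per row per DataContext
}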
Also, the ORM is not responsible for logic in the domain. It's not going to disallow your example Move operation; it's up to the domain to decide what that means. Does it ignore it, or is it an error? It's your domain logic, and it needs to be implemented at the service boundary you are creating.
However, Linq-To-Sql does know when an object changes, and from what I've looked at, it won't record the change if you are re-assigning the same value. e.g. if Item.LocationID = 12, setting the locationID to 12 again won't trigger an update when SubmitChanges() is called.
Based on the example given, I'd be tempted to return early without ever loading an object if the source and destination are the same.
public void Move(string sourceLocationId, string destinationLocationId, string itemId)
{
    if (sourceLocationId == destinationLocationId)
        return;

    using (DataContext ctx = new DataContext())
    {
        Item item = ctx.Items.First(o => o.ItemID == itemId);
        Location destination = ctx.Locations.First(o => o.LocationID == destinationLocationId);
        item.Location = destination;
        ctx.SubmitChanges();
    }
}
Another small point, which may or may not be applicable, is that you should make your interfaces as chunky as possible. E.g. if you're typically going to perform 10 move operations at once, it's better to call one service method that performs all 10 operations than to make 10 calls of one operation each. ref: chunky vs chatty
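A chunkier signature for the made-up example might look something like this (a sketch, not a prescription):

public class MoveRequest
{
    public string SourceLocationId { get; set; }
    public string DestinationLocationId { get; set; }
    public string ItemId { get; set; }
}

// One service call, one DataContext, one SubmitChanges for the whole batch.
public void Move(IEnumerable<MoveRequest> moves) { /* ... */ }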
Many ORMs use two concepts that, if I understand you correctly, address your issue. The first, and most relevant here, is the Context: this is responsible for ensuring that only one object represents an entity (a database table row, in the simple case) no matter how many times or in how many ways it's requested from the database. The second is the Unit of Work; this ensures that updates to the database for a group of entities either all succeed or all fail.
Both of these are implemented by the ORM I'm most familiar with (LLBLGen Pro), however I believe NHibernate and others also implement these concepts.
