I have been exploring different methods of editing/updating a record within Entity Framework 5 in an ASP.NET MVC3 environment, but so far none of them tick all of the boxes I need. I'll explain why.
I have found three methods, for each of which I'll list the pros and cons:
Method 1 - Load original record, update each property
var original = db.Users.Find(updatedUser.UserId);
if (original != null)
{
original.BusinessEntityId = updatedUser.BusinessEntityId;
original.Email = updatedUser.Email;
original.EmployeeId = updatedUser.EmployeeId;
original.Forename = updatedUser.Forename;
original.Surname = updatedUser.Surname;
original.Telephone = updatedUser.Telephone;
original.Title = updatedUser.Title;
original.Fax = updatedUser.Fax;
original.ASPNetUserId = updatedUser.ASPNetUserId;
db.SaveChanges();
}
Pros
Can specify which properties change
Views don't need to contain every property
Cons
2 x queries on database to load original then update it
Method 2 - Load original record, set changed values
var original = db.Users.Find(updatedUser.UserId);
if (original != null)
{
db.Entry(original).CurrentValues.SetValues(updatedUser);
db.SaveChanges();
}
Pros
Only modified properties are sent to database
Cons
Views need to contain every property
2 x queries on database to load original then update it
Method 3 - Attach updated record and set state to EntityState.Modified
db.Users.Attach(updatedUser);
db.Entry(updatedUser).State = EntityState.Modified;
db.SaveChanges();
Pros
1 x query on database to update
Cons
Can't specify which properties change
Views must contain every property
Question
My question to you guys; is there a clean way that I can achieve this set of goals?
Can specify which properties change
Views don't need to contain every property (such as password!)
1 x query on database to update
I understand this is quite a minor thing to point out, but I may be missing a simple solution. If not, method one will prevail ;-)
You are looking for:
db.Users.Attach(updatedUser);
var entry = db.Entry(updatedUser);
entry.Property(e => e.Email).IsModified = true;
// other changed properties
db.SaveChanges();
I really like the accepted answer, and I believe there is yet another way to approach this. Let's say you have a very short list of properties that you wouldn't ever want to include in a View, so when updating the entity those are omitted. Let's say those two fields are Password and SSN.
db.Users.Attach(updatedUser);
var entry = db.Entry(updatedUser);
entry.State = EntityState.Modified;
entry.Property(e => e.Password).IsModified = false;
entry.Property(e => e.SSN).IsModified = false;
db.SaveChanges();
This example allows you to essentially leave your business logic alone after adding a new field to your Users table and to your View.
// For any property the view left null, fall back to the value already in
// the database, then apply the merged values to the tracked entity.
foreach (PropertyInfo propertyInfo in original.GetType().GetProperties()) {
if (propertyInfo.GetValue(updatedUser, null) == null)
propertyInfo.SetValue(updatedUser, propertyInfo.GetValue(original, null), null);
}
db.Entry(original).CurrentValues.SetValues(updatedUser);
db.SaveChanges();
I have added an extra update method onto my repository base class that's similar to the update method generated by Scaffolding. Instead of setting the entire object to "modified", it sets a set of individual properties. (T is a class generic parameter.)
public void Update(T obj, params Expression<Func<T, object>>[] propertiesToUpdate)
{
Context.Set<T>().Attach(obj);
foreach (var p in propertiesToUpdate)
{
Context.Entry(obj).Property(p).IsModified = true;
}
}
And then to call, for example:
public void UpdatePasswordAndEmail(long userId, string password, string email)
{
var user = new User {UserId = userId, Password = password, Email = email};
Update(user, u => u.Password, u => u.Email);
Save();
}
I like one trip to the database. It's probably better to do this with view models, though, in order to avoid repeating sets of properties. I haven't done that yet because I don't know how to avoid bringing the validation messages on my view-model validators into my domain project.
public interface IRepository
{
void Update<T>(T obj, params Expression<Func<T, object>>[] propertiesToUpdate) where T : class;
}
public class Repository : DbContext, IRepository
{
public void Update<T>(T obj, params Expression<Func<T, object>>[] propertiesToUpdate) where T : class
{
Set<T>().Attach(obj);
propertiesToUpdate.ToList().ForEach(p => Entry(obj).Property(p).IsModified = true);
SaveChanges();
}
}
Just to add to the list of options: you can also grab the object from the database and use an auto-mapping tool like AutoMapper to update the parts of the record you want to change.
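For example, a hedged sketch of that approach (assumes the AutoMapper package is installed; UpdateUserDto is a hypothetical view model that deliberately omits sensitive fields such as Password):

```csharp
// Map configuration: only properties present on the view model get copied.
var config = new MapperConfiguration(cfg => cfg.CreateMap<UpdateUserDto, User>());
var mapper = config.CreateMapper();

var original = db.Users.Find(updatedUser.UserId);
if (original != null)
{
    // Copies the view model's properties onto the tracked entity;
    // the change tracker then updates only the columns that changed.
    mapper.Map(updatedUser, original);
    db.SaveChanges();
}
```

This still costs two round trips (load plus save), but the mapping code no longer needs to be touched when properties are added to the view model.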
Depending on your use case, all the above solutions apply. However, this is how I usually do it:
For server-side code (e.g. a batch process) I usually load the entities and work with dynamic proxies. Usually in batch processes you need to load the data anyway at the time the service runs. I try to batch-load the data instead of using the Find method to save some time. Depending on the process, I use optimistic or pessimistic concurrency control (I always use optimistic, except for parallel-execution scenarios where I need to lock some records with plain SQL statements; this is rare, though). Depending on the code and scenario, the impact can be reduced to almost zero.
For client-side scenarios, you have a few options:
1. Use view models. The models should have a property UpdateStatus (unmodified, inserted, updated, deleted). It is the client's responsibility to set the correct value for this property depending on the user action (insert, update, delete). The server can either query the database for the original values, or the client can send the original values to the server along with the changed rows. The server should attach the original values and use the UpdateStatus of each row to decide how to handle the new values. In this scenario I always use optimistic concurrency. This will issue only the insert, update, and delete statements and no selects, but it might need some clever code to walk the graph and update the entities (depends on your scenario/application). A mapper can help, but it does not handle the CRUD logic.
2. Use a library like breeze.js that hides most of this complexity (as described in 1) and try to fit it to your use case.
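A minimal sketch of the view-model approach described in option 1 (the UpdateStatus enum and row shape are illustrative, not from any library):

```csharp
public enum UpdateStatus { Unmodified, Inserted, Updated, Deleted }

public class UserRowViewModel
{
    public long UserId { get; set; }
    public string Email { get; set; }
    public UpdateStatus Status { get; set; } // set by the client for each row
}

// Server side: attach or translate each row according to its status.
foreach (var row in postedRows)
{
    switch (row.Status)
    {
        case UpdateStatus.Inserted:
            db.Users.Add(new User { Email = row.Email });
            break;
        case UpdateStatus.Updated:
            var user = new User { UserId = row.UserId, Email = row.Email };
            db.Users.Attach(user);
            db.Entry(user).Property(u => u.Email).IsModified = true;
            break;
        case UpdateStatus.Deleted:
            db.Entry(new User { UserId = row.UserId }).State = EntityState.Deleted;
            break;
    }
}
db.SaveChanges(); // issues only the insert/update/delete statements, no selects
```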
Hope it helps
EF Core 7.0 new feature: ExecuteUpdate
Finally! After a long wait, EF Core 7.0 now has a natively supported way to run UPDATE (and also DELETE) statements while also allowing you to use arbitrary LINQ queries (.Where(u => ...)), without having to first retrieve the relevant entities from the database: The new built-in method called ExecuteUpdate — see "What's new in EF Core 7.0?".
ExecuteUpdate is precisely meant for these kinds of scenarios: it can operate on any IQueryable instance and lets you update specific columns on any number of rows, while always issuing a single UPDATE statement behind the scenes, making it as efficient as possible.
Usage:
Imagine you want to update a specific user's email and display name:
dbContext.Users
.Where(u => u.Id == someId)
.ExecuteUpdate(b => b
.SetProperty(u => u.Email, "NewEmail@gmail.com")
.SetProperty(u => u.DisplayName, "New Display Name")
);
As you can see, ExecuteUpdate requires you to make one or more calls to the SetProperty method, to specify which property to update, and also what new value to assign to it.
EF Core will translate this into the following UPDATE statement (the values are actually sent as parameters; they are shown inline here for clarity):
UPDATE [u]
SET [u].[Email] = N'NewEmail@gmail.com',
[u].[DisplayName] = N'New Display Name'
FROM [Users] AS [u]
WHERE [u].[Id] = @someId
Also, ExecuteDelete for deleting rows:
There's also a counterpart to ExecuteUpdate called ExecuteDelete, which, as the name implies, can be used to delete a single or multiple rows at once without first fetching them.
Usage:
// Delete users that haven't been active in 2022:
dbContext.Users
.Where(u => u.LastActiveAt.Year < 2022)
.ExecuteDelete();
Similar to ExecuteUpdate, ExecuteDelete will generate DELETE SQL statements behind the scenes — in this case, the following one:
DELETE FROM [u]
FROM [Users] AS [u]
WHERE DATEPART(year, [u].[LastActiveAt]) < 2022
Other notes:
Keep in mind that both ExecuteUpdate and ExecuteDelete are "terminating", meaning that the update/delete operation will take place as soon as you call the method. You're not supposed to call dbContext.SaveChanges() afterwards.
If you're curious about the SetProperty method, and you're confused as to why ExecuteUpdate doesn't instead receive a member-initialization expression (e.g. .ExecuteUpdate(new User { Email = "..." })), then refer to this comment (and the surrounding ones) on the GitHub issue for this feature.
Furthermore, if you're curious about the rationale behind the naming, and why the prefix Execute was picked (there were also other candidates), refer to this comment, and the preceding (rather long) conversation.
Both methods also have async equivalents, named ExecuteUpdateAsync, and ExecuteDeleteAsync respectively.
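For example (the same queries as above, just awaited; a sketch assuming an async calling context):

```csharp
await dbContext.Users
    .Where(u => u.Id == someId)
    .ExecuteUpdateAsync(b => b.SetProperty(u => u.Email, "NewEmail@gmail.com"));

await dbContext.Users
    .Where(u => u.LastActiveAt.Year < 2022)
    .ExecuteDeleteAsync();
```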
There are some really good answers given already, but I wanted to throw in my two cents. Here is a very simple way to convert a view object into an entity. The simple idea is that only the properties that exist in the view model get written to the entity. This is similar to Anik Islam Abhi's answer, but has null propagation.
public static T MapVMUpdate<T>(object updatedVM, T original)
{
PropertyInfo[] originalProps = original.GetType().GetProperties();
PropertyInfo[] vmProps = updatedVM.GetType().GetProperties();
foreach (PropertyInfo prop in vmProps)
{
PropertyInfo projectProp = originalProps.FirstOrDefault(x => x.Name == prop.Name);
if (projectProp != null)
{
projectProp.SetValue(original, prop.GetValue(updatedVM));
}
}
return original;
}
Pros
Views don't need to have all the properties of the entity.
You never have to update code when you add or remove a property on a view.
Completely generic
Cons
2 hits on the database, one to load the original entity, and one to save it.
To me the simplicity and low maintenance requirements of this approach outweigh the added database call.
Related
I have a prepare function for saving records to the database that is probably a little overkill, but the idea was that I could add on to it at a later date.
public void Prepare<T>(T model) where T : class {
var key = ReflectionHelper.GetAttribute<T, KeyAttribute>();
if(null == key) { return; }
SetContext<T>();
var set = DbManager.Context.Set<T>();
object id = key.GetValue(model);
object def = key.PropertyType.GetDefaultValue();
if(id == def) { set.Add(model); }
}
The current implementation is just checking that the primary key of the record is a default value (typically 0) and then adds it to the dataset. This works for 90% of cases where tables would be built with an auto-incrementing key, however, I'm running into an issue for a table where the key is generated manually for each record, which means that it is set before inserting it into the DB.
This is obviously not ideal with the above function, which fails the check and never actually saves the record to the DB. I know that Entity Framework must have some sort of internal test for whether a record is new, to decide whether it needs an UPDATE or an INSERT, and AFAIK it doesn't rely on the ID being set beforehand, or I'd be running into the same issue with EF's code that I am with the above function. Is there a way I can use the result of that check instead of the way I'm currently doing it?
This is where generic "one size fits all" approaches start to fall down. They work efficiently so long as the implementations are identical; as soon as you have an exceptional case, it means introducing complexity.
In situations where the key cannot reflect whether an entity is new or existing (i.e. 0/null = new), the typical approach is to attempt to load the entity and perform the update if it exists, otherwise insert:
var existingEntity = set.SingleOrDefault(x => x.Id == id);
if (existingEntity != null)
{
Mapper.Map(model, existingEntity);
}
else
{
existingEntity = set.Add(model);
}
The issue that can come up with "upsert" implementations is that the application can start accidentally inserting records that you expect to exist, and that you should probably have handled when they don't (stale data, tampering, etc.). My recommendation is to be explicit, with dedicated Add/Insert vs. Update method call chains.
DbSet.Update can also manage update-or-insert scenarios, but it is less optimal than using EF's change tracker, as it generates an UPDATE statement for all columns whether they changed or not. If you manually update all of the columns, or use AutoMapper's Map method to copy across the values, the change tracker will only generate a statement for the columns that actually changed.
This also gives you control over ensuring that in update scenarios only allowed values can be changed. For instance, if the UI is only expected to change some fields, then in the worst case where full entities are passed back from the client, the other values in the model cannot be tampered with, because your manual copy or AutoMapper mappings only transfer the expected fields.
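To illustrate the difference, a hedged sketch (detachedUser stands for a model posted back from the client; the property names are the ones from the question):

```csharp
// Full-row update: every mapped column appears in the UPDATE statement,
// whether it changed or not.
context.Update(detachedUser);
context.SaveChanges();

// Tracked update: load the row, copy only the allowed fields, and let the
// change tracker emit an UPDATE for just the columns that actually changed.
var existing = context.Users.Single(u => u.UserId == detachedUser.UserId);
existing.Email = detachedUser.Email;       // allowed field
existing.Forename = detachedUser.Forename; // allowed field
// Password deliberately not copied, so a tampered post cannot change it.
context.SaveChanges();
```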
How would you upsert without a select? The upsert receives a collection of DTOs, some of which may not exist in the database yet, so you cannot simply use AttachRange, for example.
One way, theoretically, is to load the existing data partially with a select like dbContext.People.Where(x => x exists in requested collection).Select(x => new Person { Id = x.Id, State = x.State }).ToList(), which loads only part of each entity and not the heavy parts. But if you then update one of the returned items, the change will not be saved, because the projected Person instances are not tracked; and you also cannot say dbContext.Entry<Person>(person).State = Modified, because that throws an error telling you EF Core is already "tracking" the entity.
So what to do?
One way would be to detach all of them from the ChangeTracker and then change the state; that will do the update, but not just on one field, even if you say dbContext.Entry<Person>(person).Property(x => x.State).IsModified = true. It will overwrite every field you haven't read from the database with its default value and make a mess of the database.
The other way would be to read the ChangeTracker entries and update them, but that will also overwrite everything, because it considers everything changed.
So, technically, I don't know how EF Core can produce the following SQL
update People set state = 'Approved' where state != 'Approved'
without updating anything else, or without loading each Person completely.
The reason for not loading the data is that you may want to update around 14,000 records, and those records are really heavy to load because they contain byte[] columns with images stored in them, for example.
BTW, the lack of friendly documentation on EF Core is a disaster compared to Laravel. Recently it cost us the loss of a huge amount of data.
BTW, examples like the code below will NOT work for us, because they update one field of a record that is known to exist in the database, whereas we are trying to upsert a collection in which some of the DTOs may not exist in the database yet.
try
{
using (var db = new dbContext())
{
// Create new stub with correct id and attach to context.
var entity = new myEntity { PageID = pageid };
db.Pages.Attach(entity);
// Now the entity is being tracked by EF, update required properties.
entity.Title = "new title";
entity.Url = "new-url";
// EF knows only to update the properties specified above.
db.SaveChanges();
}
}
catch (DataException)
{
// process exception
}
Edit: The EF Core version used is 3.1.9.
Fantastic, I found the solution. (You also need to take care with your unit tests.)
Entity Framework is actually working fine; this was just a lack of experience, which I'm documenting here in case anyone else runs into the same issue.
Consider that we have a Person entity with a profile picture saved as a blob on it. This means that if you do something like the following for, say, 20k people, the query is slow even when you have the right indexes on your table.
You want to do this query to update these entities based on a request.
var entityIdsToUpdate = request.PeopleDtos.Select(p => p.Id);
var people = dbContext.People.Where(x => entityIdsToUpdate.Contains(x.Id)).ToList();
This is fine and it works perfectly, you will get the People collection and then you can update them based on the given data.
In these kinds of updates you normally will not need to update the images; even if you do, you may need to increase the `Timeout` property on your client, but in our case we did not need to update the images.
So the above code changes to this:
var entityIdsToUpdate = request.PeopleDtos.Select(p => p.Id);
var people = dbContext.People
.Select(p => new Person {
Id = p.Id,
Firstname = p.Firstname,
Lastname = p.Lastname,
//But no images to load
})
.Where(p => entityIdsToUpdate.Contains(p.Id)).ToList();
But then, with this approach, Entity Framework will lose track of your entities.
So you need to attach them, and I will also tell you how NOT to attach them.
This is the correct way for a collection:
dbContext.People.AttachRange(people); //These are the people you've already queried
Now DO NOT do the following. You may be tempted to, because the first approach gives you an error from Entity Framework saying the entity is already being tracked; trust it, because it already is. I will explain after the code.
//Do not do this
foreach(var entry in dbContext.ChangeTracker.Entries())
{
entry.State = EntityState.Detached;
}
//and then on updating a record you may write the following to attach it back
dbContext.Entry(Person).State = EntityState.Modified;
The above code causes Entity Framework to stop following changes on the entities, and with the last line you literally tell it that everything, edited or not, has changed. This will cause you to LOSE your unedited properties, such as the image.
Note: here is what you can do by mistake that messes up even the correct approach.
Since you are not loading the whole entity, you may assume it is still fine to assign values to the unloaded properties, even when the value is no different from the one in the database. This causes Entity Framework to assume that something has changed, and if you set a ModifiedOn on your records, it will change it for no good reason.
And now about testing:
While testing, you may fetch something from the database, create a DTO from it, and pass that DTO, with the same dbContext, to your system under test; the Attach method will then throw an error saying the entity is already being tracked, because of the fetch in your test method. The best approach is to create a new dbContext for each operation and dispose of it when you are done.
BTW, in testing it may also happen that with the same dbContext you update an entity and, after the test, want to fetch it from the database. Note that what comes back is the version "cached" by Entity Framework, and if you originally fetched it only partially, with a Select(x => ...), some fields will be null or have default values.
In that case you should call DbContext.Entry(YOUR_ENTRY).Reload().
This may not be directly related to the question, but if you don't pay attention to the points above, they can cause a disaster.
I have software that has been in the works for a while; today our client decided we should NOT delete any data, but hide it instead. To do this, I plan to add an IsDeleted property to all tables and change all deletion methods to set this property to true instead.
The problem is, I read about 1000 times more often than I delete. For example, I can have a User and read all Comments of this User through the entity relation; I would have to add Where(x => !x.IsDeleted) to every single read like this or, if it is possible, exclude ALL data that has IsDeleted set to true from being read at all.
Is the latter possible in any way? If not, is there an alternative to writing Where(x => !x.IsDeleted) a thousand times?
I've looked at this problem in the past, and rolling your own solution is much more difficult than you'd initially think, mostly because it's really hard to change how Include statements load related entities (EF doesn't really allow you to filter them).
But there is a library that can do it for you.
Filtering the read results
It can be done quite easily using the EntityFramework.DynamicFilters library. (I am not in any way affiliated with the devs, I just really like their library)
The main readme actually has an example that fits your use case:
modelBuilder.Filter("IsDeleted", (ISoftDelete d) => d.IsDeleted, false);
Essentially, it will only return results Where(d => !d.IsDeleted), which is exactly what you'd want. This filter is applied to all direct fetches and include statements, which means that those soft deleted entities are essentially non-existing as far as your domain is concerned.
This does assume that your entities all derive from a shared root which has the delete flag, which is something I'd advise you to do anyway.
Soft-deleting the entities
It's also possible to convert hard deletes into soft deletes in your database context itself, which means that you don't need to rewrite your delete code to instead update the entity (which can be a cumbersome rewrite, and it's always possible that someone forgets it here and there).
You can override the SaveChanges (and SaveChangesAsync) behavior in your context class. This allows you to find all the entities that are going to be deleted, and gives you the option to convert this into an update statement while also raising the IsDeleted flag.
It also ensures that no one can forget to soft delete. Your developers can simply hard delete the entities (when handling the code), and the context will convert it for them.
public class MyContext : DbContext
{
public override int SaveChanges()
{
ConvertHardDeleteToSoftDelete();
return base.SaveChanges();
}
public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
{
ConvertHardDeleteToSoftDelete();
return await base.SaveChangesAsync(cancellationToken);
}
private void ConvertHardDeleteToSoftDelete()
{
var deletedEntries = ChangeTracker
.Entries<ISoftDelete>()
.Where(entry => entry.State == EntityState.Deleted)
.ToList();
foreach (var entry in deletedEntries)
{
entry.State = EntityState.Modified;
entry.Entity.IsDeleted = true;
}
}
}
Combined with the dynamic filter suggestion above, this means that such a soft deleted entity will not appear again in your application, but it will still exist in the database.
I'm making an application in C# and I'm using the EF Code First for my database-creation (for a SQL-Server database).
I have a class "Address" which is used in several other classes.
So several records can relate to the same Address record. Is there an option in EF where I can delete the Address record when it is no longer used anywhere? Unless I'm wrong, the CascadeOnDelete option will remove the record once a certain related record is deleted, even while others still relate to the Address record.
Also, it wouldn't be very useful to create a new Address record for each record that relates to it, because most Address records would be exactly the same (for example, a lot of Address records would just contain the name of the same country or city).
Sorry if it all sounds a bit fuzzy, I would give some code but I don't really know what code that would be.
The short answer is no, EF doesn't natively provide that feature.
The way I see it, there are a couple of things that you can (should) do to get to your desired result:
First, if you're concerned about duplicate data (Country, City, State, Region, etc.) in an address, you should extract those into their own tables and reference them. This means that instead of having a varchar value of "United States" in Address.Country, for example, you would have a foreign key, maybe 1, into a Countries table. This isn't a bad idea, as it also makes standardization easier (so you don't end up with US, U.S., and United States, say).
Second, you can have a business-logic layer on top of your database (let's call it BO), so when you need to save a Person, you call PersonBO.Save(Person), which interacts with your database on your behalf. Then extract your check one level further, to a static mainBO class, maybe. Your PersonBO (and any other classes that use Addresses) can then call mainBO.FindAndDeleteUnusedAddresses(), passing the applicable object (Person person in this case):
public static void FindAndDeleteUnusedAddresses(Person person)
{
using (var db = new Entities())
{
var personCount = db.Persons.Where(p => p.Address == person.Address).Count();
var businessCount = db.Businesses.Where(b => b.Address == person.Address).Count();
// other objects that have addresses here
if (personCount + businessCount == 0) // others included as necessary
{
db.Entry(person.Address).State = EntityState.Deleted;
db.SaveChanges();
}
}
}
This is how EF behaves when CascadeOnDelete is switched off: it will throw an exception on an attempt to delete a row that is used in any relationship. You will need to catch this exception to show a custom message, or do whatever else you need in this case.
If you need to remove an Address the moment the last entity that uses it is deleted, it is probably better to check that the address is not used anywhere else and mark it for deletion manually.
I have found some information regarding this, but not enough for me to understand what the best practice is for this scenario. I have your typical TPH setup with an abstract base class Firm. I have several children, Small Firm, Big Firm, etc., inheriting from Firm. In reality I have different classifications for firms, but I am trying to keep this example simple. In the database, as per TPH, I have a single Firm table with a FirmTypeId column (int) that differentiates between all these types. Everything works great, except that I have a requirement to allow a user to change one type of firm into another. For example, a user might have made a mistake when adding the firm and would like to change it from Big Firm to Small Firm. Because Entity Framework does not allow the discriminator column to be exposed as a property, I don't believe there is a way to change one type into another via EF. Please correct me if I am wrong. The way I see it, I have two options:
Don't use TPH. Simply have a Firm Entity and go back to using .Where(FirmTypeId == something) to differentiate between the types.
Execute SQL directly using context.ExecuteStoreCommand to update the FirmTypeId column of the database.
I've seen a post where people suggest that one of the tenets of OOP is that instances cannot change their type. Although that makes perfect sense to me, I just don't seem to be able to connect the dots. If we were to follow this rule, then the only time to use any kind of inheritance (TPH/TPT) is when one is sure that one type would never be converted into another, so a Small Firm would never become a Big Firm. I see suggestions that composition should be used instead. Even though it doesn't make sense to me (meaning I don't see how a Firm has a Big Firm; to me, a Big Firm is a Firm), I can see how composition can be modeled in EF if the data is in multiple tables. However, in a situation where I have a single table in the database, it seems it's TPH or what I've described in #1 and #2 above.
I've run into this problem in our project, where we have a core DbContext and some "pluggable" modules with their own DbContexts, in which a "module user" inherits the "core (base) user". Hope that's understandable.
We also needed the ability to change (let's call it) a User to a Customer (and, if needed, to other "inherited" Users at the same time), so that the user can use all those modules.
Because of that we tried using TPT inheritance instead of TPH, but TPH would work somehow too.
One way is to use custom stored procedure as suggested by many people...
Another way that came to my mind is to send custom insert/update query to DB. In TPT it would be:
private static bool UserToCustomer(User u, Customer c)
{
try
{
// NOTE: values are concatenated here for brevity; in real code use
// SqlParameter to avoid SQL injection.
string sqlcommand = "INSERT INTO [dbo].[Customers] ([Id], [Email]) VALUES (" + u.Id + ", '" + c.Email + "')";
var sqlconn = new SqlConnection(ConfigurationManager.ConnectionStrings["DBContext"].ConnectionString);
sqlconn.Open();
var sql = new SqlCommand(sqlcommand, sqlconn);
var rows = sql.ExecuteNonQuery();
sqlconn.Close();
return rows == 1;
}
catch (Exception)
{
return false;
}
}
In this scenario Customer inherits User and has only string Email.
When using TPH, the query would only change from INSERT ... VALUES ... to UPDATE ... SET ... WHERE [Id] = .... Don't forget to change the Discriminator column too.
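A hedged sketch of the TPH variant (same helper shape as above, but parameterized to avoid the injection risk of string concatenation; the discriminator value 'Customer' is an assumption and depends on your mapping):

```csharp
private static bool UserToCustomer(User u, Customer c)
{
    // TPH: the row already exists in [Users]; flip the discriminator
    // and set the Customer-specific columns.
    const string sqlcommand =
        "UPDATE [dbo].[Users] SET [Discriminator] = 'Customer', [Email] = @email WHERE [Id] = @id";
    using (var sqlconn = new SqlConnection(
        ConfigurationManager.ConnectionStrings["DBContext"].ConnectionString))
    using (var sql = new SqlCommand(sqlcommand, sqlconn))
    {
        sql.Parameters.AddWithValue("@email", c.Email);
        sql.Parameters.AddWithValue("@id", u.Id);
        sqlconn.Open();
        return sql.ExecuteNonQuery() == 1;
    }
}
```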
After the next call to dbcontext.Users.OfType<Customer>(), there is our original user, "converted" to a customer.
Bottom line: I also tried the solution from another question here, which involved detaching the original entity (user) from the ObjectStateManager, marking the new entity (customer) as modified, and then saving with dbcontext.SaveChanges(). That didn't work for me (with either TPH or TPT), either because we use separate DbContexts per module, or because Entity Framework 6(.1) ignores it.
It can be found here.
Yes, you got it all right. EF inheritance does not support this scenario. The best way to change a Firm type for an existing Firm is to use a stored procedure.
Please take a look at this post for more info:
Changing Inherited Types in Entity Framework
Unless you explicitly want to use the polymorphic functionality of the relational inheritance, then why not look at a splitting strategy?
http://msdn.microsoft.com/en-us/data/ff657841.aspx
EDIT: APOLOGIES, THIS IS AN EF 6.x ANSWER
I'm posting example code for completeness. In this scenario, I have a base Thing class. Then, sub-classes: ActiveThing and DeletedThing
My OData ThingsController has a main GetThings which I intend to expose only ActiveThings, but its GetThing(ThingId) can still return either type of object. The Delete action converts an ActiveThing to a DeletedThing, much in the way requested by the OP and in the manner described in other answers. I'm using inline (parameterized) SQL.
public class myDbModel:DbContext
{
public myDbModel(): base("name=ThingDb"){}
public DbSet<Thing> Things { get; set; } //db table
public DbSet<ActiveThing> ActiveThings { get; set; } // now my ThingsController 'GetThings' pulls from this
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
//TPH (table-per-hierarchy):
modelBuilder.Entity<Ross.Biz.ThingStatusLocation.Thing>()
.Map<Ross.Biz.ThingStatusLocation.ActiveThing>(thg => thg.Requires("Discriminator").HasValue("A"))
.Map<Ross.Biz.ThingStatusLocation.DeletedThing>(thg => thg.Requires("Discriminator").HasValue("D"));
}
}
Here's my updated ThingsController.cs
public class ThingsController : ODataController
{
private myDbModel db = new myDbModel();
/// <summary>
/// Only exposes ActiveThings (not DeletedThings)
/// </summary>
/// <returns></returns>
[EnableQuery]
public IQueryable<Thing> GetThings()
{
return db.ActiveThings;
}
public async Task<IHttpActionResult> Delete([FromODataUri] long key)
{
using (var context = new myDbModel())
{
using (var transaction = context.Database.BeginTransaction())
{
Thing thing = await context.Things.FindAsync(key);
if (thing == null || thing is DeletedThing) // love the simple expressiveness here
{
return NotFound();//was already deleted previously, so return NotFound status code
}
//soft delete: converts ActiveThing to DeletedThing via direct query to DB
context.Database.ExecuteSqlCommand(
"UPDATE Things SET Discriminator='D', DeletedOn=@NowDate WHERE Id=@ThingId",
new SqlParameter("@ThingId", key),
new SqlParameter("@NowDate", DateTimeOffset.Now)
);
context.ThingTransactionHistory.Add(new Ross.Biz.ThingStatusLocation.ThingTransactionHistory
{
ThingId = thing.Id,
TransactionTime = DateTimeOffset.Now,
TransactionCode = "DEL",
UpdateUser = User.Identity.Name,
UpdateValue = "MARKED DELETED"
});
context.SaveChanges();
transaction.Commit();
}
}
return StatusCode(HttpStatusCode.NoContent);
}
}