I have a question about saving a list of objects in ASP.NET MVC.
First, I'm not using Entity Framework, NHibernate, or any similar ORM tool, just plain ADO.NET.
Suppose I have a Product object, and I want to collect all the product data via JavaScript and batch-update the product list in one call.
My question is: how should I differentiate which items are inserted, updated, or deleted?
Strategy 1: have an enum property on the DTO object and also on the JavaScript view model. When I add an item to the view model, I mark it as added, and if I change an item, I mark it as updated. So when the request reaches the action, I know which items are to be inserted or updated.
Pros: it's easy on the server side; I don't need to work out each object's status there.
Cons: if I publish this action as a Web API to be called by third parties, the third-party users would have to differentiate the state of each object themselves.
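For example, the DTO might carry a state flag like this (the type and property names here are just invented for illustration):

// Hypothetical state marker sent by the client with each item.
public enum ObjectState
{
    Unchanged = 0,
    Added = 1,
    Modified = 2,
    Deleted = 3
}

public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public ObjectState State { get; set; }   // set by the JavaScript view model
}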
Strategy 2: differentiate the data on the server side. The client just gives me a list of objects; on the server, I first retrieve the current data from the database, compare it with the submitted data, and then determine which records need to be inserted or updated.
Pros: all the comparison is done on the server side.
Cons: performance issues.
Strategy 3: whatever data is passed from the client, just delete the current data and insert the new data.
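With plain ADO.NET, strategy 3 might look something like this (the table, columns, and the categoryId scope are invented for illustration; the transaction makes the delete and the inserts succeed or fail together):

using System.Collections.Generic;
using System.Data.SqlClient;

public void ReplaceProducts(string connectionString, int categoryId, IEnumerable<ProductDto> submitted)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            // wipe the current rows for this list
            using (var delete = new SqlCommand(
                "DELETE FROM Products WHERE CategoryId = @categoryId", conn, tx))
            {
                delete.Parameters.AddWithValue("@categoryId", categoryId);
                delete.ExecuteNonQuery();
            }

            // re-insert everything the client submitted
            foreach (var p in submitted)
            {
                using (var insert = new SqlCommand(
                    "INSERT INTO Products (CategoryId, Name, Price) VALUES (@categoryId, @name, @price)",
                    conn, tx))
                {
                    insert.Parameters.AddWithValue("@categoryId", categoryId);
                    insert.Parameters.AddWithValue("@name", p.Name);
                    insert.Parameters.AddWithValue("@price", p.Price);
                    insert.ExecuteNonQuery();
                }
            }

            tx.Commit(); // delete + inserts succeed or fail together
        }
    }
}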
I hope someone can give me advice on the best practice for handling this situation. I think it's quite common, but I can't find a good solution.
I've seen option 1 used, where added/deleted/modified items are maintained in JavaScript arrays and posted back to the server. But for some reason I didn't like it, maybe because of having to write client-side code to maintain state.
So I used the second option, and thanks to LINQ the task was easier. Assuming the list items have some unique id, below is pseudo-code. Note: newly added items should get unique random ids, otherwise there is a chance of treating them as already-existing items. In my case the id is a GUID, so there was no chance of a collision.
var submittedIds = vmList.Select(a => a.Id).ToList();
var dbIds = dbList.Select(d => d.Id).ToList();

// Added items: ids the client submitted that are not in the database yet
var newIds = submittedIds.Except(dbIds).ToList();
var addedItems = vmList.Where(a => newIds.Contains(a.Id)).ToList();

// Deleted items: ids in the database that the client no longer submits
var deletedIds = dbIds.Except(submittedIds).ToList();

// Modified items: ids present on both sides
// (if the values didn't change, the update statement does no harm here)
var modifiedIds = dbIds.Intersect(submittedIds).ToList();
var modifiedItems = vmList.Where(a => modifiedIds.Contains(a.Id)).ToList();
This approach gives reasonable performance unless you are dealing with huge lists.
I think the third option is not good. For example, if you plan to implement audit features on your tables, it will record the wrong thing: when a single new record is inserted, the audit trail will show all records as deleted and then all of them re-inserted (plus the new one), which is wrong because only one record was actually inserted.
The 3rd strategy is suitable for simple situations, e.g. when you want to update a purchase order's items; an order will not have too many OrderLineItems. However, you have to take care of concurrency issues.
I think your first strategy is the best fit in the general case. It's also easy to implement. When you publish a service to a 3rd party, it's usual that clients must follow the service's definition and requirements.
Update
For the 1st strategy: if you don't want your clients to have to specify a status for their data, then do it for them. You can split the SaveOrder service into smaller services: CreateOrder, UpdateOrder, DeleteOrder.
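A rough sketch of that split as Web API actions (assuming ASP.NET Web API 2; the OrderDto type and the _repository abstraction are placeholders):

using System.Web.Http;

public class OrdersController : ApiController
{
    private readonly IOrderRepository _repository;   // placeholder abstraction

    public OrdersController(IOrderRepository repository)
    {
        _repository = repository;
    }

    // Each operation gets its own endpoint, so the client never has to
    // mark items as added/updated/deleted; the route conveys the intent.
    [HttpPost]
    public IHttpActionResult CreateOrder(OrderDto order)
    {
        var id = _repository.Insert(order);
        return Ok(id);
    }

    [HttpPut]
    public IHttpActionResult UpdateOrder(OrderDto order)
    {
        _repository.Update(order);
        return Ok();
    }

    [HttpDelete]
    public IHttpActionResult DeleteOrder(int id)
    {
        _repository.Delete(id);
        return Ok();
    }
}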
In my application, all objects in the context have a property called ObsoleteFlag, which basically indicates whether the object should still be used on the frontend. It's a sort of "soft-delete" flag that avoids actually having to delete the data.
Now I want to prevent EF from returning any object whose ObsoleteFlag is set to true (1).
If, for example, I retrieve object X, the navigational list property Y contains all the related objects of type Y, no matter what their ObsoleteFlag is set to.
Is there some general way of preventing EF from doing this? I don't want to check the ObsoleteFlag property everywhere I access the context, and for every navigational property that may be loaded, too.
Thanks and sorry for my broken English.
Two different approaches:
In your repository layer, have a GetAllWhatever() method that returns IQueryable<Whatever> and applies Where(x => !x.Obsolete), and use it whenever you retrieve objects of this type.
Create a view: Create View ActiveWhatever As Select * From Whatever Where Obsolete = 0, and bind to that rather than the table.
The first is essentially checking the flag every time, but doing so in one place, so you don't have to keep thinking about it.
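A minimal sketch of the first approach; the entity, context, and repository names here are placeholders:

using System.Linq;

public class Whatever
{
    public int Id { get; set; }
    public bool Obsolete { get; set; }
}

public class WhateverRepository
{
    private readonly MyDbContext _context;   // placeholder EF context

    public WhateverRepository(MyDbContext context)
    {
        _context = context;
    }

    // The one place the soft-delete filter lives; every caller inherits it.
    public IQueryable<Whatever> GetAllWhatever()
    {
        return _context.Whatevers.Where(x => !x.Obsolete);
    }
}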
The second is much the same, but the work is pushed to the database instead of the .NET code. If you are going to modify the entities or add new ones, you will have to make it an updatable view, but just how that is done depends on the database in question (e.g. you can do it with triggers in SQL Server, and with triggers or rules in PostgreSQL).
The second can also include having a rule or trigger for DELETE that sets your obsolete property instead of deleting, so that a normal delete as far as Entity Framework is concerned becomes one of your soft-deletes as far as the database is concerned.
I'd go for that approach unless you have a reason to object to a view existing just to help the application's implementation (that is, if you're heavily into the database being "pure": concerned with the data rather than its use). But then, if it's handy for one application it's likely handy for more, given the very meaning of this "obsolete" flag.
The system I am working with at the moment is C# and Oracle; however, the problem I am having is system-agnostic (it could happen with Java and MySQL or any other front-end and back-end combination):
I have a TransactionDetail object that can have 9 statuses:
Open,
Complete,
Cancelled,
No Quote,
Quoted,
Instructed,
Declined,
Refunded,
Removed
From my experience, when one has to deal with statuses in front-end code, one should do everything possible to avoid the object's status having a setter. This is because a status is an inherent quality and has to be determined at the moment it is needed; in other words, a status should always be determined by a method or a get-only property, and never set.
So statuses are retrieved with mechanisms like this (this is only a fragment of the code, but it should give you an indication of how it works):
public TransactionStatus GetTransactionStatus()
{
    // renamed from TransactionStatus() so the enum type name stays unambiguous
    if (db.DeclinedTransactions.Any(o => o.TransactionId == this.TransactionId))
        return TransactionStatus.Declined;

    // ...similar checks against the other tables decide the remaining statuses...
}
MI is asking for these transaction statuses in a SQL view that would also contain all the data related to the transaction.
If an object's status can be determined from the object's own data alone, computed columns can solve this problem in the database. But what about objects like TransactionDetail that span multiple tables? There is no computed-column mechanism that allows 'peeking' into other tables.
The only solution I can think of is adding a SQL function that determines the state, and then creating a SQL view that combines the function with the data from the table. What I don't like about this approach is that it duplicates the logic in the code and in the database.
How should one design a system around an object state that requires information from more than one table to determine, in a way that does not require duplicating the mechanism in the code and in the back-end?
If this were a project I was working on, I would not be looking to create a View to calculate this data.
I would be looking at my application business logic.
Whilst a fully normalised database makes perfect sense to the DBAs, there are cases where application performance and scalability can benefit greatly from a little de-normalization.
If you have a reliable framework of business logic (i.e. well defined business objects, good encapsulation, reliable unit tests) then I would personally be looking to add this to the business objects.
This then allows you to define your status behaviour in code and update an explicit Status property. For example, if a change is made to a business object that puts it into a different TransactionStatus, you explicitly make that status change on the business object and persist the entire change to your database.
The usual response to this kind of design suggestion is that you then carry the burden of keeping the two things in sync (the explicit status vs. the state of the object). The answer to that is making sure there is only one piece of logic that carries out these changes, and that your business logic is water-tight, as described before.
An example:
Invoice contains one or more InvoiceItem
InvoiceItem has a value.
Invoice, when displayed, needs an invoice value total
The usual way this is done is to use SUM() to calculate the invoice total "on the fly" in the database to populate an Invoice.Total value.
But if my business logic is well defined (perhaps I add an InvoiceItem to an Invoice object in code, and the Add logic also takes the value from the InvoiceItem and adds it to the Invoice.Total value), then when I commit the changes, I can also commit that Invoice.Total value.
When I want to display the total, I have a single value, rather than having to aggregate in the database.
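A minimal sketch of that example; the class shapes are my own invention:

using System.Collections.Generic;

public class InvoiceItem
{
    public decimal Value { get; set; }
}

public class Invoice
{
    private readonly List<InvoiceItem> _items = new List<InvoiceItem>();

    // Maintained by Add, so displaying the total never needs a SUM() query.
    public decimal Total { get; private set; }

    public void Add(InvoiceItem item)
    {
        _items.Add(item);
        Total += item.Value;   // the single place where the total changes
    }
}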
In my Windows Phone 8 C#/XAML .NET 4.5 Application I need to work with a local SQLite database.
Since I'm not the only one working on the project, the database was created by someone else, and I don't feel comfortable editing and/or searching through his code.
I need to find a way to update an existing element in the database, but I don't see any obvious way to do it. I googled and found an approach; I'm just not sure that I'm doing it correctly (and that it will work).
Is this a correct way to update an item in the database using linq-2-sql?
example:
MyDoctorModel doctor... // existing doctor: contains properties like Name, Phone,
// and DoctorId (which corresponds to the doctor's Id in the database)
dbContext // existing database context object, custom made; contains the table objects plus additional methods
// (I'm a LINQ to SQL and LINQ rookie, so I'm not sure if this is normal or not)
// dbContext.DOCTOR is the table object in the database; it contains
// columns like Id (integer), NAME (string), etc.
var dbDoctor = dbContext.DOCTOR.Where(e => e.Id == doctor.DoctorId).First();
dbDoctor.NAME = doctor.Name;
// ...etc, updating the stored values with the current ones
dbContext.SubmitChanges(); // and submitting the changes
Is this the right way to update an existing item in table?
P.S.: I know it's probably fairly easy and this is a basic question, but I could not find an explanation that was satisfactory for me (simple enough for my thick head to understand).
Yes, it is the correct way: you first find the object you want to update through your db context, then change some of its properties, and then call SubmitChanges on the same context.
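One small defensive variation, in case no row matches the id (same hypothetical dbContext as in the question):

// FirstOrDefault returns null instead of throwing when nothing matches
var dbDoctor = dbContext.DOCTOR.FirstOrDefault(e => e.Id == doctor.DoctorId);
if (dbDoctor != null)
{
    dbDoctor.NAME = doctor.Name;
    // ...copy the remaining properties...
    dbContext.SubmitChanges();
}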
I have a somewhat complex permission system that uses six database tables in total and in order to speed it up, I would like to cache these tables in memory instead of having to hit the database every page load.
However, I'll need to update this cache when a new user is added or a permission is changed. I'm not sure how to go about building this in-memory cache, or how to update it safely without causing problems when it's accessed at the same time it's being updated.
Does anyone have an example of how to do something like this or can point me in the right direction for research?
Without knowing more about the structure of the application, there are lots of possible options. One such option might be to abstract the data access behind a repository interface and handle in-memory caching within that repository. Something as simple as a private IEnumerable<T> on the repository object.
So, for example, say you have a User object which contains information about the user (name, permissions, etc.). You'd have a UserRepository with some basic fetch/save methods on it. Inside that repository, you could maintain a private static HashSet<User> which holds User objects which have already been retrieved from the database.
When you fetch a User from the repository, it first checks the HashSet for an object to return; if it doesn't find one, it gets it from the database, adds it to the HashSet, and then returns it. When you save a User, it updates both the HashSet and the database.
Again, without knowing the specifics of the codebase and overall design, it's hard to give a more specific answer. This should be a generic enough solution to work in any application, though.
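A rough sketch of that shape; the type names, the locking, and the placeholder data-access methods are all mine, not a prescription:

using System.Collections.Generic;
using System.Linq;

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    // permissions etc. would live here too
}

public class UserRepository
{
    // Shared across requests; guarded by a lock because HashSet is not thread-safe.
    private static readonly HashSet<User> _cache = new HashSet<User>();
    private static readonly object _sync = new object();

    public User GetById(int id)
    {
        lock (_sync)
        {
            var cached = _cache.FirstOrDefault(u => u.Id == id);
            if (cached != null)
                return cached;
        }

        var user = LoadFromDatabase(id);   // placeholder for real data access
        lock (_sync)
        {
            _cache.Add(user);
        }
        return user;
    }

    public void Save(User user)
    {
        SaveToDatabase(user);              // placeholder for real data access
        lock (_sync)
        {
            _cache.RemoveWhere(u => u.Id == user.Id);  // replace any stale copy
            _cache.Add(user);
        }
    }

    private User LoadFromDatabase(int id) { /* real query goes here */ return new User { Id = id }; }
    private void SaveToDatabase(User user) { /* real update goes here */ }
}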
I would cache items as they are used: in your data layer, when fetching data, first check whether it is available in the cache; otherwise go to the database and cache the result afterwards.
public AccessModel GetAccess(string accessCode)
{
    // cache.Get/Set stand in for whatever cache API you use
    var cached = cache.Get<AccessModel>(accessCode);
    if (cached != null)
        return cached;

    var model = GetFromDatabase(accessCode);
    cache.Set(accessCode, model);  // cache the result for next time
    return model;
}
Then I would think about the cache invalidation strategy. You can go one of two ways:
One is to set the expiration to, say, one hour, so you hit the database at most once an hour.
The other is to invalidate the cache whenever you update the data. That is certainly the better option, but it is a bit more complex.
Hope it helps.
Note: you can use either the ASP.NET Cache or another solution like memcached, depending on your infrastructure.
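For the one-hour expiry route, the built-in ASP.NET cache can do it like this (the key name is arbitrary and permissionData is whatever object you loaded from the database):

using System;
using System.Web;
using System.Web.Caching;

// Store the permission data for one hour; after that, the next
// request misses the cache and reloads from the database.
HttpRuntime.Cache.Insert(
    "permissions",                   // arbitrary cache key
    permissionData,                  // the object being cached
    null,                            // no cache dependency
    DateTime.UtcNow.AddHours(1),     // absolute expiration: reload hourly
    Cache.NoSlidingExpiration);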
Is it hitting the database every page load that's the problem or is it joining six tables that's the problem?
If it's just that the join is slow, why not create a database table that summarizes the data in a way that is much easier and faster to query?
This way, you just have to update your summary table each time you add a user or update a permission. If you group all of this into a single transaction, you shouldn't have issues with out-of-sync data.
You can take advantage of ASP.NET caching and the SqlCacheDependency class. There is an article about it on MSDN.
You can use the Cache object built into ASP.NET. Here is an article that explains how.
I can suggest caching such data in the Application state object. For thread-safe usage, consider using the lock statement. Your code would look something like this:
// Lock on a dedicated static object: HttpContext.Current is a different
// instance per request, so locking on it would not synchronize anything.
private static readonly object _syncRoot = new object();

public void ClearTableCache(string tableName)
{
    lock (_syncRoot)
    {
        System.Web.HttpContext.Current.Application[tableName] = null;
    }
}

public SomeDataType GetTableData(string tableName)
{
    lock (_syncRoot)
    {
        if (System.Web.HttpContext.Current.Application[tableName] == null)
        {
            // get data from the DB, then put it into application state
            System.Web.HttpContext.Current.Application[tableName] = dataFromDb;
            return dataFromDb;
        }
        return (SomeDataType)System.Web.HttpContext.Current.Application[tableName];
    }
}
I've got a list of entity objects of type Individual for an employee survey app; an Individual represents an employee or an outside rater. The Individual has the parent objects Team and Team.Organization, and the child objects Surveys and Surveys.Responses. Responses, in turn, are related to Questions.
So usually, when I want the complete information about an Individual, I need to fetch Individuals.Include("Team.Organization").Include("Surveys.Responses.Question").
That's obviously a lot of includes, and has a performance cost, so when I fetch a list of Individuals and don't need their related objects, I don't bother with the Includes... but then the user wants to manipulate an Individual. So here's the challenge. I seem to have 3 options, all bad:
1) Modify the query that downloads the big list of Individuals to use .Include("Team.Organization").Include("Surveys.Responses.Question"). This gives it bad performance.
2) Individuals.Load(), TeamReference.Load(), OrganizationReference.Load(), Surveys.Load() (and iterate through the list of Surveys and load their Responses and the Responses' Questions).
3) When a user wishes to manipulate an Individual, I drop that reference and fetch a whole brand new Individual from the database by its primary key. This works, but is ugly because it means I have two different kinds of Individuals, and I can never use one in place of the other. It also creates ugly problems if I'm iterating across a list repeatedly, as it's tricky to avoid loading and dropping the fully-included Individuals repeatedly, which is wasteful.
Is there any way to say
myIndividual.Include("Team.Organization").Include("Surveys.Responses.Question");
with an existing Individual entity, instead of taking approach (3)?
That is, is there any middle-ground between "fetch everything from the database up-front" and "late-load one relationship at a time"?
A possible solution that I'm hoping to get insight about:
So there's no way to do a manually-implemented explicit load on a navigation property? No way to have the system interpret
Individual.Surveys = from survey in MyEntities.Surveys.Include("Responses.Question")
                     where survey.IndividualID == Individual.ID
                     select survey; // Individual.Surveys is the navigation collection property holding the Individual's Surveys
Individual.Team = from team in MyEntities.Teams.Include("Organization")
                  where team.ID == Individual.TeamID
                  select team;
as just loading the Individual's related objects from the database, instead of an assignment/update operation? If this amounts to no actual change to the Surveys and Team properties, can I just do that?
I want a way to manually implement a lazy or explicit load that isn't done the dumb (one relation at a time) way. Really, the Teams and Organizations aren't the problem, but the Surveys.Responses.Questions are a massive buttload of database hits.
I'm using 3.5, but for the sake of others (and for when my project finally migrates to 4) responses relevant to 4 would be appreciated. In that context, similar customization of lazy loading would be good to hear about too.
edit: Switched the alphabet soup to my problem domain, edited for clarity.
Thanks
The Include statement is designed to do exactly what you're hoping for. Using multiple Includes does indeed eager-load the related entities.
Here is a good blog post about it:
http://thedatafarm.com/blog/data-access/the-cost-of-eager-loading-in-entity-framework/
In addition, you can use strongly typed "Includes" using some nifty ObjectContext extension methods. Here is an example:
http://blogs.microsoft.co.il/blogs/shimmy/archive/2010/08/06/say-goodbye-to-the-hard-coded-objectquery-t-include-calls.aspx
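To make the first point concrete, here is roughly what eager loading with multiple string-based Includes looks like, reusing the context and paths from the question:

// One query that eager-loads both relationship paths up front,
// instead of lazy-loading one relation at a time.
var individual = MyEntities.Individuals
    .Include("Team.Organization")
    .Include("Surveys.Responses.Question")
    .Where(i => i.ID == individualId)
    .First();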