I have two objects connected such that ObjectA contains an ICollection of ObjectB. I would like to be able to determine the index of an ObjectB inside the ICollection<ObjectB> that is stored in ObjectA.
One of the solutions was to convert the ICollection to a list and then use the built-in IndexOf. However, when multiple threads access the ICollection, I often get the same IndexOf value for multiple ObjectBs.
Question: Is there any way to have a field (of type int) that stores the index of ObjectB inside the ICollection? If not, is there any way to ensure that IndexOf gives the right index when multiple threads access it (i.e. regardless of the thread)?
Possible solutions: I've tried using a new context for each lookup, as well as GetDatabaseValues() and Reload(). This has worked better (especially in Debug mode), but with debug mode turned off, the same index value is still given to more than one ObjectB.
Edit
I tried to add an OrderBy statement, but it seems like none of the approaches work.
var objectB = new ObjectB();
using (var context = new ContextDb())
{
    var objectA = context.ObjectAs.Single(x => x.Id == 1);
    objectA.objectBs.Add(objectB);
    context.SaveChanges();
    context.Entry(objectB).Reload();
    context.Entry(objectA).Reload();
    var list = objectA.objectBs.Select(x => x.Id).OrderBy(x => x).ToList(); // order by primary key
    sb.AppendLine(string.Join(",", list.ToArray())); // for testing
    objectB.LocalId = list.IndexOf(objectB.Id) + 1; // the "local id"
    context.SaveChanges();
}
The result is quite strange, although I can see a pattern. Note that the code above is inside a for loop that runs a certain number of times. The first iteration (the first line in the string builder) gives the following:
2932 2932,2933 2932,2933,2934 2932,2933,2934,2935,2936 2932,2933,2934,2935,2936,2937,2938 2932,2933,2934,2935,2936,2937,2938,2939,2940 2932,2933,2934,2935,2936,2937,2938,2939,2940,2941,2942
The second line:
2932,2933,2934,2935,2936,2937,2938,2939,2940,2941,2942,2943,2944,2945 2932,2933,2934,2935,2936,2937,2938,2939,2940,2941,2942,2943,2944,2945,2946,2947
The last line:
2932,2933,2934,2935,2936,2937,2938,2939,2940,2941,2942,2943,2944,2945,2946,2947,2948,2949,2950,2951,2952,2953,2954,2955,2956,2957,2958,2959,2960,2961,2962,2963,2964,2965,2966,2967,2968,2969,2970
Does anyone know if there is a built-in way in Entity Framework to avoid these duplicates?
Honestly, I'm still not entirely clear on what you're looking for. However, there are a couple of things I can say based on your expanded question. First, when you pull anything from a database, there's no inherent order. By default it will generally be ordered by primary key, if possible, or more precisely by insert order; however, that is risky to rely on if an exact order is necessary. If you need a truly exact and repeatable order, then you need to issue an ORDER BY clause with the order you want.
Especially if you're relying on navigation properties filled by Entity Framework through eager or lazy loading of a foreign key, you can't rely on the implicit order at all. Again, if order is important, then you need to use OrderBy or OrderByDescending with some property on the entity to make sure that you get a true apples-to-apples order comparison.
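For illustration, here is a minimal sketch of forcing a deterministic order at query time; the DbSet name ObjectBs, the foreign key property ObjectAId and the variable targetId are assumptions for the example, not taken from the original code:

using (var context = new ContextDb())
{
    // Order explicitly by the primary key rather than relying on whatever
    // order EF happens to materialise the navigation collection in.
    var orderedBs = context.ObjectBs                 // assumed DbSet name
        .Where(b => b.ObjectAId == 1)                // assumed FK property
        .OrderBy(b => b.Id)                          // deterministic order
        .ToList();

    int position = orderedBs.FindIndex(b => b.Id == targetId); // -1 if absent
}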
Use an IDictionary, in which you can store the index as well as the object.
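As a rough sketch of that suggestion (member names are illustrative; this assumes ObjectB exposes an Id and that objectB is an existing instance):

// Keep an explicit index alongside each ObjectB instead of deriving it from
// collection order. Requires System.Collections.Generic and System.Linq.
IDictionary<int, ObjectB> indexed = new Dictionary<int, ObjectB>();

int nextIndex = indexed.Count == 0 ? 1 : indexed.Keys.Max() + 1;
indexed[nextIndex] = objectB;                        // store under its index

// Reverse lookup: find the index assigned to a particular ObjectB.
int indexOfB = indexed.Single(kv => kv.Value.Id == objectB.Id).Key;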
I have the following method updating an entity. The only gripe I had was that when a non-existing ID was provided, I got a harsh exception.
public bool Update(Thing thing)
{
Context.Things.Update(thing);
int result = Context.SaveChanges();
return result == 1;
}
So I added a check to control the exception thrown (plus some nice logging and other facilitation). Eventually, I plan to skip throwing entirely.
public bool UpdateWithCheck(Thing thing)
{
Thing target = Context.Things.SingleOrDefault(a => a.Id == thing.Id);
if (target == null)
throw new CustomException($"No thing with ID {thing.Id}.");
Context.Things.Update(thing);
int result = Context.SaveChanges();
return result == 1;
}
No, this doesn't work, because the entity is already being tracked. I have several options to handle that.
Change to Context.Things.Where(...).AsNoTracking().
Explicitly set the updated fields on target and save it.
Horse around with entity states and tamper with the change tracker.
Remove the existing entity and add the new one.
I can't decide which is the best practice. Googling gave me the default examples that do not contain the check for pre-existing status in the same operation.
The reason for the exception is that by loading the entity from the Context to check whether it exists, you now have a tracked reference. When you go to update the detached reference, EF will complain that an instance is already tracked.
The simplest work-around would be:
public bool UpdateWithCheck(Thing thing)
{
bool doesExist = Context.Things.Any(a => a.Id == thing.Id);
if (!doesExist)
throw new CustomException($"No thing with ID {thing.Id}.");
Context.Things.Update(thing);
int result = Context.SaveChanges();
return result == 1;
}
However, there are two problems with this approach. Firstly, because we don't know the scope of the DbContext instance and can't guarantee the order of method calls, it is possible that at some point that DbContext instance could have loaded and tracked that instance of the thing. This can manifest as seemingly intermittent errors. The proper way to guard against that would be something like:
public bool UpdateWithCheck(Thing thing)
{
bool doesExist = Context.Things.Any(a => a.Id == thing.Id);
if (!doesExist)
throw new CustomException($"No thing with ID {thing.Id}.");
Thing existing = Context.Things.Local.SingleOrDefault(a => a.Id == thing.Id);
if (existing != null)
Context.Entry(existing).State = EntityState.Detached;
Context.Things.Update(thing);
int result = Context.SaveChanges();
return result == 1;
}
This checks the local tracking cache for any loaded instances and, if found, detaches them. The risk here is that any modifications that haven't been persisted in those tracked references will be discarded, and any references floating around that you would have assumed were attached will now be detached.
The second significant issue is with using Update(). When you have detached entities being passed around, there is a risk that data you don't intend to be updated could be updated. Update will replace all columns, when typically a client might only be expected to update a subset of them. EF can be configured to check row versions or timestamps on entities against the database before updating, when your database is set up to support them (such as snapshot isolation), which can help guard against stale overwrites but still allows unexpected tampering.
As you've already figured out, the better approach is to avoid passing detached entities around, and instead use dedicated DTOs. This avoids potential confusion about what objects represent view/consumer state vs. data state. By explicitly copying the values across from the DTO to the entity, or configuring a mapper to copy supported values, you also protect your system from unexpected tampering and potential stale overwrites. One consideration with this approach is that you should guard updates to avoid unconditionally overwriting data with potentially stale data by ensuring your Entity and DTO have a RowVersion/Timestamp to compare. Before copying from DTO to the freshly loaded Entity, compare the version, if it matches then nothing has changed in the data row since you fetched and composed your DTO. If it has changed, that means someone else has updated the underlying data row since the DTO was read, so your modifications are against stale data. From there, take an appropriate action such as discard changes, overwrite changes, merge the changes, log the fact, etc.
Just alter the properties of target and call SaveChanges(); remove the Update call. I'd say the typical use case these days is for the input thing not actually to be a Thing but a ThingViewModel, ThingDto or some other variation on the theme of "an object that carries enough data to identify and update a Thing but isn't actually a DB entity". To that extent, if the notion of updating properties of Thing from ThingViewModel by hand bores you, you can look at a mapper (AutoMapper is probably the best known, but there are many others) to do the copying for you, or even to set you up with a new Thing if you decide to turn this method into an upsert.
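A minimal sketch of that load-copy-save pattern, assuming a hypothetical ThingDto carrying Id, Name and a RowVersion for the concurrency check mentioned above:

public bool UpdateWithCheck(ThingDto dto)
{
    Thing target = Context.Things.SingleOrDefault(a => a.Id == dto.Id);
    if (target == null)
        throw new CustomException($"No thing with ID {dto.Id}.");

    // Optional concurrency guard: bail out if the row changed since the DTO was read.
    if (!target.RowVersion.SequenceEqual(dto.RowVersion))
        throw new CustomException($"Thing {dto.Id} was modified by someone else.");

    // Copy only the fields the client is allowed to change; no Update() call needed,
    // change tracking picks up the modified properties.
    target.Name = dto.Name;

    return Context.SaveChanges() == 1;
}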
I'm running into some speed issues in my project, and it seems like the primary cause is calls to the database using Entity Framework. Every time I call the database, it is always done as
database.Include(...).Where(...)
and I'm wondering if that is different than
database.Where(...).Include(...)?
My thinking is that the first way includes everything for all the elements in the target table and then filters down to the ones I want, while the second one filters down to the ones I want first and then only includes everything for those. I don't fully understand Entity Framework, so is my thinking correct?
Entity Framework delays its querying as long as it can, up until the point where your code starts working on the data. To illustrate:
var query = db.People
.Include(p => p.Cars)
.Where(p => p.Employer.Name == "Globodyne")
.Select(p => p.Employer.Founder.Cars);
With all these chained calls, EF has not yet called the database. Instead, it has kept track of what you're trying to fetch, and it knows what query to run if you start working with the data. If you never do anything else with query after this point, then you will never hit the database.
However, if you do any of the following:
var result = query.ToList();
var firstCar = query.FirstOrDefault();
var founderHasCars = query.Any();
Now EF is forced to look at the database, because it cannot answer your question unless it actually fetches the data. Only at this point, and not before, does EF actually hit the database.
For reference, this trigger to fetch the data is often referred to as "enumerating the collection", i.e. turning a query into an actual result set.
By deferring the execution of that query for as long as possible, EF is able to wait and see if you're going to filter/order/paginate/transform/... the result set, which could lead to EF needing to return less data than when it executes every command immediately.
This also means that when you call Include, you're not actually hitting the database, so you're not going to load data for items that will later be filtered out by your Where clause, provided you haven't already enumerated the collection.
Take these two examples:
var list1 = db.People
.Include(p => p.Cars)
.ToList() // <= enumeration
.Where(p => p.Name == "Bob");
var list2 = db.People
.Include(p => p.Cars)
.Where(p => p.Name == "Bob")
.ToList(); // <= enumeration
These lists will eventually yield the same result. However, the first list will fetch data before you filter it because you called ToList before Where. This means you're going to be loading all people and their cars in memory, only to then filter that list in memory.
The second list, however, will only enumerate the collection when it already knows about the Where clause, and therefore EF will only load people named Bob and their cars into memory. The filtering will happen on the database before it gets sent back to your runtime.
You did not show enough code for me to verify whether you are prematurely enumerating the collection. I hope this answer helps you in determining whether this is the cause of your performance issues.
database.Include(...).Where(...) and I'm wondering if that is different than database.Where(...).Include(...)?
Assuming this code is verbatim (apart from the missing DbSet) and there is nothing happening in between the Include and the Where, the order does not change the execution, and therefore it is not the source of your performance issue.
I generally advise putting your Include statements before anything else (i.e. right after db.MyTable) as a matter of readability. The other operations depend on the specific query you're trying to construct.
Most of the time the order of clauses will not make any difference.
An Include statement tells SQL to JOIN one table with another,
while Where results in... yes, a SQL WHERE.
When you write something like database.Include(...).Where(...) you are building an IQueryable object that is translated to SQL only when you access it, e.g. with .ToList() or .FirstOrDefault(), and those queries are already optimized.
So if you still have performance issues, you should use a profiler to look for bottlenecks and maybe consider using stored procedures (which can be integrated with EF).
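If you want to see exactly what SQL a given IQueryable will produce, you can inspect it before enumerating. Note the caveats: ToQueryString exists only in EF Core 5.0 and later, while in EF6 calling ToString() on the query gives roughly the same information; the db.People model below is simply reused from the earlier answer as an example.

var query = db.People
    .Include(p => p.Cars)
    .Where(p => p.Name == "Bob");

// EF Core 5.0 and later:
Console.WriteLine(query.ToQueryString());

// EF6 equivalent (roughly):
// Console.WriteLine(query.ToString());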
I have an entity set employee_table. I'm getting the data through an Excel sheet which I have loaded into memory, and the user will click Save to save the changes in the DB. That all works for inserting records the first time, no issue there.
But how can I update only the changes that were made? Say I have 10 rows and 5 columns, and row 7 was modified, and within that row only column 3 was modified; I need to update only that change and keep the existing values of the other columns.
I can do it by checking if (myexistingItem.Name != dbItem.Name) { //update }, but that is very tedious and not efficient, and I'm sure there is a better way to handle it.
Here is what I have so far.
var excelData = SessionWrapper.GetSession_Model().DataModel.OrderBy(x => x.LocalName).ToList();
var dbData = context.employee_master.OrderBy(x => x.localname).ToList();
employee_master dbEntity = new employee_master();

if (dbData.Count > 0)
{
    //update
    foreach (var dbItem in dbData)
    {
        foreach (var xlData in excelData)
        {
            if (dbItem.customer == xlData.Customer)
            {
                dbEntity.customer = xlData.Customer;
            }
            //...check the rest of the props....
            db.Entry(dbEntity).State = EntityState.Modified;
            db.employee_master.Add(dbEntity);
        }
    }
    //save
    db.SaveChanges();
}
else
{
    //insert
}
else
{
//insert
}
You can make this checking more generic with reflection.
Using this answer to get the value by property name.
public static object GetPropValue(object src, string propName)
{
return src.GetType().GetProperty(propName).GetValue(src, null);
}
Using this answer to set the value by property name.
public static void SetPropertyValue(object obj, string propName, object value)
{
obj.GetType().GetProperty(propName).SetValue(obj, value, null);
}
And this answer to list all properties
public static void CopyIfDifferent(Object target, Object source)
{
    foreach (var prop in target.GetType().GetProperties())
    {
        var targetValue = GetPropValue(target, prop.Name);
        var sourceValue = GetPropValue(source, prop.Name);
        // object.Equals handles nulls on either side without throwing
        if (!Equals(targetValue, sourceValue))
        {
            SetPropertyValue(target, prop.Name, sourceValue);
        }
    }
}
Note: If you need to exclude some properties, you can implement that very easily by passing a list of property names to the method and checking whether each one should be excluded.
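As a rough idea of how the helper could slot into the original loop (this assumes the Excel-model type and employee_master share property names, which the original code suggests may not quite be the case, e.g. customer vs. Customer, so adjust accordingly):

foreach (var dbItem in dbData)
{
    // Match rows on customer, as the question's code does.
    var xlData = excelData.FirstOrDefault(x => x.Customer == dbItem.customer);
    if (xlData != null)
    {
        CopyIfDifferent(dbItem, xlData);  // copies only the properties whose values differ
    }
}
db.SaveChanges(); // EF change tracking persists only the entities that actually changed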
Update:
I am updating this answer to provide a little more context as to why I suggested not going with a hand-rolled reflection-based solution for now; I also want to clarify that there is nothing wrong with such a solution per se, once you have identified that it fits the bill.
First of all, I assume from the code that this is a work in progress and therefore not complete. In that case, I feel that the code doesn't need more complexity before it's done and a hand-rolled reflection-based approach is more code for you to write, test, debug and maintain.
For example, right now you seem to have a situation where there is a simple 1:1 copy from the data in the Excel sheet to the data in the employee_master object. In that case reflection seems like a no-brainer, because it saves you loads of boring manual property assignments.
But what happens when HR (or whoever uses this app) comes back to you with the requirement: if Field X is blank on the Excel sheet, then copy the value "Blank" to the target field, unless it's Friday, in which case copy the value "N.A"?
Now a generalised solution has to accommodate custom business logic and could start to get burdensome. I have been in this situation, and unless you are very careful, it tends to turn into a mess in the long run.
I just wanted to point this out and recommend at least looking at AutoMapper, because it already provides one very proven way to solve your issue.
In terms of efficiency concerns, they are only mentioned because the question mentioned them, and I wanted to point out that there are greater inefficiencies at play in the code as posted than the inefficiency of manually typing 40+ property assignments, or indeed the concern of only updating changed fields.
Why not rewrite the loop:
foreach (var xlData in excelData)
{
    //find the existing record in the database data:
    var existing = dbData.FirstOrDefault(d => d.customer == xlData.Customer);
    if (existing != null)
    {
        //it already exists in the database, update it
        //see below for notes on this.
    }
    else
    {
        //it doesn't exist, create an employee_master and insert it into the context
        //or perform validation to see if the insert can be done, etc.
    }
    //and commit:
    context.SaveChanges();
}
This lets you avoid the initial if (dbData.Count > 0), because you will always insert any row from the Excel sheet that doesn't have a matching entry in dbData, so you don't need a separate block of code for first-time insertion.
It's also more efficient than the current loop, because right now you are iterating every object in dbData for every object in excelData; that means if you have 1,000 items in each, you have a million iterations...
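To sketch that out further, a dictionary keyed on customer (assuming customer uniquely identifies a row, which the matching logic above already implies) removes the nested scan entirely:

// Index the database rows once: O(n) to build, O(1) per lookup afterwards.
var dbByCustomer = dbData.ToDictionary(d => d.customer);

foreach (var xlData in excelData)
{
    if (dbByCustomer.TryGetValue(xlData.Customer, out var existing))
    {
        // update 'existing' from xlData (see the notes below)
    }
    else
    {
        // build a new employee_master from xlData and add it to the context
    }
}
context.SaveChanges(); // a single commit at the end instead of one per row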
Notes on the update process and efficiency in general
(Note: I know the question wasn't directly about efficiency, but since you mentioned it in the context of copying properties, I just wanted to give some food for thought)
Unless you are building a system that has to do this operation for multiple entities, I'd caution against adding more complexity to your code by building a reflection-based property copier.
If you consider the number of properties you have to copy (i.e. the number of foo.x = bar.x type statements), and then consider the code required for a robust, fully tested and provably efficient reflection-based property copier (i.e. with a built-in cache so you don't have to constantly re-reflect type properties, a mechanism to allow you to specify exceptions, handling for edge cases where for whatever unknown reason you discover that for random column X the value "null" is to be treated a little differently in some cases, etc.), you may well find that the former is actually significantly less work :)
Bear in mind that even the fastest reflection-based solution will always still be slower than a good old fashioned foo.x = bar.x assignment.
By all means, if you have to do this operation for 10 or 20 separate entities, consider the general case; otherwise my advice would be: manually write the property copy assignments, get it right, and then think about generalising - or look at AutoMapper, for example.
In terms of only updating fields that have changed - I am not sure you even need to. If the record exists in the database, and the user has just presented a copy of that record which they claim to be the "correct" version of that object, then just copy all the values they gave and save them to the database.
The reason I say this is because in all likelihood, the efficiency of only sending for example 4 modified fields versus 25 or whatever, pales into insignificance next to the overhead of the actual round-trip to the database itself; I'd be surprised if you were able to observe a meaningful performance increase in these kinds of operations by not sending all columns - unless of course all columns are NVARCHAR(MAX) or something :)
If concurrency is an issue (i.e. other users might be modifying the same data), then include a ROWVERSION-type column in the database table, map it in Entity Framework, and handle the concurrency issues if and when they arise.
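For reference, a minimal sketch of what that mapping could look like; the Id and RowVersion members here are assumptions added for illustration, not part of the original employee_master:

using System.ComponentModel.DataAnnotations;

public class employee_master
{
    [Key]
    public int Id { get; set; }

    public string customer { get; set; }

    // Maps to a SQL Server ROWVERSION column. EF then adds this value to the
    // UPDATE's WHERE clause and throws DbUpdateConcurrencyException if another
    // user changed the row in the meantime.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}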
Within my company we got a briefing about performance issues. One of the points mentioned was the access of properties, because most of them are retrieved from a database; querying the object itself may therefore result in expensive calls to the underlying database.
Now I wonder if there is any difference between the following two statements (at least considering performance):
MyPropertyValue value = myObject.SomeProperty ?? ...;
And
MyPropertyValue value = (myObject.SomeProperty != null) ?
myObject.SomeProperty :
...
I suppose the former is a kind of shortcut for the latter and thus also results in two queries on the property, is this true?
Thanks for ya
The first example only accesses the property getter once; the second accesses it twice.
Now, if the getter does a DB call every time, then the first is definitely better.
Tested in LinqPad
The first expression will evaluate SomeProperty only once, whereas the latter will evaluate it twice.
So the answer depends on how expensive the property is. Generally speaking, properties should be cheap. If you have a property which takes a while to return, you should consider refactoring it into a method instead.
If you have a more specific question, you may get a specific answer.
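One way to sidestep the question entirely is to read the property into a local first, so the getter runs exactly once no matter which conditional form you use; defaultValue below is a placeholder standing in for the elided "..." fallback:

// The getter (and any database call behind it) executes exactly once here.
MyPropertyValue fetched = myObject.SomeProperty;
MyPropertyValue value = fetched ?? defaultValue;   // 'defaultValue' is a placeholder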
I have a dictionary with around 1 million items. I am constantly looping through the dictionary:
public void DoAllJobs()
{
foreach (KeyValuePair<uint, BusinessObject> p in _dictionnary)
{
if(p.Value.MustDoJob)
p.Value.DoJob();
}
}
The execution takes a while, around 600 ms, and I would like to decrease it. Here are the constraints:
MustDoJob values mostly stay the same between two calls to DoAllJobs()
60-70% of the MustDoJob values == false
From time to time, MustDoJob changes for 200,000 pairs.
Some p.Value.DoJob() calls cannot run at the same time (COM object call)
Here I do not need the key part of the _dictionnary object, but I really do need it somewhere else
I wanted to do the following:
Parallelize, but I am not sure it is going to be effective because of 4.
Sort the dictionary, given 1. and 2. (and stop when I find the first MustDoJob == false), but I am wondering what 3. would result in.
I did not implement any of the previous ideas since it could be a lot of work, and I would like to investigate other options first. So... any ideas?
What I would suggest is that your business object could raise an event to indicate that it needs to do a job when MustDoJob becomes true. You can subscribe to that event, store references to those objects in a simple list, and then process the contents of that list when the DoAllJobs() method is called.
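A loose sketch of that idea; the event name and the wiring are illustrative, not prescribed:

public class BusinessObject
{
    public event Action<BusinessObject> JobRequired;

    private bool _mustDoJob;
    public bool MustDoJob
    {
        get { return _mustDoJob; }
        set
        {
            _mustDoJob = value;
            if (value && JobRequired != null)
                JobRequired(this);           // announce that work is pending
        }
    }

    public void DoJob() { /* ... */ }
}

// Elsewhere: subscribe once per object and keep a simple pending list.
// Note: the same object may be added twice if the flag is toggled repeatedly;
// use a HashSet<BusinessObject> if that matters.
private readonly List<BusinessObject> _pending = new List<BusinessObject>();

public void Register(BusinessObject bo)
{
    bo.JobRequired += pending => _pending.Add(pending);
}

public void DoAllJobs()
{
    foreach (BusinessObject bo in _pending)
        bo.DoJob();
    _pending.Clear();                        // flags handled for this run
}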
My first suggestion would be to use just the values from the dictionary:
foreach (BusinessObject value in _dictionnary.Values)
{
if(value.MustDoJob)
{
value.DoJob();
}
}
With LINQ this could be even easier:
foreach (BusinessObject value in _dictionnary.Values.Where(v => v.MustDoJob))
{
value.DoJob();
}
That makes it clearer. However, it's not clear what else is actually causing you a problem. How quickly do you need to be able to iterate over the dictionary? I expect it's already pretty nippy... is anything actually wrong with this brute force approach? What's the impact of it taking 600ms to iterate over the collection? Is that 600ms when nothing needs to do any work?
One thing to note: you can't change the contents of the dictionary while you're iterating over it - whether in this thread or another. That means not adding, removing or replacing key/value pairs. It's okay for the contents of a BusinessObject to change, but the dictionary relationship between the key and the object can't change. If you want to minimise the time during which you can't modify the dictionary, you can take a copy of the list of references to objects which need work doing, and then iterate over that:
foreach (BusinessObject value in _dictionnary.Values
.Where(v => v.MustDoJob)
.ToList())
{
value.DoJob();
}
Try using a profiler first. Point 4 makes me curious: 600ms may not be that much if the COM calls use most of the time, and then it is either parallelize or live with it.
I would make sure first, with a profiler run, that you don't target the totally wrong issue here.
Having established that the loop really is the problem (see TomTom's answer), I would maintain a list of the items on which MustDoJob is true -- e.g., when MustDoJob is set, add the item to the list, and when you process it and clear the flag, remove it from the list. (This might be done directly by the code manipulating the flag, or by raising an event when the flag changes; it depends on what you need.) Then you loop through the list, not the dictionary; given that 60-70% of the MustDoJob values are false, the list will only hold around 30-40% of the items. The list might contain the object itself or just its key in the dictionary, although it will be more efficient if it holds the object itself, as you avoid the dictionary lookup. It does depend on how frequently you're queuing 200k of them, and how time-critical the queuing vs. the execution is.
But again: Step 1 is make sure you're solving the right problem.
The use of a dictionary to me implies that the intention is to find items by a key, rather than visit every item. On the other hand, 600ms for looping through a million items is respectable.
Perhaps alter your logic so that you can simply pick the relevant items satisfying the condition directly out of the dictionary.
Use a List of KeyValuePairs instead. This means you can iterate over it super-quickly by doing
List<KeyValuePair<string,object>> list = ...;
int totalItems = list.Count;
for (int x = 0; x < totalItems; x++)
{
// whatever you plan to do with them, you have access to both KEY and VALUE.
}
I know this post is old, but I was looking for a way to iterate over a dictionary without the increased overhead of the Enumerator being created (GC and all), or generally a faster way to iterate over it.