If I have an object A with many properties, out of which I only need a couple, I can boost performance by not transferring unnecessary data, i.e. selecting only the object properties I need into a new type B, either named or anonymous.
Now imagine I want to bind a list of those original objects A to, say, a DataGridView, which only displays the couple of properties I want. I have created the DataGridView columns using the property names of the original object A and set its data source type to typeof(A). I was wondering whether I can select into the same object A, just omitting the properties I don't need, i.e.
public class MyObject
{
public string prop1 { get; set; }
public string prop2 { get; set; }
.....
public string propN { get; set; }
}
var list = context.MyObject
.Select(n => new MyObject { prop1 = n.prop1, prop2 = n.prop2 }).ToList();
In this way I don't need to define a new type, either named or anonymous. The question is, do I gain something in performance, or do I still have the overhead of the original large object A, even though I do not transfer data for all its properties?
Alex
Actually, I don't think performance can improve much, because the Select statement will still go through your whole list and create a new list of objects for you. But if you have reference properties that you don't use, you can save something there.
If there is no complicated logic when you show the data in the UI, why don't you just keep the model as it is?
If this is for UI display only - there is no performance gain. Whatever time you might gain you will lose by creating a new list of anonymous types.
However, if you intend to send this object through the network (as a response to a request for example), then this makes sense. This way fewer properties have to be serialized and sent through the network.
In most cases, however, you shouldn't worry about performance at this level. The user won't notice an improvement of that magnitude. If you really wish to improve the performance of your application, you should profile it and find the hotspots.
The only meaningful performance gain, assuming your constructor is "cheap", is in the SQL query and the data transported to and from the database.
That said, not everything is about performance. Sometimes it's about clarity, extensibility, decoupling, etc. Clarity-wise, you're forcing others to ask the question "Is this property used by the UI?"
In addition to the clarity issue, you have coupling between the UI and back-end entities, which is not ideal. A cheap/temporary solution might simply be one like the following. Keep in mind that it's still coupled due to the interface on the class, but it's something that would be trivial to adjust in the future if desired.
public interface IMyModel
{
string prop1 { get; set; }
string prop2 { get; set; }
}
public class MyObject : IMyModel
{
public string prop1 { get; set; }
public string prop2 { get; set; }
.....
public string propN { get; set; }
}
IEnumerable<IMyModel> list = context.MyObject
.Select(n => new { n.prop1, n.prop2 }) // only select these properties
.ToArray() // execute the query
.Select(n => (IMyModel)new MyObject { prop1 = n.prop1, prop2 = n.prop2 }); // construct our desired object
Related
In the code below I have a concurrent dictionary that I'm using for storing a single key/value pair, where the value is a collection of strings.
I will be reading and updating the strings in this single key/value pair from different threads.
I'm aware that concurrent dictionaries are not entirely thread safe if one thread changes a value before another thread has finished reading it. But equally, I'm not sure whether string values really come into this topic or not; could someone please advise?
It's also worth mentioning that although I put this "GetRunTimeVariables" method into an interface for dependency injection, I actually can't use DI all the time for accessing this method, due to the stages of app startup and the OIDC sign-in/sign-out events where I need to access the dictionary values from classes that can't use dependency injection. So in essence I could be accessing this dictionary by any means necessary throughout the lifetime of the application.
Lastly, I'm not really sure if there is any benefit in pushing this method into an interface; the other option is simply to new up a reference to this class each time I need it. Some thoughts on this would be appreciated.
public class RunTimeSettings : IRunTimeSettings
{
// Create a new static instance of the RunTimeSettingsDictionary that is used for storing settings that are used for quick
// access during the life time of the application. Various other classes/threads will read/update the parameters.
public static readonly ConcurrentDictionary<int, RunTimeVariables> RunTimeSettingsDictionary = new ConcurrentDictionary<int, RunTimeVariables>();
public object GetRunTimeVariables()
{
dynamic settings = new RunTimeVariables();
if (RunTimeSettingsDictionary.TryGetValue(1, out RunTimeVariables runTimeVariables))
{
settings.SiteName = runTimeVariables.SiteName;
settings.Street = runTimeVariables.Street;
settings.City = runTimeVariables.City;
settings.Country = runTimeVariables.Country;
settings.PostCode = runTimeVariables.PostCode;
settings.Latitude = runTimeVariables.Latitude;
settings.Longitude = runTimeVariables.Longitude;
}
return settings;
}
}
Class for string values:
public class RunTimeVariables
{
public bool InstalledLocationConfigured { get; set; }
public string SiteName { get; set; }
public string Street { get; set; }
public string City { get; set; }
public string Country { get; set; }
public string PostCode { get; set; }
public string Latitude { get; set; }
public string Longitude { get; set; }
}
The System.String type (the classical strings of C#) is immutable. No one can modify the "content" of a String. Anyone can make a property reference a different String.
But in truth the only problem you have here is that the various values could become de-synced. You have various properties that are correlated. If one thread is modifying the object while another thread is reading it, the reading thread could see some properties of the "old" version and some properties of the "new" version. This isn't a big problem if the object, once written to the ConcurrentDictionary, is not changed (it is "immutable", at least as a business rule). Clearly a correct solution is to make RunTimeVariables an immutable object (composed only of read-only members that are initialized at construction, for example).
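A minimal sketch of that idea, trimmed to the properties used in GetRunTimeVariables above (illustrative only, not the original code):
// Illustrative sketch: an immutable version of RunTimeVariables.
// All values are fixed at construction, so a reader can never observe a
// half-updated object; to change settings, build a new instance and swap
// it into the dictionary in a single operation.
public sealed class RunTimeVariables
{
    public RunTimeVariables(string siteName, string street, string city,
        string country, string postCode, string latitude, string longitude)
    {
        SiteName = siteName;
        Street = street;
        City = city;
        Country = country;
        PostCode = postCode;
        Latitude = latitude;
        Longitude = longitude;
    }

    public string SiteName { get; }
    public string Street { get; }
    public string City { get; }
    public string Country { get; }
    public string PostCode { get; }
    public string Latitude { get; }
    public string Longitude { get; }
}
Writers then replace the whole value, e.g. RunTimeSettingsDictionary[1] = new RunTimeVariables(...), so readers always see either the complete old instance or the complete new one, never a mixture.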
I've been working on a project for a while to parse a list of entries from a csv file and use that data to update a database.
For each entry I create a new user instance that I put in a collection. Now I want to iterate that collection and compare the user entry to the user from the database (if it exists). My question is, how can I compare that user (entry) object to the user (db) object, while returning a list with differences?
For example following classes generated from database:
public class User
{
public int ID { get; set; }
public string EmployeeNumber { get; set; }
public string UserName { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public Nullable<int> OfficeID { get; set; }
public virtual Office Office { get; set; }
}
public class Office
{
public int ID { get; set; }
public string Code { get; set; }
public virtual ICollection<User> Users { get; set; }
}
To save some queries to the database, I only fill the properties that I can retrieve from the CSV file, so the IDs (for example) are not available for the equality check.
Is there any way to compare these objects without defining a rule for each property and returning a list of properties that are modified? I know this question seems similar to some earlier posts. I've read a lot of them but as I'm rather inexperienced at programming, I'd appreciate some advice.
From what I've gathered from my reading, should I be combining 'comparing properties generically' with 'ignoring properties using data annotations' and 'returning a list of CompareResults'?
There are several approaches you can take to solve this:
Approach #1 is to create separate DTO-style classes for the contents of the CSV files. Though this involves creating new classes with a lot of similar fields, it decouples the CSV file format from your database and gives you the ability to change them later without influencing the other part. In order to implement the comparison, you could create a Comparer class. As long as the classes are almost identical, the comparison can get all the properties from the DTO class and implement the comparison dynamically (e.g. by creating and evaluating a Lambda expression that contains a BinaryExpression of type Equal).
Approach #2 avoids the DTOs, but uses attributes to mark the properties that are part of the comparison. You'd need to create a custom attribute that you assign to the properties in question. In the compare, you analyze all the properties of the class and filter out the ones that are marked with the attribute. For the comparison of the properties you can use the same approach as in #1. Downside of this approach is that you couple the comparison logic tightly with the data classes. If you'd need to implement several different comparisons, you'd clutter the data classes with the attributes.
Of course, #1 involves more effort than #2. I understand that it is not what you are looking for, but maybe having a separate, strongly-typed comparer class is also an approach one can think about.
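For approach #2, the marker attribute itself can be very small. A sketch along these lines (the name ComparableFieldAttribute is purely illustrative):
// Illustrative marker attribute for approach #2: only properties that
// carry it take part in the comparison; everything else is ignored.
[AttributeUsage(AttributeTargets.Property)]
public class ComparableFieldAttribute : Attribute { }

public class User
{
    public int ID { get; set; }                 // not compared

    [ComparableField]
    public string EmployeeNumber { get; set; }

    [ComparableField]
    public string UserName { get; set; }

    [ComparableField]
    public string FirstName { get; set; }

    [ComparableField]
    public string LastName { get; set; }
}
The comparer would then filter the properties with something like typeof(User).GetProperties().Where(p => p.IsDefined(typeof(ComparableFieldAttribute), true)) before applying the same expression-based comparison shown below.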
Some more details on a dynamic comparison algorithm: it is based on reflection to get the properties that need to be compared (depending on the approach, you get the properties of the DTO or the relevant ones of the data class). Once you have the properties (in the case of DTOs, the properties should have the same name and data type), you can create a LambdaExpression and compile and evaluate it dynamically. The following lines show an excerpt of a code sample:
public static bool AreEqual<TDTO, TDATA>(TDTO dto, TDATA data)
{
foreach(var prop in typeof(TDTO).GetProperties())
{
var dataProp = typeof(TDATA).GetProperty(prop.Name);
if (dataProp == null)
throw new InvalidOperationException(string.Format("Property {0} is missing in data class.", prop.Name));
var compExpr = GetComparisonExpression(prop, dataProp);
var del = compExpr.Compile();
if (!(bool)del.DynamicInvoke(dto, data))
return false;
}
return true;
}
private static LambdaExpression GetComparisonExpression(PropertyInfo dtoProp, PropertyInfo dataProp)
{
var dtoParam = Expression.Parameter(dtoProp.DeclaringType, "dto");
var dataParam = Expression.Parameter(dataProp.DeclaringType, "data");
return Expression.Lambda(
Expression.MakeBinary(ExpressionType.Equal,
Expression.MakeMemberAccess(
dtoParam, dtoProp),
Expression.MakeMemberAccess(
dataParam, dataProp)), dtoParam, dataParam);
}
For the full sample, see this link. Please note that this dynamic approach is just an easy implementation that leaves room for improvement (e.g. there is no check for the data type of the properties). It also only checks for equality and does not collect the properties that are not equal, but that should be easy to transfer.
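For instance, a variation that collects the names of the differing properties instead of returning a plain bool could look like this (a hypothetical GetDifferences helper built on the same GetComparisonExpression method as above):
// Hypothetical variation of AreEqual that returns the names of all
// properties whose values differ between the DTO and the data object.
public static List<string> GetDifferences<TDTO, TDATA>(TDTO dto, TDATA data)
{
    var differences = new List<string>();
    foreach (var prop in typeof(TDTO).GetProperties())
    {
        var dataProp = typeof(TDATA).GetProperty(prop.Name);
        if (dataProp == null)
            throw new InvalidOperationException(string.Format("Property {0} is missing in data class.", prop.Name));
        var compExpr = GetComparisonExpression(prop, dataProp);
        var del = compExpr.Compile();
        if (!(bool)del.DynamicInvoke(dto, data))
            differences.Add(prop.Name); // collect the difference instead of returning early
    }
    return differences;
}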
While the dynamic approach is easy to implement, the risk for runtime errors is bigger than in a strongly-typed approach.
I currently have the following Models in my EF Code First MVC project (edited for brevity):
public class Car
{
public int Id { get; set; }
public string Descrip { get; set; }
// Navigation Property.
public virtual CarColour CarColour { get; set; }
... + numerous other navigation properties.
}
public class CarColour
{
public int Id { get; set; }
public string ColourName { get; set; }
}
The CarColour table in the DB contains many rows.
In my project, I have about 10 of these sorts of tables, which are essentially lookup tables.
Rather than have 10 lookup tables (and 10 corresponding 'hard' types in code), I was tasked with implementing a more re-usable approach. Instead of having loads of lookup tables specific to Car (in this example), the idea is to have a couple of tables, one of which holds the item types (colour, fuel-type etc.) and one which contains the various values for each of those types. The model could then be re-used by many other projects, some of which will have potentially hundreds of different attributes, and as such we won't want to create a new class/type in code and generate a new lookup table for each.
I am having difficulty in understanding the C# implementation of this sort of approach and hope someone may be able to give me an example of how this can be achieved in code; more specifically, how the above models would need to change, and what additional classes would be required to accomplish this?
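To illustrate the kind of structure I have in mind (the names here are purely illustrative, not an existing design), the two generic tables might map to classes roughly like this:
// Purely illustrative sketch of the "two generic lookup tables" idea:
// one table holds the kinds of lookup (Colour, FuelType, ...),
// the other holds the individual values belonging to each kind.
public class LookupType
{
    public int Id { get; set; }
    public string Name { get; set; } // e.g. "Colour", "FuelType"
    public virtual ICollection<LookupValue> Values { get; set; }
}

public class LookupValue
{
    public int Id { get; set; }
    public string Value { get; set; } // e.g. "Red", "Diesel"
    public int LookupTypeId { get; set; }
    public virtual LookupType LookupType { get; set; }
}

public class Car
{
    public int Id { get; set; }
    public string Descrip { get; set; }
    // Instead of a dedicated CarColour navigation property,
    // reference a generic lookup value.
    public int? ColourId { get; set; }
    public virtual LookupValue Colour { get; set; }
}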
Your base entity must implement INotifyPropertyChanged and make it generic:
private CarColour carColour;
public virtual CarColour CarColour
{
    get { return this.carColour; }
    set
    {
        this.carColour = value;
        OnPropertyChanged("CarColour");
    }
}
For more info see: patterns & practices: Prism on CodePlex.
http://compositewpf.codeplex.com/wikipage?title=Model%20View%20ViewModel%20(MVVM)
Greetings
Bassam
This is not necessarily specific to EF but I've been down this road and didn't really enjoy it.
I wanted to use a single table to represent 'generic' information, and while I thought it was smart, it soon showed its limitations. One of them is the complexity you need to introduce when writing queries to extract this data if you're performing more than just 'get colours for this car'.
I'd say, if your data is simple key/value and the value type is always going to be the same, then go for it; it might even be worth having this as mere 'meta-data' for an object:
public class Car
{
public int Id { get; set; }
public string Descrip { get; set; }
public MetaData CarColours { get; set; }
}
public class MetaData : Dictionary<int, string>
{
    public MetaData(int group) { }
}
Hypothetical table:
TableMetaData(int metaGroup, int metaId, string metaValue)
If you're hoping to store different types as your value and may need to perform joining on this data - avoid it and be a bit more specific.
I need to know if there are any performance problems/considerations if I do something like this:
public Hashtable Properties = ...;
public double ItemNumber
{
    get { return (double)Properties["ItemNumber"]; }
    set
    {
        Properties["ItemNumber"] = value;
    }
}
public string Property2 ...
public ... Property3 ...
Instead of accessing the property directly:
public string ItemNumber { get; set; }
public string prop2 { get; set; }
public string prop3 { get; set; }
It depends on your performance requirements... Accessing a Hashtable and casting the result is obviously slower than just accessing a field (auto-properties create a field implicitly), but depending on what you're trying to do, it might or might not make a significant difference. Complexity is O(1) in both cases, but accessing a hashtable obviously takes more cycles...
Well, compared to direct property access it will surely be slower, because much more code needs to be executed for the get and set operations. But since you are using a Hashtable, the access itself should be pretty fast. You also get additional overhead because of the casting, since you are using a weakly typed collection: things like boxing and unboxing come with a cost. The question is whether all this will noticeably affect the performance of your application. It really depends on your requirements. I would recommend performing some load tests to see if this could be a bottleneck.
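As a rough illustration of how one could measure this (a simple Stopwatch micro-benchmark; the class names and iteration count are arbitrary):
// Rough micro-benchmark sketch comparing a plain auto-property with a
// Hashtable-backed property. Absolute numbers will vary by machine; the
// point is only to show how the difference could be measured.
using System;
using System.Collections;
using System.Diagnostics;

class PropertyBenchmark
{
    class DirectItem
    {
        public double ItemNumber { get; set; }
    }

    class HashtableItem
    {
        public Hashtable Properties = new Hashtable();
        public double ItemNumber
        {
            get { return (double)Properties["ItemNumber"]; }
            set { Properties["ItemNumber"] = value; } // boxes the double on every set
        }
    }

    static void Main()
    {
        const int iterations = 10000000;
        var direct = new DirectItem();
        var hashed = new HashtableItem();
        double sum = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            direct.ItemNumber = i;
            sum += direct.ItemNumber;
        }
        sw.Stop();
        Console.WriteLine("Direct property:    {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            hashed.ItemNumber = i;
            sum += hashed.ItemNumber;
        }
        sw.Stop();
        Console.WriteLine("Hashtable property: {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine(sum); // prevent the loops from being optimized away
    }
}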
I want to implement a simple attribute that is used to map Database Columns to Properties.
So what I have so far is something that is attached like so:
[DataField("ID")]
public int ID { get; set; }
[DataField("Name")]
public String Name { get; set; }
[DataField("BirD8")]
public DateTime BirthDay { get; set; }
Is there a way that I can make the attribute "aware" of the field it is on, so that for the properties where the name is the same as the column name I can just apply the attribute without the name parameter? Or would I have to deal with that at the point where I reflect over the properties? I want to end up doing just this:
[DataField]
public int ID { get; set; }
[DataField]
public String Name { get; set; }
[DataField("BirD8")]
public DateTime BirthDay { get; set; }
The attribute itself won't be aware of what it's applied to, but the code processing the attributes is likely to be running through PropertyInfo values etc and finding the attributes associated with them. That code can then use both the property and the attribute appropriately.
To make things simpler, you might want to write a method on the attribute to allow it to merge its information with the information from the property, so you'd call:
DataFieldAttribute dfa = propertyInfo.GetCustomAttributes(...); // As normal
dfa = dfa.MergeWith(propertyInfo);
Note that for the sake of sanity this should create a new instance of the attribute, rather than changing the existing one. Alternatively, you might want a whole separate class to represent "the information about a data field":
DataFieldAttribute dfa = propertyInfo.GetCustomAttributes(...); // As normal
DataFieldInfo info = dfa.MergeWith(propertyInfo);
That way you could also construct DataFieldInfo objects without any reference to attributes, which might be a nice conceptual separation - allowing you to easily load the config from an XML file or something similar if you wanted to.
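A minimal sketch of how that might look (DataFieldInfo and MergeWith are illustrative names, not an existing API):
using System;
using System.Reflection;

// Illustrative sketch: the column name is optional on the attribute, and a
// separate info class fills in the property name when none was supplied.
[AttributeUsage(AttributeTargets.Property)]
public class DataFieldAttribute : Attribute
{
    public DataFieldAttribute() { }
    public DataFieldAttribute(string columnName) { ColumnName = columnName; }

    public string ColumnName { get; private set; }

    // Combine the attribute with the property it decorates; the resulting
    // info defaults the column name to the property name.
    public DataFieldInfo MergeWith(PropertyInfo property)
    {
        return new DataFieldInfo(property, ColumnName ?? property.Name);
    }
}

public class DataFieldInfo
{
    public DataFieldInfo(PropertyInfo property, string columnName)
    {
        Property = property;
        ColumnName = columnName;
    }

    public PropertyInfo Property { get; private set; }
    public string ColumnName { get; private set; }
}
The code that reflects over the entity's properties would then call MergeWith on each DataFieldAttribute it finds, so [DataField] without an argument falls back to the property name while [DataField("BirD8")] keeps its explicit mapping.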
If you don't mind using PostSharp, you can look here, at a previous question I have asked which was close. I ended up using compile-time validation to do what I wanted, although there are other options, like CompileTimeInitialize.
public override void CompileTimeInitialize(object element)
{
PropertyInfo info = element as PropertyInfo;
//....
}