If I use loops in the get of a property, does this mean every time I call get on the property, it executes the loop? For eg, below, if I call CartViewModel.Total, does this mean it executes the loop inside SubTotal and Discount?
public class CartViewModel
{
    public decimal SubTotal { get { return CartViewItems.Sum(c => c.SubTotal); } }
    public decimal Discount { get { return CartViewItems.Sum(c => c.SubTotal - c.Total); } }
    public decimal Total { get { return SubTotal - Discount; } }
    public List<CartViewItem> CartViewItems { get; set; }
}
public class CartViewItem
{
    public decimal Price { get; set; }
    public int ProductId { get; set; }
    public int Quantity { get; set; }
    public float DiscountPercent { get; set; }

    public decimal SubTotal { get { return Price * Quantity; } }

    public decimal Total
    {
        get
        {
            return Convert.ToDecimal((float)Price * (1 - (DiscountPercent / 100)) * Quantity);
        }
    }
}
Is there a way to optimize this?
Yes, every time you call the property, it will execute the loop.
Properties are really just syntactic sugar over method calls. Very nice syntactic sugar, but only sugar.
You might want to change CartViewModel to avoid exposing the list directly - instead, keep the list private, give some appropriate methods to mutate it (e.g. Add and Clear methods), probably make it implement IEnumerable<CartViewItem> itself, and keep track of the subtotal and discount as the list is mutated. That's assuming that CartViewItem is immutable, of course...
Alternatively (as John suggested), just make it more obvious that you're actually doing work, by changing the properties to ComputeTotal(), ComputeDiscount() and ComputeSubTotal() methods.
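A minimal sketch of that first suggestion (the running-total bookkeeping and names here are my own, not from the question): keep the list private, expose Add/Clear, and maintain the totals incrementally, so the properties become cheap field reads.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public class CartViewItem
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
    public float DiscountPercent { get; set; }
    public decimal SubTotal { get { return Price * Quantity; } }
    public decimal Total { get { return Convert.ToDecimal((float)Price * (1 - DiscountPercent / 100) * Quantity); } }
}

public class CartViewModel : IEnumerable<CartViewItem>
{
    private readonly List<CartViewItem> items = new List<CartViewItem>();
    private decimal subTotal;  // running totals, maintained as the list changes
    private decimal discount;

    public decimal SubTotal { get { return subTotal; } }
    public decimal Discount { get { return discount; } }
    public decimal Total { get { return subTotal - discount; } }

    public void Add(CartViewItem item)
    {
        items.Add(item);
        subTotal += item.SubTotal;
        discount += item.SubTotal - item.Total;
    }

    public void Clear()
    {
        items.Clear();
        subTotal = 0m;
        discount = 0m;
    }

    public IEnumerator<CartViewItem> GetEnumerator() { return items.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
```

Every getter is now O(1) regardless of cart size. The caveat is the one noted above: this only stays correct if items are effectively immutable once added.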
I would recommend the "don't do that" approach.
Properties are meant for cases where the data being accessed are "almost like a field" in terms of performance and semantics. They are definitely not meant for cases where the property hides the fact that a more expensive operation is being performed.
Replace those properties with methods and see whether the calling code changes much. It will be clearer that you are doing something potentially expensive.
Depending on how often those are referenced, I might go further and inline the properties entirely. That might suggest some optimization in the callers.
A pure function is one which given the same arguments, will always return the same result and will have no side effects.
So Sum(x, y) => x + y; is pure, because it meets this criterion.
However, in a language like C# where you can have properties, this makes things more complicated...
class Summer
{
    public int X { get; set; }
    public int Y { get; set; }
    public int Sum() => X + Y;
}
In the above, can Sum be considered to be pure? Can you make the argument that X and Y are still just parameters to the function?
Or would it be better if refactored to something like:
class Summer
{
    public int X { get; set; }
    public int Y { get; set; }
}

static class SummerExn
{
    public static int Sum(this Summer s)
    {
        return s.X + s.Y;
    }
}
In the extension method, s is a parameter, so I think this meets the criteria for purity. Realistically, though, there is no practical difference, since the underlying variables are the same. Is there a technical reason one version is better: easier to test, faster, more memory-efficient, easier to reason about, etc.?
Your example doesn't meet the definition you gave:
A pure function is one which given the same arguments, will always return the same result...
Every call is given the same arguments (none) yet obviously can return different results. Another definition from Wikipedia makes this a little more explicit:
The function return values are identical for identical arguments (no variation with local static variables, non-local variables...)
Properties are non-local variables.
And to be especially pedantic, not only is your example not a pure function, it's not a function at all. A function-like thing that's a non-static class member is called a method.
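The point is easy to demonstrate: both calls below pass the same arguments (none, in this case), yet return different results, because Sum reads mutable instance state.

```csharp
using System;

class Summer
{
    public int X { get; set; }
    public int Y { get; set; }
    public int Sum() => X + Y;
}

class Demo
{
    static void Main()
    {
        var s = new Summer { X = 1, Y = 2 };
        Console.WriteLine(s.Sum()); // 3
        s.X = 10;                   // no argument to Sum changed...
        Console.WriteLine(s.Sum()); // 12 ...but the result did
    }
}
```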
Introduction to the goal:
I am currently trying to optimize the performance and memory usage of my code (RAM is the main bottleneck).
The program will have many instances of the following element alive at the same time, especially when historic prices are processed at the fastest possible rate.
In its simplest form, the struct looks like this:
public struct PriceElement
{
    public DateTime SpotTime { get; set; }
    public decimal BuyPrice { get; set; }
    public decimal SellPrice { get; set; }
}
I realized there are performance benefits to treating the struct like an empty bottle and refilling it after consumption. This way, I do not have to reallocate memory for each element in the series.
However, it also makes the code more vulnerable to human error. In particular, I want to make sure I always update the whole struct at once, rather than ending up with, say, an updated SellPrice and BuyPrice because I forgot to update another field.
The element is very neat like this, but without methods I would have to offload the functionality I need into other classes, which would be less intuitive and therefore less preferable.
So I added some basic methods which make my life a lot easier:
public struct PriceElement
{
    public PriceElement(DateTime spotTime = default(DateTime), decimal buyPrice = 0, decimal sellPrice = 0)
    {
        // assign DateTime.MinValue if no spot time was supplied
        this.SpotTime = spotTime == default(DateTime) ? DateTime.MinValue : spotTime;
        this.BuyPrice = buyPrice;
        this.SellPrice = sellPrice;
    }

    // Data
    public DateTime SpotTime { get; private set; }
    public decimal BuyPrice { get; private set; }
    public decimal SellPrice { get; private set; }

    // Methods
    public decimal SpotPrice { get { return (this.BuyPrice + this.SellPrice) / 2m; } }

    // refills/overwrites this price element
    public void UpdatePrice(DateTime spotTime, decimal buyPrice, decimal sellPrice)
    {
        this.SpotTime = spotTime;
        this.BuyPrice = buyPrice;
        this.SellPrice = sellPrice;
    }

    public override string ToString()
    {
        var output = new System.Text.StringBuilder();
        output.Append(this.SpotTime.ToString("dd/MM/yyyy HH:mm:ss"));
        output.Append(',');
        output.Append(this.BuyPrice);
        output.Append(',');
        output.Append(this.SellPrice);
        return output.ToString();
    }
}
Question:
Let's say I have a PriceElement[1000000] array - will those additional methods put additional strain on system memory, or are they "shared" between all structs of type PriceElement?
Will those additional methods increase the time needed to create a new PriceElement(DateTime, buy, sell) instance, or the load on the garbage collector?
Are there any negative impacts I have not mentioned here?
will those additional methods put additional strain on the system memory or are they "shared" between all structs of type PriceElement?
Code is shared between all instances. So no additional memory will be used.
Code is stored separately from any data, and the memory for the code depends only on the amount of code, not on how many object instances there are. This is true for both classes and structs. The main exception is generics: a copy of the code can be created for each type combination that is used. It is a bit more complicated in practice since the code is JIT-compiled, cached, etc., but that is irrelevant in most cases since you cannot control it anyway.
I would recommend making your struct immutable, i.e. changing UpdatePrice so it returns a new struct instead of mutating the existing one. See why is mutable structs evil for details. Making the struct immutable allows you to mark it as readonly, which can help avoid defensive copies when passing the struct as an in parameter. In modern C# you can also take references to structs in an array, which likewise helps avoid copies (as you seem to be aware of).
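As a sketch of what that could look like (readonly struct requires C# 7.2+; the refill-a-slot usage is my own illustration):

```csharp
using System;

public readonly struct PriceElement
{
    public PriceElement(DateTime spotTime, decimal buyPrice, decimal sellPrice)
    {
        SpotTime = spotTime == default(DateTime) ? DateTime.MinValue : spotTime;
        BuyPrice = buyPrice;
        SellPrice = sellPrice;
    }

    public DateTime SpotTime { get; }
    public decimal BuyPrice { get; }
    public decimal SellPrice { get; }
    public decimal SpotPrice { get { return (BuyPrice + SellPrice) / 2m; } }
}

class Demo
{
    static void Main()
    {
        var prices = new PriceElement[3];
        // "refill" a slot by overwriting it with a fresh value in place;
        // no heap allocation, no garbage collector involvement
        prices[0] = new PriceElement(DateTime.UtcNow, 10m, 20m);
        Console.WriteLine(prices[0].SpotPrice); // 15
    }
}
```

Because the whole value is replaced in one assignment, the half-updated-element mistake described in the question becomes impossible to write.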
I have developed a fairly complex spreadsheet in Excel, and I am tasked with converting it to a C# program.
What I am trying to figure out is how to represent the calculations from my spreadsheet in C#.
The calculations have many dependencies, to the point that it would almost appear to be a web, rather than a nice neat hierarchy.
The design solution I can think of is this:
Create an object to represent each calculation.
Each object has an integer or double that holds the calculation result; this calculation takes inputs from other objects, so those must be evaluated first before it can be performed.
Each object has a second integer, "completed", which is set to 1 when the calculation succeeds.
Each object has a third integer, "ready", which is set to 1 only when all precedent objects' "completed" integers evaluate to 1; if not, the loop skips this object.
A loop runs through all objects until all of the "completed" integers equal 1.
I hope this makes sense. I am typing up the code for this but I am still pretty green with C# so at least knowing i'm on the right track is a boon :)
To clarify, this is a design query, I'm simply looking for someone more experienced with C# than myself, to verify that my method is sensible.
I appreciate any help with this issue, and I'm keen to hear your thoughts! :)
edit*
I believe the "completed" state and "ready" state are required for the loop state check to prevent errors that might occur from attempts to evaluate a calculation where precedents aren't evaluated. Is this necessary?
edit*
For example, one object would be a line, "V_dist".
It has a length as a property.
Its length, "V_dist.calc_formula", is calculated from two other objects: "hpc*Tan(dang)".
public class inputs
{
    public string input_name;
    public int input_angle;
    public int input_length;
}

public class calculations
{
    public string calc_name;       // calculation name
    public string calc_formula;    // this is just a string containing the formula
    public double calculationdoub; // this is the calculation result
    public int completed;          // set to 1 when "calculationdoub" is nonzero
    public int ready;              // set to 1 when the precedent objects' "completed" property = 1
}

public class Program
{
    public static void Main()
    {
        // Horizontal Length
        inputs hpc = new inputs();
        hpc.input_name = "Horizontal \"P\" Length";
        hpc.input_angle = 0;
        hpc.input_length = 200000;

        // Discharge Angle
        inputs dang = new inputs();
        dang.input_name = "Discharge Angle";
        dang.input_angle = 12;
        dang.input_length = 0;

        // First calculation object
        calculations V_dist = new calculations();
        V_dist.calc_name = "Vertical distance using discharge angle";
        V_dist.calc_formula = "hpc*Tan(dang)";
        // Math.Tan expects radians, so the angle in degrees is converted first
        V_dist.calculationdoub = hpc.input_length * Math.Tan(dang.input_angle * Math.PI / 180);
        V_dist.completed = 0;
        V_dist.ready = 0;
    }
}
It should be noted that there are other features I have yet to add, such as the loop and the logic controlling the "completed" and "ready" flags.
You have some good ideas, but if I understand what you are trying to do, I think there is a more idiomatic, more object-oriented way to solve this that is also much less complicated. I am presupposing you have a standard spreadsheet, where many rows all effectively share the same columns. You may also have different columns in different sections of the spreadsheet.
I've converted several spreadsheets to applications, and I have settled on this approach. I think you will love it.
For each set of headers, I would model that as a single object class. Each column would be a property of the class, and each row would be one object instance.
In all but very rare cases, I would say simply model your properties to include the calculations. A simplistic example of a box would be something like this:
public class Box
{
    public double Length { get; set; }
    public double Width { get; set; }
    public double Height { get; set; }

    public double Area
    {
        get { return 2*Height*Width + 2*Length*Height + 2*Length*Width; }
    }

    public double Volume
    {
        get { return Length * Width * Height; }
    }
}
And the idea here is if there are properties (columns in Excel) that use other calculated properties/columns as input, just use the property itself:
public bool IsHuge
{
    get { return Volume > 50; }
}
.NET will handle all of the heavy lifting and dependencies for you.
In most cases, this will FLY in C# compared to Excel, and I don't think you'll have to worry about computational speed in the way you've set up your cascading objects.
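A quick usage sketch of the lazy version above (the values are arbitrary):

```csharp
using System;

public class Box
{
    public double Length { get; set; }
    public double Width { get; set; }
    public double Height { get; set; }
    public double Area { get { return 2 * Height * Width + 2 * Length * Height + 2 * Length * Width; } }
    public double Volume { get { return Length * Width * Height; } }
    public bool IsHuge { get { return Volume > 50; } }
}

class Demo
{
    static void Main()
    {
        var box = new Box { Length = 2, Width = 3, Height = 5 };
        Console.WriteLine(box.Volume); // 30, recomputed on every access
        box.Height = 10;               // no explicit recalculation step needed...
        Console.WriteLine(box.IsHuge); // True: Volume is now 60
    }
}
```

Note there is no "recalculate" call anywhere: the dependency chain (IsHuge reads Volume, Volume reads the dimensions) resolves itself at access time.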
When I said "all but rare cases": if you have properties that are very computationally expensive, you can give them private setters and trigger the calculations explicitly.
public class Box
{
    public double Length { get; set; }
    public double Width { get; set; }
    public double Height { get; set; }

    public double Area { get; private set; }
    public double Volume { get; private set; }
    public bool IsHuge { get; private set; }

    public void Calculate()
    {
        Area = 2*Height*Width + 2*Length*Height + 2*Length*Width;
        Volume = Length * Width * Height;
        IsHuge = Volume > 50;
    }
}
Before you go down this path, I'd recommend performance testing. Unless you have millions of rows and/or very complex calculations, I doubt this second approach would be worthwhile, and with the first approach you get the benefit of not needing to decide when to calculate: it happens when, and only when, a property is accessed.
I need to know if there are some performance problem/consideration if I do something like this:
public Hashtable Properties = ...
public double ItemNumber
{
    get { return (double)Properties["ItemNumber"]; }
    set { Properties["ItemNumber"] = value; }
}
public string Property2 ...
public ... Property3 ...
Instead of accessing the property directly:
public string ItemNumber { get; set; }
public string prop2 { get; set; }
public string 3...{ get; set; }
It depends on your performance requirements... Accessing a Hashtable and casting the result is obviously slower than just accessing a field (auto-properties create a field implicitly), but depending on what you're trying to do, it might or might not make a significant difference. Complexity is O(1) in both cases, but accessing a hashtable obviously takes more cycles...
Well, compared to the direct property access it will surely be slower because much more code needs to be executed for the get and set operations. But since you are using a Hashtable the access should be pretty fast. You are also getting an additional overhead because of the casting since you are using weakly typed collection. Things like boxing and unboxing come with a cost. The question is whether all this will affect noticeably the performance of your application. It would really depend on your requirements. I would recommend you performing some load tests to see if this could be a bottleneck.
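A rough way to measure this for your own workload (the class, names, and loop count below are my own illustration; only the relative numbers matter):

```csharp
using System;
using System.Collections;
using System.Diagnostics;

class Item
{
    private readonly Hashtable properties = new Hashtable();

    public double ViaBag
    {
        get { return (double)properties["ItemNumber"]; } // unboxing cast on every read
        set { properties["ItemNumber"] = value; }        // boxes the double on every write
    }

    public double ViaField { get; set; }                 // plain auto-property: a field access
}

class Demo
{
    static void Main()
    {
        var item = new Item { ViaBag = 42.0, ViaField = 42.0 };
        const int N = 10000000;
        double sum1 = 0, sum2 = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) sum1 += item.ViaBag;
        Console.WriteLine("Hashtable: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        for (int i = 0; i < N; i++) sum2 += item.ViaField;
        Console.WriteLine("Field:     " + sw.ElapsedMilliseconds + " ms");

        Console.WriteLine(sum1 == sum2); // True: same values, different cost
    }
}
```

If the measured difference is lost in the noise of your real workload, the convenience of the property bag may well win.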
I haven't used LINQ extensively, but the more I use it, the more I realize how powerful it can be. Using the LinqDataSource.OrderBy clause is obviously easy if you want to sort by a property on the bound items, but what if you want to sort the items based on a method's return value? Take this class, for instance (please ignore the quirky design - it's just used to emphasize my point):
public class DataItem
{
    private string Id { get; set; }
    private int SortValue { get; set; }

    public DataItem(string id, int sortValue)
    {
        this.Id = id;
        this.SortValue = sortValue;
    }

    public int GetSortValue()
    {
        return SortValue;
    }
}
Is there a way that I can set the orderby expression on the LinqDataSource so that it uses the value returned from GetSortValue (i.e order by other members than properties) but without altering the DataItem class?
If the method has no parameters you could wrap it with a property?
public int SortOrderBy { get { return GetSortValue(); } }
Edit: This will also work if the parameters are constants or class fields/properties.
The MSDN docs mention that it is indeed possible to do custom sorting but I might have misinterpreted your question.
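A sketch of the wrapper-property idea (here the property is added to a copy of the question's class, and the sort is shown with plain LINQ OrderBy rather than LinqDataSource markup):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class DataItem
{
    private string Id { get; set; }
    private int SortValue { get; set; }

    public DataItem(string id, int sortValue)
    {
        Id = id;
        SortValue = sortValue;
    }

    public int GetSortValue() { return SortValue; }

    // the suggested wrapper, so data binding / OrderBy can see the value
    public int SortOrderBy { get { return GetSortValue(); } }
}

class Demo
{
    static void Main()
    {
        var items = new List<DataItem>
        {
            new DataItem("c", 3), new DataItem("a", 1), new DataItem("b", 2)
        };
        var sorted = items.OrderBy(i => i.SortOrderBy).ToList();
        Console.WriteLine(string.Join(",", sorted.Select(i => i.SortOrderBy))); // 1,2,3
    }
}
```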