Effects of variable scope on performance? (C#)

Assume that we have 3 classes:
Place, Contact, PhoneNumber classes.
Under the Place class I want to have a Contact, and it makes more sense to keep the PhoneNumber class under the Contact class. So from the Place class, the more logical way to reach the PhoneNumber is to get the Contact object first, then the PhoneNumber under it.
If I often need to get the PhoneNumber object from a Place object, does keeping the PhoneNumber under the Contact class (Place.Contact.PhoneNumber) instead of putting it directly under the Place class (Place.PhoneNumber) cause any performance issues?
I ask because this kind of scoping can have a significant performance impact in JavaScript. Is it worth being this paranoid about the relationship between variable scope and performance in C#?
Thank you.

In C# you won't see many performance issues around trivial* property getters and setters like this. However, without profiling, it is impossible to say if this will be a problem for you.
For most cases though, object graph constructions never create performance problems in C# like they can in JavaScript.
* Properties that simply return a reference to an existing object and have no additional logic.
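A minimal sketch of the structure described in the question (class and property names are only illustrative), showing what "trivial" getters look like here - each one simply returns a reference, so Place.Contact.PhoneNumber is essentially two pointer reads at runtime:

public class PhoneNumber
{
    public string Number { get; set; }
}

public class Contact
{
    // Trivial getter/setter: returns a reference, no extra logic.
    public PhoneNumber PhoneNumber { get; set; }
}

public class Place
{
    public Contact Contact { get; set; }
}

// Usage: reaching the number through the nested object graph.
// var number = somePlace.Contact.PhoneNumber.Number;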

It will have an effect on performance, but won't cause issues. The just-in-time compiler compiles member accesses into direct pointer computations (having computed the layout of each class when the assembly was loaded), so member access is much faster in C# than it is in JavaScript.

Unless this is the absolute last stop on your list of things to try in order to make your program run slightly faster, and by "slightly" I actually mean "minuscule" in this case, I would not worry about it.
To answer your question first: yes, this might impact performance. Reading ref.ref.prop obviously takes slightly more work than reading ref.prop.
However, the impact is very small, and unless you're reading this property two levels down many, many times in a loop while doing little else useful, the effect of having one or two levels in this particular scenario will be dwarfed by any other code you might be executing.
In any case, the general rule is to write the code the most obvious way, the most simple way, and the most understandable way, so that it is first and foremost easy to write and easy to maintain, which in the long term will lead to fewer bugs.
At some point, if your program has a performance problem, and you find out that this particular code is the piece of code that is taking the most time, at that moment, then, and only then, do you go in and try to optimize that code.

Related

Performance impact of class inheritance

So, this may seem like a very odd question for many, but here it is:
Say you have an abstract class "Object" with an abstract method doStuff() which 10,000 classes inherit from.
Then in another class you have an "Object" dictionary with 100 random objects of the "Object" type in it. You call doStuff() on them.
Does the number of classes have any performance impact? How does the executable find which class's method to execute? Is it a jump table, a pointer table, the equivalent logic of a huge switch-case...?
If it has any performance impact, are there ways to structure your code differently to eliminate this problem?
I feel I am really overthinking this.
There is no noticeable performance impact when you call doStuff.
At runtime, the type of the object you are calling doStuff on is known for sure. It is only at compile time that you'd need a giant switch statement, because you don't know the type. The CLR sees that you are calling doStuff on a Subclass0679, looks the method up in that class's method table, and invokes it. Simple as that.
Think about it this way. ToString() is declared in Object and all classes inherit Object. When you call ToString() on something, is it really slow? No.
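A minimal sketch of the scenario from the question (names are illustrative; the abstract class is called BaseObject here to avoid clashing with System.Object). However many derived classes exist, each call site performs one virtual dispatch through the object's method table:

using System.Collections.Generic;

public abstract class BaseObject
{
    public abstract void DoStuff();
}

public class Subclass0679 : BaseObject
{
    public override void DoStuff() { /* ... */ }
}

public static class Demo
{
    public static void RunAll(Dictionary<string, BaseObject> objects)
    {
        foreach (var obj in objects.Values)
        {
            // One virtual call: the runtime finds the right override via the
            // object's method table; the total number of derived classes in
            // the program does not change the cost of this lookup.
            obj.DoStuff();
        }
    }
}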
The number of derived classes can have some impact.
In particular, with ten thousand derived classes and only 100 objects, chances are pretty good that each call to doStuff will actually be to a unique function, separate from the others. That means the CPU's instruction cache won't be very effective.
This is a fairly unrealistic scenario though. A collection that could be any one of ten thousand different derived classes is unlikely to ever arise in actual practice.
To put this in perspective, .NET (in its entirety) consists of about nine thousand nine hundred classes. A decent-sized application could easily add the hundred or so more needed to get to ten thousand, but you're talking about a collection that could include anything in .NET, plus anything in your application. I find it difficult to imagine a situation in which this is likely to make sense.
If you're asking this out of curiosity as a hypothetical question, then fair enough.
However, if you're trying to prematurely optimise some code and this is the level of decisions you're making, I would highly recommend you concentrate on making your code work first and then use a profiler to identify hotspots as areas to optimise.
Also, super optimised code is usually far less readable and maintainable.
Unless your code is a game engine or performs some enormous calculation, does it really need to be so optimised? If the code communicates with the outside world at all - network, disk, database, etc. - then that latency will completely dwarf any imperceptible difference in timing from using inheritance.

The size of a Get method

Are there any guidelines or general consensus towards the size of a 'Get' in terms of lines of code? I have a Get method on a member that has quite easily grown to 30 lines of code here. I'm not sure at what point this should be pulled out into a method. But then I'd only be calling it something like GetMyString and assigning the value to another member and calling it in the constructor anyway.
Is it ever worth doing this?
Is this too subjective for SO?
dcastro's answer is good but could use some expansion:
it doesn't take long to return
That's not quantified; let's quantify that. A property should not take more than, say, ten times longer than it would take to fetch a field.
it doesn't connect to external resources (databases, services, etc)
Those are slow and so typically fall under the first rule, but there is a second aspect to this: failure should be rare or impossible. Property getters should not throw exceptions.
it doesn't have any side effects
I would clarify that to observable side effects. Property getters often have the side effect that they compute the property once and cache it for later, but that's not an observable side effect.
Not only is it bad philosophically for getting a property to have an observable side effect, it can also mess up your debugging experience. Remember, when you look at an object in the debugger by default the debugger calls its property getters automatically and displays the results. If doing so is slow then that slows down debugging. If doing so can fail then your debugging experience gets full of failure messages. And if doing so has a side effect then debugging your program changes how the program works, which might make it very hard to find the bug. You can of course turn automatic property evaluation off, but it is better to design good properties in the first place.
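As a hypothetical sketch of that last point, here is a getter whose only side effect is caching, which callers cannot observe:

public class Report
{
    private string _summary;   // computed once, cached for later reads

    public string Summary
    {
        get
        {
            // Computing and caching the value is a side effect, but not an
            // observable one: every caller sees the same result either way.
            if (_summary == null)
            {
                _summary = BuildSummary();
            }
            return _summary;
        }
    }

    // Stands in for whatever cheap computation the property performs.
    private string BuildSummary()
    {
        return "summary";
    }
}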
It's not really the size that matters (no pun intended).
It's ok to have your logic in a getter as long as
it doesn't take long to return
it doesn't connect to external resources (databases, services, etc)
it doesn't have any side effects
These are only some of the guidelines for proper property usage.
Edit
The above guidelines share one ideal: Property accessors should behave like data access, because that's what users expect.
From the book Effective C# by Bill Wagner:
Properties are methods that can be viewed from the calling code like data. That puts some expectations into your users’ heads. They will see a property access as though it was a data access. After all, that’s what it looks like. Your property accessors should live up to those expectations. Get accessors should not have observable side effects. Set accessors do modify the state, and users should be able to see those changes.
Property accessors also have performance expectations for your users. A property access looks like a data field access. It should not have performance characteristics that are significantly different than a simple data access.
Property accessors should not perform lengthy computations, or make cross-application calls (such as perform database queries), or do other lengthy operations that would be inconsistent with your users’ expectations for a property accessor.
Bonus by Alberto: http://msdn.microsoft.com/en-us/library/vstudio/ms229054%28v=vs.100%29.aspx
It's not necessarily bad, but if it were me it would make me nervous and I'd be looking to try and break it up somehow. A getter is a method so simply pulling the whole thing into a 30+ line method would be a waste of time in my opinion. I'd be trying to chop it up. E.g. if it was a loop with some checks, extracting the checks as methods or some such.
It is a common bad practice to shove a whole bunch of lines into a Get method.
I have something installed in Visual Studio called CodeMaid. It has a feature called CodeMaid Spade which rates each method and gives you a score; the higher the score, the worse your method is. It can be used on properties too. I suggest you give it a try - it helps with formatting, indentation, and a bunch of other good practices as well.
As a general guideline, a method should not have more lines than fit on one screen. If you have to scroll, it's too large. Split it into smaller methods.

Do closures in C# cause code bloat?

Do closures in C# cause code bloat in the generated IL? I was told to avoid lambdas with closure variables because they generate hidden classes in the compiled assembly that store the context for the lambda - a class for every lambda with its closed-over variables. Is this true? Or does the compiler reuse an existing class, like Tuple or some internal class?
Extra classes are only generated when they need to be - when you capture variables other than this. However, this isn't really code bloat in most cases - it's necessary in order to make the delegate work the way you need it to.
In some cases you could write more efficient code yourself, but usually to get a delegate with the same effect, you'd end up writing code which was similar to what the compiler would generate for you... but considerably harder to read.
Most of the time you shouldn't worry about this sort of "bloat" - avoid micro-optimizing for performance to start with - optimize for readability, and measure the performance instead of guessing about it. Then you can attack the bits of code which really matter, and maybe sacrifice a bit of readability for performance there, when you've proved that it's worth it.
(Writing modern C# and deliberately avoiding lambda expressions is like trying to code with one hand tied behind your back. If the person advising you is worried about the "bloat" of closures, you could probably give him a heart attack by showing him the state machine generated for async/await in C# 5...)
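To make the "hidden class" concrete, here is a rough, hand-written approximation of what the compiler does when a lambda captures a local variable (the real generated class has an unspeakable name like <>c__DisplayClass0_0; this is only a sketch):

using System;

public static class ClosureDemo
{
    // A lambda that captures the local variable 'count':
    public static Func<int> MakeCounter()
    {
        int count = 0;
        return () => ++count;
    }

    // Roughly what the compiler generates: the captured local becomes a
    // field on a generated "display class", and the lambda body becomes a
    // method on that class.
    private sealed class DisplayClass
    {
        public int count;
        public int Invoke() { return ++this.count; }
    }

    public static Func<int> MakeCounterExpanded()
    {
        var closure = new DisplayClass { count = 0 };
        return closure.Invoke;
    }
}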
Yes, this is true.
A class that keeps track of the captured variable needs to exist. A Tuple or an existing internal class would not be able to do this for all possible code paths, so such a class needs to be generated in IL specifically for each lambda/closure that captures variables.
Whether this is a problem for your application or not is something for you to determine.

Is it ok to have a class with just properties?

I have a method that is over 700+ lines long. In the beginning of the method, there are around 50 local variables declared. I decided to take the local variables out and put them into a separate class as properties, so I could just declare the class in the method and use the properties throughout it. Is this perfectly fine, or does another data type fit better here, such as a struct? This method was written during classic ASP times.
I have a method that is over 700+ lines long. In the beginning of the method, there are around 50 local variables declared.
Ok, so, the length of that method is also a problem. 700 lines is just too much to keep straight in one normal person's head all at once. When you have to fix a bug in there you end up scrolling up and down and up and down and... you get the idea. It really makes things hard to maintain.
So my answer is: yes, you should likely split your data out into a structure of some sort, assuming that it actually makes sense to do so (i.e., I probably wouldn't create a SomeMethodParameters class). The next thing to do is to split that method out into smaller pieces. You may even find that you no longer need a data structure, as each method will now only have a handful of variables declared for the work it needs to do.
Also, this is subjective, but there is really no good reason to declare all variables at the top of the method. Try declaring them as close to when they are actually used as possible. Again, this just keeps things nice and clean for maintenance in the future. It's much easier to concentrate on one section of code when you can see it all on the screen.
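A hypothetical before/after sketch of that advice: each step of the long method becomes its own small method, and each variable is declared only where it is needed (all names here are made up):

public class OrderProcessor
{
    public void Process(Order order)
    {
        Validate(order);
        var total = CalculateTotal(order);
        Save(order, total);
    }

    private void Validate(Order order) { /* ... */ }

    private decimal CalculateTotal(Order order)
    {
        decimal total = 0;   // declared right where it is used
        foreach (var line in order.Lines)
        {
            total += line.Price * line.Quantity;
        }
        return total;
    }

    private void Save(Order order, decimal total) { /* ... */ }
}

// Hypothetical supporting types for the sketch.
public class Order
{
    public System.Collections.Generic.List<OrderLine> Lines { get; } = new System.Collections.Generic.List<OrderLine>();
}

public class OrderLine
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}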
Hrm... I think you'd probably be better off refactoring the method to not have to operate on so many variables at all. For instance, five methods operating on ten variables each would be infinitely better. As it stands now, it feels like you're simply trying to mask an issue rather than solve it.
I would strongly recommend you take a read through this book and/or any number of web sites concerned with refactoring. http://www.amazon.com/Refactoring-Improving-Design-Existing-Code/dp/0201485672
Although you can't look at 700+ lines in a single method and automatically say that is bad, it does indicate a bad code smell. Methods should be small units of code with a single purpose. This makes it easier for you to maintain or those who come behind you. It can also help you to figure out improvements to your design and make altering your design in the future much easier.
Creating a class just to hold properties without looking at what the overall structure should be is just hiding a problem. That is not to say in this particular instance that is not a perfectly acceptable and correct solution, just that you should make sure you are taking the time to provide a properly thought out design where your classes have the properties, state, and functionality they deserve.
Hope this helps.

In C# (or any language) what is/are your favourite way of removing repetition?

I've just coded a 700-line class. Awful. I hang my head in shame. It's as opposite to DRY as a British summer.
It's full of cut and paste with minor tweaks here and there. This makes it a prime candidate for refactoring. Before I embark on this, I thought I'd ask: when you have lots of repetition, what are the first refactoring opportunities you look for?
For the record, mine are probably using:
Generic classes and methods
Method overloading/chaining.
What are yours?
I like to start refactoring when I need to, rather than the first opportunity that I get. You might say this is somewhat of an agile approach to refactoring. When do I feel I need to? Usually when I feel that the ugly parts of my codes are starting to spread. I think ugliness is okay as long as they are contained, but the moment when they start having the urge to spread, that's when you need to take care of business.
The techniques you use for refactoring should start with the simplest. I would strongly recommend Martin Fowler's book. Combining common code into functions, removing unneeded variables, and other simple techniques get you a lot of mileage. For list operations, I prefer functional programming idioms. That is to say, I use internal iterators, map, filter and reduce (in Python speak; there are corresponding things in Ruby, Lisp and Haskell) whenever I can; this makes the code a lot shorter and more self-contained.
#region
I made a 1,000-line class just one line with it!
In all seriousness, the best way to avoid repetition is to apply the things covered in your list, as well as fully utilizing polymorphism: examine your class and discover what would best be done in a base class, and how different components of it can be broken out as subclasses.
Sometimes by the time you "complete functionality" using copy and paste code, you've come to a point that it is maimed and mangled enough that any attempt at refactoring will actually take much, much longer than refactoring it at the point where it was obvious.
In my personal experience my favorite "way of removing repetition" has been the "Extract Method" functionality of Resharper (although this is also available in vanilla Visual Studio).
Many times I would see repeated code (some legacy app I'm maintaining) not as whole methods but in chunks within completely separate methods. That gives a perfect opportunity to turn those chunks into methods.
Monster classes also tend to reveal that they contain more than one functionality. That in turn becomes an opportunity to separate each distinct functionality into its own (hopefully smaller) class.
I have to reiterate that doing all of these is not a pleasurable experience at all (for me), so I really would rather do it right while it's a small ball of mud, rather than let the big ball of mud roll and then try to fix that.
First of all, I would recommend refactoring much sooner than when you are done with the first version of the class. Anytime you see duplication, eliminate it ASAP. This may take a little longer initially, but I think the results end up being a lot cleaner, and it helps you rethink your code as you go to ensure you are doing things right.
As for my favorite way of removing duplication.... Closures, especially in my favorite language (Ruby). They tend to be a really concise way of taking 2 pieces of code and merging the similarities. Of course (like any "best practice" or tip), this can not be blindly done... I just find them really fun to use when I can use them.
One of the things I do, is try to make small and simple methods that I can see on a single page in my editor (visual studio).
I've learnt from experience that making code simple makes it easier for the compiler to optimise it. The larger the method, the harder the compiler has to work!
I've also recently seen a problem where large methods have caused a memory leak. Basically I had a loop very much like the following:
while (true)
{
    var smallObject = WaitForSomethingToTurnUp();

    // largeObject stays referenced by this local for the whole iteration,
    // including while the next WaitForSomethingToTurnUp() call blocks.
    var largeObject = DoSomethingWithSmallObject(smallObject);
}
I was finding that my application was keeping a large amount of data in memory: while the loop sat waiting in WaitForSomethingToTurnUp(), the largeObject from the previous iteration was no longer needed, but the local variable still referenced it, so the garbage collector could still see it.
I easily solved this by moving the 'DoSomethingWithSmallObject()' call and other associated code to another method.
Also, if you make small methods, your reuse within a class will become significantly higher. I generally try to make sure that none of my methods look like any others!
Hope this helps.
Nick
"cut and paste with minor tweaks here and there" is the kind of code repetition I usually solve with an entirely non-exotic approach- Take the similar chunk of code, extract it out to a seperate method. The little bit that is different in every instance of that block of code, change that to a parameter.
There are also some easy techniques for removing repetitive-looking if/else-if and switch blocks, courtesy of Scott Hanselman:
http://www.hanselman.com/blog/CategoryView.aspx?category=Source+Code&page=2
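Returning to the extract-method approach above, a hypothetical sketch of how two pasted blocks that differed only in a title string collapse into one parameterised method:

public class ReportWriter
{
    // Before: the same header-and-body formatting was pasted twice,
    // once with "Daily" and once with "Weekly".

    // After: one extracted method; the differing bit is now a parameter.
    public string WriteDaily(string body)  { return WriteReport("Daily", body); }
    public string WriteWeekly(string body) { return WriteReport("Weekly", body); }

    private string WriteReport(string title, string body)
    {
        return title + " Report\n=============\n" + body;
    }
}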
I might go something like this:
Create custom (private) types for data structures like Dictionary<string, List<int>> and put all the related logic in there.
Make inner functions or properties that guarantee behaviour. If you're continually checking conditions from a publicly accessible property, then create a private getter method with all of the checking baked in (see the sketch after this list).
Split methods apart that have too much going on. If you can't summarize what a method does succinctly or give it a good name, then start breaking the function apart until you can (even if these "child" functions aren't used anywhere else).
If all else fails, slap a [SuppressMessage("Microsoft.Maintainability", "CA1502:AvoidExcessiveComplexity")] on it and comment why.
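A hypothetical sketch of the "private getter with the checking baked in" idea from the list above: callers read one property instead of repeating the null/empty checks everywhere:

public class Customer
{
    public string Nickname { get; set; }
    public string FullName { get; set; }

    // All the condition checking lives here, once.
    private string DisplayName
    {
        get
        {
            if (!string.IsNullOrWhiteSpace(Nickname)) return Nickname;
            if (!string.IsNullOrWhiteSpace(FullName)) return FullName;
            return "Unknown";
        }
    }

    public string Greeting()
    {
        return "Hello, " + DisplayName;
    }
}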
