Local variables or class fields? - C#

Today I read a post about performance improvements in C# and Java.
I'm still stuck on this one:
19. Do not overuse instance variables
Performance can be improved by using local variables. The code in Example 1 will execute faster than the code in Example 2.
Example 1:
public void loop() {
    int j = 0;
    for (int i = 0; i < 250000; i++) {
        j = j + 1;
    }
}
Example 2:
int i;
public void loop() {
    int j = 0;
    for (i = 0; i < 250000; i++) {
        j = j + 1;
    }
}
Indeed, I do not understand why it should be faster to allocate some memory and release it on every call to the loop function, when I could simply access a field.
It's pure curiosity; I'm not trying to put the variable 'i' in the class's scope :p
Is it true that using local variables is faster? Or only in some cases?

The stack is faster than the heap.
void f()
{
    int x = 123; // <- located on the stack
}

int x; // <- field, located on the heap as part of the object
void f()
{
    x = 123;
}
Do not forget the principle of data locality. Local data is more likely to be cached well in the CPU cache: if the data are close together, they will be loaded into the cache together, and the CPU does not have to fetch them from main memory.

The performance difference comes down to the number of steps required to reach the variable. A local variable's address is known at compile time (it is a known offset on the stack); to access a member, you first load the 'this' reference to get the address of the actual object, and only then can you get the address of the member variable.

In C# another minor difference is the number of generated MSIL instructions (I guess it's similar in Java).
It takes two instructions to load an instance field:
ldarg.0 // load "this" reference onto stack
ldfld MyClass.myField // find the field and load its value
...but it only takes one instruction to load a local variable:
ldloc.0 // load the value at index 0 from the list of local variables
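For reference, here is a minimal C# sketch (the class and member names are made up for illustration) that compiles to roughly those instruction sequences:
class MyClass
{
    private int myField;        // instance field

    public int ReadField()
    {
        // compiles to roughly: ldarg.0, ldfld MyClass.myField, ret
        return myField;
    }

    public int ReadLocal()
    {
        int myLocal = 42;       // local variable in slot 0
        // the read compiles to roughly: ldloc.0 (after the earlier store), then ret
        return myLocal;
    }
}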

Even if there is a difference, it will be almost unmeasurable in cases like this. In the first case there is probably some optimization done at the processor register level, but again:
it's almost irrelevant
and, more importantly, often unpredictable.
In terms of memory it's exactly the same; there is no difference at all.
The first case is generally better: you declare the variable where it is immediately used, which is a commonly recommended good pattern, as it is
easy to understand (scopes of responsibility)
easy to refactor

I tested a calculation with 500,000 iterations, one version using about 20 local variables and one using fields. The local variable test took about 20 milliseconds and the one with fields about 30 milliseconds; a significant performance gain when you use local variables.
Whether the performance difference is relevant depends on the project. In your average business application the gain may not be noticeable and it is better to go for readable/maintainable code, but I am working on sound synthesis software where nano-optimizations like this actually become relevant.
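For what it's worth, here is a rough sketch of how such a comparison could be timed with Stopwatch (the method and field names are illustrative, not the original test). Run it in Release mode and expect the absolute numbers to vary by machine and JIT:
using System;
using System.Diagnostics;

class FieldVsLocalBenchmark
{
    private int _a, _b, _c;                 // fields used by the "field" version

    private int SumWithFields(int n)
    {
        _a = 0; _b = 1; _c = 2;
        for (int i = 0; i < n; i++) { _a += _b + _c; }
        return _a;
    }

    private int SumWithLocals(int n)
    {
        int a = 0, b = 1, c = 2;            // locals used by the "local" version
        for (int i = 0; i < n; i++) { a += b + c; }
        return a;
    }

    static void Main()
    {
        var bench = new FieldVsLocalBenchmark();
        const int iterations = 500000;

        var sw = Stopwatch.StartNew();
        int r1 = bench.SumWithFields(iterations);
        sw.Stop();
        Console.WriteLine("fields: " + sw.Elapsed.TotalMilliseconds + " ms (result " + r1 + ")");

        sw.Restart();
        int r2 = bench.SumWithLocals(iterations);
        sw.Stop();
        Console.WriteLine("locals: " + sw.Elapsed.TotalMilliseconds + " ms (result " + r2 + ")");
    }
}
A serious measurement would also warm up the JIT and repeat each run many times, but the shape of the test is the same.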

I suspect there's very little difference; however, where the variable is a member of the object, each access effectively requires an indirection via this, whereas the local variable does not.
More generally, the object has no need for a member i: it is only used in the context of the loop, so making it local to its use is better in any case.

Related

C# Declaring local vs out of scope function variables and performance vs readability/robustness

I've been wondering how to deal with this issue for some time now and I can't find an elegant solution to it. I think some examples are the easiest way to understand the problem.
Let's say we have this code within a class and x and y need to start at 0 when their functions get called:
// here we have x as a local variable
private void functionX() {
    int x = 0;
    // ...
    // do stuff with x
    // ...
}

// here we have y as an out-of-function-scope variable
int y;
private void functionY() {
    y = 0;
    // ...
    // do stuff with y
    // ...
}

public void update()
{
    // this is slower because x gets a new instance every time functionX gets called
    for (int i = 0; i < 100000; i++) {
        functionX();
    }
    // this is faster because y gets only one instance before the function ever gets called
    for (int i = 0; i < 100000; i++) {
        functionY();
    }
}
I have tested this code, and using the out-of-function-scope variable instead of the local variable yields better performance (albeit not by a lot in this example, but there is a performance increase nonetheless). The downside is that you have to declare the variable outside of function scope in order to get this gain, which makes your code messier and more error prone.
This is only a very simple example, but what happens if you have thousands of lines of code with tons of these kinds of function variables, where the performance gain from having them out of function scope cannot be ignored, but the mess you make from having all those variables out of scope cannot be ignored either? Is there a solution to this problem, or do you just have to make a choice between performance and readability/robustness?
P.S. Making either x or y a static variable inside its function does not work either when you have to construct multiple objects from the class they're in (all your objects would share a single instance of x and y for the whole program runtime).
Edit: simplified the code even more
For simple variables like int/string/double there is almost no difference unless you are doing something 1,000,000 times a second. If new Element() is involved, it's a different story; an example you can test is Vector3 in C# or any other language.
Example code in C#
// slowest
void functionX() {
    Vector3 newV3_0 = new Vector3();
}

// a bit faster than X
Vector3 newV3_1;
void functionY() {
    newV3_1 = new Vector3();
}

// the fastest solution
Vector3 newV3_2 = Vector3.zero;
void functionZ() {
    // do things with the vector
}

// a bit slower than Z
Vector3 newV3_3;
void functionQ() {
    newV3_3 = Vector3.zero;
}
If you loop each of them separately, say 1,000,000 times, functionZ will perform the fastest, because nothing is done inside it; everything is prepared outside. But there is also a downside: you need to remember not to use that variable anywhere else.
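The same allocate-per-call versus reuse trade-off can be shown without Unity's Vector3; here is a self-contained sketch where ExpensiveElement is a made-up stand-in for "new Element()" above:
using System;
using System.Diagnostics;

class ExpensiveElement              // stand-in for an allocated "Element"
{
    public double X, Y, Z;
}

class ReuseDemo
{
    private ExpensiveElement _shared = new ExpensiveElement();   // allocated once

    void AllocatesEveryCall()
    {
        var e = new ExpensiveElement();   // one heap allocation per call
        e.X = 1;
    }

    void ReusesSharedInstance()
    {
        _shared.X = 1;                    // no allocation, but the instance is shared state
    }

    static void Main()
    {
        var d = new ReuseDemo();

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++) d.AllocatesEveryCall();
        Console.WriteLine("allocate per call: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        for (int i = 0; i < 1000000; i++) d.ReusesSharedInstance();
        Console.WriteLine("reuse shared:      " + sw.ElapsedMilliseconds + " ms");
    }
}
The reuse version avoids the allocations (and the resulting GC pressure), at the cost of the shared-state caveat mentioned above.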
"I have tested this code and using the out of function scope variable instead of using the local variable yields better performance (albeit not by a lot in this example but there's a performance increase nonetheless)." I think this calls for a performance rant:
https://ericlippert.com/2012/12/17/performance-rant/ While you did part 1 (hopefully in way that gives meaningfull results), 2-6 still apply for your case.
I consider readability, then robustness the most important things. A lot of micro optimisations (dead code dection, making functions internal, adding or removing temporary variables) can be left to the JiT. If you somehow actually do need that difference, then there is a 99% case you are doing Realtime Programming. And for all it's power, realtime programming is not a strenght of the .NET Framework. Just having a Garbage Collector is usually a disqualifier.

What does it mean when constant local variables are stored in the assembly data region, while non-constant local variables are stored on the stack?

I'm reading through this wikibook and don't understand what this means within the local variables section. An ELI5 would be helpful <3
Fun fact - the stack is an implementation detail that is mostly irrelevant. Most references to a stack within the C# spec are to a toy Stack<T> class rather than to the execution stack.
Let's take a toy method:
public int Compute(int value)
{
    int b = value / 2;
    if (b > 10) {
        return b + Compute(b);
    }
    return b - 4;
}
This Compute method needs somewhere where it can store the contents of the b variable. Importantly, this method may recurse1, that is, it may call itself, possibly repeatedly, and it's important that each execution of the method retains its own copy of the b variable. If there was just a single storage space for the b variable in memory, then recursion wouldn't be possible.2
So, the storage for variables has to be dynamically provided - each time a method is entered, new storage has to be made available for the local variables. And one particularly common way to provide such storage is via a "stack frame" or "activation record". How much storage is actually required is subject to a complex discussion involving optimization and variable lifetimes, but in the trivial case we say that the stack frame contains enough storage space for each local variable.
However, const locals are special variables - because they don't vary. As an optimization, then, we can just store one copy of this variable somewhere, and every instance of Compute that is running can just reference that single copy.
1Here, it's trivially recursive, but it need not be so - the recursion could be hidden through a number of intermediate method calls to other methods.
2Nor could we allow the method to be called by multiple threads.
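As a small illustration of the const-local point above (the Threshold name is added here for the example), a const local in the same method needs no slot in any stack frame; the C# compiler simply embeds the constant value everywhere it is used, for example as an ldc.i4.s 10 instruction:
public int Compute(int value)
{
    const int Threshold = 10;   // const local: never varies, so no per-activation storage
    int b = value / 2;          // ordinary local: each activation gets its own copy
    if (b > Threshold)
    {
        return b + Compute(b);
    }
    return b - 4;
}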

What are the downsides of declaring variables just before their first use?

One of my programming philosophies is to define a variable just before it is first used. Take the variable 'x' as an example; I usually don't write code like this:
int total = 0;
int x;
for (int i = 0; i < 100000; i++)
{
    x = i;
    total += x;
}
Instead, I prefer this:
var total = 0;
for (int i = 0; i < 100000; i++)
{
    var x = i;
    total += x;
}
This is just example code; don't worry about its real meaning.
What downsides does the second way have? Performance?
Don't bother yourself with performance unless you really, really need to (hint: 99% of the time you don't).
My usual philosophy (which has been confirmed by books like "The Art of Readable Code") is to declare variables in the smallest scope possible. The reason being that in terms of readability and code comprehension the less variables you have to think about at any one time the better. And defining variables in a smaller scope definitely helps with that.
Also, oftentimes, if a compiler is able to determine that (in the case of your example) moving the variable outside of the for loop, to save creating/destroying it every iteration, won't change the outcome but will help performance, it will do it for you. And that's another reason not to bother with performance: the compiler is usually smarter about it than we are.
There are no performance implications, only scope ones. You should always define variables in the innermost scope possible. This improves the readability of your program.
The only "downside" is that the second version need compiler support. Old compilers needed to know all the variables the function(or a scope inside it) will be using, so you had to declare the variables in a special section(Pascal) or in the beginning of the block(C). This is not really a problem nowadays - C is the only language that does not support declaring variables anywhere and still being widely used.
The problem is that C is the most common first-language they teach in schools and universities. They teach you C, and force you to declare all variables at the beginning of the block. Then they teach you a more modern language, and because you are already used to declaring all variables at the beginning, they need to teach you to not do it.
If your first language allows you to declare a variable anywhere in the function's body, you would instinctively declare it just before you use it, and they wouldn't need to tell you that declaring variables beforehand is bad just like they don't need to tell you that smashing your computer with a 5 Kilo hammer is bad.
I recommend, like most, keeping variables within an inner scope, but exceptions occur, and I think that is what you are seeking.
C++ potentially has expensive constructor/destructor time that would be best paid for once, rather than N times. Compare
void TestPrimalityOfNUnsignedLongs(int n) {
    PrimeList List; // Makes a list of all unsigned long primes
    for (int i = 0; i < n; i++) {
        unsigned long x = random_ul();
        if (List.IsAPrime(x)) DoThis();
    }
}

or

void TestPrimalityOfNUnsignedLongs(int n) {
    for (int i = 0; i < n; i++) {
        PrimeList List; // Makes a list of all unsigned long primes
        unsigned long x = random_ul();
        if (List.IsAPrime(x)) DoThis();
    }
}
Certainly, I could put List inside the for loop, but at a significant run time cost.
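The same trade-off exists in C#. A common real-world case is a compiled Regex, which is expensive to construct but cheap to reuse (the pattern and input data below are just illustrative):
using System;
using System.Text.RegularExpressions;

class HoistExpensiveConstruction
{
    // Constructed once: compiling the regex is the expensive part.
    private static readonly Regex Digits = new Regex(@"^\d+$", RegexOptions.Compiled);

    static int CountNumericGood(string[] inputs)
    {
        int count = 0;
        foreach (var s in inputs)
        {
            if (Digits.IsMatch(s)) count++;        // reuse the single instance
        }
        return count;
    }

    static int CountNumericBad(string[] inputs)
    {
        int count = 0;
        foreach (var s in inputs)
        {
            // Paying the construction cost on every iteration.
            var digits = new Regex(@"^\d+$", RegexOptions.Compiled);
            if (digits.IsMatch(s)) count++;
        }
        return count;
    }

    static void Main()
    {
        var data = new[] { "123", "abc", "42" };
        Console.WriteLine(CountNumericGood(data));   // 2
        Console.WriteLine(CountNumericBad(data));    // 2, but far slower on large inputs
    }
}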
Having all variables of the same scope in the same location in the code makes it easier to see what variables you have and what data types they are. You don't have to look through the entire code to find them.
You have different scopes for the x variable. In the second example, you won't be able to use the x variable outside the loop.

Is it more efficient to use the keyword this when accessing instance variables?

The this keyword is optional when accessing instance fields, properties, and methods in languages like C# and Java.
I've been doing some best practice research on various languages lately, and have noticed many places recommend creating a local reference to instance fields within methods because it's more efficient. The latest mention was in an Android tutorial.
It seems to me that if you specify this._obj, it should be just as efficient as a local variable. Is this correct or is it just as 'costly' as not using this?
Does the answer change from the Android Dalvik VM, to standard Java, to C#?
public class test {
    private Object[] _obj;

    protected void myMethod() {
        Object[] obj = _obj;
        // Is there an appreciable difference between
        for (int i = 0; i < obj.length; i++) {
            // do stuff
        }
        // and this?
        for (int i = 0; i < this._obj.length; i++) {
            // do stuff
        }
    }
}
For at least standard Java, there is a small, small difference.
I modified your example a little to this:
public class test {
    private Object[] _obj;

    protected void myMethodLocal() {
        Object[] obj = _obj;
        // Is there an appreciable difference between
        for (int i = 0; i < obj.length; i++) {
            // do stuff
        }
    }

    protected void myMethodMember() {
        // and this?
        for (int i = 0; i < this._obj.length; i++) {
            // do stuff
        }
    }
}
So myMethodLocal() will cache _obj into a local variable, while myMethodMember() uses the class member _obj.
Now, let's decompile this (using javap):
protected void myMethodLocal();
  Code:
    0:  aload_0
    1:  getfield #2; //Field _obj:[Ljava/lang/Object;
    4:  astore_1
    5:  iconst_0
    6:  istore_2
    7:  iload_2
    8:  aload_1
    9:  arraylength
    10: if_icmpge 19
    13: iinc 2, 1
    16: goto 7
    19: return

protected void myMethodMember();
  Code:
    0:  iconst_0
    1:  istore_1
    2:  iload_1
    3:  aload_0
    4:  getfield #2; //Field _obj:[Ljava/lang/Object;
    7:  arraylength
    8:  if_icmpge 17
    11: iinc 1, 1
    14: goto 2
    17: return
Without going into details, the latter example has to access the _obj field on every loop iteration, while the first example has it cached in a local reference and only needs to access that local reference.
What does this equate to in speed difference?
Not much.
While the difference between accessing a local reference and a class-reference means a lot more in a language like Python, for Java, you really don't need to worry. It's much more important to keep your code readable and maintainable than to fret over details like that.
(Plus, the above bytecode doesn't take into account what the JIT compiler might do, anyway).
If you get the instance field by a function, like getObj(), I would plug that into a variable, so you don't need to keep calling getObj() each time you want to use the same field.
Also, just as a minor note, you should probably call your class Test instead of test. Java tends to favor Upper Camel Case for class names.
No, there is absolutely no change in efficiency. Remember that in many languages, several equivalent expressions will reduce down to identical statements in the underlying bytecode or assembly or whatever the higher level language translates into.
The answer is uniform across the languages and VMs you mention.
Use it when necessary, like when a method parameter has the same name as an instance variable.
Unless CPU cycles (or memory, etc.) are a top priority, value clarity above less expressive but more efficient language syntax.
The this keyword is used for readability and, most importantly, for making variable names unambiguous. It has no effect on performance whatsoever.
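A tiny illustration of the disambiguation case mentioned above (the Person class and its field are made up for the example):
class Person
{
    private string name;

    public Person(string name)
    {
        // Without "this", "name = name" would just assign the parameter to itself.
        this.name = name;
    }
}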
On a modern PC this may not make any difference, irrespective of the language, due to caching - if the memory location is cached on the on-chip die then it won't make any difference.
I suspect using a local variable involves only a single reference (the value is on the stack), while using a member variable involves two references (a reference to this, which is on the stack, and then a reference to the variable itself, which is on the heap).
Depending on the system, either heap or stack access could be faster.
But as Jonathon said, unless speed is very important, don't bother yourself with this. It will only reduce readability for a negligible performance gain.
In theory, no. Accessing "this._obj.Length" ends up generating code like:
mov eax, [ecx + offset_of_obj]
mov eax, [eax + offset_of_length]
where as "obj.length" ends up generating code like:
mov eax, [esp + offset_of_obj]
mov eax, [eax + offset_of_length]
In practice, maybe, but probably not. With virtually every x86 calling convention there are only three scratch registers: "eax", "ecx", and "edx". All other registers must be saved on the stack before they can be updated. If you have a long function and you don't need to access "this", the ecx register could be repurposed to hold temporary variables and could thus reduce the amount of stack spilling that needs to happen. But you have to push new values onto the stack in order to create the locals, so the scenarios where it would make an improvement are limited. I would ignore whoever told you that.
Some of these answers haven't answered the actual question and others are wrong. Accessing a member variable via this.obj requires dereferencing an element on the stack. Accessing a local copy of that reference eliminates the dereference step. So in theory and absent HotSpot the latter has to be more efficient. However unless you are timing nuclear reactions or something the difference will be minimal, and I would deprecate the practice any time I saw it in my shop.

DataTable Loop Performance Comparison

Which of the following has the best performance?
I have seen method two implemented in JavaScript with huge performance gains; however, I was unable to measure any gain in C# and was wondering if the compiler already does method 2 even when the code is written like method 1.
The theory behind method 2 is that the code doesn't have to access DataTable.Rows.Count on every iteration; it can simply access the int c.
Method 1
for (int i = 0; i < DataTable.Rows.Count; i++) {
    // Do Something
}

Method 2
for (int i = 0, c = DataTable.Rows.Count; i < c; i++) {
    // Do Something
}
No, it can't do that, since there is no way to express that a value is constant over time.
If the compiler were to do that, there would have to be a guarantee from the code returning the value that the value is constant and won't change for the duration of the loop.
But in this case you're free to add new rows to the data table as part of your loop, so it's up to you to make that guarantee, in the way you have done it.
So, in short, the compiler will not do that optimization if the end index is anything other than a simple variable.
In the case of a variable, where the compiler can just look at the loop code and see that this particular variable is not changed, it might load the value into a register before starting the loop, but any performance gain from this would most likely be negligible, unless your loop body is empty.
Conclusion: if you know, or are willing to accept, that the end loop index is constant for the duration of the loop, place it in a variable.
Edit: Re-read your post, and yes, you might see negligible performance gains for your two cases as well, because the JITter optimizes the code. The JITter might optimize your end-index read into a direct access to the variable inside the data table that contains the row count, and a memory read isn't all that expensive anyway. If, on the other hand, reading that property were a very expensive operation, you'd see a more noticeable difference.
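To make the conclusion concrete, here is a small self-contained sketch of the two forms (the table layout is illustrative); hoisting Rows.Count into a local is exactly the guarantee described above that the row count won't change during the loop:
using System;
using System.Data;

class HoistRowCount
{
    static void Main()
    {
        var table = new DataTable();
        table.Columns.Add("Value", typeof(int));
        for (int i = 0; i < 5; i++) table.Rows.Add(i);

        // Property read on every iteration: the compiler cannot assume Rows.Count is constant,
        // because the loop body could add or remove rows.
        for (int i = 0; i < table.Rows.Count; i++)
        {
            Console.WriteLine(table.Rows[i]["Value"]);
        }

        // Hoisted into a local: you are asserting the row count will not change during the loop.
        int count = table.Rows.Count;
        for (int i = 0; i < count; i++)
        {
            Console.WriteLine(table.Rows[i]["Value"]);
        }
    }
}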
