Why not use `dynamic` instead of reflection when the property is known? - c#

This question is similar to this one, but assuming that we know the member name at compile time.
Assuming that we have a class
public class MyClass
{
    public string TheProperty { get; set; }
}
and in another method we want to set the TheProperty member of an instance of that class, but we don't know the type of the instance at compile time; we only know the property name at compile time.
So, as I see it, there are two ways to do that now:
object o = new MyClass(); // For simplicity.
o.GetType().GetProperty("TheProperty").SetValue(o, "bar"); // (1)
((dynamic) o).TheProperty = "bar"; // (2)
I measured this test case using the System.Diagnostics.Stopwatch class and found that reflection took 475 ticks while the way using dynamic took 0 ticks, making it about as fast as a direct call to new MyClass().TheProperty = "bar".
Since I have almost never seen the second way, I am a little confused and my questions now are:
Is there a flaw in my reasoning?
Should the second way be preferred over the first, or the other way around? I don't see any disadvantages of using the second way; both (1) and (2) would throw exceptions if the property were not found, wouldn't they?
Why does the second way seem to be used so rarely even though it appears to be faster?

(...)reflection took 475 ticks and the way using dynamic took 0 ticks(...)
That is simply false. The problem is that you are not understanding how dynamic works. I will assume you are setting up the benchmark correctly:
1. Running in Release mode with optimizations turned on and without the debugger.
2. You are jitting the methods before actually measuring times.
And here comes the key part you are probably not doing:
3. Jit the dynamic test without actually performing the dynamic runtime binding.
And why is step 3 important? Because the runtime will cache the dynamic call site and reuse it! So in a naive benchmark implementation, if you are doing everything else right, you will incur the cost of the initial dynamic binding while jitting the method, and therefore you won't measure it.
Run the following benchmark:
public static void Main(string[] args)
{
    var repetitions = 1;
    var isWarmup = true;
    var foo = new Foo();

    // Warmup
    SetPropertyWithDynamic(foo, isWarmup); // JIT the method without caching the dynamic call
    SetPropertyWithReflection(foo);        // JIT the method
    var s = ((dynamic)"Hello").Substring(0, 2); // Start up the runtime compiler

    for (var test = 0; test < 10; test++)
    {
        Console.WriteLine($"Test #{test}");

        var watch = Stopwatch.StartNew();
        for (var i = 0; i < repetitions; i++)
        {
            SetPropertyWithDynamic(foo);
        }
        watch.Stop();
        Console.WriteLine($"Dynamic benchmark: {watch.ElapsedTicks}");

        watch = Stopwatch.StartNew();
        for (var i = 0; i < repetitions; i++)
        {
            SetPropertyWithReflection(foo);
        }
        watch.Stop();
        Console.WriteLine($"Reflection benchmark: {watch.ElapsedTicks}");
    }

    Console.WriteLine(foo);
    Console.ReadLine();
}

static void SetPropertyWithDynamic(object o, bool isWarmup = false)
{
    if (isWarmup)
        return;
    ((dynamic)o).TheProperty = 1;
}

static void SetPropertyWithReflection(object o)
{
    o.GetType().GetProperty("TheProperty").SetValue(o, 1);
}

public class Foo
{
    public int TheProperty { get; set; }
    public override string ToString() => $"Foo: {TheProperty}";
}
Spot the difference between the first run and the subsequent ones?
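If you do go the reflection route in real code, note that its cost can be amortized much like the dynamic call-site cache: resolve the property once and compile a reusable setter delegate. A minimal sketch (the PropertySetter helper is illustrative, not part of the question's code):

using System;
using System.Linq.Expressions;
using System.Reflection;

static class PropertySetter // illustrative helper
{
    // Resolves the property once and compiles a reusable setter delegate.
    public static Action<object, object> Build(Type type, string propertyName)
    {
        PropertyInfo prop = type.GetProperty(propertyName);
        ParameterExpression target = Expression.Parameter(typeof(object), "target");
        ParameterExpression value = Expression.Parameter(typeof(object), "value");
        Expression assign = Expression.Assign(
            Expression.Property(Expression.Convert(target, type), prop),
            Expression.Convert(value, prop.PropertyType));
        return Expression.Lambda<Action<object, object>>(assign, target, value).Compile();
    }
}

// Built once, reused many times:
// var setter = PropertySetter.Build(foo.GetType(), "TheProperty");
// setter(foo, 1);

The compilation is expensive up front, but each subsequent call is close to a direct property set, which is the same amortization the dynamic call-site cache gives you after the first call.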

Related

C# LINQ performance when extension method called inside where clause

I have a LINQ query like this
public static bool CheckIdExists(int searchId)
{
    return itemCollection.Any(item => item.Id.Equals(searchId.ConvertToString()));
}
item.Id is a string while searchId is an int. .ConvertToString() is an extension method which converts an int to a string.
Code for ConvertToString:
public static string ConvertToString(this object input)
{
    return Convert.ToString(input, CultureInfo.InvariantCulture);
}
Now my question is: does searchId.ConvertToString() get executed for each item in itemCollection?
Does computing searchId.ConvertToString() beforehand and calling the method like below improve performance?
public static bool CheckIdExists(int searchId)
{
    string sId = searchId.ConvertToString();
    return itemCollection.Any(item => item.Id.Equals(sId));
}
How can I debug these two scenarios and observe their performance?
I re-created the scenarios you talked about in your question, tried the following code, and got the output below. This is how you can debug it:
static List<string> itemCollection = new List<string>();

static void Main(string[] args)
{
    for (int i = 0; i < 10000000; i++)
    {
        itemCollection.Add(i.ToString());
    }

    var watch = new Stopwatch();
    watch.Start();
    Console.WriteLine(CheckIdExists(580748));
    watch.Stop();
    Console.WriteLine($"Took {watch.ElapsedMilliseconds}");

    var watch1 = new Stopwatch();
    watch1.Start();
    Console.WriteLine(CheckIdExists1(580748));
    watch1.Stop();
    Console.WriteLine($"Took {watch1.ElapsedMilliseconds}");

    Console.ReadLine();
}

public static bool CheckIdExists(int searchId)
{
    return itemCollection.Any(item => item.Equals(ConvertToString(searchId)));
}

public static bool CheckIdExists1(int searchId)
{
    string sId = ConvertToString(searchId);
    return itemCollection.Any(item => item.Equals(sId));
}

public static string ConvertToString(int input)
{
    return Convert.ToString(input, CultureInfo.InvariantCulture);
}
OUTPUT:
True
Took 170
True
Took 11
How long it takes is the ultimate guide. You can create a Stopwatch to log the performance of any code; just use ElapsedMilliseconds to see how long it took. For very short operations, I suggest using very long loops to get a more accurate measurement.
var watch = new Stopwatch();
watch.Start();
// CODE HERE (IDEALLY IN A LONG LOOP)
watch.Stop();
Debug.WriteLine($"Took {watch.ElapsedMilliseconds}");
Yes, it should be faster to get the string once. But I guess the compiler optimizes that for you (I only suspect this and don't have anything to back it up; I just remember that compilers are very good at detecting things that do not change).
And no, it's not computed for every item, since the LINQ method Any does not necessarily check all items. It returns true for the first matching item; the only scenario in which it checks all items is when the lambda returns true for none of them.
If you want to test the speed difference, make sure to have more data; otherwise the difference may be too small.
Just do:
itemCollection = Enumerable.Range(0, 1000).SelectMany(x => itemCollection).ToList(); // or array, or whatever the type of collection you have
Then measure the times with Stopwatch, just like @RobSedgwick said.
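You can also verify the per-item evaluation directly by counting how often the lambda body runs (a sketch; the counter is illustrative and reuses the itemCollection and ConvertToString from the code above):

static int conversionCalls; // illustrative counter

public static bool CheckIdExistsCounting(int searchId)
{
    return itemCollection.Any(item =>
    {
        conversionCalls++; // runs once per item that Any actually examines
        return item.Equals(ConvertToString(searchId));
    });
}

// After a call, conversionCalls equals the number of items Any examined:
// the conversion is repeated for each inspected item, and Any stops at
// the first match instead of scanning the whole list.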
I think you have two options:
1. Add logging and write DateTime.Now into the log.
2. Use the Diagnostic Tools tab.
Hopefully this helps.

Field vs Property. Optimisation of performance

Please note this question is about performance only. Let's skip design guidelines, philosophy, compatibility, portability and anything that is not related to pure performance. Thank you.
Now to the question. I always assumed that because C# getters/setters are really methods in disguise, reading a public field must be faster than calling a getter.
So to make sure, I did a test (the code below). However, this test only produces the expected results (i.e. fields are about 34% faster than getters) if you run it from inside Visual Studio.
Once you run it from the command line it shows pretty much the same timing...
The only explanation could be that the CLR does additional optimisation (correct me if I am wrong here).
I do not believe that in a real application, where those properties are used in much more sophisticated ways, they will be optimised in the same way.
Please help me prove or disprove the idea that in real life properties are slower than fields.
The question is: how should I modify the test classes to make the CLR change its behaviour so the public field outperforms the getters? OR show me that any property without internal logic will perform the same as a field (at least on the getter).
EDIT: I am only talking about Release x64 build.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
using System.Runtime.InteropServices;

namespace PropertyVsField
{
    class Program
    {
        static int LEN = 20000000;

        static void Main(string[] args)
        {
            List<A> a = new List<A>(LEN);
            List<B> b = new List<B>(LEN);
            Random r = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < LEN; i++)
            {
                double p = r.NextDouble();
                a.Add(new A() { P = p });
                b.Add(new B() { P = p });
            }

            Stopwatch sw = new Stopwatch();
            double d = 0.0;

            sw.Restart();
            for (int i = 0; i < LEN; i++)
            {
                d += a[i].P;
            }
            sw.Stop();
            Console.WriteLine("auto getter. {0}. {1}.", sw.ElapsedTicks, d);

            sw.Restart();
            for (int i = 0; i < LEN; i++)
            {
                d += b[i].P;
            }
            sw.Stop();
            Console.WriteLine("      field. {0}. {1}.", sw.ElapsedTicks, d);

            Console.ReadLine();
        }
    }

    class A
    {
        public double P { get; set; }
    }

    class B
    {
        public double P;
    }
}
As others have already mentioned, the getters are inlined.
If you want to avoid inlining, you have to
replace the automatic properties with manual ones:
class A
{
    private double p;
    public double P
    {
        get { return p; }
        set { p = value; }
    }
}
and tell the compiler not to inline the getter (or both, if you feel like it):
[MethodImpl(MethodImplOptions.NoInlining)]
get { return p; }
Note that the first change does not make a difference in performance, whereas the second change shows a clear method call overhead:
Manual properties:
auto getter. 519005. 10000971,0237547.
field. 514235. 20001942,0475098.
No inlining of the getter:
auto getter. 785997. 10000476,0385552.
field. 531552. 20000952,077111.
Have a look at the blog article Properties vs Fields – Why Does it Matter? by Jonathan Aneja, one of the VB team members, on MSDN. He outlines the properties-versus-fields argument and explains trivial properties as follows:
One argument I’ve heard for using fields over properties is that
“fields are faster”, but for trivial properties that’s actually not
true, as the CLR’s Just-In-Time (JIT) compiler will inline the
property access and generate code that’s as efficient as accessing a
field directly.
The JIT will inline any method (not just a getter) that its internal metrics determine will be faster inlined. Given that a standard property is essentially return _Property;, it will be inlined in every case.
The reason you are seeing different behavior is that in Debug mode with a debugger attached, the JIT is significantly handicapped, to ensure that any stack locations match what you would expect from the code.
You are also forgetting the number one rule of performance: testing beats thinking. For instance, even though quicksort is asymptotically faster than insertion sort, insertion sort is actually faster for extremely small inputs.
The only explanation could be that the CLR does additional optimisation (correct me if I am wrong here).
Yes, it is called inlining. It is done by the JIT compiler at the machine-code level. As the getter/setter are trivial (i.e. very simple code), the method calls are eliminated and the getter/setter body is emitted directly into the surrounding code.
This does not happen in debug mode in order to support debugging (i.e. the ability to set a breakpoint in a getter or setter).
In Visual Studio there is no way to do that with the debugger attached. Compile in Release, run without an attached debugger and you will get the full optimisation.
I do not believe that in a real application, where those properties are used in much more sophisticated ways, they will be optimised in the same way.
The world is full of illusions that are wrong. The properties will still be optimised, as they are still trivial (i.e. simple code), so they are inlined.
It should be noted that it's possible to see the "real" performance in Visual Studio.
Compile in Release mode with Optimisations enabled.
Go to Debug -> Options and Settings, and uncheck "Suppress JIT optimization on module load (Managed only)".
Optionally, uncheck "Enable Just My Code" otherwise you may not be able to step in the code.
Now the jitted assembly will be the same even with the debugger attached, allowing you to step through the optimised disassembly if you so please. This is essential to understanding how the CLR optimises code.
After reading all the answers, I decided to run a benchmark with this code:
[TestMethod]
public void TestFieldVsProperty()
{
    const int COUNT = 0x7fffffff;
    A a1 = new A();
    A a2 = new A();
    B b1 = new B();
    B b2 = new B();
    C c1 = new C();
    C c2 = new C();
    D d1 = new D();
    D d2 = new D();

    Stopwatch sw = new Stopwatch();
    long t1, t2, t3, t4;

    sw.Restart();
    for (int i = COUNT - 1; i >= 0; i--)
    {
        a1.P = a2.P;
    }
    sw.Stop();
    t1 = sw.ElapsedTicks;

    sw.Restart();
    for (int i = COUNT - 1; i >= 0; i--)
    {
        b1.P = b2.P;
    }
    sw.Stop();
    t2 = sw.ElapsedTicks;

    sw.Restart();
    for (int i = COUNT - 1; i >= 0; i--)
    {
        c1.P = c2.P;
    }
    sw.Stop();
    t3 = sw.ElapsedTicks;

    sw.Restart();
    for (int i = COUNT - 1; i >= 0; i--)
    {
        d1.P = d2.P;
    }
    sw.Stop();
    t4 = sw.ElapsedTicks;

    long max = Math.Max(Math.Max(t1, t2), Math.Max(t3, t4));
    Console.WriteLine($"auto: {t1}, {max * 100d / t1:00.00}%.");
    Console.WriteLine($"field: {t2}, {max * 100d / t2:00.00}%.");
    Console.WriteLine($"manual: {t3}, {max * 100d / t3:00.00}%.");
    Console.WriteLine($"no inlining: {t4}, {max * 100d / t4:00.00}%.");
}

class A
{
    public double P { get; set; }
}

class B
{
    public double P;
}

class C
{
    private double p;
    public double P
    {
        get => p;
        set => p = value;
    }
}

class D
{
    public double P
    {
        [MethodImpl(MethodImplOptions.NoInlining)]
        get;
        [MethodImpl(MethodImplOptions.NoInlining)]
        set;
    }
}
When testing in Debug mode, I got this result:
auto: 35142496, 100.78%.
field: 10451823, 338.87%.
manual: 35183121, 100.67%.
no inlining: 35417844, 100.00%.
but when switching to Release mode, the result is different:
auto: 2161291, 873.91%.
field: 2886444, 654.36%.
manual: 2252287, 838.60%.
no inlining: 18887768, 100.00%.
It seems the auto property is the better way.

which is faster: for or foreach [duplicate]

Possible Duplicate:
For vs Foreach loop in C#
Let's say I have a collection:
List<Foo> list = new List<Foo>();
Now which of the following loops would run faster, and why:
for (int i = 0; i < list.Count; i++)
or
foreach (Foo foo in list)
It depends:
For the for loop, it depends on how long it takes to evaluate list.Count (or whatever value is provided in the condition) and how long it takes to reference an item at a specific index.
For the foreach loop, it depends on how long it takes the enumerator to return a value.
For your example above, there should not be any difference, because you are using the standard List class.
Who cares? Do you have a performance problem? If so, have you measured and determined that this is the slowest part of your app?
foreach is faster to type for me :) and easier to read.
Well, you can find that out using the System.Diagnostics.Stopwatch class.
However, the point is: why do you need to think about it? You should first consider which one is more readable and use that one, instead of worrying about performance in this case.
The golden rule is: always write readable code, and optimize only when you find a performance problem.
Try this: for and foreach take almost the same time, but the .ForEach() method is faster.
class Program
{
    static void Main(string[] args)
    {
        // Add values
        List<objClass> lst1 = new List<objClass>();
        for (int i = 0; i < 9000000; i++)
        {
            lst1.Add(new objClass("1", ""));
        }

        // For loop
        DateTime startTime = DateTime.Now;
        for (int i = 0; i < 9000000; i++)
        {
            lst1[i]._s1 = lst1[i]._s2;
        }
        Console.WriteLine((DateTime.Now - startTime).ToString());

        // ForEach action
        startTime = DateTime.Now;
        lst1.ForEach(s => { s._s1 = s._s2; });
        Console.WriteLine((DateTime.Now - startTime).ToString());

        // foreach normal loop
        startTime = DateTime.Now;
        foreach (objClass s in lst1)
        {
            s._s1 = s._s2;
        }
        Console.WriteLine((DateTime.Now - startTime).ToString());
    }

    public class objClass
    {
        public string _s1 { get; set; }
        public string _s2 { get; set; }
        public objClass(string _s1, string _s2)
        {
            this._s1 = _s1;
            this._s2 = _s2;
        }
    }
}
If you need to use the index of the current item, use the for loop; in that case there is no faster alternative, only the proper one.
I don't have a source to back this up, but I believe they will be almost if not exactly identical due to the way the compiler does optimizations such as loop unrolling. If there is a difference, it's likely on the order of single or tens of CPU cycles, which is as good as nothing for 99.9999% of applications.
In general, foreach tends to be considered 'syntactic sugar': it's nice to have, but doesn't actually do much besides change the way you word a particular piece of code.
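Concretely, for the List<Foo> in the question the compiler lowers the foreach to roughly the following (a simplified sketch; the real expansion uses the struct List<Foo>.Enumerator and avoids the boxing shown here):

List<Foo>.Enumerator e = list.GetEnumerator();
try
{
    while (e.MoveNext())
    {
        Foo foo = e.Current;
        // loop body goes here
    }
}
finally
{
    ((IDisposable)e).Dispose(); // sketch: the compiler emits a non-boxing Dispose call
}

So the cost difference against the for loop comes down to MoveNext()/Current versus an index check plus list[i], which for List<T> is negligible.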

Performance of delegate and method group

I was investigating the performance hit of creating CacheDependency objects, so I wrote a very simple test program as follows:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Web.Caching;

namespace Test
{
    internal class Program
    {
        private static readonly string[] keys = new[] {"Abc"};
        private static readonly int MaxIteration = 10000000;

        private static void Main(string[] args)
        {
            Debug.Print("first set");
            test2();
            test3();
            test4();
            test5();
            test6();
            test7();
            Debug.Print("second set");
            test7();
            test6();
            test5();
            test4();
            test3();
            test2();
        }

        private static void test2()
        {
            DateTime start = DateTime.Now;
            var list = new List<CacheDependency>();
            for (int i = 0; i < MaxIteration; i++)
            {
                list.Add(new CacheDependency(null, keys));
            }
            Debug.Print("test2 Time: " + (DateTime.Now - start));
        }

        private static void test3()
        {
            DateTime start = DateTime.Now;
            var list = new List<Func<CacheDependency>>();
            for (int i = 0; i < MaxIteration; i++)
            {
                list.Add(() => new CacheDependency(null, keys));
            }
            Debug.Print("test3 Time: " + (DateTime.Now - start));
        }

        private static void test4()
        {
            var p = new Program();
            DateTime start = DateTime.Now;
            var list = new List<Func<CacheDependency>>();
            for (int i = 0; i < MaxIteration; i++)
            {
                list.Add(p.GetDep);
            }
            Debug.Print("test4 Time: " + (DateTime.Now - start));
        }

        private static void test5()
        {
            var p = new Program();
            DateTime start = DateTime.Now;
            var list = new List<Func<CacheDependency>>();
            for (int i = 0; i < MaxIteration; i++)
            {
                list.Add(() => { return p.GetDep(); });
            }
            Debug.Print("test5 Time: " + (DateTime.Now - start));
        }

        private static void test6()
        {
            DateTime start = DateTime.Now;
            var list = new List<Func<CacheDependency>>();
            for (int i = 0; i < MaxIteration; i++)
            {
                list.Add(GetDepStatic);
            }
            Debug.Print("test6 Time: " + (DateTime.Now - start));
        }

        private static void test7()
        {
            DateTime start = DateTime.Now;
            var list = new List<Func<CacheDependency>>();
            for (int i = 0; i < MaxIteration; i++)
            {
                list.Add(() => { return GetDepStatic(); });
            }
            Debug.Print("test7 Time: " + (DateTime.Now - start));
        }

        private CacheDependency GetDep()
        {
            return new CacheDependency(null, keys);
        }

        private static CacheDependency GetDepStatic()
        {
            return new CacheDependency(null, keys);
        }
    }
}
But I can't understand why the results look like this:
first set
test2 Time: 00:00:08.5394884
test3 Time: 00:00:00.1820105
test4 Time: 00:00:03.1401796
test5 Time: 00:00:00.1910109
test6 Time: 00:00:02.2041261
test7 Time: 00:00:00.4840277
second set
test7 Time: 00:00:00.1850106
test6 Time: 00:00:03.2941884
test5 Time: 00:00:00.1750100
test4 Time: 00:00:02.3561347
test3 Time: 00:00:00.1830105
test2 Time: 00:00:07.7324423
In particular:
Why are test4 and test6 much slower than their delegate versions? I also noticed that ReSharper specifically flags the delegate versions, suggesting that test5 and test7 be changed to "Convert to method group", which is the same as test4 and test6, yet those are actually slower.
I don't see a consistent performance difference between test4 and test6; shouldn't static calls always be faster?
In the tests with method groups (4 and 6), the C# compiler doesn't cache the delegate (Func) object; it creates a new one every time. In tests 5 and 7 it caches the delegate, which points to a generated method that calls your method. So creating Funcs from method groups is very slow (because of the heap allocation), but calling them is fast, as the delegate points directly at your method. Creating delegates from lambdas is fast because the Func is cached, but it points to a generated method, so there is one unnecessary method call on invocation.
Beware that not all lambdas can be cached (closures break this logic).
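You can see the caching difference directly with ReferenceEquals (a minimal sketch; GetValue is an illustrative stand-in for GetDepStatic, and the method-group behaviour shown is that of compilers before the C# 11 change mentioned below):

using System;
using System.Collections.Generic;

class CachingDemo
{
    static int GetValue() => 42; // illustrative stand-in for GetDepStatic

    static void Main()
    {
        var fromGroup = new List<Func<int>>();
        for (int i = 0; i < 2; i++)
            fromGroup.Add(GetValue); // method group: a new delegate is allocated each iteration
        Console.WriteLine(ReferenceEquals(fromGroup[0], fromGroup[1])); // False (before C# 11)

        var fromLambda = new List<Func<int>>();
        for (int i = 0; i < 2; i++)
            fromLambda.Add(() => GetValue()); // non-capturing lambda: one cached instance is reused
        Console.WriteLine(ReferenceEquals(fromLambda[0], fromLambda[1])); // True
    }
}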
I haven't looked too far into your code, but the first step would be to switch things over to use the Stopwatch class instead of DateTime.Now, etc.
http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx
In C# 11 the language spec was changed to allow the compiler to legally cache the delegate.
https://github.com/dotnet/roslyn/issues/5835
If you're using that version of C# or newer, you won't see allocations when passing a method group where the delegate can be cached.
That is quite interesting. I'm wondering if your million-entry lists aren't causing a garbage collection and skewing your results. Try changing the order in which these functions are called and see what the results give you.
Another thing is that the JIT might have optimised your code to not create the lambda each time and is just inserting the same value over and over. It might be worth running ildasm over it and seeing what is actually generated.
Why are test4 and test6 much slower than their delegate versions? I also noticed that ReSharper specifically flags the delegate versions, suggesting that test5 and test7 be changed to "Convert to method group", which is the same as test4 and test6, yet those are actually slower.
You'll get a big clue by adding
Debug.Print(ReferenceEquals(list[0], list[1]) ? "same" : "different");
to the end of each method.
With the delegate version, the Func gets compiled a bit as if it were actually:
Func<CacheDependency> func = <>_hiddenfieldwithinvalidC#name;
if (func == null)
{
    <>_hiddenfieldwithinvalidC#name = func = () => p.GetDep();
}
While with a method group it gets compiled much the same as:
func = new Func<CacheDependency>(p.GetDep);
This memoisation is done a lot with delegates created from lambdas when the compiler can determine it is safe to do so, but not with method groups being converted to delegates, and the performance differences you see show exactly why.
I don't see a consistent performance difference between test4 and test6; shouldn't static calls always be faster?
Not necessarily. While a static call has the advantage of one less argument to pass around (as there's no implicit this argument), this difference:
Isn't much to begin with.
Might be jitted away if this isn't used.
Might be optimised away in that the register holding the this pointer before the call is the same register expected to hold it after the call, so there's no need to actually do anything to get it in there.
Or something else; I'm not claiming this list is exhaustive.
Really, the performance benefit of static is more that if you do what is naturally static in instance methods, you can end up with excessive passing around of objects that isn't really needed and wastes time. That said, if you do what is naturally instance-based in static methods, you can end up storing/retrieving and/or allocating and/or passing around objects in arguments you wouldn't otherwise need to, which is just as bad.

Which is faster in a loop: calling a property twice, or storing the property once?

This is more of an academic question about performance than a realistic 'what should I use', but I'm curious, as I don't dabble much in IL at all to see what's constructed, and I don't have a large dataset on hand to profile against.
So which is faster:
List<myObject> objs = SomeHowGetList();
List<string> strings = new List<string>();
foreach (MyObject o in objs)
{
    if (o.Field == "something")
        strings.Add(o.Field);
}
or:
List<myObject> objs = SomeHowGetList();
List<string> strings = new List<string>();
string s;
foreach (MyObject o in objs)
{
    s = o.Field;
    if (s == "something")
        strings.Add(s);
}
Keep in mind that I don't really want to know the performance impact of the strings.Add(s) call (as whatever operation needs to be done can't really be changed), just the performance difference between setting s each iteration (let's say that s can be any primitive type or string) versus calling the getter on the object each iteration.
Your first option is noticeably faster in my tests. I'm such a flip-flopper! Seriously though, some comments were made about the code in my original test. Here's the updated code, which shows option 2 being faster.
class Foo
{
    public string Bar { get; set; }

    public static List<Foo> FooMeUp()
    {
        var foos = new List<Foo>();
        for (int i = 0; i < 10000000; i++)
        {
            foos.Add(new Foo() { Bar = (i % 2 == 0) ? "something" : i.ToString() });
        }
        return foos;
    }
}

static void Main(string[] args)
{
    var foos = Foo.FooMeUp();
    var strings = new List<string>();

    Stopwatch sw = Stopwatch.StartNew();
    foreach (Foo o in foos)
    {
        if (o.Bar == "something")
        {
            strings.Add(o.Bar);
        }
    }
    sw.Stop();
    Console.WriteLine("It took {0}", sw.ElapsedMilliseconds);

    strings.Clear();
    sw = Stopwatch.StartNew();
    foreach (Foo o in foos)
    {
        var s = o.Bar;
        if (s == "something")
        {
            strings.Add(s);
        }
    }
    sw.Stop();
    Console.WriteLine("It took {0}", sw.ElapsedMilliseconds);

    Console.ReadLine();
}
Most of the time, your second code snippet should be at least as fast as the first.
These two code snippets are not functionally equivalent: properties are not guaranteed to return the same result across individual accesses. As a consequence, the JIT optimizer cannot cache the result (except for trivial cases), and it will be faster if you cache the result of a long-running property yourself. Look at this example: why foreach is faster than for loop while reading richtextbox lines.
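For instance, nothing stops a property from producing a different value on every read, which is exactly why the JIT cannot cache it in general (a minimal sketch; the Ticker class is illustrative):

using System;

class Ticker
{
    // A perfectly legal property whose result changes between reads,
    // so no compiler may substitute a cached value for it.
    public long Value
    {
        get { return DateTime.UtcNow.Ticks; }
    }
}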
However, for some specific cases like:
for (int i = 0; i < myArray.Length; ++i)
where myArray is an array object, the compiler is able to detect the pattern, optimize the code and omit the bounds checks. It might actually be slower if you cache the result of the Length property, like:
int len = myArray.Length;
for (int i = 0; i < len; ++i)
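A small harness to compare the two loop shapes (a sketch; absolute numbers depend on the runtime and JIT version):

using System;
using System.Diagnostics;

class BoundsCheckDemo
{
    static void Main()
    {
        double[] myArray = new double[10000000];
        double sum = 0;

        var sw = Stopwatch.StartNew();
        for (int rep = 0; rep < 10; rep++)
        {
            for (int i = 0; i < myArray.Length; ++i) // the pattern the JIT recognizes
            {
                sum += myArray[i];                   // bounds check elided
            }
        }
        sw.Stop();
        Console.WriteLine("Length in condition: {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        int len = myArray.Length;                    // hoisted into a local
        for (int rep = 0; rep < 10; rep++)
        {
            for (int i = 0; i < len; ++i)
            {
                sum += myArray[i];
            }
        }
        sw.Stop();
        Console.WriteLine("Cached length: {0} ms ({1})", sw.ElapsedMilliseconds, sum);
    }
}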
It really depends on the implementation. In most cases, it is assumed (as a matter of common practice/courtesy) that a property is inexpensive. However, it could be that each "get" does a non-cached search over some remote resource. For standard, simple properties, you'll never notice a real difference between the two. For the worst case, fetch-once, store and re-use will be much faster.
I'd be tempted to use the getter twice until I know there is a problem... 'premature optimisation', etc. But if I were using it in a tight loop, then I might store it in a variable. Except for Length on an array, which has special JIT treatment ;-p
Generally the second one is faster, as the first one recalculates the property on each iteration.
Here is an example of a property access that could take a significant amount of time:
var d = new DriveInfo("C:");
string label = d.VolumeLabel; // fetches the drive label on each access
Storing the value in a local variable is the faster option.
Although a method call doesn't impose a huge overhead, it still outweighs storing the value once in a local on the stack and then retrieving it.
I, for one, do it consistently.
