C# *Strange* problem with StopWatch and a foreach loop

I have this code:
var options = GetOptions(From, Value, SelectedValue);
var stopWatch = System.Diagnostics.Stopwatch.StartNew();
foreach (Option option in options)
{
    stringBuilder.Append("<option");
    stringBuilder.Append(" value=\"");
    stringBuilder.Append(option.Value);
    stringBuilder.Append("\"");
    if (option.Selected)
        stringBuilder.Append(" selected=\"selected\"");
    stringBuilder.Append('>');
    stringBuilder.Append(option.Text);
    stringBuilder.Append("</option>");
}
HttpContext.Current.Response.Write("<b>" + stopWatch.Elapsed.ToString() + "</b><br>");
It is writing:
00:00:00.0004255 in the first try (not in debug)
00:00:00.0004260 in the second try and
00:00:00.0004281 in the third try.
Now, if I change the code so the measure will be inside the foreach loop:
var options = GetOptions(From, Value, SelectedValue);
foreach (Option option in options)
{
    var stopWatch = System.Diagnostics.Stopwatch.StartNew();
    stringBuilder.Append("<option");
    stringBuilder.Append(" value=\"");
    stringBuilder.Append(option.Value);
    stringBuilder.Append("\"");
    if (option.Selected)
        stringBuilder.Append(" selected=\"selected\"");
    stringBuilder.Append('>');
    stringBuilder.Append(option.Text);
    stringBuilder.Append("</option>");
    HttpContext.Current.Response.Write("<b>" + stopWatch.Elapsed.ToString() + "</b><br>");
}
...I get
[00:00:00.0000014, 00:00:00.0000011] = 00:00:00.0000025 in the first try (not in debug),
[00:00:00.0000016, 00:00:00.0000011] = 00:00:00.0000027 in the second try and
[00:00:00.0000013, 00:00:00.0000011] = 00:00:00.0000024 in the third try.
?!
It makes no sense at all given the first results... I've heard that the foreach loop is slow, but I never imagined it was this slow... Is that what's happening?
options has 2 options.
Here's the option class, if it is needed:
public class Option
{
    public Option(string text, string value, bool selected)
    {
        Text = text;
        Value = value;
        Selected = selected;
    }

    public string Text { get; set; }
    public string Value { get; set; }
    public bool Selected { get; set; }
}
Thanks.

The foreach loop itself has nothing to do with the time difference.
What is the GetOptions method returning? My guess is that it's not returning a collection of options, but rather an enumerator that is capable of getting the options. That means that actually fetching the options is not done until you start to iterate over them.
In the first case you start the clock before iterating the options, which means that the time to fetch the options is included in the measurement.
In the second case you start the clock after the iteration has already begun, which means that the time to fetch the options is not included in the measurement.
So, the time difference that you see is not due to the foreach loop itself; it's the time it takes to fetch the options.
You can make sure that the options are fetched immediately by reading them into a collection:
var options = GetOptions(From, Value, SelectedValue).ToList();
Now measure the performance, and you will see very little difference.
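To illustrate the deferred-execution point, here is a sketch of what such a GetOptions might look like if it is written as an iterator (this implementation is invented; LoadRowsFrom is a made-up stand-in for whatever expensive work actually produces the options):

private IEnumerable<Option> GetOptions(string from, string value, string selectedValue)
{
    // Nothing in this body runs when the method is called; it only runs
    // once the caller starts enumerating the result.
    foreach (var row in LoadRowsFrom(from)) // the expensive part (query, file, service call...)
    {
        yield return new Option(row.Text, row.Value, row.Value == selectedValue);
    }
}

With an implementation like this, the cost of LoadRowsFrom is paid on the first MoveNext call of the enumeration, which is why it falls inside the stopwatch in your first snippet and outside it in your second.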

If you measure the time taken to do something 160 times, it will usually take of the order of 160 times longer than measuring the time it takes to do it once. Are you suggesting that the contents of the loop are only executed once, or are you trying to compare chalk and cheese?
In the first case, try changing the last line of your code from using
stopWatch.Elapsed.ToString()
to
TimeSpan.FromTicks(stopWatch.Elapsed.Ticks / options.Count).ToString()
That will at least mean you are comparing one iteration with one iteration.
However, your results will still be useless. Timing a very short operation once gives poor results - you have to repeat such things tens of thousands of times to get a statistically meaningful average time. Otherwise the inaccuracy of the system clock and the overheads involved in starting and stopping your timer will swamp your results.
Also, what is the PC doing while all this is happening? If there are other processes loading the CPU, they could easily interfere with your timings. If you're running this on a busy server you may get completely random results.
Lastly, how you execute the tests can alter things. If you always run test 1 followed by test 2, it's possible that running the first test affects the CPU caches (e.g. the data in the options list) so that the following code is able to execute faster. If garbage collection occurs during one of your tests, it will skew the results.
You need to eliminate all these factors before you have numbers that are worth comparing. Only then should you ask "why is test 1 running so much slower than test 2"?
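For what it's worth, a minimal harness along those lines could look like the following (the iteration count and sample data are arbitrary, RenderOptions is just the loop from the question wrapped in a method, and Option is the class from the question):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;

class Benchmark
{
    static void Main()
    {
        var options = new List<Option>
        {
            new Option("One", "1", false),
            new Option("Two", "2", true)
        };

        RenderOptions(options); // warm-up pass so JIT cost isn't included in the timing

        const int iterations = 100000;
        var stopWatch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            RenderOptions(options);
        }
        stopWatch.Stop();

        Console.WriteLine("Average per pass: {0:F6} ms",
            stopWatch.Elapsed.TotalMilliseconds / iterations);
    }

    static string RenderOptions(List<Option> options)
    {
        var stringBuilder = new StringBuilder();
        foreach (Option option in options)
        {
            stringBuilder.Append("<option value=\"").Append(option.Value).Append("\"");
            if (option.Selected)
                stringBuilder.Append(" selected=\"selected\"");
            stringBuilder.Append('>').Append(option.Text).Append("</option>");
        }
        return stringBuilder.ToString();
    }
}

Even then, treat the number as a rough estimate rather than an exact cost, for the reasons above.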

The first code example doesn't output anything until all the options have been iterated while the second one outputs a time after the first option has been processed. If there are multiple options, you would expect to see such a difference.

Just pause it a few times in the IDE and you'll see where the time goes.
There's a very natural and strong temptation to think that the time things take is proportional to the amount of code involved. For example, which do you think is faster?
foreach (MyClass x in y)
foreach (MyClass theParticularInstanceOfClass in MyCollectionOfInstances)
It is natural to think that the first is faster, when in fact the code size is irrelevant and could be hiding a multitude of expensive operations.

Related

Why does my object take a long time to be created?

I'm writing code that scans large sections of text and performs some basic statistics on it, such as number of upper and lower case characters, punctuation characters etc.
Originally my code looked like this:
foreach (var character in stringToCount)
{
    if (char.IsControl(character))
    {
        controlCount++;
    }
    if (char.IsDigit(character))
    {
        digitCount++;
    }
    if (char.IsLetter(character))
    {
        letterCount++;
    } //etc.
}
And then from there I was creating a new object like this, which simply reads the local variables and passes them to the constructor:
var result = new CharacterCountResult(controlCount, highSurrogatecount, lowSurrogateCount, whiteSpaceCount,
symbolCount, punctuationCount, separatorCount, letterCount, digitCount, numberCount, letterAndDigitCount,
lowercaseCount, upperCaseCount, tempDictionary);
However a user over on Code Review Stack Exchange pointed out that I can just do the following. Great, I've saved myself a load of code which is good.
var result = new CharacterCountResult(stringToCount.Count(char.IsControl),
stringToCount.Count(char.IsHighSurrogate), stringToCount.Count(char.IsLowSurrogate),
stringToCount.Count(char.IsWhiteSpace), stringToCount.Count(char.IsSymbol),
stringToCount.Count(char.IsPunctuation), stringToCount.Count(char.IsSeparator),
stringToCount.Count(char.IsLetter), stringToCount.Count(char.IsDigit),
stringToCount.Count(char.IsNumber), stringToCount.Count(char.IsLetterOrDigit),
stringToCount.Count(char.IsLower), stringToCount.Count(char.IsUpper), tempDictionary);
However, creating the object the second way takes approximately an extra 200ms (on my machine).
How can this be? While it might not seem a significant amount of extra time, it soon adds up when it's left running over large amounts of text.
What should I be doing differently?
You are using method groups (which the compiler converts into delegates) and iterating over the characters many times, whereas you could get it done with one pass (as in your original code).
I remember your previous question, and I recall seeing the recommendation to use the method group in string.Count(char.IsLetterOrDigit) and thinking "yeah, that looks pretty but won't perform well", so it was amusing to see that you found exactly that.
If performance is important, I would just do it without delegates, period: one big loop, a single pass, the traditional way. You can tune it further by organizing the logic so that cases which exclude other cases are evaluated lazily. For example, if you know a character is whitespace, don't bother checking for digit or letter; and if you know it is a letter or digit, nest the digit and letter checks inside that condition.
Something like:
foreach (var ch in stringToCount) {
    if (char.IsWhiteSpace(ch)) {
        ...
    }
    else {
        if (char.IsLetterOrDigit(ch)) {
            letterOrDigit++;
            if (char.IsDigit(ch)) digit++;
            if (char.IsLetter(ch)) letter++;
        }
    }
}
If you REALLY want to micro-optimize, write a program to pre-calculate all of the options and emit a huge switch statement which does table lookups.
switch (ch) {
    case 'A':
        isLetter++;
        isUpper++;
        isLetterOrDigit++;
        break;
    case 'a':
        isLetter++;
        isLower++;
        isLetterOrDigit++;
        break;
    case '!':
        isPunctuation++;
        ...
}
Now if you want to get REALLY crazy, organize the switch statement according to real-life frequency of occurrence, and put the most common characters at the top of the "tree", and so forth. Of course, if you care that much about speed, it might be a job for plain C.
But I've wandered a bit far afield from your original question. :)
Your old way walked through the text once, increasing all of your counters as you went. In your new way you walk through the text 13 times (once for each call to stringToCount.Count()) and only update one counter per pass.
However, this kind of problem is the perfect situation for Parallel.ForEach. You can walk through the text with multiple threads (being sure your increments are thread safe) and get your totals faster.
Parallel.ForEach(stringToCount, character =>
{
    if (char.IsControl(character))
    {
        // Interlocked.Increment gives you a thread-safe ++
        Interlocked.Increment(ref controlCount);
    }
    if (char.IsDigit(character))
    {
        Interlocked.Increment(ref digitCount);
    }
    if (char.IsLetter(character))
    {
        Interlocked.Increment(ref letterCount);
    } //etc.
});
var result = new CharacterCountResult(controlCount, highSurrogatecount, lowSurrogateCount, whiteSpaceCount,
symbolCount, punctuationCount, separatorCount, letterCount, digitCount, numberCount, letterAndDigitCount,
lowercaseCount, upperCaseCount, tempDictionary);
It still walks through the text once, but many workers will be walking through various parts of the text at the same time.
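If the Interlocked calls themselves ever become the bottleneck (every increment is a contended atomic operation), Parallel.ForEach also has an overload with per-thread state, so each worker counts into its own slots and the totals are merged once per thread at the end. A sketch of the idea (only three of the counters are shown, and the int[3] layout is just an illustrative choice):

// Requires System.Threading and System.Threading.Tasks.
int letterCount = 0, digitCount = 0, controlCount = 0;

Parallel.ForEach(
    stringToCount,
    () => new int[3], // per-thread counters: [letters, digits, controls]
    (character, loopState, local) =>
    {
        if (char.IsLetter(character)) local[0]++;
        if (char.IsDigit(character)) local[1]++;
        if (char.IsControl(character)) local[2]++;
        return local;
    },
    local =>
    {
        // Merge once per worker thread instead of once per character.
        Interlocked.Add(ref letterCount, local[0]);
        Interlocked.Add(ref digitCount, local[1]);
        Interlocked.Add(ref controlCount, local[2]);
    });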

I can't figure out what is slowing my program down

I have created a Windows Form application that reads in a text file, rearranges the data, and writes to a new text file. I have noticed that it slows down exponentially as it runs. I have been using tracepoints, stopwatches, and datetime to figure out why each iteration is taking longer than the previous, but I can't figure it out. My best guess would be that it might have something to do with the way I'm initializing variables.
I'm not sure how helpful this snippet of code will be but maybe it will give some insight into my problem:
while (cuttedWords.Any())
{
    var variable = cuttedWords.TakeWhile(x => x != separator).ToArray();
    cuttedWords = cuttedWords.Skip(variable.Length + 1);
    sortDataObject.SortDataMethod(variable, b);
    if (sortDataObject.virtualPara)
    {
        if (!virtualParaUsed)
        {
            listOfNames = sortDataObject.findListOfNames(backgroundWords, ref IDforCounting, countParametersTable);
        }
        virtualParaUsed = true;
        printDataObject.WriteFileVirtual(fileName, ID, sortDataObject.listNames[0], sortDataObject.listNames[1],
            sortDataObject.unit, listOfNames, sortDataObject.virtualNames);
        sortDataObject.virtualNames.Clear();
    }
    else
    {
        int[] indexes = checkedListBox1.CheckedIndices.Cast<int>().ToArray();
        printDataObject.WriteFile(fileName, ID, sortDataObject.listNames[0], sortDataObject.listNames[1],
            sortDataObject.unit, sortDataObject.hexValue[0], sortDataObject.stringShift, sortDataObject.sign,
            sortDataObject.SFBinary[0], sortDataObject.wordValue, sortDataObject.conversions, sortDataObject.stringData, indexes, sortDataObject.conType);
    }
    decimal sum = ((decimal)IDforCounting) / countParametersTable * 100;
    int sum2 = (int)sum;
    backgroundWorker1.ReportProgress(sum2);
    ID++;
    IDforCounting++;
    b++;
}
What is strange to me is that I know that each loop runs in a matter of milliseconds, but from the start of one loop to the start of the next, the time keeps increasing.
I apologize if this is not enough information to analyze my issue, but I'm not sure what else I can provide without showing my entire solution.
Thank you.
EDIT: A better question might be: what is a good way to analyze performance if stopwatches aren't doing the trick? I'd rather not have to download a profiler.
If it's taking longer and longer on each iteration, it's probably related to the initial cuttedWords.Any().
What type is cuttedWords? If it's a database-backed enumerable, it will re-issue the SQL statement on every iteration, which may or may not be what you want.
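One way to rule that out is to snapshot the sequence once and then work entirely in memory. A rough sketch, not a drop-in replacement for your loop, assuming cuttedWords is an IEnumerable<string> and separator is a string:

// Materialize once, so any query or I/O behind cuttedWords runs a single time.
var remainingWords = new Queue<string>(cuttedWords);

while (remainingWords.Count > 0)
{
    // Collect one section up to (and discarding) the separator.
    var section = new List<string>();
    while (remainingWords.Count > 0)
    {
        string word = remainingWords.Dequeue();
        if (word == separator)
            break;
        section.Add(word);
    }

    sortDataObject.SortDataMethod(section.ToArray(), b);
    // ... the rest of the existing loop body stays the same ...
}

If the slowdown disappears with a snapshot like this, the time was going into re-enumerating the source (note that each cuttedWords.Skip(...) also layers another deferred wrapper over the previous one, so later calls to Any() and TakeWhile() walk the sequence from the start again), not into the body of the loop.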
On the other hand, if this is a producer-consumer scenario, it may be that cuttedWords is locked by the producer, causing the consumer to be thread-locked while waiting for the producer to complete its action.
Also, the ReportProgress call will cause the BackgroundWorker to raise an event on the thread that created it, potentially causing UI updates, so try removing that line and see if it helps. Then replace it with code that only calls ReportProgress when the progress has actually changed.

Strange Behavior with Threading and Timer

Let me explain my situation.
I have a one-producer-to-N-consumers pattern. I'm using blocking collections and everything is working well. While doing some tests I noticed this strange behavior:
I was testing how long my manipulation of data took in my consumers.
I noticed these strange things; below you'll find the code, stripped of my data manipulation, which produces the strange behavior.
I have 4 consumers for 1 producer.
For most of the data the Console doesn't print anything, because ts = 0 (it's under a tick), but randomly (every 1 to 5 seconds) it prints something like this (not in this exact order, but of the same kind):
10000
20001
10000
30002
10000
40003
10000
10000
It is of the order of 10,000 ticks, so around 1ms, and always a number in the format (N)000(N-1).
Note that the BlockingCollection I consume from is filled depending on network events which occur at completely random times. Nothing regular here.
The timing is almost perfect, always a multiple of 10,000 ticks.
What could be behind this? Thanks!
while (IsAlive)
{
    DataToFieldMapping item;
    try
    {
        _CollectionToConsume.TryTake(out item, -1);
    }
    catch
    {
        item = null;
    }

    if (item != null)
    {
        long ts = (DateTime.Now.Ticks - item.TimeStamp.Ticks);
        if (ts > 10)
            Console.WriteLine(ts);
    }
}
What's going on here is that DateTime.Now has a fairly limited precision. It's not giving you the time to the nearest tick. It is only updated every 10,000 ticks or so, which is why you generally see multiples of 10k ticks in your prints.
If you really want to get a better feel for the duration of those events, use the Stopwatch class, which has a much higher precision. That said, Stopwatch is primarily a diagnostic tool (hence why it's in the Diagnostics namespace). You should only be using it to help you diagnose what's going on, not relying on it in production code.
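If you still want to measure the producer-to-consumer delay per item, one option is to stamp items with Stopwatch.GetTimestamp() instead of DateTime.Now. A sketch (StopwatchTimeStamp is a hypothetical long field you would add to DataToFieldMapping; requires using System.Diagnostics):

// Producer side: record the high-resolution counter when the item is created.
item.StopwatchTimeStamp = Stopwatch.GetTimestamp();

// Consumer side: convert counter ticks to milliseconds via Stopwatch.Frequency.
long counterTicks = Stopwatch.GetTimestamp() - item.StopwatchTimeStamp;
double elapsedMs = counterTicks * 1000.0 / Stopwatch.Frequency;
Console.WriteLine(elapsedMs);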
On a side note, there really isn't any need to use a timer here at all. It appears that you're creating several consumers that poll the BlockingCollection for new content. There is no reason to do this; they can simply block until the collection has items. (Hence the name, BlockingCollection.)
The easiest way is for the consumers to simply do this:
foreach (var item in _CollectionToConsume.GetConsumingEnumerable())
    ProcessItem(item);
Then just run that code in a background thread.
If you write the following and run it, you'll see that ticks do not advance one by one but in relatively large chunks, because the effective resolution of DateTime.Now is much coarser than a single tick.
for (int i = 0; i < 100; i++)
{
    Console.WriteLine(DateTime.Now.Ticks);
}
Use the Stopwatch class to measure performance, as it uses a high-resolution timer that is much better suited to the purpose.

Redundant/Better Performance Code VS Optimized/Less Performance Code

In my case, I'm using C#, but the concept of the question would apply to Java as well. Hopefully the answer would be generic enough to cover both languages. Otherwise it's better to split the question into two.
I've always wondered which one is the better practice.
Does the compiler take care of enhancing the 'second' code so its performance would be as good as the 'first' code?
Could it be worked around to get a 'better performance' and 'optimized' code at the same time?
Redundant/Better Performance Code:
string name = GetName(); // returned string could be empty
List<string> myListOfStrings = GetListOfStrings();
if (string.IsNullOrWhiteSpace(name))
{
    foreach (string s in myListOfStrings)
        Console.WriteLine(s);
}
else
{
    foreach (string s in myListOfStrings)
        Console.WriteLine(s + " (Name is: " + name);
}
Optimized/Less Performance Code:
string name = GetName(); // returned string could be empty
List<string> myListOfStrings = GetListOfStrings();
foreach (string s in myListOfStrings)
    Console.WriteLine(string.IsNullOrWhiteSpace(name) ? s : s + " (Name is: " + name);
Obviously the execution time of the 'first' code is less because it evaluates the condition string.IsNullOrWhiteSpace(name) only once, whereas the 'second' code (which is nicer) evaluates the condition on every iteration.
Please consider a long loop execution time not a short one because I know that when it is short, the performance won't differ.
Does the compiler take care of enhancing the 'second' code so its performance would be as good as the 'first' code?
No, it cannot.
It doesn't know that the boolean expression will not change between iterations of the loop. It's possible for the code to not return the same value each time, so it is forced to perform the check in each iteration.
It's also possible that the boolean expression could have side effects. In this case it doesn't, but there's no way for the compiler to know that. It's important that such side effects would be performed in order to meet the specs, so it needs to execute the check in each iteration.
So, the next question you need to ask is, in a case such as this, is it important to perform the optimization that you've mentioned? In any situation I can imagine for the exact code you showed, probably not. The check is simply going to be so fast that it's almost certainly not going to be a bottleneck. If there are performance problems there are almost certainly bigger fish.
That said, with only a few changes to the example it can be made to matter. If the boolean expression itself is computationally expensive (i.e. it is the result of a database call, a web service call, some expensive CPU computation, etc.) then it could be a performance optimization that matters. Another case to consider is what would happen if the boolean expression had side effects. What if it was a MoveNext call on an IEnumerator? If it was important that it only be executed exactly once because you don't want the side effects to happen N times then that makes this a very important issue.
There are several possible solutions in such a case.
The easiest is most likely to just compute the boolean expression once and then store it in a variable:
bool someValue = ComputeComplexBooleanValue();
foreach (var item in collection)
{
    if (someValue)
        doStuff(item);
    else
        doOtherStuff(item);
}
If you want the boolean expression to be evaluated 0-1 times (i.e. avoid evaluating it even once in the event that the collection is empty), we can use Lazy to compute the value lazily while ensuring it is still computed at most one time:
var someValue = new Lazy<bool>(() => ComputeComplexBooleanValue());
foreach (var item in collection)
{
    if (someValue.Value)
        doStuff(item);
    else
        doOtherStuff(item);
}
You should always go the way that is easier to understand and maintain first. This means reducing duplicate code to an absolute minimum (DRY). In addition, this kind of micro-optimization is not that important for many systems. Also note that shorter code is not always better.
I think I would go with something like this:
string name = GetName(); // returned string could be empty
bool nameIsEmpty = string.IsNullOrWhiteSpace(name);
foreach (string s in GetListOfStrings()) {
    string messageAddition = "";
    if (!nameIsEmpty) {
        messageAddition = " (Name is: " + name + ")";
    }
    Console.WriteLine(s + messageAddition);
    // more code which uses the computed value..
    // otherwise the condition can be moved out of the loop
}
I find an extra if statement easier to read than the ?: operator inside a method call, but that might be personal taste.
If you want to improve performance later, you should profile your application and start optimizing the slowest sections of code first. Maybe your GetListOfStrings() method is so slow that the performance of the other code is totally irrelevant. If you measure that duplicating the loop improves performance by a significant amount, you can think about changing it.

Why does the second for loop always execute faster than the first one?

I was trying to figure out if a for loop was faster than a foreach loop and was using the System.Diagnostics classes to time the task. While running the test I noticed that whichever loop I put first always executes slower than the last one. Can someone please tell me why this is happening? My code is below:
using System;
using System.Diagnostics;

namespace cool {
    class Program {
        static void Main(string[] args) {
            int[] x = new int[] { 3, 6, 9, 12 };
            int[] y = new int[] { 3, 6, 9, 12 };

            DateTime startTime = DateTime.Now;
            for (int i = 0; i < 4; i++) {
                Console.WriteLine(x[i]);
            }
            TimeSpan elapsedTime = DateTime.Now - startTime;

            DateTime startTime2 = DateTime.Now;
            foreach (var item in y) {
                Console.WriteLine(item);
            }
            TimeSpan elapsedTime2 = DateTime.Now - startTime2;

            Console.WriteLine("\nSummary");
            Console.WriteLine("--------------------------\n");
            Console.WriteLine("for:\t{0}\nforeach:\t{1}", elapsedTime, elapsedTime2);
            Console.ReadKey();
        }
    }
}
Here is the output:
for: 00:00:00.0175781
foreach: 00:00:00.0009766
Probably because the classes (e.g. Console) need to be JIT-compiled the first time through. You'll get the best metrics by calling all methods first (to JIT them, i.e. warm them up), and then performing the test.
As other users have indicated, 4 passes is never going to be enough to show you the difference.
Incidentally, the difference in performance between for and foreach will be negligible and the readability benefits of using foreach almost always outweigh any marginal performance benefit.
I would not use DateTime to measure performance - try the Stopwatch class.
Measuring with only 4 passes is never going to give you a good result. Better to use > 100,000 passes (you can use an outer loop). And don't do Console.WriteLine in your loop.
Even better: use a profiler (like Redgate ANTS or maybe NProf)
I am not deeply into C#, but if I remember right, Microsoft built "just in time" compilers for Java. If they use the same or similar techniques in C#, it would be rather natural that "some constructs coming second perform faster".
For example, the JIT system might see that a loop is being executed and decide ad hoc to compile the whole method. Hence when the second loop is reached, it is already compiled and performs much faster than the first. But that is a rather simplistic guess of mine; of course you need far greater insight into the C# runtime to understand what is going on. It could also be that the memory page is first accessed in the first loop and is still in the CPU cache when the second loop runs.
Addendum: the other comment - that the output module may be JITed the first time it is used, in the first loop - seems more likely to me than my first guess. Modern languages are just very complex, which makes it hard to find out what is done under the hood. This statement of mine also fits that guess:
You also have terminal output inside your loops, which makes things even more difficult. It could also be that it costs some time to open the terminal for the first time in a program.
I was just performing tests to get some real numbers, but in the meantime Gaz beat me to the answer - the call to Console.Writeline is jitted at the first call, so you pay that cost in the first loop.
Just for information though - using a Stopwatch rather than DateTime and measuring the number of ticks:
Without a call to Console.WriteLine before the first loop, the times were:
for: 16802
foreach: 2282
With a call to Console.WriteLine before the first loop, they were:
for: 2729
foreach: 2268
These results were not consistently repeatable because of the limited number of runs, but the magnitude of the difference was always roughly the same.
The edited code for reference:
int[] x = new int[] { 3, 6, 9, 12 };
int[] y = new int[] { 3, 6, 9, 12 };
Console.WriteLine("Hello World");

Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < 4; i++)
{
    Console.WriteLine(x[i]);
}
sw.Stop();
long elapsedTime = sw.ElapsedTicks;

sw.Reset();
sw.Start();
foreach (var item in y)
{
    Console.WriteLine(item);
}
sw.Stop();
long elapsedTime2 = sw.ElapsedTicks;

Console.WriteLine("\nSummary");
Console.WriteLine("--------------------------\n");
Console.WriteLine("for:\t{0}\nforeach:\t{1}", elapsedTime, elapsedTime2);
Console.ReadKey();
The reason why is that there are several forms of overhead in the foreach version that are not present in the for loop:
Use of an IDisposable.
An additional method call for every element. Each element must be accessed under the hood by using IEnumerator<T>.Current, which is a method call. Because it's on an interface it cannot be inlined. This means N method calls where N is the number of elements in the enumeration. The for loop just uses an indexer.
In a foreach loop all calls go through an interface. In general this is a bit slower than going through a concrete type.
Please note that the things I listed above are not necessarily huge costs. They are typically very small costs that can contribute to a small performance difference.
Also note, as Mehrdad pointed out, the compilers and JIT may choose to optimize a foreach loop for certain known data structures such as an array. The end result may just be a for loop.
Note: Your performance benchmark in general needs a bit more work to be accurate.
You should use a StopWatch instead of DateTime. It is much more accurate for performance benchmarks.
You should perform the test many times not just once
You need to do a dummy run on each loop to eliminate the problems that come with JITing a method the first time. This probably isn't an issue when all of the code is in the same method but it doesn't hurt.
You need to use more than just 4 values in the list. Try 40,000 instead.
You should be using the StopWatch to time the behavior.
Technically the for loop is faster. Foreach calls the MoveNext() method (creating a stack frame and other call overhead) on the IEnumerable's enumerator, whereas for only has to increment a variable.
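To make that concrete, here is roughly what the compiler expands a foreach over something typed as IEnumerable<int> into (a sketch; someSequence is a stand-in name, and arrays such as the ones in the question are special-cased into an indexed loop instead, as the next answer points out):

// foreach (var item in someSequence) Console.WriteLine(item);
// becomes, approximately:
using (IEnumerator<int> enumerator = someSequence.GetEnumerator())
{
    while (enumerator.MoveNext()) // one interface call per element
    {
        int item = enumerator.Current; // plus a Current access per element
        Console.WriteLine(item);
    }
}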
I don't see why everyone here says that for would be faster than foreach in this particular case. For a List<T>, it is (foreach over a List<T> is about 2x slower than a for loop over it).
In fact, foreach will be slightly faster than for here, because foreach on an array essentially compiles to:
for(int i = 0; i < array.Length; i++) { }
Using .Length as a stop criteria allows the JIT to remove bounds checks on the array access, since it's a special case. Using i < 4 makes the JIT insert extra instructions to check each iteration whether or not i is out of bounds of the array, and throw an exception if that is the case. However, with .Length, it can guarantee you'll never go outside of the array bounds so the bounds checks are redundant, making it faster.
However, in most loops, the overhead of the loop is insignificant compared to the work done inside.
The discrepancy you're seeing can only be explained by the JIT I guess.
I wouldn't read too much into this - it isn't good profiling code, for the following reasons:
1. DateTime isn't meant for profiling. Use QueryPerformanceCounter or Stopwatch, which use high-resolution hardware counters.
2. Console.WriteLine does I/O, so there may be subtle effects such as buffering to take into account.
3. Running one iteration of each code block will never give you accurate results, because your CPU does a lot of funky on-the-fly optimisation such as out-of-order execution and instruction scheduling.
4. Chances are the code that gets JITed for both code blocks is fairly similar, so it is likely to be in the instruction cache for the second code block.
To get a better idea of the timing, I did the following:
Replaced the Console.WriteLine with a math expression (e^num)
Used QueryPerformanceCounter/QueryPerformanceFrequency through P/Invoke
Ran each code block 1 million times and averaged the results
When I did that I got the following results:
The for loop took 0.000676 milliseconds
The foreach loop took 0.000653 milliseconds
So foreach was very slightly faster but not by much
I then did some further experiments and ran the foreach block first and the for block second
When I did that I got the following results:
The foreach loop took 0.000702 milliseconds
The for loop took 0.000691 milliseconds
Finally I ran both loops together twice i.e for + foreach then for + foreach again
When I did that I got the following results:
The foreach loop took 0.00140 milliseconds
The for loop took 0.001385 milliseconds
So basically it looks to me like whatever code you run second runs very slightly faster, but not enough to be of any significance.
--Edit--
Here are a couple of useful links
How to time managed code using QueryPerformanceCounter
The instruction cache
Out of order execution
