for (i=0 ; i<=10; i++)
{
..
..
}
i=0;
while(i<=10)
{
..
..
i++;
}
Between a for loop and a while loop, which one is better performance-wise?
(update)
Actually - there is one scenario where the for construct is more efficient; looping on an array. The compiler/JIT has optimisations for this scenario as long as you use arr.Length in the condition:
for(int i = 0 ; i < arr.Length ; i++) {
Console.WriteLine(arr[i]); // skips bounds check
}
In this very specific case, it skips the bounds checking, as it already knows that it will never be out of bounds. Interestingly, if you "hoist" arr.Length to try to optimize it manually, you prevent this from happening:
int len = arr.Length;
for(int i = 0 ; i < len ; i++) {
Console.WriteLine(arr[i]); // performs bounds check
}
However, with other containers (List<T> etc), hoisting is fairly reasonable as a manual micro-optimisation.
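For example, with a List&lt;T&gt; the manual hoist might look like the sketch below (the list name and contents are made up for illustration; measure before assuming it helps):
using System;
using System.Collections.Generic;

var items = new List<int> { 1, 2, 3, 4, 5 };

// Hoist Count once; unlike arr.Length on an array, the JIT does not use
// list.Count to elide bounds checks, so caching it just saves the repeated
// property call on each iteration.
int count = items.Count;
for (int i = 0; i < count; i++)
{
    Console.WriteLine(items[i]);
}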
(end update)
Neither; a for loop is evaluated as a while loop under the hood anyway.
For example, §12.3.3.9 of ECMA 334 (definite assignment) dictates that a for loop:
for ( for-initializer ; for-condition ; for-iterator ) embedded-statement
is, from a definite-assignment perspective (which is not quite the same as saying "the compiler must generate this IL"), essentially equivalent to:
{
for-initializer ;
while ( for-condition ) {
embedded-statement ;
LLoop:
for-iterator ;
}
}
with continue statements that target the for statement being translated to goto statements targeting the label LLoop. If the for-condition is omitted from the for statement, then evaluation of definite assignment proceeds as if for-condition were replaced with true in the above expansion.
Now, this doesn't mean that the compiler has to do exactly the same thing, but in reality it pretty much does...
I would say they are the same and you should never do such micro-optimizations anyway.
The performance will be the same. However, unless you need to access the i variable outside the loop, you should use the for loop. This is cleaner, since i will only have scope within the block.
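A minimal sketch of that scoping difference (illustrative only):
for (int i = 0; i < 10; i++)
{
    // i is visible only inside this block
}
// using i here would be a compile error

int j = 0;
while (j < 10)
{
    j++;
}
// j is still in scope here and can be inspected after the loop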
Program efficiency comes from proper algorithms, good object-design, smart program architecture, etc.
Shaving a cycle or two with for loops vs while loops will NEVER make a slow program fast, or a fast program slow.
If you want to improve program performance in this section, find a way to either partially unroll the loop (see Duff's Device), or improve performance of what is done inside the loop.
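To make the unrolling idea concrete, here is a hedged sketch (not Duff's Device itself, just processing several elements per iteration; the array and the factor of four are arbitrary, and the payoff should always be measured):
int[] data = new int[1000]; // example input
long sum = 0;

int i = 0;
int limit = data.Length - (data.Length % 4);

// Process four elements per iteration to reduce loop overhead.
for (; i < limit; i += 4)
{
    sum += data[i];
    sum += data[i + 1];
    sum += data[i + 2];
    sum += data[i + 3];
}

// Handle the leftover elements (fewer than four).
for (; i < data.Length; i++)
{
    sum += data[i];
}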
Neither one. They are equivalent. You can think of the for loop as a more compact way of writing the while loop.
Yes, they are equivalent code snippets.
Is it bad practice to use a break statement inside a for loop?
Say I am searching for a value in an array: I compare inside a for loop, and when the value is found I break; to exit the loop.
Is this bad practice? I have seen the alternative used: define a variable vFound, set it to true when the value is found, and check vFound in the for statement's condition. But is it necessary to create a new variable just for this purpose?
I am asking in the context of a normal C or C++ for loop.
P.S: The MISRA coding guidelines advise against using break.
No, break is the correct solution.
Adding a boolean variable makes the code harder to read and adds a potential source of errors.
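To make the comparison concrete, here is a hedged sketch of the two styles in C#-flavoured syntax (the shape is the same in C/C++; the array, target and variable names are made up):
int[] values = { 3, 7, 42, 9 };
int target = 42;
int foundIndex = -1;

// With break: the exit is visible exactly where the match happens.
for (int i = 0; i < values.Length; i++)
{
    if (values[i] == target)
    {
        foundIndex = i;
        break;
    }
}

// With a flag: an extra variable, and the loop header now has two reasons to stop.
bool vFound = false;
for (int i = 0; i < values.Length && !vFound; i++)
{
    if (values[i] == target)
    {
        foundIndex = i;
        vFound = true;
    }
}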
Lots of answers here, but I haven't seen this mentioned yet:
Most of the "dangers" associated with using break or continue in a for loop are negated if you write tidy, easily-readable loops. If the body of your loop spans several screen lengths and has multiple nested sub-blocks, yes, you could easily forget that some code won't be executed after the break. If, however, the loop is short and to the point, the purpose of the break statement should be obvious.
If a loop is getting too big, use one or more well-named function calls within the loop instead. The only real reason to avoid doing so is for processing bottlenecks.
You can find all sorts of professional code with break statements in it. It makes perfect sense to use break whenever necessary. In your case this option is better than creating a separate variable just for the purpose of exiting the loop.
Using break as well as continue in a for loop is perfectly fine.
It simplifies the code and improves its readability.
Far from it being bad practice, Python (and other languages?) extended the for loop structure so that part of it will only be executed if the loop doesn't break.
for n in range(5):
    for m in range(3):
        if m >= n:
            print('stop!')
            break
        print(m, end=' ')
    else:
        print('finished.')
Output:
stop!
0 stop!
0 1 stop!
0 1 2 finished.
0 1 2 finished.
Equivalent code without break and that handy else:
for n in range(5):
    aborted = False
    for m in range(3):
        if not aborted:
            if m >= n:
                print('stop!')
                aborted = True
            else:
                print(m, end=' ')
    if not aborted:
        print('finished.')
General rule: if following a rule requires you to do something more awkward and harder to read than breaking the rule, then break the rule.
In the case of looping until you find something, you run into the problem of distinguishing found versus not found when you get out. That is:
for (int x = 0; x < fooCount; ++x)
{
    Foo foo = getFooSomehow(x);
    if (foo.bar == 42)
        break;
}
// So when we get here, did we find one, or did we fall out the bottom?
So okay, you can set a flag, or initialize a "found" value to null. But then you are back to managing extra state.
That's why in general I prefer to push my searches into functions:
Foo findFoo(int wantBar)
{
    for (int x = 0; x < fooCount; ++x)
    {
        Foo foo = getFooSomehow(x);
        if (foo.bar == wantBar)
            return foo;
    }
    // Not found
    return null;
}
This also helps to unclutter the code. In the main line, "find" becomes a single statement, and when the conditions are complex, they're only written once.
There is nothing inherently wrong with using a break statement, but nested loops can get confusing. To improve readability, many languages (Java, at least) support breaking to a label, which makes the intent much clearer.
int[] iArray = new int[]{0,1,2,3,4,5,6,7,8,9};
int[] jArray = new int[]{0,1,2,3,4,5,6,7,8,9};

// label for i loop
iLoop: for (int i = 0; i < iArray.length; i++) {
    // label for j loop
    jLoop: for (int j = 0; j < jArray.length; j++) {
        if (iArray[i] < jArray[j]) {
            // break i and j loops
            break iLoop;
        } else if (iArray[i] > jArray[j]) {
            // breaks only j loop
            break jLoop;
        } else {
            // unclear which loop is ending
            // (breaks only the j loop)
            break;
        }
    }
}
I will say that break (and return) statements often increase cyclomatic complexity which makes it harder to prove code is doing the correct thing in all cases.
If you're considering using a break while iterating over a sequence for some particular item, you might want to reconsider the data structure used to hold your data. Using something like a Set or Map may provide better results.
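For instance, a hedged C# sketch of what that could look like (the collections and values are made up for illustration):
using System.Collections.Generic;

var values = new HashSet<int> { 3, 7, 42, 9 };

// Membership is a single lookup (typically O(1)); no loop, no break.
bool found = values.Contains(42);

// If you need to find an item by key, a Dictionary does the search for you.
var namesById = new Dictionary<int, string> { [1] = "one", [2] = "two" };
if (namesById.TryGetValue(2, out string name))
{
    // use name here
}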
break is a completely acceptable statement to use (so is continue, btw). It's all about code readability -- as long as you don't have overcomplicated loops and such, it's fine.
It's not like it's in the same league as goto. :)
It depends on the language. While you can possibly check a boolean variable here:
for (int i = 0; i < 100 && stayInLoop; i++) { ... }
it is not possible to do so when iterating over an array:
for element in bigList: ...
Anyway, break would make both snippets more readable.
I agree with others who recommend using break. The obvious consequential question is why would anyone recommend otherwise? Well... when you use break, you skip the rest of the code in the block, and the remaining iterations. Sometimes this causes bugs, for example:
a resource acquired at the top of the block may be released at the bottom (this is true even for blocks inside for loops), but that release step may be accidentally skipped when a "premature" exit is caused by a break statement (in "modern" C++, "RAII" is used to handle this in a reliable and exception-safe way: basically, object destructors free resources reliably no matter how a scope is exited)
someone may change the conditional test in the for statement without noticing that there are other delocalised exit conditions
ndim's answer observes that some people may avoid breaks to maintain a relatively consistent loop run-time, but you were comparing break against use of a boolean early-exit control variable where that doesn't hold
Every now and then people observing such bugs realise they can be prevented/mitigated by this "no breaks" rule... indeed, there's a whole related strategy for "safer" programming called "structured programming", where each function is supposed to have a single entry and exit point too (i.e. no goto, no early return). It may eliminate some bugs, but it doubtless introduces others. Why do they do it?
they have a development framework that encourages a particular style of programming / code, and they have statistical evidence that this produces a net benefit in that limited framework, or
they've been influenced by programming guidelines or experience within such a framework, or
they're just dictatorial idiots, or
any of the above + historical inertia (relevant in that the justifications are more applicable to C than modern C++).
In your example you do not know the number of iterations of the for loop in advance. Why not use a while loop instead, which allows the number of iterations to be indeterminate at the outset?
It is hence generally unnecessary to use a break statement, as the loop can be better stated as a while loop.
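A hedged sketch of that while-based search (array, target and variable names are illustrative):
int[] values = { 3, 7, 42, 9 };
int target = 42;

int i = 0;
// Both exit conditions live in the loop header: run out of elements, or find the target.
while (i < values.Length && values[i] != target)
{
    i++;
}

bool found = i < values.Length; // true only if we stopped on a match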
I did some analysis on the codebase I'm currently working on (40,000 lines of JavaScript).
I found only 22 break statements, of those:
19 were used inside switch statements (we only have 3 switch statements in total!).
2 were used inside for loops - code that I immediately flagged for refactoring into separate functions, with the break replaced by a return statement.
As for the final break, inside a while loop... I ran git blame to see who wrote this crap!
So according to my statistics: if break is used outside of a switch, it is a code smell.
I also searched for continue statements. Found none.
It's perfectly valid to use break - as others have pointed out, it's nowhere in the same league as goto.
Although you might want to use the vFound variable when you want to check outside the loop whether the value was found in the array. Also from a maintainability point of view, having a common flag signalling the exit criteria might be useful.
I don't see any reason why it would be bad practice, PROVIDED that you want to completely STOP processing at that point.
In the embedded world, there is a lot of code out there that uses the following construct:
while (1)
{
    if (RCIF)
        gx();

    if (command_received == command_we_are_waiting_on)
        break;
    else if ((num_attempts > MAX_ATTEMPTS) || (TickGet() - BaseTick > MAX_TIMEOUT))
        return ERROR;

    num_attempts++;
}

if (call_some_bool_returning_function())
    return TRUE;
else
    return FALSE;
This is a very generic example, lots of things are happening behind the curtain, interrupts in particular. Don't use this as boilerplate code, I'm just trying to illustrate an example.
My personal opinion is that there is nothing wrong with writing a loop in this manner as long as appropriate care is taken to prevent remaining in the loop indefinitely.
Depends on your use case. There are applications where the runtime of a for loop needs to be constant (e.g. to satisfy some timing constraints, or to hide your data internals from timing based attacks).
In those cases it will even make sense to set a flag and only check the flag value AFTER all the for loop iterations have actually run. Of course, all the for loop iterations need to run code that still takes about the same time.
If you do not care about the run time... use break; and continue; to make the code easier to read.
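As a hedged illustration of that flag pattern, here is a constant-time style comparison sketch (illustrative only, not a vetted cryptographic routine):
// Compare two equal-length byte arrays without exiting early, so the running
// time does not depend on where the first mismatch occurs.
static bool ConstantTimeEquals(byte[] a, byte[] b)
{
    if (a.Length != b.Length)
        return false;

    int diff = 0;
    for (int i = 0; i < a.Length; i++)
    {
        // Accumulate differences instead of breaking on the first one.
        diff |= a[i] ^ b[i];
    }

    // The flag is examined only after every iteration has run.
    return diff == 0;
}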
Under the MISRA-C:1998 rules, which we use at my company for C development, the break statement shall not be used...
Edit: break is allowed in MISRA-C:2004.
Of course, break; is how you stop a for loop or foreach loop. I have used it in PHP in both foreach and for loops and it works.
I think it can make sense to have your checks at the top of your for loop, like so:
for (int i = 0; i < myCollection.Length && myCollection[i].SomeValue != "Break Condition"; i++)
{
    // loop body
}
or, if you need to process the row first:
for (int i = 0; i < myCollection.Length && (i == 0 ? true : myCollection[i - 1].SomeValue != "Break Condition"); i++)
{
    // loop body
}
This way you can have a single loop body without breaks.
for (int i = 0; i < myCollection.Length && (i == 0 ? true : myCollection[i - 1].SomeValue != "Break Condition"); i++)
{
    PerformLogic(myCollection[i]);
}
It can also be modified to move the break condition into its own function as well.
for (int i = 0; ShouldContinueLooping(i, myCollection); i++)
{
    PerformLogic(myCollection[i]);
}
I'm currently doing some graph calculations that involves adjacency matrices, and I'm in the process of optimizing every little bit of it.
One of the instructions that I think can be optimized is the one in the title, in its original form:
if ((adjMatrix[i][k] > 0) && (adjMatrix[k][j] > 0) && (adjMatrix[i][k] + adjMatrix[k][j] == w))
But for ease I'll stick to the form provided in the title:
if (a > 0 && b > 0 && a + b == c)
What I don't like is the > 0 part (being an adjacency matrix, in its initial form it contains only 0s and 1s, but as the program progresses, zeros are replaced with numbers from 2 onwards, until there are no more zeros).
I've done a test and removed the > 0 part for both a and b, and there was a significant improvement. Over 60088 iterations there was a decrease of 792ms, from 3672ms to 2880ms, which is 78% of the original time, which to me is excellent.
So my question is: can you think of some way of optimizing a statement like this and having the same result, in C#? Maybe some bitwise operations or something similar, I'm not quite familiar with them.
Answer with every idea that crosses your mind, even if it's not suitable. I'll do the speed testing myself and let you know of the results.
EDIT: This is code that I'm going to compile and run myself on my own computer. What I just described is not a problem or bottleneck that I'm complaining about. The program in its current form runs fine for my needs, but I just want to push it forward and make it as basic and optimized as possible. Hope this clarifies things a little bit.
EDIT: I believe providing the full code is useful, so here it is, but keep in mind that I want to concentrate strictly on the if statement. The program essentially takes an adjacency matrix and stores all the route combinations that exist. They are then sorted and trimmed according to some coefficients, but that part I haven't included.
int w, i, j, li, k;
int[][] adjMatrix = Data.AdjacencyMatrix;
List<List<List<int[]>>> output = new List<List<List<int[]>>>(c);

for (w = 2; w <= 5; w++)
{
    int[] plan;
    for (i = 0; i < c; i++)
    {
        for (j = 0; j < c; j++)
        {
            if (j == i) continue;
            if (adjMatrix[i][j] == 0)
            {
                for (k = 0; k < c; k++) // 11.7%
                {
                    if (
                        adjMatrix[i][k] > 0 &&
                        adjMatrix[k][j] > 0 &&
                        adjMatrix[i][k] + adjMatrix[k][j] == w) // 26.4%
                    {
                        adjMatrix[i][j] = w;
                        foreach (int[] first in output[i][k])
                            foreach (int[] second in output[k][j]) // 33.9%
                            {
                                plan = new int[w - 1];
                                li = 0;
                                foreach (int l in first) plan[li++] = l;
                                plan[li++] = k;
                                foreach (int l in second) plan[li++] = l;
                                output[i][j].Add(plan);
                            }
                    }
                }
                // Here the sorting and trimming occurs, but for the sake of
                // discussion, this is only a simple IEnumerable<T>.Take()
                if (adjMatrix[i][j] == w)
                    output[i][j] = output[i][j].Take(10).ToList();
            }
        }
    }
}
Added comments with profiler results in optimized build.
By the way, the timing results were obtained with exactly this piece of code (without the sorting and trimming, which dramatically increases execution time). No other parts were included in my measurement. There is a Stopwatch.StartNew() exactly before this code, and a Console.WriteLine(ElapsedMilliseconds) just after.
To give an idea about the size, the adjacency matrix has 406 rows / columns. So basically it is just nested for loops executing many, many iterations, so I haven't got many options for optimizing. Speed is not currently a problem, but I want to make sure I'm ready when it becomes one.
And to rule out the 'optimize other parts' objection: there is room for discussion on that subject too, but for this specific matter I just want a solution to this as an abstract problem / concept. It may help me and others understand how the C# compiler works and treats if statements and comparisons; that's my goal here.
You can replace a>0 && b>0 with (a-1)|(b-1) >= 0 for signed variables a and b.
Likewise, the condition x == w can be expressed as (x - w)|(w - x) >= 0, since when x != w either left or the right part of the expression will toggle the sign bit, which is preserved by bit-wise or. Everything put together would be (a-1)|(b-1)|(a+b-w)|(w-a-b) >= 0 expressed as a single comparison.
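A hedged sketch of that rewrite in C# (it assumes a, b and w are small enough that none of the subtractions or the addition can overflow; verify against the original condition before trusting it):
static bool Check(int a, int b, int w)
{
    // Original: a > 0 && b > 0 && a + b == w
    // Each term below is non-negative exactly when its condition holds,
    // and OR-ing the terms preserves the sign bit of any negative one.
    return ((a - 1) | (b - 1) | (a + b - w) | (w - a - b)) >= 0;
}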
Alternatively, a slight speed advantage may come from putting the tests in increasing order of probability: which is more likely, (a|b)>=0 or (a+b)==w?
I don't know how well C# optimizes things like this, but it's not difficult to store adjMatrix[i][k] and adjMatrix[k][j] in temporary variables so that memory isn't read twice. See if that changes things in any way.
It's hard to believe that arithmetic and comparison operations are the bottleneck here. Most likely it's memory access or branching. Ideally memory should be accessed in a linear fashion. Can you do something to make it more linear?
It would be good to see more code to suggest something more concrete.
Update: You could try to use two-dimensional array (int[,]) instead of a jagged one (int[][]). This might improve memory locality and element access speed.
The order of the logical tests can be important (as noted in other answers). Since you are using the short-circuit logical operator (&& instead of &), the conditions are evaluated from left to right, and the first one found to be false stops evaluation of the conditional and lets execution continue (without executing the if block). So if one condition is far more likely to be false than the rest, it should go first, followed by the next most likely to be false, and so on.
Another good optimization (which I suspect is really what gave you your performance increase --rather than simply dropping out some of the conditions) is to assign the values you are pulling from the arrays to local variables.
You are using adjMatrix[i][k] twice (as well as adjMatrix[k][j]) which is forcing the computer to dig through the array to get the value. Instead, before the if statement, set each of those to a local variable each time, then do your logic test against those variables.
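Applied to the question's inner loop, a hedged sketch of that caching (same names as the code above, behaviour unchanged):
for (k = 0; k < c; k++)
{
    // Read each matrix element once and reuse the local copies.
    int ik = adjMatrix[i][k];
    int kj = adjMatrix[k][j];

    if (ik > 0 && kj > 0 && ik + kj == w)
    {
        // ... same body as before ...
    }
}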
I agree with others who say it's unlikely that this simple statement is your bottleneck and suggest profiling before you decide on optimizing this specific line. But, as a theoretical experiment, you can do a couple of things:
Zero-checks: checking a != 0 && b != 0 will probably be somewhat faster than a > 0 && b > 0. Since your adjacency matrix is never negative, you can safely do this.
Reordering: if testing just a + b == c is faster, try using this test first and only then test for a and b individually. I doubt this will be faster because addition and equality check is more expensive than zero checks, but it might work for your particular case.
Avoid double indexing: look at the resulting IL with ILDASM or an equivalent to ensure that the array indexes are only dereferenced once, not twice. If they aren't, try putting them in local variables before the check.
Unless you're calling a function, you don't optimize conditionals. It's pointless. However, if you really want to, there are a few easy things to keep in mind.
Conditions are checked by testing whether something is zero (or not) or whether the highest bit is set (or not), and a compare (== or !=) is essentially a - b followed by checking whether the result is zero (== 0) or not (!= 0). So if a is unsigned, then a > 0 is the same as a != 0. If a is signed, then a < 0 is pretty good (this uses the check on the highest bit) and is better than a <= 0. Anyway, just knowing those rules may help.
Also fire up a profiler; you'll see the conditionals take a fraction of a percent of the time. If anything, you should ask how to write something that doesn't require conditionals.
Have you considered reversing the logic?
if (a > 0 && b > 0 && a + b == c)
could be rewritten to:
if (a == 0 || b == 0 || a + b != c) continue;
Since you don't want to do anything in the loop if any of the original conditions is false, try to abort as soon as possible (assuming the runtime is that smart, which I believe it is).
The heaviest operation should be last, because if the first statement is true, the others don't need to be checked. I assumed the addition is the heaviest part, but profiling might tell a different story.
However, I haven't profiled these scenarios myself, and with such trivial conditionals it might even be a drawback. It would be interesting to see your findings.
Consider the following snippet:
for (i = n - 1; i >= 0; i--)
{
    if (str[i] == ' ')
    {
        i += 2; // I am just incrementing it by 2, so that I can retrieve i+1
        continue;
    }
    // rest of the code with many similar increments of i
}
Suppose the loop never becomes infinite. If I traverse the loop with many such increments and decrements, I am sure the complexity would not be of order N or N squared. But is there a generalised complexity for this kind of solution?
P.S.: I know it's terrible code, but I still wanted to give it a try :-)
This is an infinite loop (infinite complexity) if you have a space in your string. Since you are using continue, control goes back to the for iterator: after i += 2 the i-- leaves you at i + 1, and the following decrement lands on the same space again.
Lets assume that str does not change over the course of this traversal.
You are traversing str backwards, and when you hit a space you move the index forward (net of the loop's i--, by one), so it will hit the same space again after a subsequent decrement and move forward again. Your claim that the loop is not infinite therefore does not seem valid.
If no mutable state affects the path taken by i, then either you go into an infinite loop, or you exit the loop in n or less steps. In the latter case, the worst case performance will be O(N).
If the loop mutates some other state, and that state affects the path, then it is impossible to predict the complexity without understanding the state and the mutation process.
(As written, the code will go into an infinite loop ... unless the section at the end of the loop does something to prevent it.)
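If the only goal is to look at the character after a space, a hedged sketch that avoids touching the index (and so keeps the plain O(N) bound) could look like this; str and n are the same as in the question:
for (int i = n - 1; i >= 0; i--)
{
    if (str[i] == ' ' && i + 1 < n)
    {
        // Peek at the following character without changing i, so every
        // index is visited exactly once and the loop stays O(N).
        char next = str[i + 1];
        // ... use next ...
        continue;
    }
    // rest of the code
}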
Consider this:
Requisite:
//The alphabet from a-z
List<char> letterRange = Enumerable.Range('a', 'z' - 'a' + 1)
    .Select(i => (char)i).ToList(); // 'z' - 'a' + 1 = 122 - 97 + 1 = 26 letters/iterations
Standard foreach:
foreach (var range in letterRange)
{
    Console.Write(range + ",");
}
Console.Write("\n");
Inbuilt foreach:
letterRange.ForEach(range => Console.Write(range + ",")); //delegate(char range) works as well
Console.Write("\n");
I have tried timing them against each other and the inbuilt foreach is up to 2 times faster, which seems like a lot.
I have googled around, but I can not seem to find any answers.
Also, regarding: In .NET, which loop runs faster, 'for' or 'foreach'?
for (int i = 0; i < letterRange.Count; i++)
{
    Console.Write(letterRange[i] + ",");
}
Console.Write("\n");
It doesn't actually execute faster than the standard foreach as far as I can tell.
I think your benchmark is flawed. Console.Write is an I/O bound task and it's the most time consuming part of your benchmark. This is a micro-benchmark and should be done very carefully for accurate results.
Here is a benchmark: http://diditwith.net/PermaLink,guid,506c0888-8c5f-40e5-9d39-a09e2ebf3a55.aspx (It looks good but I haven't validated it myself). The link appears to be broken as of 8/14/2015
When you enter a foreach loop, you enumerate over each item. That enumeration causes two method calls per iteration: one to IEnumerator<T>.MoveNext(), and another to IEnumerator<T>.Current. That's two call IL instructions.
List<T>.ForEach is faster because it has only one method call per iteration -- whatever your supplied Action<T> delegate is. That's one callvirt IL instruction. This is significantly faster than two call instructions.
As others pointed out, IO-bound instructions like Console.WriteLine() will pollute your benchmark. Do something that can be confined entirely to memory, like adding elements of a sequence together.
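A hedged sketch of such a memory-bound comparison (list size and structure are arbitrary; results will vary with runtime and hardware):
using System;
using System.Collections.Generic;
using System.Diagnostics;

var numbers = new List<int>();
for (int i = 0; i < 10_000_000; i++)
    numbers.Add(i);

long sum1 = 0, sum2 = 0;

// foreach: one MoveNext() and one Current access per element.
var sw = Stopwatch.StartNew();
foreach (int n in numbers)
    sum1 += n;
sw.Stop();
Console.WriteLine($"foreach:      {sw.ElapsedMilliseconds} ms (sum {sum1})");

// List<T>.ForEach: one delegate invocation per element.
sw = Stopwatch.StartNew();
numbers.ForEach(n => sum2 += n);
sw.Stop();
Console.WriteLine($"List.ForEach: {sw.ElapsedMilliseconds} ms (sum {sum2})");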
This is because the ForEach method does not use an enumerator. Enumerators (foreach) tend to be slower than a basic for loop.
Here's the code for the ForEach method:
public void ForEach(Action<T> action)
{
    if (action == null)
    {
        ThrowHelper.ThrowArgumentNullException(ExceptionArgument.match);
    }
    for (int i = 0; i < this._size; i++)
    {
        action(this._items[i]);
    }
}
While I would expect there to be a difference, I'm a little surprised it's as large as you indicated. Using the enumerator approach, you pay for an extra object creation, and extra steps are taken to ensure the enumerator is not invalidated (that the collection has not been modified). You're also going through an extra call to Current to get each member. All of this adds time.
One of Steve McConnell's checklist items is that you should not monkey with the loop index (Chapter 16, page 25, Loop Indexes, PDF format).
This makes intuitive sense and is a practice I've always followed except maybe as I learned how to program back in the day.
In a recent code review I found this awkward loop and immediately flagged it as suspect.
for ( int i=0 ; i < this.MyControl.TabPages.Count ; i++ )
{
    this.MyControl.TabPages.Remove ( this.MyControl.TabPages[i] );
    i--;
}
It's almost amusing since it manages to work by keeping the index at zero until all TabPages are removed.
This loop could have been written as
while (MyControl.TabPages.Count > 0)
    MyControl.TabPages.RemoveAt(0);
And since the control was in fact written at about the same time as the loop it could even have been written as
MyControl.TabPages.Clear();
I've since been challenged about the code-review issue and found that my articulation of why it is bad practice was not as strong as I'd have liked. I said it was harder to understand the flow of the loop and therefore harder to maintain and debug and ultimately more expensive over the lifetime of the code.
Is there a better articulation of why this is bad practice?
I think your articulation is great. Maybe it can be worded like so:
Since the logic can be expressed much more clearly, it should be.
Well, this adds confusion for little purpose - you could just as easily write:
while (MyControl.TabPages.Count > 0)
{
    MyControl.TabPages.Remove(MyControl.TabPages[0]);
}
or (simpler)
while (MyControl.TabPages.Count > 0)
{
    MyControl.TabPages.RemoveAt(0);
}
or (simplest)
MyControl.TabPages.Clear();
In all of the above, I don't have to squint and think about any edge-cases; it is pretty clear what happens when. If you are modifying the loop index, you can quickly make it quite hard to understand at a glance.
It's all about expectation.
When you use a loop counter, you expect it to be incremented (or decremented) by the same amount on each iteration of the loop.
If you mess (or monkey, if you like) with the loop counter, your loop does not behave as expected. That means it is harder to understand, and it increases the chance that your code is misinterpreted, which introduces bugs.
Or to (mis) quote a wise but fictional character:
complexity leads to misunderstanding
misunderstanding leads to bugs
bugs lead to the dark side.
I agree with your challenge. If they want to keep a for loop, the code:
for ( int i=0 ; i < this.MyControl.TabPages.Count ; i++ ) {
    this.MyControl.TabPages.Remove ( this.MyControl.TabPages[i] );
    i--;
}
reduces as follows:
for ( int i=0 ; i < this.MyControl.TabPages.Count ; ) {
    this.MyControl.TabPages.Remove ( this.MyControl.TabPages[i] );
}
and then to:
for ( ; 0 < this.MyControl.TabPages.Count ; ) {
    this.MyControl.TabPages.Remove ( this.MyControl.TabPages[0] );
}
But a while loop or a Clear() method, if that exists, are clearly preferable.
I think you could build a stronger argument by invoking Knuth's concept of literate programming: programs should be written not for computers, but to communicate concepts to other programmers. Thus the simpler loop:
while (this.MyControl.TabPages.Count > 0)
{
    this.MyControl.TabPages.Remove ( this.MyControl.TabPages[0] );
}
more clearly illustrates the intent - remove the first tab page until there are none left. I think most people would grok that much quicker than the original example.
This might be clearer:
while (this.MyControl.TabPages.Count > 0)
{
    this.MyControl.TabPages.Remove ( this.MyControl.TabPages[0] );
}
One argument that could be used is that it is much more difficult to debug such code, where the index is being changed twice.
The original code is highly redundant: it bends the action of the for loop into doing what is necessary. The increment is unnecessary and is merely balanced by the decrement. Those should be pre-increments, not post-increments, as well, because conceptually the post-increment is wrong here. The comparison with the tab-page count is semi-redundant, since it is a hackish way of checking that the container is not yet empty.
In short, it's unnecessary cleverness, it adds rather than removes redundancy. Since it can be both obviously simpler and obviously shorter, it's wrong.
The only reason to bother with an index at all would be if one were selectively erasing things. Even in that case, I would think it preferable to say:
i = 0;
while (i < MyControl.TabPages.Count)
    if (wantToDelete(MyControl.TabPages[i]))
        MyControl.TabPages.RemoveAt(i);
    else
        i++;
rather than jinxing the loop index after each removal. Or, better yet, have the index count downward so that when an item is removed it won't affect the index of future items needing removal. If many items are deleted, this may also help minimize the amount of time spent moving items around after each deletion.
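A hedged sketch of that downward-counting variant (wantToDelete is the same hypothetical predicate as above):
// Counting down means removing item i never shifts the indexes of the
// items we have not yet examined.
for (int i = MyControl.TabPages.Count - 1; i >= 0; i--)
{
    if (wantToDelete(MyControl.TabPages[i]))
        MyControl.TabPages.RemoveAt(i);
}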
I think pointing out that the loop's iterations are being controlled not by the "i++" as anyone would expect, but by the crazy "i--" setup, should have been enough.
I also think that altering the state of "i" by evaluating the count and then altering the count inside the loop may lead to potential problems. I would expect a for loop to generally have a "fixed" number of iterations, and the only part of the for loop condition that changes to be the loop variable "i".