Why are increment and decrement unary operations - C#

This may look like a very strange question, because I've read a lot of documentation where increment & decrement are described as unary operations without any explanation.
I could be wrong, but ++i seems equivalent to i += 1 (if there isn't any overriding):
int i = 1;
Console.WriteLine(++i); // 2
int j = 1;
Console.WriteLine(j+=1); // 2
In this case, pre-increment looks like simple syntactic sugar hiding the binary plus operator with 1 as the second argument.
Isn't it?
Why are increment and decrement independent unary operations - aren't they just the binary plus operator with a predefined second argument of 1?

Your question boils down to why ++ and -- exist in the first place, when normal + and - could do the job.
With today's compiler optimisation capabilities, it's really all for historical reasons. ++ and -- date back to the early (but not earliest) days of C. The Development of the C Language by the late Dennis Ritchie, the author of the C language, gives some interesting historical insights:
Thompson went a step further by inventing the ++ and -- operators,
which increment or decrement;
[...]
They were not in the earliest versions of B, but
appeared along the way.
[...]
a stronger motivation
for the innovation was probably his observation that the translation
of ++x was smaller than that of x=x+1.
So the definite reason seems to be lost in the mists of history, but this article by Ritchie strongly suggests that increment and decrement operators owe their existence to performance issues with early compilers.
When C++ was invented, compatibility with C was one of the major design goals by its inventor Bjarne Stroustrup, so it's needless to mention that all C operators also exist in C++. As Stroustrup himself says in his FAQ:
I wanted C++ to be compatible with a complete language with sufficient
performance and flexibility for even the most demanding systems
programming.
As for C#, one of its designers, Eric Lippert, once stated here on the Stack Exchange network that the only reason for them being supported in C# is consistency with older languages:
[...] these operators are horrid features. They're very confusing; after
over 25 years I still get pre- and post- semantics mixed up. They
encourage bad habits like combining evaluation of results with
production of side effects. Had these features not been in
C/C++/Java/JavaScript/etc, they would not have been invented for C#.
P.S.: C++ is special because, as you have mentioned (even with the incorrect word "overriding"), you can overload all of those operators, which has led to ++ and -- taking on slightly different semantics in the minds of many programmers. They sometimes read as "go ahead" ("go back") or "make one step forward" ("make one step backward"), typically with iterators. If you look at the ForwardIterator concept in C++, you will see that only the unary ++ is required by it.
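Incidentally, C#'s own operator overloading reflects the same unary shape: a user-defined ++ is declared with a single operand and returns the incremented value, rather than as a binary + with a fixed right-hand side. A minimal sketch (the Counter type is purely illustrative):
struct Counter
{
    public int Value;
    // Unary: one operand in, one value out. The compiler supplies the
    // pre/post plumbing (when to store the result and which value the
    // surrounding expression observes).
    public static Counter operator ++(Counter c) => new Counter { Value = c.Value + 1 };
}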

The answer is very simple.
A unary operation means the operator performs its operation on only one operand.
Also, i++ and i += 1 are different actions:
-> When i++ executes, the compiler goes to the variable's location and increments the value in place.
-> When i += 1 executes, i and 1 are loaded into a register/temporary variable, the addition is performed, and the new value is copied back into i's memory location.
So i += 1 costs more compared to i++.

Related

Why are arithmetical expressions not optimized for multiplication by 0 in C#

In order to evaluate a multiplication you have to evaluate the first term, then the second term and finally multiply the two values.
Given that every number multiplied by 0 is 0, if the evaluation of the first term returns 0 I would expect that the entire multiplication is evaluated to 0 without evaluating the second term.
However if you try this code:
var x = 0 * ComplexOperation();
The function ComplexOperation is called despite the fact that we know that x is 0.
The optimized behavior would be also consistent with the Boolean Operator '&&' that evaluates the second term only if the first one is evaluated as true. (The '&' operator evaluates both terms in any case)
I tested this behavior in C# but I guess it is the same for almost all languages.
Firstly, for floating-point, your assertion isn't even true! Consider that 0 * inf is not 0, and 0 * nan is not 0.
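A quick floating-point sanity check in C#:
Console.WriteLine(0.0 * double.PositiveInfinity); // NaN, not 0
Console.WriteLine(0.0 * double.NaN);              // NaN, not 0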
But more generally, if you're talking about optimizations, then I guess the compiler is free to not evaluate ComplexOperation if it can prove there are no side-effects.
However, I think you're really talking about short-circuit semantics (i.e. a language feature, not a compiler feature). If so, then the real justification is that C# is copying the semantics of earlier languages (originally C) to maintain consistency.
C# is not a functional language, so functions can have side effects. For example, you can print something from inside ComplexOperation or change global static variables. So whether it is called is defined by the contract of *.
You found yourself an example of different contracts with & and &&.
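To make the contract difference concrete, here is a small sketch (the method names are made up for illustration):
using System;

class Demo
{
    static bool NoisyBool() { Console.WriteLine("bool operand evaluated"); return true; }
    static int ComplexOperation() { Console.WriteLine("ComplexOperation ran"); return 42; }

    static void Main()
    {
        bool left = false;
        bool a = left && NoisyBool();      // prints nothing: && short-circuits by contract
        bool b = left & NoisyBool();       // prints the message: & always evaluates both operands
        int zero = 0;
        int x = zero * ComplexOperation(); // prints the message: * never short-circuits
        Console.WriteLine(x);              // 0
    }
}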
The language defines which operators have short-circuit semantics and which do not. Your ComplexOperation function may have side effects, those side effects may be deliberate, and the compiler is not free to assume that they should not occur just because the result of the function is effectively not used.
I will also add that this would make for confusing language design. There would be oodles of SO questions to the effect of...
// why is Foo only called 9 times?????????
for (int i = 0; i < 10; i++) {
    Console.WriteLine((i - 5) * Foo());
}
Why allow short-circuiting booleans and not short-circuiting 0*? Well, firstly I will say that mixing short-circuit boolean with side-effects is a common source of bugs in code - if used well among maintainers who understand it as an obvious pattern then it may be okay, but it's very hard for me to imagine programmers becoming at all used to a hole in the integers at 0.

Restricting post/pre increment operator over a value rather than a variable, property and indexer

From this post (and not only there) we got to the point that the ++ operator cannot be applied to expressions returning a value.
And it's really obvious that 5++ is better written as 5 + 1. I just want to summarize the whole thing around the increment/decrement operators. So let's go through these snippets of code, which could be helpful to somebody stuck with ++ for the first time at least.
// Literal
int x = 0++; // Error
// Constant
const int Y = 1;
double f = Y++; // error. makes sense, constants are not variables actually.
int z = AddFoo()++; // Error
Summary: ++ works for variables, properties (through syntactic sugar) and indexers (the same).
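For example, a minimal sketch of what the compiler accepts (the Box type and its members are made up):
class Box
{
    private int[] data = new int[4];
    public int Value { get; set; }
    public int this[int i] { get => data[i]; set => data[i] = value; }
}

class Demo
{
    static void Main()
    {
        var b = new Box();
        int v = 0;
        v++;        // variable: fine
        b.Value++;  // property: fine - a get, an add, then a set
        b[0]++;     // indexer: fine - the same pattern via the get/set accessors
        // 5++;     // error: the operand must be a variable, property or indexer
        System.Console.WriteLine($"{v} {b.Value} {b[0]}"); // 1 1 1
    }
}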
Now the interesting part - literal expressions are optimized by CSC, hence when we write, say,
int g = 5 + 1; // This is compiled to 6 in IL as one could expect.
IL_0001: ldc.i4.6 // Pushes the integer value of 6 onto the evaluation stack as an int32.
So 5++ doesn't have to mean 5 becomes 6; it could be shorthand for 5 + 1, just as x++ is shorthand for x = x + 1.
What's the real reason behind this restriction?
int p = Foo()++; // yes, you increase the return value of Foo() by 1 - what's wrong with that?
Examples of code that can lead to logical issues are appreciated.
One real-life example could be performing one more iteration than some count:
for (int i = 0; i < GetCount()++; i++) { }
Maybe the lack of use cases led the compiler team to avoid such a feature?
I don't insist this is a feature we're lacking; I just want to understand the dark side of this for compiler writers, perhaps, though I'm not one. But I know C++ allows this when the method returns a reference. I'm not a C++ guy either (very poor knowledge); I just want to get the real gist of the restriction.
Like, is it just that the C# folks opted to restrict ++ over value expressions, or are there definite cases leading to unpredictable results?
In order for a feature to be worth supporting, it really needs to be useful. The code you've presented is in every case less readable than the alternative, which is just to use the normal binary addition operator. For example:
for (int i = 0; i < GetCount() + 1; i++) { }
I'm all in favour of the language team preventing you from writing unreadable code when in every case where you could do it, there's a simpler alternative.
Well, before using these operators you should read up on how they do what they do. In particular, you should understand the difference between postfix and prefix, which could help figure out what is and isn't allowed.
The ++ and -- operators modify their operands, which means that the operand must be modifiable. If you can assign a value to the expression in question then it is modifiable, and is probably a variable (in C#).
Take a look at what these operators actually do. The postfix operators increment after your line of code executes. As for the prefix operators, they would need access to the value before the method has even been called. The way I read the syntax, ++lvalue (or ++variable) converts to the memory operations [read, write, read], while lvalue++ converts to [read, read, write], though many compilers probably optimize away the secondary reads.
So looking at foo()++, the value is going to be plopped dead in the center of executing code, which would mean the compiler would need to save the value somewhere more long-term in order for operations to be performed on that value after the line of code has finished executing. That is no doubt the exact reason C++ does not support this syntax either.
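Spelled out in C#, allowing Foo()++ would effectively amount to incrementing a temporary that nothing can observe afterwards - roughly this sketch:
class Demo
{
    static int Foo() => 41;

    static void Main()
    {
        // Roughly what int p = Foo()++; would have to mean if it were allowed:
        int temp = Foo();   // the return value lives only in a compiler temporary
        int p = temp;       // postfix: the expression yields the old value...
        temp = temp + 1;    // ...and the incremented temporary is then discarded
        System.Console.WriteLine(p); // 41 - the increment had no observable effect
    }
}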
If you were returning a reference, the compiler wouldn't have any trouble with the postfix. Of course, in C# a method that returns a value type (i.e. int, char, float, etc.) returns a copy of the value, not a reference.

Constants and compile time evaluation - Why change this behaviour

If you skip forward to approximately 13 minutes into this video by Eric Lippert, he describes a change that was made to the C# compiler that renders the following code invalid (apparently, prior to and including .NET 2 this code would have compiled).
int y;
int x = 10;
if (x * 0 == 0)
    y = 123;
Console.Write(y);
Now I understand that clearly any execution of the above code actually evaluates to
int y;
int x = 10;
y = 123;
Console.Write(y);
But what I don't understand is why it is considered "desirable" to make the original code fail to compile. I.e., what are the risks of allowing such inferences to run their course?
I'm still finding this question a bit confusing but let me see if I can rephrase the question into a form that I can answer. First, let me re-state the background of the question:
In C# 2.0, this code:
int x = 123;
int y;
if (x * 0 == 0)
    y = 345;
Console.WriteLine(y);
was treated as though you'd written
int x = 123;
int y;
if (true)
    y = 345;
Console.WriteLine(y);
which in turn is treated as:
int x = 123;
int y;
y = 345;
Console.WriteLine(y);
Which is a legal program.
But in C# 3.0 we took the breaking change to prevent this. The compiler no longer treats the condition as being "always true" despite the fact that you and I both know that it is always true. We now make this an illegal program, because the compiler reasons that it does not know that the body of the "if" is always executed, and therefore does not know that the local variable y is always assigned before it is used.
Why is the C# 3.0 behaviour correct?
It is correct because the specification states that:
a constant expression must contain only constants. x * 0 == 0 is not a constant expression because it contains a non-constant term, x.
the consequence of an if is only known to be always reachable if the condition is a constant expression equal to true.
Therefore, the code given should not classify the consequence of the conditional statement to be always reachable, and therefore should not classify the local y as being definitely assigned.
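To make that concrete, here is a small sketch of both cases (the diagnostic wording is approximate):
int y;
if (true)             // a constant expression: the consequence always runs
    y = 123;
Console.Write(y);     // compiles: y is definitely assigned

int x = 10;
int z;
if (x * 0 == 0)       // not a constant expression: it contains the variable x
    z = 123;
// Console.Write(z);  // error: use of unassigned local variable 'z'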
Why is it desirable that a constant expression contain only constants?
We want the C# language to be clearly understandable by its users, and correctly implementable by compiler writers. Requiring that the compiler make all possible logical deductions about the values of expressions works against those goals. It should be simple to determine whether a given expression is a constant, and if so, what its value is. Put simply, the constant evaluation code should have to know how to perform arithmetic, but should not need to know facts about arithmetical manipulations. The constant evaluator knows how to multiply 2 * 1, but it does not need to know the fact that "1 is the multiplicative identity on integers".
Now, it is possible that a compiler writer might decide that there are areas in which they can be clever, and thereby generate more optimal code. Compiler writers are permitted to do so, but not in a way that changes whether code is legal or illegal. They are only allowed to make optimizations that make the output of the compiler better when given legal code.
How did the bug happen in C# 2.0?
What happened was the compiler was written to run the arithmetic optimizer too early. The optimizer is the bit that is supposed to be clever, and it should have run after the program was determined to be legal. It was running before the program was determined to be legal, and was therefore influencing the result.
This was a potential breaking change: though it brought the compiler into line with the specification, it also potentially turned working code into error code. What motivated the change?
LINQ features, and specifically expression trees. If you said something like:
(int x)=>x * 0 == 0
and converted that to an expression tree, do you expect that to generate the expression tree for
(int x)=>true
? Probably not! You probably expected it to produce the expression tree for "multiply x by zero and compare the result to zero". Expression trees should preserve the logical structure of the expression in the body.
When I wrote the expression tree code it was not clear yet whether the design committee was going to decide whether
()=>2 + 3
was going to generate the expression tree for "add two to three" or the expression tree for "five". We decided on the latter -- constants are folded before expression trees are generated, but arithmetic should not be run through the optimizer before expression trees are generated.
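You can see both decisions reflected in the trees the compiler produces; a small sketch (the exact ToString output may differ slightly):
using System;
using System.Linq.Expressions;

class Demo
{
    static void Main()
    {
        // The arithmetic is not optimized away: the tree keeps the logical structure.
        Expression<Func<int, bool>> structural = x => x * 0 == 0;
        Console.WriteLine(structural);   // prints roughly: x => ((x * 0) == 0)

        // But constants are folded before the tree is built.
        Expression<Func<int>> folded = () => 2 + 3;
        Console.WriteLine(folded);       // prints roughly: () => 5
    }
}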
So, let's consider now the dependencies that we've just stated:
Arithmetic optimization has to happen before codegen.
Expression tree rewriting has to happen before arithmetic optimizations
Constant folding has to happen before expression tree rewriting
Constant folding has to happen before flow analysis
Flow analysis has to happen before expression tree rewriting (because we need to know if an expression tree uses an uninitialized local)
We've got to find an order to do all this work in that honours all those dependencies. The compiler in C# 2.0 did them in this order:
constant folding and arithmetic optimization at the same time
flow analysis
codegen
Where can expression tree rewriting go in there? Nowhere! And clearly this is buggy, because flow analysis is now taking into account facts deduced by the arithmetic optimizer. We decided to rework the compiler so that it did things in the order:
constant folding
flow analysis
expression tree rewriting
arithmetic optimization
codegen
Which obviously necessitates the breaking change.
Now, I did consider preserving the existing broken behaviour, by doing this:
constant folding
arithmetic optimization
flow analysis
arithmetic de-optimization
expression tree rewriting
arithmetic optimization again
codegen
Where the optimized arithmetic expression would contain a pointer back to its unoptimized form. We decided that this was too much complexity in order to preserve a bug. We decided that it would be better to instead fix the bug, take the breaking change, and make the compiler architecture more easily understood.
The specification states that the definite assignment of something that is only assigned inside an if block is undetermined. The spec says nothing about compiler magic that removes the unnecessary if block. In particular, it would make for a very confusing error message if you changed the if condition and suddenly got an error about y not being assigned: "huh? I haven't changed when y is assigned!".
The compiler is free to perform any obvious code removal it wants to, but first it needs to follow the specification for the rules.
Specifically, section 5.3.3.5 (MS 4.0 spec):
5.3.3.5 If statements
For an if statement stmt of the form:
if ( expr ) then-stmt else else-stmt
v has the same definite assignment state at the beginning of expr as at the beginning of stmt.
If v is definitely assigned at the end of expr, then it is definitely assigned on the control flow transfer to then-stmt and to either else-stmt or to the end-point of stmt if there is no else clause.
If v has the state “definitely assigned after true expression” at the end of expr, then it is definitely assigned on the control flow transfer to then-stmt, and not definitely assigned on the control flow transfer to either else-stmt or to the end-point of stmt if there is no else clause.
If v has the state “definitely assigned after false expression” at the end of expr, then it is definitely assigned on the control flow transfer to else-stmt, and not definitely assigned on the control flow transfer to then-stmt. It is definitely assigned at the end-point of stmt if and only if it is definitely assigned at the end-point of then-stmt.
Otherwise, v is considered not definitely assigned on the control flow transfer to either the then-stmt or else-stmt, or to the end-point of stmt if there is no else
For an initially unassigned variable to be considered definitely assigned at a certain location, an assignment to the variable must occur in every possible execution path leading to that location.
Technically, an execution path exists where the if condition is false; if y were also assigned in an else, then fine, but... the specification explicitly makes no demand to spot that the if condition is always true.
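In other words, assigning on the other path (or simply initializing y up front) satisfies the rule; a minimal sketch:
int x = 10;
int y;
if (x * 0 == 0)
    y = 123;
else
    y = 0;            // the "condition false" path now assigns y as well
Console.Write(y);     // compiles: y is definitely assigned on every path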

Is (--i == i++) an Undefined Behavior?

This question is related to my previous problem. The answer I got was "It is undefined behavior."
Please anyone explain:
What is an undefined behavior?
how can I know my code has an undefined behavior?
Example code:
int i = 5;
if (--i == i++)
    Console.WriteLine("equal and i=" + i);
else
    Console.WriteLine("not equal and i=" + i);
//output: equal and i=6
What is an Undefined-Behaviour?
It's quite simply any behaviour that is not specifically defined by the appropriate language specification. Some specs will list certain things as explicitly undefined, but really anything that's not described as being defined is undefined.
how can I know my code has an undefined behavior?
Hopefully your compiler will warn you - if that's not the case, you need to read the language specification and learn about all the funny corner cases and nooks & crannies that cause these sorts of problems.
Be careful out there!
It's undefined in C, but well-defined in C#:
From C# (ECMA-334) specification "Operator precedence and associativity" section (§14.2.1):
Except for the assignment operators and the null coalescing operator, all
binary operators are left-
associative, meaning that operations
are performed from left to right.
[Example: x + y + z is evaluated as (x + y) + z. end example]
So --i is evaluated first, changing i to 4 and evaluating to 4. Then i++ is evaluated, changing i to 5 but evaluating to 4.
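Walking through it step by step under that left-to-right rule (note the final value of i comes out as 5 here):
int i = 5;
bool equal = --i == i++;
// --i : i becomes 4, the left operand evaluates to 4
// i++ : the right operand evaluates to 4, then i becomes 5
// so the comparison is 4 == 4 and i finishes at 5
Console.WriteLine(equal + ", i=" + i); // True, i=5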
Yes, that expression is undefined behavior as well (in C and C++). See http://en.wikipedia.org/wiki/Sequence_point for some information on the rules; you can also search for "sequence point" more generally (that is the set of rules that your code violates).
(This assumes C or C++.)
Carl's answer is correct in general.
In specific, the problem is what Jeremiah pointed out: sequence points.
To clarify, the chunk of code (--i == ++i) is a single "happening". It's a chunk of code that's evaluated all at once. There is no defined order of what happens first. The left side could be evaluated first, or the right side could, or maybe the equality is compared, then i is incremented, then decremented. Each of these behaviors could cause this expression to have different results. It's "undefined" what will happen here. You don't know what the answer will be.
Compare this to the statement i = i+1; Here, the right side is always evaluated first, then its result is stored into i. This is well-defined. There's no ambiguity.
Hope that helps a little.
In C the result is undefined, in C# it's defined.
In C, the comparison is interpreted as:
Do all of these, in any order:
- Decrease i, then get value of i into x
- Get value of i into y, then increase i
Then compare x and y.
In C# there are more operation boundaries, so the comparison is interpreted as:
Decrease i
then get value of i into x
then get value of i into y
then increase i
then compare x and y.
It's up to the compiler to choose in which order the operations are done within an operation boundary, so putting contradictory operations within the same boundary causes the result to be undefined.
Because the C standard says so. And your example clearly shows undefined behaviour.
Depending on the order of evaluation, the comparison should be 4 == 5 or 5 == 6. And yet the condition returns True.
Your previous question was tagged [C], so I'm answering based on C, even though the code in your current question doesn't look like C.
The definition of undefined behavior in C99 says (§3.4.3):
1 undefined behavior
behavior, upon use of a nonportable or erroneous program construct or of erroneous data,
for which this International Standard imposes no requirements
2 NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).
Appendix J.2 of the C standard has a (long -- several pages) list of undefined behavior, though even that still isn't exhaustive. For the most part, undefined behavior means you broke the rules, so the way to know it is to know the rules.
Undefined behavior == the result cannot be guaranteed to always be the same whenever you run it in the exact same conditions, or the result cannot be guaranteed to always be the same whenever you use different compilers or runtimes to execute it.
In your code, since it uses an equality comparison operator, which does not specify which of the operands should be evaluated first, either --i or i++ may end up running first, and your answer will depend on the actual implementation of the compiler. If --i is executed first, it will be 4 == 4, i=5; if i++ is executed first, it will be 5 == 5, i=5.
The fact that the answer may turn out to be the same does not prevent the compiler from warning you that this is an undefined operation.
Now if this is a language that defines that the left hand side (or right hand side) should always be executed first, then the behavior will no longer be undefined.

In C# is there any significant performance difference for using UInt32 vs Int32

I am porting an existing application to C# and want to improve performance wherever possible. Many existing loop counters and array references are defined as System.UInt32, instead of the Int32 I would have used.
Is there any significant performance difference for using UInt32 vs Int32?
The short answer is "No. Any performance impact will be negligible".
The correct answer is "It depends."
A better question is, "Should I use uint when I'm certain I don't need a sign?"
The reason you cannot give a definitive "yes" or "no" with regards to performance is because the target platform will ultimately determine performance. That is, the performance is dictated by whatever processor is going to be executing the code, and the instructions available. Your .NET code compiles down to Intermediate Language (IL or Bytecode). These instructions are then compiled to the target platform by the Just-In-Time (JIT) compiler as part of the Common Language Runtime (CLR). You can't control or predict what code will be generated for every user.
So knowing that the hardware is the final arbiter of performance, the question becomes, "How different is the code .NET generates for a signed versus unsigned integer?" and "Does the difference impact my application and my target platforms?"
The best way to answer these questions is to run a test.
using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        const int iterations = 100;
        Console.WriteLine($"Signed: {Iterate(TestSigned, iterations)}");
        Console.WriteLine($"Unsigned: {Iterate(TestUnsigned, iterations)}");
        Console.Read();
    }

    private static void TestUnsigned()
    {
        uint accumulator = 0;
        var max = (uint)Int32.MaxValue;
        for (uint i = 0; i < max; i++) ++accumulator;
    }

    static void TestSigned()
    {
        int accumulator = 0;
        var max = Int32.MaxValue;
        for (int i = 0; i < max; i++) ++accumulator;
    }

    static TimeSpan Iterate(Action action, int count)
    {
        var elapsed = TimeSpan.Zero;
        for (int i = 0; i < count; i++)
            elapsed += Time(action);
        return new TimeSpan(elapsed.Ticks / count);
    }

    static TimeSpan Time(Action action)
    {
        var sw = new Stopwatch();
        sw.Start();
        action();
        sw.Stop();
        return sw.Elapsed;
    }
}
The two test methods, TestSigned and TestUnsigned, each perform roughly 2 billion iterations (up to Int32.MaxValue) of a simple increment on a signed and unsigned integer, respectively. The test code runs 100 iterations of each test and averages the results. This should weed out any potential inconsistencies. The results on my i7-5960X compiled for x64 were:
Signed: 00:00:00.5066966
Unsigned: 00:00:00.5052279
These results are nearly identical, but to get a definitive answer, we really need to look at the bytecode generated for the program. We can use ILDASM as part of the .NET SDK to inspect the code in the assembly generated by the compiler.
Here, we can see that the C# compiler favors signed integers and actually performs most operations natively as signed integers and only ever treats the value in-memory as unsigned when comparing for the branch (a.k.a jump or if). Despite the fact that we're using an unsigned integer for both the iterator AND the accumulator in TestUnsigned, the code is nearly identical to the TestSigned method except for a single instruction: IL_0016. A quick glance at the ECMA spec describes the difference:
blt.un.s :
Branch to target if less than (unsigned or unordered), short form.
blt.s :
Branch to target if less than, short form.
Being such a common instruction, it's safe to assume that most modern high-power processors will have hardware instructions for both operations and they'll very likely execute in the same number of cycles, but this is not guaranteed. A low-power processor may have fewer instructions and not have a branch for unsigned int. In this case, the JIT compiler may have to emit multiple hardware instructions (A conversion first, then a branch, for instance) to execute the blt.un.s IL instruction. Even if this is the case, these additional instructions would be basic and probably wouldn't impact the performance significantly.
So in terms of performance, the long answer is "It is unlikely that there will be a performance difference at all between using a signed or an unsigned integer. If there is a difference, it is likely to be negligible."
So then if the performance is identical, the next logical question is, "Should I use an unsigned value when I'm certain I don't need a sign?"
There are two things to consider here: first, unsigned integers are NOT CLS-compliant, meaning that you may run into issues if you're exposing an unsigned integer as part of an API that another program will consume (such as if you're distributing a reusable library). Second, most operations in .NET, including the method signatures exposed by the BCL (for the reason above), use a signed integer. So if you plan on actually using your unsigned integer, you'll likely find yourself casting it quite a bit. This is going to have a very small performance hit and will make your code a little messier. In the end, it's probably not worth it.
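For instance, if an assembly is marked CLS-compliant, exposing a uint in a public signature draws a compiler warning (the exact warning text in the comment below is approximate):
using System;

[assembly: CLSCompliant(true)]

public class CounterApi
{
    // Compiler warning: argument type 'uint' is not CLS-compliant.
    public void SetCount(uint count) { }

    // The CLS-friendly alternative, which is what most of the BCL exposes.
    public void SetCountSigned(int count) { }
}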
TLDR; back in my C++ days, I'd say "Use whatever is most appropriate and let the compiler sort the rest out." C# is not quite as cut-and-dry, so I would say this for .NET: There's really no performance difference between a signed and unsigned integer on x86/x64, but most operations require a signed integer, so unless you really NEED to restrict the values to positive ONLY or you really NEED the extra range that the sign bit eats, stick with a signed integer. Your code will be cleaner in the end.
I don't think there are any performance considerations, other than possible difference between signed and unsigned arithmetic at the processor level but at that point I think the differences are moot.
The bigger difference is in the CLS compliance as the unsigned types are not CLS compliant as not all languages support them.
I haven't done any research on the matter in .NET, but in the olden days of Win32/C++, if you wanted to cast a "signed int" to a "signed long", the CPU had to run an op to extend the sign. To cast an "unsigned int" to an "unsigned long", it just had to stuff zeros into the upper bytes. The savings were on the order of a couple of clock cycles (i.e., you'd have to do it billions of times to have an even perceivable difference).
There is no difference, performance wise. Simple integer calculations are well known and modern CPUs are highly optimized to perform them quickly.
These types of optimizations are rarely worth the effort. Use the data type that is most appropriate for the task and leave it at that. If this thing so much as touches a database you could probably find a dozen tweaks in the DB design, query syntax or indexing strategy that would offset a code optimization in C# by a few hundred orders of magnitude.
It's going to allocate the same amount of memory either way (although the unsigned one can store a larger value, as it's not reserving space for the sign). So I doubt you'll see a 'performance' difference, unless you use large values / negative values that cause one option or the other to explode.
This isn't really to do with performance, but rather with the requirements for the loop counter.
Perhaps there were lots of iterations to complete:
Console.WriteLine(Int32.MaxValue);  // Max iteration 2147483647
Console.WriteLine(UInt32.MaxValue); // Max iteration 4294967295
The unsigned int may be there for a reason.
I've never sympathized with the use of int in loops like for (int i = 0; i < bla; i++), and oftentimes I would also like to use unsigned just to avoid checking the range. Unfortunately (both in C++ and, for similar reasons, in C#), the recommendation is not to use unsigned to gain one more bit or to ensure non-negativity:
"Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules"
page 73 from "The C++ Programming Language" by the language's creator Bjarne Stroustrup.
My understanding (I apologize for not having the source at hand) is that hardware makers also have a bias to optimize for integer types.
Nonetheless, it would be interesting to do the same exercise that @Robear did above but using int with some positivity assert versus unsigned.
