This question already has answers here:
How to use C#'s ternary operator with two byte values?
The following code works.
byte b = 1;
But I noticed the following code doesn't work
byte b = BooleanProperty ? 2 : 3; // error
The compiler says
Cannot convert source type 'int' to target type 'byte'
I understand that int type cannot be converted to byte type implicitly.
But why does the former code work while the latter doesn't?
There's an implicit conversion from int constants (not just literals, but any compile-time constant expression of type int) to byte, so long as the value is in range. This is from section 6.1.9 of the C# 5 specification:
An implicit constant expression conversion permits the following conversions:
A constant-expression (§7.19) of type int can be converted to type sbyte, byte, short, ushort, uint, or ulong, provided the value of the constant-expression is within the range of the destination type.
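For instance, a minimal sketch of that rule in action (the names here are purely illustrative):
const int mask = 3;   // compile-time constant of type int whose value fits in a byte
byte b1 = mask;       // OK: implicit constant expression conversion to byte
byte b2 = 2 + 1;      // OK too: 2 + 1 is a constant expression with the value 3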
However, there's no implicit conversion from a "general" expression of type int to byte - and that's what you've got in your second case. It's a bit like this:
int tmp = BooleanProperty ? 2 : 3;
byte b = tmp; // Not allowed
Note that the use of the conditional expression doesn't play any part in inferring its type - and as both the second and third operands are of type int, the overall expression is of type int as well.
So if you understand why the snippet above where I've separated the code into two statements doesn't compile, that explains why your single-line version with the conditional doesn't either.
There are two ways of fixing it:
Change the second and third operands to expressions of type byte so that the conditional expression has an overall type of byte:
byte b = BooleanProperty ? (byte) 2 : (byte) 3;
Cast the result of the conditional expression:
byte b = (byte) (BooleanProperty ? 2 : 3);
When it comes to integer literals in C#:
If the literal has no suffix, it has the first of these types in which
its value can be represented: int, uint, long, ulong.
The compiler is smart enough to deduce that in the case of byte b = 1; the literal fits the byte type, but it's not smart enough to figure that out in the case of the conditional (ternary) operator ?:.
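A rough sketch of how that plays out (variable names are just illustrative):
var a = 1;            // int: the first type in the list that can represent 1
var b = 3000000000;   // uint: the value doesn't fit in int, so the next type is used
var c = 1L;           // long: the L suffix forces the type
byte d = 1;           // fine: the constant 1 fits in byte, so the implicit constant conversion applies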
Related
Based on this interesting question: Addition of int and uint and toying around with constant folding as mentioned in Nicholas Carey's answer, I've stumbled upon a seemingly inconsistent behavior of the compiler:
Consider the following code snippet:
int i = 1;
uint j = 2;
var k = i - j;
Here the compiler correctly resolves k to long. This particular behavior is well defined in the specifications as explained in the answers to the previously referred question.
What was surprising to me, is that the behavior changes when dealing with literal constants or constants in general. Reading Nicholas Carey's answer I realized that the behavior could be inconsistent so I checked and sure enough:
const int i = 1;
const uint j = 2;
var k = i - j; //Compile time error: The operation overflows at compile time in checked mode.
k = 1 - 2u; //Compile time error: The operation overflows at compile time in checked mode.
k in this case is resolved to UInt32 (uint).
Is there a reason for the behavior being different when dealing with constants or is this a small but unfortunate "bug" (lack of a better term) in the compiler?
From the C# specification version 5, section 6.1.9, Constant Expressions only allow the following implicit conversions
6.1.9 Implicit constant expression conversions
An implicit constant expression conversion permits the following conversions:
• A constant-expression (§7.19) of type int can be converted to type sbyte, byte, short, ushort, uint, or ulong, provided the value of the constant-expression is within the range of the destination type.
• A constant-expression of type long can be converted to type ulong, provided the value of the constant-expression is not negative.
Note that long is not on the list of int conversions.
The other half of the problem is that only a small number of numeric promotions happen for binary operations:
(From Section 7.3.6.2 Binary numeric promotions):
If either operand is of type decimal, the other operand is converted to type decimal, or a binding-time error occurs if the other operand is of type float or double.
Otherwise, if either operand is of type double, the other operand is converted to type double.
Otherwise, if either operand is of type float, the other operand is converted to type float.
Otherwise, if either operand is of type ulong, the other operand is converted to type ulong, or a binding-time error occurs if the other operand is of type sbyte, short, int, or long.
Otherwise, if either operand is of type long, the other operand is converted to type long.
Otherwise, if either operand is of type uint and the other operand is of type sbyte, short, or int, both operands are converted to type long.
Otherwise, if either operand is of type uint, the other operand is converted to type uint.
Otherwise, both operands are converted to type int.
REMEMBER: because the int operand is a constant, it can be implicitly converted to uint (per the constant expression conversion above), so both operands end up as uint rather than being promoted to long.
Check out this answer here
The problem is that you are using const.
When the operands are const, the compiler treats them exactly as if you had hard-coded the literal values. Since the values are 1 and 2 and 1 fits in the range of uint, the int constant is converted to uint and the subtraction is performed as uint: 1u - 2u would wrap around to +4,294,967,295 (0xFFFFFFFF), which is reported as an overflow in checked mode.
The compiler is allowed to look at literals and interpret them differently than it would other variables. Since a const will never change, the compiler can make guarantees it otherwise couldn't: in this instance it can guarantee that 1 is within the range of a uint, so it can convert it implicitly. In normal circumstances (without the const) it cannot make that guarantee.
a signed int ranges from -2,147,483,648 (0x80000000) to +2,147,483,647 (0x7FFFFFFF).
an unsigned int ranges from 0 (0x00000000) to +4,294,967,295 (0xFFFFFFFF).
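For illustration, two ways to make the constant subtraction compile (a small sketch, not from the original answer):
var k1 = 1L - 2u;             // forcing a long operand: k1 is long with the value -1
var k2 = unchecked(1 - 2u);   // explicitly allowing the wrap-around: k2 is uint with the value 4294967295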
Moral of the story, be careful when mixing const and var, you may get something you don't expect.
The MSDN page for byte says that you can declare a byte like this:
byte myByte = 255;
and that
In the preceding declaration, the integer literal 255 is implicitly
converted from int to byte. If the integer literal exceeds the range
of byte, a compilation error will occur.
So I'm struggling to understand why the following gives me a compile error of "Cannot implicitly convert type 'int' to 'byte'":
byte value = on ? 1 : 0; // on is defined as a bool earlier
I'm compiling this on VS 2012 as a Windows Store App project, if that makes any difference.
Because this:
on ? 1 : 0
Isn't an integer literal. It's an expression that returns an integer. Moreover, this expression cannot be evaluated until runtime.
When there's a literal, the compiler can evaluate it at compile time and ensure it satisfies any range requirements - as the page says, it's up to the compiler to produce an error if the value is out of range.
And from your same page:
You cannot implicitly convert non-literal numeric types of larger storage size to byte.
Per @Jeppe Stig Nielsen's comment - it does also work if the value is a constant (it doesn't have to be a literal as the first page says). The C# spec says:
6.1.9 Implicit constant expression conversions
An implicit constant expression conversion permits the following conversions:
• A constant-expression (§7.19) of type int can be converted to type sbyte, byte, short, ushort, uint, or ulong, provided the value of the constant-expression is within the range of the destination type.
• A constant-expression of type long can be converted to type ulong, provided the value of the constant-expression is not negative.
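A small sketch of that constant case (assuming the lines sit inside a method body):
const bool on = true;       // compile-time constant
byte value = on ? 1 : 0;    // compiles: the whole conditional is a constant expression with the value 1
bool flag = true;           // ordinary variable
// byte other = flag ? 1 : 0;  // would not compile: no longer a constant expression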
If I have two bytes a and b, how come:
byte c = a & b;
produces a compiler error about casting byte to int? It does this even if I put an explicit cast in front of a and b.
Also, I know about this question, but I don't really know how it applies here. This seems like it's a question of the return type of operator &(byte operand, byte operand2), which the compiler should be able to sort out just like any other operator.
Why do C#'s bitwise operators always return int regardless of the format of their inputs?
I disagree with always. This works and the result of a & b is of type long:
long a = 0xffffffffffff;
long b = 0xffffffffffff;
long x = a & b;
The return type is not int if one or both of the arguments are long, ulong or uint.
Why do C#'s bitwise operators return int if their inputs are bytes?
The result of byte & byte is an int because there is no & operator defined on byte. (Source)
An & operator exists for int and there is also an implicit cast from byte to int so when you write byte1 & byte2 this is effectively the same as writing ((int)byte1) & ((int)byte2) and the result of this is an int.
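So the usual fix is to cast the int result back down, e.g. (a minimal sketch):
byte a = 0x0F;
byte b = 0x03;
byte c = (byte)(a & b);   // the & produces an int; the cast narrows it back to byte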
This behavior is a consequence of the design of IL, the intermediate language generated by all .NET compilers. While it supports the short integer types (byte, sbyte, short, ushort), it has only a very limited number of operations on them. Load, store, convert, create array, that's all. This is not an accident, those are the kind of operations you could execute efficiently on a 32-bit processor, back when IL was designed and RISC was the future.
The binary comparison and branch operations only work on int32, int64, native int, native floating point, object and managed reference. These operands are 32-bits or 64-bits on any current CPU core, ensuring the JIT compiler can generate efficient machine code.
You can read more about it in ECMA-335, Partition I, chapter 12.1 and Partition III, chapter 1.5.
I wrote a more extensive post about this over here.
Binary operators are not defined for byte types (among others). In fact, all binary (numeric) operators act only on the following native types:
int
uint
long
ulong
float
double
decimal
If there are any other types involved, it will use one of the above.
It's all in the C# specs version 5.0 (Section 7.3.6.2):
Binary numeric promotion occurs for the operands of the predefined +, –, *, /, %, &, |, ^, ==, !=, >, <, >=, and <= binary operators. Binary numeric promotion implicitly converts both operands to a common type which, in case of the non-relational operators, also becomes the result type of the operation. Binary numeric promotion consists of applying the following rules, in the order they appear here:
If either operand is of type decimal, the other operand is converted to type decimal, or a compile-time error occurs if the other operand is of type float or double.
Otherwise, if either operand is of type double, the other operand is converted to type double.
Otherwise, if either operand is of type float, the other operand is converted to type float.
Otherwise, if either operand is of type ulong, the other operand is converted to type ulong, or a compile-time error occurs if the other operand is of type sbyte, short, int, or long.
Otherwise, if either operand is of type long, the other operand is converted to type long.
Otherwise, if either operand is of type uint and the other operand is of type sbyte, short, or int, both operands are converted to type long.
Otherwise, if either operand is of type uint, the other operand is converted to type uint.
Otherwise, both operands are converted to type int.
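You can see the promotion directly by inspecting the runtime type of the result, a quick sketch:
byte x = 1;
byte y = 2;
object result = x & y;                      // both operands are promoted to int; the boxed result is an Int32
Console.WriteLine(result.GetType().Name);   // prints "Int32"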
It's because & is defined on integers, not on bytes, and the compiler implicitly casts your two arguments to int.
This question already has answers here:
byte + byte = int... why?
Why does the following raise a compile-time error, "Cannot implicitly convert type 'int' to 'byte'":
byte a = 25;
byte b = 60;
byte c = a ^ b;
This would make sense if I were using an arithmetic operator, because the result of a + b could be larger than can be stored in a single byte.
However, applying this to the XOR operator is pointless. XOR here is a bitwise operation that can never overflow a byte.
Using a cast around the whole expression works:
byte c = (byte)(a ^ b);
I can't give you the rationale, but I can tell you why the compiler has this behavior from the standpoint of the rules it has to follow (which might not really be what you're interested in knowing).
From an old copy of the C# spec (I should probably download a newer version), emphasis added:
14.2.6.2 Binary numeric promotions. This clause is informative.
Binary numeric promotion occurs for the operands of the predefined +, –, *, /, %, &, |, ^, ==, !=, >, <, >=, and <= binary operators. Binary numeric promotion implicitly converts both operands to a common type which, in case of the non-relational operators, also becomes the result type of the operation. Binary numeric promotion consists of applying the following rules, in the order they appear here:
If either operand is of type decimal, the other operand is converted to type decimal, or a compile-time error occurs if the other operand is of type float or double.
Otherwise, if either operand is of type double, the other operand is converted to type double.
Otherwise, if either operand is of type float, the other operand is converted to type float.
Otherwise, if either operand is of type ulong, the other operand is converted to type ulong, or a compile-time error occurs if the other operand is of type sbyte, short, int, or long.
Otherwise, if either operand is of type long, the other operand is converted to type long.
Otherwise, if either operand is of type uint and the other operand is of type sbyte, short, or int, both operands are converted to type long.
Otherwise, if either operand is of type uint, the other operand is converted to type uint.
Otherwise, both operands are converted to type int.
So, basically operands smaller than an int will be converted to int for these operators (and the result will be an int for the non-relational ops).
I said that I couldn't give you a rationale; however, I will make a guess at one - I think that the designers of C# wanted to make sure that operations that might lose information if narrowed would need to have that narrowing operation made explicit by the programmer in the form of a cast. For example:
byte a = 200;
byte b = 100;
byte c = a + b; // compile error: a + b is an int; implicit narrowing would silently truncate 300 to 44
While this kind of truncation wouldn't happen when performing an xor operation between two byte operands, I think that the language designers probably didn't want to have a more complex set of rules where some operations would need explicit casts and other not.
Just a small note: the above quote is 'informative' not 'normative', but it covers all the cases in an easy-to-read form. Strictly speaking (in a normative sense), the reason the ^ operator behaves this way is that the closest overload for that operator when dealing with byte operands is (from 14.10.1 "Integer logical operators"):
int operator ^(int x, int y);
Therefore, as the informative text explains, the operands are promoted to int and an int result is produced.
FWIW
byte a = 25;
byte b = 60;
a = a ^ b;
does not work. However
byte a = 25;
byte b = 60;
a ^= b;
does work.
The demigod programmer from Microsoft has an answer: Link
And maybe it's more about compiler design. Generalizing the compilation process keeps the compiler simpler: it doesn't have to look at the operand types of each operator, so bitwise operations are lumped into the same category as arithmetic operators and are thereby subject to the same type widening.
Link dead, archive here:
https://web.archive.org/web/20140118171646/http://blogs.msdn.com/b/oldnewthing/archive/2004/03/10/87247.aspx
I guess it's because the XOR operator is defined for booleans and integers.
And a cast of the integer result back to a byte is a potentially information-losing conversion; hence it needs an explicit cast (a nod from the programmer).
It seems to be because in the C# language specification, the operator is defined only for the wider integral types (int, uint, long, and ulong):
http://msdn.microsoft.com/en-us/library/aa691307%28v=VS.71%29.aspx
So what actually happens is that the compiler implicitly converts the byte operands to int, because there is no loss of data that way. But the result (which is an int) cannot be implicitly narrowed back down without potential loss of data, so you need to tell the compiler explicitly that you know what you are doing!
As to why the two bytes have to be converted to ints to do the XOR?
If you want to dig into it, 12.1.2 of the CLI Spec (Partition I) describes the fact that, on the evaluation stack, only int or long can exist. All shorter integral types have to be expanded during evaluation.
Unfortunately, I can't find a suitable link directly to the CLI Spec - I've got a local copy as PDF, but can't remember where I got it from.
This has more to do with the rules surrounding implicit and explicit casting in the CLI specification. An integer (int = System.Int32 = 4 bytes) is wider than a byte (1 byte, obviously!). Therefore any cast from int to byte is potentially a narrowing cast. Therefore, the compiler wants you to make this explicit.
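For example, a small sketch of the data loss that an explicit narrowing cast can silently cause:
int wide = 300;              // does not fit in a byte (0..255)
byte narrow = (byte)wide;    // explicit narrowing cast: the high-order bits are discarded
Console.WriteLine(narrow);   // prints 44 (300 mod 256)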
I have the following C# code:
byte rule = 0;
...
rule = rule | 0x80;
which produces the error:
Cannot implicitly convert type 'int' to 'byte'. An explicit conversion exists (are you missing a cast?)
[Update: first version of the question was wrong ... I misread the compiler output]
Adding the cast doesn't fix the problem:
rule = rule | (byte) 0x80;
I need to write it as:
rule |= 0x80;
Which just seems weird. Why is the |= operator any different to the | operator?
Is there any other way of telling the compiler to treat the constant as a byte?
@Giovanni Galbo: yes and no. The code is dealing with the programming of the flash memory in an external device, and logically represents a single byte of memory. I could cast it later, but this seemed more obvious. I guess my C heritage is showing through too much!
@Jonathon Holland: the 'as' syntax looks neater but unfortunately doesn't appear to work ... it produces:
The as operator must be used with a reference type or nullable type ('byte' is a non-nullable value type)
C# does not have a literal suffix for byte. u = uint, l = long, ul = ulong, f = float, m = decimal, but no byte. You have to cast it.
This works:
rule = (byte)(rule | 0x80);
Apparently the expression 'rule | 0x80' returns an int even if you define 0x80 as 'const byte 0x80'.
The term you are looking for is "Literal" and unfortunately C# does not have a byte literal.
Here's a list of all C# literals.
int rule = 0;
rule |= 0x80;
http://msdn.microsoft.com/en-us/library/kxszd0kx.aspx The | operator is defined for all value types. I think this will produce the intended result. The "|=" operator is an or-then-assign operator, which is simply shorthand for rule = rule | 0x80.
One of the niftier things about C# is that you can often just use a wider type: an int can hold anything a byte can, the compiler just won't let you mix the two freely without casts. Simply sticking with one type (in this case, int) works well. If you're concerned about 64-bit readiness, note that int is always Int32, even when running in x64 mode.
According to the ECMA Specification, pg 72 there is no byte literal. Only integer literals for the types: int, uint, long, and ulong.
Almost five years on and nobody has actually answered the question.
A couple of answers claim that the problem is the lack of a byte literal, but this is irrelevant. If you calculate (byte1 | byte2) the result is of type int. Even if "b" were a literal suffix for byte the type of (23b | 32b) would still be int.
The accepted answer links to an MSDN article claiming that operator| is defined for all integral types, but this isn't true either.
operator| is not defined on byte so the compiler uses its usual overload resolution rules to pick the version that's defined on int. Hence, if you want to assign the result to a byte you need to cast it:
rule = (byte)(rule | 0x80);
The question remains, why does rule |= 0x80; work?
Because the C# specification has a special rule for compound assignment that allows you to omit the explicit conversion. In the compound assignment x op= y the rule is:
if the selected operator is a predefined operator, if the return type of the selected operator is explicitly convertible to the type of x, and if y is implicitly convertible to the type of x or the operator is a shift operator, then the operation is evaluated as x = (T)(x op y), where T is the type of x, except that x is evaluated only once.
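In other words, the compound form is shorthand for the cast you would otherwise write by hand, roughly (a sketch):
byte rule = 0;
rule |= 0x80;                  // allowed: the compiler inserts the narrowing cast for you
rule = (byte)(rule | 0x80);    // what the compound assignment effectively expands to
// rule = rule | 0x80;         // would not compile: the | result is an int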
Looks like you may just have to do it the ugly way: http://msdn.microsoft.com/en-us/library/5bdb6693.aspx.
Unfortunately, your only recourse is to do it just the way you have. There is no suffix to mark the literal as a byte. The | operator does not provide for implicit conversion as an assignment (i.e. initialization) would.
Apparently the expression 'rule | 0x80' returns an int even if you define 0x80 as 'const byte 0x80'.
I think the rule is that numbers like 0x80 default to int unless you include a literal suffix. So for the expression rule | 0x80, the result will be an int, since 0x80 is an int and rule (which is a byte) can safely be converted to int.
According to the C standard, bytes ALWAYS promote to int in expressions, even constants. However, as long as both values are UNSIGNED, the high-order bits will be discarded so the operation should return the correct value.
Similarly, floats promote to double, etc.
Pull out a copy of K&R. It's all in there.