How does -128 fit in an sbyte - C#

As I remember, we learned that for signed integer types (sbyte, short, int, long) the first bit is for the sign and the remaining bits are for the value, so an sbyte has 7 value bits.
I saw that sbyte range is -128 to 127 while I thought it must be -127 to 127.
I tried some code to understand how this is possible and ran into two strange things:
1. I tried the following code:
sbyte a = -128;
Console.Write(Convert.ToString(a, 2));
and the result was
1111111110000000
as if it were a two-byte variable.
2. I tried converting all numbers in the range to binary:
for (sbyte i = -128; i <= 127; i++)
{
    Console.WriteLine(Convert.ToString(i, 2));
    if (i == 127) break;
}
If I omit the if (i == 127) break;, the loop goes on forever. And with the break, the code in the loop does not execute, somehow as if -128 were greater than 127.
My Conclusion:
I thought that -128 must not fit in a signed byte variable, and the first and second tries seem to confirm that (1111111110000000 > 01111111), but if it does not fit, then why is the range -128 to 127?

I saw that sbyte range is -128 to 127 while I thought it must be -127 to 127.
The range is [-128, +127] indeed. The range [-127, +127] would mean that sbyte can represent only 255 different values, while 8 bits make 256 combinations.
And by the way, if -128 were not a legal value, the compiler would complain about
sbyte a = -128;
There is no Convert.ToString(sbyte value, int toBase) overload, so in your case Convert.ToString(short value, int toBase) is called and the sbyte is promoted to short, which is why you get 16 binary digits.
To quickly check the value, you can print sbyte as a hexadecimal number:
sbyte s = -1;
Console.WriteLine("{0:X}", s); // FF, i.e. 11111111
If I omit the if(i==127) break; the loop goes on.
Sure, sbyte.MaxValue is 127, so i<=127 is always true. When i is 127 and gets incremented, an overflow occurs and the next value is -128.
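A minimal sketch of the same wrap-around outside a loop (the variable name is mine; unchecked is already the default for this kind of arithmetic and is spelled out only for emphasis):
sbyte i = sbyte.MaxValue;   // 127 = 0111_1111
unchecked { i++; }          // 0111_1111 + 1 = 1000_0000
Console.WriteLine(i);       // -128, i.e. sbyte.MinValue
That is also why i <= 127 is true for every possible sbyte value, so the loop can never terminate on its own.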

Related

Integer promotion in C# (the range of an sbyte as an example)

So I am studying integer promotion, and I know that type promotions only apply to the values operated upon when an expression is evaluated.
I have this example where I take an sbyte that is equal to 127, increment it by 1, and then output it to the console, and I get -128. Does this mean that the sbyte turns into an int during the increment and then somehow transforms into -128?
I would like to know how exactly this happens.
sbyte efhv = 127;
efhv++;
Console.WriteLine(efhv);
As @Sweeper pointed out in the comments, integer promotion does not apply to the ++ operator: ++ modifies the value in place, which means that the modified value must be within the range of the specified data type. Integer promotion would not make any sense here.
As @beautifulcoder explained in their answer, the effect you see is simply a two's complement number overflowing, since C# executes arithmetic operations in an unchecked context by default.
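A small sketch of that default, assuming the same efhv variable as in the question (the checked statement turns the silent wrap into an exception):
sbyte efhv = 127;
try
{
    checked { efhv++; }            // overflow checking is now on
}
catch (OverflowException)
{
    Console.WriteLine("overflow"); // printed, instead of silently wrapping to -128
}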
According to the C# language specification, integer promotion does occur for the unary + and the binary + operators, as can be seen in the following code example:
sbyte efhv = 127;
sbyte one = 1;
Console.WriteLine((efhv).GetType().Name); // prints SByte
Console.WriteLine((efhv++).GetType().Name); // prints SByte
Console.WriteLine((+efhv).GetType().Name); // prints Int32
Console.WriteLine((efhv + one).GetType().Name); // prints Int32
(fiddle)
It is because this is a signed 8-bit integer. When the value exceeds the max it overflows into the negative numbers. It is not promotion but overflow behavior you are seeing.
To verify:
sbyte efhv = 127;
efhv++;
Console.WriteLine(efhv);
Console.WriteLine(sbyte.MaxValue);
Console.WriteLine(sbyte.MinValue);

Byte overflow evaluates to zero instead of throwing an exception?

static void Main(string[] args)
{
    int n;
    byte b;
    n = 256;
    b = (byte)n;
    Console.WriteLine(b); // 0
}
The C# byte range is 0 to 255, so I tried casting an int of 256 to byte to see what would happen.
Surprisingly it returns 0. Shouldn't it return 255, or better yet, throw an overflow exception?
UPDATE:
I'm trying it on macOS, which means Mono, if it matters, and on .NET Framework 4.7.
That is the expected behaviour. If you think about it, 256 is one "1" followed by 8 zeroes in binary. When you take away everything except the least significant 8 bits, you get 8 zeroes, which is the value 0.
From the C# language specification §6.2.1:
For a conversion from an integral type to another integral type, the processing depends on the overflow checking context (§7.6.12) in which the conversion takes place:
In a checked context, the conversion succeeds if the value of the source operand is within the range of the destination type, but throws a System.OverflowException if the value of the source operand is outside the range of the destination type.
In an unchecked context, the conversion always succeeds, and proceeds as follows.
If the source type is larger than the destination type, then the source value is truncated by discarding its "extra" most significant bits. The result is then treated as a value of the destination type.
If you want an exception, you can use checked:
b = checked((byte) n);
I would like to complement the previous answer.
Take a look at this:
255 -> 11111111 +
001 -> 00000001
256 -> 100000000
As you can see, we have 256 in binary, but since your variable is only eight bits wide, that leading 1 can't be stored. This leaves 00000000, which is zero.
This is more theory than a C#-specific answer, but I think it is important to understand.
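A small sketch of the same truncation written out as a bit mask (the unchecked cast keeps only the low 8 bits, which is exactly what & 0xFF does):
int n = 256;                          // 1_0000_0000 in binary
Console.WriteLine((byte)n);           // 0: only the low 8 bits survive the cast
Console.WriteLine((byte)(n & 0xFF));  // 0: the explicit form of the same truncation
int m = 257;                          // 1_0000_0001 in binary
Console.WriteLine((byte)m);           // 1: again only the low 8 bits are kept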

How does unchecked int overflow work in C#?

We know that an int value has a maximum value of 2^31 - 1 and a minimum value of -2^31. If we were to set int to the maximum value:
int x = int.MaxValue;
And we take that x and add one to it in an unchecked block:
unchecked
{
    x++;
}
Then we get x equal to the minimum value of int. My question is: why does this happen, and how does it happen in terms of binary?
In C#, the built-in integers are represented by a sequence of bit values of a predefined length. For the basic int datatype that length is 32 bits, and 32 bits can only represent 4,294,967,296 different possible values (since that is 2^32).
Since int can hold both positive and negative numbers, the sign of the number must be encoded somehow. This is done with the first bit: if the first bit is 1, then the number is negative.
Here are the int values laid out on a number-line in hexadecimal and decimal:
Hexadecimal Decimal
----------- -----------
0x80000000 -2147483648
0x80000001 -2147483647
0x80000002 -2147483646
... ...
0xFFFFFFFE -2
0xFFFFFFFF -1
0x00000000 0
0x00000001 1
0x00000002 2
... ...
0x7FFFFFFE 2147483646
0x7FFFFFFF 2147483647
As you can see from this chart, the bits that represent the smallest possible value are what you would get by adding one to the largest possible value, while ignoring the interpretation of the sign bit. When a signed number is added in this way, it is called "integer overflow". Whether or not an integer overflow is allowed or treated as an error is configurable with the checked and unchecked statements in C#.
This representation is called two's complement.
You can check this link if you want to go deeper.
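A minimal sketch that makes the number line above visible at runtime (the :X8 format just prints the 32-bit pattern as 8 hex digits):
int x = int.MaxValue;
Console.WriteLine($"{x,11}  0x{x:X8}"); //  2147483647  0x7FFFFFFF
unchecked { x++; }                      // wrap around
Console.WriteLine($"{x,11}  0x{x:X8}"); // -2147483648  0x80000000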
Int has a maximum value of 2^31 - 1 because int is an alias for Int32, which uses 4 bytes (32 bits) in memory to represent the value.
Now let's come to the concept of int.MaxValue + 1. Here you need to know about the signed number representations that are used to store negative values. In plain binary there is no such thing as a negative number; negative values are represented using one's complement or two's complement.
Let's say my hypothetical type int1 uses 1 byte of storage, i.e. 8 bits, and is signed like int. The maximum value you can store in int1 is then 2^7 - 1 = 127. Now let's add 1 to this value:
  01111111
+ 00000001
----------
  10000000
The result 10000000 has its first bit set, so it is interpreted as a negative value: in two's complement it is -128, the minimum value of int1.
This is the reason adding 1 to int.MaxValue becomes int.MinValue, i.e.
int.MaxValue + 1 == int.MinValue
Okay, I'm going to assume for the sake of argument that there is a 4-bit integer type in C#. The most significant bit is used to store the sign, the remaining three are used to store the magnitude.
The maximum number that can be stored in such a representation is +7 (positive 7 in base 10). In binary, this is:
0111
Now let's add positive one:
0111
+0001
_____
1000
Whoops, that carried over from the magnitudes and flipped the sign bit! The sum you see above is actually the number -8, the smallest possible number that can be stored in this representation. If you're not familiar with two's complement, which is how signed numbers are commonly represented in binary, you might think that number is negative zero (which would be the case in a sign and magnitude representation). You can read up on two's complement to understand this better.
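A tiny sketch simulating that hypothetical 4-bit type with an ordinary int (the mask keeps 4 bits, and the bit patterns 1000 to 1111 are reinterpreted as the negative half of two's complement):
int raw = (0b0111 + 0b0001) & 0xF;       // 4-bit addition: 0111 + 0001 = 1000
int value = raw >= 8 ? raw - 16 : raw;   // reinterpret the pattern as two's complement
Console.WriteLine(value);                // -8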

How do I interpret a WORD or Nibble as a number in C#?

I don't come from a low-level development background, so I'm not sure how to convert the below instruction to an integer...
Basically I have a microprocessor which tells me which I/Os are active or inactive. I send the device an ASCII command and it replies with a WORD describing which of the 15 I/Os are open/closed... here's the instruction:
Unit Answers "A0001/" for only DIn0 on, "A????/" for All Inputs Active
Awxyz/ - w=High Nibble of MSB in 0 to ? ASCII Character 0001=1, 1111=?, z=Low Nibble of LSB.
At the end of the day I just want to be able to convert it back into a number which will tell me which of the 15 (or 16?) inputs are active.
I have something hooked up to the 15th I/O port, and the reply I get is "A8000", if that helps?
Can someone clear this up for me please?
You can use the BitConverter class to convert an array of bytes to the integer format you need.
If you're getting 16 bits, convert them to a UInt16.
C# does not define the endianness. That is determined by the hardware you are running on. Intel and AMD processors are little-endian. You can learn the endian-ness of your platform using BitConverter.IsLittleEndian. If the computer running .NET and the hardware providing the data do not have the same endian-ness, you would have to swap the two bytes.
byte[] inputFromHardware = { 126, 42 };
ushort value = BitConverter.ToUInt16(inputFromHardware, 0); // start at index 0
which of the 15 (or 16?) inputs are active
If the bits are hardware flags, it is plausible that all 16 are used to mean something. When bits are used to represent a signed number, one of the bits is used to represent positive vs. negative. Since the hardware is providing status bits, none of the bits should be interpreted as a sign.
If you want to know if a specific bit is active, you can use a bit mask along with the & operator. For example, the binary mask
0000 0000 0000 0100
corresponds to the hex number
0x0004
To learn if the third bit from the right is toggled on, use
bool thirdBitFromRightIsOn = (value & 0x0004) != 0;
UPDATE
If the manufacturer says the value 8000 (I assume hex) represents Channel 15 being active, look at it like this:
Your bit mask
1000 0000 0000 0000 (binary)
8 0 0 0 (hex)
Based on that info from the manufacturer, the left-most bit corresponds to Channel 15.
You can use that mask like:
bool channel15IsOn = (value & 0x8000) != 0;
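An end-to-end sketch, assuming the reply really is four hex characters between the leading 'A' and the trailing '/' (the reply string is the one from the question; the variable names are mine):
string reply = "A8000/";
ushort value = Convert.ToUInt16(reply.Substring(1, 4), 16); // parse the 4 hex digits
bool channel15IsOn = (value & 0x8000) != 0;                 // left-most bit = Channel 15
bool din0IsOn      = (value & 0x0001) != 0;                 // right-most bit = DIn0
Console.WriteLine(value);         // 32768
Console.WriteLine(channel15IsOn); // True
Console.WriteLine(din0IsOn);      // False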
A second choice (but for nibbles only) is to use the Math library, like this:
string x = "A0001/"
int y = 0;
for (int i = 3; i >= 0; i--)
{
if (x[4-i] == '1')
y += (int)Math.Pow(2,i);
}
By using the indexer to treat the string as an array of characters and comparing each character against '1', you can hard-code the conversion from bits to an integer.
It is a novice solution, but I think it is a proper one too.

In C# can casting a Char to Int32 produce a negative value?

Or is it always guaranteed to be positive for all possible Chars?
It's guaranteed to be non-negative.
char is an unsigned 16-bit value.
From section 4.1.5 of the C# 4 spec:
The char type represents unsigned 16-bit integers with values between 0 and 65535. The set of possible values for the char type corresponds to the Unicode character set. Although char has the same representation as ushort, not all operations permitted on one type are permitted on the other.
Since the range of char is U+0000 to U+FFFF, a cast to Int32 will always be non-negative.
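A tiny illustration (the '\uffff' literal is just the largest possible char):
char c = '\uffff';
int i = c;                              // implicit conversion, no cast even needed
Console.WriteLine(i);                   // 65535
Console.WriteLine((int)char.MinValue);  // 0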
Each 16-bit value ranges from hexadecimal 0x0000 through 0xFFFF and is
stored in a Char structure.
Char Structure - MSDN
See Microsoft's documentation.
There you can see that Char is a 16-bit value in the range U+0000 to U+FFFF. If you cast it to an Int32, there can be no negative value.
char can be implicitly converted to ushort, and the ushort range is 0 to 65,535, so it is never negative.
