Two's complement conversion - C#

I need to convert bytes in two's complement format to positive integer bytes.
The range -128 to 127 mapped to 0 to 255.
Examples: -128 (10000000) -> 0 , 127 (01111111) -> 255, etc.
EDIT To clear up the confusion, the input byte is (of course) an unsigned integer in the range 0 to 255. BUT it represents a signed integer in the range -128 to 127 using two's complement format. For example, the input byte value of 128 (binary 10000000) actually represents -128.
EXTRA EDIT Alrighty, let's say we have the following byte stream: 0, 255, 254, 1, 127. In two's complement format this represents 0, -1, -2, 1, 127. These I need mapped into the 0 to 255 range. For more info check out this hard-to-find article: Two's complement

From your sample input you simply want:
sbyte something = -128;
byte foo = (byte)(something + 128);

new = old + 128;
bingo :-)
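Applied to the sample stream from the question, that bias looks roughly like this (a minimal sketch; the array names are just illustrative):
byte[] stream = { 0, 255, 254, 1, 127 };   // i.e. 0, -1, -2, 1, 127 in two's complement
byte[] shifted = new byte[stream.Length];
for (int i = 0; i < stream.Length; i++)
    shifted[i] = (byte)((sbyte)stream[i] + 128); // -128 -> 0, ..., 127 -> 255
// shifted is now { 128, 127, 126, 129, 255 }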

Try
sbyte signed = (sbyte)input;
or, sign-extending by hand (only correct when the high bit of input is set):
int signed = input | unchecked((int)0xFFFFFF00);

public static int MakeHexSigned(int value)
{
    if (value > 255 / 2)
    {
        value = -1 * (255 + 1) + value;
    }
    return value;
}

I believe converting two's complement bytes is best done with the following. Maybe not elegant or short, but clear and obvious. I would put it as a static method in one of my util classes.
public static sbyte ConvertTo2Complement(byte b)
{
    if (b < 128)
    {
        return Convert.ToSByte(b);
    }
    else
    {
        int x = Convert.ToInt32(b);
        return Convert.ToSByte(x - 256);
    }
}
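A quick usage sketch for the helper above, run against the byte stream from the question:
byte[] stream = { 0, 255, 254, 1, 127 };
foreach (byte b in stream)
    Console.WriteLine(ConvertTo2Complement(b)); // prints 0, -1, -2, 1, 127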

If I understood correctly, your problem is that the input is really a signed byte (sbyte), but it is stored in an unsigned integer, and you also want to avoid negative values by converting them to zero.
To be clear, when you use a signed type (like sbyte) the framework is using two's complement behind the scenes, so just by casting to the right type you will be using two's complement.
Then, once you have that conversion done, you can clamp the negative values with a simple if or a conditional (ternary) operator (?:).
The functions presented below will return 0 for inputs from 128 to 255 (i.e. -128 to -1), and the same value for inputs from 0 to 127.
So, if you must use unsigned integers as input and output you could use something like this:
private static uint ConvertSByteToByte(uint input)
{
    sbyte properDataType = (sbyte)input; // 128..255 will be taken as -128..-1
    if (properDataType < 0) { return 0; } // when negative just return 0
    if (input > 255) { return 0; }        // just in case, as a uint can be greater than 255
    return input;
}
Or, IMHO, you could change your input and output to the data types best suited to them (sbyte and byte):
private static byte ConvertSByteToByte(sbyte input)
{
    return input < 0 ? (byte)0 : (byte)input;
}
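And a quick usage sketch for the sbyte overload (sample values are just illustrative):
Console.WriteLine(ConvertSByteToByte((sbyte)-5));  // 0   (negative values clamp to 0)
Console.WriteLine(ConvertSByteToByte((sbyte)100)); // 100 (non-negative values pass through)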

int8_t indata; /* -128,-127,...-1,0,1,...127 */
uint8_t byte = indata ^ 0x80;
XOR the MSB, that's all.
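The same trick in C# might look like this (a minimal sketch; flipping the sign bit is equivalent to adding 128):
sbyte indata = -128;
byte outdata = (byte)(indata ^ 0x80); // -128 -> 0, -1 -> 127, 127 -> 255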

Here is my solution for this problem, for numbers bigger than 8 bits. My example is for a 16-bit value. Note: you will have to check the first bit to see whether the number is negative or not.
Steps:
Convert the number to its complement by placing '~' before the variable (i.e. y = ~y).
Convert the numbers to binary strings.
Break the binary strings into character arrays.
Starting with the rightmost value, add 1, keeping track of carries. Store the result in a character array.
Convert the character array back to a string.
private string TwosComplimentMath(string value1, string value2)
{
    char[] binary1 = value1.ToCharArray();
    char[] binary2 = value2.ToCharArray();
    bool carry = false;
    char[] calcResult = new char[16];
    for (int i = 15; i >= 0; i--)
    {
        if (binary1[i] == binary2[i])
        {
            if (binary1[i] == '1')
            {
                if (carry)
                {
                    calcResult[i] = '1';
                    carry = true;
                }
                else
                {
                    calcResult[i] = '0';
                    carry = true;
                }
            }
            else
            {
                if (carry)
                {
                    calcResult[i] = '1';
                    carry = false;
                }
                else
                {
                    calcResult[i] = '0';
                    carry = false;
                }
            }
        }
        else
        {
            if (carry)
            {
                calcResult[i] = '0';
                carry = true;
            }
            else
            {
                calcResult[i] = '1';
                carry = false;
            }
        }
    }
    string result = new string(calcResult);
    return result;
}
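A hedged usage sketch for the helper above: negate 1234 by inverting its bits with ~ and then adding 1 via the method (assumes it is called from within the same class):
short y = 1234;
string inverted = Convert.ToString((short)~y & 0xFFFF, 2).PadLeft(16, '0');
string one = "1".PadLeft(16, '0');
string negated = TwosComplimentMath(inverted, one); // "1111101100101110" == -1234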

So the OP's problem isn't really two's complement conversion. He's adding a bias to a set of values, to adjust the range from -128..127 to 0..255.
To actually do a two's complement conversion, you just typecast the signed value to the unsigned value, like this:
sbyte test1 = -1;
byte test2 = (byte)test1;
-1 becomes 255. -128 becomes 128. This doesn't sound like what the OP wants, though. He just wants to slide an array up so that the lowest signed value (-128) becomes the lowest unsigned value (0).
To add a bias, you just do integer addition:
newValue = signedValue + 128;

You could be describing something as simple as adding a bias to your number (in this case, adding 128 to the signed number).

Related

Setting bits in a byte array with C#

I have a requirement to encode a byte array from a short integer value.
The encoding rules are:
The bits representing the integer are bits 0 - 13.
Bit 14 is set if the number is negative.
Bit 15 is always 1.
I know I can get the integer into a byte array using BitConverter:
byte[] roll = BitConverter.GetBytes(x);
But I can't find how to meet my requirements.
Does anyone know how to do this?
You should use bitwise operators.
The solution is something like this:
Int16 x = 7;
if (x < 0)
{
    Int16 mask14 = 16384; // 0b0100000000000000
    x = (Int16)(x | mask14);
}
Int16 mask15 = -32768; // 0b1000000000000000
x = (Int16)(x | mask15);
byte[] roll = BitConverter.GetBytes(x);
You cannot rely on GetBytes for negative numbers, since it gives you the two's complement bit pattern, which is not the sign-and-magnitude form you need.
Instead you need to do bounds checking to make sure the number is representable, then use GetBytes on the absolute value of the given number.
The method's parameter is a short, so GetBytes returns a byte array of size 2 (you don't need more than 16 bits).
The rest is in the comments below:
static readonly int MAX_UNSIGNED_14_BIT = 16383; // 2^14-1

public static byte[] EncodeSigned14Bit(short x)
{
    var absoluteX = Math.Abs(x);
    if (absoluteX > MAX_UNSIGNED_14_BIT) throw new ArgumentException($"{nameof(x)} is too large and cannot be represented");
    byte[] roll = BitConverter.GetBytes(absoluteX);
    if (x < 0)
    {
        roll[1] |= 0b01000000; // x is negative, set 14th bit
    }
    roll[1] |= 0b10000000; // 15th bit is always set
    return roll;
}

static void Main(string[] args)
{
    // testing some values
    var r1 = EncodeSigned14Bit(16383);  // r1[0] = 255, r1[1] = 191
    var r2 = EncodeSigned14Bit(-16383); // r2[0] = 255, r2[1] = 255
}
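A hedged companion sketch (not part of the original answer): decoding the format above, assuming the same little-endian byte layout that EncodeSigned14Bit relies on:
public static short DecodeSigned14Bit(byte[] roll)
{
    int magnitude = ((roll[1] & 0b00111111) << 8) | roll[0]; // bits 0-13
    bool negative = (roll[1] & 0b01000000) != 0;             // bit 14 is the sign flag
    return (short)(negative ? -magnitude : magnitude);
}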

Can we randomly access a part of a byte?

Can we randomly access a part of a byte? I mean, can I access three bits of a byte randomly, without using BitArray and without going through the whole byte?
Is it possible to access a single bit of a byte, and if not, why not? Does it depend on any criteria?
You can use the bitwise AND (&) operator to read a specific bit from a byte. I'll give some examples using the 0b prefix, which in C# allows you to write binary literals in your code.
So suppose you have the following byte value:
byte val = 0b10010100;      // val = 148 (in decimal)
byte testBits = 0b00000100; // set ONLY the BITS you want to test here...
if ((val & testBits) != 0)  // bitwise AND returns 0 if the bit is NOT SET
    Console.WriteLine("The bit is set!");
else
    Console.WriteLine("The bit is not set....");
Here's a method to test any bit in a given byte. It left-shifts the number 1 to build a mask for the bit position being tested:
public static int readBit(byte val, int bitPos)
{
    if ((val & (1 << bitPos)) != 0)
        return 1;
    return 0;
}
You can use this method to print which bits are set in a given byte:
byte val = 0b10010100;
for (int i = 0; i < 8; i++)
{
    int bitValue = readBit(val, i);
    Console.WriteLine($"Bit {i} = {bitValue}");
}
The output from the code above should be:
Bit 0 = 0
Bit 1 = 0
Bit 2 = 1
Bit 3 = 0
Bit 4 = 1
Bit 5 = 0
Bit 6 = 0
Bit 7 = 1
You can use bit shifting:
var bitNumber = 0;
var firstBit = (b & (1 << bitNumber)) != 0;
We can convert this to an extension method:
public static class ByteExtensions
{
    public static bool GetBit(this byte b, int bitNumber) =>
        (b & (1 << bitNumber)) != 0;
}
Then:
byte b = 7;
var bit3 = b.GetBit(3); // false, since 7 is 00000111 and bit 3 is not set
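A hedged companion sketch (not from the original answer): writing a bit follows the same masking pattern, and could live alongside GetBit in the extension class above:
public static byte SetBit(this byte b, int bitNumber, bool value) =>
    value ? (byte)(b | (1 << bitNumber))    // set the bit with OR
          : (byte)(b & ~(1 << bitNumber));  // clear the bit with AND NOT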

C#: How to send a colorDialog value to a textBox in 5-bit RGB?

So I'm creating an editor for a PS2 game.
This game has two "systems" of colors.
The "normal" RGB: R: 0 to 255, G: 0 to 255, B: 0 to 255.
And the other, which I think is 5-bit RGB: R: 0 to 31, G: 0 to 31, B: 0 to 31.
To make the color change in the game, I have to convert the values chosen in the colorDialog to hexadecimal, for example R: 255 G: 176 B: 15 becomes FFB00F, and then write these values into the 3-byte "slots" via hex.
OK, so far so good, but 5-bit RGB only has "slots" of 2 bytes.
Example: 5-bit RGB R: 31 G: 0 B: 0 is 1F80 in hex.
And that's where I don't know what to do: for the normal RGB colors I can send the values in hexadecimal to the textBox and then save those textBox values into the 3-byte "slots" via hex, but the slots for the 5-bit RGB color change are only 2 bytes.
So I would have to send the colorDialog value converted to 5-bit RGB to the textBox in 2 bytes. Is this really possible?
So I would have to send the colorDialog value converted to 5-bit RGB to the textBox in 2 bytes. Is this really possible?
Sure! 5 bits x 3 fields = 15 bits, and 2 bytes = 16 bits. You'll even have room left over, but not much.
It sounds like it's just a matter of handling a reduction in the resolution of each field. This can be done with a bitwise shift operation to reduce each 8-bit field to a 5-bit field.
You'll need to right shift the value for storage as a 5-bit field. You can also left shift the value to use as an 8-bit (byte) field for setting the Color property - but note that this is just an 8-bit representation of a 5-bit value and the loss of resolution from shifting to 5-bits will persist. You will have to decide on a convention to handle the loss of resolution - doing nothing is an option.
For example:
// 8-bit color values
byte rValue = 255;
byte gValue = 127;
byte bValue = 63;

// set color
pictureBox1.BackColor = Color.FromArgb(rValue, gValue, bValue);

// 5-bit color values
var rBits = new BitArray(5, false);
var gBits = new BitArray(5, false);
var bBits = new BitArray(5, false);

// bit position comparison operator
byte op = 0x80;

// convert to 5-bit arrays (>= so a value exactly equal to the bit weight sets the bit)
for (int i = 0; i < 5; i++)
{
    if (rValue >= op) { rBits[i] = true; rValue -= op; }
    if (gValue >= op) { gBits[i] = true; gValue -= op; }
    if (bValue >= op) { bBits[i] = true; bValue -= op; }
    op >>= 1;
}

byte rRetrieved = 0;
byte gRetrieved = 0;
byte bRetrieved = 0;

// bit position comparison operator
op = 0x80;

// convert back to 8-bit bytes
for (int i = 0; i < 5; i++)
{
    if (rBits[i]) { rRetrieved += op; }
    if (gBits[i]) { gRetrieved += op; }
    if (bBits[i]) { bRetrieved += op; }
    op >>= 1;
}

// set color - note loss of resolution due to shifting
pictureBox1.BackColor = Color.FromArgb(rRetrieved, gRetrieved, bRetrieved);
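For comparison, a minimal sketch of the shift-based reduction described above, packing the three 5-bit fields into one 16-bit value (the 5-5-5 packing order is an assumption, since the game's exact layout isn't specified):
// 0..255 -> 0..31 by dropping the low 3 bits, then pack R, G, B into 15 of the 16 bits
static ushort To5BitRgb(byte r, byte g, byte b) =>
    (ushort)(((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3));

// Expand a 5-bit field back to an approximate 8-bit value (resolution is already lost)
static byte To8Bit(int fiveBitField) => (byte)(fiveBitField << 3);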

Convert datatype 'long' to byte array

I have to convert values (double/float in C#) to bytes and need some help.
// Datatype long 4byte -99999999,99 to 99999999,99
// Datatype long 4byte -99999999,9 to 99999999,9
// Datatype short 2byte -999,99 to 999,99
// Datatype short 2byte -999,9 to 999,9
In my "world at home" i would just string it and ASCII.GetBytes().
But now, in this world, we have to make less possible space.
And indeed that '-99999999,99' takes 12 bytes instead of 4! if it's a 'long' datatype.
[EDIT]
Due to some help and answers I attach some results here:
long lng = -9999999999L;
byte[] test = Encoding.ASCII.GetBytes(lng.ToString()); // 11 bytes
byte[] test2 = BitConverter.GetBytes(lng); // 8 bytes
byte[] mybyt = BitConverter.GetBytes(lng); // 8 bytes
byte[] bA = BitConverter.GetBytes(lng); // 8 bytes
There is still one detail left to find out. The lng variable takes 8 bytes even when it holds a smaller value, e.g. 99951 (I won't include the ToString() sample).
If the value is even "shorter", i.e. in the range -999,99 to 999,99, it should only take 2 bytes of space.
[END EDIT]
Have you checked BitConverter?
long lng =-9999999999L;
byte[] mybyt = BitConverter.GetBytes(lng);
Hope this is what you are looking for.
Try doing it this way:
long l = 4554334;
byte[] bA = BitConverter.GetBytes(l);
Be aware that in 2 bytes you can only have 4 full digits + sign, and in 4 bytes you can only have 9 digits + sign, so I had to scale your prereqs accordingly.
public static byte[] SerializeLong2Dec(double value)
{
    value *= 100;
    value = Math.Round(value, MidpointRounding.AwayFromZero);
    if (value < -999999999.0 || value > 999999999.0)
    {
        throw new ArgumentOutOfRangeException();
    }
    int value2 = (int)value;
    return BitConverter.GetBytes(value2);
}

public static double DeserializeLong2Dec(byte[] value)
{
    int value2 = BitConverter.ToInt32(value, 0);
    return (double)value2 / 100.0;
}

public static byte[] SerializeLong1Dec(double value) {
    value *= 10;
    value = Math.Round(value, MidpointRounding.AwayFromZero);
    if (value < -999999999.0 || value > 999999999.0) {
        throw new ArgumentOutOfRangeException();
    }
    int value2 = (int)value;
    return BitConverter.GetBytes(value2);
}

public static double DeserializeLong1Dec(byte[] value) {
    int value2 = BitConverter.ToInt32(value, 0);
    return (double)value2 / 10.0;
}

public static byte[] SerializeShort2Dec(double value) {
    value *= 100;
    value = Math.Round(value, MidpointRounding.AwayFromZero);
    if (value < -9999.0 || value > 9999.0) {
        throw new ArgumentOutOfRangeException();
    }
    short value2 = (short)value;
    return BitConverter.GetBytes(value2);
}

public static double DeserializeShort2Dec(byte[] value) {
    short value2 = BitConverter.ToInt16(value, 0);
    return (double)value2 / 100.0;
}

public static byte[] SerializeShort1Dec(double value) {
    value *= 10;
    value = Math.Round(value, MidpointRounding.AwayFromZero);
    if (value < -9999.0 || value > 9999.0) {
        throw new ArgumentOutOfRangeException();
    }
    short value2 = (short)value;
    return BitConverter.GetBytes(value2);
}

public static double DeserializeShort1Dec(byte[] value) {
    short value2 = BitConverter.ToInt16(value, 0);
    return (double)value2 / 10.0;
}
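A hedged usage sketch for the helpers above, round-tripping a two-decimal value through 2 bytes (the sample value is just illustrative):
byte[] encoded = SerializeShort2Dec(12.34);     // 2 bytes holding 1234 as a short
double decoded = DeserializeShort2Dec(encoded); // 12.34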
So that it's clear, the range of a (signed) short (16 bits) is -32,768 to 32,767 so it's quite clear that you only have 4 full digits plus a little piece (the 0-3), the range of a (signed) int (32 bits) is −2,147,483,648 to 2,147,483,647 so it's quite clear that you only have 9 full digits plus a little piece (the 0-2). Going to a (signed) long (64 bits) you have -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 so 18 digits plus a (big) piece. Using floating points you lose in accuracy. A float (32 bits) has an accuracy of around 7 digits, while a double (64 bits) has an accuracy of around 15-16 digits.
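For reference, the ranges quoted above can be printed directly:
Console.WriteLine($"{short.MinValue} .. {short.MaxValue}"); // -32768 .. 32767
Console.WriteLine($"{int.MinValue} .. {int.MaxValue}");     // -2147483648 .. 2147483647
Console.WriteLine($"{long.MinValue} .. {long.MaxValue}");   // -9223372036854775808 .. 9223372036854775807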
To everyone reading this question and the answers: please note that the byte order BitConverter produces depends on the endianness of the machine. A round trip on the same machine is consistent, but a byte array written on a little-endian system will decode to a different number on a big-endian system:
//convert to bytes
long number = 123;
byte[] bytes = BitConverter.GetBytes(number);
//convert back to int64
long newNumber = BitConverter.ToInt64(bytes, 0);
//newNumber == number here, but only because the same endianness is used both ways
If the byte array is exchanged between systems of different endianness, the workaround is to check which one you are running on:
newNumber = BitConverter.IsLittleEndian
    ? BitConverter.ToInt64(bytes, 0)
    : BitConverter.ToInt64(bytes.Reverse().ToArray(), 0);
long longValue = 9999999999L;
Console.WriteLine("Long value: " + longValue.ToString());
byte[] bytes = BitConverter.GetBytes(longValue);
Console.WriteLine("Byte array value:");
Console.WriteLine(BitConverter.ToString(bytes));
As the other answers have pointed out, you can use BitConverter to get the byte representation of primitive types.
You said that in the current world you inhabit, there is an onus on representing these values as concisely as possible, in which case you should consider variable length encoding (though that document may be a bit abstract).
If you decide this approach is applicable to your case I would suggest looking at how the Protocol Buffers project represents scalar types as some of these types are encoded using variable length encoding which produces shorter output if the data set favours smaller absolute values. (The project is open source under a New BSD license, so you will be able to learn the technique employed from the source repository or even use the source in your own project.)
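As a rough illustration of that idea (not the Protocol Buffers API itself, just a hand-rolled sketch of base-128 varint encoding): each byte carries 7 bits of the value and the high bit says whether more bytes follow, so small values use fewer bytes.
using System.Collections.Generic;

public static byte[] EncodeVarint(ulong value)
{
    var bytes = new List<byte>();
    do
    {
        byte b = (byte)(value & 0x7F); // low 7 bits
        value >>= 7;
        if (value != 0) b |= 0x80;     // continuation bit: more bytes follow
        bytes.Add(b);
    } while (value != 0);
    return bytes.ToArray();            // e.g. 123 -> 1 byte, 300 -> 2 bytes
}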

GetType() and Typeof() in C#

itemVal = "0";
res = int.TryParse(itemVal, out num);
if ((res == true) && (num.GetType() == typeof(byte)))
    return true;
else
    return false; // goes here when I'm debugging
Why does num.GetType() == typeof(byte) not return true?
Because num is an int, not a byte.
GetType() gets the System.Type of the object at runtime. In this case, it's the same as typeof(int), since num is an int.
typeof() gets the System.Type object of a type at compile-time.
Your comment indicates you're trying to determine if the number fits into a byte or not; the contents of the variable do not affect its type (actually, it's the type of the variable that restricts what its contents can be).
You can check if the number would fit into a byte this way:
if ((num >= 0) && (num < 256)) {
    // ...
}
Or this way, using a cast:
if (unchecked((byte)num) == num) {
    // ...
}
It seems your entire code sample could be replaced by the following, however:
byte num;
return byte.TryParse(itemVal, out num);
Simply because you are comparing a byte with an int.
If you want to know the number of bytes, try this simple snippet:
int i = 123456;
Int64 j = 123456;
byte[] bytesi = BitConverter.GetBytes(i);
byte[] bytesj = BitConverter.GetBytes(j);
Console.WriteLine(bytesi.Length);
Console.WriteLine(bytesj.Length);
Output:
4
8
Because an int and a byte are different data types.
An int (as it is commonly known) is 4 bytes (32 bits); an Int64 or an Int16 is 64 or 16 bits respectively.
A byte is only 8 bits.
If num is an int it will never return true.
If you want to check whether the int value would fit in a byte, you could test the following:
int num = 0;
byte b = 0;
if (int.TryParse(itemVal, out num) && byte.TryParse(itemVal, out b))
{
    return true; // could be converted to Int32 and also to Byte
}
