I have a byte array with hexadecimal values, for example:
var b = new byte[] {0x27, 0x01, 0x00, 0x00};
I need to convert this to a decimal value, but when I use the code below I get an unexpected result. The expected value is 295, but the result is 654376960.
if (BitConverter.IsLittleEndian) Array.Reverse(b);
//int myInt = b[0] | (b[1] << 8) | (b[2] << 16) | (b[3] << 24);
int value = BitConverter.ToInt32(b, 0);
What's wrong?
Basically your understanding of endianness is wrong - your example is in little-endian format already, so you should only reverse it if BitConverter expects a big-endian format. You just need to invert your condition:
if (!BitConverter.IsLittleEndian) Array.Reverse(b);
(I'd personally put the body of the if statement in braces and new lines, but that's a different matter.)
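For reference, here is the corrected snippet in one place (just a sketch using the array from the question); on a little-endian machine nothing gets reversed and it prints 295:
var b = new byte[] { 0x27, 0x01, 0x00, 0x00 };
// Only reverse when BitConverter expects the opposite (big-endian) byte order.
if (!BitConverter.IsLittleEndian)
{
    Array.Reverse(b);
}
int value = BitConverter.ToInt32(b, 0);
Console.WriteLine(value); // 295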
In my C# Application, I have a byte array as follows.
byte[] byteArray = {0x2, 0x2, 0x6, 0x6};
I need to take the first two elements, i.e. 0x2 and 0x2, and combine them into a single byte variable. Similarly, the last two elements should be combined into another byte variable.
i.e.
byte FirstByte = 0x22;
byte SecondByte = 0x66;
I can split the array into sub-arrays, but I am not able to find a way to combine two elements of byteArray into a single byte.
You can just bitwise OR them together, shifting one of the nibbles using <<:
byte firstByte = (byte)(byteArray[0] | byteArray[1] << 4);
byte secondByte = (byte)(byteArray[2] | byteArray[3] << 4);
You didn't specify the order in which to combine the nibbles, so you might want this:
byte firstByte = (byte)(byteArray[1] | byteArray[0] << 4);
byte secondByte = (byte)(byteArray[3] | byteArray[2] << 4);
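As a quick sanity check, here is the first version applied to the sample array from the question (with this particular data both orderings happen to give the same result, since each pair holds identical nibbles):
byte[] byteArray = { 0x2, 0x2, 0x6, 0x6 };
byte firstByte = (byte)(byteArray[0] | byteArray[1] << 4);  // 0x22
byte secondByte = (byte)(byteArray[2] | byteArray[3] << 4); // 0x66
Console.WriteLine($"{firstByte:X2} {secondByte:X2}");       // prints "22 66"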
I'm currently struggling with Modbus TCP and ran into a problem interpreting the response of a module. The response contains two values that are encoded in the bits of an array of three UInt16 values, where the first 8 bits of r[0] have to be ignored.
Let's say the UInt16 array is called r and the "final" values I want to extract are val1 and val2.
For example, for the input values r[0]=768, r[1]=1536 and r[2]=0 (all UInt16), the desired output values are val1 = 3 and val2 = 6.
I already tried to (logically) bit-rightshift r[0] by 8, but then the upper bits get lost because they are stored in the first 8 bits of r[1]. Do I have to concatenate all r-values first and bit-shift after that? How can I do that? Thanks in advance!
I already tried to (logically) bit-rightshift r[0] by 8, but then the upper bits get lost because they are stored in the first 8 bits of r[1].
Well they're not "lost" - they're just in r[1].
It may be simplest to break it down step by step:
byte val1LowBits = (byte) (r[0] >> 8);    // high byte of r[0] holds the low 8 bits of val1
byte val1HighBits = (byte) (r[1] & 0xff); // low byte of r[1] holds the high 8 bits of val1
byte val2LowBits = (byte) (r[1] >> 8);    // high byte of r[1] holds the low 8 bits of val2
byte val2HighBits = (byte) (r[2] & 0xff); // low byte of r[2] holds the high 8 bits of val2
uint val1 = (uint) ((val1HighBits << 8) | val1LowBits);
uint val2 = (uint) ((val2HighBits << 8) | val2LowBits);
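A quick check with the sample values from the question (just a sketch, compressing the steps above into two expressions):
ushort[] r = { 768, 1536, 0 }; // 768 = 0x0300, 1536 = 0x0600
uint val1 = (uint)(((r[1] & 0xff) << 8) | (r[0] >> 8)); // 3
uint val2 = (uint)(((r[2] & 0xff) << 8) | (r[1] >> 8)); // 6
Console.WriteLine($"{val1} {val2}"); // prints "3 6"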
I am working on a C# WinForms application that reads/writes data to/from a hardware device. My application has a multiselect listbox which contains the numbers 1 - 100000 and the user may select up to 10 numbers. When they're done selecting each number, the user clicks a button and my event handler code needs to build a fixed-size (30 bytes) byte array using 3 bytes to represent each selected number and pad the array if less than 10 numbers were selected.
As an example, suppose my user chooses the following values:
17
99152
3064
52588
65536
I'm currently using this code to convert each number into a byte array:
byte[] bytes = BitConverter.GetBytes(selectedNumber);
Array.Reverse(bytes); // because BitConverter.IsLittleEndian == true
Debug.WriteLine(BitConverter.ToString(bytes));
For the numbers I listed above, this produces the following:
00-00-00-11
00-01-83-50
00-00-0B-F8
00-00-CD-6C
00-01-00-00
BitConverter is giving me back a 4-byte array, but I only have space to use 3 bytes to store each number in the final byte array. I can drop the most significant byte of each individual byte array and then build my final array like this:
00-00-11-01-83-50-00-0B-F8-00-CD-6C-01-00-00-[padding here]
Writing that to the device should work. But reading the array (or a similar array) back from the device causes a bit of a problem for me. When I have a 3-byte array and try to convert that into an int using this code...
int i = BitConverter.ToInt32(bytes, 0);
...I get "Destination array is not long enough to copy all the items in the collection." I suppose I could insert a most significant byte of 0x00 at the beginning of every three bytes and then convert that, but is there a better way to do this?
I would imagine bit shifting and the | operator should be the most efficient way of doing this.
int i = (bytes[2] << 0) | (bytes[1] << 8) | (bytes[0] << 16);
Also, as a heads up, you're dropping the most significant byte, not the least significant byte ;p
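For example, with the second 3-byte group from the question this gives:
byte[] bytes = { 0x01, 0x83, 0x50 };
int i = (bytes[2] << 0) | (bytes[1] << 8) | (bytes[0] << 16);
Console.WriteLine(i); // 99152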
byte[] bytes = new byte[] { 0x00, 0x00, 0x11, 0x01, 0x83, 0x50, 0x00, 0x0B, 0xF8 };
var ints = bytes.Select((b, i) => new { b, i })
                .GroupBy(x => x.i / 3)
                .Select(g => BitConverter.ToInt32(
                    new byte[] { 0 }.Concat(g.Select(x => x.b))
                                    .Reverse()
                                    .ToArray(),
                    0))
                .ToArray();
or classically
var ints = new List<int>();
for (int i = 0; i < bytes.Length; i += 3)
{
    int intI = 0;
    for (int j = i; j < i + 3; j++)
    {
        intI = intI * 256 + bytes[j]; // or (intI << 8) + bytes[j];
    }
    ints.Add(intI);
}
ints will be 17, 99152 and 3064
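Going the other way (building the fixed 30-byte buffer to write to the device) could look roughly like this; selectedNumbers is a hypothetical list holding the up-to-ten chosen values, not a name from the original code:
byte[] buffer = new byte[30]; // unused slots stay 0x00 as padding
for (int i = 0; i < selectedNumbers.Count; i++)
{
    int n = selectedNumbers[i];          // selectedNumbers is assumed here
    buffer[i * 3]     = (byte)(n >> 16); // most significant byte first
    buffer[i * 3 + 1] = (byte)(n >> 8);
    buffer[i * 3 + 2] = (byte)n;
}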
I have a devious little problem to which I think I've come up with a solution far more difficult than needs to be.
The problem is that I have two bytes. The two most significant bits of the first byte are to be removed (as the value is little-endian, these bits are effectively in the middle of the 16-bit value). Then the least significant two bits of the second byte are to be moved to the most significant bit positions of the first byte, in place of the removed bits.
My solution is as follows:
byte firstByte = (byte)stream.ReadByte(); // 01000100
byte secondByte = (byte)stream.ReadByte(); // 00010010
// the first and second byte equal the decimal 4676 in this little endian example
byte remainderOfFirstByte = (byte)(firstByte & 63); // 01000100 & 00111111 = 00000100
byte transferredBits = (byte)(secondByte << 6); // 00010010 << 6 = 10000000
byte remainderOfSecondByte = (byte)(secondByte >> 2); // 00010010 >> 2 = 00000100
byte newFirstByte = (byte)(transferredBits | remainderOfFirstByte); // 10000000 | 00000100 = 10000100
int result = BitConverter.ToInt32(new byte[]{newFirstByte, remainderOfSecondByte, 0, 0}, 0); // 10000100 00000100 (the result is decimal 1156)
Is there an easier way* to achieve this?
*less verbose, perhaps an inbuilt function or trick I'm missing? (with the exception of doing both the & and << on the same line)
You don't have to mask out bits that a shift would throw away anyway. And you don't have to transfer those bits manually. So it becomes this: (not tested)
int result = (secondByte << 6) | (firstByte & 0x3F);
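A quick check with the example bytes from the question:
byte firstByte = 0x44;  // 01000100
byte secondByte = 0x12; // 00010010
int result = (secondByte << 6) | (firstByte & 0x3F);
Console.WriteLine(result); // 1156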
I don't know what to call this, which makes googling harder.
I have an integer, say 3, and want to convert it to 11100000, that is, a byte with that number of bits set, starting from the most significant bit.
I guess it could be done with:
byte result = 0;
for (int i = 7; i > 7 - 3; i--)
    result += (byte)(1 << i); // 10000000 + 01000000 + 00100000 = 11100000
but is there anything faster / nicer, or preferably something already included in the .NET standard library?
int n = 3; // 0..8
int mask = 0xFF00;
byte result = (byte) (mask >> n);
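To spell out the n = 3 case: 0xFF00 >> 3 is 0x1FE0, and the cast to byte keeps only the low 8 bits, giving 0xE0, i.e. 11100000.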
Because there are only a few possibilities, you could just cache them:
// Each index adds another bit from the left, e.g. resultCache[3] == 11100000.
byte[] resultCache = { 0x00, 0x80, 0xC0, 0xE0, 0xF0, 0xF8, 0xFC, 0xFE, 0xFF };
You'd also get an exception instead of a silent error if you accidentally tried to get the value for n > 8.
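Usage is then just an array lookup, e.g.:
int n = 3;
byte result = resultCache[n]; // 0xE0 == 11100000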