I have an array containing signed int data, and I need to convert each value in the array to 2 bytes. I am using C#, and I tried BitConverter.GetBytes(int), but it returns a 4-byte array.
A signed 16-bit value is best represented as a short rather than an int, so use BitConverter.GetBytes(short).
However, as an alternative:
byte lowByte = (byte) (value & 0xff);
byte highByte = (byte) ((value >> 8) & 0xff);
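If you need to do this for a whole array (as in the question), a minimal sketch along these lines should work; the variable names are placeholders and it assumes every value actually fits in 16 bits:
int[] values = { 1, 256, -2 };               // placeholder input data
byte[] result = new byte[values.Length * 2];
for (int i = 0; i < values.Length; i++)
{
    short v = (short) values[i];                  // assumes the value fits in 16 bits
    result[2 * i]     = (byte) (v & 0xff);        // low byte
    result[2 * i + 1] = (byte) ((v >> 8) & 0xff); // high byte
}
Swap the two assignments if the consumer expects the high byte first.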
So I'm currently writing a library that will help me interface with a fingerprint scanner over a USB port (this one, in fact, which is just a resold Zhiantec; documentation here).
The problem I'm running into is this: the documentation specifies that the Header, Package Length and Checksum bytes are to be transferred high byte first. Not a big deal; after a quick Google search I found this answer by Jon Skeet showing exactly how to do this. I then put it into two small helper methods that look like this:
public static class ByteHelper
{
    // Low/high byte arithmetic:
    // byte upper = (byte) (number >> 8);
    // byte lower = (byte) (number & 0xff);
    public static byte[] GetBytesOrderedLowHigh(ushort toBytes)
    {
        return new[] {(byte) (toBytes & 0xFF), (byte) (toBytes >> 8)};
    }

    public static byte[] GetBytesOrderedHighLow(ushort toBytes)
    {
        return new[] {(byte) (toBytes >> 8), (byte) (toBytes & 0xFF)};
    }
}
Which I'm testing to see if they do the correct thing with this code:
// Expected Output '0A-00', actual '00-0A'
Console.WriteLine(BitConverter.ToString(ByteHelper.GetBytesOrderedHighLow(10)));
// Expected Output '00-0A', actual '0A-00'
Console.WriteLine(BitConverter.ToString(ByteHelper.GetBytesOrderedLowHigh(10)));
But I'm getting the wrong output (see the comments above the Console.WriteLine statements). Can anyone explain to me why this happens and how to fix it?
The results you get are correct.
Your LowHigh method switches the two bytes:
00-0A becomes 0A-00
Your HighLow method only converts the ushort into a byte array:
00-0A stays 00-0A
Here is a step-by-step example of your logic, with some extra output for better understanding:
ulong x = 0x0A0B0C;
Console.WriteLine(x.ToString("X6"));
// We're only interested in last 2 Bytes:
ulong x2 = x & 0xFFFF;
// Now let's get those last 2 bytes:
byte upperByte = (byte)(x2 >> 8); // Shift right by 8 bits -> the lowest byte gets dropped
Console.WriteLine(upperByte.ToString("X2"));
byte lowerByte = (byte)(x2 & 0xFF); // Only last Byte is left
Console.WriteLine(lowerByte.ToString("X2"));
// The question is: What do you want to do with it? Switch or leave it in this order?
// leave them in the current order:
Console.WriteLine(BitConverter.ToString(new byte[] { upperByte, lowerByte }));
// switch positions:
Console.WriteLine(BitConverter.ToString(new byte[] { lowerByte, upperByte }));
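Applied to the ByteHelper methods from the question, it is the expectations in the comments that need to be swapped; assuming the class compiles as posted, this is what it prints:
// 10 == 0x000A, so high byte first is '00-0A'
Console.WriteLine(BitConverter.ToString(ByteHelper.GetBytesOrderedHighLow(10))); // 00-0A
// ...and low byte first is '0A-00'
Console.WriteLine(BitConverter.ToString(ByteHelper.GetBytesOrderedLowHigh(10))); // 0A-00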
In my C# application, I have a byte array as follows:
byte[] byteArray = {0x2, 0x2, 0x6, 0x6};
I need to combine the first two elements, i.e. 0x2 and 0x2, and assign the result to a byte variable. Similarly, the last two elements should be assigned to another byte variable:
byte FirstByte = 0x22;
byte SecondByte = 0x66;
I can split the array into sub-arrays, but I am not able to find a way to combine each pair into a single byte.
You can just bitwise OR them together, shifting one of the nibbles using <<:
byte firstByte = (byte)(byteArray[0] | byteArray[1] << 4);
byte secondByte = (byte)(byteArray[2] | byteArray[3] << 4);
You didn't specify the order in which to combine the nibbles, so you might want this:
byte firstByte = (byte)(byteArray[1] | byteArray[0] << 4);
byte secondByte = (byte)(byteArray[3] | byteArray[2] << 4);
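A quick check against the sample data from the question (assuming the first ordering is the one wanted):
byte[] byteArray = { 0x2, 0x2, 0x6, 0x6 };
byte firstByte = (byte)(byteArray[0] | byteArray[1] << 4);  // 0x2 | 0x20 == 0x22
byte secondByte = (byte)(byteArray[2] | byteArray[3] << 4); // 0x6 | 0x60 == 0x66
Console.WriteLine("{0:X2} {1:X2}", firstByte, secondByte);  // prints "22 66"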
I'm currently struggling with Modbus TCP and ran into a problem interpreting the response of a module. The response contains two values encoded in the bits of an array of three UInt16 values, where the first 8 bits of r[0] have to be ignored.
Let's say the UInt16 array is called r and the "final" values I want to get are val1 and val2.
The desired output values are val1 (=3) and val2 (=6) for the input values r[0]=768, r[1]=1536 and r[2]=0, all values as UInt16.
I already tried to (logically) bit-rightshift r[0] by 8, but then the upper bits get lost because they are stored in the first 8 bits of r[1]. Do I have to concatenate all r-values first and bit-shift after that? How can I do that? Thanks in advance!
I already tried to (logically) bit-rightshift r[0] by 8, but then the upper bits get lost because they are stored in the first 8 bits of r[1].
Well they're not "lost" - they're just in r[1].
It may be simplest to break it down step by step:
byte val1LowBits = (byte) (r[0] >> 8);
byte val1HighBits = (byte) (r[1] & 0xff);
byte val2LowBits = (byte) (r[1] >> 8);
byte val2HighBits = (byte) (r[2] & 0xff);
uint val1 = (uint) ((val1HighBits << 8) | val1LowBits);
uint val2 = (uint) ((val2HighBits << 8) | val2LowBits);
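Wrapped up as a method and checked against the sample values from the question, a sketch could look like this (the method name is just illustrative, and it uses C# 7 value tuples):
static (uint val1, uint val2) DecodeResponse(ushort[] r)
{
    byte val1LowBits = (byte) (r[0] >> 8);
    byte val1HighBits = (byte) (r[1] & 0xff);
    byte val2LowBits = (byte) (r[1] >> 8);
    byte val2HighBits = (byte) (r[2] & 0xff);
    uint val1 = (uint) ((val1HighBits << 8) | val1LowBits);
    uint val2 = (uint) ((val2HighBits << 8) | val2LowBits);
    return (val1, val2);
}

// r[0]=768 (0x0300), r[1]=1536 (0x0600), r[2]=0  ->  val1 == 3, val2 == 6
var (val1, val2) = DecodeResponse(new ushort[] { 768, 1536, 0 });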
I am trying to write a swap function in C# to mimic the one in Delphi. According to the documentation the one in Delphi will do the following:
If the number is two bytes, bytes 1 and 2 are swapped.
If the number is four bytes, bytes 1 and 4 are swapped; bytes 2 and 3 remain where they are.
Below is the code I have.
int number = 17665024;
var hi = (byte)(number >> 24);
var lo = (byte)(number & 0xff);
return (number & 0x00FFFF00) + (lo & 0xFF000000) + (hi & 0x000000FF);
Some numbers seem to return what I expect, but most do not.
// Value in     Expected    Actual
// 17665024     887809      887809
// 5376         21          5376
// -30720       136         16746751
// 3328         13          3328
It's probably a fairly obvious mistake to most, but I haven't dealt with bitwise shift operators much and I cannot seem to work out what I have done wrong.
Thanks in advance.
In C#, the data types short and int correspond to integral data types of 2 bytes and 4 bytes, respectively. The algorithm above applies to int (4 bytes).
This algorithm contains an error: (lo & 0xFF000000) will always return 0 because lo is a byte. What you probably intended was lo << 24, which shifts lo 24 bits to the left.
For an int data type, the proper function then becomes:
int SwapInt(int number)
{
    var hi = (byte)(number >> 24);
    var lo = (byte)(number & 0xff);
    return ((number & 0xffff00) | (lo << 24) | hi);
}
For a short data type, the middle term disappears and we are left with simply:
short SwapShort(short number)
{
    var hi = (byte)(number >> 8);
    var lo = (byte)(number & 0xff);
    return (short)((lo << 8) | hi);
}
Then SwapShort((short)5376) returns the expected value of 21, while SwapInt(5376) treats 5376 as a full 4-byte int and returns 5376. To treat integers that can be wholly expressed in two bytes as a short, you can use:
int Swap(int n)
{
    if (n >= short.MinValue && n <= short.MaxValue)
        return SwapShort((short)n);
    else
        return SwapInt(n);
}
What is the best way to divide a 32-bit integer into four (unsigned) chars in C#?
Quick'n'dirty:
int value = 0x48454C4F;
Console.WriteLine(Encoding.ASCII.GetString(
    BitConverter.GetBytes(value).Reverse().ToArray()
));
Converting the int to bytes, reversing the byte-array for the correct order and then getting the ASCII character representation from it.
EDIT: The Reverse method is an extension method from .NET 3.5, just for info. Reversing the byte order may also not be needed in your scenario.
Char? Maybe you are looking for this handy little helper function?
Byte[] b = BitConverter.GetBytes(i);
Char c = (Char)b[0];
[...]
It's not clear if this is really what you want, but:
int x = yourNumber();
char a = (char)(x & 0xff);
char b = (char)((x >> 8) & 0xff);
char c = (char)((x >> 16) & 0xff);
char d = (char)((x >> 24) & 0xff);
This assumes you want the bytes interpreted as the lowest range of Unicode characters.
I have tried it a few ways and clocked the time taken to convert 1000000 ints.
Built-in convert method, 325000 ticks:
Encoding.ASCII.GetChars(BitConverter.GetBytes(x));
Pointer conversion, 100000 ticks:
static unsafe char[] ToChars(int x)
{
    byte* p = (byte*)&x;
    char[] chars = new char[4];
    chars[0] = (char)*p++;
    chars[1] = (char)*p++;
    chars[2] = (char)*p++;
    chars[3] = (char)*p;
    return chars;
}
Bitshifting, 77000 ticks:
public static char[] ToCharsBitShift(int x)
{
    char[] chars = new char[4];
    chars[0] = (char)(x & 0xFF);
    chars[1] = (char)(x >> 8 & 0xFF);
    chars[2] = (char)(x >> 16 & 0xFF);
    chars[3] = (char)(x >> 24 & 0xFF);
    return chars;
}
To get the 8-bit blocks:
int a = i & 255; // bin 11111111
int b = i & 65280; // bin 1111111100000000
To break the upper bytes down into single bytes, divide each masked value by the proper power of 256 and perform another logical AND to get your final byte.
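A small sketch of that mask-then-divide approach (the sample value is only for illustration; the top byte uses a shift to avoid the sign bit):
int i = 0x0A0B0C0D;               // placeholder sample value
int a = i & 0xFF;                 // 0x0D - lowest byte, no division needed
int b = (i & 0xFF00) / 0x100;     // 0x0C - mask, then divide to move it down
int c = (i & 0xFF0000) / 0x10000; // 0x0B
int d = (i >> 24) & 0xFF;         // 0x0A - shift for the top byte to avoid the sign bit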
Edit: Jason's solution with the bitshifts is much nicer of course.
.NET uses Unicode; a char is 2 bytes, not 1.
To convert binary data containing non-Unicode text, use the System.Text.Encoding class.
If you do want 4 bytes and not chars, then replace char with byte in Jason's answer.
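For reference, the byte variant of that answer would look like this (a sketch mirroring the char version above):
int x = yourNumber();
byte a = (byte)(x & 0xff);
byte b = (byte)((x >> 8) & 0xff);
byte c = (byte)((x >> 16) & 0xff);
byte d = (byte)((x >> 24) & 0xff);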