What is the best way to divide a 32-bit integer into four (unsigned) chars in C#?
Quick'n'dirty:
int value = 0x48454C4F;
Console.WriteLine(Encoding.ASCII.GetString(
BitConverter.GetBytes(value).Reverse().ToArray()
));
This converts the int to bytes, reverses the byte array to get the correct order, and then gets the ASCII character representation from it.
EDIT: The Reverse method is an extension method from .NET 3.5, just for info. Reversing the byte order may also not be needed in your scenario.
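For instance, a minimal sketch that only reverses on little-endian machines (BitConverter.IsLittleEndian reports the byte order at runtime):
byte[] bytes = BitConverter.GetBytes(value);
if (BitConverter.IsLittleEndian)
Array.Reverse(bytes); // put the most significant byte first
Console.WriteLine(Encoding.ASCII.GetString(bytes));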
Char? Maybe you are looking for this handy little helper function?
Byte[] b = BitConverter.GetBytes(i);
Char c = (Char)b[0];
[...]
It's not clear if this is really what you want, but:
int x = yourNumber();
char a = (char)(x & 0xff);
char b = (char)((x >> 8) & 0xff);
char c = (char)((x >> 16) & 0xff);
char d = (char)((x >> 24) & 0xff);
This assumes you want the bytes interpreted as the lowest range of Unicode characters.
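For example, using the sample value from the first answer (an assumption; any int works) and printing the chars from the high byte down gives the packed text back:
int x = 0x48454C4F; // "HELO" packed into an int
char a = (char)(x & 0xff); // 'O'
char b = (char)((x >> 8) & 0xff); // 'L'
char c = (char)((x >> 16) & 0xff); // 'E'
char d = (char)((x >> 24) & 0xff); // 'H'
Console.WriteLine(new string(new[] { d, c, b, a })); // HELO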
I have tried it a few ways and clocked the time taken to convert 1000000 ints.
Built-in convert method, 325000 ticks:
Encoding.ASCII.GetChars(BitConverter.GetBytes(x));
Pointer conversion, 100000 ticks:
static unsafe char[] ToChars(int x)
{
byte* p = (byte*)&x;
char[] chars = new char[4];
chars[0] = (char)*p++;
chars[1] = (char)*p++;
chars[2] = (char)*p++;
chars[3] = (char)*p;
return chars;
}
Bitshifting, 77000 ticks:
public static char[] ToCharsBitShift(int x)
{
char[] chars = new char[4];
chars[0] = (char)(x & 0xFF);
chars[1] = (char)(x >> 8 & 0xFF);
chars[2] = (char)(x >> 16 & 0xFF);
chars[3] = (char)(x >> 24 & 0xFF);
return chars;
}
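For reference, the timings above came from a loop over the million values; a minimal sketch of that kind of harness (the exact one isn't shown here, this assumes System.Diagnostics.Stopwatch):
var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 1000000; i++)
ToCharsBitShift(i); // or one of the other two variants
sw.Stop();
Console.WriteLine(sw.ElapsedTicks + " ticks");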
To get the 8-bit blocks:
int a = i & 255; // bin 11111111
int b = i & 65280; // bin 1111111100000000
To break the upper three bytes down to a single byte each, just divide them by the proper number and perform another logical AND to get your final byte.
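A sketch of that divide-and-mask approach for the second and third bytes:
int b = (i & 65280) / 256; // second byte, brought down to the range 0-255
int c = (i & 16711680) / 65536; // third byte, mask is bin 111111110000000000000000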
Edit: Jason's solution with the bitshifts is much nicer of course.
.NET uses Unicode; a char is 2 bytes, not 1.
To convert binary data containing non-Unicode text, use the System.Text.Encoding class.
If you do want 4 bytes and not chars, replace char with byte in Jason's answer.
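For example, a minimal round trip through Encoding.ASCII (assuming the bytes really do hold ASCII text):
byte[] bytes = BitConverter.GetBytes(0x48454C4F);
string text = Encoding.ASCII.GetString(bytes); // decode bytes to chars
byte[] back = Encoding.ASCII.GetBytes(text); // and encode them again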
I am working on a project which outputs to an odd circuit and need to invert half the byte I am sending. So for example, if I am sending the number 100 as a byte, it comes out in the chip as 01100100, nice and easy. The problem is that I need it to be 10010100, i.e. the first nibble is inverted. This is because of how the outputs of the chip work.
I have been playing with the ~ operator, doing something like:
int b = ~a & 0x0000000F;
This inverts the second nibble. I can also invert the whole thing with:
int b = ~a & 0x000000FF;
But I want to get the first nibble of the byte and
int b = ~a & 0x000000F0;
doesn't give me the answer I am after.
Any suggestions?
To invert a bit, you XOR (exclusive or) it with 1.
So to invert the top nibble you have to do a ^ 0xF0;
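With the question's example value 100, that produces exactly the output described:
byte a = 100; // 01100100
byte b = (byte)(a ^ 0xF0); // 10010100 = 148, top nibble inverted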
With shifting:
b = (byte)((b & 0x0F) + ((~(b >> 4)) << 4));
Without shifting:
b = (byte)((b & 0x0F) + ((~(b & 0xF0)) & 0xF0));
(not that it matters much whether you shift or not...)
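Both variants give the same result for the question's example value:
byte b1 = (byte)((100 & 0x0F) + (~(100 >> 4) << 4)); // 148 = 10010100
byte b2 = (byte)((100 & 0x0F) + ((~(100 & 0xF0)) & 0xF0)); // also 148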
I'm currently struggling with Modbus TCP and ran into a problem with interpreting the response of a module. The response contains two values that are encoded in the bits of an array of three UInt16 values, where the first 8 bits of r[0] have to be ignored.
Let's say the UInt16 array is called r and the "final" values I want to get are val1 and val2. For the input values r[0]=768, r[1]=1536 and r[2]=0 (all UInt16), the desired output values are val1 = 3 and val2 = 6.
I already tried to (logically) right-shift r[0] by 8, but then the upper bits get lost because they are stored in the first 8 bits of r[1]. Do I have to concatenate all r-values first and bit-shift after that? How can I do that? Thanks in advance!
"I already tried to (logically) right-shift r[0] by 8, but then the upper bits get lost because they are stored in the first 8 bits of r[1]."
Well they're not "lost" - they're just in r[1].
It may be simplest to break it down step by step:
byte val1LowBits = (byte) (r[0] >> 8); // upper 8 bits of r[0]
byte val1HighBits = (byte) (r[1] & 0xff); // lower 8 bits of r[1]
byte val2LowBits = (byte) (r[1] >> 8); // upper 8 bits of r[1]
byte val2HighBits = (byte) (r[2] & 0xff); // lower 8 bits of r[2]
uint val1 = (uint) ((val1HighBits << 8) | val1LowBits);
uint val2 = (uint) ((val2HighBits << 8) | val2LowBits);
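Plugging in the question's sample values (assuming r arrives as a ushort[]) confirms the expected results:
ushort[] r = { 768, 1536, 0 };
uint val1 = (uint)(((r[1] & 0xff) << 8) | (r[0] >> 8)); // 3
uint val2 = (uint)(((r[2] & 0xff) << 8) | (r[1] >> 8)); // 6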
I am trying to write a swap function in C# to mimic the one in Delphi. According to the documentation the one in Delphi will do the following:
If the number is two bytes, bytes 1 and 2 are swapped.
If the number is four bytes, bytes 1 and 4 are swapped; bytes 2 and 3 remain where they are.
Below is the code I have.
int number = 17665024;
var hi = (byte)(number >> 24);
var lo = (byte)(number & 0xff);
return (number & 0x00FFFF00) + (lo & 0xFF000000) + (hi & 0x000000FF);
Some numbers seem to return what I expect, but most do not.
// Value in   // Expected   // Actual
17665024      887809        887809
5376          21            5376
-30720        136           16746751
3328          13            3328
It's probably a fairly obvious mistake to most, but I haven't dealt with bitwise shift operators much and I cannot seem to work out what I have done wrong.
Thanks in advance.
In C#, the data types short and int correspond to integral data types of 2 bytes and 4 bytes, respectively. The algorithm above applies to int (4 bytes).
This algorithm contains an error: (lo & 0xFF000000) will always return 0 because lo is a byte. What you probably intended was lo << 24, which shifts lo 24 bits to the left.
For an int data type, the proper function then becomes:
int SwapInt(int number)
{
var hi = (byte)(number >> 24);
var lo = (byte)(number & 0xff);
return ((number & 0xffff00) | (lo << 24) | hi);
}
For a short data type, the middle term disappears and we are left with simply:
short SwapShort(short number)
{
var hi = (byte)(number >> 8);
var lo = (byte)(number & 0xff);
return (short)((lo << 8) | hi);
}
Then SwapShort((short)5376) returns the expected value of 21. Note that SwapInt(5376) treats 5376 as a four-byte value and returns 5376. To treat integers that can be wholly expressed in two bytes as short, you can use:
int Swap(int n)
{
if (n >= short.MinValue && n <= short.MaxValue)
return SwapShort((short)n);
else
return SwapInt(n);
}
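Running the question's sample inputs through this combined Swap reproduces the expected column:
Console.WriteLine(Swap(17665024)); // 887809 (four-byte swap)
Console.WriteLine(Swap(5376)); // 21 (fits in a short, so two-byte swap)
Console.WriteLine(Swap(-30720)); // 136
Console.WriteLine(Swap(3328)); // 13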
I want to take the first 4 bits of one byte and all the bits of another byte and append them to each other.
The example below shows the result I need to achieve.
This is what I have now:
private void ParseLocation(int UpperLogicalLocation, int UnderLogicalLocation)
{
int LogicalLocation = UpperLogicalLocation & 0x0F; // Take bits 0-3
LogicalLocation += UnderLogicalLocation;
}
But this is not giving the right results.
int UpperLogicalLocation = 0x51;
int UnderLogicalLocation = 0x23;
int LogicalLocation = UpperLogicalLocation & 0x0F; // Take bits 0-3
LogicalLocation += UnderLogicalLocation;
Console.Write(LogicalLocation);
This should combine 0x51 (01010001) and 0x23 (00100011), so the result I want to achieve is 0001 + 00100011 = 000100100011 (0x123).
You will need to left-shift the UpperLogicalLocation bits by 8 before combining the bits:
int UpperLogicalLocation = 0x51;
int UnderLogicalLocation = 0x23;
int LogicalLocation = (UpperLogicalLocation & 0x0F) << 8; // Take bits 0-3 and shift into bits 8-11
LogicalLocation |= UnderLogicalLocation;
Console.WriteLine(LogicalLocation.ToString("x"));
Note that I also changed += to |= to better express what is happening.
The problem is that you're storing the upper bits into bits 0-3 of LogicalLocation instead of bits 8-11. You need to shift the bits into the right place. The following change should fix the problem:
int LogicalLocation = (UpperLogicalLocation & 0x0F) << 8;
Also note that the bits are more idiomatically combined using the logical-or operator. So your second line becomes:
LogicalLocation |= UnderLogicalLocation;
You can do this:
int LogicalLocation = (UpperLogicalLocation & 0x0F) << 8; // Take bit 0-3
LogicalLocation |= (UnderLogicalLocation & 0xFF);
...but be careful about endianness! Your documentation says UpperLogicalLocation should be stored in byte 3 and the next 8 bits in byte 4. To achieve this, the resulting int LogicalLocation needs to be split into these two bytes correctly.
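A sketch of that split, assuming byte 3 carries the upper four bits and byte 4 the lower eight (per the layout described in the question):
byte byte3 = (byte)(LogicalLocation >> 8); // 0x01 for the example value 0x123
byte byte4 = (byte)(LogicalLocation & 0xFF); // 0x23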
I have an array which contains signed int data; I need to convert each value in the array to 2 bytes. I am using C# and I tried BitConverter.GetBytes(int), but it returns a 4-byte array.
A signed 16-bit value is best represented as a short rather than int - so use BitConverter.GetBytes(short).
However, as an alternative:
byte lowByte = (byte) (value & 0xff);
byte highByte = (byte) ((value >> 8) & 0xff);
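For example, flattening the whole array two bytes per value (low byte first here, which is an assumption; swap the two assignments if your target expects big-endian order):
int[] values = { 256, -1, 42 }; // hypothetical input
byte[] result = new byte[values.Length * 2];
for (int i = 0; i < values.Length; i++)
{
result[2 * i] = (byte)(values[i] & 0xff); // low byte
result[2 * i + 1] = (byte)((values[i] >> 8) & 0xff); // high byte
}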