C# Bitwise Operators

Could someone please explain what the following code does?
private int ReadInt32(byte[] il, ref int position)
{
    return (((il[position++] | (il[position++] << 8)) | (il[position++] << 0x10)) | (il[position++] << 0x18));
}
I'm not sure I understand how the bitwise operators in this method work. Could someone please break it down for me?

The integer is given as a byte array.
Then each byte is shifted left by 0/8/16/24 bits and the shifted values are combined with bitwise OR (equivalent to summing them here, since the shifted bytes don't overlap) to get the integer value.
This is an Int32 in hexadecimal format:
0x10203040
It is represented as the following byte array (little-endian architecture, so bytes are in reverse order):
[0x40, 0x30, 0x20, 0x10]
In order to build the integer back from the array, each element is shifted, i.e. the following logic is performed:
a = 0x40 = 0x00000040
b = 0x30 << 8 = 0x00003000
c = 0x20 << 16 = 0x00200000
d = 0x10 << 24 = 0x10000000
then these values are OR'ed together:
int result = a | b | c | d;
this gives:
0x00000040 |
0x00003000 |
0x00200000 |
0x10000000
------------------
0x10203040

Think of it like this:
var i1 = il[position];
var i2 = il[position + 1] << 8;  // << 8 is equivalent to * 256
var i3 = il[position + 2] << 16;
var i4 = il[position + 3] << 24;
position = position + 4;
return i1 | i2 | i3 | i4;
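
As a quick check, here is a minimal round-trip sketch (assuming the little-endian byte order used above); the result can be compared against BitConverter.ToInt32:
byte[] il = { 0x40, 0x30, 0x20, 0x10 };   // little-endian bytes of 0x10203040
int position = 0;
int value = il[position++] | (il[position++] << 8) | (il[position++] << 16) | (il[position++] << 24);
Console.WriteLine($"0x{value:X8}");                        // 0x10203040
Console.WriteLine(value == BitConverter.ToInt32(il, 0));   // True on a little-endian machine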

Related

c# FTDI 24bit two's complement implement

I am a beginner at C# and would like some advice on how to solve the following problem:
My code reads 3 bytes from an ADC through an FTDI IC and calculates a value. The problem is that sometimes I get values below zero (less than 0000 0000 0000 0000 0000 0000) and my calculated value jumps to a big number. I need to implement two's complement, but I don't know how.
byte[] readData1 = new byte[3];
ftStatus = myFtdiDevice.Read(readData1, numBytesAvailable, ref numBytesRead); //read data from FTDI
int vrednost1 = (readData1[2] << 16) | (readData1[1] << 8) | (readData1[0]); //convert LSB MSB
double v = (((4.096 / 16777216) * vrednost1) / 4) * 250; //calculate (double, since the expression is floating-point)
Convert the data left-justified, so that your sign-bit lands in the spot your PC processor treats as a sign bit. Then right-shift back, which will automatically perform sign-extension.
int left_justified = (readData[2] << 24) | (readData[1] << 16) | (readData[0] << 8);
int result = left_justified >> 8;
Equivalently, one can mark the byte containing a sign bit as signed, so the processor will perform sign extension:
int result = (unchecked((sbyte)readData[2]) << 16) | (readData[1] << 8) | readData[0];
The second approach with a cast only works if the sign bit is already aligned left within any one of the bytes. The left-justification approach can work for arbitrary sizes, such as 18-bit two's complement readings. In this situation the cast to sbyte wouldn't do the job.
int left_justified18 = (readData[2] << 30) | (readData[1] << 22) | (readData[0] << 14);
int result18 = left_justified18 >> 14;
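Putting this together for the 24-bit case from the question, a minimal sketch might look like this (the sample bytes are made up; readData1 and the scaling formula come from the question):
byte[] readData1 = { 0xA2, 0x10, 0x25 };                  // hypothetical sample, LSB first as in the question
int left_justified = (readData1[2] << 24) | (readData1[1] << 16) | (readData1[0] << 8);
int vrednost1 = left_justified >> 8;                      // arithmetic right shift sign-extends the 24-bit value
double v = (((4.096 / 16777216) * vrednost1) / 4) * 250;  // same scaling as in the question
Console.WriteLine($"{vrednost1} -> {v}");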
Well, the only subtle point here is endianness (big or little). Assuming that byte[0] stands for the least significant byte, you can write:
private static int FromInt24(byte[] data) {
    int result = (data[2] << 16) | (data[1] << 8) | data[0];
    return data[2] < 128
        ? result                        // positive number
        : -((~result & 0xFFFFFF) + 1);  // negative number, its abs value is the 2's complement
}
Demo:
byte[][] tests = new byte[][] {
    new byte[] { 255, 255, 255 },
    new byte[] { 44, 1, 0 },
};
string report = string.Join(Environment.NewLine, tests
    .Select(test => $"[{string.Join(", ", test.Select(x => $"0x{x:X2}"))}] == {FromInt24(test)}"));
Console.Write(report);
Outcome:
[0xFF, 0xFF, 0xFF] == -1
[0x2C, 0x01, 0x00] == 300
If the data is big-endian (e.g. 300 == {0, 1, 44}), you have to swap the byte order:
private static int FromInt24(byte[] data) {
    int result = (data[0] << 16) | (data[1] << 8) | data[2];
    return data[0] < 128
        ? result
        : -((~result & 0xFFFFFF) + 1);
}
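For comparison, the little-endian FromInt24 above can also be written with the shift-based sign extension from the earlier answer (FromInt24Shift is just an illustrative name):
private static int FromInt24Shift(byte[] data) {
    // left-justify the 24 bits in an int, then let the arithmetic right shift sign-extend
    return ((data[2] << 24) | (data[1] << 16) | (data[0] << 8)) >> 8;
}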

Bits exchange trouble C#

I have to write a program that takes bits 3,4,5 and puts them into the place of bits 24,25,26, and then takes bits 24,25,26 (from the original number) and puts them in the place of bits 3,4,5. The code that I wrote successfully transfers 3,4,5 to 24,25,26, but I can't understand why it's not working the other way around. I also want to ask if there is an easier way to do this.
static void Main()
{
    Console.Write("Please input your number: ");
    int num = Convert.ToInt32(Console.ReadLine());
    int mask = 0;
    int bit = 0;
    int p = 0;
    int numP = 0;
    //take bits 3,4,5 and put them in the place of 24,25,26
    for (int i = 0; i < 3; i++)
    {
        p = 3 + i;
        numP = num >> p;
        bit = numP & 1;
        if (bit == 1)
        {
            mask = 1 << 24 + i;
            num = num | mask;
        }
        else
        {
            mask = ~(1 << 24 + i);
            num = num & mask;
        }
    }
    //take bits 24,25,26 and put them in the place of 3,4,5
    for (int i = 0; i < 3; i++)
    {
        p = 24 + i;
        numP = num >> p;
        bit = numP & 1;
        if (bit == 1)
        {
            mask = 1 << 3 + i;
            num = num | mask;
        }
        else
        {
            mask = ~(1 << 3 + i);
            num = num & mask;
        }
    }
    Console.WriteLine("Your new number is: {0}", num);
}
To switch the bits you need to store away the original bits before you copy the new bits in.
As you want to switch three bits that are next to each other with three other bits that are next to each other, it can be done quite easily:
int lo = num & 0x00000038; // get bits 3-5
int hi = num & 0x07000000; // get bits 24-26
num &= ~0x07000038; // clear bits 3-5 and 24-26
num |= lo << 21; // put bits 3-5 in 24-26
num |= hi >> 21; // put bits 24-26 in 3-5
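A quick sanity check of that snippet with a made-up input value:
int num = 0x02000028;              // bits 3, 5 and 25 set
int lo = num & 0x00000038;         // get bits 3-5
int hi = num & 0x07000000;         // get bits 24-26
num &= ~0x07000038;                // clear bits 3-5 and 24-26
num |= lo << 21;                   // put bits 3-5 in 24-26
num |= hi >> 21;                   // put bits 24-26 in 3-5
Console.WriteLine($"0x{num:X8}");  // prints 0x05000010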
Edit:
Doing the same one bit at a time in a loop; instead of having two loops and copying bits, you can do it with one loop where you swap bits, which solves the problem of the first loop overwriting the bits that you need in the second loop:
int numP, bit1, bit2, mask1, mask2;
//swap bits 3,4,5 with bits 24,25,26
for (int i = 0; i < 3; i++) {
    // get bit 3 (,4,5)
    numP = num >> (3 + i);
    bit1 = numP & 1;
    // get bit 24 (,25,26)
    numP = num >> (24 + i);
    bit2 = numP & 1;
    // shift bit 3 (,4,5) to position 24 (,25,26)
    bit1 = bit1 << (24 + i);
    // shift bit 24 (,25,26) to position 3 (,4,5)
    bit2 = bit2 << (3 + i);
    // set bit 3 (,4,5) to zero
    mask1 = 1 << (3 + i);
    num = num & ~mask1;
    // set bit 24 (,25,26) to zero
    mask2 = 1 << (24 + i);
    num = num & ~mask2;
    // put bit 3 (,4,5) in bit 24 (,25,26)
    num = num | bit1;
    // put bit 24 (,25,26) in bit 3 (,4,5)
    num = num | bit2;
}
Assuming you are numbering bits from least- to most-significant (the proper way):
3322 2222 2222 1111 1111 11
1098 7654 3210 9876 5432 1098 7654 3210
xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx
Your masks are
0x00000038 (binary: 0000 0000 0000 0000 0000 0000 0011 1000) will extract bits 3-5
0x07000000 (binary: 0000 0111 0000 0000 0000 0000 0000 0000) will extract bits 24-26
The code is easy. This is one way to do it:
public uint exchange_bits( uint a )
{
    uint swapped = ( a & ~0x07000038u )          // clear bits 3-5 and 24-26 (unsigned literal so the & stays uint)
                 | ( ( a & 0x00000038 ) << 21 )  // OR in bits 3-5, shifted left 21 bits
                 | ( ( a & 0x07000000 ) >> 21 ); // OR in bits 24-26, shifted right 21 bits
    return swapped;
}
It is always my opinion that code should be readable by a human, and only incidentally executable. If you are not running this in a tight loop, you could do the following:
// requires using System.Collections.Specialized; for BitVector32
private static int SwapBits(int ind)
{
    BitVector32 bv = new BitVector32(ind);
    BitVector32 bcopy = bv;
    bcopy[1 << 24] = bv[1 << 3];
    bcopy[1 << 25] = bv[1 << 4];
    bcopy[1 << 26] = bv[1 << 5];
    bcopy[1 << 3] = bv[1 << 24];
    bcopy[1 << 4] = bv[1 << 25];
    bcopy[1 << 5] = bv[1 << 26];
    return bcopy.Data;
}
Produces:
old 0x02000028 00000010000000000000000000101000
new 0x05000010 00000101000000000000000000010000
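A usage sketch that reproduces the output above:
int old = 0x02000028;
int swapped = SwapBits(old);
Console.WriteLine($"old 0x{old:X8} {Convert.ToString(old, 2).PadLeft(32, '0')}");
Console.WriteLine($"new 0x{swapped:X8} {Convert.ToString(swapped, 2).PadLeft(32, '0')}");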
If you are in a tight loop, I would do the following:
private static int SwapBitsInt(int ind)
{
    // mask out the ones we swap
    int outd = ind & ~0x07000038;
    // set the top 3 and bottom 3. The sections are 21 bits apart.
    outd |= (ind & 0x00000038) << 21;
    outd |= (ind & 0x07000000) >> 21;
    return outd;
}
The constants 0x00000038 and 0x07000000 are the results of 1 << 3 | 1 << 4 | 1 << 5 and 1 << 24 | 1 << 25 | 1 << 26. An easy way to find them is to use the "Programmer" mode in Windows Calculator, and click the bits you want.
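For example, building the masks in code instead of by hand (illustrative variable names):
int mask345 = (1 << 3) | (1 << 4) | (1 << 5);       // 0x00000038
int mask242526 = (1 << 24) | (1 << 25) | (1 << 26); // 0x07000000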

Bitshift over Int32.max fails

Ok, so I have two methods
public static long ReadLong(this byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    long length = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24
        | data[4] << 32 | data[5] << 40 | data[6] << 48 | data[7] << 56;
    return length;
}
public static void WriteLong(this byte[] data, long i)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    data[0] = (byte)((i >> (8*0)) & 0xFF);
    data[1] = (byte)((i >> (8*1)) & 0xFF);
    data[2] = (byte)((i >> (8*2)) & 0xFF);
    data[3] = (byte)((i >> (8*3)) & 0xFF);
    data[4] = (byte)((i >> (8*4)) & 0xFF);
    data[5] = (byte)((i >> (8*5)) & 0xFF);
    data[6] = (byte)((i >> (8*6)) & 0xFF);
    data[7] = (byte)((i >> (8*7)) & 0xFF);
}
So WriteLong works correctly (verified against BitConverter.GetBytes()). The problem is ReadLong. I have a fairly good understanding of this stuff, but I'm guessing what's happening is that the OR operations happen as 32-bit ints, so at Int32.MaxValue it rolls over. I'm not sure how to avoid that. My first instinct was to make an int from the lower half and an int from the upper half and combine them, but I'm not quite knowledgeable enough to know even where to start with that, so this is what I tried:
public static long ReadLong(byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    long l1 = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24;
    long l2 = data[4] | data[5] << 8 | data[6] << 16 | data[7] << 24;
    return l1 | l2 << 32;
}
This didn't work though, at least not for larger numbers; it seems to work for everything below zero.
Here's how I run it
void Main()
{
    var larr = new long[5]{
        long.MinValue,
        0,
        long.MaxValue,
        1,
        -2000000000
    };
    foreach(var l in larr)
    {
        var arr = new byte[8];
        arr.WriteLong(l);
        Console.WriteLine(ByteString(arr));
        var end = ReadLong(arr);
        var end2 = BitConverter.ToInt64(arr,0);
        Console.WriteLine(l + " == " + end + " == " + end2);
    }
}
and here's what I get (using the modified ReadLong method):
0:0:0:0:0:0:0:128
-9223372036854775808 == -9223372036854775808 == -9223372036854775808
0:0:0:0:0:0:0:0
0 == 0 == 0
255:255:255:255:255:255:255:127
9223372036854775807 == -1 == 9223372036854775807
1:0:0:0:0:0:0:0
1 == 1 == 1
0:108:202:136:255:255:255:255
-2000000000 == -2000000000 == -2000000000
The problem is not the OR, it is the bitshift. This has to be done with longs. Currently, the data[i] are implicitly converted to int. Just change that to long and that's it, i.e.
public static long ReadLong(byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    long length = (long)data[0] | (long)data[1] << 8 | (long)data[2] << 16 | (long)data[3] << 24
        | (long)data[4] << 32 | (long)data[5] << 40 | (long)data[6] << 48 | (long)data[7] << 56;
    return length;
}
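A quick round-trip sketch against the WriteLong extension from the question (assuming both methods are in scope):
var buf = new byte[8];
buf.WriteLong(long.MaxValue);
Console.WriteLine(ReadLong(buf));                 // 9223372036854775807
Console.WriteLine(BitConverter.ToInt64(buf, 0));  // 9223372036854775807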
You are doing int arithmetic and then assigning to long, try:
long length = data[0] | data[1] << 8L | data[2] << 16L | data[3] << 24L
| data[4] << 32L | data[5] << 40L | data[6] << 48L | data[7] << 56L;
This should define your constants as longs forcing it to use long arithmetic.
EDIT: Turns out this may not work, according to the comments below: while the bitshift operator accepts many types on the left, it only takes an int on the right. Georg's should be the accepted answer.

Convert 3 Hex (byte) to Decimal

How do I take the hex bytes 0A 25 10 A2 and get the end result of 851.00625? The combined value must be multiplied by 0.000005. I have tried the following code without success:
byte oct6 = 0x0A;
byte oct7 = 0x25;
byte oct8 = 0x10;
byte oct9 = 0xA2;
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 32))) * 0.000005M;
I am not getting 851.00625 as the BaseFrequency.
oct6 is being shifted 8 bits too far (32 instead of 24):
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24))) * 0.000005M;
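For reference, the corrected expression combines to 0x0A2510A2 = 170201250, and 170201250 * 0.000005 = 851.00625; a quick check:
byte oct6 = 0x0A, oct7 = 0x25, oct8 = 0x10, oct9 = 0xA2;
int combined = oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24);
decimal BaseFrequency = combined * 0.000005M;
Console.WriteLine($"0x{combined:X8} -> {BaseFrequency}"); // 0x0A2510A2 -> 851.006250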

Convert int to little-endian formatted bytes in C++ for blobId in Azure

I'm working with the base64 encoding for Azure (http://msdn.microsoft.com/en-us/library/dd135726.aspx) and I can't seem to work out how to get the required string back. I'm able to do this in C#, where I do the following:
int blockId = 5000;
var blockIdBytes = BitConverter.GetBytes(blockId);
Console.WriteLine(blockIdBytes);
string blockIdBase64 = Convert.ToBase64String(blockIdBytes);
Console.WriteLine(blockIdBase64);
Which prints out (in LINQPad):
Byte[] (4 items)
| 136 |
| 19 |
| 0 |
| 0 |
iBMAAA==
In Qt/C++ I tried a few approaches, all of them returning the wrong value.
const int a = 5000;
QByteArray b;
for(int i = 0; i != sizeof(a); ++i) {
b.append((char)(a&(0xFF << i) >>i));
}
qDebug() << b.toBase64(); // "iIiIiA=="
qDebug() << QByteArray::number(a).toBase64(); // "NTAwMA=="
qDebug() << QString::number(a).toUtf8().toBase64(); // "NTAwMA=="
How can I get the same result as the C# version?
See my comment for the problem with your for loop. It's shifting by one bit more each pass, but actually it should be 8 bits. Personally, I prefer this to a loop:
b.append(static_cast<char>(a >> 24));
b.append(static_cast<char>((a >> 16) & 0xff));
b.append(static_cast<char>((a >> 8) & 0xff));
b.append(static_cast<char>(a & 0xff));
The code above is for network standard byte order (big endian). Flip the order of the four operations from last to first for little endian byte order.
I ended up doing the following:
QByteArray temp;
int blockId = 5000;
for(int i = 0; i != sizeof(blockId); i++) {
    temp.append((char)(blockId >> (i * 8)));
}
qDebug() << temp.toBase64(); // "iBMAAA==" which is correct
I think this would be clearer, though it may be considered ill-styled...
int i = 0x01020304;
char (&bytes)[4] = (char (&)[4])i;
and you can access each byte directly with bytes[0], bytes[1], ... and do what ever you want to do with them.
