Get integer from last 12 bits of 2 bytes in C#

I suspect this is an easy one.
I need to get a number from the first 4 bits and another number from the last 12 bits
of 2 bytes.
So here is what I have, but it doesn't seem to be right:
byte[] data = new byte[2];
//assume byte array contains data
var _4bit = data[0] >> 4;
var _12bit = data[0] >> 8 | data[1] & 0xff;

data[0] >> 8 is 0. Remember that your data is defined as byte[], so each item holds only 8 bits; shifting right by 8 effectively cuts ALL the bits off data[0].
What you want instead is to take the lowest 4 bits of that byte with a bitwise AND (00001111 = 0x0F) and then shift them left as needed.
So try this:
var _4bit = data[0] >> 4;
var _12bit = ((data[0] & 0x0F) << 8) | (data[1] & 0xff);
It's also worth noting that the trailing & 0xFF is not needed, since data[1] is already a byte.
On bits, step by step:
byte[] data = { aaaabbbb, cccccccc }
var _4bit = data[0] >> 4;
= aaaabbbb >> 4
= 0000aaaa
var _12bit = ( (data[0] & 0x0F) << 8) | ( data[1] & 0xff);
= ((aaaabbbb & 0x0F) << 8) | (cccccccc & 0xff);
= ( 0000bbbb << 8) | ( cccccccc );
= ( 0000bbbb00000000 ) | ( cccccccc );
= 0000bbbbcccccccc;
BTW, also note that the results of the & and | operators are typed as int, so 32 bits; I've omitted the leading zeroes for clarity and written the values as 8 bits only to keep it brief!
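As a quick sanity check, here is a minimal sketch (the sample bytes are made up for illustration) that runs both expressions over a known pair of bytes:
byte[] data = { 0xAB, 0xCD };                    // aaaabbbb = 1010 1011, cccccccc = 1100 1101
int _4bit  = data[0] >> 4;                       // 0xA   (10)
int _12bit = ((data[0] & 0x0F) << 8) | data[1];  // 0xBCD (3021)
Console.WriteLine($"{_4bit:X} {_12bit:X3}");     // prints "A BCD"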

Related

c# FTDI 24bit two's complement implement

I am a beginner at C# and would like some advice on how to solve the following problem:
My code reads 3 bytes from an ADC through an FTDI IC and calculates a value. The problem is that sometimes I get values below zero (less than 0000 0000 0000 0000 0000 0000) and my calculated value jumps to a big number. I need to implement two's complement, but I don't know how.
byte[] readData1 = new byte[3];
ftStatus = myFtdiDevice.Read(readData1, numBytesAvailable, ref numBytesRead); //read data from FTDI
int vrednost1 = (readData1[2] << 16) | (readData1[1] << 8) | (readData1[0]); //convert LSB MSB
double v = ((((4.096 / 16777216) * vrednost1) / 4) * 250); //calculate
Convert the data left-justified, so that your sign bit lands in the spot the processor treats as a sign bit. Then right-shift back, which automatically performs sign extension.
int left_justified = (readData[2] << 24) | (readData[1] << 16) | (readData[0] << 8);
int result = left_justified >> 8;
Equivalently, one can mark the byte containing a sign bit as signed, so the processor will perform sign extension:
int result = (unchecked((sbyte)readData[2]) << 16) | (readData[1] << 8) | readData[0];
The second approach with a cast only works if the sign bit is already aligned to the left within one of the bytes. The left-justification approach works for arbitrary sizes, such as 18-bit two's-complement readings, where a cast to sbyte wouldn't do the job:
int left_justified18 = (readData[2] << 30) | (readData[1] << 22) | (readData[0] << 14);
int result18 = left_justified18 >> 14;
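As a quick sanity check of the 24-bit form (sample bytes made up; 0x800000 is the most negative 24-bit value):
byte[] readData = { 0x00, 0x00, 0x80 };  // 24-bit two's complement for -8388608, LSB first
int left_justified = (readData[2] << 24) | (readData[1] << 16) | (readData[0] << 8);
int result = left_justified >> 8;        // the arithmetic shift sign-extends
Console.WriteLine(result);               // prints -8388608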
Well, the only subtle point here is endianness (big or little). Assuming that byte[0] is the least significant byte, you can write:
private static int FromInt24(byte[] data) {
    int result = (data[2] << 16) | (data[1] << 8) | data[0];
    return data[2] < 128
        ? result                        // positive number
        : -((~result & 0xFFFFFF) + 1);  // negative number; its absolute value is the two's complement
}
Demo:
byte[][] tests = new byte[][] {
    new byte[] { 255, 255, 255 },
    new byte[] { 44, 1, 0 },
};
string report = string.Join(Environment.NewLine, tests
    .Select(test => $"[{string.Join(", ", test.Select(x => $"0x{x:X2}"))}] == {FromInt24(test)}"));
Console.Write(report);
Outcome:
[0xFF, 0xFF, 0xFF] == -1
[0x2C, 0x01, 0x00] == 300
If your data is big-endian (e.g. 300 == {0, 1, 44}), you have to swap the bytes:
private static int FromInt24(byte[] data) {
    int result = (data[0] << 16) | (data[1] << 8) | data[2];
    return data[0] < 128
        ? result
        : -((~result & 0xFFFFFF) + 1);
}
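As a side note, the left-justify-then-shift trick from the previous answer also works here and avoids the branch (a sketch under the same big-endian assumption; the method name is mine):
private static int FromInt24BigEndian(byte[] data) {
    // left-justify the 24-bit value, then arithmetic-shift back to sign-extend
    return ((data[0] << 24) | (data[1] << 16) | (data[2] << 8)) >> 8;
}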

Bitshift over Int32.max fails

OK, so I have two methods:
public static long ReadLong(this byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    long length = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24
        | data[4] << 32 | data[5] << 40 | data[6] << 48 | data[7] << 56;
    return length;
}
public static void WriteLong(this byte[] data, long i)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    data[0] = (byte)((i >> (8*0)) & 0xFF);
    data[1] = (byte)((i >> (8*1)) & 0xFF);
    data[2] = (byte)((i >> (8*2)) & 0xFF);
    data[3] = (byte)((i >> (8*3)) & 0xFF);
    data[4] = (byte)((i >> (8*4)) & 0xFF);
    data[5] = (byte)((i >> (8*5)) & 0xFF);
    data[6] = (byte)((i >> (8*6)) & 0xFF);
    data[7] = (byte)((i >> (8*7)) & 0xFF);
}
So WriteLong works correctly (verified against BitConverter.GetBytes()). The problem is ReadLong. I have a fairly good understanding of this stuff, but I'm guessing what's happening is that the OR operations happen as 32-bit ints, so at Int32.MaxValue it rolls over. I'm not sure how to avoid that. My first instinct was to make an int from the lower half and an int from the upper half and combine them, but I'm not quite knowledgeable enough to know even where to start with that, so this is what I tried:
public static long ReadLong(byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    long l1 = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24;
    long l2 = data[4] | data[5] << 8 | data[6] << 16 | data[7] << 24;
    return l1 | l2 << 32;
}
This didn't work though, at least not for larger numbers; it seems to work for everything below zero.
Here's how I run it:
void Main()
{
    var larr = new long[5]{
        long.MinValue,
        0,
        long.MaxValue,
        1,
        -2000000000
    };
    foreach(var l in larr)
    {
        var arr = new byte[8];
        WriteLong(arr, l);
        Console.WriteLine(ByteString(arr)); // ByteString is a helper that joins the bytes with ':'
        var end = ReadLong(arr);
        var end2 = BitConverter.ToInt64(arr, 0);
        Console.WriteLine(l + " == " + end + " == " + end2);
    }
}
and here's what I get (using the modified ReadLong method):
0:0:0:0:0:0:0:128
-9223372036854775808 == -9223372036854775808 == -9223372036854775808
0:0:0:0:0:0:0:0
0 == 0 == 0
255:255:255:255:255:255:255:127
9223372036854775807 == -1 == 9223372036854775807
1:0:0:0:0:0:0:0
1 == 1 == 1
0:108:202:136:255:255:255:255
-2000000000 == -2000000000 == -2000000000
The problem is not the OR, it is the bit shift, which has to be done on longs. Currently, the data[i] are implicitly converted to int. Just change that to long and that's it, i.e.:
public static long ReadLong(byte[] data)
{
    if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
    long length = (long)data[0] | (long)data[1] << 8 | (long)data[2] << 16 | (long)data[3] << 24
        | (long)data[4] << 32 | (long)data[5] << 40 | (long)data[6] << 48 | (long)data[7] << 56;
    return length;
}
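A quick round-trip check of the corrected version, reusing the question's WriteLong (values chosen arbitrarily):
var buffer = new byte[8];
buffer.WriteLong(long.MaxValue);
Console.WriteLine(ReadLong(buffer));                // 9223372036854775807
Console.WriteLine(BitConverter.ToInt64(buffer, 0)); // 9223372036854775807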
You are doing int arithmetic and then assigning the result to long; try:
long length = data[0] | data[1] << 8L | data[2] << 16L | data[3] << 24L
| data[4] << 32L | data[5] << 40L | data[6] << 48L | data[7] << 56L;
This should define your constants as longs, forcing it to use long arithmetic.
EDIT: It turns out this may not work: as the comments below point out, while the bitshift operator accepts several types on the left, it only takes an int on the right. Georg's should be the accepted answer.

Convert 3 Hex (byte) to Decimal

How do I take the hex bytes 0A 25 10 A2 and get the end result of 851.00625? The combined value must be multiplied by 0.000005. I have tried the following code without success:
byte oct6 = 0x0A;
byte oct7 = 0x25;
byte oct8 = 0x10;
byte oct9 = 0xA2;
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 32))) * 0.000005M;
I am not getting 851.00625 as the BaseFrequency.
oct6 is being shifted 8 bits too far (32 instead of 24):
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24))) * 0.000005M;
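A quick check of the arithmetic: 0x0A2510A2 is 170,201,250, and 170,201,250 × 0.000005 = 851.00625.
byte oct6 = 0x0A, oct7 = 0x25, oct8 = 0x10, oct9 = 0xA2;
int raw = oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24);  // 0x0A2510A2 = 170201250
Console.WriteLine(raw * 0.000005M);                          // 851.006250 (i.e. 851.00625)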

Convert int to little-endian formatted bytes in C++ for blobId in Azure

Working with a base64 encoding for Azure (http://msdn.microsoft.com/en-us/library/dd135726.aspx), I can't seem to work out how to get the required string back. I'm able to do this in C#, where I do the following:
int blockId = 5000;
var blockIdBytes = BitConverter.GetBytes(blockId);
Console.WriteLine(blockIdBytes);
string blockIdBase64 = Convert.ToBase64String(blockIdBytes);
Console.WriteLine(blockIdBase64);
Which prints out (in LINQPad):
Byte[] (4 items)
| 136 |
| 19 |
| 0 |
| 0 |
iBMAAA==
In Qt/C++ I tried a few approaches, all of them returning the wrong value.
const int a = 5000;
QByteArray b;
for(int i = 0; i != sizeof(a); ++i) {
    b.append((char)(a&(0xFF << i) >>i));
}
qDebug() << b.toBase64(); // "iIiIiA=="
qDebug() << QByteArray::number(a).toBase64(); // "NTAwMA=="
qDebug() << QString::number(a).toUtf8().toBase64(); // "NTAwMA=="
How can I get the same result as the C# version?
See my comment for the problem with your for loop: it shifts by one more bit each pass, but it should actually shift by 8 bits per byte. Personally, I prefer this to a loop:
b.append(static_cast<char>(a >> 24));
b.append(static_cast<char>((a >> 16) & 0xff));
b.append(static_cast<char>((a >> 8) & 0xff));
b.append(static_cast<char>(a & 0xff));
The code above is for network standard byte order (big endian). Flip the order of the four operations from last to first for little endian byte order.
I ended up doing the following:
QByteArray temp;
int blockId = 5000;
for(int i = 0; i != sizeof(blockId); i++) {
    temp.append((char)(blockId >> (i * 8)));  // the cast to char keeps only the low byte
}
qDebug() << temp.toBase64(); // "iBMAAA==" which is correct
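For reference, the C# side of the comparison can be reduced to a single line, since BitConverter follows the platform's byte order (little-endian on typical x86/x64 machines):
string blockIdBase64 = Convert.ToBase64String(BitConverter.GetBytes(5000));  // "iBMAAA=="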
I think this would be clearer, though some may consider it bad style...
int i = 0x01020304;
char (&bytes)[4] = (char (&)[4])i;
and you can access each byte directly with bytes[0], bytes[1], ... and do whatever you want with them.

C# How to represent an Int as four-byte integer

I'm trying to send the length of a byte array to my server, so that it knows how much data to read.
I get the length of the byte[] message array using int messageLength = message.Length.
How do I represent this integer messageLength as a four-byte integer?
Use BitConverter: BitConverter.GetBytes(message.Length);
Use the BitConverter.GetBytes(Int32) method.
You can use:
int length = message.Length;
byte[] bytes = BitConverter.GetBytes(length);
http://msdn.microsoft.com/en-us/library/system.bitconverter.aspx
message[0] = (byte)(length & 0xFF);
message[1] = (byte)((length >> 8) & 0xFF);
message[2] = (byte)((length >> 16) & 0xFF);
message[3] = (byte)((length >> 24) & 0xFF);
To recover it...
int length = message[0] | (message[1] << 8) | (message[2] << 16) | (message[3] << 24);
A byte in C# is 8 bits; 8 bits * 4 bytes = 32 bits, so you want to use a System.Int32.
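Putting it together, a minimal sketch (assuming message holds the payload and both sides agree on little-endian order) that prefixes the payload with its four-byte length before sending:
byte[] framed = new byte[4 + message.Length];
BitConverter.GetBytes(message.Length).CopyTo(framed, 0);  // 4-byte length prefix
message.CopyTo(framed, 4);                                // payload follows the prefix
// the server reads 4 bytes, decodes the length, then reads exactly that many bytes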
