FTDI 24-bit two's complement implementation in C#

I am a beginner at C# and would like some advice on how to solve the following problem:
My code reads 3 bytes from an ADC through an FTDI IC and calculates a value. The problem is that I sometimes get readings below zero (less than 0000 0000 0000 0000 0000 0000), and my calculated value jumps to a big number. I need to implement two's complement, but I don't know how.
byte[] readData1 = new byte[3];
ftStatus = myFtdiDevice.Read(readData1, numBytesAvailable, ref numBytesRead); // read data from FTDI
int vrednost1 = (readData1[2] << 16) | (readData1[1] << 8) | readData1[0]; // combine bytes, LSB first
double v = (((4.096 / 16777216) * vrednost1) / 4) * 250; // calculate (double, not int: the expression is floating-point)

Convert the data left-justified, so that the ADC's sign bit lands in the position the processor treats as the sign bit of an int. Then shift right again, which automatically performs sign extension (>> on a signed int is an arithmetic shift in C#).
int left_justified = (readData[2] << 24) | (readData[1] << 16) | (readData[0] << 8);
int result = left_justified >> 8;
Equivalently, one can mark the byte containing a sign bit as signed, so the processor will perform sign extension:
int result = (unchecked((sbyte)readData[2]) << 16) | (readData[1] << 8) | readData[0];
The second approach with a cast only works if the sign bit is the top bit of one of the bytes. The left-justification approach works for arbitrary widths, such as 18-bit two's-complement readings, where the cast to sbyte wouldn't do the job:
int left_justified18 = (readData[2] << 30) | (readData[1] << 22) | (readData[0] << 14);
int result18 = left_justified18 >> 14;
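The left-justify trick generalizes to any width. As a sketch (the helper name SignExtend is mine, not from the original answer):

```csharp
using System;

static class BitUtils
{
    // Sign-extend an n-bit two's-complement value stored in the low n bits of raw.
    public static int SignExtend(int raw, int bits)
    {
        int shift = 32 - bits;          // left-justify so the value's sign bit lands at bit 31
        return (raw << shift) >> shift; // arithmetic right shift on int performs sign extension
    }
}
```

For the 24-bit case, SignExtend(vrednost1, 24) replaces the manual left-justify/shift pair, and SignExtend(raw, 18) covers the 18-bit reading with no extra code.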

Well, the only subtle point here is endianness (big or little). Assuming that data[0] is the least significant byte, you can write:
private static int FromInt24(byte[] data) {
    int result = (data[2] << 16) | (data[1] << 8) | data[0];
    return data[2] < 128
        ? result                        // positive number
        : -((~result & 0xFFFFFF) + 1);  // negative number; its absolute value is the two's complement
}
Demo:
byte[][] tests = new byte[][] {
    new byte[] { 255, 255, 255 },
    new byte[] { 44, 1, 0 },
};

string report = string.Join(Environment.NewLine, tests
    .Select(test => $"[{string.Join(", ", test.Select(x => $"0x{x:X2}"))}] == {FromInt24(test)}"));

Console.Write(report);
Outcome:
[0xFF, 0xFF, 0xFF] == -1
[0x2C, 0x01, 0x00] == 300
If your data is big-endian (e.g. 300 == { 0, 1, 44 }), you have to swap the byte indices:
private static int FromInt24(byte[] data) {
    int result = (data[0] << 16) | (data[1] << 8) | data[2];
    return data[0] < 128
        ? result
        : -((~result & 0xFFFFFF) + 1);
}

Related

Get integer from last 12 bits of 2 bytes in C#

I suspect this is an easy one.
I need to get a number from the first 4 bits and another number from the last 12 bits
of 2 bytes.
So here is what I have but it doesn't seem to be right:
byte[] data = new byte[2];
//assume byte array contains data
var _4bit = data[0] >> 4;
var _12bit = data[0] >> 8 | data[1] & 0xff;
data[0] >> 8 is 0. Remember that your data is defined as byte[], so each item holds 8 bits; shifting right by 8 cuts off ALL the bits of data[0].
Instead, you want to take the lowest 4 bits of that byte with a bitwise AND (00001111 = 0x0F) and then shift them leftwards as needed.
So try this:
var _4bit = data[0] >> 4;
var _12bit = ((data[0] & 0x0F) << 8) | (data[1] & 0xff);
It's also worth noting that the trailing & 0xff is not needed, as data[1] is already a byte and is zero-extended when promoted to int.
On bits, step by step:
byte[] data = { aaaabbbb, cccccccc };

var _4bit = data[0] >> 4
          = aaaabbbb >> 4
          = 0000aaaa

var _12bit = ((data[0] & 0x0F) << 8) | (data[1] & 0xff)
           = ((aaaabbbb & 0x0F) << 8) | (cccccccc & 0xff)
           = (0000bbbb << 8) | cccccccc
           = 0000bbbb00000000 | cccccccc
           = 0000bbbbcccccccc
BTW: note that the results of the & and | operators are typed as int, so 32 bits; I've omitted the leading zeroes and written everything as 8 bits only to keep it brief.
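The extraction above, wrapped into a helper so the round trip is easy to verify (a sketch; the method name Split is mine):

```csharp
static class Bits12
{
    // Split two bytes into the top 4-bit field and the low 12-bit field.
    public static (int High4, int Low12) Split(byte b0, byte b1)
    {
        int high4 = b0 >> 4;                 // aaaa
        int low12 = ((b0 & 0x0F) << 8) | b1; // bbbbcccccccc
        return (high4, low12);
    }
}
```

For example, Split(0xAB, 0xCD) yields (0xA, 0xBCD): the top nibble of the first byte, and the remaining 12 bits.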

Generate a unique key by encoding 4 smaller numeric primitive types into a long (Int64)

I have the following method which should create a unique long for any combination of the provided params:
private static long GenerateKey(byte sT, byte srcT, int nId, ushort aId)
{
    var nBytes = BitConverter.GetBytes(nId);
    var aBytes = BitConverter.GetBytes(aId);
    var byteArray = new byte[8];
    Buffer.BlockCopy(new[] { sT }, 0, byteArray, 7, 1);
    Buffer.BlockCopy(new[] { srcT }, 0, byteArray, 6, 1);
    Buffer.BlockCopy(aBytes, 0, byteArray, 4, 2);
    Buffer.BlockCopy(nBytes, 0, byteArray, 0, 4);
    var result = BitConverter.ToInt64(byteArray, 0);
    return result;
}
So I end up with:
  1    2    3    4    5    6    7    8
-------------------------------------
|byte|byte| ushort  |      int      |
-------------------------------------
Can this be done with bitwise operations? I tried the below, but it seemed to generate the same number for different values:
var hash = ((byte)sT << 56) ^ ((byte)srcT<< 48) ^ (aId << 32) ^ nId;
What went wrong here is that something like this
((byte)sT << 56)
does not do what you want. What it actually does is cast sT to byte (which it already is); then it's implicitly converted to an int (because shifting is integer math), and then it's shifted left by 24, not 56, because shift counts are masked to be less than the width of the left operand (56 & 31 == 24 for a 32-bit int).
So actually instead of casting to a narrow type, you should cast to a wide type, like this:
((long)sT << 56)
Also, note that implicitly converting an int to long sign-extends it, which means that if nId is negative, you would end up complementing all the other fields (xor with all ones is complement).
So try this:
((long)sT << 56) | ((long)srcT << 48) | ((long)aId << 32) | (nId & 0xffffffffL)
I've also changed the xors to ors; or is more idiomatic for combining non-overlapping fields.
In the following line:
var hash = ((byte)sT << 56) ^ ((byte)srcT<< 48) ^ (aId << 32) ^ nId;
You are shifting by more bits than the types can hold. A byte (promoted to int) cannot usefully be shifted by 56, and a ushort cannot usefully be shifted by 32: the counts are masked and the fields end up colliding. Just cast everything to long and you'll get the expected result:
var hash = ((long)sT << 56)
^ ((long)srcT << 48)
^ ((long)aId << 32)
^ (uint)nId; // Cast to uint is required to prevent sign extension
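For completeness, decoding the fields back out just reverses the shifts and truncates each field to its width (a sketch; DecodeKey is hypothetical, assuming the layout above):

```csharp
static class Keys
{
    // Pack the four fields into one long: sT in the top byte, then srcT, aId, nId.
    public static long GenerateKey(byte sT, byte srcT, int nId, ushort aId) =>
        ((long)sT << 56) | ((long)srcT << 48) | ((long)aId << 32) | (uint)nId;

    // Reverse the packing: shift each field back down and truncate to its width.
    public static (byte ST, byte SrcT, ushort AId, int NId) DecodeKey(long key) =>
        ((byte)(key >> 56), (byte)(key >> 48), (ushort)(key >> 32), (int)key);
}
```

The (uint)nId cast matters on the way in (it blocks sign extension), and the (int)key truncation restores a negative nId correctly on the way out.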

Convert 3 Hex (byte) to Decimal

How do I take the hex 0A 25 10 A2 and get the end result of 851.00625? This must be multiplied by 0.000005. I have tried the following code without success:
byte oct6 = 0x0A;
byte oct7 = 0x25;
byte oct8 = 0x10;
byte oct9 = 0xA2;
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 32))) * 0.000005M;
I am not getting 851.00625 as the BaseFrequency.
oct6 is being shifted 8 bits too far (32 instead of 24)
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24))) * 0.000005M;
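Working the corrected expression through as a quick check (int arithmetic is exact here, and decimal keeps the product exact):

```csharp
byte oct6 = 0x0A, oct7 = 0x25, oct8 = 0x10, oct9 = 0xA2;
int raw = oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24); // 0x0A2510A2 == 170201250
decimal baseFrequency = raw * 0.000005M;                    // 851.00625
```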

Combine 2 numbers in a byte

I have two numbers (going from 0-9) and I want to combine them into 1 byte.
Number 1 would take bit 0-3 and Number 2 has bit 4-7.
Example : I have number 3 and 4.
3 = 0011 and 4 is 0100.
Result should be 0011 0100.
How can I make a byte with these binary values?
This is what I currently have :
public Byte CombinePinDigit(int DigitA, int DigitB)
{
    BitArray Digit1 = new BitArray(Convert.ToByte(DigitA));
    BitArray Digit2 = new BitArray(Convert.ToByte(DigitB));
    BitArray Combined = new BitArray(8);
    Combined[0] = Digit1[0];
    Combined[1] = Digit1[1];
    Combined[2] = Digit1[2];
    Combined[3] = Digit1[3];
    Combined[4] = Digit2[0];
    Combined[5] = Digit2[1];
    Combined[6] = Digit2[2];
    Combined[7] = Digit2[3];
}
With this code I get an ArgumentOutOfRangeException.
Forget all that bitarray stuff.
Just do this:
byte result = (byte)(number1 | (number2 << 4));
And to get them back:
int number1 = result & 0xF;
int number2 = (result >> 4) & 0xF;
This works by using the << and >> bit-shift operators.
When creating the byte, we are shifting number2 left by 4 bits (which fills the lowest 4 bits of the results with 0) and then we use | to or those bits with the unshifted bits of number1.
When restoring the original numbers, we reverse the process. We shift the byte right by 4 bits which puts the original number2 back into its original position and then we use & 0xF to mask off any other bits.
The bits for number1 will already be in the right position (since we never shifted them), so we just need to mask off the other bits, again with & 0xF.
You should verify that the numbers are in the range 0..9 before doing that, or (if you don't care if they're out of range) you can constrain them to 0..15 by anding with 0xF:
byte result = (byte)((number1 & 0xF) | ((number2 & 0xF) << 4));
this should basically work:
byte Pack(int a, int b)
{
    return (byte)(a << 4 | b & 0xF);
}

void Unpack(byte val, out int a, out int b)
{
    a = val >> 4;
    b = val & 0xF;
}
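Checking the packing against the example in the question (3 and 4 should give 0011 0100, i.e. 0x34); a sketch using the same logic as the helpers above:

```csharp
static byte Pack(int a, int b) => (byte)(a << 4 | b & 0xF);

byte packed = Pack(3, 4); // 0x34 == 0011 0100
int a = packed >> 4;      // 3, recovered from the high nibble
int b = packed & 0xF;     // 4, recovered from the low nibble
```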

C# Bitwise Operators

Could someone please explain what the following code does.
private int ReadInt32(byte[] il, ref int position)
{
    return il[position++] | (il[position++] << 8) | (il[position++] << 0x10) | (il[position++] << 0x18);
}
I'm not sure I understand how the bitwise operators in this method work, could some please break it down for me?
The integer is given as a byte array.
Each byte is shifted left by 0/8/16/24 places, and the shifted values are OR'ed together (equivalent to summing them, since the bits don't overlap) to rebuild the integer value.
This is an Int32 in hexadecimal format:
0x10203040
It is represented as following byte array (little endian architecture, so bytes are in reverse order):
[0x40, 0x30, 0x20, 0x10]
In order to build the integer back from the array, each element is shifted i.e. following logic is performed:
a = 0x40 = 0x00000040
b = 0x30 << 8 = 0x00003000
c = 0x20 << 16 = 0x00200000
d = 0x10 << 24 = 0x10000000
then these values are OR'ed together:
int result = a | b | c | d;
this gives:
0x00000040 |
0x00003000 |
0x00200000 |
0x10000000 |
------------------
0x10203040
Think of it like this:
var i1 = il[position];
var i2 = il[position + 1] << 8;  // << 8 is equivalent to * 256
var i3 = il[position + 2] << 16;
var i4 = il[position + 3] << 24;
position = position + 4;
return i1 | i2 | i3 | i4;
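On a little-endian machine, BitConverter.ToInt32 produces the same value from the same array, which makes a handy cross-check (a sketch; BitConverter follows the hardware's byte order, whereas the shift version is always little-endian):

```csharp
using System;

byte[] il = { 0x40, 0x30, 0x20, 0x10 };
int fromShifts = il[0] | (il[1] << 8) | (il[2] << 16) | (il[3] << 24); // 0x10203040
int fromConverter = BitConverter.ToInt32(il, 0); // same value on little-endian hardware
```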
