Convert floating point byte array to decimal - c#

I receive a 4-byte array over BLE and I am interested in converting the last two bytes to a floating-point decimal representation (float, double, etc.) in C#. The original value has the following format:
1 sign bit
4 integer bits
11 fractional bits
My first attempt was using BitConverter, but I am confused by the procedure.
Example: I receive a byte array values where values[2] = 143 and values[3] = 231. Those two bytes combined represent a value in the format specified above. I am not sure, but I think the layout would be like this:
SIGN INT FRACTION
0 0000 00000000000
Furthermore, since the value comes in two bytes, I tried to use BitConverter.ToString to get the hex representation and then concatenate the bytes. It is at this point where I am not sure how to continue.
Thank you for your help!

Your question lacks some information; however, the format seems clear. E.g. for two given bytes such as
byte[] arr = new byte[] { 123, 45 };
we have
01111011 00101101
siiiifff ffffffff

1 sign bit          = 0 (positive)
4 integer bits      = 1111 (15)
11 fractional bits  = 011 00101101 = 01100101101 (813)
let's get all the parts:
bool isNegative = (arr[0] & 0b10000000) != 0;
int intPart = (arr[0] & 0b01111000) >> 3;
int fracPart = ((arr[0] & 0b00000111) << 8) | arr[1];
Now we should combine all these parts into a number. Here there is some ambiguity: there are several different ways to interpret the fractional part. One of them is
result = [sign] * (intPart + fracPart / 2**Number_Of_Frac_Bits)
In our case { 123, 45 } we have
sign = 0 (positive)
intPart = 1111b = 15
fracPart = 01100101101b / (2**11) = 813 / 2048 = 0.39697265625
result = 15.39697265625
Implementation:
// double result = (isNegative ? -1 : 1) * (intPart + fracPart / 2048.0);
Single result = (isNegative ? -1 : 1) * (intPart + fracPart / 2048f);
Console.Write(result);
Outcome:
15.39697
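For reference, feeding the bytes from the original question, { 143, 231 }, through the same code (sign-bit interpretation) gives:
byte[] arr = new byte[] { 143, 231 };                   // values[2], values[3] from the question
bool isNegative = (arr[0] & 0b10000000) != 0;           // true: the sign bit is set
int intPart = (arr[0] & 0b01111000) >> 3;               // 1
int fracPart = ((arr[0] & 0b00000111) << 8) | arr[1];   // 2023
Single result = (isNegative ? -1 : 1) * (intPart + fracPart / 2048f);
Console.Write(result);                                  // -1.987793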
Edit: In case you actually don't have a sign bit, but use 2's complement for the integer part (which is unusual - double and float use a sign bit; 2's complement is for integer types like Int32, Int16, etc. - this is just a guess, since 2's complement would be unusual in this context), you can do something like
int intPart = (arr[0]) >> 3;
int fracPart = ((arr[0] & 0b00000111) << 8) | arr[1];
if (intPart >= 16)
intPart = -((intPart ^ 0b11111) + 1);
Single result = (intPart + fracPart / 2048f);
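For the question's bytes { 143, 231 } this variant gives -14.01221, the same value as the fixed-point Int16 interpretation below, since both treat the 16 bits as two's complement:
byte[] arr = new byte[] { 143, 231 };
int intPart = arr[0] >> 3;                              // 17
int fracPart = ((arr[0] & 0b00000111) << 8) | arr[1];   // 2023
if (intPart >= 16)
    intPart = -((intPart ^ 0b11111) + 1);               // 17 -> -15
Single result = intPart + fracPart / 2048f;
Console.Write(result);                                  // -14.01221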
Another possibility is that you in fact have an integer value (Int16) with a fixed point; in this case the conversion is easy:
We obtain the Int16 as usual
Then apply the fixed point by dividing by 2048.0:
Code:
// -14.01221 for { 143, 231 }
Single result = unchecked((Int16) ((arr[0] << 8) | arr[1])) / 2048f;
Or
Single result = BitConverter.ToInt16(BitConverter.IsLittleEndian
    ? arr.Reverse().ToArray()
    : arr, 0) / 2048f;

Related

Convert Two Bytes to Half Float In C#

How can I convert two bytes (Hi, Lo) into this custom half float implementation using C#? It's slightly different to other implementations, as it has 9 bits of precision. I've tried modifying this answer https://stackoverflow.com/a/37761168/14515159 to suit the spec below, but it's not converting correctly.
From a protocol sniffer application, HO = 0x41, LO = 0xC5 should give a value of 3.7695313.
However, I have not been able to confirm this. The outputs should be in the range of 0 to 5 and all my conversions are way off.
public static float toTwoByteFloat(byte HO, byte LO)
{
    var intVal = BitConverter.ToInt32(new byte[] { HO, LO, 0, 0 }, 0);
    int mant = intVal & 0x07ff;
    int exp = intVal & 0x7e00;
    if (exp == 0x3c00) exp = 0x3fc00;
    else if (exp != 0)
    {
        exp += 0x1c000;
        if (mant == 0 && exp > 0x1c400)
            return BitConverter.ToSingle(BitConverter.GetBytes((intVal & 0x8000) << 16 | exp << 14 | 0x7ff), 0);
    }
    else if (mant != 0)
    {
        exp = 0x1c400;
        do
        {
            mant <<= 1;
            exp -= 0x400;
        } while ((mant & 0x400) == 0);
        mant &= 0x7ff;
    }
    return BitConverter.ToSingle(BitConverter.GetBytes((intVal & 0x8000) << 16 | (exp | mant) << 14), 0);
}
Below is information from custom protocol that I'm working with.
Note: in this protocol document, they refer to Bit 0 as the MSB.
Half Float
This format can represent numbers from 2^31 to 2^-31 with 9 bits of precision. A zero is represented with a mantissa and exponent of zero. The exponent is biased with 31 and the mantissa assumes a leftmost leading 1. This definition exactly matches the IEEE floating point formats, except the range of the fields has been reduced.
Converting from IEEE-754 32-bit format to the 16-bit format is done by splitting the IEEE number into its component parts: sign, exponent, and mantissa. The mantissa is right shifted 14 bits, losing that resolution. The exponent is then unbiased (IEEE-754 uses a bias of 127) and re-biased with 31. The 16-bit format is then assembled from the original sign bit, the new exponent, and the new mantissa. During the re-biasing of the exponent, underflow and overflow are detected. Underflows should result in positive zero. Overflows should result in the maximum possible exponent.
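A minimal sketch of that 32-bit to 16-bit conversion, assuming the 1 sign / 6 exponent / 9 mantissa bit layout; the method name is illustrative and the under/overflow handling is my reading of the document's intent:
static ushort ToCustomHalf(float value)
{
    int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
    int sign = (bits >> 31) & 0x1;
    int exponent = ((bits >> 23) & 0xFF) - 127 + 31;   // un-bias from 127, re-bias with 31
    int mantissa = (bits >> 14) & 0x1FF;               // keep the top 9 mantissa bits
    if (exponent <= 0)                                 // underflow -> positive zero
        return 0;
    if (exponent > 0x3F)                               // overflow -> maximum possible exponent
    {
        exponent = 0x3F;
        mantissa = 0;
    }
    return (ushort)((sign << 15) | (exponent << 9) | mantissa);
}
// e.g. ToCustomHalf(3.76953125f) == 0x41C5, matching the sniffer bytes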
Working with a colleague, we came up with the solution below that converts correctly
public static float Parse16BitFloat(byte HI, byte LO)
{
    // Program assumes ints are at least 16 bits
    int fullFloat = ((HI << 8) | LO);
    int exponent = (HI & 0b01111110) >> 1; // minor optimisation can be placed here
    int mant = fullFloat & 0x01FF;
    // Special values
    if (exponent == 0b00111111) // If using constants, shift right by 1
    {
        // Check for NaN or inf
        return mant != 0 ? float.NaN :
            ((HI & 0x80) == 0 ? float.PositiveInfinity : float.NegativeInfinity);
    }
    else // normal/denormal values: pad numbers
    {
        exponent = exponent - 31 + 127;
        mant = mant << 14;
        Int32 finalFloat = (HI & 0x80) << 24 | (exponent << 23) | mant;
        return BitConverter.ToSingle(BitConverter.GetBytes(finalFloat), 0);
    }
}
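A quick check against the sniffer value from the question:
float v = Parse16BitFloat(0x41, 0xC5);
Console.WriteLine(v);   // 3.76953125, which matches the expected 3.7695313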

How to convert a byte to 4 bytes to be used as a color in C#?

I need to split a byte into 2-bit values so they can be used as a color.
byte input;
byte r = // the first and second bits of input;
byte g = // the third and fourth bits of input;
byte b = // the fifth and sixth bits of input;
Color32 output = new Color32(r,g,b);
I tried working with bitwise operators, but I am not very good at them.
You can use bitwise operators.
byte input = ...;
int r = input & 0x3;           // bits 0x1 + 0x2
int g = (input & 0xc) >> 2;    // bits 0x4 + 0x8
int b = (input & 0x30) >> 4;   // bits 0x10 + 0x20
The bitwise operator & makes a bitwise and on the input. >> shifts a number to the right by the given number of bits.
Or, if by "first and second" bits you mean the highest two bits, you can get them as follows:
r = input >> 6;
g = (input >> 4) & 0x3;
b = (input >> 2) & 0x3;
You probably want 11 binary to map to 255 and 00 to map to 0 to get a maximum spread in color values.
You can get that spread by multiplying the 2-bit color value by 85: 00b stays 0, 01b becomes 85, 10b becomes 170 and 11b becomes 255.
So the code would look something like this
byte input = 0xfc;
var r = ((input & 0xc0) >> 6) * 85;
var g = ((input & 0x30) >> 4) * 85;
var b = ((input & 0x0c) >> 2) * 85;
Console.WriteLine($"{r} {g} {b}");
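If the target is something like Unity's Color32 from the question (which also expects an alpha byte; an assumption here, since the question omits it), the scaled values just need casting back to byte:
Color32 output = new Color32((byte)r, (byte)g, (byte)b, 255);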

Get lower nibble of a byte and replace Hex value

I need to replace the hex value 0xA with a 0 and get only the lower nibble of a hex value.
This is what I have at the moment:
private void ParseContactID(int ContactID_100, int ContactID_10, int ContactID_1)
{
    // (A=0)
    string Hunderds = ContactID_100.ToString("X").Replace((char)0xA, '0');
    string Dozens = ContactID_10.ToString("X").Replace((char)0xA, '0');
    string Singles = ContactID_1.ToString("X").Replace((char)0xA, '0');
    int HunderdsLower = StringToHex(Hunderds) & 0x0F;
    int DozensLower = StringToHex(Dozens) & 0x0F;
    int SinglesLower = StringToHex(Singles) & 0x0F;
}
Should I & with 0x0F to get the lower nibble or 0xF0?
And is there a way to replace 0xA without converting it to a string first?
I don't think that the code you currently have does what you think it does - (char)0xA is a line feed, not the letter 'A', so it won't be replacing anything (since ToString("X") won't produce a line feed). As you've suspected, however, the string conversion can be done away with completely.
To get the lower nibble, you need to AND with 0x0F. As far as the conversion of 0xA to 0, there are a couple of options, but if you can be sure that the lower nibble will only contain values 0x0 - 0xA (0 - 10), then you can use the modulo operator (%), which, when we modulo by 10, will convert 0xA to 0 whilst leaving values 0 - 9 unchanged:
var hundreds = (ContactID_100 & 0x0F) % 10;
I don't see any reason for you to use string conversion at all. This could simply be:
int hundreds = (ContactID_100 & 0x0F) % 10;
int dozens = (ContactID_10 & 0x0F) % 10; // I wonder why "dozens" instead of "tens"... ?
int singles = (ContactID_1 & 0x0F) % 10;
int contactId = hundreds * 100 + dozens * 10 + singles; // assuming "dozens" is "tens"...
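A quick sanity check of the modulo approach (the sample digit values are made up; in Contact ID, 0xA stands for the digit 0):
int ContactID_100 = 0x1, ContactID_10 = 0xA, ContactID_1 = 0x5;
int hundreds = (ContactID_100 & 0x0F) % 10;   // 1
int dozens = (ContactID_10 & 0x0F) % 10;      // 0xA -> 0
int singles = (ContactID_1 & 0x0F) % 10;      // 5
int contactId = hundreds * 100 + dozens * 10 + singles;   // 105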
To get the lower nibble, you just have to mask away the top nibble with & 0x0F.
To make A = 0, modular division can work. Make sure to put () around the & statement, since the % has higher precedence than the &.
If you prefer to not use the % operator, an if check may be faster:
int hundreds = ContactID_100 & 0x0F;
int dozens = ContactID_10 & 0x0F;
int singles = ContactID_1 & 0x0F;
if (hundreds == 10) { hundreds = 0; } // since 0xA is 10
if (dozens == 10) { dozens = 0; }
if (singles == 10) { singles = 0; }

Combine 2 numbers in a byte

I have two numbers (going from 0-9) and I want to combine them into 1 byte.
Number 1 would take bit 0-3 and Number 2 has bit 4-7.
Example : I have number 3 and 4.
3 = 0011 and 4 is 0100.
Result should be 0011 0100.
How can I make a byte with these binary values?
This is what I currently have :
public Byte CombinePinDigit(int DigitA, int DigitB)
{
    BitArray Digit1 = new BitArray(Convert.ToByte(DigitA));
    BitArray Digit2 = new BitArray(Convert.ToByte(DigitB));
    BitArray Combined = new BitArray(8);
    Combined[0] = Digit1[0];
    Combined[1] = Digit1[1];
    Combined[2] = Digit1[2];
    Combined[3] = Digit1[3];
    Combined[4] = Digit2[0];
    Combined[5] = Digit2[1];
    Combined[6] = Digit2[2];
    Combined[7] = Digit2[3];
}
With this code I get an ArgumentOutOfRangeException.
Forget all that bitarray stuff.
Just do this:
byte result = (byte)(number1 | (number2 << 4));
And to get them back:
int number1 = result & 0xF;
int number2 = (result >> 4) & 0xF;
This works by using the << and >> bit-shift operators.
When creating the byte, we are shifting number2 left by 4 bits (which fills the lowest 4 bits of the result with 0) and then we use | to OR those bits with the unshifted bits of number1.
When restoring the original numbers, we reverse the process. We shift the byte right by 4 bits, which puts the original number2 back into its original position, and then we use & 0xF to mask off any other bits.
The bits for number1 will already be in the right position (since we never shifted them), so we just need to mask off the other bits, again with & 0xF.
You should verify that the numbers are in the range 0..9 before doing that, or (if you don't care if they're out of range) you can constrain them to 0..15 by anding with 0xF:
byte result = (byte)((number1 & 0xF) | ((number2 & 0xF) << 4));
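A quick round trip with the digits from the question (note that this expression puts number1 in the low nibble and number2 in the high nibble):
byte result = (byte)((3 & 0xF) | ((4 & 0xF) << 4));   // 0x43
int number1 = result & 0xF;                           // 3
int number2 = (result >> 4) & 0xF;                    // 4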
this should basically work:
byte Pack(int a, int b)
{
    return (byte)(a << 4 | b & 0xF);
}

void Unpack(byte val, out int a, out int b)
{
    a = val >> 4;
    b = val & 0xF;
}
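Usage with the example digits from the question:
byte packed = Pack(3, 4);                   // 0x34 == 0011 0100, matching the example
Unpack(packed, out int a, out int b);
Console.WriteLine($"{packed:X2} {a} {b}");  // prints "34 3 4"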

Converting an Int to a BCD byte array [duplicate]

I want to convert an int to a byte[2] array using BCD.
The int in question will come from DateTime representing the Year and must be converted to two bytes.
Is there any pre-made function that does this or can you give me a simple way of doing this?
example:
int year = 2010
would output:
byte[2]{0x20, 0x10};
static byte[] Year2Bcd(int year) {
    if (year < 0 || year > 9999) throw new ArgumentException();
    int bcd = 0;
    for (int digit = 0; digit < 4; ++digit) {
        int nibble = year % 10;
        bcd |= nibble << (digit * 4);
        year /= 10;
    }
    return new byte[] { (byte)((bcd >> 8) & 0xff), (byte)(bcd & 0xff) };
}
Beware that you asked for a big-endian result; that's a bit unusual.
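A quick check with the year from the question:
byte[] bcdYear = Year2Bcd(2010);   // { 0x20, 0x10 }, as requested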
Use this method.
public static byte[] ToBcd(int value){
    if(value<0 || value>99999999)
        throw new ArgumentOutOfRangeException("value");
    byte[] ret=new byte[4];
    for(int i=0;i<4;i++){
        ret[i]=(byte)(value%10);
        value/=10;
        ret[i]|=(byte)((value%10)<<4);
        value/=10;
    }
    return ret;
}
This is essentially how it works.
If the value is less than 0 or greater than 99999999, the value won't fit in four bytes. More formally, if the value is less than 0 or is 10^(n*2) or greater, where n is the number of bytes, the value won't fit in n bytes.
For each byte:
Set that byte to the remainder of the value divided by 10. (This will place the last digit in the low nibble [half-byte] of the current byte.)
Divide the value by 10.
Add 16 times the remainder of the value-divided-by-10 to the byte. (This will place the now-last digit in the high nibble of the current byte.)
Divide the value by 10.
(One optimization is to set every byte to 0 beforehand -- which is implicitly done by .NET when it allocates a new array -- and to stop iterating when the value reaches 0. This latter optimization is not done in the code above, for simplicity. Also, if available, some compilers or assemblers offer a divide/remainder routine that allows retrieving the quotient and remainder in one division step, an optimization which is not usually necessary though.)
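For comparison with the question's example, note that this method returns four bytes with the least significant digit pair first:
byte[] bcd = ToBcd(2010);   // { 0x10, 0x20, 0x00, 0x00 } (little-endian, unlike the { 0x20, 0x10 } in the question)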
Here's a terrible brute-force version. I'm sure there's a better way than this, but it ought to work anyway.
int digitOne = year / 1000;
int digitTwo = (year - digitOne * 1000) / 100;
int digitThree = (year - digitOne * 1000 - digitTwo * 100) / 10;
int digitFour = year - digitOne * 1000 - digitTwo * 100 - digitThree * 10;
byte[] bcdYear = new byte[] { (byte)(digitOne << 4 | digitTwo), (byte)(digitThree << 4 | digitFour) };
The sad part about it is that fast binary to BCD conversions are built into the x86 microprocessor architecture, if you could get at them!
Here is a slightly cleaner version than Jeffrey's:
static byte[] IntToBCD(int input)
{
    if (input > 9999 || input < 0)
        throw new ArgumentOutOfRangeException("input");
    int thousands = input / 1000;
    int hundreds = (input -= thousands * 1000) / 100;
    int tens = (input -= hundreds * 100) / 10;
    int ones = (input -= tens * 10);
    byte[] bcd = new byte[] {
        (byte)(thousands << 4 | hundreds),
        (byte)(tens << 4 | ones)
    };
    return bcd;
}
Maybe a simple parse function containing this loop:
int i = 0;
while (id > 0)
{
    int twodigits = id % 100;                              // need 2 digits per byte
    arr[i] = (byte)(twodigits % 10 + twodigits / 10 * 16); // first digit in the low 4 bits, second digit shifted left by 4 bits
    id /= 100;
    i++;
}
A more general solution:
private IEnumerable<Byte> GetBytes(Decimal value)
{
    Byte currentByte = 0;
    Boolean odd = true;
    while (value > 0)
    {
        if (odd)
            currentByte = 0;
        Decimal rest = value % 10;
        value = (value - rest) / 10;
        currentByte |= (Byte)(odd ? (Byte)rest : (Byte)((Byte)rest << 4));
        if (!odd)
            yield return currentByte;
        odd = !odd;
    }
    if (!odd)
        yield return currentByte;
}
Same version as Peter O. but in VB.NET
Public Shared Function ToBcd(ByVal pValue As Integer) As Byte()
    If pValue < 0 OrElse pValue > 99999999 Then Throw New ArgumentOutOfRangeException("value")
    Dim ret As Byte() = New Byte(3) {} 'All bytes are init with 0's
    For i As Integer = 0 To 3
        ret(i) = CByte(pValue Mod 10)
        pValue = Math.Floor(pValue / 10.0)
        ret(i) = ret(i) Or CByte((pValue Mod 10) << 4)
        pValue = Math.Floor(pValue / 10.0)
        If pValue = 0 Then Exit For
    Next
    Return ret
End Function
The trick here is to be aware that simply using pValue /= 10 will round the value; if, for instance, the argument is "16", the first part of the byte will be correct, but the result of the division will be 2 (as 1.6 will be rounded up). Therefore I use the Math.Floor method.
I made a generic routine posted at IntToByteArray that you could use like:
var yearInBytes = ConvertBigIntToBcd(2010, 2);
static byte[] IntToBCD(int input) {
    byte[] bcd = new byte[] {
        (byte)(input >> 8),
        (byte)(input & 0x00FF)
    };
    return bcd;
}
