Converting an amount to a 4 byte array - c#

I haven't ever had to deal with this before. I need to convert a sale amount (48.58) to a 4 byte array and use network byte order. The code below is how I am doing it, but it is wrong and I am not understanding why. Can anyone help?
float saleamount = 48.58F;
byte[] data2 = BitConverter.GetBytes(saleamount).Reverse().ToArray();
What I am getting is 66 66 81 236 in the array. I am not certain what it should be though. I am interfacing with a credit card terminal and need to send the amount in "4 bytes, fixed length, max value is 0xffffffff, use network byte order"

The first question you should ask is, "What data type?" IEEE single-precision float? Two's-complement integer? If it's an integer, what is the implied scale? Is $48.58 represented as 4,858 or 485,800?
It's not uncommon for monetary values to be represented by an integer with an implied scale of either +2 or +4. In your example, $48.58 would be represented as the integer value 4858 or 0x000012FA.
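For illustration, here's a minimal sketch assuming the terminal wants a 32-bit integer with an implied scale of +2, sent in network byte order; the variable names are mine:
// Sketch only: assumes an implied scale of +2 (cents) and big-endian output.
decimal saleAmount = 48.58m;
uint cents = (uint)decimal.Round(saleAmount * 100m);   // 4858 == 0x000012FA
byte[] data = BitConverter.GetBytes(cents);
if (BitConverter.IsLittleEndian)
    Array.Reverse(data);          // force network (big-endian) byte order
// data is now { 0x00, 0x00, 0x12, 0xFA }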
Once you've established what they actually want, use an endian-aware BitConverter or BinaryWriter to create it. Jon Skeet's MiscUtil, for instance, offers:
EndianBinaryReader
EndianBinaryWriter
BigEndianBitConverter
LittleEndianBitConverter
There are other implementations out there as well. See my answer to the question "Helpful byte array extensions to handle BigEndian data" for links to some.
Code you don't write is code you don't have to maintain.

Network byte order is essentially a synonym for big-endian, hence (as itsme86 already mentioned) you can check BitConverter.IsLittleEndian:
float saleamount = 48.58F;
byte[] data2 = BitConverter.IsLittleEndian
? BitConverter.GetBytes(saleamount).Reverse().ToArray()
: BitConverter.GetBytes(saleamount);
But if you didn't know this, you are probably already using some protocol that handles it for you.
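If you're on .NET 5 or later, System.Buffers.Binary.BinaryPrimitives can write the big-endian form directly; a small sketch, still assuming the terminal really wants an IEEE single-precision float:
// Requires .NET 5+ for WriteSingleBigEndian.
using System.Buffers.Binary;
float saleamount = 48.58F;
byte[] data2 = new byte[4];
BinaryPrimitives.WriteSingleBigEndian(data2, saleamount);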

Related

Why does BitConverter seemingly return incorrect results when converting floats and bytes?

I'm working in C# and attempting to pack four bytes into a float (the context is game development, where an RGBA color is packed into a single value). To do this, I'm using BitConverter, but certain conversions seem to result in incorrect bytes. Take the following example (using bytes 0, 0, 129, 255):
var before = new [] { (byte)0, (byte)0, (byte)129, (byte)255 };
var f = BitConverter.ToSingle(before, 0); // Results in NaN
var after = BitConverter.GetBytes(f); // Results in bytes 0, 0, 193, 255
Using https://www.h-schmidt.net/FloatConverter/IEEE754.html, I verified that the four bytes I started with (0, 0, 129, 255, equivalent to binary 00000000000000001000000111111111) represents the floating-point value 4.66338115943e-41. By flipping the endianness (binary 11111111100000010000000000000000), I get NaN (which matches f in the code above). But when I convert that float back to bytes, I get 0, 0, 193, 255 (note 193 when I'm expecting 129).
Curiously, running this same example with bytes 0, 0, 128, 255 is correct (the floating-point value f becomes -Infinity, then converting back to bytes yields 0, 0, 128, 255 again). Given this fact, I suspect NaN is relevant.
Can anyone shed some light on what's happening here?
Update: the question Converting 2 bytes to Short in C# was listed as a duplicate, but that's inaccurate. That question is attempting to convert bytes to a value (in that case, two bytes to a short) and incorrect endianness was giving an unexpected value. In my case, the actual float value is irrelevant (since I'm not using the converted value as a float). Instead, I'm attempting to effectively reinterpret four bytes as a float directly by first converting to a float, then converting back. As shown, that back-and-forth sometimes returns different bytes than the ones I sent in.
Second update: I'll simplify my question. As Peter Duniho comments, BitConverter will never modify the bytes you pass in, but simply copies them to a new memory location and reinterprets the result. However, as my example shows, it is possible to send in four bytes (0, 0, 129, 255) which are internally copied and reinterpreted as a float, then convert that float back to bytes that are different from the originals (0, 0, 193, 255).
Endianness is frequently mentioned in relation to BitConverter. However, in this case, I feel endianness isn't the root issue. When I call BitConverter.ToSingle, I pass in an array of four bytes. Those bytes represent some binary (32 bits) which is converted to a float. By changing the endianness prior to the function call, all I'm doing is changing the bits I send into the function. Regardless of the value of those bits, it should be possible to convert them to a float (also 32 bits), then convert the float back to get the same bits I sent in. As demonstrated in my example, using bytes 0, 0, 129, 255 (binary 00000000000000001000000111111111) results in a floating-point value. I'd like to take that value (the float represented by those bits) and convert it to the original four bytes.
Is this possible in C# in all cases?
After research, experimentation, and discussion with friends, the root cause of this behavior (bytes changing when converted to and from a float) seems to be signaling vs. quiet NaNs (as Hans Passant also pointed out in a comment). I'm no expert on signaling and quiet NaNs, but from what I understand, quiet NaNs have the highest-order bit of the mantissa set to one, while signaling NaNs have that bit set to zero. See the following image (taken from https://www.h-schmidt.net/FloatConverter/IEEE754.html) for reference. I've drawn four colored boxes around each group of eight bits, as well as an arrow pointing to the highest-order mantissa bit.
Of course, the question I posted wasn't about floating-point bit layout or signaling vs. quiet NaNs, but simply asking why my encoded bytes were seemingly modified. The answer is that the C# runtime (or at least I assume it's the C# runtime) internally converts all signaling NaNs to quiet, meaning that the byte encoded at that position has its second bit swapped from zero to one.
For example, the bytes 0, 0, 129, 255 (encoded in the reverse order, I think due to endianness) put the value 129 in the second byte (the green box). 129 in binary is 10000001, so flipping its second bit gives 11000001, which is 193 (exactly what I saw in my original example). This same pattern (the encoded byte having its value changed) applies to all bytes in the range 129-191 inclusive. Bytes 128 and lower aren't NaNs, while bytes 192 and higher are NaNs, but don't have their value modified because their second bit (placed at the highest-order mantissa bit) is already one.
So that answers why this behavior occurs, but in my mind, there are two questions remaining:
Is it possible to disable this behavior (converting signaling NaNs to quiet) in C#?
If not, what's the workaround?
The answer to the first question seems to be no (I'll amend this answer if I learn otherwise). However, it's important to note that this behavior doesn't appear consistent across all .NET versions. On my computer, NaNs are converted (i.e. my encoded bytes changed) on every .NET Framework version I tried (starting with 4.8.0, then working back down). NaNs appear not to be converted (i.e. my encoded bytes did not change) in .NET Core 3 and .NET 5 (I didn't test every available version). In addition, a friend was able to run the same sample code on .NET Framework 4.7.2, and surprisingly, the bytes were not modified on his machine. The internals of different C# runtimes aren't my area of expertise, but suffice it to say there's variance among versions and computers.
The answer to the second question is to, as others have suggested, simply avoid the float conversion entirely. Instead, each set of four bytes (representing RGBA colors in my case) can either be encoded in an integer or added to a byte array directly.
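As a rough sketch of that workaround (the variable names are mine), packing the four RGBA bytes into a uint round-trips them exactly:
// Keep the bytes in an integer so no signaling-to-quiet NaN normalization can occur.
byte r = 0, g = 0, b = 129, a = 255;
uint packed = (uint)r | ((uint)g << 8) | ((uint)b << 16) | ((uint)a << 24);
byte blue = (byte)((packed >> 16) & 0xFF);   // 129, exactly as stored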

char8 & uchar8 Equivalent

I am working on a program where I communicate with a card via UDP. I receive the card's responses in a byte array and store them in an Int32, but the specification asks for the gain to be coded as uchar8 and the pedestal as char8. What are the equivalents of char8 and uchar8 in C#, and are 0 and 255 really the minimum and maximum values?
char8 & uchar8
As far as the specification you're asking about goes, you'll need to ask the author of the specification for specifics. However, based solely on the image you've provided, I'd say that yes, the numerical values given indicate the minimum and maximum values allowed for the types. As such, the equivalent C# types would be sbyte for your char8 type, and byte for the uchar8 type.
The former has a range of -128 to 127 (it's a signed type), while the latter has a range of 0 to 255 (being an unsigned type). Both are stored as single bytes, and as such have 256 different possible values.
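A minimal sketch of how that might look, assuming the card's reply has already been received into a byte[] (the values and offsets below are hypothetical):
// 'response' and the field offsets are placeholders; adjust to your protocol.
byte[] response = { 0x80, 0xF4 };
byte gain = response[0];                          // uchar8 -> byte  (0..255)
sbyte pedestal = unchecked((sbyte)response[1]);   // char8  -> sbyte (-128..127)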

Reading and writing non-standard floating point values

I'm working with a binary file (3d model file for an old video game) in C#. The file format isn't officially documented, but some of it has been reverse-engineered by the game's community.
I'm having trouble understanding how to read/write the 4-byte floating point values. I was provided this explanation by a member of the community:
For example, the bytes EE 62 ED FF represent the value -18.614.
The bytes are little endian ordered. The first 2 bytes represent the decimal part of the value, and the last 2 bytes represent the whole part of the value.
For the decimal part, 62 EE converted to decimal is 25326. This is the numerator of a fraction whose denominator is 65536. Thus, divide 25326 by 65536 and you'll get 0.386.
For the whole part, FF ED converted to decimal is 65517. 65517 represents the whole number -19 (which is 65517 - 65536).
This makes the value -19 + .386 = -18.614.
This explanation mostly makes sense, but I'm confused by 2 things:
Does the magic number 65536 have any significance?
BinaryWriter.Write(-18.613f) writes the bytes as 79 E9 94 C1, so my assumption is the binary file I'm working with uses its own proprietary method of storing 4-byte floating point values (i.e. I can't use C#'s float interchangably and will need to encode/decode the values first)?
Firstly, this isn't a floating-point number; it's a fixed-point number.
Note: A fixed-point number has a specific number of bits (or digits) reserved for the integer part (the part to the left of the decimal point) and a specific number of bits reserved for the fractional part (the part to the right of the decimal point).
Does the magic number 65536 have any significance
It's the number of distinct values an unsigned 16-bit integer can hold, i.e. 2^16, so yes, it's significant: the number you are working with is two 16-bit values, one encoding the integral component and one encoding the fractional component.
so my assumption is the binary file I'm working with uses its own
proprietary method of storing 4-byte floating point values
Nope, wrong again: floating-point values in .NET adhere to the IEEE Standard for Floating-Point Arithmetic (IEEE 754) technical standard.
When you use BinaryWriter.Write(float), it basically just shifts the bits into bytes and writes them to the stream:
// Reinterpret the float's bits as a uint, then write the bytes least-significant first.
uint TmpValue = *(uint *)&value;
_buffer[0] = (byte) TmpValue;
_buffer[1] = (byte) (TmpValue >> 8);
_buffer[2] = (byte) (TmpValue >> 16);
_buffer[3] = (byte) (TmpValue >> 24);
OutStream.Write(_buffer, 0, 4);
If you want to read and write this special value you will need to do the same thing: read and write the bytes and convert them yourself.
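A minimal sketch of that encode/decode, assuming the format really is the little-endian 16.16 fixed-point layout described above:
// Low 16 bits hold the fraction, high 16 bits the whole part.
static float ReadFixed1616(BinaryReader reader)
{
    int raw = reader.ReadInt32();        // BinaryReader reads little-endian
    return raw / 65536f;                 // e.g. 0xFFED62EE -> roughly -18.614
}
static void WriteFixed1616(BinaryWriter writer, float value)
{
    writer.Write((int)Math.Round(value * 65536f));
}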
This is likely a built-in format unique to the game.
It is more like a fraction value,
where 62 EE represents the fractional part of the value and FF ED represents the whole-number part.
The whole-number part is easy to understand, so I'm not going to explain it.
The explanation of Fraction Part is:
For every 2 bytes, there are 65536 possibilities (0 ~ 65535).
256 X 256 = 65536
hence the magic number 65536.
And the game itself must have a built-in algorithm to divide the first 2 bytes by 65536.
Choosing any other divisor would waste memory space and reduce the accuracy of the values that can be represented.
Of course, it all depends on what kind of accuracy the game wishes to present.

Why doesn't my 32-bit integer convert into a float properly?

Background
First of all, I have some hexadecimal data... 0x3AD3FFD6. I have chosen to represent this data as an array of bytes as follows:
byte[] numBytes = { 0x3A, 0xD3, 0xFF, 0xD6 };
I attempt to convert this array of bytes into its single-precision floating point value by executing the following code:
float floatNumber = 0;
floatNumber = BitConverter.ToSingle(numBytes, 0);
I have calculated this online using this IEEE 754 Converter and got the following result:
0.0016174268
I would expect the output of the C# code to produce the same thing, but instead I am getting something like...
-1.406E+14
Question
Can anybody explain what is going on here?
The bytes are in the wrong order. BitConverter uses the endianness of the underlying system (computer architecture), so always make sure to use the right endianness.
Quick Answer: You've got the order of the bytes in your numBytes array backwards.
Since you're programming in C# I assume you are running on an Intel processor and Intel processors are little endian; that is, they store (and expect) the least significant bytes first. In your numBytes array you are putting the most significant byte first.
BitConverter doesn't so much convert byte array data as interpret it as another base data type. Think of physical memory holding a byte array:
b0 | b1 | b2 | b3.
To interpret that byte array as a single precision float, one must know the endian of the machine, i.e. if the LSByte is stored first or last. It may seem natural that the LSByte comes last because many of us read that way, but for little endian (Intel) processors, that's incorrect.
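A minimal sketch of the fix, reversing the array on little-endian machines before the conversion:
byte[] numBytes = { 0x3A, 0xD3, 0xFF, 0xD6 };            // big-endian layout of 0x3AD3FFD6
if (BitConverter.IsLittleEndian)
    Array.Reverse(numBytes);                             // now { 0xD6, 0xFF, 0xD3, 0x3A }
float floatNumber = BitConverter.ToSingle(numBytes, 0);  // roughly 0.0016174268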

Succinct way to write a mixture of chars and bytes?

I'm trying to write an index file that follows the format of a preexisting (and immutable) text file.
The file is fixed length, with 11 bytes of string (in ASCII) followed by 4 bytes of long for a total of 15 bytes per line.
Perhaps I'm being a bit dim, but is there an simple way to do this? I get the feeling I need to open up two streams to write one line - one for the string and one for the bytes - but that feels wrong.
Any hints?
You can use BitConverter to convert between an int/long and an array of bytes. This way you would be able to write eleven bytes followed by four bytes, followed by eleven more bytes, and so on.
byte[] intBytes = BitConverter.GetBytes(intValue); // returns 4-byte array
Converting to bytes: BitConverter.GetBytes(int).
Converting back to int: BitConverter.ToInt32(byte[], int)
If you are developing a cross-platform solution, keep in mind the following note from the documentation (thanks to uriDium for the comment):
The order of bytes in the array returned by the GetBytes method depends on whether the computer architecture is little-endian or big-endian.
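As a rough sketch (the file name, key, and value here are made up), a single BinaryWriter can handle both parts of each 15-byte record; note that it writes the int little-endian, so reverse those four bytes first if your index format expects big-endian:
using System.IO;
using System.Text;
using var writer = new BinaryWriter(File.OpenWrite("index.dat"));
string key = "ABCDEFGHIJK";                   // hypothetical 11-character key
int value = 12345;
writer.Write(Encoding.ASCII.GetBytes(key.PadRight(11).Substring(0, 11)));  // 11 bytes
writer.Write(value);                                                       // 4 bytes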
