I have a hex value 0x492655FE that I want to convert to a float. My code is:
uint num = uint.Parse(hexString, System.Globalization.NumberStyles.AllowHexSpecifier);
byte[] floatVals = BitConverter.GetBytes(num);
BitConverter.ToSingle(floatVals, 0).Dump();
Result: 681311,9
But the ModbusPoll program (display: float inverse) shows the result as 681311,8750.
I tried other code, but the result is the same. How can I fix this?
According to this
0x492655FE in single-precision represents exactly 6.81311875E5, or 681311.875 in decimal. If you print more digits after the radix point you'll get the same output as ModbusPoll.
However, float has only a 23-bit mantissa (24 bits counting the implicit leading 1), which corresponds to roughly 6-7 decimal digits of precision. The rest is generally just noise, because most decimal fractions cannot be represented exactly in binary, so it's safe to round to only the first 6-7 significant digits.
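For instance, here is a small sketch (not from the original post) that prints the same conversion with a fixed four-decimal format, which matches what ModbusPoll shows, and with the "G9" format, which shows all the significant digits the float actually stores:
uint num = uint.Parse("492655FE", System.Globalization.NumberStyles.AllowHexSpecifier);
float f = BitConverter.ToSingle(BitConverter.GetBytes(num), 0);
Console.WriteLine(f.ToString("F4")); // 681311.8750 -- the same value ModbusPoll displays
Console.WriteLine(f.ToString("G9")); // 681311.875  -- every significant digit of the float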
On the C# reference for floating-point numeric types one can read that
float has a precision of 6 to 9 digits
double has a precision of 15 to 17 digits
decimal has a precision of 28 to 29 digits
What does precision mean in this context and especially, how can the precision be a range? As the number of bits for the exponent and mantissa are fixed, how can the precision be variable (in the described range)? Can someone please give an example for e.g. a float with a precision of 6 and one with a precision of 9?
float and double
I'll explain float, that is, the IEEE-754 single-precision floating-point format; double, the IEEE-754 double-precision format, works the same way but with bigger fields.
In general you can imagine a float to be:
mantissa₂ * (2 ^ exponent₂)
where mantissa₂ means mantissa in base two, and exponent₂ means exponent in base two
The mantissa₂ is 23 bits, the exponent₂ 8 bits. There is an extra bit for the sign, and the exponent₂ has a special format with a special range that we will see below.
There is another trick: floating points are normally saved in "normalized" form:
1₂.mantissa₂ * (2 ^ exponent₂)
so the first digit is always 1₂, and so there is a 1₂ plus 23 binary digits for the mantissa₂, so a total of 24 digits for the complete mantissa₂.
Now, with 24 bits you can represent the integers from 0 up to 16,777,215, that is 7 full digits plus an 8th that is "partial" (you can't have 26,777,216 for example). In fact log₁₀ 2^24 = 7.22471989594
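A quick way to see that limit (my own C# illustration, not part of the original answer): integers are exact up to 2^24, and just above that they stop being distinguishable:
float f = 16777216f;             // 2^24
Console.WriteLine(f + 1f == f);  // True: 16,777,217 is not representable, the +1 is lost
Console.WriteLine(f - 1f == f);  // False: 16,777,215 still fits in 24 bits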
The exponent "moves" a floating decimal point, so that you can have, for example
1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂ . 1₂ (there are a total of 24 binary digits 1, I hope... I counted them)
or
1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂ . 1₂1₂
or
1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂0₂
or
1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂1₂0₂0₂
and so on.
The exponent₂ has several ranges: [-126;-1] and [1;127] for normal numbers (plus 0, which leaves the point where it is), a stored exponent of 0 for denormalized numbers, and a stored exponent of 255 (all exponent bits set to 1) for NaN and Infinity.
In the range [-126;-1] the binary point is moved to the left by that many steps; in the range [1;127] it is moved to the right in the same way.
If the stored exponent is 0, the number is "denormalized". These are awkward floating-point numbers that need special handling and are slower for that reason. When a number is denormalized there is no implicit 1₂ at the beginning, so you only have 23 bits of mantissa, that is, six-point-something digits of precision (log₁₀ 2^23 = 6.9236899).
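As a side note (my own example, not from the answer): .NET exposes the smallest positive denormalized float as float.Epsilon, which is 2^-149:
Console.WriteLine(float.Epsilon);           // roughly 1.4E-45 (printed as 1E-45 on newer runtimes)
Console.WriteLine(float.Epsilon / 2 == 0f); // True: halving it underflows to zero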
Can't explain how the 9 digits of precision come out.
decimal
With decimal it is easy: the format is:
mantissa₂ / (10 ^ exponent₂)
where mantissa₂ is 96 bits, exponent₂ is 5 bits (a little less in practice: its range is [0;28]), plus there is a sign bit and many unused bits. The exact format is written in the reference source. In decimal there is no implicit initial 1₂, so it is a pure 96 bits, and log₁₀ 2^96 = 28.8988795837, so 28 or 29 digits.
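If you want to see those pieces from C#, decimal.GetBits exposes them (a small sketch of mine, not part of the answer):
int[] parts = decimal.GetBits(123.45m);
// parts[0..2] hold the 96-bit integer significand (here 12345);
// parts[3] holds the sign bit and the scaling factor (here 2, meaning 12345 / 10^2).
Console.WriteLine(parts[0]);                // 12345
Console.WriteLine((parts[3] >> 16) & 0xFF); // 2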
What's the meaning of precision digits in float / double / decimal?
The decimal digits needed to round trip between text and the type.
text → FP → text: If a number written as decimal text is converted to the floating-point type and then converted back to text with the same number of digits, and you require the same value to come back over the entire exponent range of the FP type, then the maximum number of significant decimal digits the text may have is the lower number, e.g. 6 for float. As long as the text shows at most 6 significant digits, float can encode a value close enough to recover them.
FP → text → FP: If an FP number is converted to decimal text and then converted back to FP, the number of significant decimal digits the text needs in order to always give back the same FP value is the higher number, e.g. 9 for float. As long as the text reports 9 or more significant digits, the original FP value can be recovered exactly.
float has 24 bits of binary precision. To translate that into decimal, the above context is important. The minimum non-zero double takes hundreds of decimal digits to print out exactly, yet that is rarely thought of as the precision of that number.
Can someone please give an example for e.g. a float with a precision of 6 and one with a precision of 9?
6 decimal digits always work in the text → FP → text direction. In the other direction, "9999979e3" and "9999978e3" both convert to the same float value, about 9.999978e+09, so 7-digit texts do not always survive the trip, and 9 significant text digits are needed to round-trip FP → text → FP.
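A C# sketch of both directions (my own illustration; the exact output formatting depends on the runtime and culture):
float a = float.Parse("9999979e3");
float b = float.Parse("9999978e3");
Console.WriteLine(a == b);                             // True: two different 7-digit texts collapse into one float
Console.WriteLine(a.ToString("G9"));                   // about 9.9999785E+09 -- 9 significant digits
Console.WriteLine(float.Parse(a.ToString("G9")) == a); // True: 9 digits always recover the original float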
I have a problem understanding the precision of float type.
The MSDN says that the precision is from 6 to 9 digits. But I notice that the precision depends on the size of the number:
float smallNumber = 1.0000001f;
Console.WriteLine(smallNumber); // 1.0000001
float bigNumber = 100000001f;
Console.WriteLine(bigNumber); // 100000000
The smallNumber is more precise than the big one. I understand IEEE 754, but I don't understand how MSDN calculates the precision, and does it make sense?
Also, you can play with the representation of numbers in float format here. Enter the value 100000000 in the "You entered" input and click "+1" on the right. Then change the input's value to 1 and click "+1" again. You will see the difference in precision.
The MSDN documentation is nonsensical and wrong.
Bad concept. Binary-floating-point format does not have any precision in decimal digits because it has no decimal digits at all. It represents numbers with a sign, a fixed number of binary digits (bits), and an exponent for a power of two.
Wrong on the high end. The floating-point format represents many numbers exactly, with infinite precision. For example, “3” is represented exactly. You can write it in decimal arbitrarily far, 3.0000000000…, and all of the decimal digits will be correct. Another example is 1.40129846432481707092372958328991613128026194187651577175706828388979108268586060148663818836212158203125 × 10^−45. This number has 105 significant digits in decimal, but the float format represents it exactly (it is 2^−149).
Wrong on the low end. When “999999.97” is converted from decimal to float, the result is 1,000,000. So not even one decimal digit is correct.
Not a measure of accuracy. Because the float significand has 24 bits, the resolution of its lowest bit is about 2^23 times finer than the resolution of its highest bit. This is about 6.9 digits in the sense that log₁₀ 2^23 is about 6.9. But that just tells us the resolution—the coarseness—of the representation. When we convert a number to the float format, we get a result that differs from the number by at most ½ of this resolution, because we round to the nearest representable value. So a conversion to float has a relative error of at most 1 part in 2^24, which corresponds to about 7.2 digits in the above sense. If we are using digits to measure resolution, then we say the resolution is about 7.2 digits, not that it is 6-9 digits.
Where do these numbers come from?
So, if “~6-9 digits” is not a correct concept, does not come from actual bounds on the digits, and does not measure accuracy, where does it come from? We cannot be sure, but 6 and 9 do appear in two descriptions of the float format.
6 is the largest number x for which this is guaranteed:
If any decimal numeral with at most x significant digits is within the normal exponent bounds of the float format and is converted to the nearest value represented in the format, then, when the result is converted to the nearest decimal numeral with at most x significant digits, the result of that conversion equals the original number.
So it is reasonable to say float can preserve at least six decimal digits. However, as we will see, there is no bound involving nine digits.
9 is the smallest number x that guarantees this:
If any finite float number is converted to the nearest decimal numeral with x digits, then, when the result is converted to the nearest value representable in float, the result of that conversion equals the original number.
As an analogy, if float is a container, then the largest “decimal container” guaranteed to fit inside it is six digits, and the smallest “decimal container” guaranteed to hold it is nine digits. 6 and 9 are akin to interior and exterior measurements of the float container.
Suppose you had a block 7.2 units long, and you were looking at its placement on a line of bricks each 1 unit long. If you put the start of the block at the start of a brick, it will extend 7.2 bricks. However, if somebody else chooses where it starts, they might start it in the middle of a brick. Then it would cover part of that brick, all of the next 6 bricks, and part of the last brick (e.g., .5 + 6 + .7 = 7.2). So a 7.2-unit block is only guaranteed to cover 6 bricks. Conversely, 8 bricks can cover the 7.2-unit block if you choose where they are placed. But if somebody else chooses where they start, the first might cover just .1 units of the block. Then you need 7 more and another fraction, so 9 bricks are needed.
The reason this analogy holds is that powers of two and powers of 10 are irregularly spaced relative to each other. 2^10 (1024) is near 10^3 (1000). 10 is the exponent used in the float format for numbers from 1024 (inclusive) to 2048 (exclusive). So this interval from 1024 to 2048 is like a block that has been placed just after the point where the 100–1000 block ends and the 1000–10,000 block starts.
But note that this property involving 9 digits is the exterior measurement—it is not a capability that float can perform or a service that it can provide. It is something that float needs (if it is to be held in a decimal format), not something it provides. So it is not a bound on how many digits a float can store.
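Both the "wrong on the low end" example and the interior/exterior measurements are easy to check in C# (a small sketch of mine; the printed strings depend on the runtime's default formatting):
Console.WriteLine(999999.97f == 1000000f);               // True: the nearest float to 999999.97 is exactly 1,000,000
float pi = (float)Math.PI;
Console.WriteLine(float.Parse(pi.ToString("G9")) == pi); // True: 9 significant digits always recover the float
Console.WriteLine(float.Parse(pi.ToString("G6")) == pi); // False: 6 digits do not identify this particular float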
Further Reading
For better understanding of floating-point arithmetic, consider studying the IEEE-754 Standard for Floating-Point Arithmetic or a good textbook like Handbook of Floating-Point Arithmetic by Jean-Michel Muller et al.
Yes, the number of correct digits before rounding errors appear is a measure of precision, but you cannot assess precision from just 2 numbers, because you might simply be closer to or further from the rounding threshold.
To better understand the situation then you need to see how floats are represented.
The IEEE754 32bit floats are stored as:
bool (1-bit sign) * integer (24-bit mantissa) << integer (8-bit exponent)
Yes, the mantissa is 24 bits instead of 23, as its MSB is implicitly set to 1.
As you can see there are only integers and a bit shift. So if you are representing a natural number up to 2^24 you have no rounding at all. For bigger numbers, binary zero padding occurs from the right, and that is what causes the difference.
In the case of digits after the decimal point, the zero padding occurs from the left. But there is another problem: in binary you cannot store some decadic numbers exactly. For example:
0.3 dec = 0.0100110011001100110011001100110011... bin
0.25 dec = 0.01 bin
As you can see, the sequence for 0.3 dec in binary is infinite (just as we cannot write 1/3 exactly in decadic), hence if we crop it to only 24 bits we lose the rest and the number is not what we wanted anymore.
If you compare 0.3 and 0.125, then 0.125 is exact and 0.3 is not, yet 0.125 is much smaller than 0.3. So your measure is not correct unless you explore many very close values that cover the rounding steps and compute the max difference over such a set. For example you could compare
1.0000001f
1.0000002f
1.0000003f
1.0000004f
1.0000005f
1.0000006f
1.0000007f
1.0000008f
1.0000009f
and remember the max difference of fabs(x - round(x)), and then do the same for
100000001
100000002
100000003
100000004
100000005
100000006
100000007
100000008
100000009
And then compare the two differences.
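A rough C# sketch of that comparison (my own; it uses double as the reference for the intended decimal value):
double maxSmall = 0, maxBig = 0;
for (int i = 1; i <= 9; i++)
{
    double exactSmall = 1.0 + i * 1e-7;   // intended values 1.0000001 .. 1.0000009
    double exactBig = 100000000.0 + i;    // intended values 100000001 .. 100000009
    maxSmall = Math.Max(maxSmall, Math.Abs((float)exactSmall - exactSmall));
    maxBig = Math.Max(maxBig, Math.Abs((float)exactBig - exactBig));
}
Console.WriteLine(maxSmall); // on the order of 6e-8 (half of the float step near 1.0)
Console.WriteLine(maxBig);   // about 4 (half of the float step near 1e8, which is 8)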
On top of all this you are missing one very important thing: the errors introduced while converting from text to binary and back, which are usually even bigger. First of all, try to print your numbers without rounding (for example, force printing 20 decimal digits after the decimal point).
Also, the numbers are stored in binary, so in order to print them you need to convert to decadic base, which involves multiplication and division by 10. The more bits are missing (zero padded) from the number, the bigger the print errors are. To be as precise as you can, a trick is used: print the number in hex (no rounding errors) and then convert the hex string itself to decadic base using integer math. That is much more accurate than naive floating-point prints. For more info see related QAs:
my best attempt to print 32 bit floats with least rounding errors (integer math only)
How do libraries/programming languages convert floats to strings
How do I convert a very long binary number to decimal?
Now to get back to the number of "precise" digits represented by float. For the integer part of the number it is easy:
dec_digits = floor(log10(2^24)) = floor(7.22) = 7
However, for digits after the decimal point this is not as precise (for the first few decadic digits), as there is a lot of rounding going on. For more info see:
How do you print the EXACT value of a floating point number?
I think what they mean in their documentation is that, depending on the number, the precision ranges from 6 to 9 decimal digits. Go by the standard that is explained on the page you linked; sometimes Microsoft is a bit lazy when it comes to documentation, like the rest of us.
The problem with floating point is that it is inaccurate. If you put the number 1.05 into the site in your link you will notice that it cannot be stored exactly in floating point. It's actually stored as 1.0499999523162841796875. It's stored this way so calculations are faster. That's not great for money; e.g., what if your item is priced at $1.05 and you sell a billion of them?
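A small sketch of that scenario (my own illustration, with hypothetical shop totals):
float totalF = 1.05f * 1000000000f;
decimal totalM = 1.05m * 1000000000m;
Console.WriteLine(totalF); // about 1.05E+09, but several tens of dollars short of 1,050,000,000
Console.WriteLine(totalM); // 1050000000.00 -- exact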
The smallNumber is more precise than big
Incorrect comparison. The second number has more significant digits.
1.0000001f is attempting N digits of decimal precision.
100000001f attempts N+1.
I have a problem understanding the precision of float type.
To best understand float precision, think binary. Use "%a" for printing with a C99 or later compiler.
float is stored base 2. The significand is a Dyadic rational, some integer/power-of-2.
float commonly has 24 bits of binary precision. (23-bit explicitly encoded, 1 implied)
Between [1.0 ... 2.0), there are 2^23 different float values.
Between [2.0 ... 4.0), there are 2^23 different float values.
Between [4.0 ... 8.0), there are 2^23 different float values.
...
The possible values of a float are not distributed uniformly among powers-of-10. The grouping of float values to power-of-10 (decimal precision) results in the wobbling 6 to 9 decimal digits of precision.
How to calculate float type precision?
To find the difference between subsequent float values, since C99, use nextafterf()
Illustrative code:
#include <math.h>
#include <stdio.h>

/* Print b together with its two neighbouring floats and the local decimal precision. */
void foooo(float b) {
  float a = nextafterf(b, 0);        /* next representable float toward 0    */
  float c = nextafterf(b, b * 2.0f); /* next representable float away from 0 */
  printf("%-15a %.9e\n", a, a);
  printf("%-15a %.9e\n", b, b);
  printf("%-15a %.9e\n", c, c);
  /* (c - b) is one ULP at b; 1 - log10 of the relative step estimates the decimal digits. */
  printf("Local decimal precision %.2f digits\n", 1.0 - log10((c - b) / b));
}

int main(void) {
  foooo(1.0000001f);
  foooo(100000001.0f);
  return 0;
}
Output
0x1p+0 1.000000000e+00
0x1.000002p+0 1.000000119e+00
0x1.000004p+0 1.000000238e+00
Local decimal precision 7.92 digits
0x1.7d783ep+26 9.999999200e+07
0x1.7d784p+26 1.000000000e+08
0x1.7d7842p+26 1.000000080e+08
Local decimal precision 8.10 digits
I want to print floats with greater precision than the default.
For example, when I print the value of PI from a float I get 6 decimals. But if I copy the same value from the float into a double and print it, I get 14 decimals.
Why do I get more precision when printing the same value but as a double?
How can I get Console.WriteLine() to output more decimals when printing floats without needing to copy it into a double first?
I also tried the 0.0000000000 format, but it did not write with more precision, it just added more zeroes. :-/
My test code:
float x = (float)Math.PI;
double xx = x;
Console.WriteLine(x);
Console.WriteLine(xx);
Console.WriteLine($"{x,18:0.0000000000}"); // Try to force more precision
Console.WriteLine($"{xx,18:0.0000000000}");
Output:
3,141593
3,14159274101257
3,1415930000 <-- The program just added more zeroes :-(
3,1415927410
I also tried to enter PI at https://www.h-schmidt.net/FloatConverter/IEEE754.html
The binary float representation of PI is 0b01000000010010010000111111011011
So the value is: 2 * 0b1.10010010000111111011011 = 2 * 1.57079637050628662109375 = 3.1415927410125732421875
So there are more decimals to output. How would I get C# to output this whole value?
There is no more precision hiding in a float. When you convert it to a double, the accuracy does not improve, even though the number of printed digits increases - the extra digits you see in the double print are just the decimal expansion of the value the float already stored, not extra digits of pi. This has everything to do with how binary floating-point numbers work.
Let's look at the binary representation of the float:
01000000010010010000111111011011
The first bit is the sign, the next eight are the exponent, and the rest is the mantissa. What do we get when we cast it to a double? The exponent is re-encoded and the mantissa is padded with zeroes, so the double holds exactly the same value: 3.1415927410125732421875. But that value is not pi - the correct double value of pi would be 3.141592653589793. You get the illusion of greater precision, but it's only an illusion: the number is no more accurate than before, the extra printed digits just spell out the rounding error that was already baked into the float.
Note that the cast and a round trip through the printed string give different doubles: (double)x yields exactly 3.1415927410125732421875, while parsing the short string "3.141593" yields the double nearest that text. If what you care about is the decimal rendering rather than the stored binary value, round (or go through the string) when widening to double. Consider this code instead:
double.Parse(((float)Math.PI).ToString())
Instead of casting the float to a double, I first converted it to its decimal representation (a string) and created the double from that. Now instead of a zero-padded double, you have a double that doesn't pretend to have extra precision; when you print it out with the format above, you get 3.1415930000. Still rounded, of course, since it's still a binary-to-decimal conversion, but the rounding happens at a later digit than in the float version, and the trailing zeroes really are zeroes, except for the last one (which is only approximately zero).
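If the goal is simply to make Console.WriteLine show more of what the float stores, format strings also work, without any detour through a string parse (a sketch; the default ToString behaviour differs between .NET Framework and .NET Core/5+, which round-trips by default):
float x = (float)Math.PI;
Console.WriteLine(x.ToString("G9"));            // 3.14159274 -- nine significant digits, enough to round-trip the float
Console.WriteLine(((double)x).ToString("G17")); // 3.1415927410125732 -- all the digits needed to round-trip the double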
If you want real decimal precision, use a decimal representation (for example decimal, or an int holding a scaled value). float and double are both binary numbers and only have binary precision. A finite decimal number isn't necessarily finite in binary, just like a finite trinary number isn't necessarily finite in decimal (e.g. 1/3 doesn't have a finite decimal representation).
A float within C# has a precision of about 7 significant digits and no more, regardless of where the decimal point falls.
If you do have any more digits in your output, they might be entirely wrong.
If you do need more digits, you have to use either double, which has 15-16 digits of precision, or decimal, which has 28-29 digits.
See MSDN for reference.
You can easily verify that as the digits of PI are a little different from your output. The correct first 40 digits are: 3.14159 26535 89793 23846 26433 83279 50288 4197
To print the value with 6 places after the decimal:
Console.WriteLine("{0:N6}", (double)2/3);
Output :
0.666667
What's the difference between using decimal or float in C#? I know float is used for more precise decimal numbers and decimal is used for few decimals like currency or prices, but when it's used for few decimals why is it better to use decimal rather than float?
A float is a binary floating-point type, which means that, under the hood, it is a binary mantissa followed by a binary exponent, taking the form mantissa x 10 ^ exponent, where "10" is the number 2 written in binary. For example, the number 3.0 is represented as 1.1 x 10^1, and the number 8 1/2 is represented as 1.0001 x 10^11. It essentially represents numbers in the form of binary fractions. The problem with base-2 floating-point numbers is that it is difficult to precisely represent decimal numbers whose fractional parts are not sums of 1/2, 1/4, 1/8, and so on.
It's easy to represent values like 1/2, 1/4, 1/8, 1/16, etc. in floating-point binary format. 1/2 is just 0.1, 1/4 is 0.01, 1/8 is 0.001, etc. But if you want to represent a decimal value like 0.6, you have to build a sum of base-2 fractions to get close to representing it. So you end up with a floating-point representation of 1.001100110011001100110011 x 10^-1 where the 0011 just keeps repeating, because there is no terminating representation of decimal 0.6 in base 2.
This is where the decimal type comes in. Rather than use a fractional binary representation, the decimal type uses a sign bit, a 96-bit integer significand, and an integer scaling factor to represent the value, taking the form (sign x significand) / (10 ^ scaling factor). The sign can be 0 or 1, the significand can be anything from 0 to 2^96 - 1, and the scaling factor can be anything from 0 to 28. What this means is that the number is represented under the hood as a decimal fraction instead of a binary fraction, so the numbers that we humans are used to working with can be precisely and accurately represented in a rational form under the hood. Unlike the ugly and imprecise binary representation of 0.6 that we saw earlier, the decimal type represents 0.6 as a nice clean (1 x 6) / (10 ^ 1). Pretty, isn't it?
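For example (a small sketch of mine contrasting the two types on the 0.6 value from above):
Console.WriteLine(((double)0.6f).ToString("G17")); // about 0.60000002384185791 -- the binary float nearest to 0.6
Console.WriteLine(0.6m);                           // 0.6 -- decimal stores 6 / 10^1 exactly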
Unfortunately, this comes at a cost. Operations on the decimal type are much, much slower than operations on float or double. The processor in virtually every computer known to man is a binary processor (I say "virtually every" because I cannot disprove the existence of a non-binary computer somewhere on the planet). This means that it natively supports binary addition, subtraction, etc. An operation like float x = 256.0f / 2; compiles down to simple instructions, since dividing a binary floating-point number by 2 just decrements its exponent. However, decimal x = 256.0m / 2; compiles to a more complicated set of instructions, because the number is not stored as a binary fraction, and the special base-10 representation of the number must be accounted for.
Generally, if you require speed more than decimal accuracy, a float or double will suffice for your application. If, however, you require decimal accuracy above all else, such as for accounting, then decimal is the type to use.
See this MSDN documentation for more details.
The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations. The approximate range and precision for the decimal type are shown in the following table.
(-7.9 × 10^28 to 7.9 × 10^28) / (10^0 to 10^28), 28-29 significant digits
From MSDN
Floats on the other hand:
The float keyword signifies a simple type that stores 32-bit floating-point values. The following table shows the precision and approximate range for the float type.
-3.4 × 10^38 to +3.4 × 10^38, 7 significant digits
Decimal uses 128 bits to represent the number, while float uses only 32 bits.
I'm trying to convert Single to Double while maintaining the original value. I've found the following method:
Single f = 5.2F;
Double d1 = f; // 5.19999980926514
Double d2 = Double.Parse(f.ToString()); // 5.2 (correct)
Is this practice advisable? I don't need an optimal method, but the intended value must be passed on to the double. Are there any consequences to storing a rounded value in a double?
You could use "decimal" instead of a string.
float f = 5.2F;
decimal dec = new decimal(f);//5.2
double d = (double)dec; //5.2
The conversion is exact. All Single values can be represented by a Double value, because they are "built" the same way, just with more possible digits. What you see as 5.2F is in truth 5.1999998092651368. If you go to http://www.h-schmidt.net/FloatConverter/IEEE754.html and insert 5.2 you'll see that it has an exponent of 2^2 (so 4) and a mantissa of 1.2999999523162842. Now, if you multiply the two numbers you'll get 5.1999998092651368.
Single has a maximum precision of about 7 digits, so .NET only shows 7 digits. With a little rounding, 5.1999998092651368 becomes 5.2.
If you know that your numbers are multiples of e.g. 0.01, I would suggest that you convert to double, round to the nearest integer, and subtract that to get the fractional residue. Multiply that by 100, round to the nearest integer, and then divide by 100. Add that to the whole-number part to get the nearest double representation to the multiple of 0.01 which is nearest the original number.
Note that depending upon where the float values originally came from, such treatment may or may not improve accuracy. The closest float value to 9000.02 is about 9000.019531, and the closest float value to 9000.021 is about 9000.021484. If the values were arrived at by converting 9000.020 and 9000.021 to float, the difference between them should be about 0.001. If, however, they were arrived at by e.g. computing 9000f + 0.019531f and 9000f + 0.021484f, then the difference between them should be closer to 0.002. Rounding to the nearest 0.001 before the subtraction would improve accuracy in the former case and degrade it in the latter.
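A minimal C# sketch of the rounding approach described above (the method name is my own):
static double ToNearestHundredth(float value)
{
    double d = value;                    // float -> double conversion is exact
    double whole = Math.Round(d);        // whole-number part
    double frac = d - whole;             // fractional residue
    frac = Math.Round(frac * 100) / 100; // snap the residue to the nearest 0.01
    return whole + frac;
}
// Example: ToNearestHundredth(9000.02f) returns the double nearest 9000.02,
// even though the float itself stores about 9000.019531.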