Converting double to float in C# giving wrong values - c#

Question is real simple. I have a double which I want to convert to float.
double doubleValue = 0.00000000000000011102230246251565;
float floatValue = (float)doubleValue;
Now when I do this, the float value goes mad: its value is "1.110223E-16". Do I need to do something extra here? I expected the value to be 0 or 0.1, but it is not. What is the problem here?

Converting a double to a float can definitely lose precision. But the value 1.110223E-16 is just exponent notation, i.e. 1.110223 × 10^-16, which is something like 0.0000000000000001110223.
If you want to round the number, you can use Math.Round().

The float is perfectly fine. It represents the value of the double up to the precision of a float.
1.110223E-16 is another notation for 0.0000000000000001110223.
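If you just want the value displayed without the exponent, a fixed-point format string does that. A quick sketch (output shown approximately):
double doubleValue = 0.00000000000000011102230246251565;
float floatValue = (float)doubleValue;
Console.WriteLine(floatValue);                 // 1.110223E-16 (default, scientific notation)
Console.WriteLine(floatValue.ToString("F22")); // 0.0000000000000001110223 (fixed-point, 22 decimal places)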

Double is a double-precision floating-point number, while float is a single-precision floating-point number, so you can normally lose precision when you convert a Double to a float. 1.110223E-16 is just another representation of 0.0000000000000001110223.
If you want to round your values to the nearest number, you can use the Math.Round() method:
Rounds a value to the nearest integer or to the specified number of
fractional digits.
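In this case, though, rounding to a few fractional digits simply yields 0, since the value is about 1.1 × 10^-16. A quick sketch:
double doubleValue = 0.00000000000000011102230246251565;
Console.WriteLine(Math.Round(doubleValue, 4)); // 0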
Take a look at The Floating-Point Guide

A float is capable of representing about 6 significant figures with perfect accuracy. That's not the same thing as 6 decimal places, which is what I think you were assuming.
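A quick sketch of the difference (printed values approximate):
float a = 123456.7f;    // ~7 significant figures, only 1 of them after the decimal point
float b = 0.001234567f; // still ~7 significant figures, but 9 decimal places
Console.WriteLine(a);   // 123456.7
Console.WriteLine(b);   // 0.001234567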

Actually it is giving the right value.
How about getting rid of the exponent after converting to float:
double doubleValue = 0.00000000000000011102230246251565;
float result = (float)doubleValue;
// Parse the float's string form into a decimal, which is displayed without an exponent
decimal d = Decimal.Parse(result.ToString(), System.Globalization.NumberStyles.Float);

Related

c# print float values with more precision

I want to print floats with greater precision than is the default.
For example, when I print the value of PI from a float I get 6 decimals. But if I copy the same value from the float into a double and print it, I get 14 decimals.
Why do I get more precision when printing the same value, just as a double?
How can I get Console.WriteLine() to output more decimals when printing floats, without needing to copy the value into a double first?
I also tried the 0.0000000000 format, but it did not write with more precision; it just added more zeroes. :-/
My test code:
float x = (float)Math.PI;
double xx = x;
Console.WriteLine(x);
Console.WriteLine(xx);
Console.WriteLine($"{x,18:0.0000000000}"); // Try to force more precision
Console.WriteLine($"{xx,18:0.0000000000}");
Output:
3,141593
3,14159274101257
3,1415930000 <-- The program just added more zeroes :-(
3,1415927410
I also tried to enter PI at https://www.h-schmidt.net/FloatConverter/IEEE754.html
The binary float representation of PI is 0b01000000010010010000111111011011
So the value is: 2 * 0b1.10010010000111111011011 = 2 * 1.57079637050628662109375 = 3.1415927410125732421875
So there are more decimals to output. How would I get C# to output this whole value?
There is no more precision hiding in the float. When you convert it to a double you get exactly the same value - the extra digits you see in the double print are just the full decimal expansion of that float value, not additional correct digits of pi. The number is no more accurate than before. This has everything to do with how binary floating point numbers work.
Let's look at the binary representation of the float:
01000000010010010000111111011011
The first bit is the sign, the next eight are the exponent, and the rest is the mantissa. What do we get when we cast it to a double? The exponent stays the same and the mantissa is padded with zeroes - which leaves the value unchanged: 3.1415927410125732421875. That is not the true double value of pi (3.14159265358979...), it is simply the float's approximation of pi written out in full. You get the illusion of greater precision, but it's only an illusion - the number is no more accurate than before; the extra digits just spell out the float's rounding error.
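If you simply want to see every digit the float (or the double holding it) actually carries, the round-trip format specifiers do that. A minimal sketch ("G9" and "G17" are the standard round-trip precisions for float and double; outputs approximate):
float x = (float)Math.PI;
Console.WriteLine(x.ToString("G9"));            // 3.14159274
Console.WriteLine(((double)x).ToString("G17")); // 3.1415927410125732 - same value, all digits shown
Console.WriteLine(Math.PI.ToString("G17"));     // 3.1415926535897931 - the real double pi, for comparison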
There is, however, no 1:1 mapping between decimal strings and floats: many different decimal values round to the same float, so the short string .NET shows for a float and the full decimal expansion of its exact value are not the same thing. If it is the decimal representation you care about, re-round (or re-parse) after the cast. Consider this code instead:
double.Parse(((float)Math.PI).ToString())
Instead of casting the float straight to a double, I first converted it to a decimal representation (a string) and created the double from that. Now, instead of a double carrying the float's full binary expansion, you have a double that doesn't show more digits than the float was good for; when you print it out, you get 3.1415930000. Still rounded, of course, since it's still a binary-to-decimal conversion, but it no longer pretends to have extra precision - the zeroes are really zeroes, except for the last one (which is only approximately zero).
If you want real decimal precision, you might want to use a decimal-based type (e.g. decimal) or an integer type. float and double are both binary types and only have binary precision. A finite decimal number isn't necessarily finite in binary, just like a finite ternary number isn't necessarily finite in decimal (1/3, for example, doesn't have a finite decimal representation).
A float in C# has a precision of roughly 7 significant digits and no more; for a value like pi that is 1 digit before the decimal point and 6 after.
Any further digits in your output may be entirely wrong.
If you do need more digits, you have to use either double, which has 15-16 digits of precision, or decimal, which has 28-29.
See MSDN for reference.
You can easily verify that as the digits of PI are a little different from your output. The correct first 40 digits are: 3.14159 26535 89793 23846 26433 83279 50288 4197
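For comparison, a minimal sketch of how many of those digits each type actually keeps (the decimal literal is simply pi typed in to 28 decimal places):
float   f = (float)Math.PI;
double  d = Math.PI;
decimal m = 3.1415926535897932384626433833m;
Console.WriteLine(f.ToString("G9"));  // 3.14159274         (~7 correct digits)
Console.WriteLine(d.ToString("G17")); // 3.1415926535897931 (~15-16 correct digits)
Console.WriteLine(m);                 // 3.1415926535897932384626433833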
To print the value with 6 places after the decimal:
Console.WriteLine("{0:N6}", (double)2/3);
Output :
0.666667

Convert float to double

I'm trying to convert Single to Double while maintaining the original value. I've found the following method:
Single f = 5.2F;
Double d1 = f; // 5.19999980926514
Double d2 = Double.Parse(f.ToString()); // 5.2 (correct)
Is this practice recommendable? I don't need an optimal method, but the intended value must be passed on to the double. Are there any consequences to storing a rounded value in a double?
You could use decimal instead of going through a string:
float f = 5.2F;
decimal dec = new decimal(f);//5.2
double d = (double)dec; //5.2
The conversion is exact. Every Single value can be represented by a Double, because they are "built" the same way, just with more bits. What you see as 5.2F is in truth 5.19999980926513671875. If you go to http://www.h-schmidt.net/FloatConverter/IEEE754.html and enter 5.2, you'll see that it has an exponent of 2^2 (so 4) and a mantissa of about 1.2999999523162842. Multiply the two and you get that 5.1999998... value back.
Single has a maximum precision of about 7 significant digits, so .NET only shows 7 digits. With a little rounding, 5.19999980926513671875 is 5.2.
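A minimal sketch comparing the three routes (printed values are approximate and depend on culture and runtime):
float f = 5.2F;
double direct   = f;                          // exact same value as the float
double viaDec   = (double)new decimal(f);     // decimal constructor rounds to 7 significant digits
double viaParse = double.Parse(f.ToString()); // round-trips the short string form
Console.WriteLine(direct);   // ~5.19999980926514
Console.WriteLine(viaDec);   // 5.2
Console.WriteLine(viaParse); // 5.2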
If you know that your numbers are multiples of e.g. 0.01, I would suggest that you convert to double, round to the nearest integer, and subtract that to get the fractional residue. Multiply that residue by 100, round to the nearest integer, and divide by 100 again. Add the result back to the whole-number part to get the double closest to the multiple of 0.01 that is nearest the original number.
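A minimal sketch of that approach, assuming the values really are meant to be multiples of 0.01 (names are illustrative):
float f = 9000.02f;           // actually stored as ~9000.019531
double d = f;
double whole = Math.Round(d); // whole-number part
double frac = d - whole;      // fractional residue
double snapped = whole + Math.Round(frac * 100) / 100;
Console.WriteLine(snapped);   // 9000.02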
Note that depending upon where the float values originally came from, such treatment may or may not improve accuracy. The closest float value to 9000.02 is about 9000.019531, and the closest float value to 9000.021 is about 9000.021484. If the values were arrived at by converting 9000.020 and 9000.021 to float, the difference between them should be about 0.001. If, however, they were arrived at by e.g. computing 9000f+0.019531f and 9000f+0.021484f, then the difference between them should be closer to 0.002. Rounding to the nearest 0.01 before the subtraction would improve accuracy in the former case and degrade it in the latter.

C# float variable showing epsilon number

I am facing a problem while assigning a large value to a float variable. Following is the code:
float f1 = 99999999999.959f;
When I retrieve the value from this variable, it shows 1.0E+11. I want to get my original value back. Can anyone help me sort out this problem?
In C#, float values only have a precision of 7 digits, so you're getting the best you can out of that data type. If you need more precision, use double or decimal.
http://msdn.microsoft.com/en-us/library/s1ax56ch.aspx
You are getting back the number you entered, to the degree of accuracy available from a float (about 6-7 significant digits).
So firstly, the storage of the value has limited precision, and secondly, the way you display it may add further rounding.
If you use a double, you will increase the precision to around 15-16 significant digits, or you can go further and use a decimal.
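A quick sketch contrasting the three types for this particular value (display details vary slightly between runtimes):
float f = 99999999999.959f;    // ~7 significant digits: displays as 1E+11
double d = 99999999999.959;    // ~15-16 significant digits: 99999999999.959
decimal m = 99999999999.959m;  // 28-29 significant digits: 99999999999.959
Console.WriteLine(f);
Console.WriteLine(d);
Console.WriteLine(m);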
The primitive types float and double are approximations.
Here is a great response on their precision: How to Calculate Double + Float Precision
Even decimal has this problem. C# does not come with a big-number class, but you can use libraries for that. If you need decimal numbers more accurate than decimal (±1.0 × 10^−28 to ±7.9 × 10^28), you should try them.
Example: https://bcl.codeplex.com/ (BigRational)

c# Convert String to double loss of precision

I have the following:
string value = "9223372036854775807";
double parsedVal = double.Parse(value, CultureInfo.InvariantCulture);
... and the result is 9.2233720368547758E+18 which is not the exact same number. How should I convert string to double without loss of precision?
double can only guarantee approximately 16 decimal digits of accuracy. You might try switching to decimal, which has a few more bits to play with and holds that value with plenty of headroom.
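A minimal sketch of the difference (the value is long.MaxValue, 19 significant digits; outputs approximate):
string value = "9223372036854775807";
double parsedDouble = double.Parse(value, CultureInfo.InvariantCulture);
decimal parsedDecimal = decimal.Parse(value, CultureInfo.InvariantCulture);
Console.WriteLine(parsedDouble);  // ~9.223372036854776E+18 - the low digits are gone
Console.WriteLine(parsedDecimal); // 9223372036854775807 - exact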
You can't convert 9223372036854775807 to double without loss of precision, due to the definition of a double (the IEEE 754 standard for binary floating-point arithmetic):
By default, a Double value contains 15 decimal digits of precision,
although a maximum of 17 digits is maintained internally.
Using Decimal will get you the precision you need here. However, please note that Decimal does not have the same range as a double.
See Can C# store more precise data than doubles?

Double.Parse losing precision when converts 8 decimal places string

I am trying to convert "12345678.12345678" to double, but Double.Parse changes it to 12345678.123457. The same happens when I use Decimal instead of double:
decimal check = Decimal.Parse("12345678.12345678", NumberStyles.AllowDecimalPoint);//returns 12345678.123457
double check1 = (Double)check; //returns 12345678.123457
Floating point arithmetic with double-precision values inherently has finite precision: there are only 15-16 significant decimal digits of information in a double-precision value. The behaviour you see is exactly what is to be expected.
The closest representable double precision value to 12345678.12345678 is 12345678.1234567798674106597900390625 which tallies with your observed behaviour.
Floating point types have only so many significant digits: 15 or 16 in the case of System.Double (the exact number varies with the value).
The documentation for System.Double covers this.
A read of What Every Computer Scientist Should Know About Floating-Point Arithmetic is worth while.
If you take a look at the page for the double datatype you'll see that the precision is 15-16 digits. You've reached the limit of the precision of the type.
I believe Decimal might be what you're looking for in this situation.
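For example, a quick sketch (invariant culture assumed) showing that decimal does keep all the digits of this particular value:
decimal check = decimal.Parse("12345678.12345678", NumberStyles.AllowDecimalPoint, CultureInfo.InvariantCulture);
Console.WriteLine(check); // 12345678.12345678 - no digits lost until you cast it to double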
Just a quick test gave me the correct value.
double dt = double.Parse("12345678.12345678");
Console.WriteLine(dt.ToString());
There are two things going on here:
A decimal-to-double conversion is inexact; the double type has binary precision, which does not map neatly onto decimal digits.
Double is limited to 15-16 significant decimal digits.
Reference for decimal to double conversion
