I have the following:
string value = "9223372036854775807";
double parsedVal = double.Parse(value, CultureInfo.InvariantCulture);
... and the result is 9.2233720368547758E+18, which is not exactly the same number. How can I convert a string to a double without loss of precision?
double can only guarantee about 15-16 decimal digits of accuracy. You might try switching to decimal (which has more bits to play with, and holds that value with plenty of headroom).
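For example, a minimal sketch (assuming using System; and using System.Globalization; are in scope):
string value = "9223372036854775807";
decimal parsedVal = decimal.Parse(value, CultureInfo.InvariantCulture);
Console.WriteLine(parsedVal); // 9223372036854775807 -- all 19 digits preserved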
You can't convert 9223372036854775807 to double without loss of precision, due to the definition of a double (the IEEE 754 standard for binary floating-point arithmetic).
By default, a Double value contains 15 decimal digits of precision,
although a maximum of 17 digits is maintained internally.
Using Decimal will get you the precision you need here. However please note that Decimal will not have the same storage range as a double.
See Can C# store more precise data than doubles?
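To illustrate the range difference mentioned above, a quick sketch (output shown approximately; exact formatting varies by runtime):
Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (about 7.9E+28)
Console.WriteLine(double.MaxValue);  // about 1.8E+308 -- vastly larger range, less precision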
While parsing a string to double we are facing an issue: we need to convert an 18-digit string (<123,456,789,012>.<123456>) to a double, but it returns only 5 digits after the decimal point. When we tried reducing the number of digits, it worked fine. We have attached a screenshot with three different scenarios. Variable 's' is the input and retVal is the output value.
Kindly help us convert "123,456,789,012.123456" to 123456789012.123456.
string s = "123,456,789,012.123456";
double retVal;
// Build a CultureInfo from the culture name stored in session state
System.Globalization.CultureInfo cInfo = new System.Globalization.CultureInfo(System.Web.HttpContext.Current.Session["culture"].ToString());
// NumberStyles.Any permits thousands separators and a decimal point
retVal = double.Parse(s, System.Globalization.NumberStyles.Any, cInfo);
[Screenshot: the three parsing scenarios]
A double has a precision of up to 15 digits, see MSDN:
A Double value has up to 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
You should use decimal and Decimal.Parse(..), which gives you 28-29 significant digits.
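For instance, a sketch reusing the cInfo culture object from the question's snippet (assuming that culture uses ',' for grouping and '.' for the decimal point):
decimal retVal = decimal.Parse("123,456,789,012.123456", System.Globalization.NumberStyles.Any, cInfo);
// retVal is exactly 123456789012.123456 -- well within decimal's 28-29 significant digits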
You are running into a precision issue that is inherent to all floating point data types like double. If you need more exact precision, you should be using decimal instead:
string s = "987,654,321.123456";
// decimal.Parse defaults to NumberStyles.Number, which permits thousands separators
decimal d = decimal.Parse(s);
See it in action: https://repl.it/Gtoy/0
EDIT: As Panagiotis Kanavos points out, decimal isn't perfect either. It uses 128 bits as opposed to double's 64. However, where double uses its bits to represent larger numbers, decimal uses them to represent smaller numbers with much greater accuracy.
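A small demonstration of that trade-off:
decimal dm = 1m / 3m;   // 0.3333333333333333333333333333 (28 significant digits)
double  db = 1.0 / 3.0; // about 0.3333333333333333 (15-17 significant digits)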
I have a double with 17 digits after the decimal point, i.e.:
double myDouble = 0.12345678901234567;
If I convert this to a decimal like this:
decimal myDecimal = Convert.ToDecimal(myDouble);
then the value of myDecimal is rounded, as per the Convert.ToDecimal documentation, to 15 digits (i.e. 0.123456789012346). My question is, why is this rounding performed?
I understand that if my original number could be accurately represented in base 10 and I was trying to store it as a double, then we could only have confidence in the first 15 digits. The final two digits would be subject to rounding error. But that's a base-10-biased point of view. My number may be more accurately represented by a double, and I wish to convert it to decimal while preserving as much accuracy as possible.
Shouldn't Convert.ToDecimal aim to minimise the difference between myDouble and (double)Convert.ToDecimal(myDouble)?
From the documentation of Double:
A Double value has up to 15 decimal digits of precision, although a
maximum of 17 digits is maintained internally
So, as the double value itself is only reliable to 15 significant decimal digits, converting it to Decimal results in a Decimal value with 15 significant figures.
The behavior of the rounding guarantees that converting any Decimal which has at most fifteen significant figures to double and back to Decimal will yield the original value unchanged. If values were rounded to sixteen figures rather than fifteen, such a guarantee would not only fail to hold for numbers with sixteen figures, it wouldn't even hold for much shorter values. For example, the closest Double value to 9000.04 is approximately 9000.040000000000873115; rounding that to sixteen figures would yield 9000.040000000001.
The choice of rounding one should use depends upon whether one regards the best Decimal representation of the double value 9000.04 as being 9000.04m, 9000.040000000001m, 9000.0400000000008731m, or perhaps something else. Microsoft probably decided that any representation other than 9000.04m would be confusing.
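To see the round-trip guarantee described above in action:
decimal original = 9000.04m;
double viaDouble = (double)original;              // nearest double, ~9000.040000000000873115
decimal roundTrip = Convert.ToDecimal(viaDouble); // rounded back to 15 significant figures
Console.WriteLine(roundTrip == original);         // True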
The following is from the documentation of the method in question.
http://msdn.microsoft.com/en-us/library/a69w9ca0(v=vs.110).aspx
"The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
Every terminating binary fraction is exactly representable as a decimal fraction, so the minimum possible difference for a finite number is always 0. The IEEE 754 64-bit representation of your number is exactly equal to 0.1234567890123456634920984242853592149913311004638671875
Every conversion from binary floating point to decimal or decimal string must embody some decision about rounding. In theory, they could preserve all the digits, but that would result in some very long outputs with most of the digits having little meaning.
One option, used in Java's Double.toString, is to stop at the shortest decimal representation that would convert back to the original double.
Most set some fairly arbitrary limit. 15 significant digits preserves most meaningful digits.
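In C#, you can see those choices side by side (exact default output varies between .NET Framework and .NET Core):
double d = 0.12345678901234567;
Console.WriteLine(d.ToString());      // 15 significant digits by default on .NET Framework
Console.WriteLine(d.ToString("G17")); // 0.12345678901234566 -- all 17 internal digits
Console.WriteLine(d.ToString("R"));   // a string guaranteed to round-trip to the same double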
How do I convert a double to have a precision of 2 decimal places?
For ex:
double x = 1.00d;
Console.WriteLine(Math.Round(x,2,MidpointRounding.AwayFromZero));
//Should output 1.00 for me, but it outputs just 1.
What I'm looking for is to store 1.00 in a double variable which when printed to console prints 1.00.
Thanks,
-Mike
"x" stores the number - it doesn't pay attention to the precision.
To output the number as a string with a specific amount of precision, use a format specifier, ie:
Console.WriteLine(x.ToString("F2"));
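Other format specifiers work the same way:
double x = 1.00;
Console.WriteLine(x.ToString("F2")); // 1.00
Console.WriteLine(x.ToString("N2")); // 1.00 (adds group separators for larger values)
Console.WriteLine($"{x:0.00}");      // 1.00 (custom format string)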
double is an IEEE-754 double-precision floating point number. It always has exactly 53 bits of precision. It does not have a precision that can be exactly represented in "decimal places" -- as the phrase implies, "decimal places" only measures precision relative to the base-10 number system.
If what you want is simply to change the precision the value is displayed with, then you can use one of the ToString() overloads to do that, as other answers here point out.
Remember too that double is an approximation of a continuous real value; that is, a value where there is no implied discrete quantization (although the values are in fact quantized to 53 bits). If what you are trying to represent is a discrete value, where there is an implicit quantization (for example, if you are working with currency values, where there is a significant difference between 0.1 and 0.099999999999999) then you should probably be using a data type that is based on integer semantics -- either a fixed-point representation using a scaled integer or something like the .NET decimal type.
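For example, decimal's integer-based semantics avoid the classic binary rounding artifact:
decimal exact = 0.1m + 0.2m;  // exactly 0.3 -- decimal digits are stored directly
double approx = 0.1 + 0.2;    // 0.30000000000000004 -- nearest representable double
Console.WriteLine(exact == 0.3m); // True
Console.WriteLine(approx == 0.3); // False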
This is not a question of storing, it is a question of formatting.
This code produces 1.00:
double x = 1.00d;
Console.WriteLine("{0:0.00}", Math.Round(x, 2, MidpointRounding.AwayFromZero));
The decimal data type has a concept of variable precision. The double data type does not. Its precision is always 53 bits.
The default string representation of a double rounds it slightly to minimize odd-looking values like 0.999999999999, and then drops any trailing zeros. As other answers note, you can change that behavior by using one of the type's ToString overloads.
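For example (default output shown for .NET Framework; .NET Core 3.0 and later print the shortest round-trippable string instead):
double d = 1.0 / 3.0;
Console.WriteLine(d);                 // 0.333333333333333 -- trimmed to 15 digits
Console.WriteLine(d.ToString("G17")); // 0.33333333333333331 -- the full internal value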
I am trying to convert "12345678.12345678" to a double, but Double.Parse changes it to 12345678.123457. The same thing happens when I use Decimal instead of double:
decimal check = Decimal.Parse("12345678.12345678", NumberStyles.AllowDecimalPoint);//returns 12345678.123457
double check1 = (Double)check; //returns 12345678.123457
Floating point arithmetic with double precision values inherently has finite precision. There only are 15-16 significant decimal digits of information in a double precision value. The behaviour you see is exactly to be expected.
The closest representable double precision value to 12345678.12345678 is 12345678.1234567798674106597900390625 which tallies with your observed behaviour.
Floating point types have only so many significant digits: 15 or 16 in the case of System.Double (the exact number varies with the value).
The documentation for System.Double covers this.
A read of What Every Computer Scientist Should Know About Floating-Point Arithmetic is worth while.
If you take a look at the page for the double datatype you'll see that the precision is 15-16 digits. You've reached the limit of the precision of the type.
I believe Decimal might be what you're looking for in this situation.
Just a quick test gave me the correct value:
double dt = double.Parse("12345678.12345678");
// Note: on .NET Core 3.0 and later, ToString() emits the shortest round-trippable
// string; older .NET Framework versions print only 15 digits by default
Console.WriteLine(dt.ToString());
There are two things going on here:
A decimal-to-double conversion is inexact: double stores a binary fraction, which does not map cleanly onto decimal digits.
Double is limited to 15-16 significant decimal digits.
See the reference for decimal to double conversion.
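A quick sketch of point 1, using the question's value (the cast from double back to decimal rounds to 15 significant digits, like Convert.ToDecimal):
decimal m = 12345678.12345678m;
double d = (double)m;               // nearest double: 12345678.1234567798674...
Console.WriteLine((decimal)d == m); // False -- the conversion was inexact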
I want to print out the string representation of a double without losing precision. Using ToString(), I get the following:
double i = 101535479557522.5;
i.ToString(); //displays 101535479557523
How do I do this in C#?
If you want it to be exactly the same, you should use i.ToString("r") (the "r" is for round-trip). You can read about the different numeric formats on MSDN.
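For this particular value, round-trip formatting gives:
double i = 101535479557522.5;
Console.WriteLine(i.ToString());    // 101535479557523 -- 15-digit default on .NET Framework
Console.WriteLine(i.ToString("r")); // 101535479557522.5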
You're running into the limits of the precision of Double. From the docs:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
If you want more precision - and particularly maintaining a decimal representation - you should look at using the decimal type instead.
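For example, decimal stores the fractional digit exactly:
decimal i = 101535479557522.5m;
Console.WriteLine(i); // 101535479557522.5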