Weird behaviour of Double.ToString() - C#

I am trying to convert the double value 9007199254740992.0 to a string.
But there seems to be a rounding error (the last 2 becomes a 0):
(9007199254740992.0).ToString("#") // Returns "9007199254740990"
(9007199254740992.0).ToString() // Returns "9.00719925474099E+15"
First I thought that maybe the number couldn't be represented as a double. But it can. This can be seen by casting it to a long and then converting it to a string.
((long)9007199254740991.0).ToString() // Returns "9007199254740991"
((long)9007199254740992.0).ToString() // Returns "9007199254740992"
Also, I found that if I use the "R" format, it works.
(9007199254740992.0).ToString("R") // Returns "9007199254740992"
Can anyone explain why ToString("#") doesn't return the double with the integer part at full precision?

As can be seen on MSDN:
By default, the return value only contains 15 digits of precision
although a maximum of 17 digits is maintained internally. If the value
of this instance has greater than 15 digits, ToString returns
PositiveInfinitySymbol or NegativeInfinitySymbol instead of the
expected number. If you require more precision, specify format with
the "G17" format specification, which always returns 17 digits of
precision, or "R", which returns 15 digits if the number can be
represented with that precision or 17 digits if the number can only be
represented with maximum precision.
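Following the quoted guidance, "G17" does recover the full integer part. (As a version note: on .NET Core 3.0 and later, the parameterless ToString() returns the shortest round-trippable string by default, so the E+15 output above is specific to .NET Framework.)
(9007199254740992.0).ToString("G17") // Returns "9007199254740992"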

Related

Convert.ToDouble in c# not giving right answer when accessing redis database

I'm trying to convert a long string of numbers into a double in order to use that as an index to a redis database (redis requires a double), but the conversion isn't correct, so the database fetch gets messed up.
Here is the code to convert the string into a double:
string start="636375529204974172";
ulong istart = Convert.ToUInt64(start); // result 636375529204974172
double dstart = Convert.ToDouble(istart); // result 6.3637552920497421E+17
As you can see, the problem starts when I convert istart to a double as dstart: some of the least significant digits get messed up, and the conversion introduces the E+17 notation.
I can't understand why. It's a WHOLE NUMBER, with no fractional part to create error, so why can't it handle it?
Does anybody know how to fix the conversion from a string to a double?
I can't understand why. It's a WHOLE NUMBER, with no fractional part to create error, so why can't it handle it?
Because it still only has 64 bits of information to play with, 52 of which are used for the mantissa, 11 of which are used for the exponent, and 1 of which is used for the sign.
Basically you shouldn't rely on more than 15 decimal digits of precision in a double. From the docs for System.Double:
All floating-point numbers also have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Double value has up to 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
The first 15 significant digits of your value are correct - nothing is going wrong here. The nearest double value to your original value is exactly 636375529204974208. There is no double that precisely represents 636375529204974172.
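A minimal sketch to check this; the commented outputs follow from the rounding described above:
ulong istart = 636375529204974172UL;
double dstart = istart; // rounds to the nearest representable double
Console.WriteLine(dstart.ToString("G17")); // 6.3637552920497421E+17
Console.WriteLine((ulong)dstart); // 636375529204974208, i.e. 36 above the original
If every digit matters, the value has to stay in a ulong, long, or decimal; an 18-digit integer simply does not fit in a double's 15-17 significant digits.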

Why do the numeric format strings in C# round the number when not using decimals (F0)?

I have encountered something very weird when it comes to the standard numeric format strings in C#. This is probably a known quirk, but I can't find anything documenting it, and I can't figure out a way to actually get what I want.
I want to take a number like 17.929333333333489 and format it with no decimal places. So I just want "17". But when I run this code:
decimal what = 17.929333333333489m;
Console.WriteLine(what.ToString("F0"));
I get this output
18
Looking at Microsoft's documentation, their examples show the same output.
https://msdn.microsoft.com/en-us/library/kfsatb94(v=vs.110).aspx
// F: -195489100.84
// F0: -195489101
// F1: -195489100.8
// F2: -195489100.84
// F3: -195489100.838
Here is a code example showing the odd issue.
http://csharppad.com/gist/d67ddf5d0535c4fe8e39
This issue is not limited to the standard format strings like "F" and "N"; it also affects custom ones like "#".
How can I use the standard "F0" formatter and not have it round my number?
From the documentation on Standard Numeric Format Strings:
xx is an optional integer called the precision specifier. The precision specifier ranges from 0 to 99 and affects the number of digits in the result. Note that the precision specifier controls the number of digits in the string representation of a number. It does not round the number itself. To perform a rounding operation, use the Math.Ceiling, Math.Floor, or Math.Round method.
When precision specifier controls the number of fractional digits in the result string, the result strings reflect numbers that are rounded away from zero (that is, using MidpointRounding.AwayFromZero).
So the documentation does indeed discuss this, and there is no apparent way to prevent rounding of the output purely through a format string.
The best I can offer is to truncate the number yourself using Math.Truncate():
decimal what = 17.929333333333489m;
decimal truncatedWhat = Math.Truncate(what);
Console.WriteLine(truncatedWhat.ToString("F0"));
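This prints 17. Separately, to see the away-from-zero midpoint rounding the docs describe, here is a minimal check (assuming decimal formatting follows the documented rule; Math.Round, by contrast, defaults to banker's rounding):
decimal half = 16.5m;
Console.WriteLine(half.ToString("F0")); // 17 - formatting rounds midpoints away from zero
Console.WriteLine(Math.Round(half)); // 16 - Math.Round rounds to even by default
Console.WriteLine(Math.Round(half, MidpointRounding.AwayFromZero)); // 17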
I believe formatting a decimal (the "m" suffix denotes a decimal literal) rounds to the nearest value at the given decimal place.
Here is what I experimented with:
decimal what = 17.429333333333489m;
Console.WriteLine(what.ToString("F0"));
Console.WriteLine(what.ToString("N0"));
Console.WriteLine(what.ToString("F1"));
Console.WriteLine(what.ToString("N1"))
17
17
17.4
17.4
If you want to get 17, I used a different approach using an int cast (which truncates toward zero) and Math.Floor:
double what = 17.929333333333489;
Console.WriteLine(string.Format("{0:0}", (int)what));
Console.WriteLine(string.Format("{0:0}", what));
Console.WriteLine(string.Format("{0:0.00}", Math.Floor(what*100)/100));
Console.WriteLine(string.Format("{0:0.00}", what));
17
18
17.92
17.93

How to always get the full number or rounded up (always) number c#

If I have a decimal value, I would like to return its whole number, or the next number up (always round up) if there is anything in the fractional part.
ie:
150.2148 ... returns 151
150.0000 ... returns 150
Which math function does this?
Math.Ceiling is what you are looking for; it has an overload that accepts decimal.
"Returns the smallest integer greater than or equal to the specified
number."
To verify:
Console.WriteLine(Math.Ceiling(150.2148M)); //prints 151
Console.WriteLine(Math.Ceiling(150.0000M)); //prints 150
Just in case:
150.0000M is a decimal literal with value 150.0000; the M suffix in C# denotes the decimal type. It's not the most common literal type, so this note can be useful.
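One caveat: "always round up" here means toward positive infinity, which matters for negative values:
Console.WriteLine(Math.Ceiling(-150.2148M)); //prints -150, not -151
Console.WriteLine(Math.Floor(-150.2148M)); //prints -151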

Why does Convert.ToDecimal(Double) round to 15 significant figures?

I have a double with 17 digits after the decimal point, i.e.:
double myDouble = 0.12345678901234567;
If I convert this to a decimal like this:
decimal myDecimal = Convert.ToDecimal(myDouble);
then the value of myDecimal is rounded, as per the Convert.ToDecimal documentation, to 15 significant digits (i.e. 0.123456789012346). My question is, why is this rounding performed?
I understand that if my original number could be accurately represented in base 10 and I was trying to store it as a double, then we could only have confidence in the first 15 digits. The final two digits would be subject to rounding error. But, that's a base 10 biased point of view. My number may be more accurately represented by a double and I wish to convert it to decimal while preserving as much accuracy as possible.
Shouldn't Convert.ToDecimal aim to minimise the difference between myDouble and (double)Convert.ToDecimal(myDouble)?
From the documentation of Double:
A Double value has up to 15 decimal digits of precision, although a
maximum of 17 digits is maintained internally
So, as the double value itself carries at most 15 reliable decimal digits, converting it to Decimal will result in a Decimal value with 15 significant figures.
The behavior of the rounding guarantees that conversion of any Decimal which has at most fifteen significant figures to double and back to Decimal will yield the original value unchanged. If values were rounded to sixteen figures rather than fifteen, such a guarantee would not only fail to hold for numbers with sixteen figures, but it wouldn't even hold for much shorter values. For example, the closest Double value to 9000.04 is approximately 9000.040000000000873115; rounding that to sixteen figures would yield 9000.040000000001.
The choice of rounding one should use depends upon whether one regards the best Decimal representation of the double value 9000.04 as being 9000.04m, 9000.040000000001m, 9000.0400000000008731m, or perhaps something else. Microsoft probably decided that any representation other than 9000.04m would be confusing.
The following is from the documentation of the method in question.
http://msdn.microsoft.com/en-us/library/a69w9ca0(v=vs.110).aspx
"The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
Every terminating binary fraction is exactly representable as a decimal fraction, so the minimum possible difference for a finite number is always 0. The IEEE 754 64-bit representation of your number is exactly equal to 0.1234567890123456634920984242853592149913311004638671875
Every conversion from binary floating point to decimal or decimal string must embody some decision about rounding. In theory, they could preserve all the digits, but that would result in some very long outputs with most of the digits having little meaning.
One option, used in Java's Double.toString, is to stop at the shortest decimal representation that would convert back to the original double.
Most set some fairly arbitrary limit. 15 significant digits preserves most meaningful digits.
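If the goal is to keep more than 15 digits, one workaround (a sketch, not what Convert.ToDecimal itself does) is to go through the 17-digit string form:
double myDouble = 0.12345678901234567;
Console.WriteLine(Convert.ToDecimal(myDouble)); // 0.123456789012346 (15 significant digits)
Console.WriteLine(myDouble.ToString("G17")); // 0.12345678901234566
Console.WriteLine(decimal.Parse(myDouble.ToString("G17"), CultureInfo.InvariantCulture)); // 0.12345678901234566
CultureInfo.InvariantCulture is used so the parse does not depend on the local decimal separator.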

When does Double.ToString() return a value in scientific notation?

I assume that it has something to do with the number of leading or trailing zeroes, but I can't find anything on MSDN that gives me a concrete answer.
At what point does Double.ToString(CultureInfo.InvariantCulture) start to return values in scientific notation?
From the docs for Double.ToString(IFormatProvider):
This instance is formatted with the general numeric format specifier ("G").
From the docs for the General Numeric Format Specifier:
Fixed-point notation is used if the exponent that would result from expressing the number in scientific notation is greater than -5 and less than the precision specifier; otherwise, scientific notation is used. The result contains a decimal point if required, and trailing zeros after the decimal point are omitted. If the precision specifier is present and the number of significant digits in the result exceeds the specified precision, the excess trailing digits are removed by rounding.
However, if the number is a Decimal and the precision specifier is omitted, fixed-point notation is always used and trailing zeros are preserved.
The default precision specifier for Double is documented to be 15.
Although earlier in the table, it's worded slightly differently:
Result: The most compact of either fixed-point or scientific notation.
I haven't worked out whether those two are always equivalent for a Double value...
EDIT: As per Abel's comment:
Also, it is not always the most compact notation. 0.0001 is longer than 1E-04, but the first is output. The MS docs are not complete here.
That fits in with the more detailed description, of course. (As the exponent required is greater than -5 and less than 15.)
From the documentation it follows that the most compact form to represent the number will be chosen.
I.e., when you do not specify a format string, the default is the "G" format string. From the specification of the G format string follows:
Result: The most compact of either fixed-point or scientific notation.
The default number of digits with this specifier is 15. That means that a number whose rounded 15-digit representation is short (like 0.1 in the example of harriyott) will be displayed as fixed point, unless the exponential notation is more compact.
When there are more digits, it will, by default, display all these digits (up to 15) and choose exponential notation once that is shorter.
Putting this together:
?(1.0/7.0).ToString()
"0,142857142857143" // 15 digits
?(10000000000.0/7.0).ToString()
"1428571428,57143" // 15 significant digits, E-notation not shorter
?(100000000000000000.0/7.0).ToString()
"1,42857142857143E+16" // 15 sign. digits, above range for non-E-notation (15)
?(0.001/7.0).ToString()
"0,000142857142857143" // non E-notation is shorter
?(0.0001/7.0).ToString()
"1,42857142857143E-05" // E-notation shorter
And, of interest:
?(1.0/2.0).ToString()
"0,5" // exact representation
?(1.0/5.0).ToString()
"0,2" // rounded, zeroes removed
?(1.0/2.0).ToString("G20")
"0,5" // exact representation
?(1.0/5.0).ToString("G20")
"0,20000000000000001" // unrounded
This is to show what happens behind the scenes and why 0.2 is written as 0.2, not 0,20000000000000001, which it actually is. By default, 15 significant digits are shown. When there are more digits (and there always are, except for certain special numbers), these are rounded the normal way. After rounding, redundant zeroes are removed.
Note that a double has a precision of 15 or 16 significant digits, depending on the number. So, by showing 15 digits, what you see is a correctly rounded number and a compact representation of the double.
It uses the formatter "G" (for "General"), which is specified to use "the most compact of either fixed-point or scientific notation" http://msdn.microsoft.com/en-us/library/dwhawy9k.aspx
So since the fixed-point 0.00001 takes more characters than 1E-05, it will favour the scientific notation. I suppose if they're of equal length, it favours fixed-point.
I've just tried this with a loop:
double a = 1;
for (var i = 1; i < 10; i++)
{
    a = a / 10;
    Console.WriteLine(a.ToString(CultureInfo.InvariantCulture));
}
The output was:
0.1
0.01
0.001
0.0001
1E-05
1E-06
1E-07
1E-08
1E-09
