I'm working on a C# JSON library (Unity3D compatible, so .NET 2.0) and I'm having precision problems. First, I have this logic to parse number strings:
...
string jsonPart = "-1.7555215491128452E-19";
long longValue = 0;
if (long.TryParse(jsonPart, NumberStyles.Any, CultureInfo.InvariantCulture, out longValue))
{
    if (longValue > int.MaxValue || longValue < int.MinValue)
    {
        jsonPartValue = new JsonBasic(longValue);
    }
    else
    {
        jsonPartValue = new JsonBasic((int)longValue);
    }
}
else
{
    decimal decimalValue = 0;
    if (decimal.TryParse(jsonPart, NumberStyles.Any, CultureInfo.InvariantCulture, out decimalValue))
    {
        jsonPartValue = new JsonBasic(decimalValue);
    }
}
...
The problem is that the decimal type is not always the best type for very large or very small numbers. I have an output log to show you the problem (using .ToString()):
String = "-1.7555215491128452E-19"
Float Parsed : -1.755522E-19
Double parsed : -1.75552154911285E-19
Decimal Parsed : -0.0000000000000000001755521549
But on the other hand, in these examples the decimal type gives the right result:
String = "0.1666666666666666666"
Float Parsed : 0.1666667
Double parsed : 0.166666666666667
Decimal Parsed : 0.1666666666666666666
String = "-1.30142114406914976E17"
Float Parsed : -1.301421E+17
Double parsed : -1.30142114406915E+17
Decimal Parsed : -130142114406914976
I suppose there are many other cases that tip the balance toward one type or the other.
Is there any smart way to parse these strings while losing the minimum precision?
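One possible approach (a sketch, not part of the original library; the helper name is mine): parse with both decimal and double, then keep the decimal only if its string form still round-trips to the same double, which indicates nothing was truncated at the edge of decimal's range.

```csharp
using System;
using System.Globalization;

class NumberParseSketch
{
    // Hypothetical helper: parse with both types and keep the one that
    // round-trips the value, preferring decimal when both survive.
    public static object ParseWithMinimalLoss(string jsonPart)
    {
        decimal dec;
        bool hasDec = decimal.TryParse(jsonPart, NumberStyles.Float,
                                       CultureInfo.InvariantCulture, out dec);
        double dbl;
        bool hasDbl = double.TryParse(jsonPart, NumberStyles.Float,
                                      CultureInfo.InvariantCulture, out dbl);

        if (hasDec && hasDbl)
        {
            // If the decimal's text re-parses to the same double, decimal
            // lost nothing the double could see; otherwise decimal truncated.
            double decAsDbl = double.Parse(
                dec.ToString(CultureInfo.InvariantCulture),
                NumberStyles.Float, CultureInfo.InvariantCulture);
            return decAsDbl == dbl ? (object)dec : (object)dbl;
        }
        return hasDec ? (object)dec : hasDbl ? (object)dbl : null;
    }

    static void Main()
    {
        // decimal survives the round-trip, so it is kept:
        Console.WriteLine(ParseWithMinimalLoss("0.1666666666666666666") is decimal);
        // decimal truncates this range-edge value, so double is kept:
        Console.WriteLine(ParseWithMinimalLoss("-1.7555215491128452E-19") is double);
    }
}
```

This keeps decimal's extra significant digits where they help (e.g. 0.1666666666666666666) while falling back to double for values beyond decimal's scale.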
The difference you are seeing is because, although decimal can hold up to 28 or 29 digits of precision compared to double's 15 or 16 digits, its range is much smaller than double's.
A decimal has a range of (-7.9 x 10^28 to 7.9 x 10^28) / (10^(0 to 28))
A decimal stores ALL the digits, including the zeros that follow the decimal point in a number such as 0.00000001; that is, it does not store numbers in exponential format.
A double has a range of ±5.0 × 10^−324 to ±1.7 × 10^308
A double can store a number using exponential format which means it doesn't have to store the leading zeroes in a number like 0.0000001.
The consequence of this is that for numbers that are at the edges of the decimal range, it actually has less precision than a double.
For example, consider the number -1.7555215491128452E-19:
Converting that to non-exponential notation you get:
-0.00000000000000000017555215491128452
         1         2         3
12345678901234567890123456789012345
You can see that this number has 35 decimal digits, which exceeds decimal's 28-29 digits of precision.
As you have observed, when you print that number out after storing it in a decimal, you get:
-0.0000000000000000001755521549
         1         2
1234567890123456789012345678
which gives you only 28 decimal places, in line with decimal's maximum scale per Microsoft's specification.
A double, however, stores its numbers using exponential notation which means that it doesn't store all the leading zeroes, which allows it to store that particular number with greater precision.
For example, a double stores -0.00000000000000000017555215491128452 as an exponential number with 15 or 16 digits of precision.
If you take 15 digits of precision from the above number you get:
-0.000000000000000000175552154911285
         1
123456789012345
which is indeed what is printed out if you do this:
double d = -1.7555215491128452E-19;
Console.WriteLine(d.ToString("F35"));
Related
int trail = 14;
double mean = 14.00000587000000;
double sd = 4.47307944700000;
double zscore = double.MinValue;
zscore = (trail - mean) / sd; //zscore at this point is exponent value -1.3122950464645662E-06
zscore = Math.Round(zscore, 14); //-1.31229505E-06
Math.Round() also keeps the exponent notation. Should zscore.ToString("F14") be used instead of the Math.Round() function to convert it to a non-exponent value? Please explain.
These are completely independent concerns.
Math.Round will actually return a new value, rounded to the specified number of decimal places (or at least, as near as one can get with floating point).
You can reuse this result value anywhere, and show it with 16 decimals precision if you want, but it's not supposed to be the same as the original one.
The fact that it is displayed with exponent notation or not has nothing to do with Round.
When you use ToString("F14") on a number, this is a display specification only, and does not modify the underlying value in any way. The underlying value might be a number that would or would not display in exponential notation otherwise, and may or may not actually have 14 significant digits.
It simply forces the number to be displayed as a full decimal without exponent notation, with the number of digits specified. So it seems to be what you actually want.
Examples (executable online here: http://rextester.com/PZXDES55622):
double num = 0.00000123456789;
Console.WriteLine("original :");
Console.WriteLine(num.ToString());
Console.WriteLine(num.ToString("F6"));
Console.WriteLine(num.ToString("F10"));
Console.WriteLine(num.ToString("F14"));
Console.WriteLine("rounded to 6");
double rounded6 = Math.Round(num, 6);
Console.WriteLine(rounded6.ToString());
Console.WriteLine(rounded6.ToString("F6"));
Console.WriteLine(rounded6.ToString("F10"));
Console.WriteLine(rounded6.ToString("F14"));
Console.WriteLine("rounded to 10");
double rounded10 = Math.Round(num, 10);
Console.WriteLine(rounded10.ToString());
Console.WriteLine(rounded10.ToString("F6"));
Console.WriteLine(rounded10.ToString("F10"));
Console.WriteLine(rounded10.ToString("F14"));
will output:
original :
1,23456789E-06
0,000001
0,0000012346
0,00000123456789
rounded to 6
1E-06
0,000001
0,0000010000
0,00000100000000
rounded to 10
1,2346E-06
0,000001
0,0000012346
0,00000123460000
I have this test code:
class Test
{
static void Main()
{
decimal m = 1M / 6M;
double d = 1.0 / 6.0;
decimal notQuiteWholeM = m + m + m + m + m + m; // 1.0000000000000000000000000002M
double notQuiteWholeD = d + d + d + d + d + d; // 0.99999999999999989
Console.WriteLine(notQuiteWholeM); // Prints: 1.0000000000000000000000000002
Console.WriteLine(notQuiteWholeD); // Prints: 1.
Console.WriteLine(notQuiteWholeM == 1M); // False
Console.WriteLine(notQuiteWholeD < 1.0); // Prints: True. Why?
Console.ReadKey();
}
}
Why does this line print 1?
Console.WriteLine(notQuiteWholeD); // Prints: 1
And why does this one print True?
Console.WriteLine(notQuiteWholeD < 1.0); // Prints: True
Is there an automatic rounding process? What can I do to print the correct/calculated value?
[Note: I found this example code in C# 5.0 in a Nutshell, page 30: Real Number Rounding Error.]
Thanks in advance.
Not quite reading your question in the same way as the other two answers. The gist of it: Does the formatted string representation of a double "round" in C#?
Yes.
Internally, a double is represented with full IEEE 754 binary precision, which corresponds to 15-17 significant decimal digits, which is why:
notQuiteWholeD < 1.0 == true // because notQuiteWholeD = 0.99999999999999989
However, when formatting it as a string, by default it will use 15 digit precision - equivalent to:
String.Format("{0:G15}", notQuiteWholeD) // outputs "1"
To get all the digits of the full internal representation, you can use:
Console.WriteLine("{0:G17}", notQuiteWholeD);
Or:
Console.WriteLine("{0:R}", notQuiteWholeD);
Both, in this case, will output "0,99999999999999989".
The former will always use 17 digit precision. The latter ("roundtrip precision") will use 15 digits if that's enough precision for the following to be true, otherwise it will use 17:
Double.Parse(String.Format("{0:G15}", notQuiteWholeD)) == notQuiteWholeD
Bonus Example:
... of when G17 and R differ:
Console.WriteLine("{0:G17}", 1.0000000000000699); // outputs "1.0000000000000699"
Console.WriteLine("{0:R}", 1.0000000000000699); // outputs "1.00000000000007"
1.0000000000000699 (17 significant digits) can be represented accurately enough for a roundtrip using only 15 significant digits. In other words, the double representation of 1.00...07 is the same as for 1.00...0699.
So 1.00...07 (15 digits) is a shorter input to get the exact same internal (17 digit) representation. That means R will round it to 15 digits, while G17 will keep all the digits of the internal representation.
Maybe it's clearer when realizing that this:
Console.WriteLine("{0:G17}", 1.00000000000007); // outputs "1.0000000000000699"
Console.WriteLine("{0:R}", 1.00000000000007); // outputs "1.00000000000007"
... gives the exact same results.
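The selection rule described above can be sketched as follows (a hypothetical helper, not the actual BCL implementation of "R"):

```csharp
using System;
using System.Globalization;

class RoundTripSketch
{
    // Approximates how "R" chooses between 15 and 17 digits for a double:
    // use 15 digits if they already round-trip, otherwise fall back to 17.
    public static string ShortestRoundTrip(double value)
    {
        string g15 = value.ToString("G15", CultureInfo.InvariantCulture);
        if (double.Parse(g15, CultureInfo.InvariantCulture) == value)
            return g15; // 15 digits are enough to recover the exact double
        return value.ToString("G17", CultureInfo.InvariantCulture);
    }

    static void Main()
    {
        Console.WriteLine(ShortestRoundTrip(1.0000000000000699)); // "1.00000000000007"
        Console.WriteLine(ShortestRoundTrip(0.1 + 0.2)); // needs all 17 digits
    }
}
```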
Decimal is stored in terms of base 10. Double is stored in terms of base 2. Neither of those bases can exactly represent 1 / 6 with a finite representation.
That explains all the output except Console.WriteLine(notQuiteWholeD). You get "1" for output, even though the actual value stored is less than 1. Since the output is in base 10, it has to convert from base 2. Part of the conversion includes rounding.
As we know, 1/6 = 0.1666... (repeating). Neither decimal nor double can represent repeating numbers exactly; the value is rounded when it is assigned. Since they are built on different backing representations, they cover different sets of representable numbers and round differently in some cases.
For this code:
Console.WriteLine(notQuiteWholeD < 1.0); // Prints: True. Why?
Since notQuiteWholeD is 0.99999999999999989 it prints true.
I'm not going to cover how the double and decimal work behind the scenes but here is some reading material if you're interested.
Double-precision floating-point format
SO - Behind the scenes, what's happening with decimal value type in C#/.NET?
Decimal floating point in .NET
When converting a "high" precision Double to a Decimal, I lose precision with Convert.ToDecimal or a cast to (Decimal), due to rounding.
Example :
double d = -0.99999999999999956d;
decimal result = Convert.ToDecimal(d); // Result = -1
decimal result = (Decimal)(d); // Result = -1
The Decimal value returned by Convert.ToDecimal(double) contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
So, in order to keep my precision, I have to convert my double to a string and then call Convert.ToDecimal(String):
decimal result = System.Convert.ToDecimal(d.ToString("G20")); // Result = -0.99999999999999956d
This method works, but I would like to avoid using an intermediate string. Is there a way to convert a Double to a Decimal without rounding after 15 digits?
One possible solution is to decompose d as the exact sum of n doubles, the last of which is small and contains all the trailing significant digits that you desire when converted to decimal, and the first (n-1) of which convert exactly to decimal.
For the source double d between -1.0 and 1.0:
decimal t = 0M;
bool b = d < 0;
if (b) d = -d;
if (d >= 0.5) { d -= 0.5; t = 0.5M; }
if (d >= 0.25) { d -= 0.25; t += 0.25M; }
if (d >= 0.125) { d -= 0.125; t += 0.125M; }
if (d >= 0.0625) { d -= 0.0625; t += 0.0625M; }
t += Convert.ToDecimal(d);
if (b) t = -t;
Test it on ideone.com.
Note that the operations d -= are exact, even if C# computes the binary floating-point operations at a higher precision than double (which it allows itself to do).
This is cheaper than a conversion from double to string, and provides a few additional digits of accuracy in the result (four bits of accuracy for the above four if-then-elses).
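Wrapped in a hypothetical helper (the name and the Main harness are mine) so the gain is easy to check; the input is assumed to lie in (-1, 1) as in the snippet above:

```csharp
using System;

class Demo
{
    // The decomposition above as a method; input must be in (-1, 1).
    public static decimal SplitConvert(double d)
    {
        decimal t = 0M;
        bool negative = d < 0;
        if (negative) d = -d;
        // Each subtraction below is exact in binary floating point.
        if (d >= 0.5)    { d -= 0.5;    t = 0.5M; }
        if (d >= 0.25)   { d -= 0.25;   t += 0.25M; }
        if (d >= 0.125)  { d -= 0.125;  t += 0.125M; }
        if (d >= 0.0625) { d -= 0.0625; t += 0.0625M; }
        t += Convert.ToDecimal(d); // only the small residue is rounded
        return negative ? -t : t;
    }

    static void Main()
    {
        double d = -0.99999999999999956d;
        Console.WriteLine(Convert.ToDecimal(d)); // rounds to 15 digits: -1
        Console.WriteLine(SplitConvert(d));      // keeps extra digits
    }
}
```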
Remark: if C# did not allow itself to do floating-point computations at a higher precision, a good trick would have been to use Dekker splitting to split d into two values d1 and d2 that would convert each exactly to decimal. Alas, Dekker splitting only works with a strict interpretation of IEEE 754 multiplication and addition.
Another idea is to use C#'s version of frexp to obtain the significand s and exponent e of d, and to compute (Decimal)((long) (s * 4503599627370496.0d)) * <however one computes 2^e in Decimal>.
There are two approaches: one works for values up to 2^63, the other for values larger than 2^53.
Split smaller values into whole-number and fractional parts. The whole-number part may be precisely cast to long and then Decimal [note that a direct cast to Decimal may not be precise!] The fractional part may be precisely multiplied by 9007199254740992.0 (2^53), converted to long and then Decimal, and then divided by 9007199254740992.0m. Adding the result of that division to the whole-number part should yield a Decimal value which is within one least-significant-digit of being correct [it may not be precisely rounded, but will still be far better than the built-in conversions!]
For larger values, multiply by (1.0/281474976710656.0) (2^-48), take the whole-number part of that result, multiply it back by 281474976710656.0, and subtract it from the original result. Convert the whole-number results from the division and the subtraction to Decimal (they should convert precisely), multiply the former by 281474976710656m, and add the latter.
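A sketch of the first approach for values of magnitude below 2^53 (the helper name and harness are mine, not from the answer; per the answer the result is within one least-significant digit rather than perfectly rounded):

```csharp
using System;

class SplitApproach
{
    // Split into whole and fractional parts; route each through long so
    // the cast to Decimal is exact, as described above.
    public static decimal ToDecimalPrecise(double d)
    {
        if (d < 0) return -ToDecimalPrecise(-d);
        double whole = Math.Truncate(d);
        double frac = d - whole; // exact for |d| < 2^53
        decimal result = (decimal)(long)whole;
        // Scale the fraction by 2^53 so it becomes (nearly) a whole number,
        // convert exactly via long, then scale back down in decimal.
        long scaled = (long)(frac * 9007199254740992.0);
        result += (decimal)scaled / 9007199254740992.0M;
        return result;
    }

    static void Main()
    {
        double d = 123456789.99999999;
        Console.WriteLine((decimal)d);          // built-in conversion
        Console.WriteLine(ToDecimalPrecise(d)); // close to the exact double value
    }
}
```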
I am facing a problem when I try to convert a decimal? to a string. The scenario is:
decimal? decimalValue = .1211m;
string value = (decimalValue * 100).ToString();
Current Result : value = 12.1100
Expected Result : value = 12.11
Please let me know, what could be reason for this.
Decimal preserves any trailing zeroes in a Decimal number. If you want two decimal places instead:
decimal? decimalValue = .1211m;
string value = ((decimal)(decimalValue * 100)).ToString("#.##")
http://msdn.microsoft.com/en-us/library/0c899ak8.aspx
or
string value = ((decimal)(decimalValue * 100)).ToString("N2")
http://msdn.microsoft.com/en-us/library/dwhawy9k.aspx
From System.Decimal:
A decimal number is a floating-point value that consists of a sign, a numeric value where each digit in the value ranges from 0 to 9, and a scaling factor that indicates the position of a floating decimal point that separates the integral and fractional parts of the numeric value.
The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96 - 1) is equal to MinValue, and 2^96 - 1 is equal to MaxValue.
The scaling factor also preserves any trailing zeroes in a Decimal number. Trailing zeroes do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeroes can be revealed by the ToString method if an appropriate format string is applied.
Remarks:
The result of the multiplication needs to be cast to decimal, because Nullable<decimal>.ToString has no overload that takes a format string.
As Chris pointed out, you need to handle the case where the Nullable<decimal> is null. One way is the null-coalescing operator (note the parentheses around the coalescing expression; without them, ?? binds more loosely than *):
((decimal)((decimalValue ?? 0) * 100)).ToString("N2")
This article from Jon Skeet is worth reading:
Decimal floating point in .NET (search for "keeping zeroes" if you're impatient)
Since you are using Nullable<T> for your decimal, the Nullable<T>.ToString() method has no overload that takes a format string.
Instead, you can explicitly cast it to decimal and use the ToString(string) overload for formatting.
Just use "0.00" format in your .ToString() method.
decimal? decimalValue = .1211M;
string value = ((decimal)(decimalValue * 100)).ToString("0.00");
Console.WriteLine(value);
Output will be;
12.11
Here is a DEMO.
As an alternative, you can use Nullable<T>.Value without any conversion, like:
string value = (decimalValue * 100).Value.ToString("0.00");
Check out for more information from Custom Numeric Format Strings
Alternatively, you can specify the format "F2", like so: string val = decVal.ToString("F2") as this specifies 2 decimal places.
Use the fixed-point ("F") format specifier:
string value = (decimalValue * 100).Value.ToString("F");
The default precision comes from the NumberFormatInfo.NumberDecimalDigits property, which by default is 2. So if you don't specify a digit after "F", you get two decimal places.
F0 - No decimal places
F1 - One decimal place
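For example (using InvariantCulture so the separator is a period; "F" alone picks up NumberDecimalDigits from the format info):

```csharp
using System;
using System.Globalization;

class FixedPointDemo
{
    static void Main()
    {
        decimal v = 12.11m;
        CultureInfo inv = CultureInfo.InvariantCulture;
        Console.WriteLine(v.ToString("F", inv));  // "12.11" (default 2 digits)
        Console.WriteLine(v.ToString("F0", inv)); // "12"
        Console.WriteLine(v.ToString("F1", inv)); // "12.1"
        Console.WriteLine(v.ToString("F4", inv)); // "12.1100"
    }
}
```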
In case you do not want to limit the output to a certain number of decimal digits:
decimal? decimalValue = .1211m;
string value = decimalValue == null
    ? "0"
    : decimalValue == 0
        ? "0"
        : (decimalValue * 100).ToString().TrimEnd('0');
This will trim any trailing zeroes from the string, and also returns "0" if decimalValue is null. If the value is 0 then "0" is returned without trimming.
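Note that blindly calling TrimEnd('0') is only safe when the string contains a decimal separator; "1200" would become "12". A hypothetical helper (the name is mine) that guards against this, assuming an invariant-culture "." separator:

```csharp
using System;

class TrimHelper
{
    // Trim trailing zeroes only after a decimal point, then drop a bare point.
    public static string TrimTrailingZeros(string s)
    {
        return s.Contains(".") ? s.TrimEnd('0').TrimEnd('.') : s;
    }

    static void Main()
    {
        Console.WriteLine(TrimTrailingZeros("12.1100")); // "12.11"
        Console.WriteLine(TrimTrailingZeros("12.00"));   // "12"
        Console.WriteLine(TrimTrailingZeros("1200"));    // "1200" (unchanged)
    }
}
```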
String.Format("{0:0.00}", decimalValue * 100);
You can use String.Format() as an alternative to .ToString("0.00").
Since decimal? does not have a ToString(string format) overload, the easiest way is to use String.Format instead which will provide consistent results with the null case for decimalValue as well (resulting in an empty string) when compared to your original code:
string value = String.Format("{0:#.##}", decimalValue * 100);
But there are some other considerations for other numbers that you weren't clear on.
If you have a number that does not produce a value greater than 0, does it show a leading zero? That is, for 0.001211, does it display as 0.12 or .12? If you want the leading zero, use this instead (notice the change from #.## to 0.##):
string value = String.Format("{0:0.##}", decimalValue * 100);
If you have more than 2 significant decimal places, do you want those displayed? So if you had .12113405 would it display as 12.113405? If so use:
string value = String.Format("{0:#.############}", decimalValue * 100);
(honestly, I think there must be a better formatting string than that, especially as it only supports 12 decimal places)
And of course if you want both leading zeros and multiple decimal places, just combine the two above:
string value = String.Format("{0:0.############}", decimalValue * 100);