I am trying to convert "12345678.12345678" to double, but Double.Parse changes it to 12345678.123457. The same thing happens when I use Decimal instead of double:
decimal check = Decimal.Parse("12345678.12345678", NumberStyles.AllowDecimalPoint); // returns 12345678.123457
double check1 = (Double)check; // returns 12345678.123457
Floating point arithmetic with double precision values inherently has finite precision. There are only 15-16 significant decimal digits of information in a double precision value. The behaviour you see is exactly what is to be expected.
The closest representable double precision value to 12345678.12345678 is 12345678.1234567798674106597900390625 which tallies with your observed behaviour.
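If you want to see what is actually stored, the round-trip formats expose the extra digits. A minimal sketch (the exact output depends on the runtime; the comments show approximate values):
double check = double.Parse("12345678.12345678", CultureInfo.InvariantCulture);
Console.WriteLine(check);                 // ~15 digits on .NET Framework; newer .NET prints the shortest round-trip form
Console.WriteLine(check.ToString("G17")); // ~12345678.123456780 — enough digits to round-trip the binary value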
Floating point types have only so many significant digits: 15 or 16 in the case of System.Double (the exact number varies with the value).
The documentation for System.Double covers this.
A read of What Every Computer Scientist Should Know About Floating-Point Arithmetic is worthwhile.
If you take a look at the page for the double datatype you'll see that the precision is 15-16 digits. You've reached the limit of the precision of the type.
I believe Decimal might be what you're looking for in this situation.
Just a quick test gave me the correct value.
double dt = double.Parse("12345678.12345678");
Console.WriteLine(dt.ToString());
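For what it's worth, decimal really can hold every digit of this particular string, since it carries 28-29 significant digits. A quick check (sketch):
decimal check = decimal.Parse("12345678.12345678", CultureInfo.InvariantCulture);
Console.WriteLine(check); // 12345678.12345678 — all 16 significant digits preserved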
There are two things going on here:
A decimal-to-double conversion is inexact: double is a binary type, and its precision does not map cleanly onto decimal digits.
Double is limited to 15-16 significant decimal digits.
See the reference documentation for the decimal-to-double conversion; a short sketch of the loss is below.
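A minimal sketch of that conversion loss (output values approximate):
decimal m = 12345678.12345678m;       // exact in decimal
double d = (double)m;                 // rounded to the nearest double
Console.WriteLine(m);                 // 12345678.12345678
Console.WriteLine(d.ToString("G17")); // ~12345678.123456780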
I have a double with 17 digits after the decimal point, i.e.:
double myDouble = 0.12345678901234567;
If I convert this to a decimal like this:
decimal myDecimal = Convert.ToDecimal(myDouble);
then the value of myDecimal is rounded, as per the Convert.ToDecimal documentation, to 15 significant digits (i.e. 0.123456789012346). My question is, why is this rounding performed?
I understand that if my original number could be accurately represented in base 10 and I was trying to store it as a double, then we could only have confidence in the first 15 digits. The final two digits would be subject to rounding error. But, that's a base 10 biased point of view. My number may be more accurately represented by a double and I wish to convert it to decimal while preserving as much accuracy as possible.
Shouldn't Convert.ToDecimal aim to minimise the difference between myDouble and (double)Convert.ToDecimal(myDouble)?
From the documentation of Double:
A Double value has up to 15 decimal digits of precision, although a
maximum of 17 digits is maintained internally
So, as the double value itself carries at most 15 reliable decimal digits, converting it to Decimal results in a Decimal value with 15 significant figures.
The behavior of the rounding guarantees that conversion of any Decimal which has at most fifteen significant figures to double and back to Decimal will yield the original value unchanged. If values were rounded to sixteen figures rather than fifteen, such a guarantee would fail to hold not only for numbers with sixteen figures, but even for much shorter values. For example, the closest Double value to 9000.04 is approximately 9000.040000000000873115; rounding that to sixteen figures would yield 9000.040000000001.
The choice of rounding one should use depends upon whether one regards the best Decimal representation of the double value 9000.04 as being 9000.04m, 9000.040000000001m, 9000.0400000000008731m, or perhaps something else. Microsoft probably decided that any representation other than 9000.04m would be confusing.
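You can see that guarantee in action with a short sketch (the digits in the comments are approximate):
double d = 9000.04;
Console.WriteLine(d.ToString("G17")); // ~9000.0400000000009 — the closest double
decimal m = Convert.ToDecimal(d);
Console.WriteLine(m);                 // 9000.04 — rounded to 15 significant figures
Console.WriteLine((double)m == d);    // True — the round trip gives back the same double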
The following is from the documentation of the method in question.
http://msdn.microsoft.com/en-us/library/a69w9ca0(v=vs.110).aspx
"The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
Every terminating binary fraction is exactly representable as a decimal fraction, so the minimum possible difference for a finite number is always 0. The IEEE 754 64-bit representation of your number is exactly equal to 0.1234567890123456634920984242853592149913311004638671875
Every conversion from binary floating point to decimal or decimal string must embody some decision about rounding. In theory, they could preserve all the digits, but that would result in some very long outputs with most of the digits having little meaning.
One option, used in Java's Double.toString, is to stop at the shortest decimal representation that would convert back to the original double.
Most set some fairly arbitrary limit. 15 significant digits preserves most meaningful digits.
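In C#, the closest analogue to Java's shortest-form behaviour is the round-trip format. A sketch (newer .NET versions print the shortest round-trip form by default):
double myDouble = 0.12345678901234567;
Console.WriteLine(myDouble.ToString());      // ~15 digits on .NET Framework; shortest round-trip on newer .NET
Console.WriteLine(myDouble.ToString("R"));   // round-trip form, e.g. 0.12345678901234566
Console.WriteLine(myDouble.ToString("G17")); // 17 digits: 0.12345678901234566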
While trying to parse a value to float with float.TryParse(value, NumberStyles.Float, CultureInfo.InvariantCulture, out fValue),
the value 6666.77777 is rounded to 6666.778.
Can anyone help? I don't want my value to be rounded.
float only has around 6-7 significant decimal digits. Note that digits before the decimal point count too. double has higher precision in that regard (15-16). The PowerShell examples below are run under a culture that uses ',' as the decimal separator:
PS> [float]::Parse('6666,77777')
6666.778
PS> [double]::Parse('6666,77777')
6666.77777
But as others noted, this is just an approximate value because it cannot be represented exactly in base 2. If you need decimal exactness (e.g. for money values) then use a decimal. For most other things binary floating point (i.e. float and double) should be fine.
Why don't you use Double.TryParse or Decimal.TryParse to support higher precision:
float: approximately ±1.5 × 10⁻⁴⁵ to ±3.4 × 10³⁸ with 7 significant figures
double: approximately ±5.0 × 10⁻³²⁴ to ±1.7 × 10³⁰⁸ with 15 or 16 significant figures
decimal: approximately ±1.0 × 10⁻²⁸ to ±7.9 × 10²⁸ with 28 or 29 significant figures
Try this snippet instead:
double fValue;
double.TryParse("6666.77777", NumberStyles.Double, CultureInfo.InvariantCulture, out fValue);
OR
decimal fValue;
decimal.TryParse("6666.77777", NumberStyles.Decimal, CultureInfo.InvariantCulture, out fValue);
That is because the value 6666.77777 cannot be represented exactly in the binary form floating point numbers use: the number needs more binary digits than a float has room for. The resulting number is the closest approximation.
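A sketch showing the closest values actually stored (comments approximate):
float f = float.Parse("6666.77777", CultureInfo.InvariantCulture);
Console.WriteLine(f.ToString("G9"));  // ~6666.77783 — the nearest float; floats are ~0.0005 apart at this magnitude
double d = double.Parse("6666.77777", CultureInfo.InvariantCulture);
Console.WriteLine(d.ToString("G17")); // very close to 6666.77777, but still not exact — off by less than 1e-12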
Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requires correct rounding: that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several different rounding schemes (or rounding modes). Historically, truncation was the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called banker's rounding) is more commonly used.
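Ties-to-even is easiest to see at the decimal level with Math.Round, just to illustrate the rule (IEEE 754 applies it at the binary level):
Console.WriteLine(Math.Round(2.5, MidpointRounding.ToEven));       // 2 — ties go to the even neighbour
Console.WriteLine(Math.Round(3.5, MidpointRounding.ToEven));       // 4
Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)); // 3 — "schoolbook" rounding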
You should consider using double if you need more precision, or decimal if you need even more than that, though they too suffer from precision loss at some point.
You should use decimal if you need higher precision.
Can someone tell me what a decimal variable cannot do that a double can, and vice versa: what can a double not do that a decimal can?
I was having trouble finding a power of (sqrt 5) to more than 2000000,
e.g. (3 + sqrt(5)) raised to 300000. What can be used here while applying the binomial expansion?
Can I use double / decimal? What's the main difference?
Note: I want to preserve the last 3 digits before the decimal point in the answer with 100% accuracy.
In brief:
Decimal is a decimal floating point type, so it can represent exact decimal values, e.g. 0.1. It has a fairly high precision, but a relatively limited range. It's implemented in software, so is relatively slow.
Single/Double are binary floating point types, so they can only represent exactly numbers which can be represented exactly in binary - which doesn't include the decimal value 0.1, for example. They have relatively low precision, but a large range. It's usually implemented in hardware, so is very fast.
Additionally float/double have representations for positive and negative infinity, and "not a number" - decimal doesn't have any of this.
See my articles on binary floating point and decimal floating point for more information.
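A small sketch of these differences:
Console.WriteLine(0.1 + 0.2 == 0.3);    // False — 0.1 has no exact binary representation
Console.WriteLine(0.1m + 0.2m == 0.3m); // True  — decimal stores decimal digits exactly
double zero = 0.0;
Console.WriteLine(1.0 / zero);          // Infinity — double has special values
// decimal has neither Infinity nor NaN: dividing a decimal by a zero decimal throws DivideByZeroException instead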
decimal.MaxValue = 79,228,162,514,264,337,593,543,950,335
double.MaxValue = 1.7976931348623157E+308
(5.24) ^ 300000 = ???
I don't think you can easily raise 5.24 to the power of 300000 without using a math library that is cleverer than double...
decimal is base 10, which means it can represent 0.1 exactly as 0.1.
double is base 2 (binary), which means it can't: 0.1 is an infinitely repeating fraction in binary.
double can store much bigger numbers than decimal;
decimal is more accurate.
I have the following:
string value = "9223372036854775807";
double parsedVal = double.Parse(value, CultureInfo.InvariantCulture);
... and the result is 9.2233720368547758E+18 which is not the exact same number. How should I convert string to double without loss of precision?
double can only guarantee approximately 16 decimal digits of accuracy. You might try switching to decimal (which has a few more bits to play with, and holds that value with plenty of headroom).
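A sketch of both conversions (decimal.Parse works here, since the value has 19 significant digits and decimal carries 28-29):
string value = "9223372036854775807";  // long.MaxValue — 19 significant digits
double d = double.Parse(value, CultureInfo.InvariantCulture);
Console.WriteLine(d.ToString("G17"));  // 9.2233720368547758E+18 — the trailing digits are lost
decimal m = decimal.Parse(value, CultureInfo.InvariantCulture);
Console.WriteLine(m);                  // 9223372036854775807 — exact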
You can't convert 9223372036854775807 to double without loss of precision, due to the definition of a double ((IEEE 754) standard for binary floating-point arithmetic).
By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
Using Decimal will get you the precision you need here. However, please note that Decimal does not have the same range as a double.
See Can C# store more precise data than doubles?
I'm using the following piece of code, and under some mysterious circumstances the result of the addition is not what it's supposed to be:
double _west = 9.482935905456543;
double _off = 0.00000093248155508263153;
double _lon = _west + _off;
// check for the expected result
Debug.Assert(_lon == 9.4829368379380981);
// sometimes I get 9.48293685913086 for _lon (which is wrong)
I'm using some native DLLs within my application and I suspect that one of these DLLs is responsible for this 'miscalculation', but I need to figure out which one.
Can anyone give me a hint how to track down the root of my problem?
double is not completely accurate; try using decimal instead.
The advantage of using double and float over decimal is performance.
At first I thought this was a rounding error but actually it is your assertion that is wrong. Try adding the entire result of your calculation without any arbitrary rounding on your part.
Try this:
using System;
class Program
{
static void Main()
{
double _west = 9.482935905456543;
double _off = 0.00000093248155508263153;
double _lon = _west + _off;
// check for the expected result
Console.WriteLine(_lon == 9.48293683793809808263153);
}
}
In the future though it is best to use System.Decimal in cases where you need to avoid rounding errors that are usually associated with the System.Single and System.Double types.
That being said, however, this is not the case here. By arbitrarily rounding the number at a given point you are assuming that the type will also round at that same point which is not how it works. Floating point numbers are stored to their maximum representational capacity and only once that threshold has been reached does rounding take place.
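One way to check whether the literal and the computed sum denote the same stored double is to compare their bits; a sketch, assuming the computation runs at standard double precision:
double west = 9.482935905456543;
double off = 0.00000093248155508263153;
double sum = west + off;
double expected = 9.48293683793809808263153; // the compiler rounds the extra digits to the nearest double
Console.WriteLine(BitConverter.DoubleToInt64Bits(sum) == BitConverter.DoubleToInt64Bits(expected)); // should be True
Console.WriteLine(sum.ToString("G17")); // the digits that are actually stored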
The problem is that Double only has a precision of 15-16 digits (and you seem to need more precision in your example), whereas Decimal has a precision of 28-29. How are you converting between Double and Decimal?
You are being bitten by rounding and precision problems. See this. Decimal might help. Go here for details on conversions and rounding.
From MSDN:
When you convert float or double to decimal, the source value is converted to decimal representation and rounded to the nearest number after the 28th decimal place if required. Depending on the value of the source value, one of the following results may occur:
If the source value is too small to be represented as a decimal, the result becomes zero.
If the source value is NaN (not a number), infinity, or too large to be represented as a decimal, an OverflowException is thrown.
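A sketch of those two cases:
double tooSmall = 1e-30;              // below decimal's smallest positive value (~1e-28)
Console.WriteLine((decimal)tooSmall); // 0 (possibly displayed with trailing zeros)
double tooBig = 1e30;                 // above decimal.MaxValue (~7.9e28)
try { decimal m = (decimal)tooBig; }
catch (OverflowException) { Console.WriteLine("too large for decimal"); }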
You cannot accurately represent every decimal floating point number with a binary floating point number. This isn't even directly related to how "small" the decimal number is; some numbers just don't "fit" nicely in base 2.
Using a longer bit width can help in most cases, but not always.
To specify your constants with Decimal (128-bit decimal floating point) precision, use these declarations:
decimal _west = 9.482935905456543m;
decimal _off = 0.00000093248155508263153m;
decimal _lon = _west + _off;
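With decimal the sum holds every digit, so printing it back gives the exact result (24 significant digits, well within decimal's 28-29):
Console.WriteLine(_lon); // 9.48293683793809808263153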