I'm trying to convert a long string of digits into a double in order to use it as an index into a Redis database (Redis requires a double), but the conversion isn't correct, so the database fetch gets messed up.
Here is the code that converts the string into a double:
string start="636375529204974172";
ulong istart = Convert.ToUInt64(start); // result 636375529204974172
double dstart = Convert.ToDouble(istart); // result 6.3637552920497421E+17
As you can see, the problem starts when I convert istart to a double as dstart: some of the least significant digits get messed up, and the conversion introduces the E+17 notation.
I can't understand why. It's a WHOLE NUMBER, with no fractional part to create error, so why can't it do it?
Does anybody know how to fix the conversion from a string to a double?
I can't understand why. It's a WHOLE NUMBER, with no fractional part to create error, so why can't it do it?
Because it still only has 64 bits of information to play with, 52 of which are used for the mantissa, 11 of which are used for the exponent, and 1 of which is used for the sign.
Basically you shouldn't rely on more than 15 decimal digits of precision in a double. From the docs for System.Double:
All floating-point numbers also have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Double value has up to 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
The first 15 significant digits of your value are correct - nothing is going wrong here. The nearest double value to your original value is exactly 636375529204974208. There is no double that precisely represents 636375529204974172.
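To see the effect directly, here is a minimal sketch (not your Redis code, just the conversion itself) showing that the round trip through double does not preserve the value:

ulong istart = 636375529204974172UL;
double dstart = istart;                  // rounds to the nearest double: 636375529204974208
ulong roundTrip = (ulong)dstart;         // 636375529204974208, not the original key
Console.WriteLine(istart == roundTrip);  // False - only the first 15-16 significant digits survive

If the value really has to be handed to Redis as a double score, you would either have to accept that only about 15-16 digits are significant, or keep the key as a string or integer instead.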
I may very well not have a proper understanding of significant figures, but the book
C# 6.0 in a Nutshell by Joseph Albahari and Ben Albahari (O’Reilly).
Copyright 2016 Joseph Albahari and Ben Albahari, 978-1-491-92706-9.
provides a table comparing double and decimal. In summary: double is 64 bits wide, has roughly 15-16 significant figures of precision, and a range of about ±5.0×10^-324 to ±1.7×10^308; decimal is 128 bits wide, has 28-29 significant figures, and a range of about ±1.0×10^-28 to ±7.9×10^28.
Is it not counter-intuitive that, on the one hand, a double can hold fewer significant figures, while on the other it can represent numbers far bigger than decimal, which holds more significant figures?
Imagine you were told you can store a value, but were given a limitation: you can only store ten characters, each of which is a digit 0-9 or a negative sign. You can create the rules to decode the value, so you can store any value.
The first way you store things is simply as the value xxxxxxxxxx, meaning the number 123 is stored as 0000000123. Simple to store and read. This is how an int works.
Now you decide you want to store fractional numbers, so you change the rules a bit. Now you store xxxxxxyyyy, where x is the integer portion and y is the fractional portion. So, 123.98 would be stored as 0001239800. This is roughly how a Decimal value works. You can see the largest value I can store is 9999999999, which translates to 999999.9999. This means I have a hard upper limit on the size of the value, but the number of the significant digits is large at 10.
There is a way to store larger values, and that's to store the x and y components of the formula x * 10^y in the same ten characters, laid out as xxxxxxyyyy. So, to store 123.98, you store 123980-003, which decodes as 123980 * 10^-3 = 123.98. This means I can store much bigger numbers by changing y, but the number of significant digits is basically fixed at 6. This is basically how a double works.
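If you want to see the decimal half of that analogy in real C#, decimal.GetBits exposes the stored integer and scale directly. This is only an illustrative peek, not something you need in normal code:

int[] bits = decimal.GetBits(123.98m);
int scale = (bits[3] >> 16) & 0xFF;  // how many digits sit after the decimal point
Console.WriteLine(bits[0]);          // 12398 - the digits, stored as an integer
Console.WriteLine(scale);            // 2     - so the value is 12398 / 10^2 = 123.98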
The answer lies in the way that doubles are encoded. Rather than just being a direct binary representation of a number, they have 3 parts: sign, exponent, and fraction.
The sign is obvious: it controls + or -.
The fraction part is also fairly direct: it's a binary fraction that represents a number between 0 and 1.
The exponent is where the magic happens: it signifies a scaling factor.
For normal values, the final double comes out to (-1)^sign * (1 + fraction) * 2^exponent.
This allows much higher values than a straight decimal number because of the exponent. There's a lot of reading out there on why this works and how to do addition and multiplication with these encoded numbers. Google around for "IEEE floating point format" or whatever topic you need. Hope that helps!
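As a rough sketch of how those three fields can be pulled out of a double in C# (for normal, non-zero, non-NaN values; the bit layout is the standard IEEE 754 one described above):

long bits = BitConverter.DoubleToInt64Bits(1.5);
int sign = (int)((bits >> 63) & 1);                                // 0 -> positive
int exponent = (int)((bits >> 52) & 0x7FF) - 1023;                 // stored exponent minus the bias
double fraction = (bits & 0xFFFFFFFFFFFFFL) / (double)(1L << 52);  // 0.5 for the value 1.5
Console.WriteLine((1 - 2 * sign) * (1 + fraction) * Math.Pow(2, exponent)); // prints 1.5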
The range has nothing to do with the precision. Double has a binary (base 2) representation. Not all numbers can be represented exactly the way we humans write them in decimal, not to mention the accumulated rounding errors of addition and division. A larger range means a greater MAX VALUE and a smaller MIN VALUE than decimal.
Decimal, on the other hand, is base 10. It has a smaller range (smaller MAX VALUE and larger MIN VALUE). This has nothing to do with precision: since it is not represented with a floating binary point, it can represent decimal digits exactly, and it is therefore recommended for human-made numbers and calculations.
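A quick way to see the trade-off side by side (huge range but binary approximation, versus modest range but exact base-10 digits); the values in the comments are approximate notes, not exact runtime output:

Console.WriteLine(double.MaxValue);        // about 1.7976931348623157E+308 - enormous range
Console.WriteLine(decimal.MaxValue);       // 79228162514264337593543950335 - only about 7.9E+28
Console.WriteLine((0.1).ToString("G17"));  // 0.10000000000000001 - the nearest binary double to 0.1
Console.WriteLine(0.1m);                   // 0.1 - stored exactly in base 10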
I have a class that does some length calculations based on a height on a ticket. It's been in place for years and working quite well... Until we got a unique ticket size.
They are entered by salespeople in inches, are normally nice numbers like 3, 4 or 3.5, and are stored in a database. This one, however, is 3.66666 recurring (or 11/3), but it is being entered as 3.666, and the lost precision is causing the calculation to fail.
I have thought of a bit of a hack to restore precision for certain numbers, but thought maybe someone knows of a better way of getting a 3.666 or a 93.1333 back to its "number + two thirds" status?
Thanks,
Mick.
As you explained in the comments, I see your point now. I've checked the numbers:
168000 / 3.666 = 45826.5139
168000 / 3.666666 = 45818.1901488
168000 * 3 / 11 = 45818.1818182
It makes a difference of 8 tickets. I have a feeling that your issue can be solved in many ways. On the side of user input for example. Or on the side of database. But back to your question:
How do I convert 3.666 or a 93.1333 back to its number + two thirds status?
You are looking to convert a decimal (or double) to a fraction.
There is already a question on SO, Algorithm for simplifying decimal to fractions, which has many answers. I've tested some of them, and none of them were satisfying. Some of them don't even handle recurrence. Perhaps I've missed the correct one; you can look for yourself.
Anyway, I believe you don't need to fully implement a conversion from 1.666 to 5/3, since it's not easy and you are dealing with real-world sizes. You've said that most of the time the numbers are around 3, 3.5, 4, etc. So I suggest you take a look at the question I've linked above and search for an algorithm for detecting the recurring digits. It was also discussed here: How to know the repeating decimal in a fraction?
After that, just convert 1.666 to 1.666666, since a millionth of an inch won't mess up your calculations, as the numbers above show.
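If you do decide to reconstruct the fraction rather than just pad digits, a small brute-force search over likely denominators is usually enough for ticket-sized numbers. This is only an illustrative sketch (the method name and the 64-denominator cap are my own choices, not anything standard):

static (int Numerator, int Denominator) ToSimpleFraction(decimal entered, int maxDenominator = 64)
{
    // Start from the nearest whole number, then try every small denominator
    // and keep whichever fraction lands closest to the entered value.
    var best = ((int)Math.Round(entered), 1);
    decimal bestError = Math.Abs(entered - best.Item1);
    for (int den = 2; den <= maxDenominator; den++)
    {
        int num = (int)Math.Round(entered * den);
        decimal error = Math.Abs(entered - (decimal)num / den);
        if (error < bestError) { best = (num, den); bestError = error; }
    }
    return best;
}

// ToSimpleFraction(3.666m) returns (11, 3), i.e. 3 and two thirds,
// which you can then use as 11m / 3 in the length calculation.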
It is difficult to get the exact value you expect out of a double, because double is a floating-point type.
The MSDN documentation says:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:
Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.
A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.
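A tiny example of the first consequence (two values that look equal but do not compare equal), just to make the quoted text concrete:

double a = 0.1 + 0.2;
double b = 0.3;
Console.WriteLine(a == b);  // False - the two doubles differ in their least significant bits
Console.WriteLine(a - b);   // a tiny non-zero difference, on the order of 5.5E-17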
I have a double with 17 digits after the decimal point, i.e.:
double myDouble = 0.12345678901234567;
If I convert this to a decimal like this:
decimal myDecimal = Convert.ToDecimal(myDouble);
then the value of myDecimal is rounded, as per the Convert.ToDecimal documentation, to 15 significant digits (i.e. 0.123456789012346). My question is: why is this rounding performed?
I understand that if my original number could be accurately represented in base 10 and I was trying to store it as a double, then we could only have confidence in the first 15 digits. The final two digits would be subject to rounding error. But, that's a base 10 biased point of view. My number may be more accurately represented by a double and I wish to convert it to decimal while preserving as much accuracy as possible.
Shouldn't Convert.ToDecimal aim to minimise the difference between myDouble and (double)Convert.ToDecimal(myDouble)?
From the documentation of Double:
A Double value has up to 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
So, as the double value itself only reliably carries about 15 significant decimal digits, converting it to Decimal results in a Decimal value with 15 significant figures.
The behavior of the rounding guarantees that converting any Decimal which has at most fifteen significant figures to double and back to Decimal will yield the original value unchanged. If values were rounded to sixteen figures rather than fifteen, such a guarantee would not only fail to hold for numbers with sixteen figures, it wouldn't even hold for much shorter values. For example, the closest Double value to 9000.04 is approximately 9000.040000000000873115; rounding that to sixteen figures would yield 9000.040000000001.
The choice of rounding depends upon whether one regards the best Decimal representation of the double value closest to 9000.04 as being 9000.04m, 9000.040000000001m, 9000.0400000000008731m, or perhaps something else. Microsoft probably decided that any representation other than 9000.04m would be confusing.
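A short sketch of the round-trip guarantee being described (the intermediate digits in the comment are approximate; the final comparison is the point):

decimal original = 9000.04m;                  // at most fifteen significant figures
double viaDouble = (double)original;          // nearest double, roughly 9000.040000000000873
decimal back = Convert.ToDecimal(viaDouble);  // rounded back to 15 significant digits
Console.WriteLine(back == original);          // True - the original value survives the round trip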
The following is from the documentation of the method in question.
http://msdn.microsoft.com/en-us/library/a69w9ca0(v=vs.110).aspx
"The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
Every terminating binary fraction is exactly representable as a decimal fraction, so the minimum possible difference for a finite number is always 0. The IEEE 754 64-bit representation of your number is exactly equal to 0.1234567890123456634920984242853592149913311004638671875
Every conversion from binary floating point to decimal or decimal string must embody some decision about rounding. In theory, they could preserve all the digits, but that would result in some very long outputs with most of the digits having little meaning.
One option, used in Java's Double.toString, is to stop at the shortest decimal representation that would convert back to the original double.
Most conversions set some fairly arbitrary limit; 15 significant digits preserves most meaningful digits.
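In C# you can ask for a longer, round-trippable form explicitly. A small sketch; the exact strings depend on the runtime's formatting, so treat the comments as approximate:

double myDouble = 0.12345678901234567;
Console.WriteLine(myDouble.ToString("G17"));    // e.g. 0.12345678901234566 - enough digits to round-trip
Console.WriteLine(Convert.ToDecimal(myDouble)); // e.g. 0.123456789012346 - rounded to 15 significant digits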
Possible Duplicate:
Why is floating point arithmetic in C# imprecise?
I have been dealing with some numbers and C#, and the following line of code results in a different number than one would expect:
double num = (3600.2 - 3600.0);
I expected num to be 0.2; however, it turned out to be 0.1999999999998181. Is there any reason why it produces a close, but still different, value?
This is because double is a binary floating-point data type.
If you want greater accuracy you could switch to using decimal instead.
The literal suffix for decimal is m, so to use decimal arithmetic (and produce a decimal result) you could write your code as
var num = (3600.2m - 3600.0m);
Note that there are disadvantages to using decimal. It is a 128-bit data type, as opposed to the 64 bits of a double, which makes it more expensive in both memory and processing. It also has a much smaller range than double.
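Putting the two side by side makes the difference easy to see (the double's printed form varies a bit by runtime, hence the rough comment):

double dnum = 3600.2 - 3600.0;
decimal mnum = 3600.2m - 3600.0m;
Console.WriteLine(dnum);  // roughly 0.1999999999998181 - the binary approximation shows through
Console.WriteLine(mnum);  // 0.2 - decimal subtraction of these base-10 literals is exact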
There is a reason.
The reason is that the way the number is stored in memory, in the case of the double data type, doesn't allow for an exact representation of the number 3600.2. It also doesn't allow for an exact representation of the number 0.2.
0.2 has an infinite representation in binary. If you want to store it in memory or in processor registers to perform some calculations, some number close to 0.2 with a finite representation is stored instead. It may not be apparent if you run code like this:
double num = (0.2 - 0.0);
This is because in this case all the binary digits available for representing numbers in the double data type are used for the fractional part of the number (there is only a fractional part), so the precision is higher. If you store the number 3600.2 in a double, some digits are used to represent the integer part (3600), and fewer digits are left for the fractional part. The precision is lower, and the fractional part that is actually stored in memory differs from 0.2 by enough that it becomes apparent after the conversion from double to string.
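You can make the stored values visible with the "G17" format, which prints enough digits to distinguish every double; a quick sketch of the point being made above:

Console.WriteLine((0.2).ToString("G17"));    // 0.20000000000000001 - very close to 0.2
Console.WriteLine((3600.2).ToString("G17")); // 3600.1999999999998 - fewer bits left for the fraction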
Change your type to decimal:
decimal num = (3600.2m - 3600.0m);
You should also read this.
See Wikipedia; I can't explain it better. I can also suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic, or see the related questions on Stack Overflow.
I have the following test code:
decimal test1 = 0.0500000000000000045656554454M;
double test2 = (double)test1;
This results in test2 showing as 0.05 when debugging. Why is it being rounded to 2 decimal places?
Thanks
The value from that conversion is actually 0.050000000000000009714451465470119728706777095794677734375, as shown by DoubleConverter. That's the exact value of the double produced by the conversion.
When you use the debugger or normal string formatting, you aren't usually shown the exact result.
The reason is that a double can hold no more than 15-16 significant digits; see double (C# Reference).
You should take a look at this article about floating-point arithmetic and .NET. The rounding occurs due to a combination of how the number gets converted to a double-precision floating-point value and how it is formatted when printed, since .NET defaults to 15 digits for doubles and your original number contains digits past the 15th.
You could try test2.ToString("0.000000000000000000000000") to see whether you can squeeze any more information out of the number, but I doubt it will help.
There are two reasons I can think of:
Due to the different representations of decimal and double. See this article for more information about floating-point representation. It is possible that there are not enough bits to represent the entire number exactly in the double.
Due to the way numbers are printed. It is possible that your printing options specify fewer than 18 digits after the decimal point, in which case you'll get a rounded result.
I would check the printing options first, to make sure the problem isn't there.
But know that the only solution for the first problem is to stop using double :-)