Why doesn't 0.9 recurring always equal 1 - c#

Mathematically, 0.9 recurring can be shown to be equal to 1. This question, however, is not about infinity, convergence, or the maths behind this.
The above equality can be expressed using doubles in C# as follows.
var oneOverNine = 1d / 9d;
var resultTimesNine = oneOverNine * 9d;
Using the code above, (resultTimesNine == 1d) evaluates to true.
When using decimals instead, the evaluation yields false, yet, my question is not about the disparate precision of double and decimal.
Since no type has infinite precision, how and why does double maintain such an equality where decimal does not? What is happening literally 'between the lines' of code above, with regards to the manner in which the oneOverNine variable is stored in memory?

It depends on the rounding used to get the closest representable value to 1/9. It could go either way. You can investigate the issue of representability at Rob Kennedy's useful page: http://pages.cs.wisc.edu/~rkennedy/exact-float
But don't think that somehow double is able to achieve exactness. It isn't. If you try with 2/9, 3/9 etc. you will find cases where the rounding goes the other way. The bottom line is that 1/9 is not exactly representable in binary floating point. And so rounding happens and your calculations are subject to rounding errors.
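As an illustrative check (my sketch, not part of the original answer), looping over all the ninths shows the equality holding for some numerators and failing for others, depending on which way each value was rounded:

using System;

for (int n = 1; n <= 8; n++)
{
    double fraction = n / 9d;          // nearest double to n/9, rounded up or down
    double roundTrip = fraction * 9d;  // multiplying back may or may not cancel the rounding
    Console.WriteLine($"{n}/9 * 9 == {n}: {roundTrip == n}");
}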

What is happening literally 'between the lines' of code above, with regards to the manner in which the oneOverNine variable is stored in memory?
What you're asking about is called IEEE 754. This is the spec that C#, its underlying .NET runtime, and most other programming platforms use to store and manipulate floating-point values. Support for IEEE 754 is typically implemented directly at the CPU/chipset level, which makes it both far more performant than an alternative implemented solely in software and far easier to target when building compilers, because the operations map almost directly to specific CPU instructions.
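To see what literally sits in memory, one approach (a sketch of mine, not from the answer) is to reinterpret the stored double as a 64-bit integer with BitConverter and read off the IEEE 754 fields:

using System;

double oneOverNine = 1d / 9d;

// Reinterpret the 8 bytes of the double as a 64-bit integer (no numeric conversion).
long bits = BitConverter.DoubleToInt64Bits(oneOverNine);

long sign = (bits >> 63) & 0x1;           // 1 sign bit
long exponent = (bits >> 52) & 0x7FF;     // 11 exponent bits, biased by 1023
long mantissa = bits & 0xFFFFFFFFFFFFFL;  // 52 mantissa bits, implicit leading 1

Console.WriteLine($"raw bits : 0x{bits:X16}");
Console.WriteLine($"sign     : {sign}");
Console.WriteLine($"exponent : {exponent - 1023}");
Console.WriteLine($"mantissa : 0x{mantissa:X13}");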


double type Multiplication in C# giving me wrong values [duplicate]

This question already has answers here: Is floating point math broken? (31 answers). Closed 7 years ago.
If I execute the following expression in C#:
double i = 10*0.69;
i is: 6.8999999999999995. Why?
I understand numbers such as 1/3 can be hard to represent in binary as it has infinite recurring decimal places but this is not the case for 0.69. And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
How do I work around this? Use the decimal type?
Because you've misunderstood floating point arithmetic and how data is stored.
In fact, your code isn't actually performing any arithmetic at execution time in this particular case - the compiler will have done it, then saved a constant in the generated executable. However, it can't store an exact value of 6.9, because that value cannot be precisely represented in binary floating point format, just like 1/3 can't be precisely stored in a finite decimal representation.
See if this article helps you.
why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Stop behaving like a Dilbert manager, and accept that computers, though cool and awesome, have limits. In your specific case, it doesn't just "hide" the problem, because you have specifically told it not to. The language (the computer) provides alternatives to the format that you didn't choose. You chose double, which has certain advantages over decimal, and certain downsides. Now, knowing the answer, you're upset that the downsides don't magically disappear.
As a programmer, you are responsible for hiding this downside from managers, and there are many ways to do that. However, the makers of C# have a responsibility to make floating point work correctly, and correct floating point will occasionally result in incorrect math.
So will every other number storage method, as we do not have infinite bits. Our job as programmers is to work with limited resources to make cool things happen. They got you 90% of the way there, just get the torch home.
And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
I think this is a common mistake - you're thinking of floating point numbers as if they are base-10 (i.e. decimal - hence my emphasis).
So you're thinking that there are two whole-number parts to this double: 69, and a divide-by-100 to place the decimal point - which could also be expressed as:
69 x 10 to the power of -2.
However floats store the 'position of the point' as base-2.
Your double actually gets stored as something like:
(a 53-bit whole number) x 2 to the power of some negative number
and the closest such value to 6.9 works out to 6.8999999999999995 when written back out in decimal.
This isn't as much of a problem once you're used to it - most people know and expect that 1/3 can't be expressed accurately as a decimal or percentage. It's just that the fractions that can't be expressed in base-2 are different.
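If you want to see those extra digits yourself (a quick sketch, not part of the original answer), ask for a 17-significant-digit rendering instead of the default one:

using System;

double d = 0.69;

Console.WriteLine(d);                         // the default, shortest rendering hides the error
Console.WriteLine(d.ToString("G17"));         // 17 significant digits expose the nearest double
Console.WriteLine((10 * d).ToString("G17"));  // the multiplied value, with the same treatment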
but why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Because you told it to use binary floating point, and the solution is to use decimal floating point, so you are suggesting that the framework should disregard the type you specified and use decimal instead, which is very much slower because it is not directly implemented in hardware.
A more efficient solution is to not output the full value of the representation and explicitly specify the accuracy required by your output. If you format the output to two decimal places, you will see the result you expect. However if this is a financial application decimal is precisely what you should use - you've seen Superman III (and Office Space) haven't you ;)
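For instance (a sketch with made-up variable names), formatting to two decimal places gives the text the asker expects while the underlying double stays inexact:

using System;

double price = 10 * 0.69;

Console.WriteLine(price.ToString("F2"));  // "6.90" - fixed to two decimal places
Console.WriteLine($"{price:0.00}");       // same idea with a custom format string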
Note that it is all a finite approximation of an infinite range, it is merely that decimal and double use a different set of approximations. The advantage of decimal is it produces the same approximations that you would if you were performing the calculation yourself. For example if you calculated 1/3, you would eventually stop writing 3's when it was 'good enough'.
For the same reason that 1 / 3 in a decimal system comes out as 0.3333333333333333333333333333333333333333333 and not the exact fraction, which is infinitely long.
To work around it (e.g. to display on screen) try this:
double i = (double) Decimal.Multiply(10, (Decimal) 0.69);
Everyone seems to have answered your first question, but ignored the second part.

python: how accurate math.sqrt(x) function is?

Consider the following code snippet in Python:
m = int(math.sqrt(n))
For n = 25, it should give m = 5 (and it does in my shell). But from my C experience I know that using such an expression is a bad idea, as the sqrt function may return a slightly lower value than the real value, and then after rounding I may get m = 4 instead of m = 5. Is this limitation also present in Python? And if so, what is the best way to write such expressions in Python? What will happen if I use Java or C#?
Besides, if there is any inaccuracy, what factors controls the amount of it?
For proper rounding, use round(); it rounds to the nearest whole number but (in Python 2) returns a float, so you may then construct an int from the result.
(Most probably your code is not performance-critical and you will never notice any slowdown associated with round(). If you do, you probably should be using numpy anyway.)
If you are very concerned with the accuracy of sqrt, you could use the decimal.Decimal class from the standard library, which provides its own sqrt function. The Decimal class can be set to greater precision than regular Python floats. That said, it may not matter if you are rounding anyways. The Decimal class results in exact numbers (from the docs):
The exactness [of Decimal] carries over into arithmetic. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants.
The solution is easy. If you're expecting an integer result, use int(math.sqrt(n)+.1). If the value is a little more or less than the integer result, it will round to the correct value.
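As for C# (a sketch of mine, assuming the usual IEEE 754 behaviour): Math.Sqrt is correctly rounded, so a perfect square such as 25 comes back as exactly 5.0, and the small-offset guard from the answer above costs nothing if you want to truncate defensively:

using System;

int n = 25;

double root = Math.Sqrt(n);               // correctly rounded, so exactly 5.0 for a perfect square
int truncated = (int)root;                // 5
int guarded = (int)(Math.Sqrt(n) + 0.1);  // the defensive variant suggested above; same result here

Console.WriteLine($"{root} {truncated} {guarded}");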

Convert.ToDecimal(double x) - System.OverflowException

What is the maximum double value that can be represented/converted to a decimal?
How can this value be derived - example please.
Update
Given a maximum value for a double that can be converted to a decimal, I would expect to be able to round-trip the double to a decimal, and then back again. However, given a figure such as (2^52)-1 as in #Jirka's answer, this does not work. For example:
[Test]
public void round_trip_double_to_decimal()
{
    double maxDecimalAsDouble = (Math.Pow(2, 52) - 1);
    decimal toDecimal = Convert.ToDecimal(maxDecimalAsDouble);
    double toDouble = Convert.ToDouble(toDecimal);
    // Fails.
    Assert.That(toDouble, Is.EqualTo(maxDecimalAsDouble));
}
All integers between -9,007,199,254,740,992 and 9,007,199,254,740,991 can be exactly represented in a double. (Keep reading, though.)
The upper bound is derived as 2^53 - 1. The internal representation of it is something like (0x1.fffffffffffff * 2^52) if you pardon my hexadecimal syntax.
Outside of this range, many integers can still be exactly represented if they are a multiple of a power of two.
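A small sketch of that boundary (my addition, assuming ordinary IEEE 754 doubles): every integer up to 2^53 - 1 survives a round trip through double, while the odd integer just above 2^53 gets rounded to an even neighbour:

using System;

long below = 9007199254740991;  // 2^53 - 1, fits in the 53-bit significand
long above = 9007199254740993;  // 2^53 + 1, needs 54 bits

Console.WriteLine((long)(double)below == below);  // True
Console.WriteLine((long)(double)above == above);  // False: rounds to 9007199254740992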
The highest integer whatsoever that can be accurately represented would therefore be 9,007,199,254,740,991 * (2 ^ 971), which is even higher than Decimal.MaxValue, but this is a pretty meaningless fact, given that the value does not bother to change, for example, when you subtract 1 in double arithmetic.
Based on the comments and further research, I am adding info on .NET and Mono implementations of C# that relativizes most conclusions you and I might want to make.
Math.Pow does not seem to guarantee any particular accuracy, and it seems to deliver a bit or two less precision than a double can represent. This is not too surprising for a floating point function. The Intel floating point hardware does not have an instruction for exponentiation, and I expect that the computation involves logarithm and multiplication instructions, where intermediate results lose some precision. One would use BigInteger.Pow if integral accuracy was desired.
However, even (decimal)(double)9007199254740991M results in a round trip violation. This time it is, however, a known bug, a direct violation of Section 6.2.1 of the C# spec. Interestingly I see the same bug even in Mono 2.8. (The referenced source shows that this conversion bug can hit even with much lower values.)
Double literals are less rounded, but still a little: 9007199254740991D prints out as 9007199254740990D. This is an artifact of internal multiplication by 10 when parsing the string literal (before the upper and lower bound converge to the same double value based on the "first zero after the decimal point"). This again violates the C# spec, this time Section 9.4.4.3.
Unlike C, C# has no hexadecimal floating point literals, so we cannot avoid that multiplication by 10 by any other syntax, except perhaps by going through Decimal or BigInteger, if these only provided accurate conversion operators. I have not tested BigInteger.
The above could almost make you wonder whether C# invents its own unique floating point format with reduced precision. No, Section 11.1.6 references the 64-bit IEC 60559 representation. So the above are indeed bugs.
So, to conclude, you should be able to fit even 9007199254740991M in a double precisely, but it's quite a challenge to get the value in place!
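One way to sidestep both Math.Pow and decimal-literal parsing (a sketch, assuming only that long-to-double conversion is exact for values below 2^53, as the spec requires) is to start from an integer literal and compare the routes:

using System;

// 2^53 - 1 as a long literal; the long-to-double conversion is exact because the value fits in 53 bits.
double fromLong = (double)9007199254740991L;

double fromPow = Math.Pow(2, 53) - 1;    // may lose a bit or two, per the discussion above
double fromLiteral = 9007199254740991D;  // literal parsing was reportedly off by one on old runtimes

Console.WriteLine(fromLong.ToString("R"));
Console.WriteLine(fromLong == fromPow);
Console.WriteLine(fromLong == fromLiteral);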
The moral of the story is that the traditional belief that "Arithmetic should be barely more precise than the data and the desired result" is wrong, as this famous article demonstrates (page 36), albeit in the context of a different programming language.
Don't store integers in floating point variables unless you have to.
MSDN Double data type
Decimal vs double
The value of Decimal.MaxValue is positive 79,228,162,514,264,337,593,543,950,335.

C# loss of precision when dividing doubles

I know this has been discussed time and time again, but I can't seem to get even the most simple example of a one-step division of doubles to result in the expected, unrounded outcome in C# - so I'm wondering if perhaps there's i.e. some compiler flag or something else strange I'm not thinking of. Consider this example:
double v1 = 0.7;
double v2 = 0.025;
double result = v1 / v2;
When I break after the last line and examine it in the VS debugger, the value of "result" is 27.999999999999996. I'm aware that I can resolve it by changing to "decimal," but that's not possible in the case of the surrounding program. Is it not strange that two low-precision doubles like this can't divide to the correct value of 28? Is the only solution really to Math.Round the result?
Is it not strange that two low-precision doubles like this can't divide to the correct value of 28?
No, not really. Neither 0.7 nor 0.025 can be exactly represented in the double type. The exact values involved are:
0.6999999999999999555910790149937383830547332763671875
0.025000000000000001387778780781445675529539585113525390625
Now are you surprised that the division doesn't give exactly 28? Garbage in, garbage out...
As you say, the right way to represent decimal numbers exactly is to use decimal. If the rest of your program is using the wrong type, that just means you need to work out which is higher: the cost of getting the wrong answer, or the cost of changing the whole program.
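For comparison (a minimal sketch, not from the original answer), the same division carried out in decimal lands exactly on 28, because 0.7 and 0.025 are exactly representable in base 10:

using System;

decimal v1 = 0.7m;
decimal v2 = 0.025m;
decimal result = v1 / v2;

Console.WriteLine(result);         // 28
Console.WriteLine(result == 28m);  // True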
Precision is always a problem when you are dealing with float or double.
It's a known issue in computer science and every programming language is affected by it. To minimize these sorts of errors, which are mostly related to rounding, an entire field of numerical analysis is dedicated to them.
For instance, take the following code.
What would you expect?
You would expect the answer to be 1, but that is not the case; you will get roughly 0.9999907.
float v = .001f;
float sum = 0;
for (int i = 0; i < 1000; i++ )
{
sum += v;
}
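Run the same loop with decimal (an illustrative counterpart, not from the original answer) and each 0.001 step is stored exactly, so the sum lands on 1 with no drift:

using System;

decimal v = 0.001m;
decimal sum = 0m;

for (int i = 0; i < 1000; i++)
{
    sum += v;  // 0.001 is exact in base 10, so no rounding error accumulates
}

Console.WriteLine(sum);        // 1.000
Console.WriteLine(sum == 1m);  // True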
It has nothing to do with how 'simple' or 'small' the double values are. Strictly speaking, neither 0.7 nor 0.025 may be stored as exactly those numbers in computer memory, so performing calculations on them may provide interesting results if you're after high precision.
So yes, use decimal or round.
To explain this by analogy:
Imagine that you are working in base 3. In base 3, 0.1 is (in decimal) 1/3, or 0.333333333... recurring.
So you can EXACTLY represent 1/3 (decimal) in base 3, but you get rounding errors when trying to express it in decimal.
Well, you can get exactly the same thing with some decimal numbers: They can be exactly expressed in decimal, but they CAN'T be exactly expressed in binary; hence, you get rounding errors with them.
Short answer to your first question: No, it's not strange. Floating-point numbers are discrete approximations of the real numbers, which means that rounding errors will propagate and scale when you do arithmetic operations.
There's a whole field of mathematics called numerical analysis that basically deals with how to minimize the errors when working with such approximations.
It's the usual floating point imprecision. Not every number can be represented as a double, and those minor representation inaccuracies add up. It's also a reason why you should not compare doubles to exact numbers. I just tested it, and result.ToString() showed 28 (maybe some kind of rounding happens in double.ToString()?). result == 28 returned false though. And (int)result returned 27. So you'll just need to expect imprecisions like that.

Is there a 128 or 256 bit double class in .net?

I have an application in which I want to be able to use large numbers and very precise numbers. For this, I need a high-precision representation, and IntX only works for integers.
Is there a class in .net framework or even third party(preferably free) that would do this?
Is there another way to do this?
Maybe the Decimal type would work for you?
You can use the freely available, arbitrary precision, BigDecimal from java.math, which is part of the J# redistributable package from Microsoft and is a managed .NET library.
Place a reference to vjslib in your project and you can do something like this:
using java.math;
using System.Diagnostics;

public void main()
{
    BigDecimal big = new BigDecimal("1234567890123456789011223344556677889900.0000009876543210000987654321");
    big = big.add(new BigDecimal(1.0));  // add() returns a new BigDecimal; the original is immutable
    Debug.Print(big.toString());
}
Will print the following to the debug console:
1234567890123456789011223344556677889901.0000009876543210000987654321
Note that, as already mentioned, .NET 4 (Visual Studio 2010) contains a BigInteger class which, as a matter of fact, was already available in earlier versions, but only as an internal class (i.e., you'd need some reflection to get it to work).
The F# library has some really big number types as well if you're okay with using that...
I've been searching for a solution for this for a long time, and today came across this library:
Quadruple Precision Double in C#
Signed 128-bit floating point data type library, with 64 effective bits of precision (vs. 53 for Doubles) and a 64 bit exponent (vs. 11 for Doubles). Quads have greater precision and far greater range than Doubles and are especially useful when dealing with very large or very small values, such as those in probabilistic models. As of version 2.0, all Quad arithmetic is checked (underflowing to 0, overflowing to +/- infinity), has special PositiveInfinity, NegativeInfinity, and NaN values, and follows the same rules as .Net Double arithmetic and comparison operators (e.g. 1/0 == PositiveInfinity, 0 * PositiveInfinity == NaN, NaN != NaN), making it a convenient drop-in replacement for Doubles in existing code.
If Decimal doesn't work for you, try implementing (or grabbing code from somewhere) Rational arithmetic using large integers. That will provide the precision you need.
Shameless plug: QPFloat emulates the IEEE standard to full precision.
Use decimal for this if possible.
Decimal is a 128-bit (16 byte) value type that is used for highly precise calculations. It is a floating point type that is represented internally as base 10 instead of base 2 (i.e. binary). If you need to be highly precise, you should use Decimal - but the drawback is that Decimal is about 20 times slower than using floats.
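A quick illustration of that trade-off (my sketch, not from the answer): decimal carries roughly 28-29 significant decimal digits, noticeably more than double's 15-17:

using System;

decimal third = 1m / 3m;
double thirdAsDouble = 1d / 3d;

Console.WriteLine(third);          // 0.3333333333333333333333333333 (28 significant digits)
Console.WriteLine(thirdAsDouble);  // about 16-17 significant digits at most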
Well, I'm about 12 years late to the party. 🙃
I just thought you'd like to use my arbitrary precision floating point class called BigDecimal (I should have named it BigFloat, but it's kinda late for that now).
Well, more correctly, it will provide precision up to the number of digits you specify (by setting the static BigDecimal.Precision member); that way it doesn't use up all your RAM trying to represent irrational numbers.
I put a lot of effort into ensuring its correctness and working out all the bugs. It comes with a test project that tests every method in multiple ways, and each bug I fixed started by adding a test case.
And unlike QPFloat and the J# redistributable's BigDecimal, the code is not an incomprehensible mess and follows C# coding style and naming conventions (to be fair, part of the unreadability of J#'s version comes from the fact that you have to decompile the assembly first, so it'll be missing the names of all the private members).
Link drop:
BigDecimal on GitHub
ExtendedNumerics.BigDecimal on NuGet
Decimal is 128 bits if that would work.
