C# Decimal Division (Extremely strange behavior)

Can anyone explain this?
Decimal leftSide = 13.0M;
Decimal rightSide = 1.0M;
Decimal tmpDec = 39.0M;
tmpDec * (rightSide / leftSide) = 2.9999999999999999999999999991
tmpDec * rightSide / leftSide = 3
Am I losing significant digits in the first case's (rightSide / leftSide)?

It is a rounding (precision loss) problem.
The first case executes the division first and crops the result after the 28th significant digit (decimal, like any other fixed-precision type, cannot store an infinitely repeating expansion such as 1/13). The multiplication by 39 then amplifies that cropping error, so it is no longer a last-digit issue and cannot be rounded back to 3.0.
The second case executes the multiplication first, which loses no precision at all (39 × 1 = 39 exactly), and then performs a single division, 39 / 13, which yields exactly 3. Only one rounding step occurs, so the result is 3 and not 2.9999999999999999999999999991 as in the first case.
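The order-of-operations effect can be sketched with Python's decimal module. This is an illustrative analogue, not C#'s Decimal: Python's decimal rounds to a configurable number of significant digits after every operation, so the 1/3 case is used here (at 28 digits, 39 * (1/13) happens to round cleanly back to 3 in Python, while C#'s 96-bit Decimal keeps the intermediate product exactly).

```python
from decimal import Decimal, getcontext

# Fixed-precision decimal arithmetic: every operation is rounded to
# 28 significant digits (roughly the range of C#'s System.Decimal).
getcontext().prec = 28

left = Decimal(3)
right = Decimal(1)
tmp = Decimal(3)

a = tmp * (right / left)   # divide first: 1/3 is rounded, the error survives
b = tmp * right / left     # multiply first: 3 * 1 is exact, 3 / 3 divides exactly

print(a)  # 0.9999999999999999999999999999
print(b)  # 1
```

Dividing first bakes the rounding of 1/3 into the result; multiplying first keeps every intermediate step exact.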

The Decimal type is only precise to 28-29 significant digits.
2.9999999999999999999999999991
^ already uses all 29 of them, so there is no spare digit left to absorb the rounding error.
You can build on System.Numerics.BigInteger (wrapping it to work as a fixed-point decimal) if you require more precision.

Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.

Related

How to calculate float type precision and does it make sense?

I have a problem understanding the precision of float type.
MSDN says the precision is from 6 to 9 digits. But I notice that the precision depends on the size of the number:
float smallNumber = 1.0000001f;
Console.WriteLine(smallNumber); // 1.0000001
float bigNumber = 100000001f;
Console.WriteLine(bigNumber); // 100000000
The smallNumber is more precise than the big one. I understand IEEE 754, but I don't understand how MSDN calculates precision, and whether it makes sense.
Also, you can play with the representation of numbers in float format here. Please write 100000000 value in "You entered" input and click "+1" on the right. Then change the input's value to 1, and click "+1" again. You may see the difference in precision.
The MSDN documentation is nonsensical and wrong.
Bad concept. Binary-floating-point format does not have any precision in decimal digits because it has no decimal digits at all. It represents numbers with a sign, a fixed number of binary digits (bits), and an exponent for a power of two.
Wrong on the high end. The floating-point format represents many numbers exactly, with infinite precision. For example, “3” is represented exactly. You can write it in decimal arbitrarily far, 3.0000000000…, and all of the decimal digits will be correct. Another example is 1.40129846432481707092372958328991613128026194187651577175706828388979108268586060148663818836212158203125 × 10^−45. This number has 105 significant digits in decimal, but the float format represents it exactly (it is 2^−149).
Wrong on the low end. When “999999.97” is converted from decimal to float, the result is 1,000,000. So not even one decimal digit is correct.
Not a measure of accuracy. Because the float significand has 24 bits, the resolution of its lowest bit is about 2^23 times finer than the resolution of its highest bit. This is about 6.9 digits in the sense that log10(2^23) is about 6.9. But that just tells us the resolution (the coarseness) of the representation. When we convert a number to the float format, we get a result that differs from the number by at most ½ of this resolution, because we round to the nearest representable value. So a conversion to float has a relative error of at most 1 part in 2^24, which corresponds to about 7.2 digits in the above sense. If we are using digits to measure resolution, then we say the resolution is about 7.2 digits, not that it is 6-9 digits.
Where do these numbers come from?
So, if “~6-9 digits” is not a correct concept, does not come from actual bounds on the digits, and does not measure accuracy, where does it come from? We cannot be sure, but 6 and 9 do appear in two descriptions of the float format.
6 is the largest number x for which this is guaranteed:
If any decimal numeral with at most x significant digits is within the normal exponent bounds of the float format and is converted to the nearest value represented in the format, then, when the result is converted to the nearest decimal numeral with at most x significant digits, the result of that conversion equals the original number.
So it is reasonable to say float can preserve at least six decimal digits. However, as we will see, there is no bound involving nine digits.
9 is the smallest number x that guarantees this:
If any finite float number is converted to the nearest decimal numeral with x digits, then, when the result is converted to the nearest value representable in float, the result of that conversion equals the original number.
As an analogy, if float is a container, then the largest “decimal container” guaranteed to fit inside it is six digits, and the smallest “decimal container” guaranteed to hold it is nine digits. 6 and 9 are akin to interior and exterior measurements of the float container.
Suppose you had a block 7.2 units long, and you were looking at its placement on a line of bricks each 1 unit long. If you put the start of the block at the start of a brick, it will extend 7.2 bricks. However, if somebody else chooses where it starts, they might start it in the middle of a brick. Then it would cover part of that brick, all of the next 6 bricks, and part of the last brick (e.g., .5 + 6 + .7 = 7.2). So a 7.2-unit block is only guaranteed to cover 6 bricks. Conversely, 8 bricks can cover the 7.2-unit block if you choose where they are placed. But if somebody else chooses where they start, the first might cover just .1 units of the block. Then you need 7 more and another fraction, so 9 bricks are needed.
The reason this analogy holds is that powers of two and powers of 10 are irregularly spaced relative to each other. 2^10 (1024) is near 10^3 (1000). 10 is the exponent used in the float format for numbers from 1024 (inclusive) to 2048 (exclusive). So this interval from 1024 to 2048 is like a block that has been placed just after the point where the 100-1000 block ends and the 1000-10,000 block starts.
But note that this property involving 9 digits is the exterior measurement—it is not a capability that float can perform or a service that it can provide. It is something that float needs (if it is to be held in a decimal format), not something it provides. So it is not a bound on how many digits a float can store.
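Both the interior (6-digit) and exterior (9-digit) measurements can be checked empirically in Python. This is a sketch that rounds through IEEE-754 binary32 via struct; the helper name to_f32 is mine, not from the answer.

```python
import random
import struct

def to_f32(x: float) -> float:
    """Round a Python float to the nearest IEEE-754 binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

random.seed(0)

# Interior: any 6-significant-digit decimal survives decimal -> float -> decimal.
for _ in range(10000):
    d = float(f"{random.uniform(0.001, 1000.0):.6g}")  # a 6-digit decimal value
    assert float(f"{to_f32(d):.6g}") == d

# Exterior: any float survives float -> 9-digit decimal -> float.
for _ in range(10000):
    f = to_f32(random.uniform(-1e6, 1e6))
    assert to_f32(float(f"{f:.9g}")) == f

print("round-trip checks passed")
```

Random sampling is not a proof, but it illustrates the two round-trip guarantees the answer describes.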
Further Reading
For better understanding of floating-point arithmetic, consider studying the IEEE-754 Standard for Floating-Point Arithmetic or a good textbook like Handbook of Floating-Point Arithmetic by Jean-Michel Muller et al.
Yes, the number of digits before rounding errors occur is a measure of precision, but you cannot assess precision from just two numbers, because you might simply be closer to or further from the rounding threshold.
To better understand the situation, you need to see how floats are represented.
The IEEE754 32bit floats are stored as:
bool(1bit sign) * integer(24bit mantissa) << integer(8bit exponent)
Yes, the mantissa is 24 bits instead of 23, as its MSB is implicitly set to 1.
As you can see, there are only integers and a bit shift. So if you are representing a natural number up to 2^24, you have no rounding at all. For bigger numbers, binary zero padding occurs from the right, and that causes the difference.
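That 2^24 cutoff is quick to check in Python (to_f32 is an illustrative helper that rounds through IEEE-754 binary32, the same format as float):

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (binary64) to the nearest binary32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_f32(16777215.0) == 16777215.0)  # True:  2**24 - 1 is stored exactly
print(to_f32(16777216.0) == 16777216.0)  # True:  2**24 is stored exactly
print(to_f32(16777217.0) == 16777217.0)  # False: 2**24 + 1 rounds to 16777216
```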
In the case of digits after the decimal point, the zero padding occurs from the left. But there is another problem: in binary you cannot store some decimal numbers exactly. For example:
0.3 dec = 0.0100110011001100110011001100110011001100... bin
0.25 dec = 0.01 bin
As you can see, the binary sequence for 0.3 dec is infinite (just as we cannot write 1/3 exactly in decimal), hence if we crop it to only 24 bits we lose the rest, and the number is no longer what we wanted.
If you compare 0.3 and 0.125, the 0.125 is exact while 0.3 is not, yet 0.125 is much smaller than 0.3. So your measure is not valid unless you explore many very close values that cover the rounding steps and compute the maximum difference over such a set. For example, you could compare
1.0000001f
1.0000002f
1.0000003f
1.0000004f
1.0000005f
1.0000006f
1.0000007f
1.0000008f
1.0000009f
and remember the maximum difference fabs(x - round(x)), and then do the same for
100000001
100000002
100000003
100000004
100000005
100000006
100000007
100000008
100000009
And then compare the two differences.
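The suggested comparison can be sketched in Python (max_err and to_f32 are my own illustrative helpers; Python floats are binary64, so the float32 rounding error is exactly the value lost when packing with struct):

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (binary64) to the nearest IEEE-754 binary32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

def max_err(values):
    # Largest absolute error introduced by storing each value as float32.
    return max(abs(v - to_f32(v)) for v in values)

small = [1.0000001 + i * 1e-7 for i in range(9)]  # 1.0000001 .. 1.0000009
big   = [100000001.0 + i for i in range(9)]       # 100000001 .. 100000009

print(max_err(small))  # below ~6e-8 (the step between floats near 1 is 2**-23)
print(max_err(big))    # 4.0 (the step between floats near 1e8 is 8)
```

Near 1 the worst-case rounding error is about 6e-8; near 1e8 it is 4, which is why the big number appears to "lose" digits.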
On top of all this, you are missing one very important thing: the errors that occur while converting from text to binary and back, which are usually even bigger. First of all, try to print your numbers without rounding (for example, force printing of 20 digits after the decimal point).
Also, the numbers are stored in binary base, so in order to print them you need to convert to decimal base, which involves multiplication and division by 10. The more bits are missing (zero padded) from the number, the bigger the print errors are. To be as precise as possible, a trick is used: print the number in hex (no rounding errors) and then convert the hex string itself to decimal base using integer math. That is much more accurate than naive floating-point prints. For more info, see related QAs:
my best attempt to print 32 bit floats with least rounding errors (integer math only)
How do libraries/programming languages convert floats to strings
How do I convert a very long binary number to decimal?
Now to get back to the number of "precise" digits represented by float. For the integer part of a number it is easy:
dec_digits = floor(log10(2^24)) = floor(7.22) = 7
However, for digits after the decimal point it is not as clear-cut (for the first few decimal digits), as there is a lot of rounding going on. For more info see:
How do you print the EXACT value of a floating point number?
I think what they mean in their documentation is that, depending on the number, the precision ranges from 6 to 9 decimal digits. Go by the standard that is explained on the page you linked; sometimes Microsoft is a bit lazy when it comes to documentation, like the rest of us.
The problem with floating point is that it is inaccurate. If you put the number 1.05 into the site in your link you will notice that it cannot be accurately stored in floating point. It's actually stored as 1.0499999523162841796875. It's stored this way to do calculations faster. It's not great for money, e.g. what if your item is priced at $1.05 and you sell a billion of them.
The smallNumber is more precise than big
Incorrect comparison. The other number has more significant digits.
1.0000001f is attempting 8 digits of decimal precision.
100000001f attempts 9.
I have a problem understanding the precision of float type.
To best understand float precision, think binary. Use "%a" for printing with a C99 or later compiler.
float is stored base 2. The significand is a Dyadic rational, some integer/power-of-2.
float commonly has 24 bits of binary precision. (23-bit explicitly encoded, 1 implied)
Between [1.0 ... 2.0), there are 2^23 different float values.
Between [2.0 ... 4.0), there are 2^23 different float values.
Between [4.0 ... 8.0), there are 2^23 different float values.
...
The possible values of a float are not distributed uniformly among powers-of-10. The grouping of float values to power-of-10 (decimal precision) results in the wobbling 6 to 9 decimal digits of precision.
How to calculate float type precision?
To find the difference between subsequent float values, since C99, use nextafterf()
Illustrative code:
#include <math.h>
#include <stdio.h>

void foooo(float b) {
    float a = nextafterf(b, 0);
    float c = nextafterf(b, b * 2.0f);
    printf("%-15a %.9e\n", a, a);
    printf("%-15a %.9e\n", b, b);
    printf("%-15a %.9e\n", c, c);
    printf("Local decimal precision %.2f digits\n", 1.0 - log10((c - b) / b));
}

int main(void) {
    foooo(1.0000001f);
    foooo(100000001.0f);
    return 0;
}
Output
0x1p+0 1.000000000e+00
0x1.000002p+0 1.000000119e+00
0x1.000004p+0 1.000000238e+00
Local decimal precision 7.92 digits
0x1.7d783ep+26 9.999999200e+07
0x1.7d784p+26 1.000000000e+08
0x1.7d7842p+26 1.000000080e+08
Local decimal precision 8.10 digits

Truncate significant figures [duplicate]

Why do some numbers lose accuracy when stored as floating point numbers?
For example, the decimal number 9.2 can be expressed exactly as a ratio of two decimal integers (92/10), both of which can be expressed exactly in binary (0b1011100/0b1010). However, the same ratio stored as a floating point number is never exactly equal to 9.2:
32-bit "single precision" float: 9.19999980926513671875
64-bit "double precision" float: 9.199999999999999289457264239899814128875732421875
How can such an apparently simple number be "too big" to express in 64 bits of memory?
In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa (also called the significand). A very simple number, say 9.2, is actually this fraction:
5179139571476070 × 2^−49
Where the exponent is -49 and the mantissa is 5179139571476070. The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.
9.2 may be simply 92/10, but 10 cannot be expressed as 2^n if n is limited to integer values.
Seeing the Data
First, a few functions to see the components that make a 32- and 64-bit float. Gloss over these if you only care about the output (example in Python):
import struct
from itertools import islice

def float_to_bin_parts(number, bits=64):
    if bits == 32:  # single precision
        int_pack = 'I'
        float_pack = 'f'
        exponent_bits = 8
        mantissa_bits = 23
        exponent_bias = 127
    elif bits == 64:  # double precision. all python floats are this
        int_pack = 'Q'
        float_pack = 'd'
        exponent_bits = 11
        mantissa_bits = 52
        exponent_bias = 1023
    else:
        raise ValueError('bits argument must be 32 or 64')
    bin_iter = iter(bin(struct.unpack(int_pack, struct.pack(float_pack, number))[0])[2:].rjust(bits, '0'))
    return [''.join(islice(bin_iter, x)) for x in (1, exponent_bits, mantissa_bits)]
There's a lot of complexity behind that function, and it'd be quite the tangent to explain, but if you're interested, the important resource for our purposes is the struct module.
Python's float is a 64-bit, double-precision number. In other languages such as C, C++, Java and C#, double-precision has a separate type double, which is often implemented as 64 bits.
When we call that function with our example, 9.2, here's what we get:
>>> float_to_bin_parts(9.2)
['0', '10000000010', '0010011001100110011001100110011001100110011001100110']
Interpreting the Data
You'll see I've split the return value into three components. These components are:
Sign
Exponent
Mantissa (also called Significand, or Fraction)
Sign
The sign is stored in the first component as a single bit. It's easy to explain: 0 means the float is a positive number; 1 means it's negative. Because 9.2 is positive, our sign value is 0.
Exponent
The exponent is stored in the middle component as 11 bits. In our case, 0b10000000010. In decimal, that represents the value 1026. A quirk of this component is that you must subtract a number equal to 2^(# of bits − 1) − 1 to get the true exponent; in our case, that means subtracting 0b1111111111 (decimal number 1023) to get the true exponent, 0b00000000011 (decimal number 3).
Mantissa
The mantissa is stored in the third component as 52 bits. However, there's a quirk to this component as well. To understand this quirk, consider a number in scientific notation, like this:
6.0221413 × 10^23
The mantissa would be the 6.0221413. Recall that the mantissa in scientific notation always begins with a single non-zero digit. The same holds true for binary, except that binary only has two digits: 0 and 1. So the binary mantissa always starts with 1! When a float is stored, the 1 at the front of the binary mantissa is omitted to save space; we have to place it back at the front of our third element to get the true mantissa:
1.0010011001100110011001100110011001100110011001100110
This involves more than just a simple addition, because the bits stored in our third component actually represent the fractional part of the mantissa, to the right of the radix point.
When dealing with decimal numbers, we "move the decimal point" by multiplying or dividing by powers of 10. In binary, we can do the same thing by multiplying or dividing by powers of 2. Since our third element has 52 bits, we divide it by 2^52 to move it 52 places to the right:
0.0010011001100110011001100110011001100110011001100110
In decimal notation, that's the same as dividing 675539944105574 by 4503599627370496 to get 0.1499999999999999. (This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal; for more detail, see: 675539944105574 / 4503599627370496.)
Now that we've transformed the third component into a fractional number, adding 1 gives the true mantissa.
Recapping the Components
Sign (first component): 0 for positive, 1 for negative
Exponent (middle component): Subtract 2^(# of bits − 1) − 1 to get the true exponent
Mantissa (last component): Divide by 2^(# of bits) and add 1 to get the true mantissa
Calculating the Number
Putting all three parts together, we're given this binary number:
1.0010011001100110011001100110011001100110011001100110 × 10^11 (all in binary: the base 10 is decimal 2, the exponent 11 is decimal 3)
Which we can then convert from binary to decimal:
1.1499999999999999 × 2^3 (inexact!)
And multiply to reveal the final representation of the number we started with (9.2) after being stored as a floating point value:
9.1999999999999993
Representing as a Fraction
9.2
Now that we've built the number, it's possible to reconstruct it into a simple fraction:
1.0010011001100110011001100110011001100110011001100110 × 10^11
Shift the mantissa to a whole number:
10010011001100110011001100110011001100110011001100110 × 10^(11−110100)
Convert to decimal:
5179139571476070 × 2^(3−52)
Subtract the exponent:
5179139571476070 × 2^−49
Turn the negative exponent into division:
5179139571476070 / 2^49
Multiply out the exponent:
5179139571476070 / 562949953421312
Which equals:
9.1999999999999993
9.5
>>> float_to_bin_parts(9.5)
['0', '10000000010', '0011000000000000000000000000000000000000000000000000']
Already you can see the mantissa is only 4 digits followed by a whole lot of zeroes. But let's go through the paces.
Assemble the binary scientific notation:
1.0011 × 10^11
Shift the decimal point:
10011 × 10^(11−100)
Subtract the exponent:
10011 × 10^−1
Binary to decimal:
19 × 2^−1
Negative exponent to division:
19 / 2^1
Multiply exponent:
19 / 2
Equals:
9.5
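The whole derivation can be cross-checked with Python's fractions module, since Fraction(float) recovers the exact rational value of the stored binary64 double:

```python
from fractions import Fraction

# Fraction(float) gives the exact rational value of the stored double.
nine_two = Fraction(9.2)
print(nine_two == Fraction(5179139571476070, 2**49))  # True (same value, reduced)
print(float(nine_two) == 9.2)                         # True: exact round trip
print(Fraction(9.5))                                  # 19/2, stored exactly
```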
Further reading
The Floating-Point Guide: What Every Programmer Should Know About Floating-Point Arithmetic, or, Why don’t my numbers add up? (floating-point-gui.de)
What Every Computer Scientist Should Know About Floating-Point Arithmetic (Goldberg 1991)
IEEE Double-precision floating-point format (Wikipedia)
Floating Point Arithmetic: Issues and Limitations (docs.python.org)
Floating Point Binary
This isn't a full answer (mhlester already covered a lot of good ground I won't duplicate), but I would like to stress how much the representation of a number depends on the base you are working in.
Consider the fraction 2/3
In good-ol' base 10, we typically write it out as something like
0.666...
0.666
0.667
When we look at those representations, we tend to associate each of them with the fraction 2/3, even though only the first representation is mathematically equal to the fraction. The second and third representations/approximations have an error on the order of 0.001, which is actually much worse than the error between 9.2 and 9.1999999999999993. In fact, the second representation isn't even rounded correctly! Nevertheless, we don't have a problem with 0.666 as an approximation of the number 2/3, so we shouldn't really have a problem with how 9.2 is approximated in most programs. (Yes, in some programs it matters.)
Number bases
So here's where number bases are crucial. If we were trying to represent 2/3 in base 3, then
(2/3)₁₀ = 0.2₃
In other words, we have an exact, finite representation for the same number by switching bases! The take-away is that even though you can convert any number to any base, all rational numbers have exact finite representations in some bases but not in others.
To drive this point home, let's look at 1/2. It might surprise you that even though this perfectly simple number has an exact representation in base 10 and 2, it requires a repeating representation in base 3.
(1/2)₁₀ = 0.5₁₀ = 0.1₂ = 0.1111...₃
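A small sketch makes these expansions concrete (expand is an illustrative helper of mine that emits the first fractional digits of an exact Fraction in a given base):

```python
from fractions import Fraction

def expand(frac, base, digits):
    """Return the first `digits` fractional digits of 0 <= frac < 1 in `base`."""
    out = []
    for _ in range(digits):
        frac *= base
        d = int(frac)       # the next digit is the integer part
        out.append(d)
        frac -= d           # keep only the remaining fractional part
    return out

print(expand(Fraction(2, 3), 10, 6))  # [6, 6, 6, 6, 6, 6] -> 0.666... repeats
print(expand(Fraction(2, 3), 3, 6))   # [2, 0, 0, 0, 0, 0] -> exactly 0.2 in base 3
print(expand(Fraction(1, 2), 3, 6))   # [1, 1, 1, 1, 1, 1] -> 0.111... in base 3
```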
Why are floating point numbers inaccurate?
Because often-times, they are approximating rationals that cannot be represented finitely in base 2 (the digits repeat), and in general they are approximating real (possibly irrational) numbers which may not be representable in finitely many digits in any base.
While all of the other answers are good there is still one thing missing:
It is impossible to represent irrational numbers (e.g. π, sqrt(2), log(3), etc.) precisely!
And that actually is why they are called irrational. No amount of bit storage in the world would be enough to hold even one of them. Only symbolic arithmetic is able to preserve their precision.
If you limit your math needs to rational numbers only, the problem of precision becomes manageable. You would need to store a pair of (possibly very big) integers a and b to hold the number represented by the fraction a/b. All your arithmetic would have to be done on fractions, just like in high-school math (e.g. a/b * c/d = ac/bd).
But of course you would still run into the same kind of trouble when pi, sqrt, log, sin, etc. are involved.
TL;DR
For hardware accelerated arithmetic only a limited amount of rational numbers can be represented. Every not-representable number is approximated. Some numbers (i.e. irrational) can never be represented no matter the system.
There are infinitely many real numbers (so many that you can't enumerate them), and there are infinitely many rational numbers (it is possible to enumerate them).
The floating-point representation is a finite one (like anything in a computer), so unavoidably many, many numbers are impossible to represent. In particular, 64 bits only allow you to distinguish among 18,446,744,073,709,551,616 different values (which is nothing compared to infinity). With the standard convention, 9.2 is not one of them. The numbers that can be represented are of the form m · 2^e for some integers m and e.
You might come up with a different numeration system, base 10 for instance, where 9.2 would have an exact representation. But other numbers, say 1/3, would still be impossible to represent.
Also note that double-precision floating-points numbers are extremely accurate. They can represent any number in a very wide range with as much as 15 exact digits. For daily life computations, 4 or 5 digits are more than enough. You will never really need those 15, unless you want to count every millisecond of your lifetime.
Why can we not represent 9.2 in binary floating point?
Floating point numbers are (simplifying slightly) a positional numbering system with a restricted number of digits and a movable radix point.
A fraction can only be expressed exactly using a finite number of digits in a positional numbering system if the prime factors of the denominator (when the fraction is expressed in its lowest terms) are factors of the base.
The prime factors of 10 are 5 and 2, so in base 10 we can represent any fraction of the form a/(2^b·5^c).
On the other hand, the only prime factor of 2 is 2, so in base 2 we can only represent fractions of the form a/(2^b).
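That divisibility rule can be written down directly (terminates is an illustrative helper: a fraction in lowest terms has a finite expansion in a base exactly when its denominator divides some power of the base):

```python
from math import gcd

def terminates(num, den, base):
    """True if num/den has a finite digit expansion in the given base."""
    den //= gcd(num, den)        # reduce to lowest terms
    g = gcd(den, base)
    while g > 1:                 # strip every prime factor shared with the base
        while den % g == 0:
            den //= g
        g = gcd(den, base)
    return den == 1              # nothing left means the expansion terminates

print(terminates(92, 10, 10))  # True:  9.2 is finite in decimal
print(terminates(92, 10, 2))   # False: 9.2 repeats in binary
print(terminates(1, 2, 3))     # False: 1/2 repeats in base 3
```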
Why do computers use this representation?
Because it's a simple format to work with and it is sufficiently accurate for most purposes. Basically the same reason scientists use "scientific notation" and round their results to a reasonable number of digits at each step.
It would certainly be possible to define a fraction format, with (for example) a 32-bit numerator and a 32-bit denominator. It would be able to represent numbers that IEEE double precision floating point could not, but equally there would be many numbers that can be represented in double precision floating point that could not be represented in such a fixed-size fraction format.
However the big problem is that such a format is a pain to do calculations on. For two reasons.
If you want to have exactly one representation of each number, then after each calculation you need to reduce the fraction to its lowest terms. That means that for every operation you basically need to do a greatest-common-divisor calculation.
If after your calculation you end up with an unrepresentable result because the numerator or denominator is too large, you need to find the closest representable result. This is non-trivial.
Some languages do offer fraction types, but usually in combination with arbitrary precision; this avoids needing to worry about approximating fractions, but it creates its own problem: when a number passes through a large number of calculation steps, the size of the denominator, and hence the storage needed for the fraction, can explode.
Some languages also offer decimal floating-point types; these are mainly used in scenarios where it is important that the results the computer gets match pre-existing rounding rules that were written with humans in mind (chiefly financial calculations). These are slightly more difficult to work with than binary floating point, but the biggest problem is that most computers don't offer hardware support for them.

Why does Convert.ToDecimal(Double) round to 15 significant figures?

I have a double with 17 digits after the decimal point, i.e.:
double myDouble = 0.12345678901234567;
If I convert this to a decimal like this:
decimal myDecimal = Convert.ToDecimal(myDouble);
then the value of myDecimal is rounded, as per the Convert.ToDecimal documentation, to 15 significant digits (i.e. 0.123456789012346). My question is, why is this rounding performed?
I understand that if my original number could be accurately represented in base 10 and I was trying to store it as a double, then we could only have confidence in the first 15 digits. The final two digits would be subject to rounding error. But, that's a base 10 biased point of view. My number may be more accurately represented by a double and I wish to convert it to decimal while preserving as much accuracy as possible.
Shouldn't Convert.ToDecimal aim to minimise the difference between myDouble and (double)Convert.ToDecimal(myDouble)?
From the documentation of Double:
A Double value has up to 15 decimal digits of precision, although a
maximum of 17 digits is maintained internally
So, as the double value itself has a maximum of 15 decimal places, converting it to Decimal will result in a Decimal value with 15 significant figures.
The behavior of the rounding guarantees that conversion of any Decimal which has at most fifteen significant figures to double and back to Decimal will yield the original value unchanged. If values were rounded to sixteen figures rather than fifteen, such a guarantee would not only fail to hold for numbers with sixteen figures, it wouldn't even hold for much shorter values. For example, the closest Double value to 9000.04 is approximately 9000.040000000000873115; rounding that to sixteen figures would yield 9000.040000000001.
The choice of rounding one should use depends upon whether one regards the best Decimal representation of the double value 9000.04 as being 9000.04m, 9000.040000000001m, 9000.0400000000008731m, or perhaps something else. Microsoft probably decided that any representation other than 9000.04m would be confusing.
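The 9000.04 example is easy to verify in Python, whose floats are the same IEEE-754 binary64 as C#'s double; formatting to 15 versus 16 significant digits shows exactly the behavior described:

```python
from decimal import Decimal

d = 9000.04
print(Decimal(d))    # the exact stored value, 9000.0400000000008731...
print(f"{d:.15g}")   # 9000.04           (15 significant digits round-trip cleanly)
print(f"{d:.16g}")   # 9000.040000000001 (16 digits expose the representation error)
```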
The following is from the documentation of the method in question.
http://msdn.microsoft.com/en-us/library/a69w9ca0(v=vs.110).aspx
"The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
Every terminating binary fraction is exactly representable as a decimal fraction, so the minimum possible difference for a finite number is always 0. The IEEE 754 64-bit representation of your number is exactly equal to 0.1234567890123456634920984242853592149913311004638671875
Every conversion from binary floating point to decimal or decimal string must embody some decision about rounding. In theory, they could preserve all the digits, but that would result in some very long outputs with most of the digits having little meaning.
One option, used in Java's Double.toString, is to stop at the shortest decimal representation that would convert back to the original double.
Most set some fairly arbitrary limit. 15 significant digits preserves most meaningful digits.
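Python's repr uses the same shortest-round-trip strategy as Java's Double.toString (since Python 3.1), which makes the trade-off easy to see:

```python
x = 0.1
print(repr(x))              # 0.1 (shortest string that maps back to this double)
print(float(repr(x)) == x)  # True: the short form converts back exactly
print(f"{x:.20f}")          # 0.10000000000000000555 (closer to the stored value)
```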

float.TryParse() rounded the value in C#

While trying to parse a variable to float with float.TryParse(value, NumberStyles.Float, CultureInfo.InvariantCulture, out fValue),
the value 6666.77777 is rounded off to 6666.778.
Can anyone help? I don't want my value to be rounded.
float only has around 6 significant digits. Note that digits before the decimal point count too. double has higher precision in that regard (around 16):
PS> [float]::Parse('6666,77777')
6666.778
PS> [double]::Parse('6666,77777')
6666.77777
But as others noted, this is just an approximate value because it cannot be represented exactly in base 2. If you need decimal exactness (e.g. for money values) then use a decimal. For most other things binary floating point (i.e. float and double) should be fine.
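The rounding the question observed can be reproduced in Python by forcing the value through IEEE-754 binary32, the same format as C#'s float (to_f32 is an illustrative helper of mine):

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (binary64) to the nearest binary32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_f32(6666.77777))  # 6666.77783203125: only ~7 significant digits survive
print(6666.77777)          # 6666.77777: the double keeps all the digits entered
```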
Why don't you use Double.TryParse or Decimal.TryParse to support higher precision:
float: Approximately ±1.5 × 10^−45 to ±3.4 × 10^38 with 7 significant figures
double: Approximately ±5.0 × 10^−324 to ±1.7 × 10^308 with 15 or 16 significant figures
decimal: Approximately ±1.0 × 10^−28 to ±7.9 × 10^28 with 28 or 29 significant figures
Try this piece of code snippet instead:
double fValue;
double.TryParse("6666.77777", NumberStyles.Double, CultureInfo.InvariantCulture, out fValue);
OR
decimal fValue;
decimal.TryParse("6666.77777", NumberStyles.Decimal, CultureInfo.InvariantCulture, out fValue);
That is because the value 6666.77777 cannot be represented in the binary form floating-point numbers use: the actual number contains more binary digits than the floating point has room for. The resulting number is the closest approximation.
Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requires correct rounding: that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several different rounding schemes (or rounding modes). Historically, truncation was the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called banker's rounding) is more commonly used.
You should consider using double if you need more precision, or decimal if you need even more than that, though they too suffer from precision loss at some point.
You should use decimal if you need higher precision.

Do floating points have more precision if calculated at with a range of high values rather than low values?

Are floats with larger magnitudes more accurate to multiply / divide / add / subtract than floats with smaller magnitudes?
For example, would 567.56 / 345.54 be more accurate than .00097854 / .00021297 ?
The answer to your question is "no." Floating point numbers are (usually*) represented with a normalized mantissa and an exponent. Multiplication and division operate first on the normalized mantissa, then on the exponents.
Addition and subtraction are, of course, another story. Operations like your examples:
567.56 + 345.54 or .00097854 - .00021297
work fine. But operations with disparate orders of magnitude like
567.56 + .00097854 or 345.54 - .00021297
may lose some low-order precision.
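A Python illustration of the point above (IEEE doubles behave the same in C#):

```python
# Operands of similar magnitude can add exactly when both are sums of
# powers of two:
print(0.5 + 0.25 == 0.75)    # True

# With wildly different magnitudes, the small operand can vanish entirely:
# the spacing between adjacent doubles near 1e16 is 2.0, so adding 1.0
# rounds back to 1e16 and the contribution is lost.
print((1e16 + 1.0) - 1e16)   # 0.0
```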
The IEEE Floating point standards includes denormalized numbers. If you are an astrophysicist or runtime-library developer, you may need to understand them. See http://en.wikipedia.org/wiki/Denormal_number
For IEEE 754 binary floating-point numbers (the most common), floating-point values have the same number of bits in the significand throughout most of the exponent range. However, there is a portion of the range where the significand has effectively fewer bits. And the relative error caused by rounding does vary depending on where the significand lies within its range.
IEEE 754 floating-point numbers are represented by a sign (+1 or -1, encoded as 0 or 1), an exponent (for double-precision, -1022 to 1023, encoded as the exponent plus 1023, so 1 to 2046), and a significand (for double-precision, a fraction usually from 1 to just under 2, represented with 53 bits but encoded with 52 bits because the first bit is implicitly 1).
E.g., the number 6.5 is encoded with the bits 0 (sign +1), 10000000001 (exponent 2), and 1010000000000000000000000000000000000000000000000000 (binary fraction 1.1010, hex 1.a, decimal 1.3125). We can write this in hexadecimal floating-point as 0x1.ap2 (hex fraction 1.a multiplied by 2 to the power of decimal 2). Writing in hexadecimal floating-point enables humans to see the floating-point representation fairly easily.
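Python can show both the hexadecimal floating-point form and the raw encoding of 6.5 directly, which makes the description above concrete:

```python
import struct

# The hexadecimal floating-point form: fraction 1.a times 2**2.
print((6.5).hex())                   # '0x1.a000000000000p+2'

# The raw IEEE 754 bytes: sign 0, exponent 10000000001 (1025, i.e. 2 + 1023),
# fraction bits 1010... -- packed big-endian as a 64-bit double.
print(struct.pack(">d", 6.5).hex())  # '401a000000000000'
```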
For the exponent, the encoding values of 0 and 2047 are special. When the encoding is 0, the exponent is the same as when the encoding is 1 (-1022), but the implicit bit of the fraction is 0 instead of 1. When the encoding is 2047, the floating-point object represents infinity (if the significand bits are all zero) or a NaN (otherwise).
When the encoded exponent is 0 and the significand bits are all zero, the number represents zero (with +0 and -0 distinguished by the sign). If the significand bits are not all zero, the number is said to be denormalized. This is because most numbers are “normalized” by adjusting the exponent so that the fraction is between 1 (inclusive) and 2 (exclusive). For denormalized numbers, the fraction is less than 1; it starts with “0.” instead of “1.”.
When the result of a floating-point operation is a denormalized number, it effectively has fewer bits in the significand. Thus, as numbers drop below 0x1p-1022 (2^-1022), the effective precision decreases.
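The normal/subnormal boundary is visible from Python, whose floats are IEEE 754 doubles:

```python
import sys

# The smallest normal double is 2**-1022:
print(sys.float_info.min == 2.0 ** -1022)   # True

# Below that, subnormals extend down to 2**-1074, but with ever fewer
# significand bits; 2**-1074 is the very last one, so halving it
# rounds to zero.
tiny = 2.0 ** -1074
print(tiny > 0.0)    # True
print(tiny / 2)      # 0.0
```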
When numbers are in the normal range (not underflowing to denormals and not overflowing to infinity), then there are no differences in the significands of numbers with different exponents, so:
(2a+2b)/2 has exactly the same result as a+b.
(2a-2b)/2 has exactly the same result as a-b.
(2ab)/2 has exactly the same result as ab.
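These identities can be checked directly; multiplying or dividing a double by 2 only shifts the exponent, so (within the normal range) no significand bits change. A Python sketch using the question's own values:

```python
a, b = 567.56, 345.54

# Scaling both operands by 2 and scaling the result back down changes
# nothing, bit for bit, as long as nothing overflows or goes subnormal:
print((2 * a + 2 * b) / 2 == a + b)   # True
print((2 * a - 2 * b) / 2 == a - b)   # True
print((2 * a * b) / 2 == a * b)       # True
```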
Note, however, that the relative error can change. When a floating-point operation is performed, the exact mathematical result must be rounded to a representable value. This rounding can happen only in units representable by the significand. For a given exponent, the bits in the significand have a fixed value. So the last bit in the significand represents a certain value. That value is a greater portion of a significand near 1 than it is of a significand near 2.
For a double-precision result, the unit of least precision (ULP) is 1 part in 2^52 of the value of the greatest bit in the significand. When using round-to-nearest mode (the most common default), the greatest error is at most half of that, because, if the representable number in one direction is more than half an ULP away, the number in the other direction is less than half an ULP away. And the closer number is returned by a proper floating-point operation.
Thus, the maximum relative error in a result with a significand near 1 is slightly over 2^-53, but the maximum relative error in a result with a significand near 2 is slightly under 2^-54.
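Python 3.9+ exposes the ULP directly via math.ulp, which makes the step between binades visible:

```python
import math

# The ULP doubles when the significand crosses from the [1, 2) binade
# into [2, 4):
print(math.ulp(1.0) == 2 ** -52)   # True
print(math.ulp(2.0) == 2 ** -51)   # True

# So the worst-case relative rounding error (half an ULP over the value)
# is about 2**-53 for results near 1, and about 2**-54 just under 2.
```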
For the sake of completeness, I have to disagree a bit and say Yes, it may matter somehow...
Indeed, if you perform 56756.0 / 34554.0, then you'll get the nearest representable Float to the exact mathematical result, with a single floating point rounding "error".
This is because 56756.0 and 34554.0 are representable exactly in floating point (single or double precision IEEE 754), and because according to IEEE 754 standard, operations perform an exact rounding operation (in default mode to the nearest).
If you write 567.56 / 345.54, then neither number is represented exactly in radix-2 floating point, so the result of this operation accumulates 3 floating point rounding "errors": one for each literal conversion, plus one for the division.
Let's compare the result in Squeak Smalltalk in double precision (Float), converted to exact arithmetic (Fraction with arbitrary integer length at numerator and denominator):
((56756.0 / 34554.0) asFraction - (56756 / 34554)) asFloat.
-> -7.932275867322412e-17
So far, so good, the magnitude of error is less than or equal to half an ulp, as promised by IEEE 754:
(56756 / 34554) asFloat ulp / 2
-> 1.1102230246251565e-16
With cumulated rounding errors, you may get a larger error (but never a smaller):
((567.56 / 345.54) asFraction - (56756 / 34554)) asFloat
-> -3.0136736359825544e-16
((0.00056756 / 0.00034554) asFraction - (56756 / 34554)) asFloat
-> 3.647664511768385e-16
The above example is hard to generalize, and I largely agree with the other answers: generally, NO, you should only care about relative precision.
... Unless maybe if you want to implement some function with very strict tolerance about round off errors...
No. In the sense that there's the same number of significant digits available no matter what the order of magnitude (exponent part) of your number is.
