Does ToString("F") lose precision relative to ToString("G")? - c#

I'm programming in C# using .Net 3.5, trying to control the format in which floats are output as strings.
However, I'm seeing some unexpected behaviour. If my code contains (for example):
float value = 50.8836975;
Edit: Sorry, that (deleted) code "sample" was unhelpful. Basically, my question was seeking to explain the results of the debugging statements below, taken when I set a breakpoint after "value" - a C# float - had been assigned the result of a calculation. Jon Skeet's answer is exactly what I needed (his first line takes me to task for the unhelpful code).
Then I see the following results when I try various options in my Immediate window:
?value
50.8836975
?value.ToString("G9")
"50.8836975"
?value.ToString("F9")
"50.883700000"
Can anyone explain why my F9-formatted value seems to have lost 3 digits of precision?

Your question is unclear because you've given a real value without an f suffix, and tried to assign it to a float variable. If you're actually using a float variable, then the exact value is
50.883697509765625
If you're actually using a double variable, then the exact value is:
50.8836974999999966939867590554058551788330078125
I get the same results as you for F9 if you use a float, but not if you use a double.
The reason for the reduced precision is revealed by the documentation for System.Single (float):
By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
I believe F is correctly displaying all of the real digits of precision, in an attempt to prevent you from believing that you've actually got more information than you have.
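If you want to reproduce this outside the Immediate window, here is a minimal sketch (my addition, assuming .NET 3.5 as in the question; note the f suffix is needed for the literal to compile as a float):
float f = 50.8836975f;   // nearest float is 50.883697509765625
double d = 50.8836975;   // nearest double is 50.8836974999999966...
Console.WriteLine(f.ToString("G9")); // "50.8836975"
Console.WriteLine(f.ToString("F9")); // "50.883700000" on .NET 3.5 - only ~7 digits of a float are meaningful
Console.WriteLine(d.ToString("G9")); // "50.8836975"
Console.WriteLine(d.ToString("F9")); // "50.883697500" - a double carries 15-16 significant digits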

Related

How can I make the result to be the same in C++ and C# of double values [duplicate]

This question already has answers here: Is floating point math broken? (31 answers). Closed 3 years ago.
Please notice that I am not asking why a double value changes to a different precision, but how I can make the results the same.
I appreciate all the comments and answers below, but I didn't get a solution to my problem.
My question again.
I just ran into a precision problem with double values when rewriting code from C++ to C#.
I have a method that calculates the result of a financial product from several double values, but when I compare the results, I get different but similar results from the C++ and C# programs due to the precision of the double values.
Debugging the C++ program, I found that after I set the values of a and b
double a = 4.9;
double b = 5.1;
The values change: a is 4.9000000000000004 and b is 5.0999999999999996.
But in C#, a is still 4.9 and b is still 5.1, and in the end I get -0.062579630921710816 (C++ result) and -0.062579644244387111 (C# result).
I am totally new to C# and C++, so are there any solutions to make the calculation results the same (I cannot change the C++ code)?
Or is it possible to make the calculation result to be the same?
It's a display issue - the actual value is as you expect and matches your C++ code. Try this code to see the approximate¹ value stored:
double a = 4.9; double b = 5.1;
Console.WriteLine(a.ToString("G20"));
Console.WriteLine(b.ToString("G20"));
The result is:
4.9000000000000004
5.0999999999999996
See here to learn more about why I used G20, and what the other number format strings are.
¹ I say approximate value because computers can only store fractions composed of 1/pow(2, n). For example 0.75 is 1/2 + 1/4, so this can be represented accurately. In the same way that we can't accurately represent 1/3 in base 10 (we might write 0.3, 0.33, 0.333, 0.3333, etc.), you can't represent some fractions in base 2 (binary) because they can't be composed of powers of two. See this question for more on that.
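If you want to convince yourself that the C# double holds exactly the same bits as the C++ one, a small sketch (my addition, assuming a console app) is to dump the raw IEEE 754 bit pattern and compare it with what the C++ side stores (for example, by copying the double into a 64-bit integer there):
double a = 4.9;
double b = 5.1;
// Print the raw 64-bit patterns; identical literals compiled in C++ should give identical bits.
Console.WriteLine(BitConverter.DoubleToInt64Bits(a).ToString("X16"));
Console.WriteLine(BitConverter.DoubleToInt64Bits(b).ToString("X16"));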
Short answer: probably because of compiler optimisations.
Following the IEEE floating point spec to the letter would produce exactly the same results. But compilers often decide to sacrifice correctness for speed: applying algebraic rearrangements to your code, fusing multiple operations together - all kinds of optimisations can change the result.
Most C and C++ compilers provide flags that let you tune which of these optimisations are allowed. I'm not aware of C# providing any explicit guarantees.

double type Multiplication in C# giving me wrong values [duplicate]

This question already has answers here: Is floating point math broken? (31 answers). Closed 7 years ago.
If I execute the following expression in C#:
double i = 10*0.69;
i is: 6.8999999999999995. Why?
I understand numbers such as 1/3 can be hard to represent in binary as it has infinite recurring decimal places but this is not the case for 0.69. And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
How do I work around this? Use the decimal type?
Because you've misunderstood floating point arithmetic and how data is stored.
In fact, your code isn't actually performing any arithmetic at execution time in this particular case - the compiler will have done it, then saved a constant in the generated executable. However, it can't store an exact value of 6.9, because that value cannot be precisely represented in binary floating point format, just like 1/3 can't be precisely stored in a finite decimal representation.
See if this article helps you.
why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Stop behaving like a dilbert manager, and accept that computers, though cool and awesome, have limits. In your specific case, it doesn't just "hide" the problem, because you have specifically told it not to. The language (the computer) provides alternatives to the format, that you didn't choose. You chose double, which has certain advantages over decimal, and certain downsides. Now, knowing the answer, you're upset that the downsides don't magically disappear.
As a programmer, you are responsible for hiding this downside from managers, and there are many ways to do that. However, the makers of C# have a responsibility to make floating point work correctly, and correct floating point will occasionally result in incorrect math.
So will every other number storage method, as we do not have infinite bits. Our job as programmers is to work with limited resources to make cool things happen. They got you 90% of the way there, just get the torch home.
And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
I think this is a common mistake - you're thinking of floating point numbers as if they are base-10 (i.e decimal - hence my emphasis).
So - you're thinking that there are two whole-number parts to this double: 69 and divide by 100 to get the decimal place to move - which could also be expressed as:
69 x 10 to the power of -2.
However floating point types store the 'position of the point' in base-2, and the significand is a whole number of limited width (53 bits for a double).
Your double actually gets stored as:
7768709357214105 x 2 to the power of -50
which works out to 6.8999999999999994670929481799900531768798828125 - very close to, but not exactly, 6.9.
This isn't as much of a problem once you're used to it - most people know and expect that 1/3 can't be expressed accurately as a decimal or percentage. It's just that the fractions that can't be expressed in base-2 are different.
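If you want to see the stored approximation for yourself, a quick sketch (my addition, not part of the original answer) is to ask for a round-trippable string:
double i = 10 * 0.69;
// The default formatting on older .NET versions hides the error and prints 6.9;
// the round-trip ("R") or G17 formats show the value that is actually stored.
Console.WriteLine(i.ToString("R"));   // 6.8999999999999995
Console.WriteLine(i.ToString("G17")); // 6.8999999999999995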
but why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Because you told it to use binary floating point, and the solution is to use decimal floating point, so you are suggesting that the framework should disregard the type you specified and use decimal instead, which is very much slower because it is not directly implemented in hardware.
A more efficient solution is to not output the full value of the representation and explicitly specify the accuracy required by your output. If you format the output to two decimal places, you will see the result you expect. However if this is a financial application decimal is precisely what you should use - you've seen Superman III (and Office Space) haven't you ;)
Note that it is all a finite approximation of an infinite range, it is merely that decimal and double use a different set of approximations. The advantage of decimal is it produces the same approximations that you would if you were performing the calculation yourself. For example if you calculated 1/3, you would eventually stop writing 3's when it was 'good enough'.
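For example (a sketch of the formatting suggestion above, not code from the original answer):
double i = 10 * 0.69;
Console.WriteLine(i.ToString("F2"));             // "6.90"
Console.WriteLine(string.Format("{0:0.00}", i)); // "6.90"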
For the same reason that 1 / 3 in a decimal system comes out as 0.3333333333333333333333333333333333333333333 and not the exact fraction, which is infinitely long.
To work around it (e.g. to display on screen) try this:
double i = (double) Decimal.Multiply(10, (Decimal) 0.69);
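This works because Decimal.Multiply does the arithmetic in base 10. A slightly simpler sketch (my variant, not from the original answer) keeps everything in decimal from the start, using the m suffix so the literals are never stored as binary doubles:
decimal exact = 10m * 0.69m;        // exactly 6.90
double backToDouble = (double)exact; // convert back to double only if you need one - it becomes a binary approximation again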
Everyone seems to have answered your first question, but ignored the second part.

Is it possible to make an 'int' with a decimal value in C#?

I'm a new user of C#, but learned to make small simple games. So I'm having fun and training C# that way.
However, now I need to change a moving object's speed by 0.2, so I can change the speed on an interval without the object bugging out. I'm using 'int' values to set the speed values of the objects. My objects move at 2 pixels per millisecond (1/1000 sec). I have tried multiplying by 2, but after doing this once or twice the objects move so fast that they bug out.
I've looked through other questions on the site, but can't find anything that seems to help.
So:
Is it possible to make an 'int' which holds a decimal value?
If yes, then how can I do it without risking bugs in the program?
Thanks in advance!
Is it possible to make an 'int' which holds a decimal value?
No, a variable of type int can only contain an integer. In the world of C# and the CLR, an int is any integer number that can be represented by 32 bits. Nothing less, nothing more. However, a decimal value can be represented by integers; please see the update below and the comments.
In your case, I think that a float or a double would do the job. (I don't suggest decimal, since decimal is normally used for financial calculations.)
Update
One important outcome of the comments below, coming from mike-wise, is the fact that a float can be represented by integers, and this was actually the case before computers got floating point registers. One more contribution from mike is that more information on this can be found in The Art of Computer Programming, Volume 2, chapter 4.
If you want only the integer part, and if necessary the decimal part as well, you could use a float (or double) and force a cast to int.
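For example, a rough sketch of that idea (hypothetical names, not from the original question):
// Keep the speed as a float so it can change by 0.2 per step,
// and only cast to int when a whole number of pixels is needed.
float speed = 2.0f;
speed += 0.2f;                  // fractional changes are preserved
int pixelsToMove = (int)speed;  // truncates to the integer part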

Converting to a decimal without rounding

I have a float number, say 1.2999, that when put into a Convert.ToDecimal returns 1.3. The reason I want to convert the number to a decimal is for precision when adding and subtracting, not for rounding up. I know for sure that the decimal type can hold that number, since it can hold numbers bigger than a float can.
Why is it rounding the number up? Is there anyway to stop it from rounding?
Edit: I don't know why mine is rounding and yours is not, here is my exact code:
decNum += Convert.ToDecimal((9 * 0.03F) + 0);
I am really confused now. When I go into the debugger and see the output of the (9 * 0.03F) + 0 part, it shows 0.269999981 as float, but then it converts it into 0.27 decimal. I know however that 3% of 9 is 0.27. So does that mean that the original calculation is incorrect, and the convert is simply fixing it?
Damn I hate numbers so much lol!
What you say is happening doesn't appear to happen.
This program:
using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            float f = 1.2999f;
            Console.WriteLine(f);
            Decimal d = Convert.ToDecimal(f);
            Console.WriteLine(d);
        }
    }
}
Prints:
1.2999
1.2999
I think you might be having problems when you convert the value to a string.
Alternatively, as ByteBlast notes below, perhaps you gave us the wrong test data.
Using float f = 1.2999999999f; does print 1.3
The reason for that is that the float type is not precise enough to represent 1.2999999999 exactly. That particular value ends up being rounded to 1.3 - but note that it is the float value that is being rounded before it is converted to the decimal.
If you use a double instead of a float, this doesn't happen unless you go to even more digits of precision (when you reach 1.299999999999999)
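A small sketch of that difference (my addition, assuming a console app):
Console.WriteLine(Convert.ToDecimal(1.2999999999f)); // 1.3  - a float carries only ~7 significant digits into the decimal
Console.WriteLine(Convert.ToDecimal(1.2999999999d)); // 1.2999999999 - a double carries ~15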
[EDIT] Based on your revised question, I think it's just expected rounding errors, so definitely read the following:
See "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for details.
Also see this link (recommended by Tim Schmelter in comments below).
Another thing to be aware of is that the debugger might display numbers to a different level of precision than the default double.ToString() (or equivalent) so that can lead to you seeing slightly different numbers.
Aside:
You might have some luck with the "round trip" format specifier:
Console.WriteLine(1.299999999999999.ToString());
Prints 1.3
But:
Console.WriteLine(1.299999999999999.ToString("r"));
Prints 1.2999999999999989
(Note the sneaky little 8 at the penultimate digit!)
For ultimate precision you can use the Decimal type, as you are already doing. That's optimised for base-10 numbers and provides a great many more digits of precision.
However, be aware that it's hundreds of times slower than float or double and that it can also suffer from rounding errors, albeit much less.
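For example (a sketch illustrating that last point):
decimal third = 1m / 3m;
Console.WriteLine(third);       // 0.3333333333333333333333333333 - 28 digits, then it stops
Console.WriteLine(third * 3m);  // 0.9999999999999999999999999999 - not exactly 1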

Convert.ToDecimal(double x) - System.OverflowException

What is the maximum double value that can be represented/converted to a decimal?
How can this value be derived - example please.
Update
Given a maximum value for a double that can be converted to a decimal, I would expect to be able to round-trip the double to a decimal, and then back again. However, given a figure such as (2^52)-1 as in @Jirka's answer, this does not work. For example:
[Test]
public void round_trip_double_to_decimal()
{
    double maxDecimalAsDouble = (Math.Pow(2, 52) - 1);
    decimal toDecimal = Convert.ToDecimal(maxDecimalAsDouble);
    double toDouble = Convert.ToDouble(toDecimal);
    // Fails.
    Assert.That(toDouble, Is.EqualTo(maxDecimalAsDouble));
}
All integers between -9,007,199,254,740,992 and 9,007,199,254,740,991 can be exactly represented in a double. (Keep reading, though.)
The upper bound is derived as 2^53 - 1. The internal representation of it is something like (0x1.fffffffffffff * 2^52) if you pardon my hexadecimal syntax.
Outside of this range, many integers can still be exactly represented if they are a multiple of a power of two.
The highest integer whatsoever that can be accurately represented would therefore be 9,007,199,254,740,991 * (2 ^ 1023), which is even higher than Decimal.MaxValue but this is a pretty meaningless fact, given that the value does not bother to change, for example, when you subtract 1 in double arithmetic.
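To see the edge of that contiguous range in code (my sketch, not part of the original answer):
// 2^53 computed with integer arithmetic, so there is no floating point rounding on the way in.
double twoTo53 = (double)(1L << 53);         // 9,007,199,254,740,992
Console.WriteLine(twoTo53 - 1 == twoTo53);   // False - 2^53 - 1 is still exactly representable
Console.WriteLine(twoTo53 + 1 == twoTo53);   // True  - 2^53 + 1 is not, so it rounds back to 2^53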
Based on the comments and further research, I am adding info on .NET and Mono implementations of C# that relativizes most conclusions you and I might want to make.
Math.Pow does not seem to guarantee any particular accuracy, and it seems to deliver a bit or two less precision than a double can represent. This is not too surprising for a floating point function. The Intel floating point hardware does not have an instruction for exponentiation, and I expect that the computation involves logarithm and multiplication instructions, where intermediate results lose some precision. One would use BigInteger.Pow if integral accuracy was desired.
However, even (decimal)(double)9007199254740991M results in a round trip violation. This time it is, however, a known bug, a direct violation of Section 6.2.1 of the C# spec. Interestingly I see the same bug even in Mono 2.8. (The referenced source shows that this conversion bug can hit even with much lower values.)
Double literals are less rounded, but still a little: 9007199254740991D prints out as 9007199254740990D. This is an artifact of internal multiplication by 10 when parsing the string literal (before the upper and lower bound converge to the same double value based on the "first zero after the decimal point"). This again violates the C# spec, this time Section 9.4.4.3.
Unlike C, C# has no hexadecimal floating point literals, so we cannot avoid that multiplication by 10 by any other syntax, except perhaps by going through Decimal or BigInteger, if these only provided accurate conversion operators. I have not tested BigInteger.
The above could almost make you wonder whether C# invents its own unique floating point format with reduced precision. No, Section 11.1.6 references the 64-bit IEC 60559 representation. So the above are indeed bugs.
So, to conclude, you should be able to fit even 9007199254740991M in a double precisely, but it's quite a challenge to get the value in place!
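One way to sidestep the literal-parsing issue entirely (my own sketch, not from the original answer) is to build the double from integer arithmetic or directly from its IEEE 754 bit pattern:
long maxExact = (1L << 53) - 1;                    // 9,007,199,254,740,991
double viaInteger = (double)maxExact;              // the long -> double conversion is exact here
double viaBits = BitConverter.Int64BitsToDouble(0x433FFFFFFFFFFFFF); // same value, from its raw bits
Console.WriteLine(viaInteger == viaBits);          // True
Console.WriteLine(viaInteger.ToString("R"));       // 9007199254740991, the exact integer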
The moral of the story is that the traditional belief that "Arithmetic should be barely more precise than the data and the desired result" is wrong, as this famous article demonstrates (page 36), albeit in the context of a different programming language.
Don't store integers in floating point variables unless you have to.
MSDN Double data type
Decimal vs double
The value of Decimal.MaxValue is positive 79,228,162,514,264,337,593,543,950,335.
