Simple calculation, different results in C# and Delphi

The question is, why do these code snippets give different results?
private void InitializeOther()
{
    double d1, d2, d3;
    int i1;
    d1 = 4.271343859532459e+18;
    d2 = 4621333065.0;
    i1 = 5;
    d3 = (i1 * d1) - Utils.Sqr(d2);
    MessageBox.Show(d3.ToString());
}
and
procedure TForm1.InitializeOther;
var
  d1, d2, d3: Double;
  i1: Integer;
begin
  d1 := 4.271343859532459e+18;
  d2 := 4621333065.0;
  i1 := 5;
  d3 := i1 * d1 - Sqr(d2);
  ShowMessage(FloatToStr(d3));
end;
The Delphi code gives me 816, while the C# code gives me 0. Using a calculator, I get 775. Can anybody please give me a detailed explanation?
Many thanks!

Delphi stores intermediate values as Extended (an 80-bit floating-point type), so this expression is evaluated in Extended precision:
i1*d1-Sqr(d2);
The same may not be true of C# (I don't know). The extra precision could be making a difference.

Note that you're at the limits of the precision of the Double data type here, which means the calculations won't be exact.
Example:
d1 = 4.271343859532459e+18
which can be said to be the same as:
d1 = 4271343859532459000
and so:
d1 * i1 = 21356719297662295000
in reality, the value in .NET will be more like this:
2.1356719297662296E+19
Note the rounding there. Hence, at this level, you're not getting the right answers.
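To see this from the C# side, you can print the intermediate results with the round-trip ("R") format specifier. A minimal sketch (using Console output rather than a MessageBox; the printed values are what the rounding described above implies):

double d1 = 4.271343859532459e+18;
double d2 = 4621333065.0;
double product = 5 * d1;   // mathematically 21356719297662295000
double square = d2 * d2;   // mathematically 21356719297662294225
// "R" prints enough digits to round-trip the stored double values.
Console.WriteLine(product.ToString("R"));  // 2.1356719297662296E+19
Console.WriteLine(square.ToString("R"));   // 2.1356719297662296E+19 -- the same double
Console.WriteLine(product - square);       // 0, which is why the C# version shows 0

Both exact products are closer to the same representable double than to any other, so the subtraction cancels to zero.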

This is certainly not an explanation of this exact situation but it will help to explain the problem.
What Every Computer Scientist Should Know About Floating-Point Arithmetic

A C# double has only about 15-16 significant decimal digits of precision. Taking 4.271343859532459e+18 and multiplying it by 5 gives a 20-digit number, and you need the result to be accurate down to the last 3 of those digits. Double cannot do this.
In C#, the Decimal type can handle this example -- if you know to use the 123M format to initialize the Decimal values.
Decimal d1, d2, d3;
int i1;
d1 = 4.271343859532459e+18M;
d2 = 4621333065.0M;
i1 = 5;
d3 = (i1 * d1) - (d2*d2);
MessageBox.Show(d3.ToString());
This gives 775.00, which is the correct answer.
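The reason this works: decimal carries 28-29 significant digits, so both 20-digit intermediate products are held exactly. A minimal sketch of just that step (not part of the original answer):

decimal product = 5m * 4271343859532459000m;  // 21356719297662295000, exact in decimal
decimal square = 4621333065m * 4621333065m;   // 21356719297662294225, exact in decimal
Console.WriteLine(product - square);          // 775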

Any calculation such as this is going to lead to dramas with typical floating point arithmetic. The larger the difference in scaling of the numbers, the bigger the chance of an accuracy problem.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems gives a good overview.

I think this is an error caused by limited precision (above all, because doubles are being used where exact integers would be needed). Perhaps d1 isn't stored exactly as written after the assignment, and d2*d2 will certainly differ from the exact value, because d2 is bigger than 2^32, so its square exceeds 2^64 and cannot fit in a double's 53-bit significand.
Since 5*d1 is even bigger than 2^64, using 64-bit integers won't help either. You'd have to use bignums or a 128-bit integer class to get the exact result.
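Since .NET 4.0, System.Numerics.BigInteger can play the role of such a bignum. A minimal sketch, assuming the inputs are treated as the exact whole numbers from the question:

using System;
using System.Numerics;

BigInteger d1 = BigInteger.Parse("4271343859532459000"); // 4.271343859532459e+18 written out exactly
BigInteger d2 = 4621333065;
BigInteger d3 = 5 * d1 - d2 * d2;
Console.WriteLine(d3); // 775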

Basically, as other people have pointed out, double-precision isn't precise enough for the scale of the computation you're trying to do. Delphi uses "extended precision" by default, which adds another 16 bits over Double to allow for more precise computation. The .NET framework doesn't have an extended-precision data type.
Not sure what type your calculator is using, but it's apparently doing something different from both Delphi and C#.

As the others have commented, a double isn't precise enough for this computation.
Decimal is a good alternative, even though someone pointed out that it would be rounded; it is not. The claim was:
In C#, the Decimal type cannot handle this example easily either, since 4.271343859532459e+18 will be rounded to 4271343859532460000.
That is not the case: the answer you get with decimal is correct. But, as he said, the range of decimal is different from that of double.

Related

Converting to a decimal without rounding

I have a float, say 1.2999, that returns 1.3 when put through Convert.ToDecimal. The reason I want to convert the number to a decimal is for precision when adding and subtracting, not for rounding up. I know for sure that the decimal type can hold that number, since it can hold numbers bigger than a float can.
Why is it rounding the number up? Is there anyway to stop it from rounding?
Edit: I don't know why mine is rounding and yours is not, here is my exact code:
decNum += Convert.ToDecimal((9 * 0.03F) + 0);
I am really confused now. When I go into the debugger and see the output of the (9 * 0.03F) + 0 part, it shows 0.269999981 as float, but then it converts it into 0.27 decimal. I know however that 3% of 9 is 0.27. So does that mean that the original calculation is incorrect, and the convert is simply fixing it?
Damn I hate numbers so much lol!
What you say is happening doesn't appear to happen.
This program:
using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            float f = 1.2999f;
            Console.WriteLine(f);
            Decimal d = Convert.ToDecimal(f);
            Console.WriteLine(d);
        }
    }
}
Prints:
1.2999
1.2999
I think you might be having problems when you convert the value to a string.
Alternatively, as ByteBlast notes below, perhaps you gave us the wrong test data.
Using float f = 1.2999999999f; does print 1.3
The reason for that is the float value is not precise enough to represent 1.2999999999f exactly. That particular value ends up being rounded to 1.3 - but note that it is the float value that is being rounded before it is converted to the decimal.
If you use a double instead of a float, this doesn't happen unless you go to even more digits of precision (when you reach 1.299999999999999)
[EDIT] Based on your revised question, I think it's just expected rounding errors, so definitely read the following:
See "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for details.
Also see this link (recommended by Tim Schmelter in comments below).
Another thing to be aware of is that the debugger might display numbers to a different level of precision than the default double.ToString() (or equivalent) so that can lead to you seeing slightly different numbers.
Aside:
You might have some luck with the "round trip" format specifier:
Console.WriteLine(1.299999999999999.ToString());
Prints 1.3
But:
Console.WriteLine(1.299999999999999.ToString("r"));
Prints 1.2999999999999989
(Note that sneaky little 8 at the penultimate digit!)
For ultimate precision you can use the Decimal type, as you are already doing. That's optimised for base-10 numbers and provides a great many more digits of precision.
However, be aware that it's hundreds of times slower than float or double and that it can also suffer from rounding errors, albeit much less.
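For example (a small sketch, not tied to the original question's numbers), dividing by three exposes the same kind of representation error in decimal, just much further out:

decimal third = 1m / 3m;
Console.WriteLine(third);       // 0.3333333333333333333333333333
Console.WriteLine(third * 3m);  // 0.9999999999999999999999999999, not exactly 1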

C# loss of precision when dividing doubles

I know this has been discussed time and time again, but I can't seem to get even the most simple example of a one-step division of doubles to result in the expected, unrounded outcome in C# - so I'm wondering if perhaps there's i.e. some compiler flag or something else strange I'm not thinking of. Consider this example:
double v1 = 0.7;
double v2 = 0.025;
double result = v1 / v2;
When I break after the last line and examine it in the VS debugger, the value of "result" is 27.999999999999996. I'm aware that I can resolve it by changing to "decimal," but that's not possible in the case of the surrounding program. Is it not strange that two low-precision doubles like this can't divide to the correct value of 28? Is the only solution really to Math.Round the result?
Is it not strange that two low-precision doubles like this can't divide to the correct value of 28?
No, not really. Neither 0.7 nor 0.025 can be exactly represented in the double type. The exact values involved are:
0.6999999999999999555910790149937383830547332763671875
0.025000000000000001387778780781445675529539585113525390625
Now are you surprised that the division doesn't give exactly 28? Garbage in, garbage out...
As you say, the right way to represent decimal numbers exactly is to use decimal. If the rest of your program is using the wrong type, that just means you need to work out which is higher: the cost of getting the wrong answer, or the cost of changing the whole program.
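If switching just this one calculation is feasible, the same division done in decimal is exact, because both operands are exact in base 10. A minimal sketch:

decimal v1 = 0.7m;
decimal v2 = 0.025m;
decimal result = v1 / v2;
Console.WriteLine(result);         // 28
Console.WriteLine(result == 28m);  // True -- no representation error in either operand or the quotient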
Precision is always a problem when you are dealing with float or double.
It's a known issue in computer science, and every programming language is affected by it. To minimize these sorts of rounding-related errors, a whole field of numerical analysis is dedicated to them.
For instance, let's take the following code.
What would you expect?
You would expect the answer to be 1, but that is not the case: you will get 0.9999907.
float v = .001f;
float sum = 0;
for (int i = 0; i < 1000; i++)
{
    sum += v;
}
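For comparison (a sketch that is not part of the original answer), the same accumulation done in decimal comes out exact, because 0.001 is representable exactly in base 10:

decimal dv = 0.001m;
decimal dsum = 0m;
for (int i = 0; i < 1000; i++)
{
    dsum += dv;  // every step is exact in decimal
}
Console.WriteLine(dsum);  // 1.000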
It has nothing to do with how 'simple' or 'small' the double numbers are. Strictly speaking, neither 0.7 nor 0.025 may be stored as exactly those numbers in computer memory, so performing calculations on them may provide interesting results if you're after heavy precision.
So yes, use decimal or round.
To explain this by analogy:
Imagine that you are working in base 3. In base 3, 0.1 is (in decimal) 1/3, or 0.333333333...
So you can EXACTLY represent 1/3 (decimal) in base 3, but you get rounding errors when trying to express it in decimal.
Well, you can get exactly the same thing with some decimal numbers: They can be exactly expressed in decimal, but they CAN'T be exactly expressed in binary; hence, you get rounding errors with them.
Short answer to your first question: No, it's not strange. Floating-point numbers are discrete approximations of the real numbers, which means that rounding errors will propagate and scale when you do arithmetic operations.
There's a whole field of mathematics called numerical analysis that basically deals with how to minimize the errors when working with such approximations.
It's the usual floating point imprecision. Not every number can be represented as a double, and those minor representation inaccuracies add up. It's also a reason why you should not compare doubles to exact numbers. I just tested it, and result.ToString() showed 28 (maybe some kind of rounding happens in double.ToString()?). result == 28 returned false though. And (int)result returned 27. So you'll just need to expect imprecisions like that.
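On "you should not compare doubles to exact numbers": the usual workaround is a tolerance comparison. A minimal sketch (the tolerance here is an arbitrary illustrative choice, not something from the question):

double result = 0.7 / 0.025;
double expected = 28;
double tolerance = 1e-9;  // pick a tolerance appropriate to the scale of your data
bool closeEnough = Math.Abs(result - expected) < tolerance;
Console.WriteLine(closeEnough);  // True, even though result == 28 is false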

Convert float to double loses precision but not via ToString

I have the following code:
float f = 0.3f;
double d1 = System.Convert.ToDouble(f);
double d2 = System.Convert.ToDouble(f.ToString());
The results are equivalent to:
d1 = 0.30000001192092896;
d2 = 0.3;
I'm curious to find out why this is?
It's not a loss of precision: 0.3 is not representable in binary floating point. When the system converts the float to a string it rounds; if you print out enough significant digits, you will get something that makes more sense.
To see it more clearly
float f = 0.3f;
double d1 = System.Convert.ToDouble(f);
double d2 = System.Convert.ToDouble(f.ToString("G20"));
string s = string.Format("d1 : {0} ; d2 : {1} ", d1, d2);
output
"d1 : 0.300000011920929 ; d2 : 0.300000012 "
You're not losing precision; you're upcasting to a more precise representation (double, 64-bits long) from a less precise representation (float, 32-bits long). What you get in the more precise representation (past a certain point) is just garbage. If you were to cast it back to a float FROM a double, you would have the exact same precision as you did before.
What happens here is that you've got 32 bits allocated for your float. You then upcast to a double, adding another 32 bits for representing your number (for a total of 64). Those new bits are the least significant (the farthest to the right of your decimal point), and have no bearing on the actual value since they were indeterminate before. As a result, those new bits have whatever values they happened to have when you did your upcast. They're just as indeterminate as they were before -- garbage, in other words.
When you downcast from a double to a float, it'll lop off those least-significant bits, leaving you with 0.300000 (7 digits of precision).
The mechanism for converting from a string to a float is different; the parser needs to analyze the semantic meaning of the character string "0.3" and figure out how that relates to a floating point value. It can't be done with bit-shifting like the float/double conversion -- thus, the value that you expect.
For more info on how floating point numbers work, you may be interested in checking out this wikipedia article on the IEEE 754-1985 standard (which has some handy pictures and good explanation of the mechanics of things), and this wiki article on the updates to the standard in 2008.
edit:
First, as #phoog pointed out below, upcasting from a float to a double isn't as simple as adding another 32 bits to the space reserved to record the number. In reality, you get an additional 3 bits for the exponent (for a total of 11) and an additional 29 bits for the fraction (for a total of 52). Add in the sign bit and you've got your total of 64 bits for the double.
Additionally, suggesting that there are 'garbage bits' in those least significant locations is a gross generalization, and probably not correct for C#. A bit of explanation, and some testing below, suggest to me that this conversion is deterministic for C#/.NET, and probably the result of some specific mechanism in the conversion rather than of reserving memory for additional precision.
Way back in the beforetimes, when your code would compile into a machine-language binary, compilers (C and C++ compilers, at least) would not add any CPU instructions to 'clear' or initialize the value in memory when you reserved space for a variable. So, unless the programmer explicitly initialized a variable to some value, the values of the bits that were reserved for that location would maintain whatever value they had before you reserved that memory.
In .NET land, your C# or other .NET language compiles into an intermediate language (CIL, Common Intermediate Language), which is then Just-In-Time compiled by the CLR to execute as native code. There may or may not be a variable initialization step added by either the C# compiler or the JIT compiler; I'm not sure.
Here's what I do know:
I tested this by casting the float to three different doubles. Each one of the results had the exact same value.
That value was exactly the same as #rerun's value above: double d1 = System.Convert.ToDouble(f); result: d1 : 0.300000011920929
I get the same result if I cast using double d2 = (double)f; Result: d2 : 0.300000011920929
With three of us getting the same values, it looks like the upcast value is deterministic (and not actually garbage bits), indicating that .NET is doing something the same way across all of our machines. It's still true to say that the additional digits are no more or less precise than they were before, because 0.3f isn't exactly equal to 0.3 -- it's equal to 0.3, up to seven digits of precision. We know nothing about the values of additional digits beyond those first seven.
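A small sketch (not from the original answer) showing that both conversion paths produce the same double, with the "R" format revealing the widened value:

float f = 0.3f;
double viaConvert = Convert.ToDouble(f);
double viaCast = (double)f;
Console.WriteLine(viaConvert.ToString("R"));  // 0.30000001192092896
Console.WriteLine(viaCast.ToString("R"));     // 0.30000001192092896
Console.WriteLine(viaConvert == viaCast);     // True -- the widening conversion is deterministic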
I use a cast through decimal to get the correct result in this case and in similar ones:
float ff = 99.95f;
double dd = (double)(decimal)ff;
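What that round trip does, roughly: the float-to-decimal conversion keeps only about seven significant digits, so the value is re-rounded to the 99.95 that was intended before being widened to double. A small sketch of the difference (the printed values are what I'd expect from those conversions, shown for illustration):

float ff = 99.95f;
Console.WriteLine(((double)ff).ToString("R"));           // 99.94999694824219 -- the float's stored value, widened directly
Console.WriteLine(((double)(decimal)ff).ToString("R"));  // 99.95 -- decimal re-rounds to ~7 significant digits first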

Math-pow incorrect results

double a1;
a1 = Math.Pow(somehighnumber, 40);
something.Text = Convert.ToString(xyz);
The result I get has E+41 in it.
It's like 1,125123E+41 etc. I don't get why.
Your question is very unclear; in the future, you'll probably get better results if you post a clear question with a code sample that actually compiles and demonstrates the problem you're actually having. Don't make people guess what the problem is.
If what you want to do is display a double-precision floating point number without the scientific notation then use the standard number formatting specifier:
Console.WriteLine(string.Format("{0:N}", Math.Pow(10, 100)));
Results in:
10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.00
If what you have a problem with is that the result is rounded off, then don't use double-precision floats; they are accurate to only 15 decimal places. Try doing your arithmetic in BigIntegers, which have arbitrary integer precision.
That's scientific notation. It means 1.125123 * 10^41. Scientific notation is useful if your number becomes so large that displaying it in full would require a lot of screen space. Also, floating point arithmetic is not precise so even if you did display the number in full most of the digits would be incorrect anyway.
If you want precise calculations you should use BigInteger instead of double (this type is present in .NET 4.0 or newer).
double bigNum = Math.Pow(100, 100);
string bigString = string.Format("{0:F}", bigNum);
http://msdn.microsoft.com/en-us/library/dwhawy9k.aspx#FFormatString
Or you can use the BigInteger type introduced in .NET 4.0 (add a reference to System.Numerics to the project):
Numerics.BigInteger bigInt = Numerics.BigInteger.Pow(1000, 1000);
string veryBigString = bigInt.ToString("F");
As you can see it also works with ToString.

Why is my number being rounded incorrectly?

This feels like the kind of code that only fails in-situ, but I will attempt to adapt it into a code snippet that represents what I'm seeing.
float f = myFloat * myConstInt; /* Where myFloat==13.45, and myConstInt==20 */
int i = (int)f;
int i2 = (int)(myFloat * myConstInt);
After stepping through the code, i==269, and i2==268. What's going on here to account for the difference?
Float math can be performed at higher precision than advertised. But as soon as you store it in float f, that extra precision is lost. You're not losing that precision in the second method until, of course, you cast the result down to int.
Edit: See this question Why differs floating-point precision in C# when separated by parantheses and when separated by statements? for a better explanation than I probably provided.
Because floating point variables are not infinitely accurate. Use a decimal if you need that kind of accuracy.
Different rounding modes may also play into this issue, but the accuracy problem is the one you're running into here, AFAIK.
Floating point has limited accuracy, and is based on binary rather than decimal. The decimal number 13.45 cannot be precisely represented in binary floating point, so rounds down. The multiplication by 20 further exaggerates the loss of precision. At this point you have 268.999... - not 269 - therefore the conversion to integer truncates to 268.
To get rounding to the nearest integer, you could try adding 0.5 before converting back to integer.
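A small sketch of that suggestion, reusing the question's myFloat and myConstInt (Math.Round is the more explicit alternative):

int nearest1 = (int)(myFloat * myConstInt + 0.5);      // 269: adding 0.5 before truncating rounds to nearest (for positive values)
int nearest2 = (int)Math.Round(myFloat * myConstInt);  // 269: Math.Round states the intent more clearly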
For "perfect" arithmetic, you could try using a Decimal or Rational numeric type - I believe C# has libraries for both, but am not certain. These will be slower, however.
EDIT - I have found a "decimal" type so far, but not a rational - I may be wrong about that being available. Decimal floating point is inaccurate, just like binary, but it's the kind of inaccuracy we're used to, so it gives less surprising results.
Replace with
double f = myFloat * myConstInt;
and see if you get the same answer.
I'd like to offer a different explanation.
Here's the code, which I've annotated (I looked into memory to dissect the floats):
float myFloat = 13.45f;                // In binary: 1101.01110011001100110011
int myConstInt = 20;
float f = myFloat * myConstInt;        // In binary: exactly 100001101 (269 decimal)
int i = (int)f;                        // Turns float 269 into int 269 -- no surprises
int i2 = (int)(myFloat * myConstInt);  // "Extra precision" causes truncation to 268
Let's look closer at the calculations:
f = 1101.01110011001100110011 * 10100 = 100001100.111111111111111 111
The part after the space is bits 25-27, which cause bit 24 to be rounded up, and hence the whole value to be rounded up to 269
int i2 = (int)(myFloat * myConstInt)
myfloat is extended to double precision for the calculation (0s are appended): 1101.0111001100110011001100000000000000000000000000000
myfloat * 20 = 100001100.11111111111111111100000000000000000000000000
Bits 54 and beyond are 0s, so no rounding is done: the cast results in the integer 268.
(A similar explanation would work if extended precision is used.)
UPDATE: I refined my answer and wrote a full-blown article called When Floats Don’t Behave Like Floats
