Math.Pow incorrect results - C#

double a1;
a1 = Math.Pow(somehighnumber, 40);
something.Text = Convert.ToString(a1);
The result I get has E+41 in it, like 1,125123E+41. I don't get why.

Your question is very unclear; in the future, you'll probably get better results if you post a clear question with a code sample that actually compiles and demonstrates the problem you're actually having. Don't make people guess what the problem is.
If what you want to do is display a double-precision floating point number without the scientific notation then use the standard number formatting specifier:
Console.WriteLine(string.Format("{0:N}", Math.Pow(10, 100)));
Results in:
10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.00
If what you have a problem with is that the result is rounded off, then don't use double-precision floats; they are accurate to only about 15-16 significant digits. Try doing your arithmetic in BigIntegers, which have arbitrary integer precision.
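For example, a minimal sketch of the BigInteger approach (someHighNumber is a stand-in for whatever base the original code used):

using System;
using System.Numerics; // requires a reference to System.Numerics (.NET 4.0+)

class PowDemo
{
    static void Main()
    {
        BigInteger someHighNumber = 123456789;
        // BigInteger.Pow is exact: no rounding, no scientific notation
        BigInteger result = BigInteger.Pow(someHighNumber, 40);
        Console.WriteLine(result); // prints every digit
    }
}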

That's scientific notation. It means 1.125123 × 10^41. Scientific notation is useful if your number becomes so large that displaying it in full would require a lot of screen space. Also, floating point arithmetic is not precise so even if you did display the number in full most of the digits would be incorrect anyway.
If you want precise calculations you should use BigInteger instead of double (this type is present in .NET 4.0 or newer).

double bigNum = Math.Pow(100, 100);
string bigString = string.Format("{0:F}", bigNum);
http://msdn.microsoft.com/en-us/library/dwhawy9k.aspx#FFormatString
Or you can use the BigInteger type introduced in .NET 4.0 (add a reference to System.Numerics to the project):
System.Numerics.BigInteger bigInt = System.Numerics.BigInteger.Pow(1000, 1000);
string veryBigString = bigInt.ToString("F");
As you can see, it also works with ToString.

Related

Converting to a decimal without rounding

I have a float number, say 1.2999, that comes back as 1.3 when I pass it to Convert.ToDecimal. The reason I want to convert the number to a decimal is for precision when adding and subtracting, not for rounding up. I know for sure that the decimal type can hold that number, since it can hold numbers bigger than a float can.
Why is it rounding the number up? Is there anyway to stop it from rounding?
Edit: I don't know why mine is rounding and yours is not; here is my exact code:
decNum += Convert.ToDecimal((9 * 0.03F) + 0);
I am really confused now. When I go into the debugger and see the output of the (9 * 0.03F) + 0 part, it shows 0.269999981 as float, but then it converts it into 0.27 decimal. I know however that 3% of 9 is 0.27. So does that mean that the original calculation is incorrect, and the convert is simply fixing it?
Damn I hate numbers so much lol!
What you say is happening doesn't appear to happen.
This program:
using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            float f = 1.2999f;
            Console.WriteLine(f);
            Decimal d = Convert.ToDecimal(f);
            Console.WriteLine(d);
        }
    }
}
Prints:
1.2999
1.2999
I think you might be having problems when you convert the value to a string.
Alternatively, as ByteBlast notes below, perhaps you gave us the wrong test data.
Using float f = 1.2999999999f; does print 1.3
The reason for that is that the float value is not precise enough to represent 1.2999999999f exactly. That particular value ends up being rounded to 1.3 - but note that it is the float value that is being rounded before it is converted to the decimal.
If you use a double instead of a float, this doesn't happen unless you go to even more digits of precision (when you reach 1.299999999999999)
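A quick sketch of that difference (Convert.ToDecimal keeps at most 7 significant digits from a float and 15 from a double, so the rounding happens in the float itself):

float f = 1.2999999999f;                 // too many digits for a float; rounds to the same float as 1.3f
Console.WriteLine(Convert.ToDecimal(f)); // 1.3
double d = 1.2999999999;                 // a double can hold these digits
Console.WriteLine(Convert.ToDecimal(d)); // 1.2999999999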
[EDIT] Based on your revised question, I think it's just expected rounding errors, so definitely read the following:
See "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for details.
Also see this link (recommended by Tim Schmelter in comments below).
Another thing to be aware of is that the debugger might display numbers to a different level of precision than the default double.ToString() (or equivalent) so that can lead to you seeing slightly different numbers.
Aside:
You might have some luck with the "round trip" format specifier:
Console.WriteLine(1.299999999999999.ToString());
Prints 1.3
But:
Console.WriteLine(1.299999999999999.ToString("r"));
Prints 1.2999999999999989
(Note that sneaky little 8 at the penultimate digit!)
For ultimate precision you can use the Decimal type, as you are already doing. That's optimised for base-10 numbers and provides a great many more digits of precision.
However, be aware that it's hundreds of times slower than float or double and that it can also suffer from rounding errors, albeit much less.
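A small illustration of decimal's own rounding errors:

decimal third = 1m / 3m;       // 0.3333333333333333333333333333 (28 significant digits)
Console.WriteLine(third * 3m); // 0.9999999999999999999999999999 - not exactly 1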

C# loss of precision when dividing doubles

I know this has been discussed time and time again, but I can't seem to get even the simplest example of a one-step division of doubles to produce the expected, unrounded outcome in C# - so I'm wondering whether there's some compiler flag or something else strange I'm not thinking of. Consider this example:
double v1 = 0.7;
double v2 = 0.025;
double result = v1 / v2;
When I break after the last line and examine it in the VS debugger, the value of "result" is 27.999999999999996. I'm aware that I can resolve it by changing to "decimal," but that's not possible in the case of the surrounding program. Is it not strange that two low-precision doubles like this can't divide to the correct value of 28? Is the only solution really to Math.Round the result?
Is it not strange that two low-precision doubles like this can't divide to the correct value of 28?
No, not really. Neither 0.7 nor 0.025 can be exactly represented in the double type. The exact values involved are:
0.6999999999999999555910790149937383830547332763671875
0.025000000000000001387778780781445675529539585113525390625
Now are you surprised that the division doesn't give exactly 28? Garbage in, garbage out...
As you say, the right way to represent decimal numbers exactly is to use decimal. If the rest of your program is using the wrong type, that just means you need to work out which is higher: the cost of getting the wrong answer, or the cost of changing the whole program.
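A minimal sketch of both sides (the "R" round-trip specifier exposes the value the debugger shows):

double v1 = 0.7;
double v2 = 0.025;
Console.WriteLine((v1 / v2).ToString("R")); // 27.999999999999996

decimal m1 = 0.7m;
decimal m2 = 0.025m;
Console.WriteLine(m1 / m2);                 // 28 - both operands are exact in decimal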
Precision is always a problem when you are dealing with float or double.
It's a known issue in computer science, and every programming language is affected by it. To minimize these sorts of errors, which are mostly related to rounding, the whole field of numerical analysis is dedicated to it.
For instance, take the following code.
What would you expect?
You would expect the answer to be 1, but this is not the case; you will get 0.9999907.
float v = .001f;
float sum = 0;
for (int i = 0; i < 1000; i++)
{
    sum += v;
}
It has nothing to do with how 'simple' or 'small' the double numbers are. Strictly speaking, neither 0.7 nor 0.025 may be stored as exactly those numbers in computer memory, so performing calculations on them may provide interesting results if you're after heavy precision.
So yes, use decimal or round.
To explain this by analogy:
Imagine that you are working in base 3. In base 3, 0.1 is (in decimal) 1/3, or 0.333333333…
So you can EXACTLY represent 1/3 (decimal) in base 3, but you get rounding errors when trying to express it in decimal.
Well, you can get exactly the same thing with some decimal numbers: They can be exactly expressed in decimal, but they CAN'T be exactly expressed in binary; hence, you get rounding errors with them.
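For example, 0.1 itself has no exact binary representation; the "G17" specifier (enough digits to round-trip a double) makes that visible:

Console.WriteLine(0.1.ToString("G17")); // 0.10000000000000001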
Short answer to your first question: No, it's not strange. Floating-point numbers are discrete approximations of the real numbers, which means that rounding errors will propagate and scale when you do arithmetic operations.
There's a whole field of mathematics called numerical analysis that basically deals with how to minimize the errors when working with such approximations.
It's the usual floating point imprecision. Not every number can be represented as a double, and those minor representation inaccuracies add up. It's also a reason why you should not compare doubles to exact numbers. I just tested it, and result.ToString() showed 28 (maybe some kind of rounding happens in double.ToString()?). result == 28 returned false though. And (int)result returned 27. So you'll just need to expect imprecisions like that.
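A sketch reproducing those observations (plain ToString() gives 28 under .NET Framework's default 15-digit formatting; .NET Core 3.0 and later print the shortest round-trippable string, 27.999999999999996, instead):

double result = 0.7 / 0.025;
Console.WriteLine(result.ToString()); // 28 on .NET Framework (see note above)
Console.WriteLine(result == 28);      // False
Console.WriteLine((int)result);       // 27 - the cast truncates toward zero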

Get the decimal part of a number and the number of places after the decimal point (C#)

Does anyone know of an elegant way to get the decimal part of a number only? In particular I am looking to get the exact number of places after the decimal point so that the number can be formatted appropriately. I was wondering if there is a way to do this without any kind of string extraction using the culture-specific decimal separator.
For example
98.0 would be formatted as 98
98.20 would be formatted as 98.2
98.2765 would be formatted as 98.2765 etc.
If it's only for formatting purposes, just calling ToString will do the trick, I guess?
double d = (double)5 / 4;
Console.WriteLine(d.ToString()); // prints 1.75
d = (double)7 / 2;
Console.WriteLine(d.ToString()); // prints 3.5
d = 7;
Console.WriteLine(d.ToString()); // prints 7
That will, of course, format the number according to the current culture (meaning that the decimal sign, thousand separators and such will vary).
Update
As Clement H points out in the comments: if we are dealing with great numbers, at some point d.ToString() will return a string with scientific formatting instead (such as "1E+16" instead of "10000000000000000"). One way to overcome this problem, and force the full number to be printed, is to use a custom format string such as d.ToString("0.##########"), with enough "#" placeholders to cover the decimals you need; for the numbers in the code sample above it produces the same output as plain ToString().
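For instance:

double big = 1e16;
Console.WriteLine(big.ToString());                 // 1E+16
Console.WriteLine(big.ToString("0.##########"));   // 10000000000000000
double small = (double)5 / 4;
Console.WriteLine(small.ToString("0.##########")); // 1.75 - same as plain ToString()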
You can get all of the relevant information from the Decimal.GetBits method assuming you really mean System.Decimal. (If you're talking about decimal formatting of a float/double, please clarify the question.)
Basically GetBits will return you 4 integers in an array.
You can use the scaling factor (the fourth integer, after masking out the sign) to indicate the number of decimal places, but you should be aware that it's not necessarily the number of significant decimal places. In particular, the decimal representations of 1 and 1.0 are different (the former is 1/1, the latter is 10/10).
Unfortunately, manipulating the 96 bit integer is going to require some fiddly arithmetic unless you can use .NET 4.0 and BigInteger.
To be honest, you'll get a simpler solution by using the built-in formatting with CultureInfo.InvariantCulture and then finding everything to the right of ".".
Just to expand on the point about GetBits, this expression gets the scaling factor from a decimal called foo:
(decimal.GetBits(foo)[3] & 0xFF0000) >> 16
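A short sketch wrapping that expression (GetScale is a hypothetical helper name), including the 1 versus 1.0 distinction mentioned above:

using System;

class ScaleDemo
{
    static int GetScale(decimal d) => (decimal.GetBits(d)[3] & 0xFF0000) >> 16;

    static void Main()
    {
        Console.WriteLine(GetScale(98.2765m)); // 4
        Console.WriteLine(GetScale(1m));       // 0
        Console.WriteLine(GetScale(1.0m));     // 1 - the trailing zero is stored
    }
}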
You could use the Int() function (Math.Truncate in C#) to get the whole number component, then subtract it from the original.
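A sketch of that idea in C#; note the fractional part comes back as an inexact double:

double value = 98.2765;
double whole = Math.Truncate(value); // 98
double frac = value - whole;         // roughly 0.2765, subject to double imprecision
Console.WriteLine(frac);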

Simple calculation, different results in C# and Delphi

The question is, why do these code snippets give different results?
private void InitializeOther()
{
    double d1, d2, d3;
    int i1;
    d1 = 4.271343859532459e+18;
    d2 = 4621333065.0;
    i1 = 5;
    d3 = (i1 * d1) - Utils.Sqr(d2);
    MessageBox.Show(d3.ToString());
}
and
procedure TForm1.InitializeOther;
var
  d1, d2, d3: Double;
  i1: Integer;
begin
  d1 := 4.271343859532459e+18;
  d2 := 4621333065.0;
  i1 := 5;
  d3 := i1 * d1 - Sqr(d2);
  ShowMessage(FloatToStr(d3));
end;
The Delphi code gives me 816, while the C# code gives me 0. Using a calculator, I get 775. Can anybody please give me a detailed explanation?
Many thanks!
Delphi stores intermediate values as Extended (an 80-bit floating point type). This expression is Extended:
i1*d1-Sqr(d2);
The same may not be true of C# (I don't know). The extra precision could be making a difference.
Note that you're at the limits of the precision of the Double data type here, which means that calculations here won't be accurate.
Example:
d1 = 4.271343859532459e+18
which can be said to be the same as:
d1 = 4271343859532459000
and so:
d1 * i1 = 21356719297662295000
in reality, the value in .NET will be more like this:
2.1356719297662296E+19
Note the rounding there. Hence, at this level, you're not getting the right answers.
This is certainly not an explanation of this exact situation but it will help to explain the problem.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
A C# double has at most 16 digits of precision. Taking 4.271343859532459e+18 and multiplying it by 5 will give a number of 19 digits. You want to have a number with only 3 digits as a result. Double cannot do this.
In C#, the Decimal type can handle this example -- if you know to use the 123M format to initialize the Decimal values.
Decimal d1, d2, d3;
int i1;
d1 = 4.271343859532459e+18M;
d2 = 4621333065.0M;
i1 = 5;
d3 = (i1 * d1) - (d2*d2);
MessageBox.Show(d3.ToString());
This gives 775.00 which is the correct answer.
Any calculation such as this is going to lead to dramas with typical floating point arithmetic. The larger the difference in scaling of the numbers, the bigger the chance of an accuracy problem.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems gives a good overview.
I think this is an error caused by limited precision (above all, because doubles are used instead of integers). Perhaps d1 isn't exactly the same after the assignment. d2*d2 will surely be different from the correct value, as it's bigger than 2^32.
As 5*d1 is even bigger than 2^64, even using 64-bit integers won't help. You'd have to use bignums or a 128-bit integer class to get the correct result.
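For instance, BigInteger (one of the bignum options, available since .NET 4.0) reproduces the calculator's answer, assuming the inputs are taken as exact integers:

using System;
using System.Numerics;

class ExactDemo
{
    static void Main()
    {
        BigInteger d1 = BigInteger.Parse("4271343859532459000"); // 4.271343859532459e+18
        BigInteger d2 = 4621333065;
        BigInteger d3 = 5 * d1 - d2 * d2;
        Console.WriteLine(d3); // 775
    }
}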
Basically, as other people have pointed out, double-precision isn't precise enough for the scale of the computation you're trying to do. Delphi uses "extended precision" by default, which adds another 16 bits over Double to allow for more precise computation. The .NET framework doesn't have an extended-precision data type.
Not sure what type your calculator is using, but it's apparently doing something different from both Delphi and C#.
As commented by the others, the double isn't precise enough for your computation.
The decimal is a good alternative, even though someone pointed out that it would be rounded:
"In C#, the Decimal type cannot handle this example easily either, since 4.271343859532459e+18 will be rounded to 4271343859532460000."
This is not the case: the answer will be correct if you use decimal. But, as he said, the range is different.

decimal to double

I have the following test code:
decimal test1 = 0.0500000000000000045656554454M;
double test2 = (double)test1;
This results in test2 showing as 0.05 when debugging. Why is it being rounded to 2 decimal places?
Thanks
The value from that conversion is actually 0.050000000000000009714451465470119728706777095794677734375, as shown by DoubleConverter. That's the exact value of the nearest double to the decimal you converted.
When you use the debugger or normal string formatting, you aren't usually shown the exact result.
The reason is that double can contain no more than 15-16 significant digits.
see double (C# Reference)
You should take a look at this article about floating-point arithmetic and .NET. The rounding occurs due to a combination of how the number gets converted to a double-precision floating point value and how it is formatted when printed, since .NET defaults to 15 significant digits for doubles and your original number contains digits beyond that.
You could try test2.ToString("0.000000000000000000000000") to see if you might squeeze out any more information from the number, but I doubt it will.
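A sketch along those lines; the "G17" specifier (or "R" on older runtimes) is enough to reveal that the stored value isn't exactly 0.05:

decimal test1 = 0.0500000000000000045656554454M;
double test2 = (double)test1;
Console.WriteLine(test2.ToString("G17")); // e.g. 0.05000000000000001 rather than 0.05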
There are two reasons I can think of:
1. The different representations of decimal and double. See this article for more information about floating-point representation. It is possible that there are not enough bits for the whole number representation in the double.
2. The way numbers are printed. If your printing options specify fewer than 18 digits after the decimal point, you'll get a rounded result.
I would check the printing options first to make sure the problem isn't there.
.. But know that the only solution for the first problem is to stop using double :-)
