How to print all the decimal digits without rounding? - C#

I want to calculate wind_chill by getting temp and wind_speed from the user.
All variables are declared as double.
wind_chill = 35.74 + 0.6215 * temp + (0.4275 * temp - 35.75) * System.Math.Pow(wind_speed, 0.16);
I am getting this output:
Enter temperature and wind Speed: 20 7 wind_chill is:11.03490062551
But I want to print all the decimal digits, without rounding.
Expected output:
wind_chill = 11.034900625509998
By declaring the variables as decimal and converting all the values to decimal,
I am getting this output:
wind_chill = 11.034900625510096
Still not matching the expected one. I searched but didn't find an answer. How do I get the expected output?

Calculate the value with doubles. It is just that .NET formats the value to 15 significant digits when printing. Use the R format to get all the digits.
var wind_chill = WindChillDbl(20.0, 7.0);
Console.WriteLine(String.Format("{0:R}", wind_chill));

public static double WindChillDbl(double temp, double wind_speed)
{
    return 35.74 + 0.6215 * temp + (0.4275 * temp - 35.75) * System.Math.Pow(wind_speed, 0.16);
}
"By default, the return value only contains 15 digits of precision although a maximum of 17 digits is maintained internally. If the value of this instance has greater than 15 digits, ToString returns PositiveInfinitySymbol or NegativeInfinitySymbol instead of the expected number. If you require more precision, specify format with the "G17" format specification, which always returns 17 digits of precision, or "R", which returns 15 digits if the number can be represented with that precision or 17 digits if the number can only be represented with maximum precision." MSDN

You can inspect your calculation to see that the values returned for most of the expression are accurate. The problem is with the accuracy of System.Math.Pow(wind_speed, 0.16);. If you look at wolframalpha for that input, there are significantly more digits provided than the 1.36526100641507 returned by Math.Pow.
The reason for this is that Math.Pow uses floating-point types, which are imprecise by design.
You can resolve this in a couple of ways:
Use BigRational
Rework the equation to somehow use BigInteger
See this question: What is the equivalent of the Java BigDecimal class in C#?, specifically this answer: https://stackoverflow.com/a/13813535/2127492
If you do go with the BigDecimal class provided in that answer, you will be able to make use of the method BigDecimal Pow(double basis, double exponent) to improve the accuracy of your calculation.
You can see your calculation with the above class here.

Related

c# print float values with more precision

I want to print floats with greater precision than is the default.
For example, when I print the value of PI from a float I get 6 decimals. But if I copy the same value from the float into a double and print it, I get 14 decimals.
Why do I get more precision when printing the same value but as a double?
How can I get Console.WriteLine() to output more decimals when printing floats without needing to copy it into a double first?
I also tried the 0.0000000000 format, but it did not write with more precision; it just added more zeroes. :-/
My test code:
float x = (float)Math.PI;
double xx = x;
Console.WriteLine(x);
Console.WriteLine(xx);
Console.WriteLine($"{x,18:0.0000000000}"); // Try to force more precision
Console.WriteLine($"{xx,18:0.0000000000}");
Output:
3,141593
3,14159274101257
3,1415930000 <-- The program just added more zeroes :-(
3,1415927410
I also tried to enter PI at https://www.h-schmidt.net/FloatConverter/IEEE754.html
The binary float representation of PI is 0b01000000010010010000111111011011
So the value is: 2 * 0b1.10010010000111111011011 = 2 * 1.57079637050628662109375 = 3.1415927410125732421875
So there are more decimals to output. How would I get C# to output this whole value?
There is no more precision hiding in a float. When you convert it to a double, the value does not change - but the extra digits you then see in the double print are just the longer decimal expansion of the same binary value. They carry no new information about pi; the number is no more accurate than before. This has everything to do with how binary floating point numbers work.
Let's look at the binary representation of the float:
01000000010010010000111111011011
The first bit is the sign, the next eight are the exponent, and the rest is the mantissa. What do we get when we cast it to a double? The cast is exact: the exponent is re-biased and the mantissa is padded with zeroes, so the value stays exactly 3.1415927410125732421875. But that value is not pi - the true double value of pi is 3.141592653589793, and the float agrees with it only to about seven significant digits. You get the illusion of greater precision, but it's only an illusion - everything past the seventh digit reflects the float's rounding error, not pi.
Note also that float and double print differently: a float is formatted with 7 significant digits, a double with 15, even when they hold the exact same value. If you care about the decimal representation rather than the binary value, follow the cast with a rounding step - for example by round-tripping through the decimal string:
double.Parse(((float)Math.PI).ToString())
Instead of casting the float to a double, this first converts it to its decimal representation (a string), and creates the double from that. Now you have a double that no longer pretends to have more precision than is actually there; when you print it out, you get 3.1415930000. Still rounded, of course, since it's still a binary->decimal conversion, but the rounding happens at a later digit than in the float version, and the zeroes are really zeroes, except for the last one (which is only approximately zero). (Note that float.ToString() gives 7 digits on .NET Framework; .NET Core 3.0+ emits the shortest round-trippable string instead, so the result there differs.)
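A quick sketch of the two routes side by side (again, float.ToString() gives 7 digits on .NET Framework but the shortest round-trippable string on .NET Core 3.0+, so the parsed result differs between runtimes):
float f = (float)Math.PI;
double cast = f;                               // exact cast: 3.1415927410125732...
double viaString = double.Parse(f.ToString()); // nearest double to the printed string
Console.WriteLine(cast.ToString("G17"));
Console.WriteLine(viaString.ToString("G17"));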
If you want real decimal precision, use a decimal type (a scaled int, or decimal). float and double are both binary numbers, and only have binary precision. A finite decimal number isn't necessarily finite in binary, just like a finite trinary number isn't necessarily finite in decimal (e.g. 1/3 is finite in trinary but has no finite decimal representation).
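The classic one-line illustration of that difference:
Console.WriteLine(0.1 + 0.2 == 0.3);    // False: binary doubles pick up tiny errors
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal stores base-10 digits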
A float within C# has a precision of 7 significant digits and no more. For a value like PI that means 1 digit before the decimal point and 6 after.
If you do have any more digits in your output, they might be entirely wrong.
If you do need more digits, you have to use either double, which has 15-16 digits of precision, or decimal, which has 28-29 digits.
See MSDN for reference.
You can easily verify that as the digits of PI are a little different from your output. The correct first 40 digits are: 3.14159 26535 89793 23846 26433 83279 50288 4197
To print the value with six places after the decimal point:
Console.WriteLine("{0:N6}", (double)2/3);
Output :
0.666667

Convert float to double

I'm trying to convert Single to Double while maintaining the original value. I've found the following method:
Single f = 5.2F;
Double d1 = f; // 5.19999980926514
Double d2 = Double.Parse(f.ToString()); // 5.2 (correct)
Is this practice advisable? I don't need an optimal method, but the intended value must be passed on to the double. Are there any consequences to storing a rounded value in a double?
You could use "decimal" instead of a string.
float f = 5.2F;
decimal dec = new decimal(f);//5.2
double d = (double)dec; //5.2
The conversion is exact. Every Single value can be represented by a Double value, because they are "built" in the same way, just with more possible digits. What you see as 5.2F is in truth 5.1999998092651368. If you go to http://www.h-schmidt.net/FloatConverter/IEEE754.html and insert 5.2, you'll see that it has an exponent of 2^2 (so 4) and a mantissa of 1.2999999523162842. Multiply the two numbers and you'll get 5.1999998092651368.
Single has a maximum precision of 7 digits, so .NET only shows 7 digits. With a little rounding, 5.1999998092651368 is 5.2.
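If you want to pull those components out yourself, here is a small sketch (BitConverter.SingleToInt32Bits exists on .NET Core 2.0+/.NET 5+; on .NET Framework use BitConverter.ToInt32(BitConverter.GetBytes(f), 0) instead):
float f = 5.2F;
int bits = BitConverter.SingleToInt32Bits(f);
int exponent = ((bits >> 23) & 0xFF) - 127;                    // 2, i.e. a factor of 4
double mantissa = 1.0 + (bits & 0x7FFFFF) / (double)(1 << 23); // 1.2999999523162842
Console.WriteLine(mantissa * Math.Pow(2, exponent));           // 5.199999809265137...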
If you know that your numbers are multiples of e.g. 0.01, I would suggest that you convert to double, round to the nearest integer, and subtract that to get the fractional residue. Multiply that by 100, round to the nearest integer, and then divide by 100. Add that to the whole-number part to get the nearest double representation to the multiple of 0.01 which is nearest the original number.
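In code, the sequence could look like this sketch (the method name is made up; it assumes, as stated, that the values are known to be multiples of 0.01):
static double SnapToHundredths(float value)
{
    double d = value;                        // float -> double is exact
    double whole = Math.Round(d);            // nearest integer
    double frac = d - whole;                 // fractional residue
    frac = Math.Round(frac * 100.0) / 100.0; // nearest multiple of 0.01
    return whole + frac;
}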
Note that depending upon where the float values originally came from, such treatment may or may not improve accuracy. The closest float value to 9000.02 is about 9000.019531, and the closest float value to 9000.021 is about 9000.021484f. If the values were arrived at by converting 9000.020 and 9000.021 to float, the difference between them should be about 0.01. If, however, they were arrived at by e.g. computing 9000f+0.019531f and 9000f+0.021484f, then the difference between them should be closer to 0.02. Rounding to the nearest 0.01 before the subtract would improve accuracy in the former case and degrade it in the latter.

How to raise a BigInteger to the power of a Double in C#?

I tried to use the BigInteger.Pow method to calculate something like 10^12345.987654321, but this method only accepts an integer as the exponent, like this:
BigInteger.Pow(BigInteger x, int y)
So how can I use a double as the exponent in the above method?
There's no arbitrary-precision floating-point support in .NET, so this cannot be done directly. There are some alternatives (such as looking for a 3rd-party library), or you can try something like the code below - if the base is small enough, as in your case.
using System;
using System.Numerics; // BigInteger lives here

public class StackOverflow_11179289
{
    public static void Test()
    {
        int @base = 10;
        double exp = 12345.123;
        int intExp = (int)Math.Floor(exp);
        double fracExp = exp - intExp;
        BigInteger temp = BigInteger.Pow(@base, intExp);
        double temp2 = Math.Pow(@base, fracExp);
        int fractionBitsForDouble = 52;
        for (int i = 0; i < fractionBitsForDouble; i++)
        {
            temp = BigInteger.Divide(temp, 2);
            temp2 *= 2;
        }
        BigInteger result = BigInteger.Multiply(temp, (BigInteger)temp2);
        Console.WriteLine(result);
    }
}
The idea is to use big integer math to compute the power of the integer part of the exponent, then use double (64-bit floating point) math to compute the power of the fraction part. Then, using the fact that
a ^ (int + frac) = a ^ int * a ^ frac
we can combine the two values into a single big integer. But simply converting the double value to a BigInteger would lose a lot of its precision, so we first "shift" the precision onto the BigInteger (using the loop above, and the fact that the double type uses 52 bits for the precision), then multiply the two.
Notice that the result is an approximation, if you want a more precise number, you'll need a library that does arbitrary precision floating point math.
Update: If the base / exponent are small enough that the power would be in the range of double, we can simply do what Sebastian Piu suggested (new BigInteger(Math.Pow((double)@base, exp))).
I like carlosfigueira's answer, but of course the result of his method can only be correct in the first (most significant) 15-17 digits, because a System.Double is used as a multiplier eventually.
It is interesting to note that there does exist a method BigInteger.Log that performs the "inverse" operation. So if you want to calculate Pow(7, 123456.78) you could, in theory, search all BigInteger numbers x to find one number such that BigInteger.Log(x, 7) is equal to 123456.78 or closer to 123456.78 than any other x of type BigInteger.
Of course the logarithm function is increasing, so your search can use some kind of "binary search" (bisection search). Our answer lies between Pow(7, 123456) and Pow(7, 123457) which can both be calculated exactly.
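In code, the bisection could look like the sketch below (illustrative only - the bounds are integers with more than a hundred thousand digits, so treat this as a thought experiment rather than practical code):
// Requires using System.Numerics.
BigInteger lo = BigInteger.Pow(7, 123456);
BigInteger hi = BigInteger.Pow(7, 123457);
const double target = 123456.78;
while (hi - lo > 1)
{
    BigInteger mid = (lo + hi) / 2;
    if (BigInteger.Log(mid, 7) < target) lo = mid;
    else hi = mid;
}
// hi is now the smallest BigInteger whose base-7 logarithm reaches the
// target, to the precision a System.Double can express.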
Skip the rest if you want
Now, how can we predict in advance if there is more than one integer whose logarithm is 123456.78, up to the precision of System.Double, or if there is in fact no integer whose logarithm hits that specific Double (the precise result of an ideal Pow function being an irrational number)? In our example, there will be very many integers giving the same Double 123456.78, because the factor m = Pow(7, epsilon) (where epsilon is the smallest positive number such that 123456.78 + epsilon has a representation as a Double different from the representation of 123456.78 itself) is big enough that there will be very many integers between the true answer and the true answer multiplied by m.
Remember from calculus that the derivative of the mathematical function x → Pow(7, x) is x → Log(7)*Pow(7, x), so the slope of the graph of the exponential function in question will be Log(7)*Pow(7, 123456.78). This number multiplied by the above epsilon is still much, much greater than one, so there are many integers satisfying our need.
Actually, I think carlosfigueira's method will give a "correct" answer x in the sense that Log(x, 7) has the same representation as a Double as 123456.78 has. But has anyone tried it? :-)
I'll provide another answer that is hopefully more clear. The point is: Since the precision of System.Double is limited to approx. 15-17 decimal digits, the result of any Pow(BigInteger, Double) calculation will have an even more limited precision. Therefore, there's no hope of doing better than carlosfigueira's answer does.
Let me illustrate this with an example. Suppose we wanted to calculate
Pow(10, exponent)
where in this example I choose for exponent the double-precision number
const double exponent = 100.0 * Math.PI;
This is of course only an example. The value of exponent, in decimal, can be given as one of
314.159265358979
314.15926535897933
314.1592653589793258106510620564222335815429687500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...
The first of these numbers is what you normally see (15 digits). The second version is produced with exponent.ToString("R") and contains 17 digits. Note that the precision of Double is less than 17 digits. The third representation above is the theoretical "exact" value of exponent. Note that this differs, of course, from the mathematical number 100π near the 17th digit.
To figure out what Pow(10, exponent) ought to be, I simply did BigInteger.Log10(x) on a lot of numbers x to see how I could reproduce exponent. So the results presented here simply reflect the .NET Framework's implementation of BigInteger.Log10.
It turns out that any BigInteger x from
0x0C3F859904635FC0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
through
0x0C3F85990481FE7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
makes Log10(x) equal to exponent to the precision of 15 digits. Similarly, any number from
0x0C3F8599047BDEC0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
through
0x0C3F8599047D667FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
satisfies Log10(x) == exponent to the precision of Double. Put another way, any number from the latter range is equally "correct" as the result of Pow(10, exponent), simply because the precision of exponent is so limited.
(Interlude: The bunches of 0s and Fs reveal that .NET's implementation only considers the most significant bytes of x. They don't care to do better, precisely because the Double type has this limited precision.)
Now, the only reason to introduce third-party software would be if you insist that exponent is to be interpreted as the third of the decimal numbers given above. (It's really a miracle that the Double type allowed you to specify exactly the number you wanted, huh?) In that case, the result of Pow(10, exponent) would be an irrational (but algebraic) number with a tail of never-repeating decimals. It couldn't fit in an integer without rounding/truncating. PS! If we take the exponent to be the real number 100π, the result, mathematically, would be different: some transcendental number, I suspect.

Limiting double to 3 decimal places

This is what I am trying to achieve:
If a double has more than 3 decimal places, I want to truncate any decimal places beyond the third. (do not round.)
Eg.: 12.878999 -> 12.878
If a double has less than 3 decimals, leave unchanged
Eg.: 125 -> 125
89.24 -> 89.24
I came across this command:
double example = 12.34567;
double output = Math.Round(example, 3);
But I do not want to round. According to the command posted above,
12.34567 -> 12.346
I want to truncate the value so that it becomes: 12.345
Doubles don't have decimal places - they're not based on decimal digits to start with. You could get "the closest double to the current value when truncated to three decimal digits", but it still wouldn't be exactly the same. You'd be better off using decimal.
Having said that, if it's only the way that rounding happens that's a problem, you can use Math.Truncate(value * 1000) / 1000; which may do what you want. (You don't want rounding at all, by the sounds of it.) It's still potentially "dodgy" though, as the result still won't really just have three decimal places. If you did the same thing with a decimal value, however, it would work:
decimal m = 12.878999m;
m = Math.Truncate(m * 1000m) / 1000m;
Console.WriteLine(m); // 12.878
EDIT: As LBushkin pointed out, you should be clear between truncating for display purposes (which can usually be done in a format specifier) and truncating for further calculations (in which case the above should work).
I can't think of a reason to explicitly lose precision outside of display purposes. In that case, simply use string formatting.
double example = 12.34567;
Console.Out.WriteLine(example.ToString("#.000"));
double example = 3.1416789645;
double output = Convert.ToDouble(example.ToString("N3"));
Multiply by 1000, then use Truncate, then divide by 1000.
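That is, something like this (the result is the double nearest 12.345, per the caveats elsewhere on this page):
double example = 12.34567;
double output = Math.Truncate(example * 1000) / 1000; // ~12.345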
If your purpose in truncating the digits is for display reasons, then you can just use an appropriate format when you convert the double to a string.
Methods like String.Format() and Console.WriteLine() (and others) allow you to limit the number of digits of precision a value is formatted with.
Attempting to "truncate" floating point numbers is ill advised - floating point numbers don't have a precise decimal representation in many cases. Applying an approach like scaling the number up, truncating it, and then scaling it down could easily change the value to something quite different from what you'd expected for the "truncated" value.
If you need precise decimal representations of a number you should be using decimal rather than double or float.
You can use:
double example = 12.34567;
double output = ((double)((int)(example * 1000.0))) / 1000.0;
Good answers above - if you're looking for something reusable, here is the code. Note that you might want to check the decimal places value, and this may overflow.
public static decimal TruncateToDecimalPlace(this decimal numberToTruncate, int decimalPlaces)
{
    decimal power = (decimal)Math.Pow(10.0, (double)decimalPlaces);
    return Math.Truncate(power * numberToTruncate) / power;
}
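For example (remember extension methods must be declared in a static class):
decimal m = 12.878999m;
Console.WriteLine(m.TruncateToDecimalPlace(3)); // 12.878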
In C:
double truncKeepDecimalPlaces(double value, int numDecimals)
{
    int x = pow(10, numDecimals); /* pow and trunc come from <math.h> */
    return trunc(value * x) / x;
}

Formatting doubles for output in C#

Running a quick experiment related to Is double Multiplication Broken in .NET? and reading a couple of articles on C# string formatting, I thought that this:
{
    double i = 10 * 0.69;
    Console.WriteLine(i);
    Console.WriteLine(String.Format(" {0:F20}", i));
    Console.WriteLine(String.Format("+ {0:F20}", 6.9 - i));
    Console.WriteLine(String.Format("= {0:F20}", 6.9));
}
Would be the C# equivalent of this C code:
{
    double i = 10 * 0.69;
    printf("%f\n", i);
    printf(" %.20f\n", i);
    printf("+ %.20f\n", 6.9 - i);
    printf("= %.20f\n", 6.9);
}
However the C# produces the output:
6.9
6.90000000000000000000
+ 0.00000000000000088818
= 6.90000000000000000000
despite i showing up equal to the value 6.89999999999999946709 (rather than 6.9) in the debugger.
compared with C which shows the precision requested by the format:
6.900000
6.89999999999999946709
+ 0.00000000000000088818
= 6.90000000000000035527
What's going on?
( Microsoft .NET Framework Version 3.51 SP1 / Visual Studio C# 2008 Express Edition )
I have a background in numerical computing and experience implementing interval arithmetic - a technique for estimating errors due to the limits of precision in complicated numerical systems - on various platforms. To get the bounty, don't try to explain about the storage precision - in this case it's a difference of one ULP of a 64-bit double.
To get the bounty, I want to know how (or whether) .Net can format a double to the requested precision as visible in the C code.
The problem is that .NET will always round a double to 15 significant decimal digits before applying your formatting, regardless of the precision requested by your format and regardless of the exact decimal value of the binary number.
I'd guess that the Visual Studio debugger has its own format/display routines that directly access the internal binary number, hence the discrepancies between your C# code, your C code and the debugger.
There's nothing built-in that will allow you to access the exact decimal value of a double, or to enable you to format a double to a specific number of decimal places, but you could do this yourself by picking apart the internal binary number and rebuilding it as a string representation of the decimal value.
Alternatively, you could use Jon Skeet's DoubleConverter class (linked to from his "Binary floating point and .NET" article). This has a ToExactString method which returns the exact decimal value of a double. You could easily modify this to enable rounding of the output to a specific precision.
double i = 10 * 0.69;
Console.WriteLine(DoubleConverter.ToExactString(i));
Console.WriteLine(DoubleConverter.ToExactString(6.9 - i));
Console.WriteLine(DoubleConverter.ToExactString(6.9));
// 6.89999999999999946709294817992486059665679931640625
// 0.00000000000000088817841970012523233890533447265625
// 6.9000000000000003552713678800500929355621337890625
Digits after decimal point
// just two decimal places
String.Format("{0:0.00}", 123.4567); // "123.46"
String.Format("{0:0.00}", 123.4); // "123.40"
String.Format("{0:0.00}", 123.0); // "123.00"
// max. two decimal places
String.Format("{0:0.##}", 123.4567); // "123.46"
String.Format("{0:0.##}", 123.4); // "123.4"
String.Format("{0:0.##}", 123.0); // "123"
// at least two digits before decimal point
String.Format("{0:00.0}", 123.4567); // "123.5"
String.Format("{0:00.0}", 23.4567); // "23.5"
String.Format("{0:00.0}", 3.4567); // "03.5"
String.Format("{0:00.0}", -3.4567); // "-03.5"
Thousands separator
String.Format("{0:0,0.0}", 12345.67); // "12,345.7"
String.Format("{0:0,0}", 12345.67); // "12,346"
Zero
The following code shows how a zero (of double type) can be formatted.
String.Format("{0:0.0}", 0.0); // "0.0"
String.Format("{0:0.#}", 0.0); // "0"
String.Format("{0:#.0}", 0.0); // ".0"
String.Format("{0:#.#}", 0.0); // ""
Align numbers with spaces
String.Format("{0,10:0.0}", 123.4567); // " 123.5"
String.Format("{0,-10:0.0}", 123.4567); // "123.5 "
String.Format("{0,10:0.0}", -123.4567); // " -123.5"
String.Format("{0,-10:0.0}", -123.4567); // "-123.5 "
Custom formatting for negative numbers and zero
String.Format("{0:0.00;minus 0.00;zero}", 123.4567); // "123.46"
String.Format("{0:0.00;minus 0.00;zero}", -123.4567); // "minus 123.46"
String.Format("{0:0.00;minus 0.00;zero}", 0.0); // "zero"
Some funny examples
String.Format("{0:my number is 0.0}", 12.3); // "my number is 12.3"
String.Format("{0:0aaa.bbb0}", 12.3);
Take a look at this MSDN reference. In the notes it states that the numbers are rounded to the number of decimal places requested.
If instead you use "{0:R}" it will produce what's referred to as a "round-trip" value, take a look at this MSDN reference for more info, here's my code and the output:
double d = 10 * 0.69;
Console.WriteLine(" {0:R}", d);
Console.WriteLine("+ {0:F20}", 6.9 - d);
Console.WriteLine("= {0:F20}", 6.9);
output
6.8999999999999995
+ 0.00000000000000088818
= 6.90000000000000000000
Though this question is meanwhile closed, I believe it is worth mentioning how this atrocity came into existence. In a way, you may blame the C# spec, which states that a double must have a precision of 15 or 16 digits (the result of IEEE-754). A bit further on (section 4.1.6) it's stated that implementations are allowed to use higher precision. Mind you: higher, not lower. They are even allowed to deviate from IEEE-754: expressions of the type x * y / z where x * y would yield +/-INF but would be in a valid range after dividing, do not have to result in an error. This feature makes it easier for compilers to use higher precision in architectures where that'd yield better performance.
But I promised a "reason". Here's a quote (you requested a resource in one of your recent comments) from the Shared Source CLI, in clr/src/vm/comnumber.cpp:
"In order to give numbers that are both
friendly to display and
round-trippable, we parse the number
using 15 digits and then determine if
it round trips to the same value. If
it does, we convert that NUMBER to a
string, otherwise we reparse using 17
digits and display that."
In other words: MS's CLI Development Team decided to be both round-trippable and show pretty values that aren't such a pain to read. Good or bad? I'd wish for an opt-in or opt-out.
The trick it does to find out this round-trippability of any given number? Conversion to a generic NUMBER structure (which has separate fields for the properties of a double) and back, and then a comparison of whether the result is different. If it is different, the exact value is used (as in your middle value with 6.9 - i); if it is the same, the "pretty value" is used.
As you already remarked in a comment to Andyp, 6.90...00 is bitwise equal to 6.89...9467. And now you know why 0.0...8818 is used: it is bitwise different from 0.0.
This 15-digit barrier is hard-coded and can only be changed by recompiling the CLI, by using Mono, or by calling Microsoft and convincing them to add an option to print the full "precision" (it is not really precision, but by the lack of a better word). It's probably easier to just calculate the 52 bits of precision yourself or use the library mentioned earlier.
EDIT: if you'd like to experiment with IEEE-754 floating point yourself, consider this online tool, which shows you all relevant parts of a floating point number.
Use
Console.WriteLine(String.Format(" {0:G17}", i));
That will give you all 17 digits it has. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. {0:R} will not always give you 17 digits - it gives 15 if the number can be represented with that precision, and 17 only if the number needs maximum precision to round-trip. There isn't anything you can do to make the double return more digits; that is the way it's implemented. If you don't like it, write a new double class yourself...
.NET's double can't store any more digits than 17, so you can't see 6.89999999999999946709 in the debugger; you would see 6.8999999999999995. Please provide an image to prove us wrong.
The answer to this is simple and can be found on MSDN:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
In your example, the value of i is 6.89999999999999946709 which has the number 9 for all positions between the 3rd and the 16th digit (remember to count the integer part in the digits). When converting to string, the framework rounds the number to the 15th digit.
i     = 6.89999999999999 946709
digit   1             15 16  21
I tried to reproduce your findings, but when I watched 'i' in the debugger it showed up as '6.8999999999999995', not as '6.89999999999999946709' as you wrote in the question. Can you provide steps to reproduce what you saw?
To see what the debugger shows you, you can use a DoubleConverter as in the following line of code:
Console.WriteLine(TypeDescriptor.GetConverter(i).ConvertTo(i, typeof(string)));
Hope this helps!
Edit: I guess I'm more tired than I thought, of course this is the same as formatting to the roundtrip value (as mentioned before).
Another method, starting with how it's called:
double i = (10 * 0.69);
Console.Write(ToStringFull(i));       // Output: 6.89999999999999946709294817
Console.Write(ToStringFull(-6.9));    // Output: -6.90000000000000035527136788
Console.Write(ToStringFull(i - 6.9)); // Output: -0.00000000000000088817841970012523233890533
A Drop-In Function...
public static string ToStringFull(double value)
{
    // Requires a reference to System.Numerics for BigInteger.
    if (value == 0.0) return "0.0";
    if (double.IsNaN(value)) return "NaN";
    if (double.IsNegativeInfinity(value)) return "-Inf";
    if (double.IsPositiveInfinity(value)) return "+Inf";

    long bits = BitConverter.DoubleToInt64Bits(value);
    BigInteger mantissa = (bits & 0xfffffffffffffL) | 0x10000000000000L;
    int exp = (int)((bits >> 52) & 0x7ffL) - 1023;
    string sign = (value < 0) ? "-" : "";

    if (54 > exp)
    {
        double offset = exp / 3.321928094887362358; // ...or = Math.Log10(Math.Abs(value))
        BigInteger temp = mantissa * BigInteger.Pow(10, 26 - (int)offset) >> (52 - exp);
        string numberText = temp.ToString();
        int digitsNeeded = (int)((numberText[0] - '5') / 10.0 - offset);
        if (exp < 0)
            return sign + "0." + new string('0', digitsNeeded) + numberText;
        else
            return sign + numberText.Insert(1 - digitsNeeded, ".");
    }
    return sign + (mantissa >> (52 - exp)).ToString();
}
How it works
To solve this problem I used the BigInteger tools. Large values are simple as they just require left shifting the mantissa by the exponent. For small values we cannot just directly right shift as that would lose the precision bits. We must first give it some extra size by multiplying it by a 10^n and then do the right shifts. After that, we move over the decimal n places to the left. More text/code here.
The answer is yes: double printing is broken in .NET; it prints trailing garbage digits.
You can read how to implement it correctly here.
I have had to do the same for IronScheme.
> (* 10.0 0.69)
6.8999999999999995
> 6.89999999999999946709
6.8999999999999995
> (- 6.9 (* 10.0 0.69))
8.881784197001252e-16
> 6.9
6.9
> (- 6.9 8.881784197001252e-16)
6.8999999999999995
Note: both C and C# have the correct value; the printing is just broken.
Update: I am still looking for the mailing list conversation I had that led up to this discovery.
I found this quick fix.
double i = 10 * 0.69;
System.Diagnostics.Debug.WriteLine(i);
String s = String.Format("{0:F20}", i).Substring(0, 20);
System.Diagnostics.Debug.WriteLine(s + " " + s.Length);
