How do I convert a double to have precision of 2 places?
For ex:
double x = 1.00d;
Console.WriteLine(Math.Round(x,2,MidpointRounding.AwayFromZero));
//Should output 1.00 for me, but it outputs just 1.
What I'm looking for is to store 1.00 in a double variable which when printed to console prints 1.00.
Thanks,
-Mike
"x" stores the number - it doesn't pay attention to the precision.
To output the number as a string with a specific amount of precision, use a format specifier, for example:
Console.WriteLine(x.ToString("F2"));
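For instance, a quick sketch of a few equivalent formatting options (the output shown assumes a culture that uses '.' as the decimal separator):
using System;

double x = 1.00d;
Console.WriteLine(x);                          // 1    - the default format drops trailing zeros
Console.WriteLine(x.ToString("F2"));           // 1.00
Console.WriteLine(x.ToString("0.00"));         // 1.00
Console.WriteLine(string.Format("{0:F2}", x)); // 1.00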
double is an IEEE-754 double-precision floating point number. It always has exactly 53 bits of precision. It does not have a precision that can be exactly represented in "decimal places" -- as the phrase implies, "decimal places" only measures precision relative to the base-10 number system.
If what you want is simply to change the precision the value is displayed with, then you can use one of the ToString() overloads to do that, as other answers here point out.
Remember too that double is an approximation of a continuous real value; that is, a value where there is no implied discrete quantization (although the values are in fact quantized to 53 bits). If what you are trying to represent is a discrete value, where there is an implicit quantization (for example, if you are working with currency values, where there is a significant difference between 0.1 and 0.099999999999999) then you should probably be using a data type that is based on integer semantics -- either a fixed-point representation using a scaled integer or something like the .NET decimal type.
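A tiny sketch of the difference this makes in practice - summing 0.1 ten times - where double drifts but decimal stays exact (the G17 digits shown are what I'd expect on a standard runtime):
using System;

double dSum = 0;
decimal mSum = 0;
for (int i = 0; i < 10; i++)
{
    dSum += 0.1;  // 0.1 has no exact binary representation
    mSum += 0.1m; // decimal works in base 10, so 0.1m is exact
}
Console.WriteLine(dSum == 1.0);          // False
Console.WriteLine(dSum.ToString("G17")); // 0.99999999999999989
Console.WriteLine(mSum == 1.0m);         // True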
This is not a question of storing, it is a question of formatting.
This code produces 1.00:
double x = 1.00d;
Console.WriteLine("{0:0.00}", Math.Round(x, 2, MidpointRounding.AwayFromZero));
The decimal data type has a concept of variable precision. The double data type does not. Its precision is always 53 bits.
The default string representation of a double rounds it slightly to minimize odd-looking values like 0.999999999999, and then drops any trailing zeros. As other answers note, you can change that behavior by using one of the type's ToString overloads.
Related
I want to print floats with greater precision than is the default.
For example, when I print the value of PI from a float I get 6 decimals. But if I copy the same value from the float into a double and print it, I get 14 decimals.
Why do I get more precision when printing the same value but as a double?
How can I get Console.WriteLine() to output more decimals when printing floats without needing to copy them into a double first?
I also tried the 0.0000000000 format, but it did not write with more precision; it just added more zeroes. :-/
My test code:
float x = (float)Math.PI;
double xx = x;
Console.WriteLine(x);
Console.WriteLine(xx);
Console.WriteLine($"{x,18:0.0000000000}"); // Try to force more precision
Console.WriteLine($"{xx,18:0.0000000000}");
Output:
3,141593
3,14159274101257
3,1415930000 <-- The program just added more zeroes :-(
3,1415927410
I also tried to enter PI at https://www.h-schmidt.net/FloatConverter/IEEE754.html
The binary float representation of PI is 0b01000000010010010000111111011011
So the value is: 2 * 0b1.10010010000111111011011 = 2 * 1.57079637050628662109375 = 3.1415927410125732421875
So there are more decimals to output. How would I get C# to output this whole value?
There is no more precision hiding in a float. When you convert it to a double the value stays exactly the same, but the default double formatting prints more digits - and those extra digits are conversion artifacts. They don't match pi; they are just more of the decimal expansion of the rounding error that was already baked into the float. This has everything to do with how binary floating point numbers work.
Let's look at the binary representation of the float:
01000000010010010000111111011011
The first bit is the sign, the next eight are the exponent, and the rest is the mantissa. What do we get when we cast it to a double? The exponent is re-biased and the mantissa is padded with zeroes - the cast itself is exact, so the double holds exactly the same value, 3.1415927410125732421875. But a double is printed with more digits by default, so you see 3.14159274101257 instead of the 3.14159265358979 you would get from the correct double value of pi. You get the illusion of greater precision, but it's only an illusion - the number is no more accurate than before; the added digits just spell out the float's rounding error.
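If you want to verify the bit padding yourself, here's a sketch that dumps both bit patterns (BitConverter.SingleToInt32Bits assumes .NET Core 2.0 or later; on older frameworks you'd have to go through BitConverter.GetBytes):
using System;

float f = (float)Math.PI;
double d = f; // the widening cast is exact

// The double's 52-bit mantissa is the float's 23-bit mantissa followed
// by 29 zero bits; the exponent field is re-biased from 8 to 11 bits.
Console.WriteLine(Convert.ToString(BitConverter.SingleToInt32Bits(f), 2).PadLeft(32, '0'));
Console.WriteLine(Convert.ToString(BitConverter.DoubleToInt64Bits(d), 2).PadLeft(64, '0'));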
Casting a float to a double preserves the value exactly, but it does not preserve the shortest decimal representation: the short string that round-trips to the original float is not the same as the (longer) shortest string that round-trips to the resulting double, so the default formatting prints extra digits. If you care about the decimal rendering, follow the cast with a decimal rounding step. Consider this code instead:
double.Parse(((float)Math.PI).ToString())
Instead of casting the float to a double, I first changed it to a decimal representation (in a string), and created the double from that. Now instead of having a "truncated" double, you have a proper double that doesn't lie about extra precision; when you print it out, you get 3.1415930000. Still rounded, of course, since it's still a binary->decimal conversion, but no longer pretending to have more precision than is actually there - the rounding happens at a later digit than in the float version, the zeroes are really zeroes, except for the last one (which is only approximately zero).
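Here is the same trick as a runnable sketch, pinned to CultureInfo.InvariantCulture so the decimal separator is always '.':
using System;
using System.Globalization;

float x = (float)Math.PI;

// Plain cast: same value, but artifact digits show up when formatted.
double cast = x;
Console.WriteLine(cast.ToString("0.0000000000", CultureInfo.InvariantCulture)); // 3.1415927410

// Round-trip through the float's short decimal string:
double viaString = double.Parse(x.ToString(CultureInfo.InvariantCulture), CultureInfo.InvariantCulture);
Console.WriteLine(viaString.ToString("0.0000000000", CultureInfo.InvariantCulture)); // 3.1415930000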
If you want real decimal precision, you might want to use a type with decimal (base-10) semantics, e.g. a scaled int or decimal. float and double are both binary numbers, and only have binary precision. A finite decimal number isn't necessarily finite in binary, just as a finite ternary number isn't necessarily finite in decimal (e.g. 1/3 is 0.1 in ternary but has no finite decimal representation).
A float in C# has a precision of about 7 significant decimal digits and no more - counted across the whole number, not just after the decimal point.
Any digits beyond that in your output may be entirely wrong.
If you do need more digits, you have to use either double, which has 15-16 digits of precision, or decimal, which has 28-29.
See MSDN for reference.
You can easily verify that as the digits of PI are a little different from your output. The correct first 40 digits are: 3.14159 26535 89793 23846 26433 83279 50288 4197
To print the value with 6 places after the decimal point:
Console.WriteLine("{0:N6}", (double)2/3);
Output:
0.666667
I have an old VBA application that generates values like 1.123456789123456789123456789E-28. When I googled to find the equivalent data type in C#, I found this question on SO: Difference between decimal, float and double in .NET?. As that post suggests, the decimal data type has the largest size in C#. So I used decimal, but I found that decimal seems to be limited in representing large values that VBA represents in exponential form. So I googled again and found this article: https://msdn.microsoft.com/en-us/library/ae55hdtk.aspx. As stated in this article,
Nonintegral number values can be expressed as mmmEeee, in which mmm is the mantissa (the significant digits) and eee is the exponent (a power of 10). The highest positive values of the nonintegral types are 7.9228162514264337593543950335E+28 for Decimal, 3.4028235E+38 for Single, and 1.79769313486231570E+308 for Double.
This means the Double data type has the largest range of values. So now I am confused between decimal and double.
Then I did a little experiment with code like below
double dbl = 1.123456789123456789123456789E-28;
decimal dcl = 1.123456789123456789123456789E-28M;
MessageBox.Show(dbl.ToString());
MessageBox.Show(dcl.ToString());
The dbl value prints as 1.123456789123456789123456789E-28, while the dcl value changes to 0.0000000000000000000000000001.
Can someone explain what is happening here ? What is the equivalent data type in C# to represent large exponential values like in VBA with same precision?
According to the C# language specification: https://msdn.microsoft.com/en-us/library/aa691085%28v=vs.71%29.aspx
A real literal suffixed by F or f is of type float. For example, the literals 1f, 1.5f, 1e10f, and 123.456F are all of type float.
A real literal suffixed by D or d is of type double. For example, the literals 1d, 1.5d, 1e10d, and 123.456D are all of type double.
A real literal suffixed by M or m is of type decimal. For example, the literals 1m, 1.5m, 1e10m, and 123.456M are all of type decimal.
This literal is converted to a decimal value by taking the exact value, and, if necessary, rounding to the nearest representable value using banker's rounding (Section 4.1.7). Any scale apparent in the literal is preserved unless the value is rounded or the value is zero (in which latter case the sign and scale will be 0). Hence, the literal 2.900m will be parsed to form the decimal with sign 0, coefficient 2900, and scale 3.
Looking at the third bullet point, you can see the phrase "...This literal is converted to a decimal value by taking the exact value...". The value you are getting (0.0000000000000000000000000001) is the converted result of the input literal (1.123456789123456789123456789E-28M) that you started with. Of course, with the value rounded as per the second part of the third bullet-point regarding the conversion "...if necessary, rounding to the nearest representable value using banker's rounding...".
This is true for decimal, but float and double store a binary exponent as part of the type itself, so a value of very small magnitude can keep its leading significant digits without any such rounding.
Addition:
To expand further on the above, see: https://msdn.microsoft.com/en-us/library/364x0z75.aspx
This is the definition of the decimal data type. As noted there, the range of decimal is (-7.9 x 10^28 to 7.9 x 10^28) / (10^0 to 10^28), i.e. 28 to 29 significant digits with a scale of at most 28. Your literal, 1.123456789123456789123456789E-28M, would need a scale of 55, which is beyond that limit, so when it is converted, everything to the right of the leading 1 is rounded away as per the rules above.
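To see the coefficient/scale mechanics concretely, here's a sketch using decimal.GetBits (elements 0-2 of the returned array hold the 96-bit coefficient; bits 16-23 of element 3 hold the scale):
using System;

// The spec example above: 2.900m is stored as coefficient 2900 with scale 3.
int[] bits = decimal.GetBits(2.900m);
int scale = (bits[3] >> 16) & 0xFF;
Console.WriteLine($"coefficient={bits[0]}, scale={scale}"); // coefficient=2900, scale=3

// The question's literal would need a scale of 55, far beyond the maximum
// of 28, so the conversion rounds it:
decimal dcl = 1.123456789123456789123456789E-28M;
Console.WriteLine(dcl); // 0.0000000000000000000000000001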
Solution(?):
If you need a larger precision than decimal can provide, you may want to look into implementing your own class, which is definitely doable. Consider searching for C# implementations of BigDecimal for ideas.
One implementation I found on a quick google: https://gist.github.com/nberardi/2667136
I cannot speak to its correctness, but it could at least be a starting point for you.
float and double are floating point numbers. That means they hold the significand and the exponent separately, where double is twice the size of float. double is not recommended in cases where exactness is important, for example prices.
decimal holds numbers in decimal (base-10) form, as an integer coefficient with a scale. It's more precise for decimal calculations, such as currency arithmetic.
It seems that double is more likely to be what you're looking for.
I have a double with 17 digits after the decimal point, i.e.:
double myDouble = 0.12345678901234567;
If I convert this to a decimal like this:
decimal myDecimal = Convert.ToDecimal(myDouble);
then the value of myDecimal is rounded, as per the Convert.ToDecimal documentation, to 15 significant digits (i.e. 0.123456789012346). My question is, why is this rounding performed?
I understand that if my original number could be accurately represented in base 10 and I was trying to store it as a double, then we could only have confidence in the first 15 digits. The final two digits would be subject to rounding error. But, that's a base 10 biased point of view. My number may be more accurately represented by a double and I wish to convert it to decimal while preserving as much accuracy as possible.
Shouldn't Convert.ToDecimal aim to minimise the difference between myDouble and (double)Convert.ToDecimal(myDouble)?
From the documentation of Double:
A Double value has up to 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
So, as the double value itself only guarantees 15 significant decimal digits of precision, converting it to Decimal results in a Decimal value with 15 significant figures.
The behavior of the rounding guarantees that conversion of any Decimal which has at most fifteen significant figures to double and back to Decimal will yield the original value unchanged. If values were rounded to sixteen figures rather than fifteen, such a guarantee would not only fail to hold for numbers with sixteen figures, it wouldn't even hold for much shorter values. For example, the closest Double value to 9000.04 is approximately 9000.040000000000873115; rounding that to sixteen figures would yield 9000.040000000001.
The choice of rounding depends upon whether one regards the best Decimal representation of the double value 9000.04 as being 9000.04m, 9000.040000000001m, 9000.0400000000008731m, or perhaps something else. Microsoft probably decided that any representation other than 9000.04m would be confusing.
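A quick sketch of that round-trip guarantee:
using System;

decimal original = 9000.04m;
double viaDouble = (double)original;                 // ~9000.040000000000873...
decimal roundTripped = Convert.ToDecimal(viaDouble); // rounds to 15 significant digits
Console.WriteLine(roundTripped);                     // 9000.04
Console.WriteLine(roundTripped == original);         // True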
The following is from the documentation of the method in question.
http://msdn.microsoft.com/en-us/library/a69w9ca0(v=vs.110).aspx
"The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
Every terminating binary fraction is exactly representable as a decimal fraction, so the minimum possible difference for a finite number is always 0. The IEEE 754 64-bit representation of your number is exactly equal to 0.1234567890123456634920984242853592149913311004638671875
Every conversion from binary floating point to decimal or decimal string must embody some decision about rounding. In theory, they could preserve all the digits, but that would result in some very long outputs with most of the digits having little meaning.
One option, used in Java's Double.toString, is to stop at the shortest decimal representation that would convert back to the original double.
Most set some fairly arbitrary limit. 15 significant digits preserves most meaningful digits.
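For comparison, a small sketch putting the double's full round-trippable digits ("G17") next to the 15-digit decimal conversion:
using System;

double myDouble = 0.12345678901234567;
Console.WriteLine(myDouble.ToString("G17"));    // 0.12345678901234566 - enough digits to round-trip
Console.WriteLine(Convert.ToDecimal(myDouble)); // 0.123456789012346   - capped at 15 significant digits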
While trying to parse a value to float with the following call: float.TryParse(value, NumberStyles.Float, CultureInfo.InvariantCulture, out fValue),
the value 6666.77777 is rounded to 6666.778.
Can anyone help? I don't want my value to be rounded.
float only has around 6-7 significant digits. Note that digits before the decimal point count too. double has higher precision in that regard (around 15-16):
PS> [float]::Parse('6666,77777')
6666.778
PS> [double]::Parse('6666,77777')
6666.77777
But as others noted, this is just an approximate value because it cannot be represented exactly in base 2. If you need decimal exactness (e.g. for money values) then use a decimal. For most other things binary floating point (i.e. float and double) should be fine.
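The same demonstration in C# rather than PowerShell, pinned to the invariant culture so '.' parses as the decimal separator:
using System;
using System.Globalization;

float f = float.Parse("6666.77777", CultureInfo.InvariantCulture);
double d = double.Parse("6666.77777", CultureInfo.InvariantCulture);
Console.WriteLine(f); // 6666.778   - only ~7 significant digits survive
Console.WriteLine(d); // 6666.77777 - double has precision to spare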
Why don't you use Double.TryParse or Decimal.TryParse to get higher precision?
float: Approximately ±1.5 x 10^-45 to ±3.4 x 10^38 with 7 significant figures
double: Approximately ±5.0 x 10^-324 to ±1.7 x 10^308 with 15 or 16 significant figures
decimal: Approximately ±1.0 x 10^-28 to ±7.9 x 10^28 with 28 or 29 significant figures
Try this piece of code snippet instead:
double fValue;
double.TryParse("6666.77777", NumberStyles.Float, CultureInfo.InvariantCulture, out fValue);
OR
decimal fValue;
decimal.TryParse("6666.77777", NumberStyles.Number, CultureInfo.InvariantCulture, out fValue);
That is because the value 6666.77777 cannot be represented exactly in the binary form floating point numbers use - the actual number contains more binary digits than a float has room for. The resulting number is the closest approximation.
Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requires correct rounding: that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several different rounding schemes (or rounding modes). Historically, truncation was the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called Banker's Rounding) is more commonly used.
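Math.Round follows the same ties-to-even default, which a short sketch makes visible:
using System;

Console.WriteLine(Math.Round(2.5)); // 2 - ties go to the even neighbor
Console.WriteLine(Math.Round(3.5)); // 4
Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)); // 3 - "schoolbook" rounding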
You should consider using double if you need more precision, or decimal if you need even more than that, though they too suffer from precision loss at some point.
You should use decimal if you need higher precision.
Possible Duplicate:
Why is floating point arithmetic in C# imprecise?
I have been dealing with some numbers and C#, and the following line of code results in a different number than one would expect:
double num = (3600.2 - 3600.0);
I expected num to be 0.2, however, it turned out to be 0.1999999999998181. Is there any reason why it is producing a close, but still different decimal?
This is because double is a floating point datatype.
If you want greater accuracy you could switch to using decimal instead.
The literal suffix for decimal is m, so to use decimal arithmetic (and produce a decimal result) you could write your code as
var num = (3600.2m - 3600.0m);
Note that there are disadvantages to using a decimal. It is a 128 bit datatype as opposed to 64 bit which is the size of a double. This makes it more expensive both in terms of memory and processing. It also has a much smaller range than double.
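A minimal sketch of both points - the size difference and the exact decimal result (sizeof works on the built-in numeric types in safe code):
using System;

Console.WriteLine(sizeof(double));  // 8 bytes
Console.WriteLine(sizeof(decimal)); // 16 bytes

decimal num = 3600.2m - 3600.0m;
Console.WriteLine(num); // 0.2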
There is a reason.
The reason is that the way the number is stored in memory in the case of the double data type doesn't allow for an exact representation of the number 3600.2. It also doesn't allow for an exact representation of the number 0.2.
0.2 has an infinite representation in binary. If you want to store it in memory or in processor registers to perform some calculations, some number close to 0.2 with a finite representation is stored instead. It may not be apparent if you run code like this:
double num = (0.2 - 0.0);
This is because in this case all the binary digits available for representing numbers in the double data type are used for the fractional part of the number (there is only a fractional part), so the precision is higher. If you store the number 3600.2 in an object of type double, some digits are used to represent the integer part - 3600 - and fewer digits remain for the fractional part. The precision is lower, and the fractional part that is actually stored in memory differs from 0.2 enough that it becomes apparent after conversion from double to string.
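You can expose the values actually stored by formatting with the round-trip "G17" specifier:
using System;

Console.WriteLine(0.2.ToString("G17"));               // 0.20000000000000001
Console.WriteLine(3600.2.ToString("G17"));            // 3600.1999999999998
Console.WriteLine((3600.2 - 3600.0).ToString("G17")); // 0.1999999999998181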
Change your type to decimal:
decimal num = (3600.2m - 3600.0m);
You should also read this.
See Wikipedia
Can't explain it better. I can also suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic. Or see related questions on StackOverflow.