I am reading in an XML file and pulling out a specific value of 10534360.9.
When I parse the string as a decimal ( decimal.Parse(afNodes2[0][column1].InnerText) ) I get 10534360.9, but when I parse it as a float ( float.Parse(afNodes2[0][column1].InnerText) ) I get 1.053436E+07.
Can someone explain why?
You are seeing the value represented in "E" or "exponential" notation.
1.053436E+07 is equivalent to 1.053436 x 10^7, or 10,534,360, which is about as close as .NET can get to 10,534,360.9 in the System.Single (float) data type (32 bits, roughly 7 significant digits of precision).
You're seeing it represented that way because it is the default format produced by the Single.ToString() method, which the debugger uses to display values on the watch screens, console, etc.
EDIT
It probably goes without saying, but if you want to retain the precision of the original value in the XML, you could choose a data type that can retain more information about your numbers:
System.Double (64 bits, 15-16 digits)
System.Decimal (128 bits, 28-29 significant digits)
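For illustration, here is a minimal sketch (the XML lookup is replaced with a plain string, and a culture with "." as the decimal separator is assumed) parsing the same text into all three types; the float output shown is .NET Framework's default formatting:

string text = "10534360.9";

float f = float.Parse(text);     // nearest representable float is 10534361
double d = double.Parse(text);
decimal m = decimal.Parse(text);

Console.WriteLine(f); // 1.053436E+07 (only ~7 significant digits survive)
Console.WriteLine(d); // 10534360.9
Console.WriteLine(m); // 10534360.9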
1.053436E+07 == 10534360.9
Those numbers are the same, just displayed differently.
Because float has a precision of 7 digits.
Decimal has a precision of 28 digits.
When viewing the data in a debugger, or displaying it via .ToString, by default it might be formatted using scientific notation:
Some examples of the return value are "100", "-123,456,789", "123.45e+6", "500", "3.1416", "600", "-0.123", and "-Infinity".
To format it as exact output, use the R (round trip) format string:
myFloat.ToString("R");
http://msdn.microsoft.com/en-us/library/dwhawy9k(v=vs.110).aspx
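For example, a small sketch showing the difference (the first output is .NET Framework's default 7-digit formatting):

float myFloat = 10534360.9f;              // stored as the nearest representable float, 10534361
Console.WriteLine(myFloat.ToString());    // 1.053436E+07
Console.WriteLine(myFloat.ToString("R")); // 10534361

Note that on .NET Core 3.0 and later, ToString() already produces a round-trippable string by default, so "R" mostly matters on .NET Framework.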
Related
Is there a reason that a C# System.Decimal remembers the number of trailing zeros it was entered with?
See the following example:
public void DoSomething()
{
    decimal dec1 = 0.5M;
    decimal dec2 = 0.50M;
    Console.WriteLine(dec1);         // Output: 0.5
    Console.WriteLine(dec2);         // Output: 0.50
    Console.WriteLine(dec1 == dec2); // Output: True
}
The decimals are classed as equal, yet dec2 remembers that it was entered with an additional zero. What is the reason/purpose for this?
It can be useful to represent a number including its accuracy - so 0.5m could be used to mean "anything between 0.45m and 0.55m" (with appropriate limits) and 0.50m could be used to mean "anything between 0.495m and 0.505m".
I suspect that most developers don't actually use this functionality, but I can see how it could be useful sometimes.
I believe this ability first arrived in .NET 1.1, btw - I think decimals in 1.0 were always effectively normalized.
I think it was done to provide a better internal representation for numeric values retrieved from databases. Database engines have a long history of storing numbers in a decimal format (avoiding rounding errors) with an explicit specification for the number of digits in the value.
Compare the SQL Server decimal and numeric column types for example.
Decimals represent fixed-precision decimal values. The literal value 0.50M has the 2 decimal place precision embedded, and so the decimal variable created remembers that it is a 2 decimal place value. Behaviour is entirely by design.
The comparison of the values is an exact numerical equality check on the values, so here, trailing zeroes do not affect the outcome.
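To make the "remembered" zero visible, here is a short sketch that reads the scale byte out of the decimal's internal representation via decimal.GetBits (the scale is stored in bits 16-23 of the fourth element):

decimal dec1 = 0.5M;
decimal dec2 = 0.50M;

int scale1 = (decimal.GetBits(dec1)[3] >> 16) & 0xFF;
int scale2 = (decimal.GetBits(dec2)[3] >> 16) & 0xFF;

Console.WriteLine(scale1); // 1 -- stored as  5 / 10^1
Console.WriteLine(scale2); // 2 -- stored as 50 / 10^2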
I have encountered something very weird when it comes to the standard numeric format strings in C#. This is probably a known quirk, but I can't find anything documenting it, and I can't figure out a way to actually get what I want.
I want to take a number like 17.929333333333489 and format it with no decimal places. So I just want "17". But when I run this code:
decimal what = 17.929333333333489m;
Console.WriteLine(what.ToString("F0"));
I get this output
18
Looking at Microsoft's documentation on it, their examples show the same output:
https://msdn.microsoft.com/en-us/library/kfsatb94(v=vs.110).aspx
// F: -195489100.84
// F0: -195489101
// F1: -195489100.8
// F2: -195489100.84
// F3: -195489100.838
Here is a code example showing the odd issue.
http://csharppad.com/gist/d67ddf5d0535c4fe8e39
This issue is not only limited to the standard ones like "F" and "N" but also the custom ones like "#".
How can I use the standard "F0" formatter and not have it round my number?
From the documentation on Standard Numeric Format Strings:
xx is an optional integer called the precision specifier. The precision specifier ranges from 0 to 99 and affects the number of digits in the result. Note that the precision specifier controls the number of digits in the string representation of a number. It does not round the number itself. To perform a rounding operation, use the Math.Ceiling, Math.Floor, or Math.Round method.
When precision specifier controls the number of fractional digits in the result string, the result strings reflect numbers that are rounded away from zero (that is, using MidpointRounding.AwayFromZero).
So the documentation does indeed discuss this, and there is no apparent way to prevent rounding of the output purely through a format string.
The best I can offer is to truncate the number yourself using Math.Truncate():
decimal what = 17.929333333333489m;
decimal truncatedWhat = Math.Truncate(what);
Console.WriteLine(truncatedWhat.ToString("F0"));
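This prints 17. If you need truncation at some other number of decimal places, a helper along these lines would work (the name TruncateTo is mine, a sketch rather than a library method):

static decimal TruncateTo(decimal value, int decimals)
{
    decimal factor = (decimal)Math.Pow(10, decimals); // fine for small, non-negative 'decimals'
    return Math.Truncate(value * factor) / factor;
}

// TruncateTo(17.929333333333489m, 2) -> 17.92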
I believe the format specifiers round to the nearest value at the given decimal place, rather than always rounding up. Here is what I experimented with:
decimal what = 17.429333333333489m;
Console.WriteLine(what.ToString("F0"));
Console.WriteLine(what.ToString("N0"));
Console.WriteLine(what.ToString("F1"));
Console.WriteLine(what.ToString("N1"))
17
17
17.4
17.4
If you want to get 17, I used a different approach, using an int cast and Math.Floor:
double what = 17.929333333333489;
Console.WriteLine(string.Format("{0:0}", (int)what));
Console.WriteLine(string.Format("{0:0}", what));
Console.WriteLine(string.Format("{0:0.00}", Math.Floor(what*100)/100));
Console.WriteLine(string.Format("{0:0.00}", what));
17
18
17.92
17.93
How do I convert a double to have a precision of 2 decimal places?
For ex:
double x = 1.00d;
Console.WriteLine(Math.Round(x,2,MidpointRounding.AwayFromZero));
//Should output 1.00 for me, but it outputs just 1.
What I'm looking for is to store 1.00 in a double variable which when printed to console prints 1.00.
Thanks,
-Mike
"x" stores the number - it doesn't pay attention to the precision.
To output the number as a string with a specific amount of precision, use a format specifier, e.g.:
Console.WriteLine(x.ToString("F2"));
double is an IEEE-754 double-precision floating point number. It always has exactly 53 bits of precision. It does not have a precision that can be exactly represented in "decimal places" -- as the phrase implies, "decimal places" only measures precision relative to the base-10 number system.
If what you want is simply to change the precision the value is displayed with, then you can use one of the ToString() overloads to do that, as other answers here point out.
Remember too that double is an approximation of a continuous real value; that is, a value where there is no implied discrete quantization (although the values are in fact quantized to 53 bits). If what you are trying to represent is a discrete value, where there is an implicit quantization (for example, if you are working with currency values, where there is a significant difference between 0.1 and 0.099999999999999) then you should probably be using a data type that is based on integer semantics -- either a fixed-point representation using a scaled integer or something like the .NET decimal type.
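A minimal illustration of that difference is the classic 0.1 + 0.2 case:

double d = 0.1 + 0.2;
decimal m = 0.1m + 0.2m;

Console.WriteLine(d == 0.3);        // False -- 0.1 has no exact binary representation
Console.WriteLine(m == 0.3m);       // True  -- decimal stores base-10 digits exactly
Console.WriteLine(d.ToString("R")); // 0.30000000000000004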
This is not a question of storing, it is a question of formatting.
This code produces 1.00:
double x = 1.00d;
Console.WriteLine("{0:0.00}", Math.Round(x, 2, MidpointRounding.AwayFromZero));
The decimal data type has a concept of variable precision. The double data type does not. Its precision is always 53 bits.
The default string representation of a double rounds it slightly to minimize odd-looking values like 0.999999999999, and then drops any trailing zeros. As other answers note, you can change that behavior by using one of the type's ToString overloads.
I have the following:
string value = "9223372036854775807";
double parsedVal = double.Parse(value, CultureInfo.InvariantCulture);
... and the result is 9.2233720368547758E+18, which is not exactly the same number. How should I convert the string to a double without loss of precision?
double can only guarantee 16 (approx) decimal digits of accuracy. You might try switching to decimal (which has a few more bits to play with, and holds that value with plenty of headroom).
You can't convert 9223372036854775807 to double without loss of precision, due to the definition of a double (the IEEE 754 standard for binary floating-point arithmetic):
By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
Using Decimal will get you the precision you need here. However, please note that Decimal does not have the same range as a double.
See Can C# store more precise data than doubles?
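To see the loss concretely (and one exact alternative for this particular string, which happens to be long.MaxValue):

string value = "9223372036854775807"; // long.MaxValue: 19 significant digits

double parsedDouble = double.Parse(value, CultureInfo.InvariantCulture);
decimal parsedDecimal = decimal.Parse(value, CultureInfo.InvariantCulture);
long parsedLong = long.Parse(value, CultureInfo.InvariantCulture);

Console.WriteLine(parsedDouble == 9223372036854775808d); // True -- rounded up to 2^63
Console.WriteLine(parsedDecimal);                        // 9223372036854775807
Console.WriteLine(parsedLong);                           // 9223372036854775807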
Does anyone know of an elegant way to get the decimal part of a number only? In particular, I am looking to get the exact number of places after the decimal point so that the number can be formatted appropriately. I was wondering if there is a way to do this without any kind of string extraction using the culture-specific decimal separator.
For example
98.0 would be formatted as 98
98.20 would be formatted as 98.2
98.2765 would be formatted as 98.2765 etc.
If it's only for formatting purposes, just calling ToString will do the trick, I guess?
double d = (double)5 / 4;
Console.WriteLine(d.ToString()); // prints 1.25
d = (double)7 / 2;
Console.WriteLine(d.ToString()); // prints 3.5
d = 7;
Console.WriteLine(d.ToString()); // prints 7
That will, of course, format the number according to the current culture (meaning that the decimal sign, thousand separators and such will vary).
Update
As Clement H points out in the comments, if we are dealing with very large numbers, at some point d.ToString() will return a string with scientific formatting instead (such as "1E+16" instead of "10000000000000000"). One way to overcome this problem, and force the full number to be printed, is to use d.ToString("0.#"), which also produces the same output as the code sample above for smaller numbers.
You can get all of the relevant information from the Decimal.GetBits method assuming you really mean System.Decimal. (If you're talking about decimal formatting of a float/double, please clarify the question.)
Basically GetBits will return you 4 integers in an array.
You can use the scaling factor (the fourth integer, after masking out the sign) to indicate the number of decimal places, but you should be aware that it's not necessarily the number of significant decimal places. In particular, the decimal representations of 1 and 1.0 are different (the former is 1/1, the latter is 10/10).
Unfortunately, manipulating the 96 bit integer is going to require some fiddly arithmetic unless you can use .NET 4.0 and BigInteger.
To be honest, you'll get a simpler solution by using the built in formatting with CultureInfo.InvariantCulture and then finding everything to the right of "."
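A sketch of that simpler route, without any string surgery: the "#" custom placeholder drops trailing zeros on its own, and 28 of them cover decimal's maximum scale (FormatTrimmed is a made-up helper name):

static string FormatTrimmed(decimal value)
{
    // "0." followed by 28 '#' placeholders: shows fractional digits only where present
    return value.ToString("0." + new string('#', 28), CultureInfo.InvariantCulture);
}

// FormatTrimmed(98.0m)    -> "98"
// FormatTrimmed(98.20m)   -> "98.2"
// FormatTrimmed(98.2765m) -> "98.2765"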
Just to expand on the point about GetBits, this expression gets the scaling factor from a decimal called foo:
(decimal.GetBits(foo)[3] & 0xFF0000) >> 16
You could use Math.Truncate to get the whole number component, then subtract it from the original.