I am currently formatting a double using the code:
myDouble.ToString("g4");
to get the first 4 decimal places. However, I find this often switches over to scientific notation if the number is very large or very small. Is there an easy format string in C# to just show the first four decimal places, or zero if it is too small to be represented in that many places?
For example, I would like:
1000 => 1000
0.1234567 => 0.1235
123456 => 123456 (Note: Not scientific notation)
0.000001234 => 0 (Note: Not scientific notation)
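Here is a quick sketch of what I am seeing (outputs as observed on my machine, en-US culture):
Console.WriteLine((0.1234567).ToString("g4"));   // 0.1235
Console.WriteLine((123456.0).ToString("g4"));    // 1.235e+05  -- not what I want
Console.WriteLine((0.000001234).ToString("g4")); // 1.234e-06  -- not what I want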
You can try it like this:
0.1234567.ToString("0.####")
Also check Custom Numeric Format Strings
#
Replaces the "#" symbol with the corresponding digit if one is
present; otherwise, no digit appears in the result string.
Also, as Jon has correctly pointed out, it will round your number. See the note section:
Rounding and Fixed-Point Format Strings
For fixed-point format strings
(that is, format strings that do not contain scientific notation
format characters), numbers are rounded to as many decimal places as
there are digit placeholders to the right of the decimal point.
Use the String.Format() method.
String.Format("{0:0.####}", 123.4567123); //output: 123.4567
Note: the number of #'s indicates the maximum number of digits allowed after the decimal point.
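For example, applying that format to the numbers from the question (a quick sketch; outputs assume an en-US culture):
Console.WriteLine((1000.0).ToString("0.####"));      // 1000
Console.WriteLine((0.1234567).ToString("0.####"));   // 0.1235
Console.WriteLine((123456.0).ToString("0.####"));    // 123456
Console.WriteLine((0.000001234).ToString("0.####")); // 0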
I agree with kjbartel's comment.
I wanted exactly what the original question asked, but the question is slightly ambiguous.
The problem with the ### format is that it fills the slots whether a digit can be represented there or not.
So it does what the original question asks for some numbers but not others.
My basic need, and it's a pretty common one, is: if the number is big, I don't need to show decimal places; if the number is small, I do want to show decimal places. Basically, X number of significant digits.
The "Gn" format will do significant digits, but it switches to scientific notation if you go over the number of digits. I don't want E notation, ever (same requirement as the question).
So I used fixed format ("Fn") but I calculate the width on the fly based on how "big" the number is.
var myFloatNumber = 123.4567;
var digits = (int)Math.Log10(myFloatNumber);               // digits before the decimal point, minus one
var maxDecimalPlaces = 3;
var format = "F" + Math.Max(0, maxDecimalPlaces - digits);
var result = myFloatNumber.ToString(format);               // "123.5"
I swear there was a way to do this in C++ (Visual Studio flavor) in the format statement, or perhaps in C#, but I can't find it.
So I came up with this. I could have converted the number to a string and measured the length before the decimal point as well, but converting it to a string twice felt wrong.
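Wrapped up as a helper, it might look like this (a rough sketch; the helper name and the guards for zero and negative values are additions beyond the snippet above):
static string ToSignificantFixed(double value, int maxDecimalPlaces = 3)
{
    // Guard against Log10(0) and negative inputs before computing the magnitude.
    double magnitude = Math.Abs(value);
    int digits = magnitude < 1 ? 0 : (int)Math.Log10(magnitude);
    string format = "F" + Math.Max(0, maxDecimalPlaces - digits);
    return value.ToString(format);
}

// ToSignificantFixed(123.4567)  => "123.5"
// ToSignificantFixed(0.1234567) => "0.123"
// ToSignificantFixed(123456.0)  => "123456"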
I have encountered something very weird when it comes to the standard numeric format strings in C#. This is probably a known quirk, but I can't find anything documenting it and I can't figure out a way to actually get what I want.
I want to take a number like 17.929333333333489 and format it with no decimal places, so I just want "17". But when I run this code:
decimal what = 17.929333333333489m;
Console.WriteLine(what.ToString("F0"));
I get this output
18
Looking at Microsoft's documentation on it, their examples show the same output.
https://msdn.microsoft.com/en-us/library/kfsatb94(v=vs.110).aspx
// F: -195489100.84
// F0: -195489101
// F1: -195489100.8
// F2: -195489100.84
// F3: -195489100.838
Here is a code example showing the odd issue.
http://csharppad.com/gist/d67ddf5d0535c4fe8e39
This issue is not limited to standard specifiers like "F" and "N"; it also affects custom ones like "#".
How can I use the standard "F0" formatter and not have it round my number?
From the documentation on Standard Numeric Format Strings:
xx is an optional integer called the precision specifier. The precision specifier ranges from 0 to 99 and affects the number of digits in the result. Note that the precision specifier controls the number of digits in the string representation of a number. It does not round the number itself. To perform a rounding operation, use the Math.Ceiling, Math.Floor, or Math.Round method.
When precision specifier controls the number of fractional digits in the result string, the result strings reflect numbers that are rounded away from zero (that is, using MidpointRounding.AwayFromZero).
So the documentation does indeed discuss this, and there is no apparent way to prevent rounding of the output purely through a format string.
The best I can offer is to truncate the number yourself using Math.Truncate():
decimal what = 17.929333333333489m;
decimal truncatedWhat = Math.Truncate(what);
Console.WriteLine(truncatedWhat.ToString("F0"));
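If you need to keep some decimal places but still avoid rounding, a similar trick is to scale before truncating (a small extension of the same idea, not something the format string itself can do):
decimal what = 17.929333333333489m;

// Truncate to two decimal places without rounding: 17.92, not 17.93.
decimal truncatedToTwo = Math.Truncate(what * 100m) / 100m;
Console.WriteLine(truncatedToTwo.ToString("F2")); // "17.92"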
Note that the rounding happens at the given decimal place when the value is formatted; the "m" suffix just makes the literal a decimal.
Here is what I experimented with.
decimal what = 17.429333333333489m;
Console.WriteLine(what.ToString("F0"));
Console.WriteLine(what.ToString("N0"));
Console.WriteLine(what.ToString("F1"));
Console.WriteLine(what.ToString("N1"))
17
17
17.4
17.4
If you want to get 17, I used a different approach with an int cast and Math.Floor:
double what = 17.929333333333489;
Console.WriteLine(string.Format("{0:0}", (int)what));
Console.WriteLine(string.Format("{0:0}", what));
Console.WriteLine(string.Format("{0:0.00}", Math.Floor(what*100)/100));
Console.WriteLine(string.Format("{0:0.00}", what));
17
18
17.92
17.93
I am working on a calculator and I want a thousands separator in my string.
But when I do it like this:
double Answer = 12345;
tbAnswer.Text = Answer.ToString("n");
it gives me 12,345.00.
I just want the thousands separator, and if my double has 3 decimals it should show 3 decimals, if it has 2 then 2, and so on, like:
double Answer = 12345.1;   //12,345.1
double Answer = 12345.23;  //12,345.23
double Answer = 12345.456; //12,345.456
Is this possible, or do I have to stick with the minimum of 2 decimals?
There's not a standard format code that will do that - you'll have to use a custom format code:
Answer.ToString("#,###.######");
Note that there's not a format specifier that will provide an unlimited number of decimal places. If you want to support native types up through decimal (which can have up to 28 decimal places) you could use:
Answer.ToString("#,###.#############################");
But that's ugly, and showing that many digits of precision is rarely practical.
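For instance (a quick sketch; outputs assume an en-US culture):
double Answer = 12345.456;
Console.WriteLine(Answer.ToString("#,###.######")); // 12,345.456

Answer = 12345.1;
Console.WriteLine(Answer.ToString("#,###.######")); // 12,345.1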
When I try to convert the value $ 1.50 from a textbox to a decimal variable, like this:
decimal value = Convert.ToDecimal(textbox1.Text.Substring(1));
OR
decimal value = Decimal.Parse(textbox1.Text.Substring(1));
I get this result: 1.5.
I know that 1.5 and 1.50 are worth the same, but I want to know if it's possible to have two digits after the dot on a decimal variable.
I want to have 1.50 as the result instead of 1.5, even if these two values are worth the same.
You can have 1.50 or 1.500 or 1.5000, all depending on how you decide to format/print it.
Your decimal value is stored in a floating-point format. How many decimal places you see is a matter of output, not storage (at least until you reach the limit of the precision of the particular format, and 2 decimal places is nowhere close). A C# decimal stores up to 29 significant digits.
See this reference. It gives an example of a currency format. It prints something like:
My amount = $1.50
But, you aren't storing a $ sign..., so where does it come from? The same place the "1.50" comes from, it is in your format specifier.
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
Console.WriteLine("My amount = {0:C}", x);
var s = String.Format("My amount = {0:C}", x);
It is no different than saying, how do I store 1/3 (a repeating decimal)?
Well, it isn't 0.33, but if I only look at the first 2 digits, then it is 0.33. The closer I look (the more decimal places I ask for in the format), the more I get.
0.33333333333333... but that doesn't equal 0.330
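In code, that might look like this (a small sketch assuming an en-US culture, so "C" produces a dollar sign):
decimal x = 1.5m;
Console.WriteLine(x.ToString("F2"));                      // 1.50 -- always two decimal places
Console.WriteLine(String.Format("My amount = {0:C}", x)); // My amount = $1.50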
You're confusing storage of the numeric value with rendering it as a string (display).
decimal a = 1.5m;
decimal b = 1.50m;
decimal c = 1.500m;
In memory: the zeros are kept to keep track of how much precision is desired. See the link in the comment by Chris Dunaway below.
However, note these tests:
(a == b) is true
(b == c) is true
Equality comparison ignores the trailing zeros: your example creates them, but the comparison ignores them, as they're mathematically irrelevant.
Now how you convert to string is a different story:
a.ToString("N4") returns the string "1.5000" (b. and c. the same)
a.ToString("N2") returns the string "1.50"
As the link in the comment explains, if you just do a.ToString(), trailing zeros are retained.
If you store it in a database column as type 'decimal', it might be a different story - I haven't researched the results. These are the rules that .Net uses and while the databases might use different rules, these behaviours often follow official standards, so if you do your research you might find that the database behaves the same way!
The important thing to remember is that there is a difference between the way numbers are stored in memory and the way they are represented as strings. Floating-point numbers may not retain trailing zeros this way; it's up to the rules of the in-memory storage of the type (usually set by standards bodies in very specific, detailed ways).
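A short sketch tying these points together (the outputs are what I would expect from the rules above):
decimal a = 1.5m;
decimal b = 1.50m;

Console.WriteLine(a == b);           // True  -- trailing zeros are mathematically irrelevant
Console.WriteLine(a.ToString());     // 1.5   -- the scale of each literal is preserved
Console.WriteLine(b.ToString());     // 1.50
Console.WriteLine(a.ToString("N2")); // 1.50  -- or force a fixed number of places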
I assume that it has something to do with the number of leading or trailing zeroes, but I can't find anything on MSDN that gives me a concrete answer.
At what point does Double.ToString(CultureInfo.InvariantCulture) start to return values in scientific notation?
From the docs for Double.ToString(IFormatProvider):
This instance is formatted with the general numeric format specifier ("G").
From the docs for the General Numeric Format Specifier:
Fixed-point notation is used if the exponent that would result from expressing the number in scientific notation is greater than -5 and less than the precision specifier; otherwise, scientific notation is used. The result contains a decimal point if required, and trailing zeros after the decimal point are omitted. If the precision specifier is present and the number of significant digits in the result exceeds the specified precision, the excess trailing digits are removed by rounding.
However, if the number is a Decimal and the precision specifier is omitted, fixed-point notation is always used and trailing zeros are preserved.
The default precision specifier for Double is documented to be 15.
Although earlier in the table, it's worded slightly differently:
Result: The most compact of either fixed-point or scientific notation.
I haven't worked out whether those two are always equivalent for a Double value...
EDIT: As per Abel's comment:
Also, it is not always the most compact notation. 0.0001 is longer than 1E-04, but the first is output. The MS docs are not complete here.
That fits in with the more detailed description, of course. (As the exponent required is greater than -5 and less than 15.)
From the documentation it follows that the most compact form to represent the number will be chosen.
That is, when you do not specify a format string, the default is the "G" format string. From the specification of the G format string, it follows:
Result: The most compact of either fixed-point or scientific notation.
The default for the number of digits is 15 with the "G" specifier. That means that a number that is representable exactly within 15 digits (like 0.1 in the example of harriyott) will be displayed as fixed point, unless the exponential notation is more compact.
When there are more digits, it will, by default, display all these digits (up to 15) and choose exponential notation once that is shorter.
Putting this together:
?(1.0/7.0).ToString()
"0,142857142857143" // 15 digits
?(10000000000.0/7.0).ToString()
"1428571428,57143" // 15 significant digits, E-notation not shorter
?(100000000000000000.0/7.0).ToString()
"1,42857142857143E+16" // 15 sign. digits, above range for non-E-notation (15)
?(0.001/7.0).ToString()
"0,000142857142857143" // non E-notation is shorter
?(0.0001/7.0).ToString()
"1,42857142857143E-05" // E-notation shorter
And, of interest:
?(1.0/2.0).ToString()
"0,5" // exact representation
?(1.0/5.0).ToString()
"0,2" // rounded, zeroes removed
?(1.0/2.0).ToString("G20")
"0,5" // exact representation
?(1.0/5.0).ToString("G20")
"0,20000000000000001" // unrounded
This is to show what happens behind the scenes and why 0.2 is written as 0.2, not 0,20000000000000001, which it actually is. By default, 15 significant digits are shown. When there are more digits (and there always are, except for certain special numbers), these are rounded the normal way. After rounding, redundant zeroes are removed.
Note that a double has a precision of 15 or 16 digits, depending on the number. So, by showing 15 digits, what you see is a correctly rounded number, always a complete representation, and the shortest representation of the double.
It uses the formatter "G" (for "General"), which is specified to use "the most compact of either fixed-point or scientific notation" http://msdn.microsoft.com/en-us/library/dwhawy9k.aspx
So since the fixed-point 0.00001 is more characters than 1E-05 it will favour the scientific notation. I suppose if they're of equal length, it favours fixed-point.
I've just tried this with a loop:
// requires using System.Globalization for CultureInfo
double a = 1;
for (var i = 1; i < 10; i++)
{
    a = a / 10;
    Console.WriteLine(a.ToString(CultureInfo.InvariantCulture));
}
The output was:
0.1
0.01
0.001
0.0001
1E-05
1E-06
1E-07
1E-08
1E-09
Does anyone know of an elegant way to get the decimal part of a number only? In particular, I am looking to get the exact number of places after the decimal point so that the number can be formatted appropriately. I was wondering if there is a way to do this without any kind of string extraction using the culture-specific decimal separator...
For example
98.0 would be formatted as 98
98.20 would be formatted as 98.2
98.2765 would be formatted as 98.2765 etc.
If it's only for formatting purposes, just calling ToString will do the trick, I guess?
double d = (double)5 / 4;
Console.WriteLine(d.ToString()); // prints 1.25
d = (double)7 / 2;
Console.WriteLine(d.ToString()); // prints 3.5
d = 7;
Console.WriteLine(d.ToString()); // prints 7
That will, of course, format the number according to the current culture (meaning that the decimal sign, thousand separators and such will vary).
Update
As Clement H points out in the comments: if we are dealing with great numbers, at some point d.ToString() will return a string with scientific formatting instead (such as "1E+16" instead of "10000000000000000"). One way to overcome this problem, and force the full number to be printed, is to use d.ToString("0.#"), which will also produce the same output for lower numbers as the code sample above.
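A small sketch of that behaviour (1E+16 is large enough to trigger the scientific formatting; the exact threshold depends on the value):
double d = 10000000000000000d; // 1e16
Console.WriteLine(d.ToString());      // 1E+16 -- default "G" switches to scientific notation
Console.WriteLine(d.ToString("0.#")); // 10000000000000000

d = 3.5;
Console.WriteLine(d.ToString("0.#")); // 3.5 -- same as plain ToString for ordinary values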
You can get all of the relevant information from the Decimal.GetBits method assuming you really mean System.Decimal. (If you're talking about decimal formatting of a float/double, please clarify the question.)
Basically GetBits will return you 4 integers in an array.
You can use the scaling factor (the fourth integer, after masking out the sign) to indicate the number of decimal places, but you should be aware that it's not necessarily the number of significant decimal places. In particular, the decimal representations of 1 and 1.0 are different (the former is 1/1, the latter is 10/10).
Unfortunately, manipulating the 96 bit integer is going to require some fiddly arithmetic unless you can use .NET 4.0 and BigInteger.
To be honest, you'll get a simpler solution by using the built in formatting with CultureInfo.InvariantCulture and then finding everything to the right of "."
Just to expand on the point about GetBits, this expression gets the scaling factor from a decimal called foo:
(decimal.GetBits(foo)[3] & 16711680)>>16
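As a small illustration (the helper name is my own; 0x00FF0000 is the same mask as 16711680):
static int ScaleOf(decimal d) => (decimal.GetBits(d)[3] & 0x00FF0000) >> 16;

Console.WriteLine(ScaleOf(1m));       // 0 -- "1" has no decimal places
Console.WriteLine(ScaleOf(1.0m));     // 1 -- "1.0" keeps one decimal place
Console.WriteLine(ScaleOf(98.2765m)); // 4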
You could use the Int() function (Math.Truncate in C#) to get the whole-number component, then subtract it from the original.
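In C# that could look like this (a minimal sketch using Math.Truncate in place of VB's Int()):
decimal value = 98.2765m;
decimal whole = Math.Truncate(value); // 98
decimal fraction = value - whole;     // 0.2765
Console.WriteLine(fraction);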