UPDATE
It's so simple...
When I try to convert the value $ 1.50 from a textbox to a decimal variable, like this:
decimal value = Convert.ToDecimal(textbox1.Text.Substring(1));
OR
decimal value = Decimal.Parse(textbox1.Text.Substring(1));
I get this result: 1.5.
I know that 1.5 and 1.50 are worth the same, but I want to know if it's possible to keep two digits after the dot in a decimal variable.
I want to have 1.50 as the result instead of 1.5, even if these two values are worth the same...
I want to have 1.50 as the result instead of 1.5, even if these two values are worth the same...
You can have 1.50 or 1.500 or 1.5000, all depending on how you decide to format/print it.
Your decimal value is stored in a (decimal) floating-point format. How many decimal places you see is a matter of output, not storage (at least until you reach the precision limit of the particular format, and 2 decimal places is nowhere close). A C# Decimal stores up to 28-29 significant digits.
See this reference. It gives an example of a currency format. It prints something like:
My amount = $1.50
But you aren't storing a $ sign, so where does it come from? The same place the "1.50" comes from: it's in your format specifier.
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
Console.WriteLine("My amount = {0:C}", x);
var s = String.Format("My amount = {0:C}", x);
It is no different from asking: how do I store 1/3 (a repeating decimal)?
Well, it isn't 0.33, but if I only look at the first 2 digits, then it is 0.33. The closer I look (the more decimal places I ask for in the format), the more I get.
0.33333333333333... but that doesn't equal 0.330.
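Putting that back in terms of the original question, a minimal sketch (assuming, as in the question, that textbox1 holds something like "$ 1.50"):

decimal x = decimal.Parse(textbox1.Text.Substring(1)); // strips the "$", leaves " 1.50"

Console.WriteLine("My amount = {0:C}", x);  // "My amount = $1.50" (currency format, current culture)
Console.WriteLine(x.ToString("F2"));        // "1.50" - fixed-point with two decimal places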
You're confusing storage of the numeric value with rendering it as a string (display).
decimal a = 1.5m;
decimal b = 1.50m;
decimal c = 1.500m;
In memory: the zeros are kept to keep track of how much precision is desired. See the link in the comment by Chris Dunaway below.
However, note these tests:
(a == b) is true
(b == c) is true
Parsing keeps the trailing zeros (as the link explains), but the equality comparison ignores them, as they're mathematically irrelevant.
Now how you convert to string is a different story:
a.ToString("N4") returns the string "1.5000" (b. and c. the same)
a.ToString("N2") returns the string "1.50"
As the link in the comment explains, if you just call a.ToString(), trailing zeros are retained.
If you store it in a database column of type 'decimal', it might be a different story - I haven't researched that. These are the rules .NET uses, and while databases might use different rules, such behaviour often follows official standards, so if you do your research you may find the database behaves the same way.
The important thing to remember is that there is a difference between the way numbers are stored in memory and the way they are represented as strings. Binary floating-point numbers may not retain trailing zeros this way; it's up to the rules of the in-memory storage of the type (usually set by standards bodies in very specific, detailed ways).
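Putting those pieces together, a short runnable sketch of the behaviour described above:

decimal a = 1.5m;
decimal b = 1.50m;
decimal c = 1.500m;

Console.WriteLine(a == b);           // True - same numeric value
Console.WriteLine(b == c);           // True
Console.WriteLine(a.ToString("N4")); // "1.5000" - the format decides what you see
Console.WriteLine(a.ToString("N2")); // "1.50"
Console.WriteLine(b.ToString());     // "1.50" - plain ToString keeps the stored scale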
Related
Is there a reason that a C# System.Decimal remembers the number of trailing zeros it was entered with?
See the following example:
public void DoSomething()
{
    decimal dec1 = 0.5M;
    decimal dec2 = 0.50M;

    Console.WriteLine(dec1);         // Output: 0.5
    Console.WriteLine(dec2);         // Output: 0.50
    Console.WriteLine(dec1 == dec2); // Output: True
}
The decimals are classed as equal, yet dec2 remembers that it was entered with an additional zero. What is the reason/purpose for this?
It can be useful to represent a number including its accuracy - so 0.5m could be used to mean "anything between 0.45m and 0.55m" (with appropriate limits) and 0.50m could be used to mean "anything between 0.495m and 0.505m".
I suspect that most developers don't actually use this functionality, but I can see how it could be useful sometimes.
I believe this ability first arrived in .NET 1.1, btw - I think decimals in 1.0 were always effectively normalized.
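To see the remembered scale in action, a small sketch - assuming .NET's usual scale rules (addition keeps the larger operand's scale, multiplication adds the scales):

decimal half = 0.5m;
decimal alsoHalf = 0.50m;

Console.WriteLine(half == alsoHalf); // True  - numerically equal
Console.WriteLine(half);             // 0.5
Console.WriteLine(alsoHalf);         // 0.50  - the extra zero was remembered
Console.WriteLine(half + alsoHalf);  // 1.00  - result takes the larger scale
Console.WriteLine(half * alsoHalf);  // 0.250 - scales add under multiplication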
I think it was done to provide a better internal representation for numeric values retrieved from databases. Database engines have a long history of storing numbers in a decimal format (avoiding rounding errors) with an explicit specification for the number of digits in the value.
Compare the SQL Server decimal and numeric column types for example.
Decimals represent fixed-precision decimal values. The literal value 0.50M has the 2 decimal place precision embedded, and so the decimal variable created remembers that it is a 2 decimal place value. Behaviour is entirely by design.
The comparison of the values is an exact numerical equality check on the values, so here, trailing zeroes do not affect the outcome.
I have a class that does some length calculations based on a height on a ticket. It's been in place for years and working quite well... Until we got a unique ticket size.
They are entered by salespeople in inches and are normally nice numbers like 3, 4 or 3.5, and are stored in a database. This one, however, is 3.66666 recurring (i.e. 11/3), but it is being entered as 3.666 and causing the calculation to fail due to lost precision.
I have thought of a bit of a hack to restore precision for certain numbers, but thought maybe someone knows of a better way of getting a 3.666 or a 93.1333 back to its number + two thirds status?
Thanks,
Mick.
As you explained in the comments, I see your point now. I've checked the numbers:
168000 / 3.666 = 45826.5139
168000 / 3.666666 = 45818.1901488
168000 * 3 / 11 = 45818.1818182
It makes a difference of 8 tickets. I have a feeling that your issue can be solved in many ways - on the user-input side, for example, or on the database side. But back to your question:
How do I convert 3.666 or a 93.1333 back to its number + two thirds status?
You are looking for converting decimal (or double) to fraction.
There is already a question on SO: Algorithm for simplifying decimal to fractions, which has many answers. I've tested some of them, and none were satisfying. Some of them don't even handle recurrence. Perhaps I've missed the correct one; you can look for yourself.
Anyway, I believe you don't need to fully implement a conversion from 1.666 to 5/3, since that's not easy and you have real-world sizes. You've said that most of the time the numbers are around 3, 3.5, 4, etc. So I suggest you take a look at the question linked above and search for an algorithm for detecting the recurring digits. It was also discussed here: How to know the repeating decimal in a fraction?
After that, just convert 1.666 to 1.666666, since 1/1000000 of an inch won't mess up your calculations, as the numbers above show.
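If the awkward sizes are known to be thirds of an inch, another option is simply to snap the entered value to the nearest third within a small tolerance, rather than doing a general decimal-to-fraction conversion. A rough sketch (the SnapToThirds name and the 0.005" tolerance are illustrative assumptions, not anything from the question):

static decimal SnapToThirds(decimal inches, decimal tolerance = 0.005m)
{
    // Round to the nearest multiple of 1/3" and keep it only if the input was close to it
    decimal snapped = Math.Round(inches * 3m) / 3m;
    return Math.Abs(snapped - inches) <= tolerance ? snapped : inches;
}

// SnapToThirds(3.666m) -> 3.6666666666666666666666666667 (i.e. 11/3)
// SnapToThirds(3.5m)   -> 3.5 (unchanged: not near a third)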
It would be difficult to get the accurate value of a double, since double is a floating-point type. MSDN says:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:
Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.
A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used, because the floating-point number might not exactly approximate the decimal number.
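A quick sketch of the first consequence (two doubles that look equal but aren't):

double sum = 0.1 + 0.2;
Console.WriteLine(sum == 0.3);          // False - the binary representations differ slightly
Console.WriteLine(sum.ToString("G17")); // 0.30000000000000004 - the hidden digits explain why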
I am currently formatting a double using the code:
myDouble.ToString("g4");
To get the first 4 decimal places. However I find this often switches over to scientific notation if the number is very large or very small. Is there an easy format string in C# to just have the first four decimal places, or zero if it is too small to be represented in that number of places?
For example, I would like:
1000 => 1000
0.1234567 => 0.1235
123456 => 123456 (Note: Not scientific notation)
0.000001234 => 0 (Note: Not scientific notation)
You can try it like this:
0.1234567.ToString("0.####")
Also check Custom Numeric Format Strings
#
Replaces the "#" symbol with the corresponding digit if one is present; otherwise, no digit appears in the result string.
Also, as Jon has correctly pointed out, it will round your number. See the note section:
Rounding and Fixed-Point Format Strings
For fixed-point format strings (that is, format strings that do not contain scientific notation format characters), numbers are rounded to as many decimal places as there are digit placeholders to the right of the decimal point.
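Applied to the numbers from the question, a quick sketch of what "0.####" produces:

Console.WriteLine((1000).ToString("0.####"));        // 1000
Console.WriteLine((0.1234567).ToString("0.####"));   // 0.1235 (rounded)
Console.WriteLine((123456).ToString("0.####"));      // 123456 (no scientific notation)
Console.WriteLine((0.000001234).ToString("0.####")); // 0 (too small for four decimal places)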
Use the String.Format() method.
String.Format("{0:0.####}", 123.4567123); //output: 123.4567
Note: the number of #'s indicates the maximum number of digits to show after the decimal point.
I agree with kjbartel's comment.
I wanted exactly what the original question asked. But his question is slightly ambiguous.
The problem with the ### format is that it always allows digits in those decimal slots, however large the number is. So it does what the original question asks for some numbers but not others.
My basic need (and it's a pretty common one) is: if the number is big, I don't need to show decimal places; if the number is small, I do want to show decimal places. Basically, X significant digits.
The "Gn" Format will do significant digits, but it switches to scientific notation if you go over the number of digits. I don't want E notation, ever (same requirement as the question).
So I used fixed format ("Fn") but I calculate the width on the fly based on how "big" the number is.
var myFloatNumber = 123.4567;
var digits = (int)Math.Log10(myFloatNumber);               // integer digits minus one (2 for this value)
var maxDecimalplaces = 3;
var format = "F" + Math.Max(0, maxDecimalplaces - digits); // "F1" here
var result = myFloatNumber.ToString(format);               // "123.5"
I swear there was a way to do this in C++ (Visual Studio flavor) in the format statement, or in C#, and perhaps there is, but I can't find it.
So I came up with this. I could have converted to a string and measured the length before the decimal point as well, but converting it to a string twice felt wrong.
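For what it's worth, here's roughly how that behaves across magnitudes, wrapped into a helper (the FormatSignificant name is my own; a sketch, not a polished implementation):

static string FormatSignificant(double value, int maxDecimalPlaces = 3)
{
    var digits = value == 0 ? 0 : (int)Math.Log10(Math.Abs(value)); // integer digits minus one
    var format = "F" + Math.Max(0, maxDecimalPlaces - digits);
    return value.ToString(format);
}

// FormatSignificant(0.1234567) -> "0.123"  (Log10 truncates to 0, so "F3")
// FormatSignificant(123.4567)  -> "123.5"  ("F1")
// FormatSignificant(123456)    -> "123456" ("F0" - never scientific notation)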
Does anyone know of an elegant way to get just the decimal part of a number? In particular I am looking to get the exact number of places after the decimal point, so that the number can be formatted appropriately. I was wondering if there is a way to do this without any kind of string extraction using the culture-specific decimal separator...
For example
98.0 would be formatted as 98
98.20 would be formatted as 98.2
98.2765 would be formatted as 98.2765 etc.
If it's only for formatting purposes, just calling ToString will do the trick, I guess?
double d = (double)5 / 4;
Console.WriteLine(d.ToString()); // prints 1.25
d = (double)7 / 2;
Console.WriteLine(d.ToString()); // prints 3.5
d = 7;
Console.WriteLine(d.ToString()); // prints 7
That will, of course, format the number according to the current culture (meaning that the decimal sign, thousand separators and such will vary).
Update
As Clement H points out in the comments: if we are dealing with large numbers, at some point d.ToString() will return a string with scientific formatting instead (such as "1E+16" instead of "10000000000000000"). One way to overcome this problem, and force the full number to be printed, is to use d.ToString("0.#"), which also produces the same output as the code sample above for smaller numbers.
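A quick sketch of that difference:

double big = 10000000000000000; // 1e16
Console.WriteLine(big.ToString());      // may print "1E+16"
Console.WriteLine(big.ToString("0.#")); // "10000000000000000"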
You can get all of the relevant information from the Decimal.GetBits method assuming you really mean System.Decimal. (If you're talking about decimal formatting of a float/double, please clarify the question.)
Basically GetBits will return you 4 integers in an array.
You can use the scaling factor (the fourth integer, after masking out the sign) to indicate the number of decimal places, but you should be aware that it's not necessarily the number of significant decimal places. In particular, the decimal representations of 1 and 1.0 are different (the former is 1/1, the latter is 10/10).
Unfortunately, manipulating the 96 bit integer is going to require some fiddly arithmetic unless you can use .NET 4.0 and BigInteger.
To be honest, you'll get a simpler solution by using the built in formatting with CultureInfo.InvariantCulture and then finding everything to the right of "."
Just to expand on the point about GetBits, this expression gets the scaling factor from a decimal called foo:
(decimal.GetBits(foo)[3] & 16711680) >> 16
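Wrapped into a small helper (the GetScale name is mine), and applied to the examples from the question:

static int GetScale(decimal value)
{
    // The stored scale (digits after the decimal point) lives in bits 16-23
    // of the fourth element returned by decimal.GetBits
    return (decimal.GetBits(value)[3] & 0x00FF0000) >> 16;
}

// GetScale(98.0m)    -> 1
// GetScale(98.20m)   -> 2
// GetScale(98.2765m) -> 4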
You could use a truncation function (Math.Truncate in C#) to get the whole-number component, then subtract it from the original.
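In C# that might look like this (a small sketch):

double number = 98.2765;
double wholePart = Math.Truncate(number); // 98
double fraction = number - wholePart;     // ~0.2765, subject to binary floating-point error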
I have the following test code:
decimal test1 = 0.0500000000000000045656554454M;
double test2 = (double)test1;
This results in test2 showing as 0.05 when debugging. Why is it being rounded to 2 decimal places?
Thanks
The value from that conversion is actually 0.050000000000000009714451465470119728706777095794677734375, as shown by DoubleConverter. That's the exact value of the nearest double to the decimal you converted.
When you use the debugger or normal string formatting, you aren't usually shown the exact result.
The reason is that a double can hold no more than 15-16 significant digits.
See double (C# Reference).
You should take a look at this article about floating-point arithmetic and .NET. The rounding occurs due to a combination of how the number gets converted to a double-precision floating-point value and how it is formatted when printed, since .NET defaults to 15 decimal digits for doubles, and your original number contains digits past the 15th decimal place.
You could try test2.ToString("0.000000000000000000000000") to see if you might squeeze out any more information from the number, but I doubt it will.
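Alternatively, the round-trip formats should show everything the double actually stores (a quick sketch):

Console.WriteLine(test2.ToString("R"));   // round-trip format
Console.WriteLine(test2.ToString("G17")); // 17 significant digits - enough to expose what hides behind "0.05"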
There are two reasons I can think of:
Due to the different representation of decimal and double. See this article for more information about floating point representation. It is possible that there are not enough bits for the whole number representation in the double.
Due to the way numbers are printed. It is possible that your printing options specify fewer than 18 digits after the decimal point - in which case you'll get the rounded result.
I would check the printing options first, to make sure the problem isn't there.
...But know that the only solution for the first problem is to stop using double :-)