I'm converting a string to a decimal with decimal.Parse():
decimal.Parse(transactionAmount)
If transactionAmount contains a whole number such as 1, the result is a decimal value of 1. The system I'm sending it to outside of my program treats it as 1 cent for some unknown reason unless it shows up as 1.00. How can I make sure that whole numbers contain a decimal point and a zero such as 1.0?
A decimal stores the number of digits after the decimal point as part of its internal representation, so 1m and 1.00m are different decimal values. As a result, all parsing/formatting operations will try to preserve that information coming from/to string form unless forced otherwise.
One hack to make sure there are at least two digits after the decimal separator is to add a suitably scaled zero, 0.00m:
decimal decimalOne = decimal.Parse("1"); // 1.
decimal decimalWithTwoDigit = decimalOne + 0.00m; // 1.00
Note that it is unusual to be sending decimal values in binary form to outside programs. Most likely you actually need to format the decimal value with exactly two decimal places, as covered in Force two decimal places in C#: .ToString("#.00").
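For illustration, a minimal sketch of both approaches, assuming the value arrives as the string "1" as in the question:
decimal amount = decimal.Parse("1");
Console.WriteLine(amount);                  // 1
Console.WriteLine(amount + 0.00m);          // 1.00 (adding the zero raises the scale)
Console.WriteLine(amount.ToString("0.00")); // 1.00 (formatting only; the stored value is unchanged)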
Try Convert.ToDecimal() instead of decimal.Parse()
Related
Is there a reason that a C# System.Decimal remembers the number of trailing zeros it was entered with?
See the following example:
public void DoSomething()
{
decimal dec1 = 0.5M;
decimal dec2 = 0.50M;
Console.WriteLine(dec1); //Output: 0.5
Console.WriteLine(dec2); //Output: 0.50
Console.WriteLine(dec1 == dec2); //Output: True
}
The decimals are classed as equal, yet dec2 remembers that it was entered with an additional zero. What is the reason/purpose for this?
It can be useful to represent a number including its accuracy - so 0.5m could be used to mean "anything between 0.45m and 0.55m" (with appropriate limits) and 0.50m could be used to mean "anything between 0.495m and 0.505m".
I suspect that most developers don't actually use this functionality, but I can see how it could be useful sometimes.
I believe this ability first arrived in .NET 1.1, btw - I think decimals in 1.0 were always effectively normalized.
I think it was done to provide a better internal representation for numeric values retrieved from databases. Database engines have a long history of storing numbers in a decimal format (avoiding rounding errors) with an explicit specification for the number of digits in the value.
Compare the SQL Server decimal and numeric column types for example.
Decimals represent fixed-precision decimal values. The literal value 0.50M has the two-decimal-place precision embedded, and so the decimal variable created remembers that it is a two-decimal-place value. This behaviour is entirely by design.
The comparison of the values is an exact numerical equality check, so here, trailing zeroes do not affect the outcome.
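As a quick illustration of how the scale carries through arithmetic (a sketch; these are the standard System.Decimal scale rules):
decimal dec1 = 0.5M;
decimal dec2 = 0.50M;
Console.WriteLine(dec1 + dec2); // 1.00 (addition keeps the larger scale)
Console.WriteLine(dec1 * dec2); // 0.250 (multiplication adds the scales)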
I am currently formatting a double using the code:
myDouble.ToString("g4");
To get the first 4 decimal places. However, I find this often switches over to scientific notation if the number is very large or very small. Is there an easy format string in C# to just have the first four decimal places, or zero if it is too small to be represented in that number of places?
For example, I would like:
1000 => 1000
0.1234567 => 0.1235
123456 => 123456 (Note: Not scientific notation)
0.000001234 => 0 (Note: Not scientific notation)
You can try it like this:
0.1234567.ToString("0.####")
Also check Custom Numeric Format Strings
#
Replaces the "#" symbol with the corresponding digit if one is present; otherwise, no digit appears in the result string.
Also, as Jon has correctly pointed out, it will round your number. See the note section:
Rounding and Fixed-Point Format Strings
For fixed-point format strings (that is, format strings that do not contain scientific notation format characters), numbers are rounded to as many decimal places as there are digit placeholders to the right of the decimal point.
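As a quick check against the sample values from the question (the d suffix just makes the whole-number literals doubles):
Console.WriteLine(1000d.ToString("0.####"));        // 1000
Console.WriteLine(0.1234567.ToString("0.####"));    // 0.1235 (rounded at the fourth place)
Console.WriteLine(123456d.ToString("0.####"));      // 123456 (no scientific notation)
Console.WriteLine(0.000001234.ToString("0.####"));  // 0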
Use the String.Format() method.
String.Format("{0:0.####}", 123.4567123); //output: 123.4567
Note: the number of #'s indicates the maximum number of digits required after the decimal point.
I agree with kjbartel's comment.
I wanted exactly what the original question asked, but his question is slightly ambiguous.
The problem with the ### format is that it fills the slots whenever digits can be represented there, whether you want them shown or not.
So it does what the original question asks for some numbers but not for others.
My basic need is a pretty common one: if the number is big, I don't need to show decimal places; if the number is small, I do want to show decimal places. Basically, X number of significant digits.
The "Gn" format will do significant digits, but it switches to scientific notation if you go over the number of digits. I don't want E notation, ever (same requirement as the question).
So I used the fixed format ("Fn"), but I calculate the width on the fly based on how "big" the number is:
var myFloatNumber = 123.4567;
var digits = (int)Math.Log10(Math.Abs(myFloatNumber)); // digits before the decimal point, minus one
var maxDecimalplaces = 3;
var format = "F" + Math.Max(0, maxDecimalplaces - digits);
var formatted = myFloatNumber.ToString(format); // "123.5" for this input
I swear there was a way to do this in C++ (Visual Studio flavor) in the format statement, and perhaps there is in C# too, but I can't find it.
So I came up with this. I could have converted it to a string and measured the length before the decimal point as well, but converting it to a string twice felt wrong.
I have a textbox which is restricted to decimal input, but the SQL table column is Decimal(18,2). The problem is that people are able to enter more than 18 digits before the decimal point, plus any number of digits after it, and even if I do a
Math.Round(decimal,intDigits);
the digits before the decimal point can still exceed 18 and the insert throws an error. So the question is how to round a decimal given the Decimal(18,2) restriction.
If you need to restrict the number of digits before the decimal point, rounding can't help you here. I would validate the text box input with a regular expression to restrict the number of digits both before and after the decimal point.
For the SQL Decimal(p,q) data type, p specifies the total precision (the total number of decimal digits), and q specifies the scale (the number of digits to the right of the decimal point). In your case (Decimal(18,2)), you've got 18 decimal digits, the rightmost 2 of which are to the right of the decimal point.
What you need to do is put validation constraints on your text box. You can either:
- use a regular expression to do the validation. The regular expression ^-?\d{1,16}(\.\d{1,2})?$ would do the trick. Omit the optional minus sign if you don't want them to be able to enter negative numbers; or
- validate against a range: convert to Decimal, then check to see if it is within the valid range of values.
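A minimal sketch of the regular-expression option (IsValidAmount is just an illustrative name):
using System.Text.RegularExpressions;

static bool IsValidAmount(string text)
{
    // up to 16 digits before the decimal point, up to 2 after, optional leading minus
    return Regex.IsMatch(text, @"^-?\d{1,16}(\.\d{1,2})?$");
}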
By rounding the decimal you can run into the usual decimal rounding problems.
To avoid this you can try to do something like this:
int myDecimalInt = (int)(myDecimal * (decimal)Math.Pow(10, intDigits)); // shift intDigits places left of the point, then truncate
This gives you a concrete integer with everything beyond the desired precision (intDigits digits after the point) truncated.
Or you can try to handle it on the UI side: do not let the user insert more digits than you can support.
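For completeness, a sketch combining the two earlier suggestions (round to two places, then range-check before sending the value to SQL); transactionValue is an illustrative name:
decimal rounded = Math.Round(transactionValue, 2);
bool fitsDecimal18_2 = Math.Abs(rounded) < 10000000000000000m; // 10^16, i.e. at most 16 digits before the point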
I want to print out the string representation of a double without losing precision. Using ToString() I get the following when I try formatting it as a string:
double i = 101535479557522.5;
i.ToString(); //displays 101535479557523
How do I do this in C#?
If you want it to be exactly the same you should use i.ToString("r") (the "r" is for round-trip). You can read about the different numeric formats on MSDN.
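For example (the exact default output can differ between .NET versions):
double i = 101535479557522.5;
Console.WriteLine(i.ToString());    // 101535479557523 on .NET Framework (15 significant digits)
Console.WriteLine(i.ToString("r")); // 101535479557522.5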
You're running into the limits of the precision of Double. From the docs:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
If you want more precision - and particularly maintaining a decimal representation - you should look at using the decimal type instead.
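For example, the value from the question fits comfortably in a decimal, which keeps 28-29 significant digits:
decimal d = 101535479557522.5m;
Console.WriteLine(d); // 101535479557522.5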
Does anyone know of an elegant way to get the decimal part of a number only? In particular I am looking to get the exact number of places after the decimal point so that the number can be formatted appropriately. I was wondering if there is a way to do this without any kind of string extraction using the culture-specific decimal separator...
For example
98.0 would be formatted as 98
98.20 would be formatted as 98.2
98.2765 would be formatted as 98.2765 etc.
If it's only for formatting purposes, just calling ToString will do the trick, I guess?
double d = (double)5 / 4;
Console.WriteLine(d.ToString()); // prints 1.25
d = (double)7 / 2;
Console.WriteLine(d.ToString()); // prints 3.5
d = 7;
Console.WriteLine(d.ToString()); // prints 7
That will, of course, format the number according to the current culture (meaning that the decimal sign, thousand separators and such will vary).
Update
As Clement H points out in the comments: if we are dealing with very large numbers, at some point d.ToString() will return a string with scientific formatting instead (such as "1E+16" instead of "10000000000000000"). One way to overcome this problem, and force the full number to be printed, is to use d.ToString("0.#"), which also produces the same output as the code sample above for smaller numbers.
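For example, a quick sketch of the large-number case:
double d = 10000000000000000;
Console.WriteLine(d.ToString());      // 1E+16
Console.WriteLine(d.ToString("0.#")); // 10000000000000000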
You can get all of the relevant information from the Decimal.GetBits method assuming you really mean System.Decimal. (If you're talking about decimal formatting of a float/double, please clarify the question.)
Basically GetBits will return you 4 integers in an array.
You can use the scaling factor (the fourth integer, after masking out the sign) to indicate the number of decimal places, but you should be aware that it's not necessarily the number of significant decimal places. In particular, the decimal representations of 1 and 1.0 are different (the former is 1/1, the latter is 10/10).
Unfortunately, manipulating the 96 bit integer is going to require some fiddly arithmetic unless you can use .NET 4.0 and BigInteger.
To be honest, you'll get a simpler solution by using the built-in formatting with CultureInfo.InvariantCulture and then finding everything to the right of ".":
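A sketch of that string-based approach, assuming a System.Decimal input as above:
using System.Globalization;

decimal foo = 98.2765m;
string s = foo.ToString(CultureInfo.InvariantCulture); // "98.2765"
int pointIndex = s.IndexOf('.');
int decimalPlaces = pointIndex < 0 ? 0 : s.Length - pointIndex - 1; // 4 here; 2 for 98.20m, since the trailing zero is preserved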
Just to expand on the point about GetBits, this expression gets the scaling factor from a decimal called foo:
(decimal.GetBits(foo)[3] & 16711680)>>16
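For example, this shows the different scales stored for 1m and 1.00m (16711680 is 0x00FF0000, the byte of bits[3] that holds the scale):
Console.WriteLine((decimal.GetBits(1m)[3] & 16711680) >> 16);    // 0
Console.WriteLine((decimal.GetBits(1.00m)[3] & 16711680) >> 16); // 2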
You could use the Int() function to get the whole number component, then subtract from the original.
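Int() looks like a VB function; a C# sketch of the same idea would use Math.Truncate (assuming a decimal input):
decimal number = 98.2765m;
decimal wholePart = Math.Truncate(number); // 98
decimal fraction = number - wholePart;     // 0.2765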