Decimal Rounding Problem - C#

I have a textbox which is restricted to decimal input, but the SQL table column is restricted to Decimal(18,2). The problem is that people are able to enter more than 18 digits before the decimal point, plus any number of digits after it. Even if I do a
Math.Round(decimal, intDigits);
the digits before the decimal point can still exceed 18 and the insert will throw an error. So the question is: how do I round a decimal under this Decimal(18,2) restriction?

If you need to restrict the number of digits before the decimal point, rounding can't help you here. I would validate the textbox input with a regular expression to restrict the number of digits before and after the decimal point.

For the SQL Decimal(p,q) data type, p specifies the total precision (the number of decimal digits) and q specifies the scale (the number of digits to the right of the decimal point). In your case, Decimal(18,2), you've got 18 decimal digits, the rightmost 2 of which are to the right of the decimal point.
What you need to do is put validation constraints on your textbox. You can
use a regular expression to do the validation. The regular expression ^-?\d{1,16}(\.\d{1,2})?$ would do the trick. Omit the optional minus sign if you don't want to allow negative numbers.
Or you could validate against a range: convert to decimal, then check whether it is within the valid range of values.
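A minimal sketch combining both checks, assuming the input arrives as a string from the textbox (the method name and the invariant-culture parsing are my own choices):

using System;
using System.Globalization;
using System.Text.RegularExpressions;

static bool IsValidAmount(string input)
{
    // Decimal(18,2): at most 16 digits before the point, at most 2 after.
    if (!Regex.IsMatch(input, @"^-?\d{1,16}(\.\d{1,2})?$"))
        return false;

    // Belt-and-braces range check after conversion.
    decimal value;
    return decimal.TryParse(input, NumberStyles.Number, CultureInfo.InvariantCulture, out value)
        && Math.Abs(value) <= 9999999999999999.99m;
}

Console.WriteLine(IsValidAmount("12345678901234.56")); // True
Console.WriteLine(IsValidAmount("12345678901234567")); // False - 17 digits before the point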

By rounding the decimal you can run into the usual decimal rounding problems.
To avoid them you can truncate instead, something like this:
long myScaledInt = (long)(myDecimal * (decimal)Math.Pow(10, intDigits));
This gives you a concrete integer with everything beyond the desired precision, intDigits, truncated (long rather than int, since 16 integer digits scaled by 100 would overflow an int).
Or you can try to handle it on the UI side: do not let the user insert more digits than you can support.
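If you want to stay in decimal rather than go through an integer, a small helper along these lines does the same truncation (the name TruncateToDigits is my own):

using System;

static decimal TruncateToDigits(decimal value, int digits)
{
    // Scale up, drop the fractional part, scale back down.
    decimal factor = (decimal)Math.Pow(10, digits);
    return Math.Truncate(value * factor) / factor;
}

Console.WriteLine(TruncateToDigits(123.456m, 2)); // 123.45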

Related

Force decimal point and 0's after decimal with decimal.Parse

I'm converting a string to a decimal with decimal.Parse():
decimal.Parse(transactionAmount)
If transactionAmount contains a whole number such as 1, the result is a decimal value of 1. The system I'm sending it to outside of my program treats it as 1 cent for some unknown reason unless it shows up as 1.00. How can I make sure that whole numbers contain a decimal point and trailing zeroes, such as 1.00?
decimal stores the number of digits after the point (its scale) as part of its internal representation, so 1m and 1.00m are different decimal values. As a result, all parsing/formatting operations try to preserve that information when converting from/to string form unless forced otherwise.
One hack to make sure there are at least two digits after the decimal separator is to add a zero with the right scale, 0.00m:
decimal decimalOne = decimal.Parse("1"); // 1
decimal decimalWithTwoDigit = decimalOne + 0.00m; // 1.00
Note that it is unusual to send decimal values in binary form to an outside program. Most likely you actually need to format the decimal value with exactly two digits, as covered in Force two decimal places in C# - .ToString("#.00").
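For illustration, the formatting route (note that "0.00" prints a leading zero for values below one, where "#.00" would print ".00"):

decimal amount = decimal.Parse("1");
Console.WriteLine(amount.ToString("0.00")); // 1.00
Console.WriteLine(amount + 0.00m);          // 1.00 - the scale-preserving addition from above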
Try Convert.ToDecimal() instead of decimal.Parse()

C# print float values with more precision

I want to print floats with greater precision than the default.
For example, when I print the value of PI from a float I get 6 decimals. But if I copy the same value from the float into a double and print it, I get 14 decimals.
Why do I get more decimals when printing the same value as a double?
How can I get Console.WriteLine() to output more decimals when printing floats, without needing to copy the value into a double first?
I also tried the 0.0000000000 format, but it did not print with more precision; it just added more zeroes. :-/
My test code:
float x = (float)Math.PI;
double xx = x;
Console.WriteLine(x);
Console.WriteLine(xx);
Console.WriteLine($"{x,18:0.0000000000}"); // Try to force more precision
Console.WriteLine($"{xx,18:0.0000000000}");
Output:
3,141593
3,14159274101257
3,1415930000 <-- The program just added more zeroes :-(
3,1415927410
I also tried to enter PI at https://www.h-schmidt.net/FloatConverter/IEEE754.html
The binary float representation of PI is 0b01000000010010010000111111011011
So the value is: 2 * 0b1.10010010000111111011011 = 2 * 1.57079637050628662109375 = 3.1415927410125732421875
So there are more decimals to output. How would I get C# to output this whole value?
There is no more precision in a float. Converting it to a double does not make it more accurate either: the cast is exact, because every float value is exactly representable as a double. The extra digits you see when printing the double are simply more of the decimal expansion of that same float value; they are faithful to the float, but past the seventh significant digit they are wrong as digits of PI. This has everything to do with how binary floating point numbers work.
Let's look at the binary representation of the float:
01000000010010010000111111011011
The first bit is the sign, the next eight are the exponent, and the rest is the mantissa. When you cast it to a double, the exponent is re-biased and the mantissa is extended with zeroes, so the value stays exactly the same: 3.1415927410125732421875. The default double formatting simply shows more of that expansion (3.14159274101257) than the default float formatting does (3.141593). You get the illusion of greater precision, but it's only an illusion - the number is no closer to the correct double value of PI (3.14159265358979) than before; the extra digits describe the float's rounding error, not more of PI.
Also note that while each float converts to exactly one double, many doubles round to the same float, so the mapping is not 1:1 in the other direction. If what you want is a double that matches the float's short decimal form rather than its full binary expansion, you can round-trip through a string:
double.Parse(((float)Math.PI).ToString())
Instead of casting the float to a double, this first converts it to a decimal representation (a string) and creates the double from that. Printing it with the 0.0000000000 format gives 3.1415930000 on .NET Framework, where float.ToString() produces "3.141593" (newer .NET runtimes produce the shortest round-trippable form "3.1415927" instead, giving 3.1415927000). Either way it is still rounded, of course, since it's still a binary->decimal conversion, but the printed zeroes now really are zeroes to within the double's own rounding - the double no longer carries the float's full binary tail.
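A small sketch of the exactness of the cast (the G9 and G17 format specifiers request enough digits to round-trip a float and a double, respectively):

float x = (float)Math.PI;
double xx = x; // exact widening conversion - the value does not change

Console.WriteLine(x.ToString("G9"));   // 3.14159274         - all the float really holds
Console.WriteLine(xx.ToString("G17")); // 3.1415927410125732 - same value, longer expansion
Console.WriteLine(x == xx);            // True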
If you want real decimal precision, use a decimal representation instead (the decimal type, or integers holding scaled values). float and double are binary formats and only have binary precision. A finite decimal number isn't necessarily finite in binary, just like a finite ternary number isn't necessarily finite in decimal (e.g. 1/3 is finite in ternary but has no finite decimal representation).
A float in C# has a precision of roughly 7 significant digits and no more. For a value like PI that means one digit before the decimal point and six after.
If you see any more digits in your output, they might be entirely wrong.
If you do need more digits, you have to use either double, which has 15-16 digits of precision, or decimal, which has 28-29 digits.
See MSDN for reference.
You can easily verify that, as the digits of PI in your output drift from the true value. The correct first 40 digits are: 3.14159 26535 89793 23846 26433 83279 50288 4197
To print the value with 6 places after the decimal point:
Console.WriteLine("{0:N6}", (double)2/3);
Output :
0.666667

Error converting datatype numeric to decimal

I have a database column defined as amount decimal (19, 9).
In the UI it is a free-text field where the user can enter a value; whatever the user enters is then multiplied by 1000 internally.
Now when I am trying to save it to database, I get an error
Error converting datatype numeric to decimal
If I entered value as 78654323, now internally it gets multiplied by thousand
78654323 * 1000 = 78654323000
and on save I get this error. Why?
What validation/data annotation should I put on the textbox for the range ?
Assuming this is for SQL Server (you haven't said so - but it looks like it):
You have defined your amount column to be of type decimal(19,9) in the database. You're not getting 19 digits before and 9 digits after the decimal point (which is what a lot of programmers seem to think this notation means).
What it really means is:
you get a total of 19 digits
of that, there are 9 digits after the decimal point
and thus you get 10 digits before the decimal point.
When multiplying your entered value of 78654323 by one thousand, you end up with the value 78654323000, which has 11 digits before the decimal point, and thus you cannot store this value.
You need to increase the number of available digits before the decimal point (by using e.g. decimal(25,9) giving you 16 digits before the decimal point).
Your validation should check the total number of digits after the multiplication by 1000: the result must not exceed 19 digits in total (before and after the decimal point), which takes a little calculation based on the column's precision and scale.
In your example the value would go to the database as 78654323000.000000000, which would need decimal(20,9), not decimal(19,9); so the input has to be one digit shorter before the multiplication.
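As a rough sketch of that pre-save check (the limit of 10 integer digits follows from decimal(19,9); the method name is my own):

using System;

static bool FitsDecimal19Scale9(decimal enteredValue)
{
    decimal stored = enteredValue * 1000m;     // the internal scaling
    const decimal max = 9999999999.999999999m; // largest decimal(19,9) value
    return Math.Abs(stored) <= max;
}

Console.WriteLine(FitsDecimal19Scale9(78654323m)); // False - 78654323000 needs 11 integer digits
Console.WriteLine(FitsDecimal19Scale9(7865432m));  // True  - 7865432000 fits in 10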

Why does Convert.ToDecimal(Double) round to 15 significant figures?

I have a double with 17 digits after the decimal point, i.e.:
double myDouble = 0.12345678901234567;
If I convert this to a decimal like this:
decimal myDecimal = Convert.ToDecimal(myDouble);
then the value of myDecimal is rounded, as per the Convert.ToDecimal documentation, to 15 significant digits (i.e. 0.123456789012346). My question is, why is this rounding performed?
I understand that if my original number could be accurately represented in base 10 and I was trying to store it as a double, then we could only have confidence in the first 15 digits. The final two digits would be subject to rounding error. But, that's a base 10 biased point of view. My number may be more accurately represented by a double and I wish to convert it to decimal while preserving as much accuracy as possible.
Shouldn't Convert.ToDecimal aim to minimise the difference between myDouble and (double)Convert.ToDecimal(myDouble)?
From the documentation of Double:
A Double value has up to 15 decimal digits of precision, although a
maximum of 17 digits is maintained internally
So, as the double value itself is only reliable to 15 decimal digits, converting it to Decimal results in a Decimal value with 15 significant figures.
The behavior of the rounding guarantees that conversion of any Decimal which has at most fifteen significant figures to double and back to Decimal will yield the original value unchanged. If values were rounded to sixteen figures rather than fifteen, such a guarantee would not only fail to hold for number with sixteen figures, but it wouldn't even hold for much shorter values. For example, the closest Double value to 9000.04 is approximately 9000.040000000000873115; rounding that to sixteen figures would yield 9000.040000000001.
The choice of rounding depends upon whether one regards the best Decimal representation of the double value 9000.04 as being 9000.04m, 9000.040000000001m, 9000.0400000000008731m, or perhaps something else. Microsoft probably decided that any representation other than 9000.04m would be confusing.
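A quick sketch of the round-trip guarantee described above:

decimal original = 9000.04m;
double asDouble = (double)original;               // approximately 9000.040000000000873115
decimal roundTripped = Convert.ToDecimal(asDouble);
Console.WriteLine(roundTripped);                  // 9000.04 - the original value survives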
The following is from the documentation of the method in question.
http://msdn.microsoft.com/en-us/library/a69w9ca0(v=vs.110).aspx
"The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
Every terminating binary fraction is exactly representable as a decimal fraction, so the minimum possible difference for a finite number is always 0. The IEEE 754 64-bit representation of your number is exactly equal to 0.1234567890123456634920984242853592149913311004638671875
Every conversion from binary floating point to decimal or decimal string must embody some decision about rounding. In theory, they could preserve all the digits, but that would result in some very long outputs with most of the digits having little meaning.
One option, used in Java's Double.toString, is to stop at the shortest decimal representation that would convert back to the original double.
Most set some fairly arbitrary limit. 15 significant digits preserves most meaningful digits.
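For the value from the question, the effect looks like this (a minimal sketch):

double myDouble = 0.12345678901234567;
decimal myDecimal = Convert.ToDecimal(myDouble);
Console.WriteLine(myDecimal);                     // 0.123456789012346 - rounded to 15 significant digits
Console.WriteLine((double)myDecimal == myDouble); // False - the last two digits were dropped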

Formatting a double as a string correctly in C#?

I want to print out the string representation of a double without losing precision. Using ToString() I get the following when I try formatting it as a string:
double i = 101535479557522.5;
i.ToString(); //displays 101535479557523
How do I do this in C#?
If you want it to be exactly the same you should use i.ToString("r") (the "r" is for round-trip). You can read about the different numeric formats on MSDN.
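For example (the first line's output is what .NET Framework's default 15-digit formatting produces; newer .NET runtimes round-trip by default):

double i = 101535479557522.5;
Console.WriteLine(i.ToString());    // 101535479557523
Console.WriteLine(i.ToString("r")); // 101535479557522.5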
You're running into the limits of the precision of Double. From the docs:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
If you want more precision - and particularly maintaining a decimal representation - you should look at using the decimal type instead.
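A quick sketch of the decimal alternative (note the m suffix, so the literal never passes through double):

decimal i = 101535479557522.5m;
Console.WriteLine(i); // 101535479557522.5 - decimal keeps the stored digits exactly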
