Issue with calculating financial values using float in C#

I was reading an article about the difference between float and double, and it gave the following example:
Say you have a $100 item, and you give a 10% discount. Your prices are all in full dollars, so you use int variables to store prices. Here is what you get:
int fullPrice = 100;
float discount = 0.1F;
Int32 finalPrice = (int)(fullPrice * (1-discount));
Console.WriteLine("The discounted price is ${0}.", finalPrice);
Guess what: the final price is $89, not the expected $90. Your customers will be happy, but you won't. You've given them an extra 1% discount.
In the above example, to calculate the final price they used fullPrice * (1 - discount). Why did they use (1 - discount)? Shouldn't it be fullPrice * discount?
So my confusion is about the logic for calculating the final price: why did they use (1 - discount) instead of discount?

the final price is $89, not the expected $90
That's because when 0.1 is a float, it is not exactly 0.1; it's a little more. Subtract it from 1, multiply by the price, and you get $89.9999998509884. Casting to int truncates the result to 89 (demo).
You can make this work by using the decimal data type for your discount. That type can represent 0.1 without precision loss (demo).
why they used (1-discount)
1 represents 100%. The price after the discount is (100% - 10%) = 90% of the full price.
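For illustration, here is a minimal sketch of the same calculation with the discount held in a decimal (the variable names follow the question's example):
int fullPrice = 100;
decimal discount = 0.1m;                                // 0.1 is represented exactly as a decimal
int finalPrice = (int)(fullPrice * (1 - discount));     // 100 * 0.9m == 90.0m, so the cast yields 90
Console.WriteLine("The discounted price is ${0}.", finalPrice);  // prints $90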

This question is actually about math.
Let's say you have an item which costs $100.
The seller gives you a 10% discount.
Now you need to calculate the final price of the item.
I.e., how much money do you have to hand over to get it?
The answer: you pay the full price of the item minus the discount.
100% of the cost - 10% discount = 90% final price
That's why it is fullPrice * (1 - discount).
If you calculated it with your formula, fullPrice * discount, it would mean that an item costing $100 is sold for $10 because of a 10% discount, which is incorrect. The formula fullPrice * discount actually gives the discount amount, not the final price.
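A short sketch of the two formulas side by side (decimal is used here just to keep the arithmetic exact):
decimal fullPrice = 100m;
decimal discount = 0.10m;                             // 10%
decimal discountAmount = fullPrice * discount;        // 10.00 -> the amount taken off
decimal finalPrice = fullPrice * (1 - discount);      // 90.00 -> what the customer actually pays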

There is nothing wrong with the overall logic of the above example, but it does have a very unfortunate choice of data types. The values are implicitly converted to double, introducing a slight rounding error in the process. Casting back to int then truncates the result, which greatly amplifies that rounding error.
This is a good example of a problem that frequently occurs when dealing with financial values in programming:
Float values do not translate well to decimal fractions.
Most new developers tend to think of float values as decimal fractions, because that is how they are usually represented when converted to and from strings. This is not the case: float values store their fractional part as a binary fraction, as described here.
This makes float values (and calculations with them) slightly askew from their decimal representations. This is why the following results in $89.9999998509884:
double fullPrice = 100;
double discount = 0.1F;
double finalPrice = (fullPrice * (1 - discount));
Console.WriteLine("The discounted price is ${0}.", finalPrice);
(Not so) fun fact: the above will work fine when using float as the data type, because in this example the aforementioned error lies below the resolution of single-precision values and the generated code does the arithmetic in double precision behind the scenes. When the result is converted back to single precision, the error gets lost.
One way out of this problem is to use the decimal data type, which was constructed for calculations that have to translate directly to decimal fractions (as financial calculations do):
decimal fullPrice = 100;
decimal discount = 0.1m;
decimal finalPrice = (fullPrice * (1 - discount));
Console.WriteLine("The discounted price is ${0}.", finalPrice);
Another would be to round all results of floating-point calculations before displaying or storing them. For financial calculations one should use:
Math.Round([your value here], 2, MidpointRounding.AwayFromZero);
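Applied to the example above, rounding the double result before display would look roughly like this:
double fullPrice = 100;
double discount = 0.1F;
double finalPrice = fullPrice * (1 - discount);                              // 89.9999998509884
double rounded = Math.Round(finalPrice, 2, MidpointRounding.AwayFromZero);   // 90
Console.WriteLine("The discounted price is ${0}.", rounded);                 // prints $90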

Related

When should I use integer for arithmetic operations instead of float/double

An SO user told me that I should not use float/double for things like student grades; see the last comments: SequenceEqual() is not equal with custom class and float values
"Because it is often easier and safer to do all arithmetic in integers, and then convert to a suitable display format at the last minute, than it is to attempt to do arithmetic in floating point formats. "
I tried what he said but the result is not satisfying.
int grade1 = 580;
int grade2 = 210;
var average = (grade1 + grade2) / 2;
string result = string.Format("{0:0.0}", average / 100);
result is "3,0"
double grade3 = 5.80d;
double grade4 = 2.10d;
double average1 = (grade3 + grade4) / 2;
double averageFinal = Math.Round(average1);
string result1 = string.Format("{0:0.0}", averageFinal);
result1 is "4,0"
I would expect 4,0 because 3,95 should result in 4,0. That worked because I used Math.Round, which again only works on a double or decimal; it would not work on an integer.
So what am I doing wrong here?
First of all, the specific problem you cite is one that vexes me greatly. You almost never want to do a problem in integer arithmetic and then convert it to a floating point type, because the computation will be done entirely in integers. I wish the C# compiler warned about this one; I see this all the time.
Second, the reason to prefer integer or decimal arithmetic to double arithmetic is that a double can only represent with perfect accuracy a fraction whose denominator is a power of two. When you say 0.1 in a double, you don't get 1/10, because 1/10 is not a fraction whose denominator is any power of two. You get the fraction that is closest to 1/10 that does have a power of two in the denominator.
This usually is "close enough", right up until it isn't. It is particularly nasty when you have tiny errors close to hard cutoffs. You want to say, for instance, that a student must have a 2.4 GPA in order to meet some condition, and the computations you do involving fractions with two in the denominator just happen to work out to 2.39999999999999999956...
Now, you do not necessarily get away from these problems with decimal arithmetic; decimal arithmetic has the same restriction: it can only represent numbers that are fractions with powers of ten in the denominator. You try to represent 1/3, and you're going to get a small, but non-zero error on every computation.
Thus the standard advice is: if you are doing any computation where you expect exact arithmetic on fractions with powers of ten in the denominator, such as financial computations, use decimal, or do the computation entirely in integers, scaled appropriately. If you're doing computations that involve physical quantities, where there is no inherent "base" to the computations, then use double.
So why use integer over decimal or vice versa? Integer arithmetic can be smaller and faster; decimals take more time and space. But ultimately you should not worry about these small performance differences: pick the data type that most accurately reflects the mathematical domain you are working in, and use it.
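As a hedged sketch of the "scaled integers" option mentioned above: keep money in whole cents and only convert to a display format at the last minute (the values mirror the $100 / 10% example from earlier):
long fullPriceCents = 10000;                          // $100.00 held as cents
long discountCents = fullPriceCents * 10 / 100;       // 10% of the price, exact integer math
long finalCents = fullPriceCents - discountCents;     // 9000 cents
Console.WriteLine("The discounted price is ${0}.", finalCents / 100m);   // 9000 / 100m == 90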
You need to "convert to a suitable display format at the last minute":
string result = string.Format("{0:0.0}", average / 100.0);
average is an int, and 100 is an int.
When you do average / 100, you have an integer divided by an integer, which gives back an integer; since 3.95 is not an integer, the result is truncated to 3.
If you want to get a float or a double as your result, a double or float has to be involved in the arithmetic.
In this case, you can cast your result to a double, (double)average / 100, or divide by a double, average / 100.0.
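A minimal illustration of the difference (values taken from the question):
int average = (580 + 210) / 2;              // 395
Console.WriteLine(average / 100);           // 3    (int / int truncates)
Console.WriteLine(average / 100.0);         // 3.95 (one double operand forces double division)
Console.WriteLine((double)average / 100);   // 3.95 (same effect via an explicit cast)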
The only reason you want to stay away from doing too many arithmetic operations with floats/decimals until the last second is the same reason you don't just plug in values for variables in long physics equations at the start: you lose precision. This is a very important concept in Numerical Methods when you have to deal with floating-point representation of numbers, e.g. machine epsilon.
I think you chose the wrong option of the two given. Knowing Eric, if you had been clear that you were doing floating-point operations like rounding, averaging, etc., then he would not have suggested using integers. The reason not to use double is that a double cannot always represent a decimal value precisely (you cannot represent 1.1 exactly in a double).
If you want to use floating-point math but still maintain decimal accuracy up to 28 significant digits, then use decimal:
decimal grade1 = 580;
decimal grade2 = 210;
var average = (grade1 + grade2) / 2;                  // 395
average = Math.Round(average / 100);                  // 3.95 rounds to 4
string result = string.Format("{0:0.0}", average);    // "4.0"
Not that easy.
If the grades come in integer format then storing them as integers is OK.
But I would convert to decimal PRIOR to performing any math, to keep rounding errors out of the calculations, unless you specifically want integer math.
For financial calculations decimal is recommended. In fact the suffix is m for money.
decimal (C# Reference)
The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations. The approximate range and precision for the decimal type are shown in the following table.
Float and Double are floating-point types. You get a bigger range than decimal but less precision. Decimal has a large enough range for grades.
decimal grade3 = 5.80m;
decimal grade4 = 2.10m;
decimal average1 = (grade3 + grade4) / 2;
int grade1 = 580;
int grade2 = 210;
var average = (grade1 + grade2) / 2;
string result = string.Format("{0:0.0}", average / 100); // 3.0
Debug.WriteLine(result);
decimal avg = ((decimal)grade1 + (decimal)grade2) / 200m; // 3.95
Debug.WriteLine(avg);
Debug.WriteLine(string.Format("{0:0.0}", avg)); // 4.0

double or decimal when calculating percent

Based on this thread (decimal vs double!), decimal should always be used for money. What is the proper way to define a percentage, like TaxPercent? If it's a double, then for calculating amount * 8% (a double) you would have to cast it.
What's the proper way to define a percentage value (i.e., tax), and what would the calculation be?
Use the 'm' suffix to specify a literal as a decimal. So it must be 0.08m to ensure a double doesn't creep into the calculation.
decimal tax = amount * 0.08m;
You'll find a list of valid suffix characters in this post.
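Putting the answer together, a small sketch (the amount and the variable names are only illustrative):
decimal amount = 100.00m;
decimal taxPercent = 0.08m;                     // 8%, kept as a decimal thanks to the m suffix
decimal tax = amount * taxPercent;              // 8.0000
decimal total = Math.Round(amount + tax, 2);    // 108.00
Console.WriteLine(total);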

decimal issue when adding a flat fee

I have a small issue. I believe the code is doing exactly what it's supposed to be doing: I have a function into which I pass an amount, and I add a flat fee of 40 cents to it as a surcharge.
Below is how my current code is constructed
Double surcharge;
surcharge = 0.4 * moneyIn / 100;
If I pass 999.00m in as moneyIn, it returns 0.3996 when in fact it should return 0.4. I'm unsure what I need to do to make it 0.4.
You're not using decimal - you're using double. Use decimal everywhere (so moneyIn should be a decimal too). If you're actually using 999.00m for moneyIn, that would make it a decimal and your current code wouldn't even compile (as there are no implicit conversions between decimal and double).
Now your code doesn't actually talk about a flat fee of 40 cents - it's taking 0.4% of the original value. You should have something like:
decimal surcharge = 0.40m; // 40 cents
decimal total = moneyIn + surcharge;
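Wrapped in a small method inside whatever class holds your calculation, the fixed version might look roughly like this (the method name is made up for illustration):
static decimal AddSurcharge(decimal moneyIn)
{
    const decimal surcharge = 0.40m;   // a flat 40 cents, not 0.4% of the amount
    return moneyIn + surcharge;
}
// AddSurcharge(999.00m) returns 999.40m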

Issue with float datatype

I have a RadNumericTextBox with a monthly salary, as below.
txtMonthlySalary.Text=816177200
Now I need to calculate the annual salary and save it as a float variable in a SQL Server table. My table already exists and the annual salary field is of type float.
Actual calculation on Annual salary gives following result:
Annual salary = 816177200 * 12 = 9,794,126,400
But in the program,
float Fld_AnnualSalary = float.Parse(txtMonthlySalary.Text) * 12;
gives result as 9,794,127,000
Here the float data type seems to round the result, which is a big deviation from the expected value.
How can I handle this issue so that I can get the exact result of the multiplication, without rounding, and save it in the float field of the SQL Server table?
float and even double are not generally acceptable data types to work with real money values (as you just proved for yourself).
Please use Decimal in the code and corresponding type in SQL.
A float has a precision limited to about seven significant digits, and you are trying to do calculations on a nine-digit number, 816177200.
The answer by Alexei Levenkov contains the solution to your problem. The Decimal data type can hold at least 28 significant digits.
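A small sketch of the decimal version (assuming the textbox value parses cleanly; in production code you would use decimal.TryParse and map the value to a DECIMAL column on the SQL Server side):
decimal monthlySalary = decimal.Parse("816177200");
decimal annualSalary = monthlySalary * 12;     // 9794126400, exact
Console.WriteLine(annualSalary);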

Wondering why the book we're using is using doubles and decimals instead of just decimals

I'm wondering why the example in the book we're using mixes decimals and doubles, with a cast to decimal when necessary. Would it not be simpler to just make everything decimal and avoid the casts altogether?
Here's the main chunk of code that I'm curious about.
decimal amount;
decimal principal = 1000;
double rate = 0.05;
for (int year = 1; year <= 10; year++)
{
amount = principal * ((decimal) Math.Pow(1.0 + rate, year));
}
Are there any performance or accuracy issues that I'm overlooking?
There's no overload of Math.Pow that works with decimal - exponentiation on reals is inherently inaccurate, so it wouldn't make all that much sense for such an overload to exist.
Now since the only place where rate is used is in calculating the compound factor (the Pow call), it makes sense for it to be declared double - higher precision wouldn't help if it's going to just be 'cast away'.
The rest of the variables represent money, which is the use case for decimal (that's why the suffix for decimal literals is M). One could imagine another computation where a commission of $2.10 is added to amount; we don't want such calculations to become inaccurate all for the sake of type consistency in one pesky (inaccurate) compound-interest calculation.
Hence the inconsistency.
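If you really wanted to avoid double entirely, one hedged alternative is to compound by repeated decimal multiplication instead of calling Math.Pow; it stays exact for a modest number of periods, at the cost of a loop:
decimal principal = 1000m;
decimal rate = 0.05m;
decimal amount = principal;
for (int year = 1; year <= 10; year++)
{
    amount *= 1m + rate;                          // exact decimal arithmetic each step
    Console.WriteLine("{0,2}: {1:C}", year, amount);
}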
Decimal has a smaller range, but is more accurate than double. Decimal is what you should use to represent money in .Net. In the example you gave, amount and principal are monetary values. Rate is not. Thus, they are using the correct data types.
Here is the MSDN on Decimal: MSDN Decimal
You should read this topic
What it boils down to is: decimal is used for money/when you need precision in calculations, whereas double is more suited for scientific calculation because it performs better.
