A Stack Overflow user told me that I should not use float/double for things like student grades; see the last comments on: SequenceEqual() is not equal with custom class and float values
"Because it is often easier and safer to do all arithmetic in integers, and then convert to a suitable display format at the last minute, than it is to attempt to do arithmetic in floating point formats. "
I tried what he suggested, but the result is not satisfactory.
int grade1 = 580;
int grade2 = 210;
var average = (grade1 + grade2) / 2;
string result = string.Format("{0:0.0}", average / 100);
result is "3,0"
double grade3 = 5.80d;
double grade4 = 2.10d;
double average1 = (grade3 + grade4) / 2;
double averageFinal = Math.Round(average1);
string result1 = string.Format("{0:0.0}", averageFinal);
result1 is "4,0"
I would expect 4,0 because 3,95 should round to 4,0. That worked in the second case because I used Math.Round, which only works on a double or decimal; it would not work on an integer.
So what am I doing wrong here?
First of all, the specific problem you cite is one that vexes me greatly. You almost never want to do a calculation in integer arithmetic and then convert it to a floating-point type, because by then the computation has already been done entirely in integers. I wish the C# compiler warned about this one; I see it all the time.
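For illustration, here is a quick sketch of that pitfall using the question's values; the conversion to double happens only after all the arithmetic has already been done in integers:

int grade1 = 580;
int grade2 = 210;
double average = (grade1 + grade2) / 2 / 100;   // 790 / 2 = 395, then 395 / 100 = 3 - all integer math
Console.WriteLine(average);                      // prints 3, not 3.95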
Second, the reason to prefer integer or decimal arithmetic to double arithmetic is that a double can only represent with perfect accuracy a fraction whose denominator is a power of two. When you say 0.1 in a double, you don't get 1/10, because 1/10 is not a fraction whose denominator is any power of two. You get the fraction that is closest to 1/10 that does have a power of two in the denominator.
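A quick sketch of that nearest-representable-value effect:

Console.WriteLine(0.1.ToString("G17"));   // 0.10000000000000001 - the closest double to 1/10
Console.WriteLine(0.1m);                  // 0.1 - decimal represents tenths exactly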
This usually is "close enough", right up until it isn't. It is particularly nasty when you have tiny errors close to hard cutoffs. You want to say, for instance, that a student must have a 2.4 GPA in order to meet some condition, and the computations you do involving fractions with two in the denominator just happen to work out to 2.39999999999999999956...
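Here is a minimal sketch of that kind of cutoff failure; the cutoff of 1.0 is just for illustration, but the same thing can happen at 2.4:

double gpa = 0.0;
for (int i = 0; i < 10; i++)
    gpa += 0.1;                          // ten additions of "0.1"

Console.WriteLine(gpa.ToString("R"));    // 0.9999999999999999
Console.WriteLine(gpa >= 1.0);           // False - just misses the cutoff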
Now, you do not necessarily get away from these problems with decimal arithmetic; decimal arithmetic has the same restriction: it can only represent numbers that are fractions with powers of ten in the denominator. You try to represent 1/3, and you're going to get a small, but non-zero error on every computation.
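Decimal's version of the same problem, sketched quickly:

decimal third = 1m / 3m;
Console.WriteLine(third);        // 0.3333333333333333333333333333
Console.WriteLine(third * 3m);   // 0.9999999999999999999999999999 - not 1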
Thus the standard advice is: if you are doing computations where you expect exact arithmetic on fractions whose denominators are powers of ten, such as financial computations, use decimal, or do the computation entirely in integers, scaled appropriately. If you're doing computations that involve physical quantities, where there is no inherent "base" to the computations, use double.
So why use integer over decimal or vice versa? Integer arithmetic can be smaller and faster; decimals take more time and space. But ultimately you should not worry about these small performance differences: pick the data type that most accurately reflects the mathematical domain you are working in, and use it.
You need to "convert to a suitable display format at the last minute":
string result = string.Format("{0:0.0}", average / 100.0);
average is an int and 100 is an int. When you compute average / 100 you are dividing an integer by an integer, which gives back an integer; since 3.95 is not an integer, the result is truncated to 3.
If you want a float or double as your result, a double or float has to be involved in the arithmetic.
In this case, you can cast your value to a double, (double)average / 100, or divide by a double, average / 100.0.
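Both fixes, sketched with the question's values:

int average = (580 + 210) / 2;               // 395
double viaCast    = (double)average / 100;   // 3.95
double viaLiteral = average / 100.0;         // 3.95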
The only reason to avoid doing too much arithmetic with floats/decimals until the last moment is the same reason you don't just plug in values for the variables at the start of a long physics equation: you lose precision. This is a very important concept in numerical methods when you have to deal with floating-point representations of numbers, e.g. machine epsilon.
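As a rough sketch of machine epsilon for double (the classic halving loop; on typical runtimes this prints roughly 2.22E-16):

double eps = 1.0;
while (1.0 + eps / 2.0 != 1.0)
    eps /= 2.0;
Console.WriteLine(eps);   // ~2.22E-16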
I think you chose the wrong option of the two given. Knowing Eric, if you had been clear that you were doing floating-point operations like rounding, averaging, etc., then he would not have suggested using integers. The reason not to use double is that it cannot always represent a decimal value precisely (you cannot represent 1.1 exactly in a double).
If you want to use floating-point math but still maintain decimal accuracy up to 28 significant digits, then use decimal:
decimal grade1 = 580;
decimal grade2 = 210;
var average = (grade1 + grade2) / 2;                // 395
average = Math.Round(average / 100);                // 3.95 rounds to 4
string result = string.Format("{0:0.0}", average);  // "4.0"
Not that easy.
If the grades are in integer format, then storing them as integers is OK.
But I would convert to decimal PRIOR to performing any math, to remove rounding errors from the math, unless you specifically want integer math.
For financial calculations decimal is recommended. In fact the suffix is m for money.
decimal (C# Reference)
The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations.
Float and double are floating-point types. You get a bigger range than decimal, but less precision. Decimal has a large enough range for grades.
decimal grade3 = 5.80m;
decimal grade4 = 2.10m;
decimal average1 = (grade3 + grade4) / 2;   // 3.95
int grade1 = 580;
int grade2 = 210;
var average = (grade1 + grade2) / 2;
string result = string.Format("{0:0.0}", average / 100); // 3.0
Debug.WriteLine(result);
decimal avg = ((decimal)grade1 + (decimal)grade2) / 200m; // 3.95
Debug.WriteLine(avg);
Debug.WriteLine(string.Format("{0:0.0}", avg)); // 4.0
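To put the range-versus-precision trade-off in concrete terms (the exact output formatting can vary by runtime):

Console.WriteLine(float.MaxValue);     // ~3.402823E+38, about 7 significant digits
Console.WriteLine(double.MaxValue);    // ~1.7976931348623157E+308, about 15-16 significant digits
Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335 (~7.9E+28), 28-29 significant digits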
Based on this thread, decimal vs double!, decimal should always be used for money. What is the proper way to define a percentage, like TaxPercent? If it's a double, then for calculating amount * 8% (a double) you would have to cast it.
What's the proper way to define a percentage value (i.e. tax), and what would the calculation be?
Use the 'm' suffix to specify a literal as a decimal. So it must be 0.08m to ensure a double doesn't creep into the calculation.
decimal tax = amount * 0.08m;
You'll find a list of valid suffix characters in this post.
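A minimal sketch putting it together, keeping everything decimal so no double creeps in (the variable names are just for illustration):

decimal amount = 100.00m;
decimal taxPercent = 0.08m;            // 8%, written with the m suffix
decimal tax = amount * taxPercent;     // 8.0000
decimal total = amount + tax;          // 108.0000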
I have a small issue. I believe the code is doing exactly what it's supposed to do: I have a function where I pass in an amount and add a flat fee of 40 cents to it as a surcharge.
Below is how my current code is constructed
Double surcharge;
surcharge = 0.4 * moneyIn / 100;
If I pass in 999.00m as moneyIn, it returns 0.3996 when in fact it should return 0.4. I'm unsure what I need to do to make it 0.4.
You're not using decimal - you're using double. Use decimal everywhere (so moneyIn should be a decimal too). If you're actually using 999.00m for moneyIn, that would make it a decimal and your current code wouldn't even compile (as there are no implicit conversions between decimal and double).
Now your code doesn't actually talk about a flat fee of 40 cents - it's taking 0.4% of the original value. You should have something like:
decimal surcharge = 0.40m; // 40 cents
decimal total = moneyIn + surcharge;
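A minimal sketch of that flat-fee version wrapped in a method (the method name is just for illustration):

static decimal AddSurcharge(decimal moneyIn)
{
    const decimal surcharge = 0.40m;   // flat 40-cent fee
    return moneyIn + surcharge;
}

// AddSurcharge(999.00m) returns 999.40m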
I have a RadNumericTextBox with a monthly salary, as below.
txtMonthlySalary.Text = "816177200"
Now I need to calculate the annual salary and save it as a float variable in a SQL Server table. My table already exists, and the annual salary field is of type float.
Actual calculation on Annual salary gives following result:
Annual salary = 816177200 * 12 = 9,794,126,400
But in the program,
float Fld_AnnualSalary = float.Parse(txtMonthlySalary.Text) * 12;
gives result as 9,794,127,000
Here the float data type seems to round the result, which is a big variation from the expected result.
How can I handle this issue so that I get the exact result of the multiplication, without rounding, and save it in a float variable in the SQL Server table?
float and even double are generally not acceptable data types for working with real money values (as you just proved for yourself).
Please use decimal in the code and the corresponding type in SQL.
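A minimal sketch of what that could look like in code (txtMonthlySalary is the textbox from the question):

decimal monthlySalary = decimal.Parse(txtMonthlySalary.Text);   // 816177200
decimal annualSalary = monthlySalary * 12;                       // 9794126400, exact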
A float has a precision limited to about seven digits, and you are trying to do calculations on a nine-digit number 816177200.
The answer by Alexei Levenkov contains the solution to your problem. The decimal data type can hold at least 28 significant digits.
I'm wondering why the example in the book we're using mixes decimals and doubles, with a cast to decimal where necessary. Would it not be simpler to just make everything decimal and avoid the casts altogether?
Here's the main chunk of code that I'm curious about.
decimal amount;
decimal principal = 1000;
double rate = 0.05;
for (int year = 1; year <= 10; year++)
{
amount = principal * ((decimal) Math.Pow(1.0 + rate, year));   // compound amount after 'year' years
}
Are there any performance or accuracy issues that I'm overlooking?
There's no overload of Math.Pow that works with decimal - exponentiation on reals is inherently inaccurate, so it wouldn't make all that much sense for such an overload to exist.
Now since the only place where rate is used is in calculating the compound factor (the Pow call), it makes sense for it to be declared double - higher precision wouldn't help if it's going to just be 'cast away'.
The rest of the variables represent money, which is the use-case for decimal (that's why the suffix for decimal literals is M). One could imagine another computation where a commission of $2.10 was added to amount - we don't want such calculations to become inaccurate for the sake of type-consistency in one pesky (inaccurate) compound-interest calculation.
Hence the inconsistency.
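If you really did want everything in decimal, one alternative (just a sketch; the answer's point is that it isn't necessary) is to build the compound factor by repeated multiplication instead of calling Math.Pow:

decimal principal = 1000m;
decimal rate = 0.05m;
decimal factor = 1m;

for (int year = 1; year <= 10; year++)
{
    factor *= 1m + rate;                    // (1 + rate)^year, computed entirely in decimal
    decimal amount = principal * factor;
    Console.WriteLine("{0,2}: {1:F2}", year, amount);
}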
Decimal has a smaller range, but is more accurate than double. Decimal is what you should use to represent money in .NET. In the example you gave, amount and principal are monetary values; rate is not. Thus, they are using the correct data types.
Here is the MSDN on Decimal: MSDN Decimal
You should read this topic
What it boils down to is: decimal is used for money or when you need precision in calculations, whereas double is better suited to scientific calculations because it performs better.