This question already has answers here:
How does modulus operation works with float data type?
(2 answers)
Closed 7 years ago.
Newbie in C#, trying to work out a simple calculation.
float oldX = 300f;
float distance = 300f;
float pitch = 0.8f;
int sign = 1;
float newX = oldX - (sign * (distance % pitch) * 0.5f);
The value the program produces for newX is 299.6, which I don't understand.
The value of (distance % pitch) is 0.7999955. If you calculate it by hand, 300 modulo 0.8 is 0. I'm guessing the modulo operator behaves differently for float values, but I don't know how. Or is 300 being calculated as a percentage of 0.8?
Explanation on this will be much appreciated.
Never expect binary floating-point calculations (using float or double) on decimal values such as 0.8 to give exact results.
The problem is that most decimal numbers cannot be represented exactly in a binary floating-point format. Think of it as trying to write 1/3 in decimal notation: it's not possible.
In your case, 0.8f is not really 0.8. If you cast the value to double to get some extra precision, you will see it's closer to 0.800000011920929:
float pitch = 0.8f;
Console.WriteLine((double)pitch); // prints 0.800000011920929
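To see where the 0.7999955 comes from, here's a minimal sketch (variable names follow the question; the exact digits printed may vary with runtime formatting):
float distance = 300f;
float pitch = 0.8f;

// pitch is actually stored as ~0.800000011920929, so 300 / pitch is
// just under 375. The truncated quotient is 374, which leaves a
// remainder of almost one full "pitch" rather than 0.
Console.WriteLine(distance % pitch);            // 0.7999955
Console.WriteLine(300.0 - 374 * (double)pitch); // 0.799995541572571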
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
AFAIK, .NET's default rounding mode is round-to-even (banker's rounding), so Math.Round(1.225, 2) should give 1.22, but it gives 1.23.
Math.Round(2.225,2) = 2.22
Math.Round(100.225,2) = 100.22
All the values I tried round to the nearest even digit; only 1.225 and -1.225 round to 1.23 and -1.23.
The main problem is that in float and double, the number of decimal places is not part of the value, and the precision is binary rather than decimal. And there is no finite binary number that can represent 1.225 exactly.
So when you do Math.Round(1.225f, 2), you're actually doing something more like Math.Round(1.22500002f, 2) - there's no midpoint rounding involved.
The same problem appears with Math.Round(2.225f, 2) - it's just that the "real" value is slightly smaller than 2.225f, so the result rounds down. But there's still no midpoint rounding involved.
If you need decimal precision, use decimal. Neither float nor double are designed for decimal precision - they're fine for e.g. physics calculations, but not for e.g. accounting.
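A short sketch of the difference, assuming the default MidpointRounding.ToEven:
// decimal stores 1.225 exactly, so this is a true midpoint and
// banker's rounding sends it to the even digit.
Console.WriteLine(Math.Round(1.225m, 2)); // 1.22

// The double nearest to 1.225 is slightly above it, so there is no
// midpoint here; it simply rounds up.
Console.WriteLine(Math.Round(1.225, 2));  // 1.23

// The double nearest to 2.225 is slightly below it, so it rounds down.
Console.WriteLine(Math.Round(2.225, 2));  // 2.22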
1.225 can't be represented exactly in floating point, so you're really rounding 1.225000023841858.
This question already has answers here:
Why does integer division in C# return an integer and not a float?
(8 answers)
Closed 7 years ago.
I am creating a WinForms quiz app, and I am trying to work out the overall percentage of correct answers; however, I always end up with 0%. Can anyone tell me why?
class calculatePercentage
{
    public static int totalPercentage;

    public static void calculate()
    {
        totalPercentage = (Program.totalScore / 45 * 100);
    }
}
I have tried using int, decimal, double, and float, but I am still getting the same result.
Program.totalScore is the number of correct answers overall, and 45 is the total number of questions.
e.g. (30 / 45) * 100 = 67% rounded.
When I do this calculation on a calculator it works out correctly, but not in code.
Thanks in advance.
Integer calculations always drop the fractional part (truncating toward zero), so Program.totalScore / 45 comes out to zero for any score below 45, and multiplying that zero by 100 still gives you zero. You could do the multiplication first:
totalPercentage = (100 * Program.totalScore) / 45;
... but be careful because this will always round down. If you want more accuracy, you'll need to use doubles and Math.Round().
totalPercentage = (int) Math.Round((100.0 * Program.totalScore) / 45);
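Putting the variants side by side with the 30-out-of-45 example from the question:
int totalScore = 30;

// Division first: 30 / 45 is 0 in integer arithmetic, so the whole
// expression collapses to 0.
Console.WriteLine(totalScore / 45 * 100);                    // 0

// Multiplication first: 100 * 30 = 3000, and 3000 / 45 = 66 (truncated).
Console.WriteLine(100 * totalScore / 45);                    // 66

// Floating-point division plus Math.Round gives the nearest percentage.
Console.WriteLine((int)Math.Round(100.0 * totalScore / 45)); // 67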
This question already has answers here:
Implicitly converting int to double
(8 answers)
Closed 7 years ago.
Maybe I'm asking a silly question, but I don't understand why this doesn't give me the output I expect (i.e. 2.5):
double x = 5/2;
Console.WriteLine(x.ToString());
.Net Fiddle
5 / 2 performs integer division no matter which type you assign the result to; the fractional part is always discarded.
You need to use floating-point division instead.
double x = 5.0 / 2;
double x = 5 / 2.0;
double x = 5.0 / 2.0;
From the / Operator documentation:
When you divide two integers, the result is always an integer. For example, the result of 7 / 3 is 2.
From the C# Specification, §7.7.2 (Division operator), there are 3 types of division:
Integer division
Floating-point division
Decimal division
And from the relevant part on integer division:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
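A quick sketch of that truncation behavior, including the negative case:
Console.WriteLine(5 / 2);   // 2   (integer division truncates)
Console.WriteLine(-5 / 2);  // -2  (rounds toward zero, not down)
Console.WriteLine(5 / 2.0); // 2.5 (one double operand is enough to
                            //      force floating-point division)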
This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
Why is floating point arithmetic in C# imprecise?
Console.WriteLine(0.5f * 2f); // 1
Console.WriteLine(0.5f * 2f - 1f); // 0
Console.WriteLine(0.1f * 10f); // 1
Console.WriteLine(0.1f * 10f - 1f); // 1.490116E-08
Why does 0.1f * 10f - 1f end up being 1.490116E-08 (0.00000001490116)?
See Wiki: Floating Point. float, double, and decimal are relative-precision types: not all values (of which there are infinitely many) can be stored exactly, and you are seeing the result of that loss of accuracy. This is why it is almost always correct to use |a - b| < small_delta for floating-point comparisons.
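A small sketch of that comparison idiom; the tolerance 1e-6f is an arbitrary choice for this illustration:
// 0.1f is really ~0.100000001490116, so ten additions of it drift
// away from 1 in the last bits.
float sum = 0f;
for (int i = 0; i < 10; i++) sum += 0.1f;

Console.WriteLine(sum == 1f);                  // False (sum is 1.0000001)
Console.WriteLine(Math.Abs(sum - 1f) < 1e-6f); // True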
Because floating-point operations are not precise. Take a look here:
http://en.wikipedia.org/wiki/Floating_point
in particular the section "Some other computer representations for non-integral numbers". 0.1 cannot be represented finitely in base 2.
Take a look at What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Read this (it explains exactly this case): http://www.exploringbinary.com/the-answer-is-one-unless-you-use-floating-point/
Easy: approximations accumulate.
0.5 is exactly representable in binary floating point, which is why 0.5 * 2 is exactly 1.
However, 0.1 is not exactly representable, hence 0.1 * 10 is not exactly 1.
This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
What's wrong with this division?
If you divide 2 / 3, it should return 0.66666666666666667. Instead, I get 0.0 with the double value type and 0 with decimal.
My goal is to divide (e.g. 2 / 3) and always round to the nearest whole number, so 2 / 3 should come out as 1.
Any help?
You're doing integer division, from the sounds of it. Try this:
double result = 2.0 / 3.0;
Or force the whole calculation into decimal:
decimal result = 2.0m / 3.0m;
This should give you a result more like you expect.
Doing 2 / 3 is integer division, which discards the fractional part of the result. To get 0.666666667 you need to write 2.0 / 3.0, where both operands are doubles.
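To get the round-to-nearest result the question asks for, a minimal sketch:
double quotient = 2.0 / 3.0;
Console.WriteLine(quotient);             // 0.666666666666667 (digits vary by runtime)
Console.WriteLine(Math.Round(quotient)); // 1, the nearest whole number

// decimal division also keeps the fraction, at decimal precision.
Console.WriteLine(2m / 3m);              // 0.6666666666666666666666666667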