This is what happens on my computer:
(double)(float)0.6
= 0.60000002384185791
(double)0.6f
= 0.60000002384185791
(double)(6/10f)
= 0.6
(double)(float)(6/10f)
= 0.6
6/10f is also a float, so how come it can be precisely 0.6?
In my mind (double)(6/10f) should also be 0.60000002384185791.
Can someone help explain this? Thanks!
First, it's important to bear in mind that 0.6 cannot be represented exactly as a float; the nearest double is much closer, although still not exact. (The inaccuracies of floating-point arithmetic are well documented; if it's not clear why 0.6 cannot be represented exactly as a float, try this link.)
The reason why you are seeing the above behaviour is down to the compiler: if you take a look at the compiled assembly in Reflector then what's going on here is a little clearer:
(UPDATE I've changed the code so that it doesn't use Console.WriteLine, as I realised that the compiler was choosing an overload for you, which confused the situation)
// As written in source
var j = (double)(float)0.6;
var k = (double)0.6f;
var l = (double)(6/10f);
var m = (double)(float)(6/10f);
// Code as seen by Reflector
double j = 0.60000002384185791;
double k = 0.60000002384185791;
double l = 0.6;
double m = 0.6;
Why the compiler chooses to compile in this particular way is beyond me (FYI, this is all with optimisations turned off).
Some interesting other cases:
// Code
var a = 0.6;
var b = (double)0.6;
var c = 0.6f;
var d = (float)0.6;
var e = 6 / 10;
var f = 6 / (10f);
var g = (float)(6 / 10);
var h = 6 / 10f;
var i = (double)6 / 10;
// Prints out 0.60000002384185791
double n = (float)0.6;
double o = f;
// As seen by Reflector
double a = 0.6;
double b = 0.6;
float c = 0.6f;
float d = 0.6f;
int e = 0;
float f = 0.6f;
float g = 0f;
float h = 0.6f;
double i = 0.6;
double n = 0.60000002384185791;
double o = f;
The compiler only seems to do the above trick in a couple of special cases; why it does this only when casting to a double is completely beyond me!
The rest of the time it seems to do some trickery so that floating-point arithmetic appears to work where in fact it normally wouldn't.
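A small sketch of my own (not part of the original test): if you use variables instead of literals, the compiler cannot fold the expression into a constant, and the run-time result is the float-derived value in both cases:
// Non-constant operands, so the arithmetic happens at run time
int six = 6;
float tenF = 10f;
double l2 = (double)(six / tenF);        // float division at run time, then widened to double
double m2 = (double)(float)(six / tenF); // same value, with an explicit float cast
Console.WriteLine(l2.ToString("G17"));   // 0.60000002384185791 on my machine
Console.WriteLine(m2.ToString("G17"));   // 0.60000002384185791 on my machine
That suggests the 0.6 results above come from the compiler evaluating the constant expressions itself, not from the runtime.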
It appears to be rounding the result. Are you displaying the result with the necessary digits of precision? You can use this C# class from Jon Skeet to get the exact numeric representation of the result printed out.
Note that ToString() will not always print all of the digits, nor will the Visual Studio debugger.
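If you don't want to pull in a separate class, the standard "G17" format specifier will also show enough digits to tell the values apart (my suggestion, not part of the original answer; the default ToString() output differs between .NET Framework and newer runtimes):
double d = (double)0.6f;
Console.WriteLine(d);                 // "0.600000023841858" or "0.6000000238418579", depending on the runtime
Console.WriteLine(d.ToString("G17")); // 0.60000002384185791 - 17 significant digits, enough to round-trip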
If I were a betting man, I'd say the difference is in where the coercion is happening. In the latter two examples (the ones with 6/10f), there are two literals that are both whole numbers (the integer 6 and the float 10.00000000...). The division appears to be happening after the coercion, at least in the compiler you're using. In the first two examples, you have a fractional float literal (0.6) which cannot be adequately expressed as a binary value within the mantissa of a float. Coercing that value to a double cannot repair the damage that was already done.
In the environments that are producing completely consistent results, the division is occurring before the coercion to double (the 6 will be coerced to a float for the division to match the 10, the division is carried out in float space, then the result is coerced to a double).
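A sketch of that difference (my own illustration, using variables so the compiler cannot pre-compute the result): converting to double before the division and after it give different answers:
float six = 6f;
float ten = 10f;
double divideThenWiden = (double)(six / ten); // divide in float first, then widen the already-imprecise result
double widenThenDivide = (double)six / ten;   // widen to double first, then divide in double precision
Console.WriteLine(divideThenWiden.ToString("G17")); // 0.60000002384185791 on my machine
Console.WriteLine(widenThenDivide.ToString("G17")); // 0.59999999999999998 (the closest double to 0.6)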
Is it safe to compare the result of Math.Round(double,int) with == and use it, for example, as the key of a HashSet<Double> or a GroupBy(d=>Math.Round(d,1))?
In other words, are there any doubles x and y for which the following assertion will fail?
double x = ...;
double y = ...;
double xRound = Math.Round(x, 1);
double yRound = Math.Round(y, 1);
Debug.Assert(xRound==yRound || Math.Abs(xRound-yRound)>=0.1);
Let's say that I would like to group a list of doubles:
List<double> values = ...;
List<double> keys = values.GroupBy(d=>Math.Round(d,1)).Select(kv=>kv.Key).ToList();
Is there a chance that I would get a key with the value 0.100000000 and another key with the value 0.09999999999?
(I tried reading the disassembled .NET Framework Math.cs source, but Round() eventually calls a native function.)
Generally, given the same starting value and the same operations, the same result would be obtained.
However, if you (for example) did:
double d1 = 10;
d1 /= 0.1;
double d2 = 25;
d2 /= 0.25;
then you may well find that d1 and d2 do not have the same value (because IEEE 754 can represent 0.25 exactly but not 0.1).
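Another quick sketch of the same kind of surprise (my example, not specific to Math.Round): two expressions that look like they denote the same value end up as different doubles:
double x2 = 0.1 + 0.2;
double y2 = 0.3;
Console.WriteLine(x2 == y2);           // False
Console.WriteLine(x2.ToString("G17")); // 0.30000000000000004
Console.WriteLine(y2.ToString("G17")); // 0.29999999999999999
If your keys come out of arithmetic like this rather than straight from Math.Round, two keys that print the same can still compare unequal.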
So, given that and the rather large number of issues people seem to have with floating point, I'd say your best bet would be to choose a different method of hashing.
I am using the Math.Floor method to find out how many times number b fits into number a. In this concrete example, the variables have these values:
double a = 1.2;
double b = 0.1;
double c = Math.Floor(a / b) * b;
This returns c = 11 instead of c = 12 as I thought it would. I guess it has something to do with rounding, but how can I get it to work properly? When I raise a to 1.21, it returns 12.
double and float are internally represented in a way that lacks precision for many numbers, which means that a may not be exactly 1.2 (but, say, 1.199999999...) and b not exactly 0.1.
If you want exact precision for quantities that must not have a margin of error, like money, use decimal.
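To see the imprecision directly, printing the intermediate quotient with the "G17" format specifier shows it is just under 12 (a sketch; the digits shown are what I get on my machine):
double a = 1.2;
double b = 0.1;
Console.WriteLine((a / b).ToString("G17")); // 11.999999999999998 - just below 12
Console.WriteLine(Math.Floor(a / b) * b);   // 1.1, because Floor(11.999...) is 11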
Have a look here for how Math.Floor works. Basically, it rounds down.
My guess would be that with doubles you don't really get 12 from the division (even though this seems to work: double d = a/b; // d = 12).
Solal pointed to the same thing, and gave good advice about decimal :)
Here's what happens with decimals:
decimal a = 1.2m;
decimal b = 0.1m;
decimal c = Math.Floor(a / b); // c = 12
The result of your code is 1.1.
I am assuming you want to get 1.2.
You need to use
double c = Math.Ceiling(a / b) * b;
Let's suppose dis.Text = 2 and prc.Text = 100. I am using the code below. The result should be
net_prc.Text = 98, but it's giving me -100. Can anybody tell me why, and how can I get the correct
discounted percentage?
private void net_prcTabChanged(object sender, EventArgs e)
{
int d;
int di;
int i;
d = Convert.ToInt32(dis.Text);
i = Convert.ToInt32(prc.Text);
di = -((d / 100) * i) + i;
net_prc.Text = di.ToString();
}
Try (d / 100.0) to force it to use floating point arithmetic
di = -((d / 100) * i) + i;
All values in this statement are integers. You are computing arithmetic that produces decimal places, so you need more precision than int gives you: either change your variables to double or float, or simply add a decimal point to one of the values in the expression. This will force the whole expression to be evaluated as double.
This is a process called arithmetic promotion: every operand in an expression is converted to the most precise type involved before the operation is carried out.
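A sketch of that suggestion with the example values from the question (dis.Text = 2, prc.Text = 100); note that di has to become a double, or the result has to be cast back to int, for this to compile:
int d = 2;    // dis.Text
int i = 100;  // prc.Text
// 100.0 is a double literal, so d / 100.0 is evaluated in floating point
// and the rest of the expression is promoted to double as well.
double di = -((d / 100.0) * i) + i;
Console.WriteLine(di); // 98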
The proper way to do it would be to change the datatype of di to float:
di = (d * 100) / i;
C# can seem to have an odd way of doing maths here: because your numbers are declared as integers, only integer math is done with them. You need to declare them as float or double from the start if anything in the calculation requires decimal places. This also handles cases like dis.Text = 1.5:
private void net_prcTabChanged(object sender, EventArgs e)
{
double d;
double di;
double i;
d = Convert.ToDouble(dis.Text);
i = Convert.ToDouble(prc.Text);
di = -((d * 100.0) / i ) + i;
net_prc.Text = di.ToString();
}
Your division, d / 100, is a division of integers, and it returns an integer, probably 0 (zero). This is certainly the case with your example d = 2.
Added: If you really want to do this with integers (rather than changing to decimal or double like many other answers recommend), consider changing the sub-expression
((d / 100) * i)
into
((d * i) / 100)
because you get better precision when the division is the last operation. With the numbers from your example, d = 2 and i = 100, the first sub-expression gives 0 * 100, or 0, while the changed sub-expression yields 200 / 100, which is 2. However, you will not get rounding to the nearest integer; you will get truncation (the fractional part is discarded no matter how close it is to 1).
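A sketch of the two orderings with the question's values (identifiers are mine):
int d = 2;    // discount percentage
int i = 100;  // price
int divideFirst = -((d / 100) * i) + i; // 2 / 100 is 0 in integer division, so this gives 100
int divideLast  = -((d * i) / 100) + i; // 200 / 100 is 2, so this gives 98
Console.WriteLine(divideFirst); // 100
Console.WriteLine(divideLast);  // 98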
I have this:
double result = 60 / 23;
In my program the result is 2, but the correct value is 2.608695652173913. Where is the problem?
60 and 23 are integer literals so you are doing integer division and then assigning to a double. The result of the integer division is 2.
Try
double result = 60.0 / 23.0;
Or equivalently
double result = 60d / 23d;
Where the d suffix informs the compiler that you meant to write a double literal.
You can use any of the following; all will give 2.60869565217391:
double result = 60 / 23d;
double result = 60d / 23;
double result = 60d/ 23d;
double result = 60.0 / 23.0;
But
double result = 60 / 23; // gives 2
Explanation:
If either operand is a double, the result will be a double.
EDIT:
Documentation
The evaluation of the expression is performed according to the following rules:
If one of the floating-point types is double, the expression evaluates to double (or bool in the case of relational or Boolean expressions).
If there is no double type in the expression, it evaluates to float (or bool in the case of relational or Boolean expressions).
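For example (a minimal sketch of those rules):
float f = 2f;
double d = 3.0;
var sum = f + d;       // one operand is a double, so the result is a double
var quotient = f / 4f; // no double anywhere, so the result stays a float
Console.WriteLine(sum.GetType());      // System.Double
Console.WriteLine(quotient.GetType()); // System.Single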
This will work:
double result = (double)60 / (double) 23;
Or equivalently
double result = (double)60 / 23;
(double) 60 / 23
Haven't used C# for a while, but you are dividing two integers, which as far as I remember makes the result an integer as well.
You can force your number literals to be doubles by adding the letter "d", like this:
double result = 60d / 23d;
double result = 60.0 / 23.0;
It is best practice to decorate numeric literals with the suffix for their intended type. This not only avoids the bug you are experiencing, but also makes the code more readable and maintainable.
double x = 100d;
float x = 100f;
decimal x = 100m;
Convert the dividend and divisor into double values, so that the result is a double:
double res = 60d / 23d;
To add to what has been said so far: 60/23 is an operation on two constants. The compiler recognizes this and pre-computes the answer. Since the operation is on two integers, the compiler uses an integer result. The integer operation 60/23 has a result of 2, so the compiler effectively creates the following code:
double result = 2;
As has been pointed out already, you need to tell the compiler not to use integers; changing one or both of the operands to a non-integer type will get the compiler to use a floating-point constant.
I was very surprised when I found out my code wasn't working, so I created a console application to see where the problem lies, and I was even more surprised when I saw that the code below returns 0:
static void Main(string[] args)
{
float test = 140 / 1058;
Console.WriteLine(test);
Console.ReadLine();
}
I'm trying to get the result as a percentage (meaning (140 / 1058) * 100) and put it in a progress bar in my application. The second value (1058) is actually of type ulong in my application, but that doesn't seem to be the problem.
The question is: where is the problem?
You are using integer arithmetic and then converting the result to a float. Use floating-point arithmetic instead:
float test = 140f / 1058f;
The problem is that you are dividing integers and not floats. Only the result is a float. Change the code to the following:
float test = 140f / 1058f;
EDIT
John mentioned that there is a variable of type ulong. If that's the case, then just use a cast operation:
ulong value = GetTheValue();
float test = 140f / ((float)value);
Note, there is a possible loss of precision here since you're going from ulong to float.
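Applied to the percentage described in the question, a sketch (names are mine) would look like this:
ulong total = 1058;                  // the ulong value from the real application
float percent = 140f / total * 100f; // the ulong is converted to float, so this is float arithmetic
Console.WriteLine(percent);          // roughly 13.23251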
This will work the way you expect ...
float test = (float)140 / (float)1058;
By the way, your code works fine for me (prints a 0.1323251 to the console).
The division being performed is integer division. Replace
float test = 140 / 1058;
with
float test = 140f / 1058;
to force floating-point division.
In general, if you have
int x;
int y;
and want to perform floating-point division then you must cast either x or y to a float as in
float f = ((float) x) / y;
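For instance, with the numbers from the question:
int x = 140;
int y = 1058;
float f = (float)x / y; // x is converted to float, so the division is done in floating point
Console.WriteLine(f);   // roughly 0.1323251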