I was very surprised when I found out my code wasn't working, so I created a console application to see where the problem lies, and I was even more surprised to see that the code below prints 0:
static void Main(string[] args)
{
    float test = 140 / 1058;
    Console.WriteLine(test);
    Console.ReadLine();
}
I'm trying to get the result as a percentage (meaning (140 / 1058) * 100) and put it in a progress bar in my application. The second value (1058) is actually of type ulong in my application, but that doesn't seem to be the problem.
The question is: where is the problem?
You are using integer arithmetic and then converting the result to a float. Use floating-point arithmetic instead:
float test = 140f / 1058f;
The problem is that you are dividing integers, not floats; only the result is a float. Change the code to the following:
float test = 140f / 1058f;
EDIT
John mentioned that there is a variable of type ulong. If that's the case, then just use a cast operation:
ulong value = GetTheValue();
float test = 140f / ((float)value);
Note that there is a possible loss of precision here, since you're going from ulong to float.
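To see what that loss can look like, here is a small sketch (the specific value is just an illustration): float has a 24-bit significand, so ulong values above 2^24 may not survive the conversion exactly.

using System;

class Program
{
    static void Main()
    {
        // 2^24 + 1 is the first integer a float cannot represent exactly.
        ulong big = 16777217UL;
        float f = big;  // implicit ulong -> float, rounds to 16777216

        Console.WriteLine(f == 16777216f); // True: the low bit was lost
    }
}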
This will work the way you expect ...
float test = (float)140 / (float)1058;
By the way, your code works fine for me (prints a 0.1323251 to the console).
The division being performed is integer division. Replace
float test = 140 / 1058;
with
float test = 140f / 1058;
to force floating-point division.
In general, if you have
int x;
int y;
and want to perform floating-point division then you must cast either x or y to a float as in
float f = ((float) x) / y;
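Applied to the progress-bar scenario from the question, a minimal sketch might look like this (the variable names completed and total are placeholders; the literal values come from the question):

using System;

class Program
{
    static void Main()
    {
        ulong completed = 140;   // current progress
        ulong total = 1058;      // the ulong total mentioned in the question

        // Cast before dividing so the division happens in floating point,
        // then scale to a percentage for the progress bar.
        float fraction = (float)completed / total;
        int percent = (int)(fraction * 100f);

        Console.WriteLine(percent); // 13
    }
}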
Calling ToString() on imprecise floats produces numbers a human would expect (e.g. 4.999999... gets printed as 5). However, when casting to int, the fractional part gets dropped, and the result is one less than the number a human would expect.
float num = 5f;
num *= 0.01f;
num *= 100f;
num.ToString().Dump(); // produces "5", as expected
((int)num).ToString().Dump(); // produces "4"
How do I cast the float to int, so that I get the human friendly value, that float.ToString() produces?
I'm using this dirty hack:
int ToInt(float value) {
    return (int)(value + Math.Sign(value) * 0.00001);
}
..but surely there must be a more elegant solution.
Edit: I'm well aware of the reasons why floats are truncated the way they are (4.999... to 4, etc.). The question is about casting to int while emulating the default behavior of System.Single.ToString().
To better illustrate the behavior I'm looking for:
-4.99f should be cast to -4
4.01f should be cast to 4
4.99f should be cast to 4
4.999999f should be cast to 4
4.9999993f should be cast to 5
This is the exact same behavior that ToString produces.
Try running this:
float almost5 = 4.9999993f;
Console.WriteLine(almost5); // "5"
Console.WriteLine((int)almost5); // "4"
Maybe you are looking for this:
Convert.ToInt32(float)
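For example, a quick sketch mirroring the snippet above; Convert.ToInt32 rounds to the nearest integer (midpoints go to the even number), whereas a cast truncates toward zero:

float almost5 = 4.9999993f;
Console.WriteLine((int)almost5);             // 4: the cast truncates
Console.WriteLine(Convert.ToInt32(almost5)); // 5: rounds to the nearest integer

Note that it also rounds 4.99f up to 5, so it matches the list in the question only for values that are already very close to an integer.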
0.05f * 100 is not exactly 5 due to floating-point rounding; it's actually the value 4.999998... when expressed as a float.
The answer is that in the case of (int)(.05f * 100), you are taking the float value 4.999998 and truncating it to an integer, which yields 4.
So use Math.Round. The Math.Round function rounds a float value to the nearest integer, and rounds midpoint values to the nearest even number.
float num = 5f;
num *= 0.01f;
num *= 100f;
Console.WriteLine(num.ToString());
Console.WriteLine(((int)Math.Round(num)).ToString());
So far the best solution seems to be the one from the question.
int ToInt(float value) {
    return (int)(value + Math.Sign(value) * 0.000001f);
}
This effectively snaps the value to the closest int if the difference is small enough (less than 0.000001). However, this function differs from ToString's behavior and is slightly more tolerant of imprecision.
Another solution, suggested by @chux, is to use ToString and parse the string back. Int32.Parse throws an exception when the number has a decimal point (or a comma), so you have to keep only the integer part of the string, and it may cause other trouble depending on your default CultureInfo.
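Here is a minimal sketch of that round-trip approach (the helper name ToIntViaString is made up, and it assumes the 7-significant-digit ToString formatting the question describes): parsing the text back as a double sidesteps the Int32.Parse problem, and pinning CultureInfo.InvariantCulture avoids the decimal-separator issue.

using System;
using System.Globalization;

class Program
{
    static int ToIntViaString(float value)
    {
        // Format with the invariant culture so the separator is always '.'.
        string text = value.ToString(CultureInfo.InvariantCulture);

        // Parse back as double (Int32.Parse would throw on "4.999999"),
        // then truncate toward zero as a plain cast would.
        return (int)double.Parse(text, CultureInfo.InvariantCulture);
    }

    static void Main()
    {
        Console.WriteLine(ToIntViaString(4.9999993f)); // 5: ToString already rounded it
        Console.WriteLine(ToIntViaString(4.999999f));  // 4
    }
}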
I've encountered a float calculation precision problem in C#; here is a minimal working example:
int num = 160;
float test = 1.3f;
float result = num * test;
int result_1 = (int)result;
int result_2 = (int)(num * test);
int result_3 = (int)(float)(num * test);
Console.WriteLine("{0} {1} {2} {3}", result, result_1, result_2, result_3);
The code above will output "208 208 207 208". Could someone explain the weird value of result_2, which I expected to be 208?
(Binary cannot represent 1.3 precisely, which causes the float precision problem, but I'm curious about the details.)
num * test will probably give you a result like 207.9999998..., and when you cast this float value to int you get 207, because casting to int truncates toward zero, in this case to 207 (for positive values this behaves like Math.Floor()).
If you assign num * test to a float, as in float result = num * test;, the value 207.9999998... will be rounded to the nearest float value, which is 208.
Let's summarize:
float result = num * test; gives you 208 because you are assigning num * test to a float type.
int result_1 = (int)result; gives you 208 because you are casting the value of result to int -> (int)208 .
int result_2 = (int)(num * test); gives you 207 because you are casting something like 207.9999998... to int -> (int)207.9999998....
int result_3 = (int)(float)(num * test); gives you 208 because you are first casting 207.9999998... to float which gives you 208 and then you are casting 208 to int.
You can also take a look at the C# language specification:
Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form x * y / z, where the multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a higher range format may cause a finite result to be produced instead of an infinity.
So basically, to answer your question: no, it shouldn't. You can use different types, e.g. decimal or other binary floating-point types. And if you are more interested in floating-point concepts and formats, you can read Jeffrey Sax's Floating Point in .NET part 1: Concepts and Formats.
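For instance, a quick sketch of the same computation done in decimal, which stores 1.3 exactly and so avoids the hidden 207.999... step:

using System;

class Program
{
    static void Main()
    {
        int num = 160;
        decimal test = 1.3m;   // decimal represents 1.3 exactly

        Console.WriteLine((int)(num * test)); // 208
    }
}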
I wonder why the following statement always returns 1, and how I can fix it. I accounted for integer division by casting the first element in the division to float, but apart from that I'm not getting much further.
int value = ...; // any int
float test = (float)value / int.MaxValue / 2 + 1;
By the way, my intention is to convert ANY integer to a float in the 0-1 range.
To rescale a number in the range s..e to 0..1, you do (value-s)/(e-s).
So in this case:
double d = ((double)value - int.MinValue) / ((double)int.MaxValue - int.MinValue);
float test = (float)d;
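A quick sanity check of that rescaling (a sketch; the helper name ToUnitRange is made up here):

using System;

class Program
{
    static float ToUnitRange(int value)
    {
        // (value - min) / (max - min), computed in double to keep precision
        double d = ((double)value - int.MinValue) /
                   ((double)int.MaxValue - int.MinValue);
        return (float)d;
    }

    static void Main()
    {
        Console.WriteLine(ToUnitRange(int.MinValue)); // 0
        Console.WriteLine(ToUnitRange(0));            // 0.5
        Console.WriteLine(ToUnitRange(int.MaxValue)); // 1
    }
}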
It doesn't always return 1. For example, this code:
int value = 12345678;
float test = (float)value / int.MaxValue / 2 + 1;
Console.WriteLine(test);
Prints 1.002874
The problem is that floats are not very precise, so for small values of value, the result will be 1 to the number of digits of precision that floats can display.
For example, value == 2300 will print 1, but value == 2400 will print 1.000001.
If you use double you get better results:
int value = 1;
double test = (double)value / int.MaxValue / 2 + 1;
Console.WriteLine(test);
Prints 1.00000000023283
Avoid implicit type conversions: make every element in your expression of type double, if that is the type you want. Convert int.MaxValue and 2 to double before using them in the division, so that no implicit conversions are involved.
Also, you might want to parenthesize your expression to make it more readable. As it is, it is error-prone.
Finally, if your expression is too complex and you don't know what's going on, split it into simpler expressions, all of them of type double.
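For example, a sketch of the question's expression with the conversions made explicit and the steps split up:

using System;

class Program
{
    static void Main()
    {
        int value = 12345678;

        // Every operand is a double before any arithmetic happens,
        // so no implicit integer conversions remain.
        double scaled = (double)value / (double)int.MaxValue / 2.0;
        double test = scaled + 1.0;

        Console.WriteLine(test); // about 1.0028745 (the float version printed 1.002874)
    }
}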
P.S.: By the way, trying to get precise results while using float instead of double is not a very wise thing to do. Use double for precise floating-point calculations.
My boss reported a bug to me today because my configuration value kept decreasing whenever he tried to specify a value below 5%. I know that I can just round my number before casting it to int to fix the problem, but I don't understand why it occurs.
I have an app.config file with the value "0.04" and a configuration section with a float property. When the section is read, the float value retrieved is 0.04, which is fine. I want to put this value in a Windows Forms TrackBar, which accepts an integer value, so I multiply my value by 100 and cast it to int. For some reason, the result is not 4, but 3. You can test it like this:
Console.WriteLine((int)(float.Parse("0.04", System.Globalization.CultureInfo.InvariantCulture) * 100)); // 3
What happened?
It's because 0.04 can't be exactly represented as a float - and neither can the result of multiplying it by 100. The result is very slightly less than 4, so the cast to int truncates it.
Basically, if you want to use numbers represented accurately in decimal, you should use the decimal type instead of float or double. See my articles on decimal floating point and binary floating point for more information.
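A sketch of the decimal-based version; the parsing is the same, but there is no binary rounding, so multiplying by 100 is exact:

using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        decimal value = decimal.Parse("0.04", CultureInfo.InvariantCulture);

        // 0.04m is stored exactly, so value * 100 is exactly 4.
        Console.WriteLine((int)(value * 100)); // 4
    }
}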
EDIT: There's something more interesting going on here, actually... in particular, if you assign the result to a local variable first, that changes the result:
using System;
using System.Globalization;

class Test
{
    static void Main()
    {
        // Assign first, then multiply and assign back, then print
        float f = Foo();
        f *= 100;
        Console.WriteLine((int) f); // Prints 4

        // Assign once, then multiply within the expression...
        f = Foo();
        Console.WriteLine((int) (f * 100)); // Prints 4

        Console.WriteLine((int) (Foo() * 100)); // Prints 3
    }

    // No need to do parsing here. We just need to get the results from a method
    static float Foo()
    {
        return 0.04f;
    }
}
I'm not sure exactly what's going on here, but the exact value of 0.04f is:
0.039999999105930328369140625
... so it does make sense for it not to print 4, potentially.
I can force the result of 3 if the multiplication by 100 is performed with double arithmetic instead of float:
f = Foo();
Console.WriteLine((int) ((double)f * 100)); // Prints 3
... but it's not clear to me why that's happening in the original version, given that float.Parse returns float, not double. At a guess, the result remains in registers and the subsequent multiplication is performed using double arithmetic (which is valid according to the spec) but it's certainly a surprising difference.
This happens because the float value is really more like 0.039999999999; you are therefore converting a value like 3.99999999999 to int, which yields 3.
You can solve the problem by rounding:
Console.WriteLine((int)Math.Round(float.Parse("0.04", System.Globalization.CultureInfo.InvariantCulture) * 100));
As a float, 0.04 * 100 might well be represented as 3.9999999999, and casting to int just truncates it; that is why you are seeing 3.
It's actually not 4 but 3.99999 followed by more digits. Do something like this:
(int)(float.Parse("0.04") * 100.0 + 0.5)
Casting to int acts like a floor operation (for positive values), and since the value is not exactly 4 it is truncated to 3.
I have this:
double result = 60 / 23;
In my program, the result is 2, but the correct value is 2.608695652173913. Where is the problem?
60 and 23 are integer literals so you are doing integer division and then assigning to a double. The result of the integer division is 2.
Try
double result = 60.0 / 23.0;
Or equivalently
double result = 60d / 23d;
Here the d suffix informs the compiler that you meant to write a double literal.
You can use any of the following; all will give 2.60869565217391:
double result = 60 / 23d;
double result = 60d / 23;
double result = 60d / 23d;
double result = 60.0 / 23.0;
But
double result = 60 / 23; // gives 2
Explanation:
if either operand is a double, the result will be a double
EDIT:
Documentation
The evaluation of the expression is performed according to the following rules:
If one of the floating-point types is double, the expression evaluates to double (or bool in the case of relational or Boolean expressions).
If there is no double type in the expression, it evaluates to float (or bool in the case of relational or Boolean expressions).
This will work:
double result = (double)60 / (double)23;
Or equivalently
double result = (double)60 / 23;
(double) 60 / 23
Haven't used C# for a while, but you are dividing two integers, which as far as I remember makes the result an integer as well.
You can force your number literals to be doubles by adding the letter "d", likes this:
double result = 60d / 23d;
double result = 60.0 / 23.0;
It is best practice to decorate numeric literals with their appropriate type suffix. This not only avoids the bug you are experiencing, but also makes the code more readable and maintainable.
double x = 100d;
float x = 100f;
decimal x = 100m;
Convert the dividend and divisor into double values, so that the result is a double:
double res = 60d / 23d;
To add to what has been said so far: 60/23 is an operation on two constants. The compiler recognizes the result as a constant and pre-computes the answer. Since the operation is on two integers, the compiler uses an integer result. The integer operation 60/23 has a result of 2, so the compiler effectively creates the following code:
double result = 2;
As has been pointed out already, you need to tell the compiler not to use integers; changing one or both of the operands to a non-integer type will get the compiler to use a floating-point constant.
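A sketch illustrating that constant folding, using the literals from the question (the comments describe what the compiler stores):

using System;

class Program
{
    static void Main()
    {
        // Folded at compile time using integer division: the constant 2.0 is stored.
        double truncated = 60 / 23;

        // With a double operand, the division is folded in floating point instead.
        double precise = 60d / 23;

        Console.WriteLine(truncated); // 2
        Console.WriteLine(precise);   // 2.60869565217391
    }
}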