I am attempting to do some math on two UInt64 values and store the result in a float:
UInt64 val64 = 18446744073709551615;
UInt64 val64_2 = 18446744073709551000;
float val = (float)val64 - val64_2;
Console.WriteLine(val64);
Console.WriteLine(val.ToString("f"));
Console.ReadKey();
I am expecting val to be 615.0, but instead I am getting 0.0!
Using double for val instead seems to work, but surely a float is capable of storing 615.0. What am I missing here?
It's not the result that is being truncated; it's the values used in the calculation. You are casting val64 to a float in your subtraction, which means val64_2 is also converted to a float (to match val64). Both have lost enough precision that they are the same value when represented as a float, and the difference is 0.
You want to keep them as UInt64 for the subtraction, and have the result as a float. i.e.
float val = (float)(val64 - val64_2);
A float is an approximation that can store only 6 or 7 significant digits (see https://msdn.microsoft.com/en-us/library/hd7199ke.aspx).
In your code both UInt64s end up as 1.84467441E+19 when cast to float.
As various people have already mentioned the solution is to keep the values as UInt64s for the subtraction.
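Putting the two approaches side by side makes the difference concrete (a minimal sketch of the fix):

```csharp
using System;

class Program
{
    static void Main()
    {
        UInt64 val64 = 18446744073709551615;   // UInt64.MaxValue
        UInt64 val64_2 = 18446744073709551000;

        // Wrong: both operands are converted to float first,
        // losing the low-order bits, so the difference is 0.
        float broken = (float)val64 - val64_2;

        // Right: subtract as UInt64 (exact), then convert the small result.
        float fixedVal = (float)(val64 - val64_2);

        Console.WriteLine(broken);   // 0
        Console.WriteLine(fixedVal); // 615
    }
}
```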
Related
Calling ToString() on imprecise floats produces numbers a human would expect (e.g. 4.999999... gets printed as 5). However, when casting to int, the fractional part gets dropped, and the result is an integer one less than expected.
float num = 5f;
num *= 0.01f;
num *= 100f;
num.ToString().Dump(); // produces "5", as expected
((int)num).ToString().Dump(); // produces "4"
How do I cast the float to int, so that I get the human friendly value, that float.ToString() produces?
I'm using this dirty hack:
int ToInt(float value) {
return (int)(value + Math.Sign(value) * 0.00001);
}
..but surely there must be a more elegant solution.
Edit: I'm well aware of the reasons why floats are truncated the way they are (4.999... to 4, etc.). The question is about casting to int while emulating the default behavior of System.Single.ToString().
To better illustrate the behavior I'm looking for:
-4.99f should be cast to -4
4.01f should be cast to 4
4.99f should be cast to 4
4.999999f should be cast to 4
4.9999993f should be cast to 5
This is the exact same behavior that ToString produces.
Try running this:
float almost5 = 4.9999993f;
Console.WriteLine(almost5); // "5"
Console.WriteLine((int)almost5); // "4"
Maybe you are looking for this:
Convert.ToInt32(float)
source
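For reference, Convert.ToInt32(float) rounds to the nearest integer (with midpoints going to the nearest even number) instead of truncating, which is why it matches the string output here. A quick sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        float almost5 = 4.9999993f;

        // A cast truncates the fractional part:
        Console.WriteLine((int)almost5);             // 4

        // Convert.ToInt32 rounds to the nearest integer:
        Console.WriteLine(Convert.ToInt32(almost5)); // 5

        // Midpoints round to even ("banker's rounding"):
        Console.WriteLine(Convert.ToInt32(4.5f));    // 4
        Console.WriteLine(Convert.ToInt32(5.5f));    // 6
    }
}
```

Note the midpoint behavior: if you want 4.5f to become 5, you need Math.Round with MidpointRounding.AwayFromZero instead.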
0.05f * 100 is not exactly 5 due to floating-point rounding (it's actually the value 4.999998... when expressed as a float).
The answer is that in the case of (int)(0.05f * 100), you are taking the float value 4.999998... and truncating it to an integer, which yields 4.
So use Math.Round. The Math.Round function rounds a value to the nearest integer, and rounds midpoint values to the nearest even number.
float num = 5f;
num *= 0.01f;
num *= 100f;
Console.WriteLine(num.ToString());
Console.WriteLine(((int)Math.Round(num)).ToString());
So far the best solution seems to be the one from the question.
int ToInt(float value) {
return (int)(value + Math.Sign(value) * 0.000001f);
}
This effectively snaps the value to the closest int, if the difference is small enough (less than 0.000001). However, this function differs from ToString's behavior and is slightly more tolerant to imprecisions.
Another solution, suggested by @chux, is to use ToString and parse the string back. Int32.Parse throws an exception when the number has a decimal point (or a comma), so you have to keep only the integer part of the string, and it may cause other trouble depending on your default CultureInfo.
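A sketch of that round-trip idea, using the invariant culture to sidestep the separator issue (the helper name ToIntViaString is mine; "G7" is float's traditional 7-significant-digit format, which reproduces the "human friendly" rounding):

```csharp
using System;
using System.Globalization;

class Program
{
    // Hypothetical helper: format with 7 significant digits, then parse
    // back as a double, so the integer part matches what ToString shows.
    static int ToIntViaString(float value)
    {
        string s = value.ToString("G7", CultureInfo.InvariantCulture);
        return (int)double.Parse(s, CultureInfo.InvariantCulture);
    }

    static void Main()
    {
        Console.WriteLine(ToIntViaString(4.9999993f)); // 5
        Console.WriteLine(ToIntViaString(4.99f));      // 4
        Console.WriteLine(ToIntViaString(-4.99f));     // -4
    }
}
```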
I've encountered a float calculation precision problem in C#; here is a minimal working example:
int num = 160;
float test = 1.3f;
float result = num * test;
int result_1 = (int)result;
int result_2 = (int)(num * test);
int result_3 = (int)(float)(num * test);
Console.WriteLine("{0} {1} {2} {3}", result, result_1, result_2, result_3);
The code above will output "208 208 207 208". Could someone explain the weird value of result_2, which should be 208?
(Binary cannot represent 1.3 precisely, which causes the float precision problem, but I'm curious about the details.)
num * test will probably give you a result like 207.9999998..., and when you cast this value to int you get 207, because casting to int truncates the fractional part (toward zero, like Math.Truncate(); for positive values this matches Math.Floor()).
If you assign num * test to a float, as in float result = num * test;, the value 207.9999998... will be rounded to the nearest float value, which is 208.
Let's summarize:
float result = num * test; gives you 208 because you are assigning num * test to a float type.
int result_1 = (int)result; gives you 208 because you are casting the value of result to int -> (int)208.
int result_2 = (int)(num * test); gives you 207 because you are casting something like 207.9999998... to int -> (int)207.9999998....
int result_3 = (int)(float)(num * test); gives you 208 because you are first casting 207.9999998... to float which gives you 208 and then you are casting 208 to int.
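If you want the integer result to be stable regardless of which expression form you use, round explicitly instead of relying on an intermediate float conversion. A sketch (Math.Round takes a double; on newer runtimes MathF.Round offers a float overload):

```csharp
using System;

class Program
{
    static void Main()
    {
        int num = 160;
        float test = 1.3f;

        // Truncating the raw product may yield 207, depending on the
        // intermediate precision the runtime uses:
        Console.WriteLine((int)(num * test));

        // Rounding explicitly gives 208 either way, since both
        // 207.9999998... and 208 round to 208.
        Console.WriteLine((int)Math.Round(num * test)); // 208
    }
}
```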
You can also take a look at C# language specification:
Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form x * y / z, where the multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a higher range format may cause a finite result to be produced instead of an infinity.
So basically, to answer your question: no, result_2 need not be 208. You can use different types, i.e. decimal, or binary floating point, etc. And if you are more interested in floating-point concepts and formats, you can read Jeffrey Sax's Floating Point in .NET part 1: Concepts and Formats.
I wonder why the following statement always returns 1, and how I can fix it. I accounted for integer division by casting the first element in the division to float, but apart from that I'm not getting much further.
int value = any int;
float test = (float)value / int.MaxValue / 2 + 1;
By the way, my intention is to convert ANY integer to a float in the range 0 to 1.
To rescale a number in the range s..e to 0..1, you do (value-s)/(e-s).
So in this case:
double d = ((double)value - int.MinValue) / ((double)int.MaxValue - int.MinValue);
float test = (float)d;
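Spot-checking the endpoints of that mapping (the Rescale helper name is mine):

```csharp
using System;

class Program
{
    // Map int.MinValue..int.MaxValue onto 0..1 via (value - s) / (e - s).
    static double Rescale(int value)
    {
        return ((double)value - int.MinValue) / ((double)int.MaxValue - int.MinValue);
    }

    static void Main()
    {
        Console.WriteLine(Rescale(int.MinValue)); // 0
        Console.WriteLine(Rescale(0));            // approximately 0.5
        Console.WriteLine(Rescale(int.MaxValue)); // 1
    }
}
```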
It doesn't always return 1. For example, this code:
int value = 12345678;
float test = (float)value / int.MaxValue / 2 + 1;
Console.WriteLine(test);
Prints 1.002874
The problem is that floats are not very precise, so for small values of value, the result will be 1 to the number of digits of precision that floats can handle.
For example, value == 2300 will print 1, but value == 2400 will print 1.000001.
If you use double you get better results:
int value = 1;
double test = (double)value / int.MaxValue / 2 + 1;
Console.WriteLine(test);
Prints 1.00000000023283
Avoid implicit type conversions. Make every element in your expression of type double, if that is the type you want. Convert int.MaxValue and the 2 to double before using them in the division, so that no implicit conversions are involved.
Also, you might want to parenthesize your expression to make it more readable. As it is, it is error prone.
Finally, if your expression is too complex and you don't know what's going on, split it into simpler expressions, all of them of type double.
P.S.: By the way, trying to get precise results while using float instead of double is not a very wise thing to do. Use double when you need precision in floating-point calculations.
I have the following code :
double a = 8 / 3;
Response.Write(a);
It returns the value 2. Why? I need at least one decimal digit. Something like 2.6, or 2.66. How can I get such results?
Try
double a = 8/3.0d;
or
double a = 8.0d/3;
to get a precise answer.
Since in the expression a = 8 / 3 both operands are int, the result is int, irrespective of the fact that it is being stored in a double. The result always takes the larger of the two operand types.
EDIT
To answer the follow-up comment "8 and 3 are taken from variables. Can I do a sort of cast?": in case the values are coming from variables, you can cast one of the operands to double, like:
int b = 8;
int c = 3;
double a = ((double) b) /c;
Because the calculation is being done in integer arithmetic, not double. To make it double, use:
double a = 8d/ 3d;
Response.Write(a);
Or
double a = 8.0/ 3.0;
Response.Write(a);
One of your operands should be explicitly marked as double, either by using the d suffix or by specifying a decimal point (e.g. 3.0).
Or, if you need to, you can cast the operands to double before the calculation. You can cast either one or both operands to double.
double a = ((double)8) / ((double)3);
Because 8 and 3 are integer numbers, integer division is performed and the quotient is truncated to 2.
You can simply tell the compiler that your numbers are floating-point numbers:
double a = (double)8 / 3;
Because integer division truncates the fractional part; that is how the / operator is defined for integral types in the framework. However, if you make one of the operands floating-point, as in:
double a = 8 / 3.0d;
then real division is performed and nothing is truncated.
Or, in simple terms: you assigned the result of an integer operation to a double, and the truncation happened before the assignment, because the compiler saw an operation on two integers.
Because 8 and 3 are both ints, and the division operator applied to two ints returns an int as well (press F12 with the cursor on the slash sign to see the operator's signature).
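You can confirm the operator's result type directly; a quick sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        var q = 8 / 3;                  // int / int: integer division, truncates
        Console.WriteLine(q.GetType()); // System.Int32
        Console.WriteLine(q);           // 2

        var d = 8 / 3.0;                // one double operand: real division
        Console.WriteLine(d.GetType()); // System.Double
    }
}
```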
I was very surprised when I found out my code wasn't working, so I created a console application to see where the problem lies, and I was even more surprised when I saw that the code below returns 0:
static void Main(string[] args)
{
float test = 140 / 1058;
Console.WriteLine(test);
Console.ReadLine();
}
I'm trying to get the result as a percentage to put in a progress bar in my application (meaning (140 / 1058) * 100). The second value (1058) is actually of type ulong in my application, but that doesn't seem to be the problem.
The question is: where is the problem?
You are using integer arithmetic and then converting the result to a float. Use floating-point arithmetic instead:
float test = 140f / 1058f;
The problem is that you are dividing integers and not floats. Only the result is a float. Change the code to be the following
float test = 140f / 1058f;
EDIT
John mentioned that there is a variable of type ulong. If that's the case, then just use a cast operation:
ulong value = GetTheValue();
float test = 140f / ((float)value);
Note, there is a possible loss of precision here since you're going from ulong to float.
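That precision loss only matters for large values: float has a 24-bit significand (roughly 7 significant decimal digits), so distinct large ulongs can collapse to the same float. A sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        ulong a = 10000000000000000001UL;
        ulong b = 10000000000000000002UL;

        // Both values are far beyond float's precision,
        // so they convert to the same float.
        Console.WriteLine((float)a == (float)b); // True

        // As exact integer types they remain distinct.
        Console.WriteLine(a == b);               // False
    }
}
```

For progress-bar fractions like 140 / 1058, though, the loss is harmless.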
This will work the way you expect ...
float test = (float)140 / (float)1058;
By the way, with that change the code works fine for me (prints 0.1323251 to the console).
The division being performed is integer division. Replace
float test = 140 / 1058;
with
float test = 140f / 1058;
to force floating-point division.
In general, if you have
int x;
int y;
and want to perform floating-point division then you must cast either x or y to a float as in
float f = ((float) x) / y;
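Applied to the original progress-bar goal ((140 / 1058) * 100), with the total kept as a ulong as in the asker's application:

```csharp
using System;

class Program
{
    static void Main()
    {
        int done = 140;
        ulong total = 1058;

        // Cast before dividing so floating-point division is performed.
        float fraction = done / (float)total;
        float percent = fraction * 100f;

        Console.WriteLine(fraction); // approximately 0.1323251
        Console.WriteLine(percent);  // approximately 13.23251
    }
}
```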