Is it safe to compare the result of Math.Round(double, int) with == and use it, for example, as the key of a HashSet<double> or in a GroupBy(d => Math.Round(d, 1))?
In other words, are there any doubles x and y for which the following assertion will fail?
double x = ...;
double y = ...;
double xRound = Math.Round(x, 1);
double yRound = Math.Round(y, 1);
Debug.Assert(xRound==yRound || Math.Abs(xRound-yRound)>=0.1);
Let's say that I would like to group a list of doubles:
List<double> values = ...;
List<double> keys = values.GroupBy(d=>Math.Round(d,1)).Select(kv=>kv.Key).ToList();
Is there a chance that I would get a key with the value 0.100000000 and another key with the value 0.09999999999?
(I tried reading the disassembled .NET Framework Math.cs source, but Round() eventually calls a native function.)
Generally, given the same starting value and the same operations, the same result would be obtained.
However, if you (for example) did:
double d1 = 10;
d1 /= 0.1;
double d2 = 25;
d2 /= 0.25;
then you may well find that d1 and d2 do not have the same value (because IEEE 754 can represent 0.25 exactly but not 0.1).
So, given that and the rather large number of issues people seem to have with floating point, I'd say your best bet would be to choose a different method of hashing.
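For example, instead of grouping on a rounded double you could group on an exact integer number of tenths; a minimal sketch, assuming the usual System.Linq and System.Collections.Generic usings (the factor of 10 matches the Math.Round(d, 1) in the question):
// Sketch: use an exact integer key (tenths) instead of a rounded double.
// Two keys such as 0.1000000 and 0.0999999 can then never both appear,
// because the grouping key is an integer rather than a double.
List<double> values = new List<double> { 0.14, 0.16, 0.23, 0.97 };
var groups = values
    .GroupBy(d => (long)Math.Round(d * 10.0))
    .Select(g => new { Tenths = g.Key, Count = g.Count() })
    .ToList();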
I am using the Math.Floor method to find out how many times number b fits into number a. In this concrete example, the variables have these values:
double a = 1.2;
double b = 0.1;
double c = Math.Floor(a / b) * b;
This returns c = 11 instead of c = 12 as I thought it would. I guess it has something to do with rounding, but how can I get it to work properly? When I change number a to 1.21, it returns 12.
double and float are internally represented in a way that lacks precision for many numbers, which means that a may not be exactly 1.2 (but, say, 1.199999999...) and b not exactly 0.1.
If you want exact precision for quantities that must not have a margin of error, like money, use decimal.
Have a look here for how Math.Floor works; basically, it rounds down.
My guess would be that with doubles you don't really get 12 from the division (even though this seems to work: double d = a / b; // d appears to be 12).
Solal pointed to the same thing, and gave good advice about decimal :)
Here's what happens with decimals:
decimal a = 1.2m;
decimal b = 0.1m;
decimal c = Math.Floor(a / b); // c = 12
The result of your code is 1.1.
I am assuming you want to get 1.2.
You need to use
double c = Math.Ceiling(a / b) * b;
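For what it's worth, printing the intermediate quotient makes the cause visible; a quick sketch (the printed values are what I'd expect from typical IEEE 754 doubles):
double a = 1.2;
double b = 0.1;
Console.WriteLine((a / b).ToString("R"));   // 11.999999999999998, not exactly 12
Console.WriteLine(Math.Floor(a / b));       // 11
Console.WriteLine(Math.Floor(1.2m / 0.1m)); // 12, because decimal represents 1.2 and 0.1 exactly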
I'm converting degrees-and-minutes values to decimal degrees (longitude and latitude). How would I go about retrieving everything apart from the first two digits? From there I can convert values such as 5322.72233N and 00127.5333W to another format. I've looked at Math.Truncate (Best way to get whole number part of a Decimal number).
Let's calculate latitude 5322.72233
22.72233 / 60
+53
Final result is 53.3787055
Calculating the longitude with value 00127.5333 (along the lines of the following; the calculation hasn't been checked thoroughly yet):
27.5333 / 60
+ 1
* -1
Final result is approximately -1.4588883
I'm sure it's simple.
You could just use the % (modulus) operator to drop everything from the hundreds digit up, like this:
var input = 5322.72233m;
var output = input % 100m; // 22.72233
From that point the rest of the math should be pretty easy.
You can try to get the remainder as follows:
double a = 5322.72233;
double b = 100;
double c = a % b;
Next:
c = c / 60 + 53; // 53.3787055
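Putting the two answers together, a rough sketch of the whole conversion for the ddmm.mmmmm style values in the question (the method name and the sign handling for S/W are my own assumptions, so check them against your data):
// Hypothetical helper: value is in "degrees + minutes" form, e.g. 5322.72233 (53 degrees, 22.72233 minutes).
static double ToDecimalDegrees(double value, char hemisphere)
{
    double minutes = value % 100;             // ~22.72233
    double degrees = Math.Floor(value / 100); // 53
    double result = degrees + minutes / 60;   // ~53.3787055
    // South and West are conventionally negative.
    return (hemisphere == 'S' || hemisphere == 'W') ? -result : result;
}

// Usage:
// ToDecimalDegrees(5322.72233, 'N') -> approximately 53.3787055
// ToDecimalDegrees(00127.5333, 'W') -> approximately -1.4588883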
Suppose I have three doubles, a, b and c.
double a = 1.234560123;
double b = 7.890120123;
double c = a * b;
c = 9.740827669535655129
I want to work with numbers with only 5 decimal places. So if I round a and b using Math.Round(a, 5) and Math.Round(b, 5) I get:
double a_r = Math.Round(a, 5);
double b_r = Math.Round(b, 5);
a_r = 1.23456
b_r = 7.89012
double c_r = a_r * b_r;
c_r = 9.7408265472
But when I calculate c, I still get a number with more than 5 decimal places (this will happen with every multiplication, division, exponentiation and similar operation). I could round all the results in my code, but that's tedious work that I want to avoid.
As I use c in other operations, and the results of those operations in yet others, I don't want to have to round all the intermediate results every time just to avoid propagating the error caused by the unwanted decimal places.
Is there a way to define doubles with a fixed number of decimal places, independently of the operation?
Typically, it's best to leave the doubles as they are and use formatting to display the values to 5 decimal places:
double a = 1.234560123;
double b = 7.890120123;
double c = a * b;
Console.WriteLine("Result = {0:N5}", c);
Nearly all routines that convert numeric values into strings allow the use of the Standard Numeric Format Strings as well as Custom Numeric Format Strings.
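For example, both a standard ("N5") and a custom ("0.00000") format string give a 5-decimal display without changing the stored value; a small sketch:
double c = 1.234560123 * 7.890120123;
Console.WriteLine(c.ToString("N5"));      // standard numeric format: 9.74083
Console.WriteLine(c.ToString("0.00000")); // custom numeric format:   9.74083
Console.WriteLine(c.ToString("R"));       // the stored value itself is unchanged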
You can't define a double with a limited number of decimal places. You should rely on formatting the number when you display it. See this question
I found a way to solve my problem using operator overloading.
I re-defined all my operations (multiplication, division, complex multiplication and matrix operations) so that they round the result to the number of decimal places I wanted. Note that C# doesn't let you overload operators for the built-in double type itself (at least one parameter must be the containing type), so the overloads have to live on a custom type that wraps the double.
An example:
public readonly struct Rounded5
{
    public readonly double Value;
    public Rounded5(double value) => Value = Math.Round(value, 5);

    // Every arithmetic result is rounded back to 5 decimal places.
    public static Rounded5 operator *(Rounded5 d1, Rounded5 d2)
        => new Rounded5(d1.Value * d2.Value);
}
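Usage then looks something like this (Rounded5 is just the name I used in the sketch above):
var a = new Rounded5(1.234560123);  // stored as 1.23456
var b = new Rounded5(7.890120123);  // stored as 7.89012
var c = a * b;                      // 1.23456 * 7.89012 = 9.7408265472, rounded to 9.74083
Console.WriteLine(c.Value);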
I'm running NUnit tests to evaluate some known test data and calculated results. The numbers are floating point doubles so I don't expect them to be exactly equal, but I'm not sure how to treat them as equal for a given precision.
In NUnit we can compare with a fixed tolerance:
double expected = 0.389842845321551d;
double actual = 0.38984284532155145d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and that works fine for numbers below one, but as the numbers grow the tolerance really needs to change so that we always care about the same number of digits of precision.
Specifically, this test fails:
double expected = 1.95346834136148d;
double actual = 1.9534683413614817d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and of course larger numbers fail with that tolerance as well:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
What's the correct way to evaluate two floating point numbers are equal with a given precision? Is there a built-in way to do this in NUnit?
From MSDN:
By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
Let's assume 15, then.
So, we could say that we want the tolerance to be to the same degree.
How many precise figures do we have after the decimal point? We need to know the distance of the most significant digit from the decimal point, right? The magnitude. We can get this with a Log10.
Then we need to divide 1 by 10 ^ precision to get a value around the precision we want.
Now, you'll need to do more test cases than I have, but this seems to work:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
// Log10(100) = 2, so to get the magnitude we add 1.
int magnitude = 1 + (expected == 0.0 ? -1 : Convert.ToInt32(Math.Floor(Math.Log10(expected))));
int precision = 15 - magnitude;
double tolerance = 1.0 / Math.Pow(10, precision);
Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
It's late - there could be a gotcha in here. I tested it against your three sets of test data and each passed. Changing precision to be 16 - magnitude caused the test to fail. Setting it to 14 - magnitude obviously caused it to pass as the tolerance was greater.
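If you use this in more than one test, it might be worth wrapping it in a small helper; a sketch based on the code above (the method name is mine, and I added Math.Abs so negative expected values don't break the Log10):
static void AssertEqualWithinSignificantDigits(double expected, double actual, int significantDigits = 15)
{
    int magnitude = 1 + (expected == 0.0 ? -1 : Convert.ToInt32(Math.Floor(Math.Log10(Math.Abs(expected)))));
    double tolerance = 1.0 / Math.Pow(10, significantDigits - magnitude);
    Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
}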
This is what I came up with for The Floating-Point Guide (Java code, but should translate easily, and comes with a test suite, which you really really need):
public static boolean nearlyEqual(float a, float b, float epsilon)
{
final float absA = Math.abs(a);
final float absB = Math.abs(b);
final float diff = Math.abs(a - b);
if (a * b == 0) { // a or b or both are zero
// relative error is not meaningful here
return diff < (epsilon * epsilon);
} else { // use relative error
return diff / (absA + absB) < epsilon;
}
}
The really tricky question is what to do when one of the numbers to compare is zero. The best answer may be that such a comparison should always consider the domain meaning of the numbers being compared rather than trying to be universal.
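A rough C# translation of the same idea, for anyone who wants to drop it into a test project (a direct port of the Java above, so run it against a test suite yourself):
public static bool NearlyEqual(double a, double b, double epsilon)
{
    double absA = Math.Abs(a);
    double absB = Math.Abs(b);
    double diff = Math.Abs(a - b);

    if (a * b == 0)
    {
        // a or b or both are zero; relative error is not meaningful here
        return diff < (epsilon * epsilon);
    }

    // use relative error
    return diff / (absA + absB) < epsilon;
}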
How about converting each item to a string and comparing the strings?
string test1 = String.Format("{0:0.0##}", expected);
string test2 = String.Format("{0:0.0##}", actual);
Assert.AreEqual(test1, test2);
Assert.That(x, Is.EqualTo(y).Within(10).Percent);
is a decent option (changes it to a relative comparison, where x is required to be within 10% of y). You may want to add extra handling for 0, as otherwise you'll get an exact comparison in that case.
Update:
Another good option is
Assert.That(x, Is.EqualTo(y).Within(1).Ulps);
where Ulps means units in the last place. See https://docs.nunit.org/articles/nunit/writing-tests/constraints/EqualConstraint.html#comparing-floating-point-values.
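Combining the two, one way to handle the zero case mentioned above is to fall back to an absolute tolerance near zero; a sketch (the 1e-12 cutoff is an arbitrary choice for illustration):
if (Math.Abs(y) < 1e-12)
    Assert.That(x, Is.EqualTo(y).Within(1e-12));      // absolute tolerance near zero
else
    Assert.That(x, Is.EqualTo(y).Within(10).Percent); // relative comparison otherwise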
I don't know if there's a built-in way to do it with NUnit, but I would suggest multiplying each value by 10 raised to the precision you're seeking, storing the results as longs, and comparing the two longs to each other.
For example:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d;
// for a precision of 4
long lActual = (long)(10000 * actual);
long lExpected = (long)(10000 * expected);
if (lActual == lExpected)
{
    // Equal at the chosen precision; perform desired actions
}
This is a quick idea, but how about scaling them down until they are below one? It should be something like num / (10 ^ ceil(log10(num))); not too sure how well it would work, but it's an idea.
1632.4587642911599 / (10^ceil(log10(1632.4587642911599))) = 0.16324587642911599
How about:
const double significantFigures = 10;
Assert.AreEqual(Actual / Expected, 1.0, 1.0 / Math.Pow(10, significantFigures));
The difference between the two values should be less than either value divided by 10 raised to the desired precision.
Assert.Less(Math.Abs(firstValue - secondValue), firstValue / Math.Pow(10, precision));
In F# with FsUnit:
open FsUnit
actual |> should (equalWithin errorMargin) expected
I have this:
double result = 60 / 23;
In my program the result is 2, but the correct result is 2.608695652173913. Where is the problem?
60 and 23 are integer literals so you are doing integer division and then assigning to a double. The result of the integer division is 2.
Try
double result = 60.0 / 23.0;
Or equivalently
double result = 60d / 23d;
Here the d suffix informs the compiler that you meant to write a double literal.
You can use any of the following; all will give 2.60869565217391:
double result = 60 / 23d;
double result = 60d / 23;
double result = 60d / 23d;
double result = 60.0 / 23.0;
But
double result = 60 / 23; // gives 2
Explanation:
if either of the operands is a double, the result will be a double
EDIT:
Documentation
The evaluation of the expression is performed according to the following rules:
If one of the floating-point types is double, the expression evaluates to double (or bool in the case of relational or Boolean expressions).
If there is no double type in the expression, it evaluates to float (or bool in the case of relational or Boolean expressions).
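So, for example, mixing a float and a double promotes the whole expression to double (a quick illustration of the quoted rule):
float f = 60f;
double d = 23d;
var quotient = f / d;                  // the float operand is promoted, so quotient is a double
Console.WriteLine(quotient.GetType()); // System.Double
Console.WriteLine(quotient);           // 2.60869565...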
This will work:
double result = (double)60 / (double) 23;
Or equivalently
double result = (double)60 / 23;
(double) 60 / 23
Haven't used C# for a while, but you are dividing two integers, which as far as I remember makes the result an integer as well.
You can force your number literals to be doubles by adding the letter "d" (or by writing them with a decimal point), like this:
double result = 60d / 23d;
double result = 60.0 / 23.0;
It is best practice to correctly decorate numeric literals with their appropriate type suffix. This not only avoids the bug you are experiencing, but also makes the code more readable and maintainable.
double x = 100d;
float x = 100f;
decimal x = 100m;
Convert the dividend and divisor into double values so that the result is a double:
double res = 60d / 23d;
To add to what has been said so far... 60 / 23 is an operation on two constants. The compiler recognizes the result as a constant and pre-computes the answer. Since the operation is on two integers, the compiler uses an integer result. The integer operation 60 / 23 has a result of 2, so the compiler effectively creates the following code:
double result = 2;
As has been pointed out already, you need to tell the compiler not to use integers; changing one or both of the operands to a non-integer type will get the compiler to use a floating-point constant.
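A quick sketch of the difference (the compiler folds the all-integer expression to the constant 2, while any double operand, literal or variable, forces floating-point division):
double folded  = 60 / 23;    // integer division, folded to the constant 2 at compile time
double literal = 60.0 / 23;  // floating-point division: 2.60869565...
int n = 23;
double runtime = 60d / n;    // also floating-point division, performed at run time
Console.WriteLine($"{folded} {literal} {runtime}");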