Specific if condition evaluates incorrectly - C#

Issue
In Unity, I have to calculate the angle of a spawning particle system according to the location where a different object hit it.
During this calculation I identify whether the hit occurred on the bottom half of the impacted entity.
This evaluation fails and produces an incorrect result.
I debugged this (using VS2017 and the latest Unity 2018 release) and found that when I put a 'watch' on the relevant expression, it evaluates to true, while when running the code itself it evaluates to false.
How I Tried to Resolve it
Initially, when facing the issue, I managed to work around it by altering the expression inside the condition, but that no longer helps.
I have tried to pull the expression out into its own variable, but the result is still inconsistent.
In the debugger it would, on rare occasions, evaluate correctly, while at all other times (including when not using the debugger) it evaluates incorrectly.
The Relevant Code
SpriteRenderer Renderer = GetComponent<SpriteRenderer>();
Bounds Bounds = Renderer.bounds;
Vector3 Center = Bounds.center;
Vector3 HalfSize = Bounds.extents;

if (HitPosition.y != int.MinValue)
{
    if (HitPosition.y < Center.y + HalfSize.y && HitPosition.y > Center.y - HalfSize.y)
    {
        HitPosition.y = Center.y + (HalfSize.y * (Center.y < 0 ? 1 : -1));
    }
}
else
{
    HitPosition.y = Random.Range(Center.y - (HalfSize.y * 0.2f), Center.y + (HalfSize.y * 0.9f));
    HitPosition.x += (HitPosition.x < Center.x ? -1 : 1) * HitPosition.x / 100;
}

bool HitOnRight = HitPosition.x >= Center.x + HalfSize.x;
bool HitOnLeft = HitPosition.x <= Center.x - HalfSize.x;
float Angle = 0f;

if (Center.y - HalfSize.y - HitPosition.y >= 0) // <--- Issue
{
    Angle = HitOnLeft ? 135f : (HitOnRight ? -135f : 180f);
}
else
{
    Angle = HitOnLeft ? 90f : (HitOnRight ? -90f : 0f);
}
Result Expectations
Since this is a somewhat generic method that is used across multiple entities in the scene, I expect it to evaluate differently across those entities.
In the cases in which the evaluation issue occurs, I expect it to evaluate to true; in all other occurrences it should (and does) evaluate to false.

Following @McAden's answer and another fellow who removed his answer (or had it removed), the issue was indeed in the calculation itself.
I'll explain the reason for the issue, in case anyone comes upon this in the future.
Unity, when printing output using the Debug.LogXXX functions, rounds the displayed values to 2 decimal places.
I tried to find official documentation, but it appears this comes from C# itself.
float (for those who come from a Java background) is C#'s keyword for the Single type, and values are printed using the ToString() method, which here narrows the displayed result to 2 decimal places.
This can cause confusion when performing a standard Debug.LogFormat print, because it will display 2 decimal places, and in cases such as mine the difference is so small that the displayed result is rounded to 0.
After deeper debugging and diving into the actual value of the difference, I arrived at the following values:
Center Y Value: 5.114532
Half Y Value: 0.64
Hit Y Value: 4.474533
Difference Result: -0.0000001192
This is a less-than-perfect example, because here you can at least see a -0.0000001 difference; there were many instances until now that resulted in what appeared to be an absolute 0.
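To see the full stored value instead of the rounded display, you can format the number explicitly. A minimal sketch (plain .NET formatting outside Unity, using the values listed above):

```csharp
using System;

float center = 5.114532f;
float half   = 0.64f;
float hit    = 4.474533f;

// The difference is tiny but may not be exactly zero.
float difference = center - half - hit;

// Two decimal places hides the difference entirely...
Console.WriteLine(difference.ToString("F2"));

// ...while the round-trip format prints enough digits to
// reconstruct the exact stored value.
Console.WriteLine(difference.ToString("R"));
```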
In order to perform a proper comparison, we need to follow C# official documentation, which dictates:
In cases where a loss of precision is likely to affect the result of a comparison, you can use the following techniques instead of calling the Equals or CompareTo method:
Call the Math.Round method to ensure that both values have the same precision. The following example modifies a previous example to use this approach so that two fractional values are equivalent.
using System;

public class Example
{
    public static void Main()
    {
        float value1 = .3333333f;
        float value2 = 1.0f / 3;
        int precision = 7;
        value1 = (float) Math.Round(value1, precision);
        value2 = (float) Math.Round(value2, precision);
        Console.WriteLine("{0:R} = {1:R}: {2}", value1, value2, value1.Equals(value2));
    }
}
// The example displays the following output:
//    0.3333333 = 0.3333333: True
Another approach is to implement an IsApproximatelyEqual method:
Test for approximate equality instead of equality. This technique requires that you define either an absolute amount by which the two values can differ but still be equal, or that you define a relative amount by which the smaller value can diverge from the larger value.
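A minimal C# sketch of such a helper (the names and default tolerances are mine, not from the documentation):

```csharp
using System;

public static class FloatCompare
{
    // Absolute-tolerance comparison: treats a and b as equal when they
    // differ by less than epsilon. Suitable when you know the magnitude
    // of the values involved.
    public static bool IsApproximatelyEqual(float a, float b, float epsilon = 1e-6f)
    {
        return Math.Abs(a - b) < epsilon;
    }

    // Relative-tolerance comparison: the allowed difference scales with
    // the larger of the two magnitudes.
    public static bool IsRelativelyEqual(float a, float b, float maxRelativeError = 1e-5f)
    {
        float diff = Math.Abs(a - b);
        float largest = Math.Max(Math.Abs(a), Math.Abs(b));
        return diff <= largest * maxRelativeError;
    }
}
```

The absolute variant suits values whose magnitude you know in advance; the relative variant scales the allowed error with the operands, which is usually safer for general-purpose comparisons.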
Eventually, the root cause of this issue was that I had ignored the deeper precision of the values held by the variables, and thus found myself not understanding why the comparison was incorrect.
Thank you for your help, hope no one runs into this type of silly issue :)

Operator precedence. Try some parentheses.
if ((Center.y - HalfSize.y - HitPosition.y) >= 0) // <--- Issue
{
    Angle = HitOnLeft ? 135f : (HitOnRight ? -135f : 180f);
}
else
{
    Angle = HitOnLeft ? 90f : (HitOnRight ? -90f : 0f);
}

Related

error: The input string was not in the correct format

I am checking for the number 3 in a Text component and get the error:
The input string was not in the correct format
public Text Timer;

private void FixedUpdate()
{
    if (Convert.ToDouble(Timer.text) == 3)
    {
        Debug.Log("w");
    }
}
We don't know what string content your Timer.text has, but you should rather use double.TryParse or, since everything in the Unity API uses float anyway, maybe rather float.TryParse. Alternatively you can also cast it to (float); since you basically want to compare it to an int value, the precision doesn't really matter.
A second point: you should never directly compare floating-point numbers via == (see How should I do floating point comparison?). A float (or double) value that logically should equal 3, e.g. (3f + 3f + 3f) / 3f, may due to floating-point imprecision come out as something like 2.9999999999 or 3.000000001, in which case comparing to 3 might fail unexpectedly.
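The effect is easy to demonstrate with doubles (the decimals here are the standard textbook case, not from the question):

```csharp
using System;

double sum = 0.1 + 0.2;

// Neither 0.1 nor 0.2 is exactly representable in binary floating
// point, so their sum is not exactly 0.3 either.
Console.WriteLine(sum == 0.3);                  // False
Console.WriteLine(sum.ToString("R"));           // 0.30000000000000004
Console.WriteLine(Math.Abs(sum - 0.3) < 1e-9);  // True: tolerance comparison
```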
Rather use the Unity built-in Mathf.Approximately, which basically equals using
if (Math.Abs(a - b) <= epsilon)
where Mathf.Epsilon is
The smallest value that a float can have different from zero.
So I would recommend doing something like
if (double.TryParse(Timer.text, out var time) && Mathf.Approximately((float)time, 3))
{
    Debug.Log("w");
}
Note, however, that if this is a timer that is continuously increased, you might never exactly hit the value 3, so you might want to either use a certain threshold range around it like
if (double.TryParse(Timer.text, out var time) && Math.Abs(time - 3) <= someThreshold)
{
    Debug.Log("w");
}
or simply use e.g.
if (double.TryParse(Timer.text, out var time) && time >= 3)
{
    Debug.Log("w");
}
if you only want any value greater than or equal to 3.

How do I force floating point values greater than 1e-8 to 0

I'm currently working on implementing matrix forward reduction, and so far it passes almost all of my test cases. However, I have a problem rounding floating-point values. I reduce non-zero rows below a pivot element by simply adding -1*a[j,col]/a[i,col], where i = pivot and j is initialized at i + 1, iterating downward until all rows are done.
However, say I want a tolerance of 1e-10 for floating-point comparisons. How can I force values in a[j,col] to zero if they fall below this tolerance?
In some of my test cases I have values at 1e-14 to 1e-15 which should be zero. My current attempt is shown below, but it did not work. Can anyone point me in the right direction?
This is the first time I have tried comparing floating-point values, which I have read can be difficult, so I hope someone can help me solve this, as it is currently forcing my application to a standstill until it is fixed.
var tolerance = 1e-10;
if (a[j, lead] < a[j, lead] * tolerance)
{
    a[j, lead] = 0;
}
Not sure I understand your conditions completely, but if you just care about the exponent, you could check what the exponents are and, based on that, check whether the value fits your criteria.
Your title says "How do I force floating point values greater than 1e-8 to 0":
var tolerance = 1e-8;
bool compareExponents =
    (int) Math.Floor(Math.Log10(Math.Abs(a[j, lead]))) >
    (int) Math.Floor(Math.Log10(Math.Abs(tolerance)));
a[j, lead] = compareExponents ? 0 : a[j, lead];
// If the value (i.e. a[j, lead]) is larger than the tolerance, set it to 0;
// otherwise keep the current value:
//   1e-15 > tolerance ==> 0
//   1e-8  > tolerance ==> a[j, lead]
If you only care about the exponent not exceeding a specific value, you can try this.
You can try this approach:
var tolerance = 1e-10;
var fraction = a[j, lead] - Math.Truncate(a[j, lead]);
if (fraction < tolerance)
{
    a[j, lead] = 0;
}
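Neither snippet above compares the magnitude directly. Reading the question as "zero out entries that are closer to zero than the tolerance", a plain absolute-value check may be what is needed (a sketch; the class and method names are mine):

```csharp
using System;

static class MatrixCleanup
{
    // Sets matrix entries whose magnitude is below `tolerance` to
    // exactly zero. Math.Abs is the key detail: a negative near-zero
    // such as -1e-14 is "almost zero" too, but compared raw (signed)
    // it is smaller than any positive tolerance and never gets caught.
    public static void ZeroSmallEntries(double[,] a, double tolerance)
    {
        for (int i = 0; i < a.GetLength(0); i++)
            for (int j = 0; j < a.GetLength(1); j++)
                if (Math.Abs(a[i, j]) < tolerance)
                    a[i, j] = 0.0;
    }
}
```

Applied after each elimination step, this turns residues like 1e-14 or -1e-15 into exact zeros while leaving genuine entries untouched.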

Why is a simple get-statement so slow?

A few years back, I got an assignment at school, where I had to parallelize a Raytracer.
It was an easy assignment, and I really enjoyed working on it.
Today, I felt like profiling the raytracer, to see if I could get it to run any faster (without completely overhauling the code). During the profiling, I noticed something interesting:
// Sphere.Intersect
public bool Intersect(Ray ray, Intersection hit)
{
    double a = ray.Dir.x * ray.Dir.x +
               ray.Dir.y * ray.Dir.y +
               ray.Dir.z * ray.Dir.z;
    double b = 2 * (ray.Dir.x * (ray.Pos.x - Center.x) +
                    ray.Dir.y * (ray.Pos.y - Center.y) +
                    ray.Dir.z * (ray.Pos.z - Center.z));
    double c = (ray.Pos.x - Center.x) * (ray.Pos.x - Center.x) +
               (ray.Pos.y - Center.y) * (ray.Pos.y - Center.y) +
               (ray.Pos.z - Center.z) * (ray.Pos.z - Center.z) - Radius * Radius;
    // more stuff here
}
According to the profiler, 25% of the CPU time was spent on get_Dir and get_Pos, which is why I decided to optimize the code in the following way:
// Sphere.Intersect
public bool Intersect(Ray ray, Intersection hit)
{
    Vector3d dir = ray.Dir, pos = ray.Pos;
    double xDir = dir.x, yDir = dir.y, zDir = dir.z,
           xPos = pos.x, yPos = pos.y, zPos = pos.z,
           xCen = Center.x, yCen = Center.y, zCen = Center.z;
    double a = xDir * xDir +
               yDir * yDir +
               zDir * zDir;
    double b = 2 * (xDir * (xPos - xCen) +
                    yDir * (yPos - yCen) +
                    zDir * (zPos - zCen));
    double c = (xPos - xCen) * (xPos - xCen) +
               (yPos - yCen) * (yPos - yCen) +
               (zPos - zCen) * (zPos - zCen) - Radius * Radius;
    // more stuff here
}
With astonishing results.
In the original code, running the raytracer with its default arguments (create a 1024x1024 image with only direct lighting and without AA) would take ~88 seconds.
In the modified code, the same would take a little less than 60 seconds.
I achieved a speedup of ~1.5 with only this little modification to the code.
At first, I thought the getters for Ray.Dir and Ray.Pos were doing some stuff behind the scenes that would slow the program down.
Here are the getters for both:
public Vector3d Pos
{
    get { return _pos; }
}

public Vector3d Dir
{
    get { return _dir; }
}
So, both return a Vector3D, and that's it.
I really wonder how calling the getter can take that much longer than accessing the field directly.
Is it because of the CPU caching variables? Or maybe the overhead from calling these methods repeatedly added up? Or maybe the JIT handling the latter case better than the former? Or maybe there's something else I'm not seeing?
Any insights would be greatly appreciated.
Edit:
As @MatthewWatson suggested, I used a Stopwatch to time release builds outside of the debugger. To get rid of noise, I ran the tests multiple times. As a result, the former code takes ~21 seconds (between 20.7 and 20.9) to finish, whereas the latter takes only ~19 seconds (between 19 and 19.2).
The difference has become negligible, but it is still there.
Introduction
I'd be willing to bet that the original code is so much slower because of a quirk in C# involving struct-typed properties. It's not exactly intuitive, but this kind of property is inherently slow. Why? Because structs are not returned by reference. So in order to access ray.Dir.x, you have to:
1. Load the local variable ray.
2. Call get_Dir and store the result in a temporary variable. This involves copying the entire struct, even though only the field x is ever used.
3. Access field x from the temporary copy.
Looking at the original code, the get accessors are called 18 times. This is a huge waste, because it means that the entire struct is copied 18 times overall. In your optimized code, there are only two copies: Dir and Pos are each called only once, and further accesses to the values consist only of the third step from above:
Access field x from the temporary copy.
To sum it up, structs and properties do not go well together.
Why does C# behave this way with struct properties?
It has to do with the fact that in C#, structs are value types. You pass around the value itself, rather than a pointer to the value.
Why doesn't the compiler recognize that the get accessor simply returns a field, and bypass the property altogether?
In debug mode, optimizations like this are skipped to provide a better debugging experience. Even in release mode, you'll find that most jitters don't always do this. I don't know exactly why, but I believe it is because the field is not always word-aligned. Modern CPUs have odd performance requirements. :-)
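The copying behavior described above can be sketched outside the raytracer (types simplified from the question; the method names are illustrative):

```csharp
using System;

struct Vector3d
{
    public double x, y, z;
    public Vector3d(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
}

class Ray
{
    private readonly Vector3d _dir;
    public Ray(Vector3d dir) { _dir = dir; }

    // Every call returns a copy of the whole 24-byte struct.
    public Vector3d Dir { get { return _dir; } }

    // Six property accesses, so six struct copies.
    public double LengthSquaredSlow()
    {
        return Dir.x * Dir.x + Dir.y * Dir.y + Dir.z * Dir.z;
    }

    // One property access (one copy), then plain field reads on the local.
    public double LengthSquaredFast()
    {
        Vector3d d = Dir;
        return d.x * d.x + d.y * d.y + d.z * d.z;
    }
}
```

Both methods return the same value; the difference is purely in how many struct copies the property accesses produce, which is what the profiler attributed to get_Dir and get_Pos.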

Evaluate if two doubles are equal based on a given precision, not within a certain fixed tolerance

I'm running NUnit tests to evaluate some known test data and calculated results. The numbers are floating point doubles so I don't expect them to be exactly equal, but I'm not sure how to treat them as equal for a given precision.
In NUnit we can compare with a fixed tolerance:
double expected = 0.389842845321551d;
double actual = 0.38984284532155145d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and that works fine for numbers below one, but as the numbers grow the tolerance really needs to change so that we always care about the same number of digits of precision.
Specifically, this test fails:
double expected = 1.95346834136148d;
double actual = 1.9534683413614817d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and of course larger numbers fail with that tolerance:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
What's the correct way to evaluate two floating point numbers are equal with a given precision? Is there a built-in way to do this in NUnit?
From msdn:
By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
Let's assume 15, then.
So, we could say that we want the tolerance to be to the same degree.
How many precise figures do we have after the decimal point? We need to know the distance of the most significant digit from the decimal point, right? The magnitude. We can get this with a Log10.
Then we need to divide 1 by 10 ^ precision to get a value around the precision we want.
Now, you'll need to do more test cases than I have, but this seems to work:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import

// Log10(100) = 2, so to get the magnitude we add 1.
int magnitude = 1 + (expected == 0.0 ? -1 : Convert.ToInt32(Math.Floor(Math.Log10(expected))));
int precision = 15 - magnitude;
double tolerance = 1.0 / Math.Pow(10, precision);

Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
It's late; there could be a gotcha in here. I tested it against your three sets of test data and each passed. Changing precision to 16 - magnitude caused the test to fail. Setting it to 14 - magnitude obviously caused it to pass, as the tolerance was greater.
This is what I came up with for The Floating-Point Guide (Java code, but should translate easily, and comes with a test suite, which you really really need):
public static boolean nearlyEqual(float a, float b, float epsilon)
{
    final float absA = Math.abs(a);
    final float absB = Math.abs(b);
    final float diff = Math.abs(a - b);

    if (a * b == 0) { // a or b or both are zero
        // relative error is not meaningful here
        return diff < (epsilon * epsilon);
    } else { // use relative error
        return diff / (absA + absB) < epsilon;
    }
}
The really tricky question is what to do when one of the numbers to compare is zero. The best answer may be that such a comparison should always consider the domain meaning of the numbers being compared rather than trying to be universal.
How about converting each item to a string and comparing the strings?
string test1 = String.Format("{0:0.0##}", expected);
string test2 = String.Format("{0:0.0##}", actual);
Assert.AreEqual(test1, test2);
Assert.That(x, Is.EqualTo(y).Within(10).Percent);
is a decent option (changes it to a relative comparison, where x is required to be within 10% of y). You may want to add extra handling for 0, as otherwise you'll get an exact comparison in that case.
Update:
Another good option is
Assert.That(x, Is.EqualTo(y).Within(1).Ulps);
where Ulps means units in the last place. See https://docs.nunit.org/articles/nunit/writing-tests/constraints/EqualConstraint.html#comparing-floating-point-values.
I don't know if there's a built-in way to do it with NUnit, but I would suggest multiplying each value by 10 to the power of the precision you're seeking, storing the results as longs, and comparing the two longs to each other.
For example:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d;

// for a precision of 4
long lActual = (long) (10000 * actual);
long lExpected = (long) (10000 * expected);

if (lActual == lExpected)
{
    // Perform desired actions
}
This is a quick idea, but how about shifting them down until they are below one? It should be something like num / (10 ^ ceil(log10(num))). Not too sure how well it would work, but it's an idea.
1632.4587642911599 / (10 ^ ceil(log10(1632.4587642911599))) = 0.16324587642911599
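That idea can be sketched as follows (the helper name is mine; note it needs a guard for zero, since Log10(0) is negative infinity):

```csharp
using System;

static class MagnitudeHelper
{
    // Scales a nonzero number into (0.1, 1] in magnitude by dividing
    // out a power of ten derived from its base-10 logarithm.
    public static double Normalize(double num)
    {
        if (num == 0.0) return 0.0; // Log10(0) is -infinity
        return num / Math.Pow(10, Math.Ceiling(Math.Log10(Math.Abs(num))));
    }
}
```

Both values in the question normalize to roughly 0.163245876429, so a fixed tolerance can then be applied to the scaled values.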
How about:
const double significantFigures = 10;
Assert.AreEqual(Actual / Expected, 1.0, 1.0 / Math.Pow(10, significantFigures));
The difference between the two values should be less than either value divided by the precision.
Assert.Less(Math.Abs(firstValue - secondValue), firstValue / Math.Pow(10, precision));
open FsUnit
actual |> should (equalWithin errorMargin) expected

How to interpret situations where Math.Acos() reports invalid input?

Hey all. I'm computing the angle between two vectors, and sometimes Math.Acos() returns NaN when its input is out of bounds for a cosine (input < -1 || input > 1). What does that mean, exactly? Would someone be able to explain what's happening? Any help is appreciated!
Here's my method:
public double AngleBetween(vector b)
{
    var dotProd = this.Dot(b);
    var lenProd = this.Len * b.Len;
    var divOperation = dotProd / lenProd;

    // http://msdn.microsoft.com/en-us/library/system.math.acos.aspx
    return Math.Acos(divOperation) * (180.0 / Math.PI);
}
Here's my implementation of Dot and Len:
public double Dot(vector b)
{
    // x's and y's are latitudes and longitudes (respectively)
    return (this.From.x * b.From.x + this.From.y * b.From.y);
}
public double Len
{
    get
    {
        // geo is of type SqlGeography (MS SQL 2008 Spatial Type) with an SRID of 4326
        return geo.STLength().Value;
    }
}
You have vectors for which divOperation turns out to be < -1 or > 1? Then I think you should check your implementations of Dot and Len.
Since the Cos of an angle is always between -1 and +1, there is no way to compute the inverse function (Acos) of a value outside that range. Alternatively, it means you passed NaN to the Acos function.
I suspect in this case it's the latter: one of your lengths is probably zero.
NaN means "not a number". Mathematically, you can't take the arccosine of a number that is outside the range [-1, 1] (or maybe you can but the result is complex -- I don't remember) so the result of trying to do that is not any number at all.
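Putting those answers together, a defensive version of the question's method clamps the ratio into [-1, 1] before calling Math.Acos and rejects zero lengths explicitly (a sketch with the vector plumbing stripped out; the names are mine):

```csharp
using System;

static class AngleHelper
{
    // Returns the angle in degrees for a cosine given as dotProd / lenProd,
    // clamping the ratio into [-1, 1] so rounding error cannot produce NaN.
    public static double AngleFromCosine(double dotProd, double lenProd)
    {
        if (lenProd == 0)
            throw new ArgumentException("Zero-length vector has no direction.");

        double cos = dotProd / lenProd;

        // Rounding can push the ratio slightly outside [-1, 1].
        cos = Math.Max(-1.0, Math.Min(1.0, cos));

        return Math.Acos(cos) * (180.0 / Math.PI);
    }
}
```

Note the clamp only masks tiny rounding overshoots; a ratio far outside [-1, 1] still indicates a bug in Dot or Len and is worth asserting on separately.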
