Could not use the full range of double in C#? [duplicate]

I have a unit test, testing boundaries:
[TestMethod]
[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void CreateExtent_InvalidTop_ShouldThrowArgumentOutOfRangeException()
{
var invalidTop = 90.0 + Double.Epsilon;
new Extent(invalidTop, 0.0, 0.0, 0.0);
}
public static readonly double MAX_LAT = 90.0;
public Extent(double top, double right, double bottom, double left)
{
if (top > GeoConstants.MAX_LAT)
throw new ArgumentOutOfRangeException("top"); // not hit
}
I thought I'd just tip the 90.0 over the edge by adding the minimum possible positive double to it, but the exception is not thrown. Any idea why?
When debugging, I see top coming in as exactly 90, when it should be 90.00000000... something.
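The vanishing addition is easy to reproduce; IEEE 754 doubles behave identically in Python (used here purely for illustration; `math.nextafter` needs Python 3.9+):

```python
import math

# 5e-324 is the smallest positive double, i.e. C#'s Double.Epsilon.
epsilon = 5e-324

print(90.0 + epsilon == 90.0)            # True: the addend is rounded away
print(math.nextafter(90.0, math.inf))    # the actual next double above 90.0
```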
EDIT:
I should have thought a bit harder: 90 + Double.Epsilon loses the epsilon to rounding. Seems the best way to go is to manipulate the bit pattern directly.
SOLUTION:
[TestMethod]
[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void CreateExtent_InvalidTop_ShouldThrowArgumentOutOfRangeException()
{
var invalidTop = Utility.IncrementTiny(90); // 90.000000000000014
// var sameAsEpsilon = Utility.IncrementTiny(0);
new Extent(invalidTop, 0, 0, 0);
}
/// <summary>
/// Increment a double-precision number by the smallest amount possible
/// </summary>
/// <param name="number">double-precision number</param>
/// <returns>incremented number</returns>
public static double IncrementTiny(double number)
{
#region SANITY CHECKS
if (Double.IsNaN(number) || Double.IsInfinity(number))
throw new ArgumentOutOfRangeException("number");
#endregion
var bits = BitConverter.DoubleToInt64Bits(number);
// if negative then go opposite way
if (number > 0)
return BitConverter.Int64BitsToDouble(bits + 1);
else if (number < 0)
return BitConverter.Int64BitsToDouble(bits - 1);
else
return Double.Epsilon;
}
/// <summary>
/// Decrement a double-precision number by the smallest amount possible
/// </summary>
/// <param name="number">double-precision number</param>
/// <returns>decremented number</returns>
public static double DecrementTiny(double number)
{
#region SANITY CHECKS
if (Double.IsNaN(number) || Double.IsInfinity(number))
throw new ArgumentOutOfRangeException("number");
#endregion
var bits = BitConverter.DoubleToInt64Bits(number);
// if negative then go opposite way
if (number > 0)
return BitConverter.Int64BitsToDouble(bits - 1);
else if (number < 0)
return BitConverter.Int64BitsToDouble(bits + 1);
else
return 0 - Double.Epsilon;
}
This does the job.
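For illustration, the same bit-pattern trick can be sketched in Python (the `increment_tiny` name and the struct-based reinterpretation are mine; positive finite inputs only, unlike the full C# version with its sign check):

```python
import math
import struct

def increment_tiny(x: float) -> float:
    """Same trick as IncrementTiny above: reinterpret the double's bits as a
    64-bit integer and add one (positive finite inputs only in this sketch)."""
    bits = struct.unpack("<q", struct.pack("<d", x))[0]
    return struct.unpack("<d", struct.pack("<q", bits + 1))[0]

print(increment_tiny(90.0) > 90.0)                             # True
print(increment_tiny(90.0) == math.nextafter(90.0, math.inf))  # True
```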

Per the documentation of Double.Epsilon:
The value of the Epsilon property reflects the smallest positive
Double value that is significant in numeric operations or comparisons
when the value of the Double instance is zero.
(Emphasis mine.)
Adding it to 90.0 does not produce "the next representable value after 90.0"; it just yields 90.0 again.

Double.Epsilon is the smallest positive representable value. Just because it's representable on its own does not mean it's the smallest value between any other representable value and the next highest one.
Imagine you had a system to represent just integers. You can represent any integer to 5 significant figures, along with a scale (e.g. in the range 1-100).
So these values are exactly representable, for example
12345 (digits=12345, scale = 0)
12345000 (digits=12345, scale = 3)
In that system, the "epsilon" value would be 1... but if you add 1 to 12345000 you'd still end up with 12345000 because the system couldn't represent the exact result of 12345001.
Now apply the same logic to double, with all its intricacies, and you get a much smaller epsilon, but the same general principle: a value which is distinct from zero, but still can end up not making any difference when added to larger numbers.
Note that much larger values have the same property too - for example, if x is a very large double, then x + 1 may well be equal to x because the gap between two "adjacent" doubles becomes more than 2 as the values get big.
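This is easy to check with IEEE 754 doubles in any language; in Python, for instance:

```python
x = 1e16                  # beyond 2**53, adjacent doubles are 2 apart
print(x + 1 == x)         # True: the exact sum is unrepresentable and rounds back
print(x + 2 == x)         # False: 2 is exactly one gap at this magnitude
```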

In C99 and C++, the function that does what you were trying to do is called nextafter and is in math.h. I do not know if C# has any equivalent, but if it does, I would expect it to have a similar name.

Because Double.Epsilon is the "smallest noticeable change" (loosely speaking) in a double number.
.. but this does not mean that it will have any effect when you use it.
As you know, floats/doubles vary in their resolution depending on the magnitude of the value they contain. For example, in an artificial scheme:
...
-100 -> +-0.1
-10 -> +-0.01
0 -> +-0.001
10 -> +-0.01
100 -> +-0.1
...
If the resolutions were like this, the Epsilon would be 0.001, as it's the smallest possible change. But what would be the expected result of 1000000 + 0.001 in such a system?

How to generate a random double number in the inclusive [0,1] range? [duplicate]

This question already has answers here:
Get random double (floating point) value from random byte array between 0 and 1 in C#?
(6 answers)
random double in four different interval
(2 answers)
C#: Random.NextDouble and including borders of the custom interval
(4 answers)
Closed 1 year ago.
The following code generates a double number in the range [0,1) which means that 1 is exclusive.
var random = new Random();
random.NextDouble();
I am looking for some smart way to generate a random double number in the range [0,1], meaning that 1 is inclusive. I know that the probability of generating exactly 0 or 1 is really low, but imagine that I want to implement a mathematically correct function that requires inclusive limits. How can I do it?
The question is: what is the correct way of generating a random double in the range [0,1]? If there is no such way, I would love to learn that too.
After taking a shower, I have conceived of a potential solution based on my understanding of how a random floating point generator works. My solution makes three assumptions, which I believe to be reasonable but cannot verify. Because of this, the following code is purely academic in nature, and I would not recommend its use in practice. The assumptions are as follows:
The distribution of random.NextDouble() is uniform
The difference between any two adjacent numbers in the range produced by random.NextDouble() is a constant epsilon e
The maximum value generated by random.NextDouble() is equal to 1 - e
Provided that those three assumptions are correct, the following code generates random doubles in the range [0, 1].
// For the sake of brevity, we'll omit the finer details of reusing a single instance of Random
var random = new Random();
double RandomDoubleInclusive() {
double d = 0.0;
int i = 0;
do {
d = random.NextDouble();
i = random.Next(2);
} while (i == 1 && d > 0);
return d + i;
}
This is somewhat difficult to conceptualize, but the essence is somewhat like the below coin-flipping explanation, except instead of a starting value of 0.5, you start at 1, and if at any point the sum exceeds 1, you restart the entire process.
From an engineering standpoint, this code is a blatant pessimization with little practical advantage. However, mathematically, provided that the original assumptions are correct, the result will be as mathematically sound as the original implementation.
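Under the same three assumptions, a direct Python port of this rejection loop might look like the sketch below (names are mine):

```python
import random

def random_double_inclusive() -> float:
    """Port of the C# sketch above: draw (d, i) pairs and only accept i == 1
    when d is exactly 0, which maps that single outcome to 1.0."""
    while True:
        d = random.random()          # uniform in [0, 1)
        i = random.getrandbits(1)    # fair coin
        if not (i == 1 and d > 0.0):
            return d + i

value = random_double_inclusive()
print(0.0 <= value <= 1.0)           # always True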
Below is the original commentary on the nature of random floating point values and how they're generated.
Original Reply:
Your question carries with it a single critical erroneous assumption: Your use of the word "Correct". We are working with floating point numbers. We abandoned correctness long ago.
What follows is my crude understanding of how a random number generator produces a random floating point value.
You have a coin, a sum starting at zero, and a value starting at one half (0.5).
Flip the coin.
If heads, add the value to the sum.
Half the value.
Repeat 23 times.
You have just generated a random number. Here are some properties of the number (for reference, 2^23 is 8,388,608, and 2^(-23) is the inverse of that, or approximately 0.0000001192):
The number is one of 2^23 possible values
The lowest value is 0
The highest value is 1 - 2^(-23);
The smallest difference between any two potential values is 2^(-23)
The values are evenly distributed across the range of potential values
The odds of getting any one value are completely uniform across the range
Those last two points are true regardless of how many times you flip the coin
The process for generating the number was really really easy
That last point is the kicker. It means if you can generate raw entropy (i.e. perfectly uniform random bits), you can generate an arbitrarily precise number in a very useful range with complete uniformity. Those are fantastic properties to have. The only caveat is that it doesn't generate the number 1.
The reason that caveat is seen as acceptable is because every other aspect of the generation is so damned good. If you're trying to get a high precision random value between 0 and 1, chances are you don't actually care about landing on 1 any more than you care about landing on 0.38719, or any other random number in that range.
While there are methods for getting 1 included in your range (which others have stated already), they're all going to cost you in either speed or uniformity. I'm just here to tell you that it might not actually be worth the tradeoff.
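The coin-flipping procedure above can be sketched directly (Python here for illustration; `coin_flip_double` is a made-up name):

```python
import random

def coin_flip_double(flips: int = 23) -> float:
    """The coin-flipping construction above: each heads adds the current
    place value, which halves on every flip."""
    total, value = 0.0, 0.5
    for _ in range(flips):
        if random.getrandbits(1):    # heads
            total += value
        value /= 2.0
    return total

sample = coin_flip_double()
print(0.0 <= sample <= 1.0 - 2.0**-23)   # True: 1.0 itself is unreachable
```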
If you want a uniform distribution, it's harder than it seems. Look at how NextDouble is implemented.
There are ways to produce uniformly distributed numbers in arbitrary intervals; an easy one is selectively discarding some of the generated values. Here's how I would do that for your problem.
/// <summary>Utility function to generate random 64-bit numbers</summary>
static ulong nextUlong( Random rand )
{
Span<byte> buffer = stackalloc byte[ 8 ];
rand.NextBytes( buffer );
return BitConverter.ToUInt64( buffer );
}
/// <summary>Generate a random number in [ 0 .. +1 ] interval, inclusive.</summary>
public static double nextDoubleInclusive( Random rand )
{
// We need uniformly distributed integer in [ 0 .. 2^53 ]
// The interval contains ( 2^53 + 1 ) distinct values.
// The complete range of ulong is [ 0 .. 2^64 - 1 ], 2^64 distinct values.
// 2^64 / ( 2^53 + 1 ) is about 2047.99, here's why
// https://www.wolframalpha.com/input/?i=2%5E64+%2F+%28+2%5E53+%2B+1+%29
const ulong discardThreshold = 2047ul * ( ( 1ul << 53 ) + 1 );
ulong src;
do
{
src = nextUlong( rand );
}
while( src >= discardThreshold );
// Got uniformly distributed value in [ 0 .. discardThreshold ) interval
// Dividing by 2047 gets us a uniformly distributed value in [ 0 .. 2^53 ]
src /= 2047;
// Produce the result
return src * ( 1.0 / ( 1ul << 53 ) );
}
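For illustration, the same discard-threshold scheme ports to Python almost line for line (the `next_double_inclusive` name mirrors the C# one; `random.getrandbits(64)` stands in for the `nextUlong` helper):

```python
import random

def next_double_inclusive() -> float:
    """The rejection-sampling approach above: draw 64 random bits, discard
    values >= 2047 * (2**53 + 1), divide by 2047 to get a uniform integer
    in [0, 2**53], then scale so 2**53 maps to exactly 1.0."""
    discard_threshold = 2047 * ((1 << 53) + 1)
    while True:
        src = random.getrandbits(64)
        if src < discard_threshold:
            break
    src //= 2047
    return src * (1.0 / (1 << 53))

print((1 << 53) * (1.0 / (1 << 53)))   # 1.0: the top of the range is exact
```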
Usually, knowing that NextDouble() has a finite range, we multiply the value to suit the range we need.
For this reason it is common to create your own wrapper to produce the next business value when the built in logic does not meet your requirements.
For this particular example, why not just post-process the result: when it is zero, take the value from Next(0,2) instead.
public static double NextInclude1(this Random rand = null)
{
rand = rand ?? new Random();
var result = rand.NextDouble();
if (result == 0) result = rand.Next(0,2);
return result;
}
You can implement your own bias toward 0 or 1 as a result by varying the comparison to zero. If you do that, though, you are likely to create an exclusion range, so after the comparison you may need to return the next NextDouble():
public static double NextInclude1(this Random rand = null)
{
rand = rand ?? new Random();
var result = rand.NextDouble();
if (result < 0.2)
result = rand.Next(0,2);
else
result = rand.NextDouble();
return result;
}
This particular example results in an overall bias toward 0; it's up to you to determine the specific parameters that you would accept. Overall, NextDouble() is your base-level tool for most of your custom Random needs.
The Random.Next method returns an integer value in the range [0..Int32.MaxValue) (the exclusive range-end is denoted by the right parenthesis). So if you want to make the value 1.0 a possible result of the NextDouble method (source code), you could do this:
/// <summary>Returns a random floating-point number that is greater than or equal to 0.0,
/// and less than or equal to 1.0.</summary>
public static double NextDoubleInclusive(this Random random)
{
return (random.Next() * (1.0 / (Int32.MaxValue - 1)));
}
This fiddle verifies that the expression (Int32.MaxValue - 1) * (1.0 / (Int32.MaxValue - 1)) evaluates to 1.0.
This definitely works; you can check the distribution here: https://dotnetfiddle.net/SMMOrM
Random random = new Random();
double result = (int)(random.NextDouble() * 10) > 4
? random.NextDouble()
: 1 - random.NextDouble();
Update
Agree with Snoot; this version would return 0 and 1 half as often as other values.
Easy, you can do this
var random = new Random();
var myRandom = 1 - Math.Abs(random.NextDouble() - random.NextDouble());
update
Sorry, this won't produce a uniform distribution; the results will tend to be higher, close to 1.

How To Display Only One Number After Point Without RoundUp [duplicate]

How can I multiply two decimals and round the result down to 2 decimal places?
For example if the equation is 41.75 x 0.1 the result will be 4.175. If I do this in c# with decimals it will automatically round up to 4.18. I would like to round down to 4.17.
I tried using Math.Floor but it just rounds down to 4.00. Here is an example:
Math.Floor (41.75 * 0.1);
The Math.Round(...) function has an Enum to tell it what rounding strategy to use. Unfortunately the two defined won't exactly fit your situation.
The two Midpoint Rounding modes are:
AwayFromZero - When a number is halfway between two others, it is rounded toward the nearest number that is away from zero. (Aka, round up)
ToEven - When a number is halfway between two others, it is rounded toward the nearest even number. (Will Favor .16 over .17, and .18 over .17)
What you want to use is Floor with some multiplication.
var output = Math.Floor((41.75 * 0.1) * 100) / 100;
The output variable should have 4.17 in it now.
In fact you can also write a function to take a variable length as well:
public decimal RoundDown(decimal i, int decimalPlaces)
{
var power = Convert.ToDecimal(Math.Pow(10, decimalPlaces));
return Math.Floor(i * power) / power;
}
public double RoundDown(double number, int decimalPlaces)
{
return Math.Floor(number * Math.Pow(10, decimalPlaces)) / Math.Pow(10, decimalPlaces);
}
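As a side note, the 41.75 × 0.1 example is exact in decimal arithmetic; a sketch using Python's `decimal` module (the analogue of C#'s `decimal` type) shows the truncating round directly:

```python
from decimal import Decimal, ROUND_DOWN

product = Decimal("41.75") * Decimal("0.1")                       # exactly 4.175
rounded = product.quantize(Decimal("0.01"), rounding=ROUND_DOWN)  # truncate to 2 places
print(rounded)                                                    # 4.17
```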
As of .NET Core 3.0 and the upcoming .NET Framework 5.0 the following is valid
Math.Round(41.75 * 0.1, 2, MidpointRounding.ToZero)
There is no native support for precision floor/ceiling in C#.
You can however mimic the functionality by multiplying the number, taking the floor, and then dividing by the same multiplier,
e.g.
decimal y = 4.314M;
decimal x = Math.Floor(y * 100) / 100; // To two decimal places (use 1000 for 3 etc)
Console.WriteLine(x); // 4.31
Not the ideal solution, but should work if the number is small.
One more solution is to make rounding toward zero from rounding away from zero.
It should be something like this:
static decimal DecimalTowardZero(decimal value, int decimals)
{
// rounding away from zero
var rounded = decimal.Round(value, decimals, MidpointRounding.AwayFromZero);
// if the absolute rounded result is greater
// than the absolute source number we need to correct result
if (Math.Abs(rounded) > Math.Abs(value))
{
return rounded - new decimal(1, 0, 0, value < 0, (byte)decimals);
}
else
{
return rounded;
}
}
This is my Float-Proof Round Down.
public static class MyMath
{
public static double RoundDown(double number, int decimalPlaces)
{
// Use fixed-point formatting (one extra decimal) so there is always a
// '.' and enough digits, then keep only the requested decimals.
string pr = number.ToString("F" + (decimalPlaces + 1));
string[] parts = pr.Split('.');
parts[1] = parts[1].Substring(0, decimalPlaces);
return Convert.ToDouble(string.Join(".", parts));
}
}
I've found that the best method is to use strings; the binary vagaries of Math tend to get things wrong, otherwise. One waits for .Net 5.0 to make this fact obsolete. No decimal places is a special case: you can use Math.Floor for that. Otherwise, we ToString the number with one more decimal place than is required, then parse that without its last digit to get the answer:
/// <summary>
/// Truncates a Double to the given number of decimals without rounding
/// </summary>
/// <param name="D">The Double</param>
/// <param name="Precision">(optional) The number of Decimals</param>
/// <returns>The truncated number</returns>
public static double RoundDown(this double D, int Precision = 0)
{
if (Precision <= 0) return Math.Floor(D);
string S = D.ToString("0." + new string('0', Precision + 1));
return double.Parse(S.Substring(0, S.Length - 1));
}
If you want to round down any double to a specific number of decimal places, no matter whether it is at the midpoint, you can use:
public double RoundDownDouble(double number, int decimalPlaces)
{
var tmp = Math.Pow(10, decimalPlaces);
return Math.Truncate(number * tmp) / tmp;
}


I'm trying to understand Microsoft's DoubleUtil.AreClose() code that I reflected over

If you reflect over WindowsBase.dll > MS.Internal.DoubleUtil.AreClose(...) you'll get the following code:
public static bool AreClose(double value1, double value2)
{
if (value1 == value2)
{
return true;
}
double num2 = ((Math.Abs(value1) + Math.Abs(value2)) + 10.0) * 2.2204460492503131E-16;
double num = value1 - value2;
return ((-num2 < num) && (num2 > num));
}
I'm trying to understand two different things:
Where did they come up with the formula for num2? I guess I just don't understand the significance of first adding the value 10.0 and then multiplying the whole thing by the number 2.2204460492503131E-16. Anyone know why this is the formula used?
What is the point of the return statement there? It seems that if num2 is greater than num, then by default the negated value of num2 should be less than num. Maybe I'm missing something here, but it seems redundant. To me it's like checking whether 5 is larger than 3 and whether -5 is less than 3 (as an example).
This appears to be a "tolerance" value that's based on the magnitude of the numbers being compared. Note that due to how floating point numbers are represented, the smallest representable difference between numbers with an exponent of 0 is 2^-53, or approximately 1.11022 × 10^-16. (See unit in the last place and floating point on Wikipedia.) The constant here is exactly two times that value, so it allows for small rounding errors that have accumulated during computations.
If you reorder the parameters in the conditionals, and then rename num2 to tolerance and num to diff, it should become clear.
Viz.:
return ((-num2 < num) && (num2 > num));
return ((num > -num2) && (num < num2));
return ((diff > -tolerance) && (diff < tolerance));
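A Python rendering of the same logic (the names `are_close`, `tolerance`, and `diff` are illustrative) makes the two-sided check easy to poke at:

```python
DBL_EPSILON = 2.2204460492503131e-16   # smallest e with 1.0 + e != 1.0

def are_close(value1: float, value2: float) -> bool:
    """The AreClose logic above: the tolerance scales with the operands'
    magnitude, and the +10.0 keeps it reasonable near zero."""
    if value1 == value2:               # also catches equal infinities
        return True
    tolerance = (abs(value1) + abs(value2) + 10.0) * DBL_EPSILON
    diff = value1 - value2
    return -tolerance < diff < tolerance

print(are_close(1.0, 1.0 + 2.0**-52))  # True: one ulp apart
print(are_close(1.0, 1.000001))        # False: far outside the tolerance
```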
The comments should help understanding this method :)
/// <summary>
/// AreClose - Returns whether or not two doubles are "close". That is, whether or
/// not they are within epsilon of each other. Note that this epsilon is proportional
/// to the numbers themselves so that AreClose survives scalar multiplication.
/// There are plenty of ways for this to return false even for numbers which
/// are theoretically identical, so no code calling this should fail to work if this
/// returns false. This is important enough to repeat:
/// NB: NO CODE CALLING THIS FUNCTION SHOULD DEPEND ON ACCURATE RESULTS - this should be
/// used for optimizations *only*.
/// </summary>
/// <returns>
/// bool - the result of the AreClose comparison.
/// </returns>
/// <param name="value1"> The first double to compare. </param>
/// <param name="value2"> The second double to compare. </param>
public static bool AreClose(double value1, double value2)
{
// in case they are Infinities (then epsilon check does not work)
if (value1 == value2)
{
return true;
}
// This computes (|value1-value2| / (|value1| + |value2| + 10.0)) < DBL_EPSILON
double eps = (Math.Abs(value1) + Math.Abs(value2) + 10.0) * DBL_EPSILON;
double delta = value1 - value2;
return (-eps < delta) && (eps > delta);
}
Update
And here the "mystic" value DBL_EPSILON
// Const values come from sdk\inc\crt\float.h
internal const double DBL_EPSILON = 2.2204460492503131e-016; /* smallest such that 1.0+DBL_EPSILON != 1.0 */
Searching on Google for that number led me to this page:
http://en.m.wikipedia.org/wiki/Machine_epsilon
In graphics, geometry calculations can produce two points that are very close from a pixel point of view. Since floating point operations may give slightly different results due to rounding, this method checks whether one number is close to another within a range proportional to machine epsilon.
Note that the closer the numbers are to 0, the smaller the difference must be in order to pass the check.
The two-sided return also makes sense; take the values 0 and 1 for example: without the first comparison the check would pass (the delta is -1, which is less than eps), but 0 and 1 are not close enough :)

Double.Epsilon for equality, greater than, less than, less than or equal to, greater than or equal to

http://msdn.microsoft.com/en-us/library/system.double.epsilon.aspx
If you create a custom algorithm that determines whether two floating-point numbers can be considered equal, you must use a value that is greater than the Epsilon constant to establish the acceptable absolute margin of difference for the two values to be considered equal. (Typically, that margin of difference is many times greater than Epsilon.)
So is this not really an epsilon that could be used for comparisons? I don't really understand the MSDN wording.
Can it be used as the epsilon in the examples here? - What is the most effective way for float and double comparison?
And finally this seems really important so I'd like to make sure I have a solid implementation for equality, greater than, less than, less than or equal to, and greater than or equal to.
I don't know what they were smoking when they wrote that. Double.Epsilon is the smallest representable floating point value greater than zero (a denormal). All you know is that, if there's a truncation error, it will always be larger than this value. Much larger.
The System.Double type can represent values accurate to up to 15 digits. So a simple first order estimate if a double value x is equal to some constant is to use an epsilon of constant * 1E-15
public static bool AboutEqual(double x, double y) {
double epsilon = Math.Max(Math.Abs(x), Math.Abs(y)) * 1E-15;
return Math.Abs(x - y) <= epsilon;
}
You have to watch out though, truncation errors can accumulate. If both x and y are computed values then you have to increase the epsilon.
I'd like to make sure I have a solid implementation for equality, greater than, less than, less than or equal to, and greater than or equal to.
You are using binary floating point arithmetic.
Binary floating point arithmetic was designed to represent physical quantities like length, mass, charge, time, and so on.
Presumably then you are using binary floating point arithmetic as it was intended to be used: to do arithmetic on physical quantities.
Measurements of physical quantities always have a particular precision, depending on the precision of the device used to measure them.
Since you are the one providing the values for the quantities you are manipulating, you are the one who knows what the "error bars" are on that quantity. For example, if you are providing the quantity "the height of the building is 123.56 metres" then you know that this is accurate to the centimetre, but not to the micrometer.
Therefore, when comparing two quantities for equality, the desired semantics is to say "are these two quantities equal within the error bars specified by each measurement?"
So now we have an answer to your question. What you must do is keep track of what the error is on each quantity; for example, the height of the building is "within 0.01 of 123.56 meters" because you know that is how precise the measurement is. If you then get another measurement which is 123.5587 and want to know whether the two measurements are "equal" within error tolerances, then do the subtraction and see if it falls into the error tolerance. In this case it does. If the measurements were in fact precise to the micrometre, then they are not equal.
In short: you are the only person here who knows what sensible error tolerances are, because you are the only person who knows where the figures you are manipulating came from in the first place. Use whatever error tolerance makes sense for your measurements given the precision of the equipment you used to produce it.
If you have two double values that are close to 1.0, but differ only in their least significant bits, then the difference between them will be many orders of magnitude greater than Double.Epsilon. In fact, the difference is roughly 308 decimal orders of magnitude: the gap between adjacent doubles near 1.0 is about 2.2 × 10^-16, while Double.Epsilon is about 4.9 × 10^-324. This is because of the effect of the exponent portion. Double.Epsilon has a huge negative exponent on it, while 1.0 has an exponent of zero (after the biases are removed, of course).
If you want to compare two similar values for equality, then you will need to choose a custom epsilon value that is appropriate for the orders-of-magnitude size of the values to be compared.
If the double values that you are comparing are near 1.0, then the value of the least significant bit would be near 0.0000000000000002. If the double values that you are comparing are in the quintillions, then the value of the least significant bit could be as much as a thousand. No single value for epsilon could be used for equality comparisons in both of those circumstances.
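Python's `math.ulp` (3.9+) makes the magnitude dependence concrete:

```python
import math

print(math.ulp(1.0))     # 2.220446049250313e-16: the gap between doubles near 1.0
print(math.ulp(1e15))    # 0.125: the gap has grown with the magnitude
print(5e-324)            # Double.Epsilon, far smaller than either gap
```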
I just did this - using Kent Bogarts idea.
private bool IsApproximatelyEqual(double x, double y, double acceptableVariance)
{
double variance = x > y ? x - y : y - x;
return variance < acceptableVariance;
//or
//return Math.Abs(x - y) < acceptableVariance;
}
It could be used for comparisons, assuming you want to ensure the two values are either exactly equal, or have the smallest representable difference for the double type. Generally speaking, you would want to use a number greater than double.Epsilon to check whether two doubles are approximately equal.
Why the .NET framework doesn't define something like
bool IsApproximatelyEqual(double value, double permittedVariance);
is beyond me.
I use the following
public static class MathUtil {
/// <summary>
/// smallest such that 1.0+EpsilonF != 1.0
/// </summary>
public const float EpsilonF = 1.192092896e-07F;
/// <summary>
/// smallest such that 1.0+EpsilonD != 1.0
/// </summary>
public const double EpsilonD = 2.2204460492503131e-016;
[MethodImpl( MethodImplOptions.AggressiveInlining )]
public static bool IsZero( this double value ) {
return value < EpsilonD && value > -EpsilonD;
}
[MethodImpl( MethodImplOptions.AggressiveInlining )]
public static int Sign( this double value ) {
if ( value < -EpsilonD ) {
return -1;
}
if ( value > EpsilonD )
return 1;
return 0;
}
}
and if you want to check for equality of two doubles 'a' and 'b', you can use
(a-b).IsZero();
and if you want to get the comparison result, use
(a-b).Sign();
The problem with comparing doubles is that when you compare two different math results that are mathematically equal but which, due to rounding errors, don't evaluate to the same value, they will have some difference, and that difference is larger than epsilon except in edge cases. And choosing a reliable epsilon value is also difficult. Some people consider two doubles equal if the difference between them is less than some percentage value, since a static minimum-difference epsilon may be too small or too large depending on whether the doubles themselves are high or low.
Here's some code that included twice within the Silverlight Control Toolkit:
public static bool AreClose(double value1, double value2)
{
//in case they are Infinities (then epsilon check does not work)
if(value1 == value2) return true;
// This computes (|value1-value2| / (|value1| + |value2| + 10.0)) < DBL_EPSILON
double eps = (Math.Abs(value1) + Math.Abs(value2) + 10.0) * DBL_EPSILON;
double delta = value1 - value2;
return(-eps < delta) && (eps > delta);
}
In one place they use 1e-6 for epsilon; in another they use 1.192093E-07. You will want to choose your own epsilon.
There is no built-in choice; you have to calculate it yourself or define your own constant.
double calculateMachineEpsilon() {
double result = 1.0;
// Halve until adding half of the candidate no longer changes 1.0.
while(1.0 + result/2.0 != 1.0) {
result /= 2.0;
}
return result; // 2.2204460492503131E-16
}
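For comparison, a working version of this classic halving loop in Python, checked against the runtime's published machine epsilon:

```python
import sys

def calculate_machine_epsilon() -> float:
    """Halve a candidate until adding half of it to 1.0 no longer changes 1.0."""
    result = 1.0
    while 1.0 + result / 2.0 != 1.0:
        result /= 2.0
    return result

print(calculate_machine_epsilon() == sys.float_info.epsilon)   # True
```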
