Imagine that a - b < c (a, b, c are C# doubles). Is it guaranteed that a < b + c?
Thanks!
EDIT
Let's say that arithmetic overflow doesn't occur, unlike in the following example:
double a = 1L << 53;
double b = 1;
double c = a;
Console.WriteLine(a - b < c); // Prints True
Console.WriteLine(a < b + c); // Prints False
Imagine that Math.Abs(a) < 1.0 && Math.Abs(b) < 1.0 && Math.Abs(c) < 1.0
No. Suppose a = c, a very large number, and b is a very small number. It's possible that a - b has a representation less than a, but b + c (which here equals a + b) is so close to a, and just above it, that it still rounds to a.
Here's an example:
double a = 1L << 53;
double b = 1;
double c = a;
Console.WriteLine(a - b < c); // Prints True
Console.WriteLine(a < b + c); // Prints False
EDIT:
Here's another example, which matches your edited question:
double a = 1.0;
double b = 1.0 / (1L << 53);
double c = a;
Console.WriteLine(a - b < c); // Prints True
Console.WriteLine(a < b + c); // Prints False
In other words, when we subtract a very small number from 1, we get a result less than 1. When we add the same number to 1, we just get 1 back due to the limitations of double precision.
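A small sketch to make that rounding visible (Math.BitDecrement requires .NET Core 3.0 or later): 2^-53 is exactly the gap between 1.0 and the next double below it, but only half the gap to the next double above it, so the addition rounds back to 1.0.
double eps = 1.0 / (1L << 53);                            // 2^-53
Console.WriteLine(1.0 - eps == Math.BitDecrement(1.0));   // True: the next double below 1.0
Console.WriteLine(1.0 + eps == 1.0);                      // True: ties round to even, back to 1.0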
No, not always:
double a = double.MaxValue;
double b = double.MaxValue;
double c = 0.1;
Console.WriteLine(a - b < c); // True
Console.WriteLine(a < b + c); // False
This link discusses the properties of floating-point arithmetic and could be very interesting:
FLOATING-POINT FALLACIES
In particular, search for "Properties of Relations".
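For a taste of what that section covers, here is one small illustration of a familiar relation property failing for doubles (NaN compares false against everything, including itself):
double nan = double.NaN;
Console.WriteLine(nan < 1.0);    // False
Console.WriteLine(nan >= 1.0);   // False: trichotomy fails
Console.WriteLine(nan == nan);   // False: reflexivity fails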
I am trying to solve the Fibonacci sequence for both negative numbers and large numbers and came up with the following code and algorithm. I am certain the algorithm works, but the issue I am having is that for very large numbers the precision of the result is incorrect. Here is the code:
using System;
using System.Numerics;

public class Fibonacci
{
    public static BigInteger fib(int n)
    {
        // Binet's formula: fib(n) = (p^n - q^n) / sqrt(5)
        decimal p = (decimal) (1 + Math.Sqrt(5)) / 2;
        decimal q = (decimal) (1 - Math.Sqrt(5)) / 2;
        decimal r = (decimal) Math.Sqrt(5);
        Console.WriteLine("n: {0} p: {1}, q: {2}, t: {3}",
                          n, p, q, (Pow(p, n) - Pow(q, n)) / r);
        return (BigInteger) (Decimal.Round((Pow(p, n) - Pow(q, n)) / r));
    }

    // Exponentiation by squaring, extended to negative exponents.
    public static decimal Pow(decimal x, int y)
    {
        if (y < 0)
            return 1 / Pow(x, -y);
        else if (y == 0)
            return 1;
        else if (y % 2 == 0)
        {
            decimal z = Pow(x, y / 2);
            return z * z;
        }
        else
            return Pow(x, y - 1) * x;
    }
}
Small values of n work fine, but if we take a large number like -96 to get the Fibonacci number for, I get a result of -51680708573203484173, but the real number is -51680708854858323072. I checked that the rounding was OK, but it appears that somewhere along the way my result is losing precision and not keeping its values correctly. I thought using decimal would solve this precision issue (I previously used double), but that did not work.
Where in my code am I losing precision, or is there another issue with my code that I am misdiagnosing?
Try this.
public static BigInteger Fibonacci(int n)
{
    // Fast doubling: with a = F(m) and b = F(m + 1),
    //   F(2m)     = a * (2b - a)
    //   F(2m + 1) = a^2 + b^2
    // Consume the bits of n from most to least significant.
    BigInteger a = 0;
    BigInteger b = 1;
    for (int i = 31; i >= 0; i--)
    {
        BigInteger d = a * (b * 2 - a);
        BigInteger e = a * a + b * b;
        a = d;
        b = e;
        if ((((uint)n >> i) & 1) != 0)
        {
            BigInteger c = a + b;
            a = b;
            b = c;
        }
    }
    return a;
}
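The loop above only handles n >= 0. Since the question also mentions negative numbers, here is a hedged wrapper sketch using the standard identity F(-n) = (-1)^(n+1) * F(n) (the name FibonacciSigned is mine; the int.MinValue edge case is ignored):
public static BigInteger FibonacciSigned(int n)
{
    if (n >= 0)
        return Fibonacci(n);
    BigInteger f = Fibonacci(-n);        // F(|n|)
    return (-n) % 2 == 0 ? -f : f;       // F(-m) = (-1)^(m+1) * F(m)
}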
Good Luck!
As you wrote, decimal has approximately 28 decimal digits of precision. However, Math.Sqrt(5), being a double, does not.
Using a more accurate square root of 5 lets this algorithm stay exact for longer, though of course it is still limited by precision; it just fails later.
public static BigInteger fib(int n)
{
    // sqrt(5) to well beyond decimal's ~28 significant digits.
    decimal sqrt5 = 2.236067977499789696409173668731276235440618359611525724270m;
    decimal p = (1 + sqrt5) / 2;
    decimal q = (1 - sqrt5) / 2;
    decimal r = sqrt5;
    return (BigInteger) (Decimal.Round((Pow(p, n) - Pow(q, n)) / r));
}
This way fib(96) = 51680708854858323072, which is correct. However, it becomes wrong again at n = 128.
I have a math equation that I want to use in my program, but I get an error: the result of the equation is not a number (NaN).
The math equation is (note: n is the number in the textbox):
((298/2)^2*ACOS(((298/2)-n)/(298/2))-((298/2)-n)*((2*(298/2)*n-n^2))^(0.5))*1213/1000
In the button's click handler:
double a = Math.Pow((298.00 / 2), 2);
double b = (Math.Acos(((298.00 / 2) - Convert.ToDouble(textBox1.Text)) / (298.00 / 2)));
double c = ((298.00 / 2) - Convert.ToDouble(textBox1.Text));
double d = (2 * (298.00 / 2) * Convert.ToDouble(textBox1.Text));
double f = Math.Pow(Convert.ToDouble(textBox1.Text), 2);
double r = ((a * b) - (c * d) )- f;
double result = Math.Pow(r, (0.5));
double h = result * 1213.00 / 1000;
textBox1.Text = Convert.ToString(h);
If anyone knows what is wrong here, please tell me.
The error is in variable r, because the result for r is less than 0!
Let's rewrite the given math expression in a more convenient way and label its parts:
a = (298/2)^2
b = ACOS(((298/2) - n) / (298/2))
c = (298/2) - n
d = 2 * (298/2) * n
f = n^2
r = (d - f)^0.5
so that the whole expression becomes (a * b - c * r) * 1213/1000.
Now it is easier to see that you have an error in the formula:
double r = ((a * b) - (c * d) )- f;
r must be calculated using the following formula:
double r = Math.Sqrt(d - f); // It is better to use Math.Sqrt(x) instead of Math.Pow(x, 0.5).
Now, knowing where the error is, we can fix it:
double n = Convert.ToDouble(textBox1.Text);
double a = Math.Pow(298.00 / 2, 2);
double b = Math.Acos( (298.00 / 2 - n) / (298.00 / 2) );
double c = 298.00 / 2 - n;
double d = 2 * 298.00 / 2 * n;
double f = Math.Pow(n, 2);
double r = Math.Sqrt(d - f);
double result = a * b - c * r;
double h = result * 1213.00 / 1000;
textBox1.Text = Convert.ToString(h);
But for values of n > 298 (and likewise n < 0) you will still get r = NaN, because for such values the expression d - f is negative.
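If the handler needs to guard against that, here is a hedged validation sketch (the acos argument must stay in [-1, 1], which means 0 <= n <= 298; textBox1 and the WinForms MessageBox are assumed from the question's context):
double n;
if (!double.TryParse(textBox1.Text, out n) || n < 0 || n > 298)
{
    MessageBox.Show("Please enter a number between 0 and 298.");
    return;
}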
I'm trying to find solutions to a simple MDAS (Multiplication, Division, Addition, Subtraction) problem using C#, and while it mostly gets correct solutions, I have problems when the values add up to x.99999..., because the result never reaches 1 and I get wrong answers.
For example, if I have:
decimal a = 1M;
decimal b = 2M;
decimal c = 6M;
decimal d = 4M;
decimal e = 7M;
decimal f = 8M;
decimal g = 3M;
decimal h = 5M;
decimal i = 9M;
Console.WriteLine(a + 13 * b / c + d + 12 * e - f - 11);
Console.WriteLine(g * h / i);
Console.WriteLine(a + 13 * b / c + d + 12 * e - f - 11 + g * h / i);
Which gives me:
74.33333333333333333333333333
1.6666666666666666666666666667
75.999999999999999999999999997
But I want:
74.33333333333333333333333333
1.6666666666666666666666666667
76
Is there a way that I can always get a precise answer, so that 0.(6) + 0.(3) = 1, without needing to check and modify the values? If not, what is the best way to go about it?
The following will round your double value to within a precision of 0.1:
// parameters
double d = -7.9;
double precision = 0.1;
//the conversion
double v = (Math.Abs(Math.Round(d)-d)) < precision ? Math.Round(d) : d;
//output
int i = (int) v;
Console.WriteLine(i);
I went with #tia and Dai's suggestion to use a Rational data type. I got tompazourek's Rationals NuGet package and used it like this:
Rational a = 1;
Rational b = 2;
Rational c = 6;
Rational d = 4;
Rational e = 7;
Rational f = 8;
Rational g = 3;
Rational h = 5;
Rational i = 9;
Console.WriteLine(a + (Rational)13 * b / c + d + (Rational)12 * e - f - (Rational)11);
Console.WriteLine(g * h / i);
Rational toTest = a + (Rational)13 * b / c + d + (Rational)12 * e - f - (Rational)11 + g * h / i;
Console.WriteLine(toTest);
Console.WriteLine(toTest.Equals(76));
Which gives me:
446/6
15/9
4104/54
True
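(The fractions are printed unreduced; if I recall the package correctly, Rational also exposes a CanonicalForm property that reduces them, so toTest.CanonicalForm should print 76. Either way, the equality check above is exact.)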
int a = 3, b = 9, c = 2;
double e = 11, f = 0.1;
e = (b + c) / a * a;
Now, I want to know what the result of e is. When I do the math in my head, I get the result to be 11/9 = 1.2222222...
BUT
when I run the program, I simply get 9. Which way of thinking is right?
There are two things going wrong:
you're misunderstanding the order of operations
your integers are truncating
So what the program is doing is:
9 + 2 = 11
11 / 3 = 3.666..., truncated to 3
3 * 3 = 9
Use parentheses, and don't use ints; use all doubles.
double a = 3, b = 9, c = 2;
double e = 11, f = 0.1;
e = (b + c) / (a * a);
If a, b, and c all have to be ints, then you can cast them:
int a = 3, b = 9, c = 2;
double e = 11, f = 0.1;
e = ((double)b + (double)c) / ((double)a * (double)a);
Now, this could just as easily be done with
e = (double)(b + c) / (double)(a * a);
and that's because neither operation inside the parentheses loses anything to integer truncation, but it is bad practice to rely on that kind of coincidence.
All your input variables are integers, so the arithmetic is done as integer arithmetic. Also, arithmetic operators of the same precedence in C# are evaluated left to right, so
e = (b + c) / a * a is equivalent to e = ((b + c) / a) * a
(b + c) / a * a = ((9 + 2) / 3) * 3
                = (11 / 3) * 3
                = 3 * 3
                = 9
The expression that I think you want is
e = (double)(b + c) / (a * a);
For details on operator precedence in C#, see this .NET documentation.
The compiler is quite correct - you may want to review how operator precedence in C# works.
You have two choices: use doubles instead of integers, or use an explicit cast to convert one of your values to double.
e = (b + c) / (double)(a * a);
This statement:
(b + c) / a * a
is evaluated as:
(int) / (int)
If you divide two integers, how could you get a double result? The result is then converted to double by an implicit conversion, but the quotient has already been truncated, so the conversion cannot bring the lost fraction back.
But when you do something like this:
e = (b + c) / (double)(a * a);
The value of b + c is promoted to double, and then a double/double division is performed, so you get the correct result.
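For instance:
int a = 3, b = 9, c = 2;
Console.WriteLine((b + c) / (a * a));            // 1       (11 / 9 in integer arithmetic)
Console.WriteLine((b + c) / (double)(a * a));    // 1.2222... (real division)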
How can I compute a base 2 logarithm without using the built-in math functions in C#?
I use Math.Log and BigInteger.Log repeatedly in an application millions of times and it becomes painfully slow.
I am interested in alternatives that use binary manipulation to achieve the same. Please bear in mind that I can make do with Log approximations in case that helps speed up execution times.
Assuming you're only interested in the integral part of the logarithm, you can do something like this:
static int LogBase2(uint value)
{
    // Walk down from bit 31 until we find the highest set bit.
    int log = 31;
    while (log >= 0)
    {
        uint mask = 1u << log;
        if ((mask & value) != 0)
            return log;
        log--;
    }
    return -1;
}
(Note that the return value for 0 is wrong; it should be negative infinity, but there is no such value for integral data types, so I return -1 instead.)
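If you're on .NET Core 3.0 or later, System.Numerics.BitOperations.Log2 computes the same integral result, typically with a single hardware instruction:
using System.Numerics;

int log2 = BitOperations.Log2(value);   // integer log2 of a uint/ulong; defined to return 0 for input 0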
http://graphics.stanford.edu/~seander/bithacks.html
For a BigInteger you could use the ToByteArray() method and then manually find the most significant 1 bit and count the number of bits below it. This would give you the base-2 logarithm with integer precision.
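A minimal sketch of that idea (assuming a positive BigInteger and using System.Numerics; ToByteArray() returns little-endian two's-complement bytes, so the most significant byte is the last non-zero one):
static int IntegerLog2(BigInteger value)
{
    byte[] bytes = value.ToByteArray();          // little-endian, two's complement
    int msb = bytes.Length - 1;
    while (msb > 0 && bytes[msb] == 0)           // skip the 0x00 sign byte, if present
        msb--;
    int log = msb * 8;
    for (byte b = bytes[msb]; b > 1; b >>= 1)    // add the bit position within that byte
        log++;
    return log;
}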
The bit hacks page is useful for things like this.
Find the log base 2 of an integer with a lookup table
The code there is in C, but the basic idea will work in C# too.
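A rough C# port of that lookup-table approach, as a sketch (the table layout follows the bit hacks page; the names are mine):
static readonly int[] LogTable256 = BuildLogTable();

static int[] BuildLogTable()
{
    var table = new int[256];                // table[0] and table[1] stay 0
    for (int i = 2; i < 256; i++)
        table[i] = 1 + table[i / 2];
    return table;
}

static int Log2(uint v)
{
    uint t;
    if ((t = v >> 24) != 0) return 24 + LogTable256[t];
    if ((t = v >> 16) != 0) return 16 + LogTable256[t];
    if ((t = v >> 8) != 0)  return 8 + LogTable256[t];
    return LogTable256[v];                   // log2(0) is undefined; this returns 0
}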
If you can make do with approximations, then use a trick that Intel chips use: precalculate the values into an array of suitable size and then index into that array. You can make the array start and end with any min/max values, and you can create as many in-between values as you need to achieve the desired accuracy.
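A minimal sketch of that trick, assuming the input has already been reduced to a mantissa in [1, 2); the table size and the linear interpolation are arbitrary choices here, and the expensive Math.Log calls happen only once, at table-build time:
const int TableSize = 4096;
static readonly double[] Log2Table = BuildLog2Table();

static double[] BuildLog2Table()
{
    var table = new double[TableSize + 1];
    for (int i = 0; i <= TableSize; i++)
        table[i] = Math.Log(1.0 + (double)i / TableSize) / Math.Log(2.0);
    return table;
}

static double ApproxLog2(double mantissa)    // mantissa in [1, 2)
{
    double x = (mantissa - 1.0) * TableSize;
    int i = (int)x;
    return Log2Table[i] + (x - i) * (Log2Table[i + 1] - Log2Table[i]);   // linear interpolation
}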
You can try this C algorithm to get the binary logarithm (base 2) of a double N:
static double native_log_computation(const double n) {
// Basic logarithm computation.
static const double euler = 2.7182818284590452354 ;
unsigned a = 0, d;
double b, c, e, f;
if (n > 0) {
for (c = n < 1 ? 1 / n : n; (c /= euler) > 1; ++a);
c = 1 / (c * euler - 1), c = c + c + 1, f = c * c, b = 0;
for (d = 1, c /= 2; e = b, b += 1 / (d * c), b - e /* > 0.0000001 */ ;)
d += 2, c *= f;
} else b = (n == 0) / 0.;
return n < 1 ? -(a + b) : a + b;
}
static inline double native_ln(const double n) {
// Returns the natural logarithm (base e) of N.
return native_log_computation(n) ;
}
static inline double native_log_base(const double n, const double base) {
// Returns the logarithm (base b) of N.
// Right hand side can be precomputed to 2.
return native_log_computation(n) / native_log_computation(base) ;
}
Source