Dividing BigIntegers to return a double - C#

I want to calculate the slope of a line.
public sealed class Point
{
    public System.Numerics.BigInteger x = 0;
    public System.Numerics.BigInteger y = 0;

    public double CalculateSlope (Point point)
    {
        return ((point.y - this.y) / (point.x - this.x));
    }
}
I know that BigInteger has a DivRem function that returns the division result plus the remainder, but I'm not sure how to apply it to get a double. The numbers I'm dealing with are far, far beyond the range of Int64.MaxValue, so the remainder itself could be out of range to calculate by conventional division.
EDIT:
Not sure if it helps but I'm dealing with only positive integers (>=1).
IMPORTANT: I only need a few decimal points of precision (5 should be good enough for my purpose).
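(For reference, the scaling trick the answers below build on can be sketched in a few lines. This is a minimal sketch, assuming positive inputs, that five decimal places suffice, and that the quotient itself fits in a double; DivideToDouble is a hypothetical helper name:)

using System.Numerics;

static double DivideToDouble(BigInteger numerator, BigInteger denominator)
{
    const long scale = 100000; // 10^5, so five decimal places survive the integer division
    BigInteger scaledQuotient = (numerator * scale) / denominator;
    return (double)scaledQuotient / scale; // overflows to infinity if the quotient exceeds double range
}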

Get BigRational from CodePlex. It's part of Microsoft's Base Class Library, so it's a work-in-progress for .NET. Once you have that, do something like:
System.Numerics.BigInteger x = GetDividend() ;
System.Numerics.BigInteger y = GetDivisor() ;
BigRational r = new BigRational( x , y ) ;
double value = (double) r ;
Dealing with the inevitable overflow/underflow/loss of precision is, of course, another problem.
Since you can't drop the BigRational library into your code, evidently, the other approach would be to get out the right algorithms book and roll your own...
The easy way, of course, of "rolling one's own" here, since a rational number is represented as the ratio (division) of two integers, is to grab the explicit conversion to double operator from the BigRational class and tweak it to suit. It took me about 15 minutes.
About the only significant modification I made is in how the sign of the result is set when the result is positive or negative zero/infinity. While I was at it, I converted it to a BigInteger extension method for you:
using System;
using System.Numerics;

public static class BigIntExtensions
{
    public static double DivideAndReturnDouble( this BigInteger x , BigInteger y )
    {
        // The Double value type represents a double-precision 64-bit number with
        // values ranging from -1.79769313486232e308 to +1.79769313486232e308;
        // values that do not fit into this range are returned as +/-Infinity.
        if ( SafeCastToDouble(x) && SafeCastToDouble(y) )
        {
            return (Double) x / (Double) y;
        }

        // kick it old-school and figure out the sign of the result
        bool isNegativeResult = ( ( x.Sign < 0 && y.Sign > 0 ) || ( x.Sign > 0 && y.Sign < 0 ) ) ;

        // scale the numerator to preserve the fraction part through the integer division
        BigInteger denormalized = ( x * s_bnDoublePrecision ) / y ;
        if ( denormalized.IsZero )
        {
            return isNegativeResult ? BitConverter.Int64BitsToDouble( unchecked( (long) 0x8000000000000000 ) ) : 0d ; // underflow to +/-0
        }

        Double result   = 0 ;
        bool   isDouble = false ;
        int    scale    = DoubleMaxScale ;

        while ( scale > 0 )
        {
            if ( !isDouble )
            {
                if ( SafeCastToDouble(denormalized) )
                {
                    result   = (Double) denormalized ;
                    isDouble = true ;
                }
                else
                {
                    denormalized = denormalized / 10 ;
                }
            }
            result = result / 10 ;
            scale-- ;
        }

        if ( !isDouble )
        {
            return isNegativeResult ? Double.NegativeInfinity : Double.PositiveInfinity ;
        }
        else
        {
            return result ;
        }
    }

    private const int DoubleMaxScale = 308 ;
    private static readonly BigInteger s_bnDoublePrecision = BigInteger.Pow( 10 , DoubleMaxScale ) ;
    private static readonly BigInteger s_bnDoubleMaxValue  = (BigInteger) Double.MaxValue ;
    private static readonly BigInteger s_bnDoubleMinValue  = (BigInteger) Double.MinValue ;

    private static bool SafeCastToDouble( BigInteger value )
    {
        return s_bnDoubleMinValue <= value && value <= s_bnDoubleMaxValue ;
    }
}

The BigRational library has a conversion operator to double.
Also, remember to return infinity as a special case for a vertical line; with your current code you'll get a divide-by-zero exception. It's probably best to calculate X1 - X2 first and return infinity if it's zero, then do the division, to avoid redundant operations.
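For instance, a guarded version of the original CalculateSlope might look like this (a sketch reusing the DivideAndReturnDouble extension from the answer above):

public double CalculateSlope (Point point)
{
    System.Numerics.BigInteger run = point.x - this.x;
    if (run.IsZero)
        return double.PositiveInfinity; // vertical line: avoid DivideByZeroException
    return (point.y - this.y).DivideAndReturnDouble(run);
}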

This does not deal with negative numbers, but hopefully it gives you a start.
double doubleMax = double.MaxValue;
BigInteger numerator = 120;
BigInteger denominator = 50;
if (denominator != 0)
{
    Debug.WriteLine(numerator / denominator);
    Debug.WriteLine(numerator % denominator);
    BigInteger ansI = numerator / denominator;
    if (ansI < (BigInteger)doubleMax) // compare as BigInteger; casting double.MaxValue to int would overflow
    {
        double slope = (double)ansI + ((double)(numerator % denominator) / (double)denominator);
        Debug.WriteLine(slope);
    }
}

Related

C# Decimal Precision Improvement in Powers and Fibonacci

I am trying to compute the Fibonacci sequence for both negative numbers and large numbers and came up with the following code and algorithm. I am certain the algorithm works, but the issue I am having is that for very large numbers the precision of the result is incorrect. Here is the code:
public class Fibonacci
{
    public static BigInteger fib(int n)
    {
        decimal p = (decimal) (1 + Math.Sqrt(5)) / 2;
        decimal q = (decimal) (1 - Math.Sqrt(5)) / 2;
        decimal r = (decimal) Math.Sqrt(5);
        Console.WriteLine("n: {0} p: {1}, q: {2}, t: {3}",
            n,
            p,
            q,
            (Pow(p, n) - Pow(q, n)) / r);
        return (BigInteger) (Decimal.Round((Pow(p, n) - Pow(q, n)) / r));
    }

    public static decimal Pow(decimal x, int y)
    {
        if (y < 0)
            return 1 / Pow(x, -1 * y);
        else if (y == 0)
            return 1;
        else if (y % 2 == 0)
        {
            decimal z = Pow(x, y / 2);
            return z * z;
        }
        else if (y % 2 == 1)
            return Pow(x, y - 1) * x;
        else
            return 1;
    }
}
Small values work fine, but if we take a large number like -96 to get the Fibonacci number for, I get a result of -51680708573203484173, but the real number is -51680708854858323072. I checked that the rounding was OK, but it appears somewhere along the way my result is losing precision and not saving its values correctly. I thought using decimals would solve this precision issue (I previously used doubles), but that did not work.
Where in my code am I incorrectly missing precision or is there another issue with my code I am misdiagnosing?
Try this.
public static BigInteger Fibonacci(int n)
{
    // Fast doubling: with a = F(k) and b = F(k+1),
    //   F(2k)   = a * (2b - a)
    //   F(2k+1) = a^2 + b^2
    // Process the bits of n from most to least significant.
    BigInteger a = 0;
    BigInteger b = 1;
    for (int i = 31; i >= 0; i--)
    {
        BigInteger d = a * (b * 2 - a);
        BigInteger e = a * a + b * b;
        a = d;
        b = e;
        if ((((uint)n >> i) & 1) != 0)
        {
            BigInteger c = a + b;
            a = b;
            b = c;
        }
    }
    return a;
}
Good Luck!
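The question also mentions negative numbers; those can be layered on top with the identity F(-n) = (-1)^(n+1) * F(n). A sketch (FibonacciSigned is a hypothetical name):

public static BigInteger FibonacciSigned(int n)
{
    // F(-n) = (-1)^(n+1) * F(n): negate the result for negative even indices
    BigInteger f = Fibonacci(Math.Abs(n));
    return (n < 0 && n % 2 == 0) ? -f : f;
}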
As you wrote, decimal has approximately 28 decimal digits of precision. However, Math.Sqrt(5), being a double, does not.
Using a more accurate square root of 5 enables this algorithm to stay exact for longer, though of course it is still limited by precision eventually, just later.
public static BigInteger fib(int n)
{
    decimal sqrt5 = 2.236067977499789696409173668731276235440618359611525724270m;
    decimal p = (1 + sqrt5) / 2;
    decimal q = (1 - sqrt5) / 2;
    decimal r = sqrt5;
    return (BigInteger) (Decimal.Round((Pow(p, n) - Pow(q, n)) / r));
}
This way fib(96) = 51680708854858323072, which is correct. However, it becomes wrong again at n = 128.

Get Smallest and Nearest Number to Zero - C#

How do I get the smallest number nearest to zero in C#?
For example, the smallest decimal (double) number nearest to zero might be 0.000009 on one PC and 0.0000000000000000001 on another.
I mean the largest possible result of 1/THE_MOST_LONG_INTEGER.
How do I get it?
I guess you mean to calculate your machine epsilon (http://en.wikipedia.org/wiki/Machine_epsilon), which can be calculated in several ways, for example:

double machineEpsilon = 1.0d;
do
{
    machineEpsilon = machineEpsilon / 2.0d;
}
while ((double)(1.0 + machineEpsilon) != 1.0);
// Note: the loop exits one halving too late, so the classical machine epsilon
// (the smallest eps with 1.0 + eps != 1.0) is machineEpsilon * 2.
I know it's an old question, but here's another possible method that tries to avoid iteration.
Based on the floating-point format, you can just generate the smallest fraction artificially.
In C#, where a double is 64 bits:
FOR NORMAL NUMBERS
public double GetMinNormalFraction()
{
    double inf = double.PositiveInfinity;
    unsafe
    {
        ulong Inf_UL = *((ulong*)&inf);
        ulong minFraction_UL = (Inf_UL ^ (Inf_UL << 1)) ^ ((1ul << 63) + 1ul);
        return *((double*)&minFraction_UL); // 2.2250738585072019E-308 on my computer
    }
}
FOR SUBNORMAL/DENORMAL NUMBERS
public double GetMinDenormalFraction()
{
    unsafe
    {
        ulong unit = 1ul;
        return *((double*)&unit); // 4.94065645841247E-324 on my computer
    }
}
You can check that these are the smallest values by using them as the starting point in the machine-epsilon algorithm previously proposed by Saverio Terracciano (remembering to change the condition to while (machineEpsilon > 0.0); for denormal fractions).
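If unsafe code is not an option, BitConverter can build the same bit patterns directly (a sketch producing the values quoted above):

// smallest positive subnormal double (bit pattern 0x0000000000000001)
double minDenormal = BitConverter.Int64BitsToDouble(1L);                  // 4.94065645841247E-324
// smallest positive normal double (bit pattern 0x0010000000000000)
double minNormal   = BitConverter.Int64BitsToDouble(0x0010000000000000L); // 2.2250738585072014E-308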
EDIT:
For the sake of completeness, and already beyond the question: in case you need the smallest fraction that can be added to any number, not just zero, the general method would be something like:
public double GetMinFractionCloseTo(double number)
{
    // reject values that have no adjacent representable neighbour
    if (double.IsNaN(number) || double.IsInfinity(number))
        return 0.0; // or throw an exception, etc.

    unsafe
    {
        double inf = double.PositiveInfinity;
        ulong Inf_UL = *((ulong*)&inf);
        ulong Number_UL = *((ulong*)&number);

        // check if number is a denormal/subnormal number (exponent bits all zero)
        bool isDenormal = (Inf_UL & Number_UL) == 0;
        if (isDenormal)
        {
            // min fraction is always the same with denormals
            ulong unit = 1ul;
            return *((double*)&unit);
        }
        else // normal number
        {
            // Detect if it's the last normal number close to zero.
            // (This can be skipped most of the time, as it is very unlikely.)
            long maxLongValue = long.MaxValue;
            ulong ExcludeSign = *((ulong*)&maxLongValue);
            ulong minFract_UL = (Inf_UL ^ (Inf_UL << 1)) | 1ul;
            bool isLimitToDenormal = ((minFract_UL ^ Number_UL) & ExcludeSign) == 0;
            if (isLimitToDenormal)
            {
                // min fraction is always the same with denormals
                ulong unit = 1ul;
                return *((double*)&unit);
            }
            else
            {
                ulong ClosestValue_UL = Number_UL ^ 1ul;
                double ClosestValue = *((double*)&ClosestValue_UL);
                return Math.Abs(number - ClosestValue);
            }
        }
    }
}

Truncating a number to specified decimal places

I need to truncate a number to 2 decimal places, which basically means
chopping off the extra digits.
Eg:
2.919 -> 2.91
2.91111 -> 2.91
Why? This is what SQL Server does when storing a number of a particular precision. E.g., if a column is Decimal(8,2) and you try to insert/update a number of 9.1234, the 3 and 4 will be chopped off.
I need to do exactly the same thing in c# code.
The only possible ways that I can think of doing it are either:
Using the string formatter to "print" it out with only two decimal places, and then converting it back to a decimal, e.g.:
decimal tooManyDigits = 2.1345m;
decimal ShorterDigits = Convert.ToDecimal(tooManyDigits.ToString("0.##"));
// ShorterDigits is now 2.13
I'm not happy with this because it involves a to-string and then another string-to-decimal conversion, which seems a bit mad.
Using Math.Truncate (which only truncates to whole numbers), so I can multiply by 100, truncate it, then divide by 100, e.g.:
decimal tooLongDecimal = 2.1235m;
tooLongDecimal = Math.Truncate(tooLongDecimal * 100) / 100;
I'm also not happy with this because if tooLongDecimal is 0, I'll get a divide-by-0 error.
Surely there's a better + easier way! Any suggestions?
You've answered the question yourself; it seems you just misunderstood what division by zero means. The correct way to do this is to multiply, truncate, then divide, like this:
decimal TruncateTo100ths(decimal d)
{
    return Math.Truncate(d * 100) / 100;
}

TruncateTo100ths(0m);       // 0
TruncateTo100ths(2.919m);   // 2.91
TruncateTo100ths(2.91111m); // 2.91
TruncateTo100ths(2.1345m);  // 2.13
There is no division by zero here, there is only division by 100, which is perfectly safe.
The previously offered mathematical solutions are vulnerable to overflow with large numbers and/or a large number of decimal places. Consider instead the following extension method:
public static decimal TruncateDecimal(this decimal d, int decimals)
{
    if (decimals < 0 || decimals > 28)
        throw new ArgumentOutOfRangeException("decimals", "Value must be in range 0-28.");

    if (decimals == 0)
        return Math.Truncate(d);

    decimal integerPart = Math.Truncate(d);
    decimal fractionalPart = d - integerPart;
    decimal multiplier = (decimal) Math.Pow(10, decimals);
    fractionalPart = Math.Truncate(fractionalPart * multiplier) / multiplier;
    return integerPart + fractionalPart;
}
Usage:
decimal value = 18446744073709551615.262626263m;
value = value.TruncateDecimal(6); // Result: 18446744073709551615.262626
I agree with p.s.w.g. I had a similar requirement, and here is my experience and a more generalized function for truncating.
http://snathani.blogspot.com/2014/05/truncating-number-to-specificnumber-of.html
public static decimal Truncate(decimal value, int decimals)
{
    decimal factor = (decimal)Math.Pow(10, decimals);
    // note: factor * value can overflow for very large values
    // (see the overflow-safe extension method above)
    decimal result = Math.Truncate(factor * value) / factor;
    return result;
}
Using decimal.ToString("0.##") also imposes rounding:
1.119M.ToString("0.##") // -> 1.12
(Yeah, likely should be a comment, but it's hard to format well as such.)
public static decimal Rounding(decimal val, int precision)
{
    decimal res = Truncating(val, precision + 1);
    return Math.Round(res, precision, MidpointRounding.AwayFromZero);
}

public static decimal Truncating(decimal val, int precision)
{
    if (val.ToString().Contains("."))
    {
        string valstr = val.ToString();
        string[] valArr = valstr.Split('.');
        if (valArr[1].Length < precision)
        {
            int noOfZerosNeeded = precision - valArr[1].Length;
            for (int i = 1; i <= noOfZerosNeeded; i++)
            {
                valstr = string.Concat(valstr, "0");
            }
        }
        if (valArr[1].Length > precision)
        {
            valstr = valArr[0] + "." + valArr[1].Substring(0, precision);
        }
        return Convert.ToDecimal(valstr);
    }
    else
    {
        string valstr = val.ToString();
        for (int i = 0; i <= precision; i++)
        {
            if (i == 1)
                valstr = string.Concat(valstr, ".0");
            if (i > 1)
                valstr = string.Concat(valstr, "0");
        }
        return Convert.ToDecimal(valstr);
    }
}

How can I compute a base 2 logarithm without using the built-in math functions in C#?

How can I compute a base 2 logarithm without using the built-in math functions in C#?
I use Math.Log and BigInteger.Log repeatedly in an application millions of times and it becomes painfully slow.
I am interested in alternatives that use binary manipulation to achieve the same. Please bear in mind that I can make do with Log approximations in case that helps speed up execution times.
Assuming you're only interested in the integral part of the logarithm, you can do something like this:
static int LogBase2(uint value)
{
    int log = 31;
    while (log >= 0)
    {
        uint mask = 1u << log;
        if ((mask & value) != 0)
            return log;
        log--;
    }
    return -1;
}
(Note that the return value for 0 is wrong; it should be negative infinity, but there is no such value for integral data types, so I return -1 instead.)
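On newer .NET versions, System.Numerics.BitOperations.Log2 computes the same integral result, typically via a single hardware instruction; note it returns 0 for an input of 0 rather than -1. For example:

uint value = 1u << 20;
int log = System.Numerics.BitOperations.Log2(value); // 20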
http://graphics.stanford.edu/~seander/bithacks.html
For the BigInteger you could use the ToByteArray() method and then manually find the most significant 1 bit and count the bits after it. This would give you the base-2 logarithm with integer precision.
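A sketch of that idea (IntegerLog2 is a name I made up; assumes value > 0):

static int IntegerLog2(System.Numerics.BigInteger value)
{
    byte[] bytes = value.ToByteArray(); // little-endian, two's complement
    int msb = bytes.Length - 1;
    while (msb > 0 && bytes[msb] == 0)  // skip the zero byte appended for the sign
        msb--;
    int log = msb * 8;
    for (byte b = bytes[msb]; b > 1; b >>= 1) // locate the highest set bit in the top byte
        log++;
    return log;
}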
The bit hacks page is useful for things like this.
Find the log base 2 of an integer with a lookup table
The code there is in C, but the basic idea will work in C# too.
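Translated to C#, the lookup-table version from that page might look like this (a sketch, not the exact code from the link):

static readonly int[] LogTable256 = BuildLogTable();

static int[] BuildLogTable()
{
    int[] table = new int[256];
    table[0] = -1; // log2(0) is undefined; flag it with -1 (table[1] stays 0)
    for (int i = 2; i < 256; i++)
        table[i] = 1 + table[i / 2];
    return table;
}

static int Log2(uint v)
{
    uint t, tt;
    if ((tt = v >> 16) != 0)
        return (t = tt >> 8) != 0 ? 24 + LogTable256[t] : 16 + LogTable256[tt];
    return (t = v >> 8) != 0 ? 8 + LogTable256[t] : LogTable256[v];
}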
If you can make do with approximations, then use a trick that Intel chips use: precalculate the values into an array of suitable size and then reference that array, as sketched below. You can make the array start and end with any min/max values, and you can create as many in-between values as you need to achieve the desired accuracy.
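A sketch of that precalculated-table trick for approximate logs (the table is built once with Math.Log, so only the hot path avoids the math functions; the range and resolution here are arbitrary choices):

const int Resolution = 4096;
static readonly double[] Log2Table = BuildLog2Table();

static double[] BuildLog2Table()
{
    double[] t = new double[Resolution];
    for (int i = 0; i < Resolution; i++)
        t[i] = Math.Log(1.0 + (double)i / Resolution, 2.0); // log2 over [1, 2)
    return t;
}

// assumes x has been normalized into [1, 2); add the binary exponent separately
static double ApproxLog2Mantissa(double x)
{
    return Log2Table[(int)((x - 1.0) * Resolution)];
}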
You can try this C algorithm to get the binary logarithm (base 2) of a double N:
static double native_log_computation(const double n) {
    // Basic logarithm computation.
    static const double euler = 2.7182818284590452354 ;
    unsigned a = 0, d;
    double b, c, e, f;
    if (n > 0) {
        for (c = n < 1 ? 1 / n : n; (c /= euler) > 1; ++a);
        c = 1 / (c * euler - 1), c = c + c + 1, f = c * c, b = 0;
        for (d = 1, c /= 2; e = b, b += 1 / (d * c), b - e /* > 0.0000001 */ ;)
            d += 2, c *= f;
    } else b = (n == 0) / 0.;
    return n < 1 ? -(a + b) : a + b;
}

static inline double native_ln(const double n) {
    // Returns the natural logarithm (base e) of N.
    return native_log_computation(n) ;
}

static inline double native_log_base(const double n, const double base) {
    // Returns the logarithm (base b) of N.
    // The denominator can be precomputed when the base is fixed (e.g. 2).
    return native_log_computation(n) / native_log_computation(base) ;
}
Source

Doubles, Ints, Math.Round in C#

I have to convert a double value x into two integers as specified by the following...
"x field consists of two signed 32 bit integers: x_i which represents the integral part and x_f which represents the fractional part multiplied by 10^8. e.g.: x of 80.99 will have x_i as 80 and x_f as 99,000,000"
First I tried the following, but it seems to fail sometimes, giving an xF value of 1999999 when it ought to be 2000000
// Doesn't work; sometimes we get 1999999 in xF
int xI = (int)x;
int xF = (int)((x - (double)xI) * 100000000);
The following seems to work in all the cases that I've tested. But I was wondering if there's a better way to do it without the round call. And also, could there be cases where this could still fail?
// Works; we get 2000000, but there's the round call
int xI = (int)x;
double temp = Math.Round(x - (double)xI, 6);
int xF = (int)(temp * 100000000);
The problem is (1) that binary floating point trades precision for range, and (2) that certain values, such as 3.1, cannot be represented exactly in standard binary floating-point formats, such as IEEE 754-2008.
First read David Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic", published in ACM Computing Surveys, Vol 23, No 1, March 1991.
Then see these pages for more on the dangers, pitfalls and traps of using floats to store exact values:
http://steve.hollasch.net/cgindex/coding/ieeefloat.html
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
Why roll your own when System.Decimal gives you precise decimal floating point?
But, if you're going to do it, something like this should do you just fine:
struct WonkyNumber
{
    private const double SCALE_FACTOR = 1.0E+8 ;

    private int    _intValue        ;
    private int    _fractionalValue ;
    private double _doubleValue     ;

    public int IntegralValue
    {
        get
        {
            return _intValue ;
        }
        set
        {
            _intValue    = value ;
            _doubleValue = ComputeDouble() ;
        }
    }

    public int FractionalValue
    {
        get
        {
            return _fractionalValue ;
        }
        set
        {
            _fractionalValue = value ;
            _doubleValue     = ComputeDouble() ;
        }
    }

    public double DoubleValue
    {
        get
        {
            return _doubleValue ;
        }
        set
        {
            _doubleValue = value ; // assign the field, not the property, to avoid infinite recursion
            ParseDouble( out _intValue , out _fractionalValue ) ;
        }
    }

    public WonkyNumber( double value ) : this()
    {
        _doubleValue = value ;
        ParseDouble( out _intValue , out _fractionalValue ) ;
    }

    public WonkyNumber( int x , int y ) : this()
    {
        _intValue        = x ;
        _fractionalValue = y ;
        _doubleValue     = ComputeDouble() ;
        return ;
    }

    private void ParseDouble( out int x , out int y )
    {
        double remainder = _doubleValue % 1.0 ;
        double quotient  = _doubleValue - remainder ;

        x = (int) quotient ;
        y = (int) Math.Round( remainder * SCALE_FACTOR ) ;

        return ;
    }

    private double ComputeDouble()
    {
        double value = (double) this.IntegralValue
                     + ( ( (double) this.FractionalValue ) / SCALE_FACTOR )
                     ;
        return value ;
    }

    public static implicit operator WonkyNumber( double value )
    {
        WonkyNumber instance = new WonkyNumber( value ) ;
        return instance ;
    }

    public static implicit operator double( WonkyNumber value )
    {
        double instance = value.DoubleValue ;
        return instance ;
    }
}
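Hypothetical usage of the struct above:

WonkyNumber w = 80.99;                // implicit conversion from double
Console.WriteLine(w.IntegralValue);   // 80
Console.WriteLine(w.FractionalValue); // 99000000
Console.WriteLine((double)w);         // 80.99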
I think using decimal solves the problem, because internally decimals really do use a decimal representation of the number. With double you get rounding errors when converting the binary representation of a number to decimal. Try this:
double x = 1234567.2;
decimal d = (decimal)x;
int xI = (int)d;
int xF = (int)((d - xI) * 100000000);
EDIT: The endless discussion with RuneFS shows that the matter is not that easy. Therefore I made a very simple test with one million iterations:
public static void TestDecimals()
{
    int doubleFailures = 0;
    int decimalFailures = 0;
    for (int i = 0; i < 1000000; i++)
    {
        double x = 1234567.7 + (13 * i);
        int frac = FracUsingDouble(x);
        if (frac != 70000000)
        {
            doubleFailures++;
        }
        frac = FracUsingDecimal(x);
        if (frac != 70000000)
        {
            decimalFailures++;
        }
    }
    Console.WriteLine("Failures with double: {0}", doubleFailures);   // => 516042
    Console.WriteLine("Failures with decimal: {0}", decimalFailures); // => 0
    Console.ReadKey();
}

private static int FracUsingDouble(double x)
{
    int xI = (int)x;
    int xF = (int)((x - xI) * 100000000);
    return xF;
}

private static int FracUsingDecimal(double x)
{
    decimal d = (decimal)x;
    int xI = (int)d;
    int xF = (int)((d - xI) * 100000000);
    return xF;
}
In this test, 51.6% of the double-only conversions fail, whereas no conversion fails when the number is converted to decimal first.
There are two issues:
Your input value will rarely be equal to its decimal representation with 8 digits after the decimal point. So some kind of rounding is inevitable. In other words: your number i.20000000 will actually be slightly less or slightly more than i.2.
Casting to int always rounds towards zero. This is why, if i.20000000 is less than i.2, you will get 19999999 for the fractional part. Using Convert.ToInt32 rounds to nearest, which is what you'll want here. It will give you 20000000 in all cases.
So, provided all your numbers are in the range 0-99999999.99999999, the following will always get you the nearest solution:
int xI = (int)x;
int xF = Convert.ToInt32((x - (double)xI) * 100000000);
Of course, as others have suggested, converting to decimal and using that for your calculations is an excellent option.
