I am currently working on a fraction datatype written in C# 10 (.NET 6).
It is based on two BigIntegers for the numerator and denominator, which means the fraction effectively has arbitrary precision.
The only problem that now arises is memory consumption.
The Numerator and Denominator both get really big quickly, especially when roots are calculated using Newton's method.
An example of this, using Newton's method to calculate the square root of 1/16:
x_0 = 1/16
x_1 = 17/32
x_2 = 353/1088
x_3 = 198593/768128
x_4 = 76315468673/305089687808
x_5 = 11641533109203575871233/46566125024733548077568 (≈0.2500000398)
The results get accurate quickly but the "size" of the fraction almost doubles every iteration.
When more precise results are required, the memory usage is just way too high.
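For reference, here is a minimal sketch of the kind of iteration that produces the numbers above (not the question's actual code); it assumes the Fraction type overloads the + and / operators, which are not shown here:
public static Fraction SqrtNewton(Fraction s, int iterations)
{
    Fraction two = new Fraction() { numerator = 2, denominator = 1 };
    Fraction x = s;                    // x_0 = s, as in the example above
    for (int i = 0; i < iterations; i++)
        x = (x + s / x) / two;         // x_{n+1} = (x_n + s / x_n) / 2
    return x;
}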
A solution to this problem is rounding.
I want the user to be able to set a maximum error that the rounded fraction should stay within.
My approach was to use Euclid's GCD algorithm, modified to account for this maximum error:
public static Fraction Round(Fraction a, Fraction error)
{
BigInteger gcd = GCD(BigInteger.Max(a.numerator, a.denominator), BigInteger.Min(a.numerator, a.denominator), error);
return new Fraction(){ numerator = RoundedDivision(a.numerator, gcd), denominator = RoundedDivision(a.denominator, gcd) };
}
public static BigInteger GCD(BigInteger a, BigInteger b, Fraction maxError) =>
new Fraction(){ numerator = BigInteger.Abs(b), denominator = 1 } <= maxError ? a : GCD(b, a % b, maxError * a);
// Rounds an integer division: 0.5 -> 1; -0.5 -> -1
public static BigInteger RoundedDivision(BigInteger a, BigInteger b) => (a*2 + a.Sign * b.Sign * b) / (2*b);
The results looked promising at first, but I ran into cases where the rounded result was outside the specified error.
An example of this, in pseudo-code:
value = 848/399 (≈2.12531; value is represented as a fraction of two integers, not the division result)
error = 1/8 (=0.125)
Round(value, error) = Round(848/399, 1/8) = 2 (= 2/1)
2 is outside the specified max error by ~0.00031 (848/399 - 1/8 ≈ 2.00031, so 2 shouldn't be possible).
Correct results would be e.g. 2.125 or 2.25, because these values lie in the range 848/399 ± 1/8.
Does anyone know why this is happening and how I can fix it?
EDIT: The result of the rounding should still be a fraction.
Related
How come dividing two 32-bit int numbers as (int / int) returns 0, but if I use Decimal.Divide() I get the correct answer? I'm by no means a C# guy.
int is an integer type; dividing two ints performs an integer division, i.e. the fractional part is truncated since it can't be stored in the result type (also int!). Decimal, by contrast, has got a fractional part. By invoking Decimal.Divide, your int arguments get implicitly converted to Decimals.
You can enforce non-integer division on int arguments by explicitly casting at least one of the arguments to a floating-point type, e.g.:
int a = 42;
int b = 23;
double result = (double)a / b;
In the first case, you're doing integer division, so the result is truncated (the decimal part is chopped off) and an integer is returned.
In the second case, the ints are converted to decimals first, and the result is a decimal. Hence it is not truncated and you get the correct result.
The following code:
int a = 1, b = 2;
object result = a / b;
...will be performed using integer arithmetic. Decimal.Divide on the other hand takes two parameters of the type Decimal, so the division will be performed on decimal values rather than integer values. That is equivalent to this:
int a = 1, b = 2;
object result = (Decimal)a / (Decimal)b;
To examine this, you can add the following code lines after each of the above examples:
Console.WriteLine(result.ToString());
Console.WriteLine(result.GetType().ToString());
The output in the first case will be
0
System.Int32
...and in the second case:
0,5
System.Decimal
I reckon Decimal.Divide(decimal, decimal) implicitly converts its two int arguments to decimals before returning a decimal value (precise), whereas 4/5 is treated as integer division and returns 0.
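For illustration, a two-line example of the difference (plain Console output, nothing specific to the question's code):
Console.WriteLine(4 / 5);                 // 0   -- integer division, the fractional part is discarded
Console.WriteLine(Decimal.Divide(4, 5));  // 0.8 -- the int arguments are converted to decimal first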
You want to cast the numbers:
double c = (double)a/(double)b;
Note: if either of the arguments in C# is a double, a double division is used, which results in a double. So, the following would work too:
double c = (double)a/b;
Here is a small program:
static void Main(string[] args)
{
    // a, b, c count the positive, negative and zero entries respectively.
    int a = 0, b = 0, c = 0;
    int n = Convert.ToInt32(Console.ReadLine());
    string[] arr_temp = Console.ReadLine().Split(' ');
    int[] arr = Array.ConvertAll(arr_temp, Int32.Parse);
    foreach (int i in arr)
    {
        if (i > 0) a++;
        else if (i < 0) b++;
        else c++;
    }
    // Casting to double forces floating-point division instead of integer division.
    Console.WriteLine("{0}", (double)a / n);
    Console.WriteLine("{0}", (double)b / n);
    Console.WriteLine("{0}", (double)c / n);
    Console.ReadKey();
}
In my case nothing above worked.
What I want to do is divide 278 by 575 and multiply by 100 to find a percentage.
double p = (double)((PeopleCount * 1.0 / AllPeopleCount * 1.0) * 100.0);
%: 48,3478260869565 --> 278 / 575 ---> 0
%: 51,6521739130435 --> 297 / 575 ---> 0
If I multiply PeopleCount by 1.0, the operand becomes a floating-point value and the division yields 48.34...
Also multiply by 100.0, not 100.
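A slightly cleaner way to write the same computation (the variable names here are just placeholders):
int peopleCount = 278, allPeopleCount = 575;
double percent = (double)peopleCount / allPeopleCount * 100.0;   // 48.3478260869565...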
If you are looking for 0 < a < 1 answer, int / int will not suffice. int / int does integer division. Try casting one of the int's to a double inside the operation.
The accepted answer is very nearly there, but I think it is worth adding that there is a difference between using double and decimal.
I could not do a better job of explaining the concepts than Wikipedia, so I will just provide the pointers:
floating-point arithmetic
decimal data type
In financial systems, it is often a requirement that we can guarantee a certain number of (base-10) decimal places accuracy. This is generally impossible if the input/source data is in base-10 but we perform the arithmetic in base-2 (because the number of decimal places required for the decimal expansion of a number depends on the base; one third takes infinitely many decimal places to express in base-10 as 0.333333..., but it takes only one decimal in base-3: 0.1).
Floating-point numbers are faster to work with (in terms of CPU time; programming-wise they are equally simple) and are preferred when a small relative rounding error is acceptable, as in scientific applications.
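A small example of the base-2 versus base-10 issue described above, using nothing beyond standard double and decimal literals:
double d = 0.1 + 0.2;          // binary floating point: 0.30000000000000004
decimal m = 0.1m + 0.2m;       // decimal arithmetic: exactly 0.3

Console.WriteLine(d == 0.3);   // False -- 0.1 and 0.2 have no exact base-2 representation
Console.WriteLine(m == 0.3m);  // True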
I have to get amounts rounded as below:
1099948.275 = 1099948.27,
1099948.276 = 1099948.28,
1099948.274 = 1099948.27
When I use Math.Round(1099948.275, 2) I get 1099948.28, not 1099948.27. I even tried MidpointRounding.AwayFromZero and MidpointRounding.ToEven, and neither of them works. Any idea?
Is there a way to find the third digit after the decimal point and, if it is 5, just take the first two decimal digits, otherwise use Math.Round?
This is not how rounding works, in C# or in mathematics.
Under the familiar rule, a midpoint digit of 5 rounds up and anything less than 5 rounds down. This is called "away from zero" rounding.
Rounding away from zero
Midpoint values are rounded to the next number away from zero. For example, 3.75 rounds to 3.8, 3.85 rounds to 3.9, -3.75 rounds to -3.8, and -3.85 rounds to -3.9. This form of rounding is represented by the MidpointRounding.AwayFromZero enumeration member.
The other mode is rounding to the nearest even number. This gives only half of what you're looking for: when a value ends with a "5" (the midpoint), it is rounded to the nearest even digit, which could be up or down. This is done to distribute the effect of rounding evenly rather than always favouring one party in a monetary transaction.
Rounding to nearest even, or banker's rounding
Midpoint values are rounded to the nearest even number. For example, both 3.75 and 3.85 round to 3.8, and both -3.75 and -3.85 round to -3.8. This form of rounding is represented by the MidpointRounding.ToEven enumeration member.
The source for both of the above quotes is the Math.Round documentation page.
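To see the two modes side by side, here is a small example. It deliberately uses decimal literals so the midpoints are represented exactly; with double, a literal such as 3.85 may not be an exact midpoint in binary, which is related to the behaviour the question describes.
Console.WriteLine(Math.Round(3.75m, 1, MidpointRounding.AwayFromZero)); // 3.8
Console.WriteLine(Math.Round(3.85m, 1, MidpointRounding.AwayFromZero)); // 3.9
Console.WriteLine(Math.Round(3.75m, 1, MidpointRounding.ToEven));       // 3.8
Console.WriteLine(Math.Round(3.85m, 1, MidpointRounding.ToEven));       // 3.8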
There is no supported rounding system that always rounds a 5 down. You'd need to inspect the value first to see if the last digit is a 5, then truncate the digit rather than attempt rounding.
You could implement your own rounding strategy:
public static double RoundDown(double value, int precision)
{
    // Math.Ceiling(x - 0.5) rounds x to the nearest integer, with exact midpoints rounding down.
    var rounded = Math.Ceiling(value * Math.Pow(10, precision) - 0.5) / Math.Pow(10, precision);
    return rounded;
}
public static decimal RoundDown(decimal value, int precision)
{
    // Math.Pow only works on double, so build the scale factor (10^precision) with a loop.
    decimal scale = 1;
    for (int i = 0; i < precision; i++)
    {
        scale *= 10;
    }
    var rounded = Math.Ceiling(value * scale - 0.5M) / scale;
    return rounded;
}
Usage:
var values = new double[] {1099948.275, 1099948.276, 1099948.274 };
Console.WriteLine("double:");
foreach(var value in values)
{
var rounded = RoundDown(value, 2);
Console.WriteLine($"{value} = {rounded}");
}
Console.WriteLine("decimal:");
var values_m = new decimal[] {1099948.275M, 1099948.276M, 1099948.274M };
foreach(var value in values_m)
{
var rounded = RoundDown(value, 2);
Console.WriteLine($"{value} = {rounded}");
}
double:
1099948.275 = 1099948.27
1099948.276 = 1099948.28
1099948.274 = 1099948.27
decimal:
1099948.275 = 1099948.27
1099948.276 = 1099948.28
1099948.274 = 1099948.27
You can get what you need by subtracting 0.001 from the number before rounding it
Math.Round(1099948.275-0.001,2)
Edit:
To answer some of the comments: Math.Round(1099948.270 - 0.001, 2) will still give the correct answer, which is 1099948.27, and similarly for any other number.
Negative numbers may need to be handled differently depending on requirements.
So I have started a project where I make a quadratic equation solver, and I have managed to do so. My next step is to convert the values of X1 and X2, e.g. (X + X1)(X + X2), to an exact fraction if they come out as decimals.
So an example is:
12x² + 44x + 21
gives me,
X1 = -3.10262885097732
X2 = -0.564037815689349
But how would I be able to convert these to exact fractions?
Thanks for the help!
You can solve this problem using Continued Fractions.
As stated in comments, you can't obtain a fraction (a rational number) that exactly represents an irrational number, but you can get pretty close.
I implemented once, in a pet project, a rational number type. You can find it here. Look into TryFromDouble for an example of how to get the closest rational number (with specified precision) to any given number using Continued Fractions.
An extract of the relevant code follows (I will not post the whole type implementation because it is too long, but the code should still be pretty understandable):
public static bool TryFromDouble(double target, double precision, out Rational result)
{
//Continued fraction algorithm: http://en.wikipedia.org/wiki/Continued_fraction
//Implemented recursively. Problem is figuring out when precision is met without unwinding each solution. Haven't figured out how to do that.
//Current implementation computes rational number approximations for increasing algorithm depths until precision criteria is met, maximum depth is reached (fromDoubleMaxIterations)
//or an OverflowException is thrown. Efficiency is probably improvable but this method will not be used in any performance critical code. No use in optimizing it unless there is
//a good reason. Current implementation works reasonably well.
result = zero;
int steps = 0;
while (Math.Abs(target - Rational.ToDouble(result)) > precision)
{
if (steps > fromDoubleMaxIterations)
{
result = zero;
return false;
}
result = getNearestRationalNumber(target, 0, steps++);
}
return true;
}
private static Rational getNearestRationalNumber(double number, int currentStep, int maximumSteps)
{
var integerPart = (BigInteger)number;
double fractionalPart = number - Math.Truncate(number);
while (currentStep < maximumSteps && fractionalPart != 0)
{
return integerPart + new Rational(1, getNearestRationalNumber(1 / fractionalPart, ++currentStep, maximumSteps));
}
return new Rational(integerPart);
}
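A hypothetical usage example, assuming the Rational type from the linked project is referenced (the value is one of the roots from the question):
double x2 = -0.564037815689349;
if (Rational.TryFromDouble(x2, 1e-12, out Rational approx))
{
    Console.WriteLine(approx);   // a numerator/denominator pair within 1e-12 of x2
}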
When converting a "high"-precision double to a decimal, I lose precision with Convert.ToDecimal or a cast to (decimal), due to rounding.
Example :
double d = -0.99999999999999956d;
decimal result = Convert.ToDecimal(d); // Result = -1
decimal result = (Decimal)(d); // Result = -1
The Decimal value returned by Convert.ToDecimal(double) contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
So, in order to keep my precision, I have to convert my double to a string and then call Convert.ToDecimal(String):
decimal result = System.Convert.ToDecimal(d.ToString("G20")); // Result = -0.99999999999999956
This method works, but I would like to avoid going through a string. Is there a way to convert a double to a decimal without rounding after 15 digits?
One possible solution is to decompose d as the exact sum of n doubles, the last of which is small and contains all the trailing significant digits that you desire when converted to decimal, and the first (n-1) of which convert exactly to decimal.
For the source double d between -1.0 and 1.0:
decimal t = 0M;
bool b = d < 0;
if (b) d = -d;
if (d >= 0.5) { d -= 0.5; t = 0.5M; }
if (d >= 0.25) { d -= 0.25; t += 0.25M; }
if (d >= 0.125) { d -= 0.125; t += 0.125M; }
if (d >= 0.0625) { d -= 0.0625; t += 0.0625M; }
t += Convert.ToDecimal(d);
if (b) t = -t;
Test it on ideone.com.
Note that the d -= operations are exact, even if C# computes the binary floating-point operations at a higher precision than double (which it allows itself to do).
This is cheaper than a conversion from double to string, and provides a few additional digits of accuracy in the result (four bits of accuracy for the above four if-then-elses).
Remark: if C# did not allow itself to do floating-point computations at a higher precision, a good trick would have been to use Dekker splitting to split d into two values d1 and d2 that would convert each exactly to decimal. Alas, Dekker splitting only works with a strict interpretation of IEEE 754 multiplication and addition.
Another idea is to use C#'s version of frexp to obtain the significand s and exponent e of d, and to compute (Decimal)((long) (s * 4503599627370496.0d)) * <however one computes 2^e in Decimal>.
There are two approaches: one will work for values below 2^63, and the other will work for values larger than 2^53.
Split smaller values into whole-number and fractional parts. The whole-number part may be precisely cast to long and then Decimal [note that a direct cast to Decimal may not be precise!] The fractional part may be precisely multiplied by 9007199254740992.0 (2^53), converted to long and then Decimal, and then divided by 9007199254740992.0m. Adding the result of that division to the whole-number part should yield a Decimal value which is within one least-significant-digit of being correct [it may not be precisely rounded, but will still be far better than the built-in conversions!]
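A rough sketch of how this first approach could look (the method name is mine, and it assumes |d| < 2^63):
static decimal SplitConvert(double d)
{
    double whole = Math.Truncate(d);
    double frac = d - whole;                                 // exact subtraction

    decimal wholeDec = (decimal)(long)whole;                 // whole part goes through long, converts exactly
    long scaledFrac = (long)(frac * 9007199254740992.0);     // scale by 2^53 (exact) and truncate
    decimal fracDec = (decimal)scaledFrac / 9007199254740992.0m;

    return wholeDec + fracDec;
}
The final decimal division is still rounded to decimal's own precision (28-29 significant digits), but that is far better than the 15 significant digits kept by the built-in conversion.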
For larger values, multiply by (1.0/281474976710656.0) (2^-48), take the whole-number part of that result, multiply it back by 281474976710656.0, and subtract it from the original result. Convert the whole-number results from the division and the subtraction to Decimal (they should convert precisely), multiply the former by 281474976710656m, and add the latter.
This obviously doesn't work.
BigInteger Total = 1000000000000000000000000000000000000000000000000000022234235423534543;
BigInteger Actual = 83450348250384508349058934085;
string Percent = ((Decimal)100.0/Total*Actual).ToString()+"%";
The question is, how do I get my precise percentage?
Currently I use:
string sTotal = (task.End - task.Start).ToString();
BigInteger current = task.End;
string sCurrent = (task.End-current).ToString().PadLeft(sTotal.Length, '0');
Int32 maxLength = sCurrent.Length;
if (maxLength > Int64.MaxValue.ToString().Length - 1)
maxLength = Int64.MaxValue.ToString().Length - 1;
UInt64 currentI = Convert.ToUInt64(sCurrent.Substring(0, maxLength));
UInt64 totalI = Convert.ToUInt64(sTotal.Substring(0, maxLength));
Percent = (Decimal)100.0 / totalI
* currentI;
Can you suggest something better?
You're computing a rational, not an integer, so you should install the Solver Foundation:
http://msdn.microsoft.com/en-us/library/ff524509(v=VS.93).aspx
and use Rational rather than BigInteger:
http://msdn.microsoft.com/en-us/library/ff526610(v=vs.93).aspx
You can then call ToDouble if you want to get the rational as the nearest double.
I need it accurate to 56 decimal places
OK, that is a ridiculous amount of precision, but I'll take you at your word.
Since a double has only 15 decimal places of precision and a decimal only 29, you can't use double or decimal. You're going to have to write the code yourself to do the division.
Here are two ways to do it:
First, write an algorithm that emulates doing long division. You can do it by hand, so you can write a computer program to do it. Keep going until you generate the required number of bits of precision.
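As a sketch of the long-division idea (my code, not the answer's; it uses System.Numerics and System.Text, takes positive x and y, and truncates after p digits):
static string LongDivide(BigInteger x, BigInteger y, int p)
{
    var sb = new StringBuilder();
    sb.Append((x / y).ToString());      // integer part
    sb.Append('.');

    BigInteger remainder = x % y;
    for (int i = 0; i < p; i++)         // one decimal digit per step; truncates rather than rounds
    {
        remainder *= 10;
        sb.Append((int)(remainder / y));
        remainder %= y;
    }
    return sb.ToString();
}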
Second: WOLOG assume that the rational in question is positive and is of the form x / y where x and y are big integers. Let b be 10^p for a desired precision p. You wish to find the big integer a with the property that:
a * y < b * x
and
b * x < (a + 1) * y
Either a/b or (a+1)/b is the decimal fraction with p digits closest to x/y.
Make sense?
You can find the value of a by doing a binary search over the set of non-negative BigIntegers.
To do the binary search, first you have to find upper and lower bounds. Lower is easy enough; you know that 0 is a lower bound because by assumption the fraction x/y is positive. To find the upper bound, try 1/b, 10/b, 100/b ... and so on until you find a value that is larger than x/y. Now you have an upper and lower bound, and you can binary search the resulting space to find the exact value of a that makes the inequalities true.
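And a minimal sketch of the binary search described above (again my code, not the answer author's; it assumes x and y are positive):
static BigInteger FindA(BigInteger x, BigInteger y, int p)
{
    BigInteger b = BigInteger.Pow(10, p);
    BigInteger target = b * x;

    // Grow the upper bound by powers of ten until upper/b exceeds x/y.
    BigInteger upper = 1;
    while (upper * y <= target) upper *= 10;

    // Binary search for the largest a with a * y <= b * x.
    BigInteger lower = 0;
    while (lower < upper)
    {
        BigInteger mid = (lower + upper + 1) / 2;
        if (mid * y <= target) lower = mid;
        else upper = mid - 1;
    }
    return lower;    // lower / 10^p is then the truncated p-digit expansion of x/y
}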