So I have started a project where I make a quadratic equation solver, and I have managed to do so. My next step is to convert the values of X1 and X2, e.g. (X+X1)(X+X2), to an exact fraction if they come out as decimals.
So an example is:
12x² + 44x + 21
gives me,
X1 = -3.10262885097732
X2 = -0.564037815689349
But how would I be able to convert this to an exact fraction?
Thanks for the help!
You can solve this problem using Continued Fractions.
As stated in comments, you can't obtain a fraction (a rational number) that exactly represents an irrational number, but you can get pretty close.
I implemented once, in a pet project, a rational number type. You can find it here. Look into TryFromDouble for an example of how to get the closest rational number (with specified precision) to any given number using Continued Fractions.
An extract of the relevant code is the following (I will not post the whole type implementation because it is too long, but the code should still be pretty understandable):
public static bool TryFromDouble(double target, double precision, out Rational result)
{
    //Continued fraction algorithm: http://en.wikipedia.org/wiki/Continued_fraction
    //Implemented recursively. Problem is figuring out when precision is met without unwinding each solution. Haven't figured out how to do that.
    //Current implementation computes rational number approximations for increasing algorithm depths until precision criteria is met, maximum depth is reached (fromDoubleMaxIterations)
    //or an OverflowException is thrown. Efficiency is probably improvable but this method will not be used in any performance critical code. No use in optimizing it unless there is
    //a good reason. Current implementation works reasonably well.
    result = zero;
    int steps = 0;
    while (Math.Abs(target - Rational.ToDouble(result)) > precision)
    {
        if (steps > fromDoubleMaxIterations)
        {
            result = zero;
            return false;
        }
        result = getNearestRationalNumber(target, 0, steps++);
    }
    return true;
}
private static Rational getNearestRationalNumber(double number, int currentStep, int maximumSteps)
{
    var integerPart = (BigInteger)number;
    double fractionalPart = number - Math.Truncate(number);
    if (currentStep < maximumSteps && fractionalPart != 0)
    {
        return integerPart + new Rational(1, getNearestRationalNumber(1 / fractionalPart, ++currentStep, maximumSteps));
    }
    return new Rational(integerPart);
}
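For example, a call for one of the roots in the question might look like this (just a sketch; it assumes my Rational type from the linked project):

// Approximate the second root from the question with the Rational type above.
double x2 = -0.564037815689349;
if (Rational.TryFromDouble(x2, 1e-10, out Rational approx))
{
    Console.WriteLine(approx); // a fraction within 1e-10 of x2
}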
I am currently working on a fraction datatype that is written in C#10 (.NET 6).
It is based on two BigIntegers in the Numerator and Denominator, which means that this fraction basically has infinite precision.
The only problem that now arises is memory consumption.
The Numerator and Denominator both get really big quickly, especially when roots are calculated using Newton's method.
An example of this, using Newton's method to calculate the square root of 1/16:
x_0 = 1/16
x_1 = 17/32
x_2 = 353/1088
x_3 = 198593/768128
x_4 = 76315468673/305089687808
x_5 = 11641533109203575871233/46566125024733548077568 (≈0.2500000398)
The results get accurate quickly but the "size" of the fraction almost doubles every iteration.
When more precise results are required, the memory usage is just way too high.
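Here is a minimal standalone sketch that reproduces the iteration above with bare BigIntegers (not my actual fraction type), in case anyone wants to experiment:

using System;
using System.Numerics;

class NewtonSqrtDemo
{
    static void Main()
    {
        // sqrt(1/16) via Newton's method: x' = (x + a/x) / 2, in exact fractions.
        BigInteger an = 1, ad = 16; // a  = 1/16
        BigInteger xn = 1, xd = 16; // x0 = 1/16
        for (int i = 1; i <= 5; i++)
        {
            // x' = (xn^2 * ad + an * xd^2) / (2 * ad * xn * xd), then reduce.
            BigInteger num = xn * xn * ad + an * xd * xd;
            BigInteger den = 2 * ad * xn * xd;
            BigInteger g = BigInteger.GreatestCommonDivisor(num, den);
            xn = num / g;
            xd = den / g;
            Console.WriteLine($"x_{i} = {xn}/{xd} ({xn.ToString().Length}+{xd.ToString().Length} digits)");
        }
    }
}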
A solution to this problem is rounding.
I want the user to be able to set a maximum error within which the rounded fraction should lie.
My approach was to use Euclid's GCD algorithm, modified for this max-error clause:
public static Fraction Round(Fraction a, Fraction error)
{
    BigInteger gcd = GCD(BigInteger.Max(a.numerator, a.denominator), BigInteger.Min(a.numerator, a.denominator), error);
    return new Fraction() { numerator = RoundedDivision(a.numerator, gcd), denominator = RoundedDivision(a.denominator, gcd) };
}

public static BigInteger GCD(BigInteger a, BigInteger b, Fraction maxError) =>
    new Fraction() { numerator = BigInteger.Abs(b), denominator = 1 } <= maxError ? a : GCD(b, a % b, maxError * a);

// Rounds an integer division: 0.5 -> 1; -0.5 -> -1
public static BigInteger RoundedDivision(BigInteger a, BigInteger b) => (a * 2 + a.Sign * b.Sign * b) / (2 * b);
The results looked promising at first, but I ran into cases where the rounded result was outside the specified error.
An example of this in pseudo-code:
value = 848/399 (≈2.12531; value is represented as a fraction of two integers, not the division result)
error = 1/8 (=0.125)
Round(value, error) = Round(848/399, 1/8) = 2 (=2/1)
2 is outside of the specified max error by ~0.00031 (848/399 - 1/8 ≈ 2.00031 => 2 shouldn't be possible).
Correct results would be e.g. 2.125 or 2.25, because these values lie in the range of 848/399 ± 1/8.
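Tracing my code by hand on this input (please double-check me), I get:

Round(848/399, 1/8)
GCD(848, 399, 1/8): 399 > 1/8, so recurse with GCD(399, 848 % 399 = 50, (1/8) * 848 = 106)
GCD(399, 50, 106): 50 <= 106, so return 399
numerator   = RoundedDivision(848, 399) = (1696 + 399) / 798 = 2
denominator = RoundedDivision(399, 399) = (798 + 399) / 798 = 1
=> 2/1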
Does someone know why this is happening and how I can fix it?
EDIT: The result of the rounding should still be a fraction.
I have been looking at some way to determine the scale and precision of a decimal in C#, which led me to several SO questions, yet none of them seem to have correct answers, or have misleading titles (they really are about SQL server or some other databases, not C#), or any answers at all. The following post, I think, is the closest to what I'm after, but even this seems wrong:
Determine the decimal precision of an input number
First, there seems to be some confusion about the difference between scale and precision. Per Google (per MSDN):
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number.
With that being said, the number 12345.67890M would have a scale of 5 and a precision of 10. I have not discovered a single code example that would accurately calculate this in C#.
I want to make two helper methods, decimal.Scale(), and decimal.Precision(), such that the following unit test passes:
[TestMethod]
public void ScaleAndPrecisionTest()
{
    //arrange
    var number = 12345.67890M;

    //act
    var scale = number.Scale();
    var precision = number.Precision();

    //assert
    Assert.IsTrue(precision == 10);
    Assert.IsTrue(scale == 5);
}
but I have yet to find a snippet that will do this, though several people have suggested using decimal.GetBits(), and others have said to convert it to a string and parse it.
Converting it to a string and parsing it is, in my mind, an awful idea, even disregarding the localization issue with the decimal point. The math behind the GetBits() method, however, is like Greek to me.
Can anyone describe what the calculations would look like for determining scale and precision in a decimal value for C#?
This is how you get the scale using the GetBits() function:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
byte scale = (byte) ((bits[3] >> 16) & 0x7F);
And the best way I can think of to get the precision is by removing the decimal point (i.e. use the Decimal constructor to reconstruct the decimal number without the scale mentioned above) and then use the logarithm:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
//We will use false for the sign (false = positive), because we don't care about it.
//We will use 0 for the last argument instead of bits[3] to eliminate the decimal point.
decimal xx = new Decimal(bits[0], bits[1], bits[2], false, 0);
int precision = (int)Math.Floor(Math.Log10((double)xx)) + 1;
Now we can put them into extensions:
public static class Extensions
{
    public static int GetScale(this decimal value)
    {
        if (value == 0)
            return 0;

        int[] bits = decimal.GetBits(value);
        return (int)((bits[3] >> 16) & 0x7F);
    }

    public static int GetPrecision(this decimal value)
    {
        if (value == 0)
            return 0;

        int[] bits = decimal.GetBits(value);
        //We will use false for the sign (false = positive), because we don't care about it.
        //We will use 0 for the last argument instead of bits[3] to eliminate the decimal point.
        decimal d = new Decimal(bits[0], bits[1], bits[2], false, 0);
        return (int)Math.Floor(Math.Log10((double)d)) + 1;
    }
}
And here is a fiddle.
First of all, solve the "physical" problem: how you're going to decide which digits are significant. The fact is, "precision" has no physical meaning unless you know or guess the absolute error.
Now, there are 2 fundamental ways to determine each digit (and thus, their number):
get+interpret the meaningful parts
calculate mathematically
The 2nd way can't detect trailing zeros in the fractional part (which may or may not be significant depending on your answer to the "physical" problem), so I won't cover it unless requested.
For the first one, in Decimal's interface, I see 2 basic methods to get the parts: ToString() (a few overloads) and GetBits().
ToString(String, IFormatProvider) is actually a reliable way since you can define the format exactly.
E.g. use the F specifier and pass a culture-neutral NumberFormatInfo in which you have manually set all the fields that affect this particular format.
Regarding the NumberDecimalDigits field: a test shows that it is the minimum number of digits, so set it to 0 (the docs are unclear on this); trailing zeros are printed all right if there are any.
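As a side note, decimal's default "G" formatting also preserves the stored trailing zeros, so a bare-bones digit count can look like this (a sketch; invariant culture assumed):

decimal x = 12345.67890M;
string s = x.ToString(System.Globalization.CultureInfo.InvariantCulture).TrimStart('-');
int dot = s.IndexOf('.');
int scale = dot < 0 ? 0 : s.Length - dot - 1; // 5
int precision = s.Length - (dot < 0 ? 0 : 1); // 10 (values < 1 count the leading zero; handle that separately if it matters)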
The semantics of the GetBits() result are documented clearly in its MSDN article (so laments like "it's Greek to me" won't do ;) ). Decompiling with ILSpy shows that it's actually a tuple of the object's raw data fields:
public static int[] GetBits(decimal d)
{
    return new int[]
    {
        d.lo,
        d.mid,
        d.hi,
        d.flags
    };
}
And their semantics are:
|high|mid|low| - binary digits (96 bits), interpreted as an integer (= aligned to the right)
flags:
    bits 16 to 23 - "the power of 10 to divide the integer number" (= number of fractional decimal digits)
    (thus (flags>>16)&0xFF is the raw value of this field)
    bit 31 - sign (doesn't concern us)
As you can see, this is very similar to IEEE 754 floats.
So, the number of fractional digits is the exponent value. The number of total digits is the number of digits in the decimal representation of the 96-bit integer.
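Putting that together, a sketch of the counting (System.Numerics.BigInteger used for the 96-bit integer):

decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);

// Reassemble |high|mid|low| into the 96-bit integer.
BigInteger mantissa = (new BigInteger((uint)bits[2]) << 64)
                    + (new BigInteger((uint)bits[1]) << 32)
                    +  new BigInteger((uint)bits[0]);

int fractionalDigits = (bits[3] >> 16) & 0xFF;                      // 5
int totalDigits = mantissa.IsZero ? 1 : mantissa.ToString().Length; // 10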
Racil's answer gives you the decimal's internal scale value, which is correct, although if the internal representation ever changes it'll be interesting.
In the current format the precision portion of decimal is fixed at 96 bits, which is between 28 and 29 decimal digits depending on the number. All .NET decimal values share this precision. Since this is constant there's no internal value you can use to determine it.
What you're apparently after though is the number of digits, which we can easily determine from the string representation. We can also get the scale at the same time or at least using the same method.
public struct DecimalInfo
{
    public int Scale;
    public int Length;

    public override string ToString()
    {
        return string.Format("Scale={0}, Length={1}", Scale, Length);
    }
}

public static class Extensions
{
    public static DecimalInfo GetInfo(this decimal value)
    {
        string decStr = value.ToString().Replace("-", "");
        int decpos = decStr.IndexOf(".");
        int length = decStr.Length - (decpos < 0 ? 0 : 1);
        int scale = decpos < 0 ? 0 : length - decpos;
        return new DecimalInfo { Scale = scale, Length = length };
    }
}
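Usage (illustrative):

decimal x = 12345.67890M;
Console.WriteLine(x.GetInfo()); // Scale=5, Length=10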
I'm running NUnit tests to evaluate some known test data and calculated results. The numbers are floating point doubles so I don't expect them to be exactly equal, but I'm not sure how to treat them as equal for a given precision.
In NUnit we can compare with a fixed tolerance:
double expected = 0.389842845321551d;
double actual = 0.38984284532155145d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and that works fine for numbers below one, but as the numbers grow the tolerance really needs to be changed so we always care about the same number of digits of precision.
Specifically, this test fails:
double expected = 1.95346834136148d;
double actual = 1.9534683413614817d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and of course larger numbers fail with that tolerance:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
What's the correct way to evaluate two floating point numbers are equal with a given precision? Is there a built-in way to do this in NUnit?
From MSDN:
By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
Let's assume 15, then.
So, we could say that we want the tolerance to be to the same degree.
How many precise figures do we have after the decimal point? We need to know the distance of the most significant digit from the decimal point, right? The magnitude. We can get this with a Log10.
Then we need to divide 1 by 10 ^ precision to get a value around the precision we want.
Now, you'll need to do more test cases than I have, but this seems to work:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import

// Log10(100) = 2, so to get the magnitude we add 1.
int magnitude = 1 + (expected == 0.0 ? -1 : Convert.ToInt32(Math.Floor(Math.Log10(expected))));
int precision = 15 - magnitude;
double tolerance = 1.0 / Math.Pow(10, precision);

Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
It's late - there could be a gotcha in here. I tested it against your three sets of test data and each passed. Changing precision to be 16 - magnitude caused the test to fail. Setting it to 14 - magnitude obviously caused it to pass as the tolerance was greater.
This is what I came up with for The Floating-Point Guide (Java code, but should translate easily, and comes with a test suite, which you really really need):
public static boolean nearlyEqual(float a, float b, float epsilon)
{
    final float absA = Math.abs(a);
    final float absB = Math.abs(b);
    final float diff = Math.abs(a - b);

    if (a * b == 0) { // a or b or both are zero
        // relative error is not meaningful here
        return diff < (epsilon * epsilon);
    } else { // use relative error
        return diff / (absA + absB) < epsilon;
    }
}
The really tricky question is what to do when one of the numbers to compare is zero. The best answer may be that such a comparison should always consider the domain meaning of the numbers being compared rather than trying to be universal.
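A direct C# translation of the above (mine, not run against the guide's test suite, so treat it as a sketch):

public static bool NearlyEqual(float a, float b, float epsilon)
{
    float absA = Math.Abs(a);
    float absB = Math.Abs(b);
    float diff = Math.Abs(a - b);

    if (a * b == 0) // a or b or both are zero
    {
        // relative error is not meaningful here
        return diff < epsilon * epsilon;
    }

    // use relative error
    return diff / (absA + absB) < epsilon;
}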
How about converting each item to a string and comparing the strings?
string test1 = String.Format("{0:0.0##}", expected);
string test2 = String.Format("{0:0.0##}", actual);
Assert.AreEqual(test1, test2);
Assert.That(x, Is.EqualTo(y).Within(10).Percent);
is a decent option (changes it to a relative comparison, where x is required to be within 10% of y). You may want to add extra handling for 0, as otherwise you'll get an exact comparison in that case.
Update:
Another good option is
Assert.That(x, Is.EqualTo(y).Within(1).Ulps);
where Ulps means units in the last place. See https://docs.nunit.org/articles/nunit/writing-tests/constraints/EqualConstraint.html#comparing-floating-point-values.
I don't know if there's a built-in way to do it with NUnit, but I would suggest multiplying each double by 10 raised to the precision you're seeking, storing the results as longs, and comparing the two longs to each other.
For example:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d;

// for a precision of 4
long lActual = (long)(10000 * actual);
long lExpected = (long)(10000 * expected);

if (lActual == lExpected)
{
    // Perform desired actions
}
This is a quick idea, but how about shifting them down till they are below one? Should be something like num/(10^ceil(log10(num)))... not too sure about how well it would work, but it's an idea.
1632.4587642911599 / (10^ceil(log10(1632.4587642911599))) = 0.16324587642911599
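In C# that might look something like this (rough sketch; positive non-zero values only):

static double Normalize(double num) =>
    num / Math.Pow(10, Math.Ceiling(Math.Log10(num)));

// Normalize(1632.4587642911599) ≈ 0.16324587642911599; after normalizing both
// values the same way, a fixed absolute tolerance can be applied.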
How about:
const double significantFigures = 10;
Assert.AreEqual(Actual / Expected, 1.0, 1.0 / Math.Pow(10, significantFigures));
The difference between the two values should be less than either value divided by 10 raised to the precision.
Assert.Less(Math.Abs(firstValue - secondValue), firstValue / Math.Pow(10, precision));
open FsUnit
actual |> should (equalWithin errorMargin) expected
I am terribly annoyed by the inaccuracy of the intrinsic trig functions in the CLR. It is well known that
Math.Sin(Math.PI)=0.00000000000000012246063538223773
instead of 0. Something similar happens with Math.Cos(Math.PI/2).
But when I am doing a long series of calculations that on special cases evaluate to
Math.Sin(Math.PI/2+x)-Math.Cos(x)
and the result is zero for x=0.2, but not zero for x=0.1 (try it). Another issue is that when the argument is a large number, the inaccuracy gets proportionally larger.
So I wonder if anyone has coded some better representation of the trig functions in C# for sharing with the world. Does the CLR call some standard C math library implementing CORDIC or something similar? (See Wikipedia: CORDIC.)
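For reference, a quick snippet to try the case above:

// Mathematically sin(π/2 + x) == cos(x), so both lines should print 0;
// per the behavior described above, only one of them actually does.
foreach (double x in new[] { 0.2, 0.1 })
{
    Console.WriteLine(Math.Sin(Math.PI / 2 + x) - Math.Cos(x));
}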
This has nothing to do with the accuracy of trigonometric functions but more with the CLR type system. According to the documentation a double has 15-16 digits of precision (which is exactly what you get), so you can't be more precise with this type. If you want more precision you will need to create a new type that is capable of storing it.
Also notice that you should never write code like this:
double d = CalcFromSomewhere();
if (d == 0)
{
    DoSomething();
}
You should do instead:
double d = CalcFromSomewhere();
double epsilon = 1e-5; // define the precision you are working with
if (Math.Abs(d) < epsilon)
{
    DoSomething();
}
I hear you. I am terribly annoyed by the inaccuracy of division. The other day I did:
Console.WriteLine(1.0 / 3.0);
and I got 0.333333333333333, instead of the correct answer which is 0.333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333...
Perhaps now you see what the problem is. Math.PI is not equal to pi any more than 1.0 / 3.0 is equal to one third. Both of them differ from the true value by a few hundred quadrillionths, and therefore any calculations you perform with Math.PI or 1.0/3.0 are also going to be off by a few hundred quadrillionths, including taking the sine.
If you don't like that approximate arithmetic is approximate then don't use approximate arithmetic. Use exact arithmetic. I used to use Waterloo Maple when I needed exact arithmetic; perhaps you should buy a copy of that.
This is a result of floating-point precision. You get a certain number of significant digits possible, and anything that can't be represented exactly is approximated. For example, pi is not a rational number, and so it's impossible to get an exact representation. Since you can't get an exact value of pi, you aren't going to get exact sines and cosines of numbers including pi (nor will you get exact values of sines and cosines most of the time).
The best intermediate explanation is "What Every Computer Scientist Should Know About Floating-Point Arithmetic". If you don't want to go into that, just remember that floating point numbers are usually approximations, and that floating-point calculations are like moving piles of sand on the ground: with everything you do with them, you lose a little sand and pick up a little dirt.
If you want exact representation, you'll need to find yourself a symbolic algebra system.
You need to use an arbitrary-precision decimal library. (.NET 4.0 has an arbitrary-precision integer class, but no arbitrary-precision decimal.)
A few popular ones are available:
BigNum
W3B.Sine
I reject the idea that the errors are due to round-off. What can be done is to define sin(x) as follows, using a Taylor expansion with 6 terms:
const double π = Math.PI;
const double π2 = Math.PI / 2;
const double π4 = Math.PI / 4;

public static double Sin(double x)
{
    if (x == 0) { return 0; }
    if (x < 0) { return -Sin(-x); }
    if (x > π) { return -Sin(x - π); }
    if (x > π4) { return Cos(π2 - x); }
    double x2 = x * x;
    return x * (x2 / 6 * (x2 / 20 * (x2 / 42 * (x2 / 72 * (x2 / 110 * (x2 / 156 - 1) + 1) - 1) + 1) - 1) + 1);
}

public static double Cos(double x)
{
    if (x == 0) { return 1; }
    if (x < 0) { return Cos(-x); }
    if (x > π) { return -Cos(x - π); }
    if (x > π4) { return Sin(π2 - x); }
    double x2 = x * x;
    return x2 / 2 * (x2 / 12 * (x2 / 30 * (x2 / 56 * (x2 / 90 * (x2 / 132 - 1) + 1) - 1) + 1) - 1) + 1;
}
Typical error is 1e-16 and worst case is 1e-11. It is worse than the CLR, but it is controllable by adding more terms. The good news is that for the special cases in the OP and for Sin(45°) the answer is exact.
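For instance (assuming the methods above live in a class called Trig):

Console.WriteLine(Trig.Sin(Math.PI)); // exactly 0: the range reduction bottoms out at Sin(0)
Console.WriteLine(Math.Sin(Math.PI)); // ~1.22e-16 with the built-in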
Our current implementation of sine and cosine is
public static double Sin(double d) {
    d = d % (2 * Math.PI); // Math.Sin calculates wrong results for values larger than 1e6
    if (d == 0 || d == Math.PI || d == -Math.PI) {
        return 0.0;
    }
    else {
        return Math.Sin(d);
    }
}

public static double Cos(double d) {
    d = d % (2 * Math.PI); // Math.Cos calculates wrong results for values larger than 1e6
    double multipleOfPi = d / Math.PI; // avoid calling the expensive modulo function twice
    if (multipleOfPi == 0.5 || multipleOfPi == -0.5 || multipleOfPi == 1.5 || multipleOfPi == -1.5) {
        return 0.0;
    }
    else {
        return Math.Cos(d);
    }
}
This code works (C# 3)
double d;
if(d == (double)(int)d) ...;
Is there a better way to do this?
For extraneous reasons I want to avoid the double cast, so what nice ways exist other than this? (even if they aren't as good)
Note: Several people pointed out the (important) point that == is often problematic regarding floating point. In this case I expect values in the range of 0 to a few hundred and they are supposed to be integers (non-ints are errors), so those points "shouldn't" be an issue for me.
d == Math.Floor(d)
does the same thing in other words.
NB: Hopefully you're aware that you have to be very careful when doing this kind of thing; floats/doubles will very easily accumulate minuscule errors that make exact comparisons (like this one) fail for no obvious reason.
This would work I think:
if (d % 1 == 0) {
    //...
}
If your double is the result of another calculation, you probably want something like:
d == Math.Floor(d + 0.00001);
That way, if there's been a slight rounding error, it'll still match.
I cannot answer the C#-specific part of the question, but I must point out you are probably missing a generic problem with floating point numbers.
Generally, integerness is not well defined on floats. For the same reason that equality is not well defined on floats. Floating point calculations normally include both rounding and representation errors.
For example, 1.1 + 0.6 != 1.7.
Yup, that's just the way floating point numbers work.
Here, 1.1 + 0.6 - 1.7 == 2.2204460492503131e-16.
Strictly speaking, the closest thing to equality comparison you can do with floats is comparing them up to a chosen precision.
If this is not sufficient, you must work with a decimal number representation, with a floating point number representation with built-in error range, or with symbolic computations.
A simple test such as 'x == floor(x)' is mathematically assured to work correctly, for any fixed-precision FP number.
All legal fixed-precision FP encodings represent distinct real numbers, and so for every integer x, there is at most one fixed-precision FP encoding that matches it exactly.
Therefore, for every integer x that CAN be represented in such way, we have x == floor(x) necessarily, since floor(x) by definition returns the largest FP number y such that y <= x and y represents an integer; so floor(x) must return x.
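A quick illustration of that guarantee (every integer in this range is exactly representable as a double):

for (int i = 0; i <= 1000; i++)
{
    double x = i;
    if (x != Math.Floor(x))
        Console.WriteLine("unexpected: " + i); // never prints
}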
If you are just going to convert it, Mike F / Khoth's answer is good, but doesn't quite answer your question. If you are going to actually test, and it's actually important, I recommend you implement something that includes a margin of error.
For instance, if you are considering money and you want to test for even dollar amounts, you might say (following Khoth's pattern):
if (Math.Abs(d - Math.Floor(d + 0.001)) < 0.001)
In other words, take the absolute value of the difference between the value and its integer representation, and ensure that it's small.
You don't need the extra (double) in there. This works:
if (d == (int)d) {
    //...
}
Use Math.Truncate()
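Presumably as in:

if (d == Math.Truncate(d)) { ... }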
This will let you choose what precision you're looking for, plus or minus half a tick, to account for floating-point drift. The comparison is also integral, which is nice.
static void Main(string[] args)
{
    const int precision = 10000;
    foreach (var d in new[] { 2, 2.9, 2.001, 1.999, 1.99999999, 2.00000001 })
    {
        if ((int)(d * precision + .5) % precision == 0)
        {
            Console.WriteLine("{0} is an int", d);
        }
    }
}
and the output is
2 is an int
1.99999999 is an int
2.00000001 is an int
Something like this
double d = 4.0;
int i = 4;
bool equal = d.CompareTo(i) == 0; // true
Could you use this
bool IsInt(double x)
{
    try
    {
        int y = Int16.Parse(x.ToString());
        return true;
    }
    catch
    {
        return false;
    }
}
To handle the precision of the double...
Math.Abs(d - Math.Floor(d)) <= double.Epsilon
Consider the following case where a value less than double.Epsilon fails to compare as zero.
// number of possible rounds
const int rounds = 1;
// precision causes rounding up to double.Epsilon
double d = double.Epsilon*.75;
// due to the rounding this comparison fails
Console.WriteLine(d == Math.Floor(d));
// this comparison succeeds by accounting for the rounding
Console.WriteLine(Math.Abs(d - Math.Floor(d)) <= rounds*double.Epsilon);
// The difference is double.Epsilon, 4.940656458412465E-324
Console.WriteLine(Math.Abs(d - Math.Floor(d)).ToString("E15"));