How to optimize this smoothstep function? Is there any alternative? (C#)

In one of my projects, I use the following smoothstep() function:
float smoothstep(float a, float b, float m, int n)
{
    for (int i = 0; i < n; i++)
    {
        m = m * m * (3 - 2 * m);
    }
    return a + (b - a) * m;
}
It works great; however, it has two disadvantages:
It's slow (especially for big values of n)
It doesn't work for non-integer values (e.g. n = 1.5)
Is there an alternative (excluding precalculating points and then interpolating) providing better performance (and the same behavior), or another function giving a great approximation?

You should be able to precompute the "m" term, since it doesn't rely on a or b, and assuming you're doing this over an entire interpolation, this should speed up your code significantly.
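For instance, a minimal sketch of that precomputation (my naming, not tested against your project): compute the repeatedly smoothed weight once per (m, n) and reuse it for every (a, b) pair:
float SmoothWeight(float m, int n)
{
    // Same smoothing loop as in the question, but without the lerp, so the
    // expensive part runs once per (m, n) instead of once per (a, b) pair.
    for (int i = 0; i < n; i++)
        m = m * m * (3 - 2 * m);
    return m;
}

// float w = SmoothWeight(m, n);    // done once
// float v = a + (b - a) * w;       // done per pair, no loop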
Alternatively, you could use the built-in MathHelper.SmoothStep method, which provides a cubic interpolation rather than the linear interpolation you get out of your version. There are other, more advanced interpolators in that class as well.



I'm currently writing some code where I have something along the lines of:
double a = SomeCalculation1();
double b = SomeCalculation2();
if (a < b)
DoSomething2();
else if (a > b)
DoSomething3();
And then in other places I may need to do equality:
double a = SomeCalculation3();
double b = SomeCalculation4();
if (a != 0.0)
DoSomethingUseful(1 / a);
if (b == 0.0)
return 0; // or something else here
In short, I have lots of floating point math going on and I need to do various comparisons for conditions. I can't convert it to integer math because such a thing is meaningless in this context.
I've read before that floating point comparisons can be unreliable, since you can have things like this going on:
double a = 1.0 / 3.0;
double b = a + a + a;
if ((3 * a) != b)
Console.WriteLine("Oh no!");
In short, I'd like to know: How can I reliably compare floating point numbers (less than, greater than, equality)?
The number range I am using is roughly from 10E-14 to 10E6, so I do need to work with small numbers as well as large.
I've tagged this as language agnostic because I'm interested in how I can accomplish this no matter what language I'm using.
TL;DR
Use the following function instead of the currently accepted solution to avoid some undesirable results in certain limit cases, while being potentially more efficient.
Know the expected imprecision you have on your numbers and feed them accordingly in the comparison function.
#include <algorithm>  // std::min, std::max
#include <cassert>
#include <cfloat>     // FLT_EPSILON, FLT_MIN
#include <cmath>      // std::abs
#include <limits>

bool nearly_equal(
    float a, float b,
    float epsilon = 128 * FLT_EPSILON, float abs_th = FLT_MIN)
    // those defaults are arbitrary and could be removed
{
    assert(std::numeric_limits<float>::epsilon() <= epsilon);
    assert(epsilon < 1.f);

    if (a == b) return true;

    auto diff = std::abs(a - b);
    auto norm = std::min(std::abs(a) + std::abs(b), std::numeric_limits<float>::max());
    // or even faster: std::min(std::abs(a + b), std::numeric_limits<float>::max());
    // keeping this commented out until I update figures below
    return diff < std::max(abs_th, epsilon * norm);
}
Graphics, please?
When comparing floating point numbers, there are two "modes".
The first one is the relative mode, where the difference between x and y is considered relative to their amplitude |x| + |y|. When plotted in 2D, it gives the following profile, where green means equality of x and y. (I took an epsilon of 0.5 for illustration purposes.)
The relative mode is what is used for "normal" or "large enough" floating point values. (More on that later.)
The second one is an absolute mode, where we simply compare their difference to a fixed number. It gives the following profile (again with an epsilon of 0.5 and an abs_th of 1 for illustration).
This absolute mode of comparison is what is used for "tiny" floating point values.
Now the question is, how do we stitch together those two response patterns?
In Michael Borgwardt's answer, the switch is based on the value of diff, which should be below abs_th (Float.MIN_NORMAL in his answer). This switch zone is shown as hatched in the graph below.
Because abs_th * epsilon is smaller than abs_th, the green patches do not stick together, which in turn gives the solution a bad property: we can find triplets of numbers such that x < y1 < y2, and yet x == y2 but x != y1.
Take this striking example:
x = 4.9303807e-32
y1 = 4.930381e-32
y2 = 4.9309825e-32
We have x < y1 < y2, and in fact y2 - x is more than 2000 times larger than y1 - x. And yet with the current solution,
nearlyEqual(x, y1, 1e-4) == False
nearlyEqual(x, y2, 1e-4) == True
By contrast, in the solution proposed above, the switch zone is based on the value of |x| + |y|, which is represented by the hatched square below. It ensures that both zones connect gracefully.
Also, the code above does not have branching, which could be more efficient. Consider that operations such as max and abs, which a priori need branching, often have dedicated assembly instructions. For this reason, I think this approach is preferable to the alternative fix of changing the switch in Michael's nearlyEqual from diff < abs_th to diff < eps * abs_th, which would then produce essentially the same response pattern.
Where to switch between relative and absolute comparison?
The switch between those modes is made around abs_th, which is taken as FLT_MIN in the accepted answer. This choice means that the representation of float32 is what limits the precision of our floating point numbers.
This does not always make sense. For example, if the numbers you compare are the results of a subtraction, perhaps something in the range of FLT_EPSILON makes more sense. If they are square roots of subtracted numbers, the numerical imprecision could be even higher.
It is rather obvious when you consider comparing a floating point with 0. Here, any relative comparison will fail, because |x - 0| / (|x| + 0) = 1. So the comparison needs to switch to absolute mode when x is on the order of the imprecision of your computation -- and rarely is it as low as FLT_MIN.
This is the reason for the introduction of the abs_th parameter above.
Also, by not multiplying abs_th with epsilon, the interpretation of this parameter is simple: it corresponds to the level of numerical precision that we expect on those numbers.
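Since the question is tagged C#, here is a rough C# port of the function above (my sketch, not from the original answer; note that .NET's float.Epsilon is the smallest denormal, not the machine epsilon, so the FLT_EPSILON and FLT_MIN constants are written out explicitly):
const float MachineEpsilon = 1.19209290e-7f;   // FLT_EPSILON, i.e. 2^-23
const float SmallestNormal = 1.17549435e-38f;  // FLT_MIN

static bool NearlyEqual(float a, float b,
                        float epsilon = 128 * MachineEpsilon,
                        float absTh = SmallestNormal)
{
    if (a == b) return true;   // handles exact matches and infinities

    float diff = Math.Abs(a - b);
    float norm = Math.Min(Math.Abs(a) + Math.Abs(b), float.MaxValue);
    return diff < Math.Max(absTh, epsilon * norm);
}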
Mathematical rumbling
(kept here mostly for my own pleasure)
More generally I assume that a well-behaved floating point comparison operator =~ should have some basic properties.
The following are rather obvious:
self-equality: a =~ a
symmetry: a =~ b implies b =~ a
invariance by opposition: a =~ b implies -a =~ -b
(We don't have "a =~ b and b =~ c implies a =~ c"; =~ is not an equivalence relation.)
I would add the following properties that are more specific to floating point comparisons
if a < b < c, then a =~ c implies a =~ b (closer values should also be equal)
if a, b, m >= 0 then a =~ b implies a + m =~ b + m (larger values with the same difference should also be equal)
if 0 <= λ < 1 then a =~ b implies λa =~ λb (perhaps less obvious to argue for).
Those properties already put strong constraints on possible near-equality functions. The function proposed above satisfies them. Perhaps one or several otherwise obvious properties are missing.
When one thinks of =~ as a family of equality relations =~[Ɛ,t] parameterized by Ɛ and abs_th, one could also add:
if Ɛ1 < Ɛ2 then a =~[Ɛ1,t] b implies a =~[Ɛ2,t] b (equality for a given tolerance implies equality at a higher tolerance)
if t1 < t2 then a =~[Ɛ,t1] b implies a =~[Ɛ,t2] b (equality for a given imprecision implies equality at a higher imprecision)
The proposed solution also verifies these.
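As a small illustration, property (1) can be spot-checked in C# against the NearlyEqual port sketched earlier (my naming):
static void CheckCloserValuesAlsoEqual(float a, float b, float c)
{
    // Property (1): if a < b < c and a =~ c, then a =~ b should hold as well.
    if (a < b && b < c && NearlyEqual(a, c) && !NearlyEqual(a, b))
        Console.WriteLine($"property (1) violated for ({a}, {b}, {c})");
}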
Comparing for greater/smaller is not really a problem unless you're working right at the edge of the float/double precision limit.
For a "fuzzy equals" comparison, this (Java code, should be easy to adapt) is what I came up with for The Floating-Point Guide after a lot of work and taking into account lots of criticism:
public static boolean nearlyEqual(float a, float b, float epsilon) {
    final float absA = Math.abs(a);
    final float absB = Math.abs(b);
    final float diff = Math.abs(a - b);

    if (a == b) { // shortcut, handles infinities
        return true;
    } else if (a == 0 || b == 0 || diff < Float.MIN_NORMAL) {
        // a or b is zero or both are extremely close to it
        // relative error is less meaningful here
        return diff < (epsilon * Float.MIN_NORMAL);
    } else { // use relative error
        return diff / (absA + absB) < epsilon;
    }
}
It comes with a test suite. You should immediately dismiss any solution that doesn't, because it is virtually guaranteed to fail in some edge cases, like having one value 0, two very small values on opposite sides of zero, or infinities.
An alternative (see link above for more details) is to convert the floats' bit patterns to integer and accept everything within a fixed integer distance.
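A minimal C# sketch of that bit-pattern idea (mine, assuming IEEE-754 floats; maxUlps is the tolerance in units in the last place):
static bool NearlyEqualUlps(float a, float b, int maxUlps)
{
    if (float.IsNaN(a) || float.IsNaN(b))
        return false;

    // Reinterpret the float bits as a signed integer.
    int ia = BitConverter.ToInt32(BitConverter.GetBytes(a), 0);
    int ib = BitConverter.ToInt32(BitConverter.GetBytes(b), 0);

    // Remap negative patterns so the integers increase monotonically with the floats
    // (adjacent representable floats then differ by exactly 1).
    if (ia < 0) ia = int.MinValue - ia;
    if (ib < 0) ib = int.MinValue - ib;

    return Math.Abs((long)ia - ib) <= maxUlps;
}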
In any case, there probably isn't any solution that is perfect for all applications. Ideally, you'd develop/adapt your own with a test suite covering your actual use cases.
I had the problem of comparing floating point numbers A < B and A > B.
Here is what seems to work:
if ((A - B < Epsilon) && (fabs(A - B) > Epsilon))
{
    printf("A is less than B");
}
if ((A - B > Epsilon) && (fabs(A - B) > Epsilon))
{
    printf("A is greater than B");
}
The fabs (absolute value) check takes care of the case where they are essentially equal.
We have to choose a tolerance level to compare float numbers. For example,
const float TOLERANCE = 0.00001f;
if (Math.Abs(f1 - f2) < TOLERANCE)
    Console.WriteLine("Oh yes!");
One note. Your example is rather funny.
double a = 1.0 / 3.0;
double b = a + a + a;
if (a != b)
Console.WriteLine("Oh no!");
Some maths here
a = 1/3
b = 1/3 + 1/3 + 1/3 = 1.
1/3 != 1
Oh, yes..
Do you mean
if (b != 1)
Console.WriteLine("Oh no!")
An idea I had for floating point comparison in Swift:
infix operator ~= {}
func ~= (a: Float, b: Float) -> Bool {
return fabsf(a - b) < Float(FLT_EPSILON)
}
func ~= (a: CGFloat, b: CGFloat) -> Bool {
return fabs(a - b) < CGFloat(FLT_EPSILON)
}
func ~= (a: Double, b: Double) -> Bool {
return fabs(a - b) < Double(FLT_EPSILON)
}
Adaptation to PHP from Michael Borgwardt & bosonix's answer:
class Comparison
{
    const MIN_NORMAL = 1.17549435E-38; // from Java specs

    // from http://floating-point-gui.de/errors/comparison/
    public function nearlyEqual($a, $b, $epsilon = 0.000001)
    {
        $absA = abs($a);
        $absB = abs($b);
        $diff = abs($a - $b);

        if ($a == $b) {
            return true;
        } else {
            if ($a == 0 || $b == 0 || $diff < self::MIN_NORMAL) {
                return $diff < ($epsilon * self::MIN_NORMAL);
            } else {
                return $diff / ($absA + $absB) < $epsilon;
            }
        }
    }
}
You should ask yourself why you are comparing the numbers. If you know the purpose of the comparison then you should also know the required accuracy of your numbers. That is different in each situation and each application context. But in pretty much all practical cases there is a required absolute accuracy. It is only very seldom that a relative accuracy is applicable.
To give an example: if your goal is to draw a graph on the screen, then you likely want floating point values to compare equal if they map to the same pixel on the screen. If the size of your screen is 1000 pixels, and your numbers are in the 1e6 range, then you likely will want 100 to compare equal to 200.
Given the required absolute accuracy, then the algorithm becomes:
public enum ComparisonResult { Unordered, Equal, Smaller, Larger }

public static ComparisonResult Compare(float a, float b, float accuracy)
{
    if (float.IsNaN(a) || float.IsNaN(b)) // if NaN needs to be supported
        return ComparisonResult.Unordered;
    if (a == b)                           // short-cut and takes care of infinities
        return ComparisonResult.Equal;
    if (Math.Abs(a - b) < accuracy)       // comparison wrt. the accuracy
        return ComparisonResult.Equal;
    if (a < b)                            // larger / smaller
        return ComparisonResult.Smaller;
    else
        return ComparisonResult.Larger;
}
The standard advice is to use some small "epsilon" value (chosen depending on your application, probably), and consider floats that are within epsilon of each other to be equal. e.g. something like
#define EPSILON 0.00000001
if ((a - b) < EPSILON && (b - a) < EPSILON) {
printf("a and b are about equal\n");
}
A more complete answer is complicated, because floating point error is extremely subtle and confusing to reason about. If you really care about equality in any precise sense, you're probably seeking a solution that doesn't involve floating point.
I tried writing an equality function with the above comments in mind. Here's what I came up with:
Edit: Change from Math.Max(a, b) to Math.Max(Math.Abs(a), Math.Abs(b))
static bool fpEqual(double a, double b)
{
double diff = Math.Abs(a - b);
double epsilon = Math.Max(Math.Abs(a), Math.Abs(b)) * Double.Epsilon;
return (diff < epsilon);
}
Thoughts? I still need to work out a greater than, and a less than as well.
I came up with a simple approach to adjusting the size of epsilon to the size of the numbers being compared. So, instead of using:
iif(abs(a - b) < 1e-6, "equal", "not")
if a and b can be large, I changed that to:
iif(abs(a - b) < (10 ^ -abs(7 - log(a))), "equal", "not")
I suppose that doesn't satisfy all the theoretical issues discussed in the other answers, but it has the advantage of being one line of code, so it can be used in an Excel formula or an Access query without needing a VBA function.
I did a search to see if others have used this method and I didn't find anything. I tested it in my application and it seems to be working well. So it seems to be a method that is adequate for contexts that don't require the complexity of the other answers. But I wonder if it has a problem I haven't thought of since no one else seems to be using it.
If there's a reason the test with the log is not valid for simple comparisons of numbers of various sizes, please say why in a comment.
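For what it's worth, a C# rendering of that one-liner might look like this (my assumptions: log means log base 10 and a is positive):
static bool NearlyEqualScaled(double a, double b)
{
    // Scale the tolerance with the magnitude of a: roughly "agree to ~7 significant digits".
    return Math.Abs(a - b) < Math.Pow(10, -Math.Abs(7 - Math.Log10(a)));
}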
You need to take into account that the truncation error is a relative one. Two numbers are about equal if their difference is about as large as their ulp (Unit in the last place).
However, if you do floating point calculations, your error potential goes up with every operation (esp. careful with subtractions!), so your error tolerance needs to increase accordingly.
The best way to compare doubles for equality/inequality is by taking the absolute value of their difference and comparing it to a small enough (depending on your context) value.
double eps = 0.000000001; //for instance
double a = someCalc1();
double b = someCalc2();
double diff = Math.abs(a - b);
if (diff < eps) {
//equal
}

BesselK Function in C#

I am attempting to implement the BesselK method from Boost (a C++ library).
The Boost method accepts two doubles and returns a double. (I have it implemented below as cyl_bessel_k.)
The equation I modeled this off of comes from Boost's documentation:
http://www.boost.org/doc/libs/1_45_0/libs/math/doc/sf_and_dist/html/math_toolkit/special/bessel/mbessel.html
I have also been checking values against Wolfram:
http://www.wolframalpha.com/input/?i=BesselK%283%2C1%29
I am able to match output from the Boost method when passing a positive non-integer value for "v". However, when an integer is passed, my output is severely off. So, there is an obvious discontinuity issue. From reading up on this, it seems that this issue arises from passing a negative integer to the gamma function. Somehow reflection comes into play here with the Bessel_I method, but I'm nearing the end of my math skillset.
1.) What needs to happen to the bessel_i method with reflection to make this work?
2.) I'm currently doing a partial sum approach. Boost uses a continuous fraction approach. How can I modify this to account for convergence?
Any input is appreciated! Thank you!
static double cyl_bessel_k(double v, double x)
{
    if (v > 0)
    {
        double iNegativeV = cyl_bessel_i(-v, x);
        double iPositiveV = cyl_bessel_i(v, x);
        double besselSecondKind = (Math.PI / 2) * ((iNegativeV - iPositiveV) / (Math.Sin(Math.PI * v)));
        return besselSecondKind;
    }
    else
    {
        // error handling: the reflection formula above assumes v > 0
        throw new ArgumentOutOfRangeException(nameof(v), "v must be positive");
    }
}
static double cyl_bessel_i(double v, double x)
{
    if (x == 0)
    {
        return 0;
    }
    double summed = 0;
    double a = Math.Pow((0.5d * x), v);
    for (double k = 0; k < 10; k++) //how to account for convergence? 10 is arbitrary
    {
        double b = Math.Pow(0.25d * Math.Pow(x, 2), k);
        double kFactorial = SpecialFunctions.Factorial((int)k); //comes from MathNet.Numerics (Nuget)
        double gamma = SpecialFunctions.Gamma(v + k + 1); //comes from MathNet.Numerics
        summed += b / (kFactorial * gamma);
    }
    return a * summed;
}
After lots of refactoring and trying things that didn't work, this is what I came up with. It's mostly Boost logic that has been adapted and translated into C#.
It's not perfect though (likely due to rounding, precision, etc.). Any improvements are welcome! The max error is 0.0000001926% between the true Bessel_K value from Wolfram and my adapted method. This occurs when parameter 'v' is an integer. For my purposes, this was close enough.
Link to fiddle:
https://dotnetfiddle.net/QIYzK6
Hopefully it saves someone some headache.
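On the convergence question, one option (not the continued-fraction approach Boost uses, just a sketch of my own) is to stop the power series once a term no longer changes the partial sum, instead of always running 10 iterations; it still suffers from the integer-v issue discussed above:
static double cyl_bessel_i_converging(double v, double x, double relTol = 1e-16)
{
    if (x == 0)
        return 0;   // mirrors the question's handling of x == 0

    double sum = 0;
    double prefactor = Math.Pow(0.5 * x, v);
    for (int k = 0; k < 500; k++)   // hard cap as a safety net
    {
        double term = Math.Pow(0.25 * x * x, k)
                      / (SpecialFunctions.Factorial(k) * SpecialFunctions.Gamma(v + k + 1)); // MathNet.Numerics
        sum += term;
        if (Math.Abs(term) <= relTol * Math.Abs(sum))
            break;   // further terms no longer change the sum noticeably
    }
    return prefactor * sum;
}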

Method to ensure GetHashCode() overload returns the same for semi-equal R3 float vectors

This one is for the binary and primitive experts. I am implementing a float R3 vector struct and my definition for "equality" is actually "mostly equal." Specifically, for all coords of the compared vectors Abs( (a[i] - b[i]) / (a[i] + b[i]) ) < .00001 returns true.
private static bool FloatEquality(float a, float b)
{
    if (a == b)
    {
        return true;
    }
    else
    {
        float e;
        try
        {
            e = (b - a) / (b + a);
        }
        catch (DivideByZeroException)
        {
            float g = float.Epsilon;
            e = (b - a) / g;
        }
        //AppConsole.AppConsole.Instance.WriteLine(e);
        if (e < .00001f && e > -.00001f)
        {
            return true;
        }
        else
        {
            return false;
        }
    }
}
My problem is in determining if there's a way to get the hash values to come out the same on vectors that meet this requirement due to the fact that I want to be able to use these vectors as "keys" for a Dictionary.
As you can see, the above code is used to check for equality on 3 different coordinates.
I was thinking of extracting the bytes from the three float coordinates and using the middle two from each.
(the following isn't code but Stack Overflow won't let me post it unless I indent it)
Vector(x,y,z):
x's float byte[] = [ x1 x2 x3 x3 ]
y's float byte[] = [ y1 y2 y3 y4 ]
z's float byte[] = [ z1 z2 z3 z4 ]
Hash code: byte[] {x2^x3 , y2^y3, z2 ^ z3, x2 ^ z3}
Or something like that... In short - I'm curious how to ensure that the hashcodes of vectors which fit my equals method will always come out the same... If someone has a great idea with very low cost computation, I'd love to hear it. Or if you could direct me to a place that discusses more in depth how floats are stored and which bytes will always be the same if the above comparison method returns equal.
I may need a new comparison method rather than a hash function because there's really no way that I can be sure that any of the bytes will match I guess...
Well, the basic idea is simple - you have to artificially reduce the precision of your floats. How to do this efficiently depends a lot on the kind of data you're expecting to see.
For example, if you're mostly using small values, you could simply use something like this:
(int)Math.Round(x1 * 1000)
^ (int)Math.Round(x2 * 1000)
^ (int)Math.Round(x3 * 1000)
Note that while I'm not actually fulfilling your if (e < .00001f && e > -.00001f) condition, it doesn't matter - the idea is to reduce the collisions, and ensure that what values that are equal will have equal hash codes. It's not necessary (or possible) to also ensure that values that are not equal will not have equal hash code. The rest should be handled in the overrides of Equals, == etc. - that's where strict equality checks must be present. Unlike Equals and company, GetHashCode() only has data about a single vector, so you don't even have an option of using data from more than that single vector in there.
Hash codes are only there to make key collisions infrequent. So Dictionary will still work if each of your vectors will return 0 in GetHashCode() - it's just that the performance will suffer. As long as equal vectors end up with equal hash codes, the hash code can be anything that suits your needs :)
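A minimal sketch of that idea on a hypothetical Vector3 struct (mine, not from the answer): the scale factor of 1000 is arbitrary, and two "nearly equal" vectors straddling a rounding boundary can still hash differently, so a Dictionary lookup may occasionally miss; quantization only makes that unlikely.
public struct Vector3
{
    public readonly float X, Y, Z;

    public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }

    // Stand-in for the FloatEquality method from the question. Division by zero on
    // floats yields infinity rather than throwing, which simply fails the tolerance test.
    static bool NearlyEqual(float a, float b)
    {
        if (a == b) return true;
        float e = (b - a) / (b + a);
        return e < 0.00001f && e > -0.00001f;
    }

    public override bool Equals(object obj)
    {
        return obj is Vector3 o && NearlyEqual(X, o.X) && NearlyEqual(Y, o.Y) && NearlyEqual(Z, o.Z);
    }

    public override int GetHashCode()
    {
        // Reduce precision so vectors that agree to ~3 decimals usually land in the same bucket.
        // Assumes coordinates are small enough that coord * 1000 fits in an int.
        return (int)Math.Round(X * 1000)
             ^ (int)Math.Round(Y * 1000)
             ^ (int)Math.Round(Z * 1000);
    }
}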
Of course, the best way would simply be not to use vectors as keys in the dictionary. Find the part of the vector that interests you (helps you the most), and use that as a key. Maybe you'll find out Dictionary isn't actually what you want anyway (for example, in a game, there's tons of different space partitioning methods that can be used with vectors - from simple grid-like layouts, through manual space partitioning, up to things like BSP).

Questions on a Haskell -> C# conversion

Background:
I was "dragged" into seeing this question:
Fibonacci's Closed-form expression in Haskell
when the author initially tagged with many other languages but later focused to a Haskell question. Unfortunately I have no experience whatsoever with Haskell so I couldn't really participate in the question. However one of the answers caught my eye where the answerer turned it into a pure integer math problem. That sounded awesome to me so I had to figure out how it worked and compare this to a recursive Fibonacci implementation to see how accurate it was. I have a feeling that if I just remembered the relevant math involving irrational numbers, I might be able to work everything out myself (but I don't). So the first step for me was to port it to a language I am familiar with. In this case, I am doing C#.
I am not completely in the dark, fortunately. I have plenty of experience in another functional language (OCaml), so a lot of it looked somewhat familiar to me. Starting out with the conversion, everything seemed straightforward since it basically defined a new numeric type to help with the calculations. However, I've hit a couple of roadblocks in the translation and am having trouble finishing it. I'm getting completely wrong results.
Analysis:
Here's the code that I'm translating:
data Ext = Ext !Integer !Integer
    deriving (Eq, Show)

instance Num Ext where
    fromInteger a = Ext a 0
    negate (Ext a b) = Ext (-a) (-b)
    (Ext a b) + (Ext c d) = Ext (a+c) (b+d)
    (Ext a b) * (Ext c d) = Ext (a*c + 5*b*d) (a*d + b*c) -- easy to work out on paper
    -- remaining instance methods are not needed

fib n = divide $ twoPhi^n - (2-twoPhi)^n
  where twoPhi = Ext 1 1
        divide (Ext 0 b) = b `div` 2^n -- effectively divides by 2^n * sqrt 5
So based on my research and what I can deduce (correct me if I'm wrong anywhere), the first part declares type Ext with a constructor that will have two Integer parameters (and I guess will inherit the Eq and Show types/modules).
Next is the implementation of Ext which "derives" from Num. fromInteger performs a conversion from an Integer. negate is the unary negation and then there's the binary addition and multiplication operators.
The last part is the actual Fibonacci implementation.
Questions:
In the answer, hammar (the answerer) mentions that exponentiation is handled by the default implementation in Num. But what does that mean and how is that actually applied to this type? Is there an implicit number "field" that I'm missing? Does it just apply the exponentiation to each corresponding number it contains? I assume it does the latter and end up with this C# code:
public static Ext operator ^(Ext x, int p) // "exponent"
{
// just apply across both parts of Ext?
return new Ext(BigInt.Pow(x.a, p), BigInt.Pow(x.b, p));
// Ext (a^p) (b^p)
}
However, this conflicts with how I perceive why negate is needed; it wouldn't be needed if that is what actually happens.
Now the meat of the code. I read the first part divide $ twoPhi^n - (2-twoPhi)^n as:
divide the result of the following expression: twoPhi^n - (2-twoPhi)^n.
Pretty simple. Raise twoPhi to the nth power. Subtract from that the result of the rest. Here we're doing binary subtraction, but we only implemented unary negation. Or did we not? Or can binary subtraction be implied because it can be made up by combining addition and negation (which we have)? I assume the latter. And this eases my uncertainty about the negation.
The last part is the actual division: divide (Ext 0 b) = b `div` 2^n. Two concerns here. From what I've found, there is no division operator, only a `div` function. So I would just have to divide the numbers here. Is this correct? Or is there a division operator but a separate `div` function that does something else special?
I'm not sure how to interpret the beginning of the line. Is it just a simple pattern match? In other words, would this only apply if the first parameter was a 0? What would the result be if it didn't match (the first was non-zero)? Or should I be interpreting it as we don't care about the first parameter and apply the function unconditionally? This seems to be the biggest hurdle and using either interpretation still yields the incorrect results.
Did I make any wrong assumptions anywhere? Or is it all right and I just implemented the C# incorrectly?
Code:
Here's the (non-working) translation and the full source (including tests) so far just in case anyone is interested.
// code removed to keep post size down
// full source still available through link above
Progress:
Ok so looking at the answers and comments so far, I think I know where to go from here and why.
The exponentiation just needed to do what it normally does: multiply p times, given that we've implemented the multiply operation. It never crossed my mind that we should do what math class has always told us to do. The implied subtraction from having addition and negation is a pretty handy feature too.
Also spotted a typo in my implementation. I added when I should have multiplied.
// (Ext a b) * (Ext c d) = Ext (a*c + 5*b*d) (a*d + b*c)
public static Ext operator *(Ext x, Ext y)
{
return new Ext(x.a * y.a + 5*x.b*y.b, x.a*y.b + x.b*y.a);
// ^ oops!
}
Conclusion:
So now it's completed. I only implemented the essential operators and renamed it a bit, in a similar manner to complex numbers. So far it's consistent with the recursive implementation, even at really large inputs. Here's the final code.
static readonly Complicated TWO_PHI = new Complicated(1, 1);

static BigInt Fib_x(int n)
{
    var x = Complicated.Pow(TWO_PHI, n) - Complicated.Pow(2 - TWO_PHI, n);
    System.Diagnostics.Debug.Assert(x.Real == 0);
    return x.Bogus / BigInt.Pow(2, n);
}

struct Complicated
{
    private BigInt real;
    private BigInt bogus;

    public Complicated(BigInt real, BigInt bogus)
    {
        this.real = real;
        this.bogus = bogus;
    }

    public BigInt Real { get { return real; } }
    public BigInt Bogus { get { return bogus; } }

    public static Complicated Pow(Complicated value, int exponent)
    {
        if (exponent < 0)
            throw new ArgumentException(
                "only non-negative exponents supported",
                "exponent");

        Complicated result = 1;
        Complicated factor = value;
        for (int mask = exponent; mask != 0; mask >>= 1)
        {
            if ((mask & 0x1) != 0)
                result *= factor;
            factor *= factor;
        }
        return result;
    }

    public static implicit operator Complicated(int real)
    {
        return new Complicated(real, 0);
    }

    public static Complicated operator -(Complicated l, Complicated r)
    {
        var real = l.real - r.real;
        var bogus = l.bogus - r.bogus;
        return new Complicated(real, bogus);
    }

    public static Complicated operator *(Complicated l, Complicated r)
    {
        var real = l.real * r.real + 5 * l.bogus * r.bogus;
        var bogus = l.real * r.bogus + l.bogus * r.real;
        return new Complicated(real, bogus);
    }
}
And here's the fully updated source.
[...], the first part declares type Ext with a constructor that will have two Integer parameters (and I guess will inherit the Eq and Show types/modules).
Eq and Show are type classes. You can think of them as similar to interfaces in C#, only more powerful. deriving is a construct that can be used to automatically generate implementations for a handful of standard type classes, including Eq, Show, Ord and others. This reduces the amount of boilerplate you have to write.
The instance Num Ext part provides an explicit implementation of the Num type class. You got most of this part right.
[the answerer] mentions that exponentiation is handled by the default implementation in Num. But what does that mean and how is that actually applied to this type? Is there an implicit number "field" that I'm missing? Does it just apply the exponentiation to each corresponding number it contains?
This was a bit unclear on my part. ^ is not in the type class Num, but it is an auxiliary function defined entirely in terms of the Num methods, sort of like an extension method. It implements exponentiation to positive integral powers through binary exponentiation. This is the main "trick" of the code.
[...] we're doing binary subtraction but we only implemented unary negation. Or did we not? Or can binary subtraction be implied because it could be made up by combining addition and negation (which we have)?
Correct. The default implementation of binary minus is x - y = x + (negate y).
The last part is the actual division: divide (Ext 0 b) = b `div` 2^n. Two concerns here. From what I've found, there is no division operator, only a div function. So I would just have to divide the numbers here. Is this correct? Or is there a division operator but a separate div function that does something else special?
There is only a syntactic difference between operators and functions in Haskell. One can treat an operator as a function by writing it in parentheses, (+), or treat a function as a binary operator by writing it in `backticks`.
div is integer division and belongs to the type class Integral, so it is defined for all integer-like types, including Int (machine-sized integers) and Integer (arbitrary-size integers).
I'm not sure how to interpret the beginning of the line. Is it just a simple pattern match? In other words, would this only apply if the first parameter was a 0? What would the result be if it didn't match (the first was non-zero)? Or should I be interpreting it as we don't care about the first parameter and apply the function unconditionally?
It is indeed just a simple pattern match to extract the coefficient of √5. The integral part is matched against a zero to express to readers that we indeed expect it to always be zero, and to make the program crash if some bug in the code was causing it not to be.
A small improvement
Replacing Integer with Rational in the original code, you can write fib n even closer to Binet's formula:
fib n = divSq5 $ phi^n - (1-phi)^n
  where divSq5 (Ext 0 b) = numerator b
        phi = Ext (1/2) (1/2)
This performs the divisions throughout the computation, instead of saving it all up for the end. This results in smaller intermediate numbers and about 20% speedup when calculating fib (10^6).
First, Num, Show, Eq are type classes, not types nor modules. They are a bit similar to interfaces in C#, but are resolved statically rather than dynamically.
Second, exponentiation is performed via multiplication with the implementation of ^, which is not a member of the Num typeclass, but a separate function.
The implementation is the following:
(^) :: (Num a, Integral b) => a -> b -> a
x0 ^ y0 | y0 < 0    = error "Negative exponent"
        | y0 == 0   = 1
        | otherwise = f x0 y0
  where -- f : x0 ^ y0 = x ^ y
        f x y | even y    = f (x * x) (y `quot` 2)
              | y == 1    = x
              | otherwise = g (x * x) ((y - 1) `quot` 2) x
        -- g : x0 ^ y0 = (x ^ y) * z
        g x y z | even y    = g (x * x) (y `quot` 2) z
                | y == 1    = x * z
                | otherwise = g (x * x) ((y - 1) `quot` 2) (x * z)
This seems to be the missing part of the solution.
You are right about subtraction. It is implemented via addition and negation.
Now, the divide function divides only if the first component a equals 0. Otherwise we get a pattern match failure, indicating a bug in the program.
The div function is a simple integer division, equivalent to / applied to integral types in C#. There is also an operator / in Haskell, but it indicates real number division.
A quick implementation in C#. I implemented exponentiation using the square-and-multiply algorithm.
It is enlightening to compare this type which has the form a+b*Sqrt(5) with the complex numbers which take the form a+b*Sqrt(-1). Addition and subtraction work just the same. Multiplication is slightly different, because i^2 isn't -1 but +5 here. Division is slightly more complicated, but shouldn't be too hard either.
Exponentiation is defined as multiplying a number with itself n times. But of course that's slow. So we use the fact that ((a*a)*a)*a is identical to (a*a)*(a*a) and rewrite using the square-and-multiply algorithm. So we just need log(n) multiplications instead of n multiplications.
Just exponentiating the individual components doesn't work. That's because the matrix underlying your type isn't diagonal. Compare this to the property of complex numbers: you can't simply raise the real and imaginary parts to the power separately.
struct MyNumber
{
public readonly BigInteger Real;
public readonly BigInteger Sqrt5;
public MyNumber(BigInteger real,BigInteger sqrt5)
{
Real=real;
Sqrt5=sqrt5;
}
public static MyNumber operator -(MyNumber left,MyNumber right)
{
return new MyNumber(left.Real-right.Real, left.Sqrt5-right.Sqrt5);
}
public static MyNumber operator*(MyNumber left,MyNumber right)
{
BigInteger real=left.Real*right.Real + left.Sqrt5*right.Sqrt5*5;
BigInteger sqrt5=left.Real*right.Sqrt5 + right.Real*left.Sqrt5;
return new MyNumber(real,sqrt5);
}
public static MyNumber Power(MyNumber b,int exponent)
{
if(!(exponent>=0))
throw new ArgumentException();
MyNumber result=new MyNumber(1,0);
MyNumber multiplier=b;
while(exponent!=0)
{
if((exponent&1)==1)//exponent is odd
result*=multiplier;
multiplier=multiplier*multiplier;
exponent/=2;
}
return result;
}
public override string ToString()
{
return Real.ToString()+"+"+Sqrt5.ToString()+"*Sqrt(5)";
}
}
BigInteger Fibo(int n)
{
MyNumber num = MyNumber.Power(new MyNumber(1,1),n)-MyNumber.Power(new MyNumber(1,-1),n);
num.Dump();
if(num.Real!=0)
throw new Exception("Asser failed");
return num.Sqrt5/BigInteger.Pow(2,n);
}
void Main()
{
MyNumber num=new MyNumber(1,2);
MyNumber.Power(num,2).Dump();
Fibo(5).Dump();
}

How do I calculate PI in C#?

How can I calculate the value of PI using C#?
I was thinking it would be through a recursive function, if so, what would it look like and are there any math equations to back it up?
I'm not too fussy about performance, mainly how to go about it from a learning point of view.
If you want recursion:
PI = 2 * (1 + 1/3 * (1 + 2/5 * (1 + 3/7 * (...))))
This would become, after some rewriting:
PI = 2 * F(1);
with F(i):
double F (int i) {
return 1 + i / (2.0 * i + 1) * F(i + 1);
}
Isaac Newton (you may have heard of him before ;) ) came up with this trick.
Note that I left out the end condition, to keep it simple. In real life, you kind of need one.
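A version with an explicit end condition might look like this (the depth parameter, i.e. how many terms to expand, is my own addition):
static double F(int i, int depth)
{
    if (i >= depth)
        return 1.0;   // truncate the nested expression here
    return 1 + i / (2.0 * i + 1) * F(i + 1, depth);
}

// PI ≈ 2 * F(1, 30), e.g. Console.WriteLine(2 * F(1, 30));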
How about using:
double pi = Math.PI;
If you want better precision than that, you will need to use an algorithmic system and the Decimal type.
If you take a close look into this really good guide:
Patterns for Parallel Programming: Understanding and Applying Parallel Patterns with the .NET Framework 4
You'll find at Page 70 this cute implementation (with minor changes from my side):
// requires System.Threading.Tasks and System.Collections.Concurrent
static decimal ParallelPartitionerPi(int steps)
{
    decimal sum = 0.0m;
    decimal step = 1.0m / (decimal)steps;
    object obj = new object();
    Parallel.ForEach(
        Partitioner.Create(0, steps),
        () => 0.0m,
        (range, state, partial) =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                decimal x = (i + 0.5m) * step; // midpoint of the i-th slice of [0, 1]
                partial += 4.0m / (1.0m + x * x);
            }
            return partial;
        },
        partial => { lock (obj) sum += partial; });
    return step * sum;
}
There are a couple of really, really old tricks I'm surprised to not see here.
atan(1) == PI/4, so an old chestnut when a trustworthy arc-tangent function is
present is 4*atan(1).
A very cute, fixed-ratio estimate that makes the old Western 22/7 look like dirt
is 355/113, which is good to several decimal places (at least three or four, I think).
In some cases, this is even good enough for integer arithmetic: multiply by 355 then divide by 113.
355/113 is also easy to commit to memory (for some people anyway): count one, one, three, three, five, five and remember that you're naming the digits in the denominator and numerator (if you forget which triplet goes on top, a microsecond's thought is usually going to straighten it out).
Note that 22/7 gives you: 3.14285714, which is wrong at the thousandths.
355/113 gives you 3.14159292 which isn't wrong until the ten-millionths.
Acc. to /usr/include/math.h on my box, M_PI is #define'd as:
3.14159265358979323846
which is probably good out as far as it goes.
The lesson you get from estimating PI is that there are lots of ways of doing it,
none will ever be perfect, and you have to sort them out by intended use.
355/113 is an old Chinese estimate, and I believe it pre-dates 22/7 by many years. It was taught me by a physics professor when I was an undergrad.
Good overview of different algorithms:
Computing pi;
Gauss-Legendre-Salamin.
I'm not sure about the complexity claimed for the Gauss-Legendre-Salamin algorithm in the first link (I'd say O(N log^2(N) log(log(N)))).
I do encourage you to try it, though, the convergence is really fast.
Also, I'm not really sure why you would want to convert a quite simple procedural algorithm into a recursive one.
Note that if you are interested in performance, then working at a bounded precision (typically, requiring a 'double', 'float',... output) does not really make sense, as the obvious answer in such a case is just to hardcode the value.
What is PI? The circumference of a circle divided by its diameter.
In computer graphics you can plot/draw a circle with its centre at 0,0 from an initial point x,y; the next point x',y' can be found using a simple formula:
x' = x + y / h ; y' = y - x' / h
h is usually a power of 2 so that the divide can be done easily with a shift (or by subtracting from the exponent on a double). h also wants to be the radius r of your circle. An easy starting point would be x = r, y = 0, and then count c, the number of steps until x <= 0, to plot a quarter of a circle. Since each step turns by roughly 1/h radians and a quarter circle is PI/2 radians, PI is approximately 2 * c / h (with r = h).
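A rough C# sketch of that stepping idea (mine; plain double arithmetic instead of shifts):
static double PiByCircleSteps(double h)
{
    double x = h, y = 0;   // radius r = h, starting at (r, 0)
    long c = 0;
    while (x > 0)
    {
        x = x + y / h;
        y = y - x / h;     // uses the updated x, as in the formula above
        c++;
    }
    return 2.0 * c / h;    // c steps of ~1/h radians each cover a quarter turn (PI/2)
}

// e.g. PiByCircleSteps(1e6) gives roughly 3.14159...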
Recursion to any great depth is usually impractical for a commercial program, but tail recursion allows an algorithm to be expressed recursively while being implemented as a loop. Recursive search algorithms can sometimes be implemented using a queue rather than the process's stack: the search has to backtrack from a dead end and take another path, and these backtrack points can be put in a queue so that multiple processes can dequeue them and try other paths.
Calculate like this:
x = 1 - 1/3 + 1/5 - 1/7 + 1/9 (... etc as far as possible.)
PI = x * 4
You have got Pi !!!
This is the simplest method I know of.
The value slowly converges to the actual value of Pi (3.14159265...). The more iterations, the better.
Here's a nice approach (from the main Wikipedia entry on pi); it converges much faster than the simple formula discussed above, and is quite amenable to a recursive solution if your intent is to pursue recursion as a learning exercise. (Assuming that you're after the learning experience, I'm not giving any actual code.)
The underlying formula is the same as above, but this approach averages the partial sums to accelerate the convergence.
Define a two parameter function, pie(h, w), such that:
pie(0,1) = 4/1
pie(0,2) = 4/1 - 4/3
pie(0,3) = 4/1 - 4/3 + 4/5
pie(0,4) = 4/1 - 4/3 + 4/5 - 4/7
... and so on
So your first opportunity to explore recursion is to code that "horizontal" computation as the "width" parameter increases (for "height" of zero).
Then add the second dimension with this formula:
pie(h, w) = (pie(h-1,w) + pie(h-1,w+1)) / 2
which is used, of course, only for values of h greater than zero.
The nice thing about this algorithm is that you can easily mock it up with a spreadsheet to check your code as you explore the results produced by progressively larger parameters. By the time you compute pie(10,10), you'll have an approximate value for pi that's good enough for most engineering purposes.
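If you do want to check your own recursion against something concrete, a direct (deliberately naive, unmemoized) C# sketch of the recurrence above might look like this; the pie name comes from the answer, the code is mine:
static double Pie(int h, int w)
{
    if (h == 0)
    {
        // pie(0, w): the w-term partial sum 4/1 - 4/3 + 4/5 - ...
        double sum = 0;
        for (int k = 0; k < w; k++)
            sum += (k % 2 == 0 ? 4.0 : -4.0) / (2 * k + 1);
        return sum;
    }
    // Average neighbouring partial sums to accelerate convergence.
    return (Pie(h - 1, w) + Pie(h - 1, w + 1)) / 2;
}

// e.g. Pie(10, 10), as the answer suggests, is already a good engineering approximation of PI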
Enumerable.Range(0, 100000000).Aggregate(0d, (tot, next) => tot += Math.Pow(-1d, next)/(2*next + 1)*4)
using System;
namespace Strings
{
class Program
{
static void Main(string[] args)
{
/* decimal pie = 1;
decimal e = -1;
*/
var stopwatch = new System.Diagnostics.Stopwatch();
stopwatch.Start(); //added this nice stopwatch start routine
//leibniz formula in C# - code written completely by Todd Mandell 2014
/*
for (decimal f = (e += 2); f < 1000001; f++)
{
e += 2;
pie -= 1 / e;
e += 2;
pie += 1 / e;
Console.WriteLine(pie * 4);
}
decimal finalDisplayString = (pie * 4);
Console.WriteLine("pie = {0}", finalDisplayString);
Console.WriteLine("Accuracy resulting from approximately {0} steps", e/4);
*/
// Nilakantha formula - code written completely by Todd Mandell 2014
// π = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - 4/(8*9*10) + 4/(10*11*12) - (4/(12*13*14) etc
decimal pie = 0;
decimal a = 2;
decimal b = 3;
decimal c = 4;
decimal e = 1;
for (decimal f = (e += 1); f < 100000; f++)
// Increase f where "f < 100000" to increase number of steps
{
pie += 4 / (a * b * c);
a += 2;
b += 2;
c += 2;
pie -= 4 / (a * b * c);
a += 2;
b += 2;
c += 2;
e += 1;
}
decimal finalDisplayString = (pie + 3);
Console.WriteLine("pie = {0}", finalDisplayString);
Console.WriteLine("Accuracy resulting from {0} steps", e);
stopwatch.Stop();
TimeSpan ts = stopwatch.Elapsed;
Console.WriteLine("Calc Time {0}", ts);
Console.ReadLine();
}
}
}
public static string PiNumberFinder(int digitNumber)
{
string piNumber = "3,";
int dividedBy = 11080585;
int divisor = 78256779;
int result;
for (int i = 0; i < digitNumber; i++)
{
if (dividedBy < divisor)
dividedBy *= 10;
result = dividedBy / divisor;
string resultString = result.ToString();
piNumber += resultString;
dividedBy = dividedBy - divisor * result;
}
return piNumber;
}
In any production scenario, I would compel you to look up the value, to the desired number of decimal points, and store it as a 'const' somewhere your classes can get to it.
(unless you're writing scientific 'Pi' specific software...)
Regarding...
... how to go about it from a learning point of view.
Are you trying to learn to program scientific methods, or to produce production software? I hope the community sees this as a valid question and not a nitpick.
In either case, I think writing your own Pi is a solved problem. Dmitry showed the 'Math.PI' constant already. Attack another problem in the same space! Go for generic Newton approximations or something slick.
@Thomas Kammeyer:
Note that Atan(1.0) is quite often hardcoded, so 4*Atan(1.0) is not really an 'algorithm' if you're calling a library Atan function (and quite a few of the answers already suggested do indeed proceed by replacing Atan(x) with a series (or infinite product) and then evaluating it at x=1).
Also, there are very few cases where you'd need pi at more precision than a few tens of bits (which can be easily hardcoded!). I've worked on applications in mathematics where, to compute some (quite complicated) mathematical objects (which were polynomial with integer coefficients), I had to do arithmetic on real and complex numbers (including computing pi) with a precision of up to a few million bits... but this is not very frequent 'in real life' :)
You can look up the following example code.
I like this paper, which explains how to calculate π based on a Taylor series expansion for Arctangent.
The paper starts with the simple assumption that
Atan(1) = π/4 radians
Atan(x) can be iteratively estimated with the Taylor series
atan(x) = x - x^3/3 + x^5/5 - x^7/7 + x^9/9...
The paper points out why this is not particularly efficient and goes on to make a number of logical refinements in the technique. They also provide a sample program that computes π to a few thousand digits, complete with source code, including the infinite-precision math routines required.
The following link shows how to calculate the pi constant based on its definition as an integral, which can be written as the limit of a summation; it's very interesting:
https://sites.google.com/site/rcorcs/posts/calculatingthepiconstant
The file "Pi as an integral" explains the method used in that post.
First, note that C# can use the Math.PI field of the .NET framework:
https://msdn.microsoft.com/en-us/library/system.math.pi(v=vs.110).aspx
The nice feature here is that it's a full-precision double that you can either use, or compare with computed results. The tabs at that URL have similar constants for C++, F# and Visual Basic.
To calculate more places, you can write your own extended-precision code. One that is quick to code and reasonably fast and easy to program is:
Pi = 4 * [4 * arctan (1/5) - arctan (1/239)]
This formula and many others, including some that converge at amazingly fast rates, such as 50 digits per term, are at Wolfram:
Wolfram Pi Formulas
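For illustration only, in plain double precision rather than the extended precision the answer has in mind, the formula above combined with the arctangent series from the earlier answer could be coded like this (my sketch):
static double ArctanSeries(double x, int terms)
{
    // atan(x) = x - x^3/3 + x^5/5 - x^7/7 + ...  (converges quickly for small |x|)
    double sum = 0, power = x;
    for (int k = 0; k < terms; k++)
    {
        sum += (k % 2 == 0 ? power : -power) / (2 * k + 1);
        power *= x * x;
    }
    return sum;
}

static double MachinPi(int terms = 25)
{
    // PI = 4 * [4 * arctan(1/5) - arctan(1/239)]
    return 4 * (4 * ArctanSeries(1.0 / 5, terms) - ArctanSeries(1.0 / 239, terms));
}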
PI (π) can be calculated by using infinite series. Here are two examples:
Gregory-Leibniz Series:
π/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - ...
C# method :
public static decimal GregoryLeibnizGetPI(int n)
{
decimal sum = 0;
decimal temp = 0;
for (int i = 0; i < n; i++)
{
temp = 4m / (1 + 2 * i);
sum += i % 2 == 0 ? temp : -temp;
}
return sum;
}
Nilakantha Series:
π = 3 + 4 / (2x3x4) - 4 / (4x5x6) + 4 / (6x7x8) - 4 / (8x9x10) + ...
C# method:
public static decimal NilakanthaGetPI(int n)
{
decimal sum = 0;
decimal temp = 0;
decimal a = 2, b = 3, c = 4;
for (int i = 0; i < n; i++)
{
temp = 4 / (a * b * c);
sum += i % 2 == 0 ? temp : -temp;
a += 2; b += 2; c += 2;
}
return 3 + sum;
}
The input parameter n for both functions represents the number of iterations.
The Nilakantha series converges more quickly than the Gregory-Leibniz series. The methods can be tested with the following code:
static void Main(string[] args)
{
const decimal pi = 3.1415926535897932384626433832m;
Console.WriteLine($"PI = {pi}");
//Nilakantha Series
int iterationsN = 100;
decimal nilakanthaPI = NilakanthaGetPI(iterationsN);
decimal CalcErrorNilakantha = pi - nilakanthaPI;
Console.WriteLine($"\nNilakantha Series -> PI = {nilakanthaPI}");
Console.WriteLine($"Calculation error = {CalcErrorNilakantha}");
int numDecNilakantha = pi.ToString().Zip(nilakanthaPI.ToString(), (x, y) => x == y).TakeWhile(x => x).Count() - 2;
Console.WriteLine($"Number of correct decimals = {numDecNilakantha}");
Console.WriteLine($"Number of iterations = {iterationsN}");
//Gregory-Leibniz Series
int iterationsGL = 1000000;
decimal GregoryLeibnizPI = GregoryLeibnizGetPI(iterationsGL);
decimal CalcErrorGregoryLeibniz = pi - GregoryLeibnizPI;
Console.WriteLine($"\nGregory-Leibniz Series -> PI = {GregoryLeibnizPI}");
Console.WriteLine($"Calculation error = {CalcErrorGregoryLeibniz}");
int numDecGregoryLeibniz = pi.ToString().Zip(GregoryLeibnizPI.ToString(), (x, y) => x == y).TakeWhile(x => x).Count() - 2;
Console.WriteLine($"Number of correct decimals = {numDecGregoryLeibniz}");
Console.WriteLine($"Number of iterations = {iterationsGL}");
Console.ReadKey();
}
The output shows that the Nilakantha series returns six correct decimals of PI with one hundred iterations, whereas the Gregory-Leibniz series returns five correct decimals of PI with one million iterations.
My code can be tested >> here
Here is a nice way:
Calculate the sum of 1/x^2 for x from 1 to whatever you want; the bigger the number, the better the result (the series converges to PI^2 / 6). Multiply the sum by 6 and take the square root of it.
Here is the code in c# (main only):
static void Main(string[] args)
{
double counter = 0;
for (double i = 1; i < 1000000; i++)
{
counter = counter + (1 / (Math.Pow(i, 2)));
}
counter = counter * 6;
counter = Math.Sqrt(counter);
Console.WriteLine(counter);
}
public double PI = 22.0 / 7.0;
