Convert Excel formulas to a web-based application - C#

I have an Excel sheet with formulas applied to various cells, and now I want to convert it into a web-based application.
Everything seems fine except for a few cells whose formulas depend on each other.
Example:
Input cells:
Cost: 9790
Allocation: 3%
Non-Allocation: 97%
Cells with formula:
Cell-1: (Cell-3 * Allocation) = (10000 * 3%)
Result: $300
Cell-2: (Non-Allocation * Cell-3) = (10000 * 97%)
Result: $9700
Cell-3: (Cost + Cell-1 - Fee on Damage) = (9790 + 300 - 90)
Result: $10000
So how do I calculate the dependent cell values, given that Cell-1 depends on Cell-3 and Cell-3 depends on Cell-1?
EDIT:
Fee On Damage: (Total Fee * Allocation)
And Total Fee is calculated based on Cell-3 (conditions like: if Cell-3 <= 10000 then 3000, etc.)

I did not know that Excel could do something like this, but so be it.
It turns out that this is a simple exercise in school algebra:
let's give Cell-3 the variable z
and Cell-1 the variable x
then you have the formulas:
x = z*Allocation
and
z = Cost + x - Fee
and so you can plug the latter into the first and get:
x = (Cost + x - Fee) * Allocation
get all x on the left side:
(1-Allocation)*x = (Cost - Fee) * Allocation
and divide
x = (Cost - Fee) * Allocation / (1-Allocation)
now plug back into z:
z = Cost - Fee + (Cost - Fee) * Allocation / (1-Allocation)
(you can simplify the last one too)
so let's check with Cost = 9790, Fee = 90 and Allocation = 0.03:
x = 9700 * 0.03 / 0.97 = 300
z = 9700 + 300 = 10000
seems right.
Remark
Obviously it's much harder to do this with some kind of auto-convert tool you might want to write in a common programming language like C#, as you'd need to teach your program how to do basic algebra ;) - but if you only have a few of those cells and just want to translate them, you can do it manually.
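If you do translate it by hand, a minimal C# sketch of that closed form could look like this (variable names are my own; Fee is treated as the fixed 90 from the example, even though the EDIT notes it really depends on Cell-3):

// Closed-form translation of the circular Cell-1 / Cell-3 formulas,
// assuming Fee is a known constant as in the worked example above.
decimal cost = 9790m;
decimal allocation = 0.03m;
decimal fee = 90m;

decimal cell1 = (cost - fee) * allocation / (1 - allocation); // x = 300
decimal cell3 = cost + cell1 - fee;                           // z = 10000
decimal cell2 = (1 - allocation) * cell3;                     // Non-Allocation * Cell-3 = 9700

Console.WriteLine($"Cell-1: {cell1}, Cell-2: {cell2}, Cell-3: {cell3}");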

You didn't tag JavaScript, but here's a client-side JS solution, which could easily be implemented in C# or ASP.NET:
var Cost = 9790,
    Allocation = 0.03,
    Fee = 90,
    C1 = 0,
    C3 = 0,
    C1iterations = 100,
    C3iterations = 100;

function Cell1() {
    if (--C1iterations) {
        C1 = Cell3() * Allocation;
    }
    else {
        C1iterations = 100;
    }
    return C1;
}

function Cell3() {
    if (--C3iterations) {
        C3 = Cost + Cell1() - Fee;
    }
    else {
        C3iterations = 100;
    }
    return C3;
}

document.body.innerHTML = 'Cell1: ' + Cell1() + '<br>Cell3: ' + Cell3();
You can tell Excel to allow circular references, in which case it defaults to 100 iterations. This code duplicates that functionality, so you don't have to work through the algebra.
All variables are global for demonstration purposes, but they could easily be made local by using closures.
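A rough C# port of the same iterative idea (my own sketch, mirroring Excel's default of 100 iterations rather than solving the algebra):

// Iterative fixed-point evaluation of the circular references,
// mimicking Excel's "enable iterative calculation" setting (default 100 iterations).
decimal cost = 9790m, allocation = 0.03m, fee = 90m;
decimal cell1 = 0m, cell3 = 0m;

for (int i = 0; i < 100; i++)
{
    cell3 = cost + cell1 - fee;
    cell1 = cell3 * allocation;
}

Console.WriteLine($"Cell-1: {cell1}, Cell-3: {cell3}"); // converges to 300 and 10000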

Related

Time to Temperature Calculation

This might not be the correct place for this, so apologies in advance if it isn't.
My situation: I need to come up with a simple formula/method that is given an hour, e.g. 13, 15, 01, etc., and based on that number returns the 'approx' temperature for that particular time.
This is very approximate and it will not use weather data or anything like that; it will just take the hour of the day and return a value between, say, -6 deg C and 35 deg C (very extreme weather, but you get the idea).
This is the sort of example I would like to know how to do:
Just as a note, I COULD use an ugly array of 24 items, each referencing the temp for that hour, but this needs to be float based - e.g. 19.76 should return 9.25 deg...
Another note: I don't want a complete solution - I'm a confident programmer in various languages, but the maths have really stumped me on this. I've tried various methods on paper using TimeToPeak (the peak hour being 1pm or around there), but to no avail. Any help would be appreciated at this point.
EDIT
Following your comment, here is a function that provides a sinusoidal distribution with various useful optional parameters.
private static double SinDistribution(
    double value,
    double lowToHighMeanPoint = 0.0,
    double length = 10.0,
    double low = -1.0,
    double high = 1.0)
{
    var amplitude = (high - low) / 2;
    var mean = low + amplitude;
    return mean + (amplitude * Math.Sin(
        (((value - lowToHighMeanPoint) % length) / length) * 2 * Math.PI));
}
You could use it like this, to get the results you desired.
for (double i = 0.0; i < 24.0; i++)
{
    Console.WriteLine("{0}: {1}", i, SinDistribution(i, 6.5, 24.0, -6.0, 35.0));
}
This obviously discounts environmental factors and assumes the day is an equinox but I think it answers the question.
So,
double EstimatedTemperature(double hour, double[] distribution)
{
    var low = Math.Floor(hour);
    var lowIndex = (int)low;
    var highIndex = (int)Math.Ceiling(hour);
    if (highIndex > distribution.Length - 1)
    {
        highIndex = 0;
    }
    if (lowIndex < 0)
    {
        lowIndex = distribution.Length - 1;
    }
    var lowValue = distribution[lowIndex];
    var highValue = distribution[highIndex];
    return lowValue + ((hour - low) * (highValue - lowValue));
}
assuming a rather simplistic linear transition between each point in the distribution. You'll get erroneous results if the hour is mapped to elements that are not present in the distribution.
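A quick usage sketch (the 24-value array here is made up purely for illustration):

// Hypothetical hourly temperatures for hours 0..23 (illustrative values only).
var hourly = new double[]
{
    -6, -5, -4, -2, 0, 2, 5, 9, 13, 17, 21, 25,
    29, 35, 33, 30, 26, 22, 18, 14, 10, 5, 0, -3
};

// Interpolates between the hour-19 and hour-20 entries.
Console.WriteLine(EstimatedTemperature(19.76, hourly));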
For arbitrary data points, I would go with one of the other linear interpolation solutions that have been provided.
However, this particular set of data is generated by a triangle wave:
temp = 45*Math.Abs(2*((t-1)/24-Math.Floor((t-1)/24+.5)))-10;
The data in your table is linear up and down from a peak at hour 13 and a minimum at hour 1. If that is the type of model you want, then this is really easy to put into a formulaic solution. You would simply perform linear interpolation between the two extremes of the temperature based upon the hour value. You would have two data points:
(xmin, ymin) as (hour-min, temp-min)
(xmax, ymax) as (hour-max, temp-max)
You would have two equations of the two-point line form:
y = y0 + (x - x0) * (y1 - y0) / (x1 - x0)
The two equations would use the (x0, y0) and (x1, y1) values as the above two data points, but apply them with the opposite assignment (i.e. the peak would be (x0, y0) in one equation and (x1, y1) in the other).
You would then select which equation to use based upon the hour value, insert the X value as the hour, and compute Y as the temperature value.
You will want to offset the X values used in the equations so that you take care of the offset between Hour 0 and where the minimum temperature actually happens.
Here is an example of how you could do this using a simple set of values hard-coded in the function; if you wish, add these as parameters:
public double GetTemp(double hour)
{
    int min = 1;
    int max = min + 12;
    double lowest = -10;
    double highest = 35;
    double change = 3.75;

    return (hour > max) ? ((max - hour) * change) + highest
         : (hour < min) ? ((min - hour) * change) + lowest
         : ((hour - max) * change) + highest;
}
I have tested this against your example and it works: 19.75 gives 9.6875.
There is no check to see whether the value entered is within 0-24, but that you can probably manage yourself :)
You can use a simple 2-point linear interpolation. Try something like this:
// data[] is the array of hourly temperatures (data[h] = temp at hour h).
double hourTemp(double hour)
{
    int idx1 = (int)Math.Floor(hour); // floor keeps hour within [idx1, idx2]
    int idx2 = idx1 + 1;
    return (data[idx2] - data[idx1]) * (hour - idx1) + data[idx1];
}
Or use 3, 5 or more points to get polynomial coefficients with the Ordinary Least Squares method.
Your sample data is similar to a sine function, so you could also make a sine-function approximation.

get match percentages between two objects by parameters

I want to create a program that will automate a process that I am doing manually today.
I apologize if the solution seems easy; I just don't want to invent a new algorithm specially for my problem, because I am sure that someone has already thought about it.
My Scenario is this:
I have candidates list that are looking for jobs and I have jobs list.
For each candidate I know the following requirements of the job that he is searching for, like:
Salary
Location of the Job
Company Size (Big / Small)
In the manual process, what I do is match the candidate's requirement parameters against the job's parameters and 'return' the jobs that seem to fit the candidate (it doesn't have to be a complete match).
Of course I take into account whether each candidate requirement is 'nice to have' or 'must have'.
I am searching for an algorithm that returns a fit percentage between each candidate and each job.
Can someone please point me to the name of a matching algorithm like this?
Thanks
My advice is to convert every object to a vector in a 3-D space and then find the Euclidean distance between the two vectors (objects).
First, assign salary, location and size to x, y and z axis, respectively.
Then map the properties to [0, 1] interval of the axis.
For example, if your min salary is 1'000, and max salary is 10'000, then you would map:
$ 1'000 -> 0 on the x axis,
$ 10'000 -> to 1 on the x axis.
Mapping locations is hard, but let's say you have a map grid, and you assign a value to each patch of the grid according to geographic position - closer patches have similar values. US states provide a good illustration:
New York -> 1.0 on the y axis,
New Jersey -> 0.99 on the y axis,
...
California -> 0.1 on the y axis.
Map company sizes something like:
start-up -> 0.2 on the z axis,
...
multinational -> 1.0 on the z axis.
So, to give an example: John wants a salary of 9'000, wants a job in New York, and wants to work in a start-up company. His vector in 3D space would be [0.82, 1.00, 0.1].
Peter wants a salary of 5'500, wants a job in New Jersey, and wants to work in a really big company - [0.5, 0.99, 0.8]. And at last, Mike wants a salary of 8'000, a job in California, and a start-up too - [0.73, 0.1, 0.1].
According to formula for Euclidean distance in 3D space:
d(a, b) = sqrt((a1-b1)^2 + (a2-b2)^2 + (a3 - b3)^2)
Distance between John and Peter is: d(J, P) = 0.77
Distance between John and Mike is: d(J, M) = 0.90
So the conclusion would be that John and Peter are closer than John and Mike.
One more thing you could do is to bring in some constants to each axis to emphasize the importance of it (location is more important than company size, for example) so in the formula you could do something like:
d(a, b) = sqrt((a1-b1)^2 + (C*a2 - C*b2)^2 + (a3 - b3)^2), where C = 10
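A small C# illustration of this (the vectors are the mapped example values from above; the helper name is mine):

// Euclidean distance between two candidates mapped into [0, 1]^3
// (salary, location, company size), as described above.
static double Distance(double[] a, double[] b) =>
    Math.Sqrt(Math.Pow(a[0] - b[0], 2) +
              Math.Pow(a[1] - b[1], 2) +
              Math.Pow(a[2] - b[2], 2));

var john  = new[] { 0.82, 1.00, 0.1 };
var peter = new[] { 0.50, 0.99, 0.8 };
var mike  = new[] { 0.73, 0.10, 0.1 };

Console.WriteLine(Distance(john, peter)); // ~0.77
Console.WriteLine(Distance(john, mike));  // ~0.90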
similarity(A,B) = 1 / (1 + (distance(A,B) / unit))
Case where distance is 0:
similarity(A,A)
= 1 / (1 + (distance(A,A) / unit))
= 1 / (1 + (0 / unit))
= 1 / (1 + 0)
= 1.0
~ 100 %
Case where distance is infinite:
similarity(A,Z)
= 1 / (1 + (distance(A,Z) / unit))
= 1 / (1 + (infinity / unit))
= 1 / infinity
= 0.0
~ 0 %
Code:
JobCompare Compare(Job a, Job b)
{
    // define units based on the measurement scale of each dimension
    double unit1 = 1000.0; // salary
    double unit2 = 100.0;  // location
    double unit3 = 10.0;   // company size

    // calculate distance per dimension
    // (distance(...) is whatever location-distance helper you supply)
    double d1 = Math.Abs(a.salary - b.salary);
    double d2 = distance(a.location, b.location);
    double d3 = Math.Abs(a.companySize - b.companySize);

    // calculate similarity per dimension
    double p1 = 1 / (1 + (d1 / unit1));
    double p2 = 1 / (1 + (d2 / unit2));
    double p3 = 1 / (1 + (d3 / unit3));

    return new JobCompare(p1, p2, p3);
}

public class JobCompare
{
    public double salarySimilarity;
    public double locationSimilarity;
    public double companySimilarity;

    public JobCompare(double salary, double location, double company)
    {
        salarySimilarity = salary;
        locationSimilarity = location;
        companySimilarity = company;
    }
}

public class Job
{
    public double salary;
    public Location location;
    public double companySize;
}
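The code above returns per-dimension similarities; one simple way to collapse them into a single fit percentage (this weighting scheme is my own assumption, not part of the answer) is a weighted average:

// Hypothetical weights: salary and location matter more than company size.
double OverallFit(JobCompare c) =>
    100.0 * (0.4 * c.salarySimilarity +
             0.4 * c.locationSimilarity +
             0.2 * c.companySimilarity);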

C# aggregation function for a non-linear series

I'm given two arrays, one representing a price, the other representing a number of units:
e.g.
decimal[] price = new decimal[] { 1.65m, 1.6m, 1.55m, 1.4m, 1.3m };
long[] quantity = new long[] { 5000, 10000, 12000, 20000, 50000 };
So the first 5000 units will cost 1.65 each, the next 10000 will cost 1.6 each, and so on...
It's pretty easy to get the average price with an aggregate function when you know the number of units you wish to order, e.g. average price for 7000 units = (5000/7000 * 1.65) + (2000/7000 * 1.6). However, I'm having trouble coming up with an algorithm for when the total unit amount is the unknown variable and we are given the target average price.
E.g. How many units would I have to order so the average unit price = 1.57
If you think about it geometrically, consider a chart showing the total price (ordinate) as a function of the total number of items bought (abscissa). The plot starts at (0, 0) (buying zero costs zero). First we get a straight line segment of slope 1.65 and horizontal width 5000. Then from the end-point of that comes a new segment of slope 1.6 and width 10000. The total plot is continuous and piecewise linear, with bends where the unit price changes.
Then to solve your problem, find the intersection with the line of equation y == 1.57 * x, i.e. the line starting at (0, 0) and having slope 1.57. For each of the segments (whose two endpoints you know), check if this segment meets y == 1.57 * x, and if it does, there's your solution.
If the numbers in your price array are decreasing, there can be at most one solution (given that 1.57 is strictly less than the first price, price[0]), the plot representing a concave function.
EDIT: I tried to code this geometry in C#. I didn't add checks that the prices are all positive and decreasing, and that the quantities are all positive. You must check that. Here's my code:
decimal[] price = { 1.65m, 1.6m, 1.55m, 1.4m, 1.3m, };
long[] quantity = { 5000, 10000, 12000, 20000, 50000, };
decimal desiredAverage = 1.57m;

int length = price.Length;
if (length != quantity.Length)
    throw new InvalidOperationException();

var abscissaValues = new long[length + 1];
var ordinateValues = new decimal[length + 1];
for (int i = 1; i <= length; ++i)
{
    for (int j = 0; j < i; ++j)
    {
        abscissaValues[i] += quantity[j];
        ordinateValues[i] += price[j] * quantity[j];
    }
} // calculation of plot complete

int segmentNumber = Enumerable.Range(1, length)
    .FirstOrDefault(i => ordinateValues[i] / abscissaValues[i] <= desiredAverage);
if (segmentNumber > 1)
{
    decimal x = (ordinateValues[segmentNumber - 1] * abscissaValues[segmentNumber]
                 - abscissaValues[segmentNumber - 1] * ordinateValues[segmentNumber])
              / (desiredAverage * (abscissaValues[segmentNumber] - abscissaValues[segmentNumber - 1])
                 - (ordinateValues[segmentNumber] - ordinateValues[segmentNumber - 1]));
    Console.WriteLine("Found solution x == " + x);
}
else
{
    Console.WriteLine("No solution");
}
I don't know if someone can write it more beautifully, but it seems to work. Output is:
Found solution x == 29705.882352941176470588235294
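A quick sanity check of that result (my own verification, using the cumulative totals from the arrays above):

// Cost of the first 27000 units (5000 + 10000 + 12000) plus the remainder at 1.4 each,
// divided by the total quantity, should come back to the target average of 1.57.
decimal x = 29705.882352941176470588235294m;
decimal totalCost = 5000m * 1.65m + 10000m * 1.6m + 12000m * 1.55m + (x - 27000m) * 1.4m;
Console.WriteLine(totalCost / x); // ~1.57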
I believe that's because there's no one answer, no one combination of prices which will result in one average, no closed-form equation. What we may be looking at is a variant of the Knapsack Problem. http://en.wikipedia.org/wiki/Knapsack_problem with a minimization of value instead of maximization.
EDIT: As correctly pointed out below, this is not a variant of the knapsack problem. There is a closed-form solution:
If T = total units bought,
1.57 = 1.55 * (12000/T) + 1.6 * ((T-12000)/T). Solve for T.
The starting price block (here 1.55) is the block just below the average price per unit given in the problem (here 1.57).

Help with dynamic range compression function (audio)

I am writing a C# function for doing dynamic range compression (an audio effect that basically squashes transient peaks and amplifies everything else to produce an overall louder sound). I have written a function that does this (I think):
(Figure: the compression transfer curve - image was hosted at http://www.freeimagehosting.net/uploads/feea390f84.jpg)
public static void Compress(ref short[] input, double thresholdDb, double ratio)
{
    double maxDb = thresholdDb - (thresholdDb / ratio);
    double maxGain = Math.Pow(10, -maxDb / 20.0);

    for (int i = 0; i < input.Length; i += 2)
    {
        // convert sample values to ABS gain and store original signs
        int signL = input[i] < 0 ? -1 : 1;
        double valL = (double)input[i] / 32768.0;
        if (valL < 0.0)
        {
            valL = -valL;
        }

        int signR = input[i + 1] < 0 ? -1 : 1;
        double valR = (double)input[i + 1] / 32768.0;
        if (valR < 0.0)
        {
            valR = -valR;
        }

        // calculate mono value and compress
        double val = (valL + valR) * 0.5;
        double posDb = -Math.Log10(val) * 20.0;
        if (posDb < thresholdDb)
        {
            posDb = thresholdDb - ((thresholdDb - posDb) / ratio);
        }

        // measure L and R sample values relative to mono value
        double multL = valL / val;
        double multR = valR / val;

        // convert compressed db value to gain and amplify
        val = Math.Pow(10, -posDb / 20.0);
        val = val / maxGain;

        // re-calculate L and R gain values relative to compressed/amplified
        // mono value
        valL = val * multL;
        valR = val * multR;

        double lim = 1.5; // determined by experimentation, with the goal
                          // being that the lines below should never (or rarely) be hit
        if (valL > lim)
        {
            valL = lim;
        }
        if (valR > lim)
        {
            valR = lim;
        }

        double maxval = 32000.0 / lim;

        // convert gain values back to sample values
        input[i] = (short)(valL * maxval);
        input[i] *= (short)signL;
        input[i + 1] = (short)(valR * maxval);
        input[i + 1] *= (short)signR;
    }
}
and I am calling it with threshold values between 10.0 db and 30.0 db and ratios between 1.5 and 4.0. This function definitely produces a louder overall sound, but with an unacceptable level of distortion, even at low threshold values and low ratios.
Can anybody see anything wrong with this function? Am I handling the stereo aspect correctly (the function assumes stereo input)? As I (dimly) understand things, I don't want to compress the two channels separately, so my code is attempting to compress a "virtual" mono sample value and then apply the same degree of compression to the L and R sample value separately. Not sure I'm doing it right, however.
I think part of the problem may be the "hard knee" of my function, which kicks in the compression abruptly when the threshold is crossed. I think I may need to use a "soft knee" like this:
(Figure: a soft-knee compression curve - image was hosted at http://www.freeimagehosting.net/uploads/4c1040fda8.jpg)
Can anybody suggest a modification to my function to produce the soft knee curve?
The open source Skype Voice Changer project includes a port to C# of a number of nice compressors written by Scott Stillwell, all with configurable parameters:
Fast attack compressor
Fairly childish (compressor limiter)
Event Horizon (peak eating limiter)
The first one looks like it has the capability to do soft-knee, although the parameter to do so is not exposed.
I think your basic understanding of how to do compression is wrong (sorry ;)). It's not about "compressing" individual sample values; that will radically change the waveform and produce severe harmonic distortions. You need to assess the input signal volume over many samples (I would have to Google for the correct formula), and use this to apply a much-more-gradually-changing multiplier to the input samples to generate output.
The DSP forum at kvraudio.com/forum might point you in the right direction if you have a hard time finding the usual techniques.
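To make the "slowly varying gain" idea concrete, here is a rough sketch of my own (not taken from the answers above) of a peak-envelope follower driving the gain instead of compressing each sample independently; the attack/release coefficients are illustrative only:

// Rough sketch: envelope follower + gain computer (hard knee, downward compression).
// The envelope tracks the signal level over time, so the gain changes slowly,
// which avoids the harmonic distortion caused by per-sample compression.
// (Makeup gain to bring the level back up is omitted for brevity.)
public static void CompressSmooth(short[] input, double thresholdDb, double ratio,
                                  double attackCoeff = 0.01, double releaseCoeff = 0.0005)
{
    double envelope = 0.0; // running peak estimate, 0..1

    for (int i = 0; i < input.Length; i += 2)
    {
        double l = input[i] / 32768.0;
        double r = input[i + 1] / 32768.0;
        double peak = Math.Max(Math.Abs(l), Math.Abs(r));

        // one-pole smoothing: fast attack, slow release
        double coeff = peak > envelope ? attackCoeff : releaseCoeff;
        envelope += coeff * (peak - envelope);

        // gain computer on the smoothed envelope (threshold given as positive dB below full scale)
        double levelDb = 20.0 * Math.Log10(Math.Max(envelope, 1e-6));
        double gainDb = 0.0;
        if (levelDb > -thresholdDb)
            gainDb = (-thresholdDb - levelDb) * (1.0 - 1.0 / ratio);

        double gain = Math.Pow(10.0, gainDb / 20.0);
        input[i]     = (short)Math.Max(short.MinValue, Math.Min(short.MaxValue, l * gain * 32767.0));
        input[i + 1] = (short)Math.Max(short.MinValue, Math.Min(short.MaxValue, r * gain * 32767.0));
    }
}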

How do I calculate PI in C#?

How can I calculate the value of PI using C#?
I was thinking it would be through a recursive function, if so, what would it look like and are there any math equations to back it up?
I'm not too fussy about performance, mainly how to go about it from a learning point of view.
If you want recursion:
PI = 2 * (1 + 1/3 * (1 + 2/5 * (1 + 3/7 * (...))))
This would become, after some rewriting:
PI = 2 * F(1);
with F(i):
double F(int i) {
    return 1 + i / (2.0 * i + 1) * F(i + 1);
}
Isaac Newton (you may have heard of him before ;) ) came up with this trick.
Note that I left out the end condition, to keep it simple. In real life, you kind of need one.
How about using:
double pi = Math.PI;
If you want better precision than that, you will need to use an algorithmic system and the Decimal type.
If you take a close look into this really good guide:
Patterns for Parallel Programming: Understanding and Applying Parallel Patterns with the .NET Framework 4
You'll find at Page 70 this cute implementation (with minor changes from my side):
// requires System.Threading.Tasks and System.Collections.Concurrent
static decimal ParallelPartitionerPi(int steps)
{
    decimal sum = 0.0m;
    decimal step = 1.0m / steps;
    object obj = new object();

    Parallel.ForEach(
        Partitioner.Create(0, steps),
        () => 0.0m,
        (range, state, partial) =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                decimal x = (i - 0.5m) * step;
                partial += 4.0m / (1.0m + x * x);
            }
            return partial;
        },
        partial => { lock (obj) sum += partial; });

    return step * sum;
}
There are a couple of really, really old tricks I'm surprised to not see here.
atan(1) == PI/4, so an old chestnut when a trustworthy arc-tangent function is
present is 4*atan(1).
A very cute, fixed-ratio estimate that makes the old Western 22/7 look like dirt is 355/113, which is good to six decimal places.
In some cases, this is even good enough for integer arithmetic: multiply by 355 then divide by 113.
355/113 is also easy to commit to memory (for some people anyway): count one, one, three, three, five, five and remember that you're naming the digits in the denominator and numerator (if you forget which triplet goes on top, a microsecond's thought is usually going to straighten it out).
Note that 22/7 gives you: 3.14285714, which is wrong at the thousandths.
355/113 gives you 3.14159292 which isn't wrong until the ten-millionths.
Acc. to /usr/include/math.h on my box, M_PI is #define'd as:
3.14159265358979323846
which is probably good out as far as it goes.
The lesson you get from estimating PI is that there are lots of ways of doing it,
none will ever be perfect, and you have to sort them out by intended use.
355/113 is an old Chinese estimate, and I believe it pre-dates 22/7 by many years. It was taught me by a physics professor when I was an undergrad.
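A trivial snippet to see these approximations side by side against Math.PI:

Console.WriteLine(Math.PI);             // ~3.14159265358979
Console.WriteLine(4 * Math.Atan(1.0));  // same value as Math.PI
Console.WriteLine(355.0 / 113.0);       // ~3.14159292 (good to the 6th decimal)
Console.WriteLine(22.0 / 7.0);          // ~3.14285714 (wrong from the 3rd decimal)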
Good overview of different algorithms:
Computing pi;
Gauss-Legendre-Salamin.
I'm not sure about the complexity claimed for the Gauss-Legendre-Salamin algorithm in the first link (I'd say O(N log^2(N) log(log(N)))).
I do encourage you to try it, though, the convergence is really fast.
Also, I'm not really sure why you would want to convert a quite simple procedural algorithm into a recursive one.
Note that if you are interested in performance, then working at a bounded precision (typically, requiring a 'double', 'float',... output) does not really make sense, as the obvious answer in such a case is just to hardcode the value.
What is PI? The circumference of a circle divided by its diameter.
In computer graphics you can plot/draw a circle with its centre at 0,0 from a initial point x,y, the next point x',y' can be found using a simple formula:
x' = x + y / h : y' = y - x' / h
h is usually a power of 2 so that the divide can be done easily with a shift (or by subtracting from the exponent on a double). h also wants to be the radius r of your circle. An easy starting point would be x = r, y = 0, and then to count c, the number of steps until x <= 0, to plot a quarter of a circle. PI is 4 * c / r or PI is 4 * c / h
Recursion to any great depth is usually impractical for a commercial program, but tail recursion allows an algorithm to be expressed recursively while implemented as a loop. Recursive search algorithms can sometimes be implemented using a queue rather than the process's stack; the search has to backtrack from a dead end and take another path - these backtrack points can be put in a queue, and multiple processes can dequeue the points and try other paths.
Calculate like this:
x = 1 - 1/3 + 1/5 - 1/7 + 1/9 (... etc as far as possible.)
PI = x * 4
You have got Pi !!!
This is the simplest method I know of.
The computed value slowly converges to the actual value of Pi (3.14159265...). The more terms you add, the better.
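A minimal sketch of that series in C# (my own; the term count is arbitrary):

// Gregory-Leibniz series: x = 1 - 1/3 + 1/5 - 1/7 + ..., then PI = 4 * x.
double x = 0.0;
for (int k = 0; k < 1_000_000; k++)
{
    x += (k % 2 == 0 ? 1.0 : -1.0) / (2 * k + 1);
}
Console.WriteLine(4 * x); // ~3.1415916 (converges very slowly)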
Here's a nice approach (from the main Wikipedia entry on pi); it converges much faster than the simple formula discussed above, and is quite amenable to a recursive solution if your intent is to pursue recursion as a learning exercise. (Assuming that you're after the learning experience, I'm not giving any actual code.)
The underlying formula is the same as above, but this approach averages the partial sums to accelerate the convergence.
Define a two parameter function, pie(h, w), such that:
pie(0,1) = 4/1
pie(0,2) = 4/1 - 4/3
pie(0,3) = 4/1 - 4/3 + 4/5
pie(0,4) = 4/1 - 4/3 + 4/5 - 4/7
... and so on
So your first opportunity to explore recursion is to code that "horizontal" computation as the "width" parameter increases (for "height" of zero).
Then add the second dimension with this formula:
pie(h, w) = (pie(h-1,w) + pie(h-1,w+1)) / 2
which is used, of course, only for values of h greater than zero.
The nice thing about this algorithm is that you can easily mock it up with a spreadsheet to check your code as you explore the results produced by progressively larger parameters. By the time you compute pie(10,10), you'll have an approximate value for pi that's good enough for most engineering purposes.
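If you just want something to check your own implementation against, here is a bare-bones (non-memoized) sketch of that recursion; the name Pie is mine:

// Pie(0, w) is the alternating sum 4/1 - 4/3 + 4/5 - ... with w terms;
// Pie(h, w) averages neighbouring values of the previous row.
static double Pie(int h, int w)
{
    if (h == 0)
    {
        double sum = 0.0;
        for (int k = 1; k <= w; k++)
            sum += (k % 2 == 1 ? 4.0 : -4.0) / (2 * k - 1);
        return sum;
    }
    return (Pie(h - 1, w) + Pie(h - 1, w + 1)) / 2.0;
}

Console.WriteLine(Pie(10, 10)); // close to 3.14159...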
Enumerable.Range(0, 100000000).Aggregate(0d, (tot, next) => tot += Math.Pow(-1d, next)/(2*next + 1)*4)
using System;

namespace Strings
{
    class Program
    {
        static void Main(string[] args)
        {
            /* decimal pie = 1;
               decimal e = -1;
            */

            var stopwatch = new System.Diagnostics.Stopwatch();
            stopwatch.Start(); // added this nice stopwatch start routine

            // leibniz formula in C# - code written completely by Todd Mandell 2014
            /*
            for (decimal f = (e += 2); f < 1000001; f++)
            {
                e += 2;
                pie -= 1 / e;
                e += 2;
                pie += 1 / e;
                Console.WriteLine(pie * 4);
            }
            decimal finalDisplayString = (pie * 4);
            Console.WriteLine("pie = {0}", finalDisplayString);
            Console.WriteLine("Accuracy resulting from approximately {0} steps", e / 4);
            */

            // Nilakantha formula - code written completely by Todd Mandell 2014
            // π = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - 4/(8*9*10) + 4/(10*11*12) - 4/(12*13*14) etc

            decimal pie = 0;
            decimal a = 2;
            decimal b = 3;
            decimal c = 4;
            decimal e = 1;

            for (decimal f = (e += 1); f < 100000; f++)
            // Increase the limit in "f < 100000" to increase the number of steps
            {
                pie += 4 / (a * b * c);
                a += 2;
                b += 2;
                c += 2;

                pie -= 4 / (a * b * c);
                a += 2;
                b += 2;
                c += 2;

                e += 1;
            }

            decimal finalDisplayString = (pie + 3);
            Console.WriteLine("pie = {0}", finalDisplayString);
            Console.WriteLine("Accuracy resulting from {0} steps", e);

            stopwatch.Stop();
            TimeSpan ts = stopwatch.Elapsed;
            Console.WriteLine("Calc Time {0}", ts);

            Console.ReadLine();
        }
    }
}
public static string PiNumberFinder(int digitNumber)
{
    string piNumber = "3,";
    int dividedBy = 11080585;
    int divisor = 78256779;
    int result;

    for (int i = 0; i < digitNumber; i++)
    {
        if (dividedBy < divisor)
            dividedBy *= 10;

        result = dividedBy / divisor;
        string resultString = result.ToString();
        piNumber += resultString;
        dividedBy = dividedBy - divisor * result;
    }

    return piNumber;
}
In any production scenario, I would compel you to look up the value, to the desired number of decimal points, and store it as a 'const' somewhere your classes can get to it.
(unless you're writing scientific 'Pi' specific software...)
Regarding...
... how to go about it from a learning point of view.
Are you trying to learn to program scientific methods, or to produce production software? I hope the community sees this as a valid question and not a nitpick.
In either case, I think writing your own Pi is a solved problem. Dmitry showed the 'Math.PI' constant already. Attack another problem in the same space! Go for generic Newton approximations or something slick.
#Thomas Kammeyer:
Note that Atan(1.0) is quite often hardcoded, so 4*Atan(1.0) is not really an 'algorithm' if you're calling a library Atan function (and quite a few already suggested indeed proceed by replacing Atan(x) by a series (or infinite product) for it, then evaluating it at x=1).
Also, there are very few cases where you'd need pi at more precision than a few tens of bits (which can be easily hardcoded!). I've worked on applications in mathematics where, to compute some (quite complicated) mathematical objects (which were polynomial with integer coefficients), I had to do arithmetic on real and complex numbers (including computing pi) with a precision of up to a few million bits... but this is not very frequent 'in real life' :)
I like this paper, which explains how to calculate π based on a Taylor series expansion for Arctangent.
The paper starts with the simple assumption that
Atan(1) = π/4 radians
Atan(x) can be iteratively estimated with the Taylor series
atan(x) = x - x^3/3 + x^5/5 - x^7/7 + x^9/9...
The paper points out why this is not particularly efficient and goes on to make a number of logical refinements in the technique. They also provide a sample program that computes π to a few thousand digits, complete with source code, including the infinite-precision math routines required.
The following link shows how to calculate the pi constant based on its definition as an integral, which can be written as a limit of a summation; it's very interesting:
https://sites.google.com/site/rcorcs/posts/calculatingthepiconstant
The file "Pi as an integral" explains the method used in that post.
First, note that C# can use the Math.PI field of the .NET framework:
https://msdn.microsoft.com/en-us/library/system.math.pi(v=vs.110).aspx
The nice feature here is that it's a full-precision double that you can either use, or compare with computed results. The tabs at that URL have similar constants for C++, F# and Visual Basic.
To calculate more places, you can write your own extended-precision code. One that is quick to code and reasonably fast and easy to program is:
Pi = 4 * [4 * arctan (1/5) - arctan (1/239)]
This formula and many others, including some that converge at amazingly fast rates, such as 50 digits per term, are at Wolfram:
Wolfram Pi Formulas
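For example, using only the double-precision library functions (this of course only recovers double precision, not the extended precision the formula is usually used for):

// Machin's formula: PI = 4 * (4 * arctan(1/5) - arctan(1/239))
double pi = 4 * (4 * Math.Atan(1.0 / 5) - Math.Atan(1.0 / 239));
Console.WriteLine(pi); // ~3.141592653589793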
PI (π) can be calculated by using infinite series. Here are two examples:
Gregory-Leibniz Series:
π/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - ...
C# method :
public static decimal GregoryLeibnizGetPI(int n)
{
    decimal sum = 0;
    decimal temp = 0;
    for (int i = 0; i < n; i++)
    {
        temp = 4m / (1 + 2 * i);
        sum += i % 2 == 0 ? temp : -temp;
    }
    return sum;
}
Nilakantha Series:
π = 3 + 4 / (2x3x4) - 4 / (4x5x6) + 4 / (6x7x8) - 4 / (8x9x10) + ...
C# method:
public static decimal NilakanthaGetPI(int n)
{
    decimal sum = 0;
    decimal temp = 0;
    decimal a = 2, b = 3, c = 4;
    for (int i = 0; i < n; i++)
    {
        temp = 4 / (a * b * c);
        sum += i % 2 == 0 ? temp : -temp;
        a += 2; b += 2; c += 2;
    }
    return 3 + sum;
}
The input parameter n for both functions represents the number of iterations.
The Nilakantha series converges more quickly than the Gregory-Leibniz series. The methods can be tested with the following code:
static void Main(string[] args)
{
    const decimal pi = 3.1415926535897932384626433832m;
    Console.WriteLine($"PI = {pi}");

    // Nilakantha Series
    int iterationsN = 100;
    decimal nilakanthaPI = NilakanthaGetPI(iterationsN);
    decimal CalcErrorNilakantha = pi - nilakanthaPI;
    Console.WriteLine($"\nNilakantha Series -> PI = {nilakanthaPI}");
    Console.WriteLine($"Calculation error = {CalcErrorNilakantha}");
    int numDecNilakantha = pi.ToString().Zip(nilakanthaPI.ToString(), (x, y) => x == y).TakeWhile(x => x).Count() - 2;
    Console.WriteLine($"Number of correct decimals = {numDecNilakantha}");
    Console.WriteLine($"Number of iterations = {iterationsN}");

    // Gregory-Leibniz Series
    int iterationsGL = 1000000;
    decimal GregoryLeibnizPI = GregoryLeibnizGetPI(iterationsGL);
    decimal CalcErrorGregoryLeibniz = pi - GregoryLeibnizPI;
    Console.WriteLine($"\nGregory-Leibniz Series -> PI = {GregoryLeibnizPI}");
    Console.WriteLine($"Calculation error = {CalcErrorGregoryLeibniz}");
    int numDecGregoryLeibniz = pi.ToString().Zip(GregoryLeibnizPI.ToString(), (x, y) => x == y).TakeWhile(x => x).Count() - 2;
    Console.WriteLine($"Number of correct decimals = {numDecGregoryLeibniz}");
    Console.WriteLine($"Number of iterations = {iterationsGL}");

    Console.ReadKey();
}
The resulting output shows that the Nilakantha series returns six correct decimals of PI with one hundred iterations, whereas the Gregory-Leibniz series returns five correct decimals of PI with one million iterations.
Here is a nice way:
Calculate the sum of 1/x^2 for x from 1 to whatever you want - the bigger the number, the better the result. Multiply the sum by 6 and take the square root (this works because the series converges to PI^2/6, the Basel problem).
Here is the code in C# (Main only):
static void Main(string[] args)
{
    double counter = 0;
    for (double i = 1; i < 1000000; i++)
    {
        counter = counter + (1 / (Math.Pow(i, 2)));
    }
    counter = counter * 6;
    counter = Math.Sqrt(counter);
    Console.WriteLine(counter);
}
public double PI = 22.0 / 7.0;
