Find maximum value of a continuous function at a specific resolution - c#

Imagine having a function that is continuous over the range [0.0, n]. Are there any algorithms to find the maximum value of the function, given a minimum step size s, more quickly than simple iteration? Simple iteration is straightforward to program, but its running time grows linearly with n / s.
double maxValue = 0;
double maxValueX = 0;
double s = 0.1 * n;
for (double x = 0.0; x <= n; x += s)
{
    double value = someFunction(x);
    if (value > maxValue)
    {
        maxValue = value;
        maxValueX = x;
    }
}
I have tried this approach, which is much quicker, but I don't know whether it will get stuck on local maxima.
double min = 0;
double max = n;
int steps = 10;
double increment = (max - min) / steps;
while (increment > s)
{
    double maxValue = 0;
    double maxValueX = min;
    for (double x = min; x <= max; x += increment)
    {
        double value = someFunction(x);
        if (value > maxValue)
        {
            maxValue = value;
            maxValueX = x;
        }
    }
    min = Math.Max(maxValueX - increment, 0.0);
    max = Math.Min(maxValueX + increment, n);
    increment = (max - min) / steps;
}

Suppose there was such an algorithm, that is, an algorithm that can find the maximum of an approximation of a continuous function without looking at every point of the approximation.
Now choose a positive integer n and any finite sequence of n doubles you care to name. There are infinitely many continuous functions f such that f(i) is equal to the ith double in the sequence, and less than or equal to the largest of them everywhere. Choose one of them.
Now use your algorithm to find the largest double of the n doubles. By assumption, it examines fewer than n of the doubles. Let's suppose it examines all of them except the kth double.
Now suppose we create a new sequence identical to the first one except that the kth double is the maximum. Is the algorithm magical, that when given an input that it does not read, it changes its output?
Now is it clear why there is no such algorithm? If you want to find the longest piece of string in the drawer, you're going to have to look at all of them.
The continuity of the function doesn't help you at all. All continuity gives you is a guarantee that given a point on the function, you can find another point on the function that is as close to the first point as you like. That tells you nothing about the maximum value taken on by the function. (Well, OK, it tells you something. On a closed bounded interval it implies that a maximum exists, which is something. But it doesn't help you find it.)
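To make the adversary concrete, here is a sketch of a continuous function that hides its maximum in a spike narrower than any fixed sampling step (the peak location and width are arbitrary choices, not anything from the question):

// A continuous "needle": a triangular spike of height 1 centered at c, zero elsewhere.
// Any fixed-step sampler with a step larger than w will almost certainly walk right past it.
static double Needle(double x, double c = 0.7351, double w = 1e-9)
{
    return Math.Max(0.0, 1.0 - Math.Abs(x - c) / w);
}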

Given that the function you are talking about is code, then no: that function could return an arbitrarily large value at any point.
If you can make assumptions about the function (like a maximum rate of change), then you can optimize.
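As a sketch of that idea, assuming a known Lipschitz bound L on the rate of change (an assumption the question does not supply): once you have sampled f(x), no point within (bestValue - f(x)) / L of x can beat the best value found so far, so you can skip ahead.

// Minimal sketch: find the max of f on [0, n] at resolution s, assuming
// |f(x) - f(y)| <= L * |x - y| (a Lipschitz bound supplied by the caller).
// Skips stretches that provably cannot beat the current best.
static (double x, double value) MaxWithLipschitzBound(
    Func<double, double> f, double n, double s, double L)
{
    double bestX = 0.0, bestValue = f(0.0);
    double x = s;
    while (x <= n)
    {
        double value = f(x);
        if (value > bestValue) { bestValue = value; bestX = x; }
        // From x, f can rise by at most L per unit distance, so nothing
        // within (bestValue - value) / L of x can exceed bestValue.
        double skip = (bestValue - value) / L;
        x += Math.Max(s, skip);
    }
    return (bestX, bestValue);
}

When the function is flat and far below the running maximum, the skip can be much larger than s, which is where the savings come from.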

Related

How can I modify this random double generator to return more zeros

I have this extension method that, given a minimum and maximum double, generates a double between them.
public static double NextDouble(this Random random, double minValue, double maxValue)
{
    return random.NextDouble() * (maxValue - minValue) + minValue;
}
I mainly use this extension method to generate random dollar amounts, and sometimes 0 dollars is an OK value! That being said, I need to increase the odds of returning a 0. More specifically, if I try the following:
Random rando = new Random();
List<double> doubles = new List<double>();
for (int i = 0; i < 100000; i++)
{
    double d = rando.NextDouble(0, .25);
    doubles.Add(d);
    Console.WriteLine(d.ToString());
}
I don't get a single zero.
A less-than-ideal solution I thought of is to just catch every value less than 1 and return 0 instead.
public static double NextDouble(this Random random, double minValue, double maxValue)
{
    double d = random.NextDouble() * (maxValue - minValue) + minValue;
    if (d < 1)
    {
        return 0;
    }
    return d;
}
This obviously removes the ability to return values less than 1 (.25, .50, .125, etc.). I'm looking for some clever ways around this!
A simple way of approaching this is to generate two random numbers: the first to determine if you return 0, and if not, you return the second number. Say for instance you want a 5% chance of returning zero. Generate a random integer between 1 and 100 inclusive, and if it's 5 or less, simply return zero.
if (minValue <= 0.0 && 0.0 <= maxValue)
{
    var shouldWeReturnZero = rando.Next(1, 101) <= 5;
    if (shouldWeReturnZero)
        return 0;
}
Otherwise, generate the actual random number using the code you already have.
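Putting the two pieces together into one extension method (the zeroChance parameter is my addition, not from the answer):

public static double NextDouble(this Random random, double minValue, double maxValue, double zeroChance)
{
    // Only short-circuit to zero when zero actually lies inside the range
    if (minValue <= 0.0 && 0.0 <= maxValue && random.NextDouble() < zeroChance)
        return 0.0;
    return random.NextDouble() * (maxValue - minValue) + minValue;
}

Calling rando.NextDouble(0, .25, 0.05) then returns 0 about 5% of the time, and an ordinary uniform draw otherwise.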
What you might want to do, instead of generating a random double, is to generate a random int and let it represent cents. That way, 0 will be just as likely as any other amount of cents. When showing the amounts to the user, you can display them as dollars.
var random = new Random();
for (var i = 0; i < 1000; i++)
{
    var cents = random.Next(0, 200);
    Console.WriteLine("Dollar amount: ${0:0}.{1:00}", cents / 100, cents % 100);
}
Console.ReadKey(false);
So the reason you are getting no zeroes is that the probability of generating exactly zero as a double is vanishingly small. .NET's Random.NextDouble() is built on a 31-bit integer sample, so the chance of it producing exactly 0.0 is on the order of 1 in 2^31. If you want to know more, check out https://en.wikipedia.org/wiki/Single-precision_floating-point_format and how floating-point numbers are constructed in memory.
In your case I would create a floor function that, instead of flooring to integers, floors in steps of 0.25. A floor function takes any floating-point number and removes the decimals, so for example 1.7888 becomes 1. You want something a bit less coarse, so an input of 1.7888 would give back 1.75.
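A minimal sketch of that stepped floor, with the step size as a parameter (the name FloorToStep is mine):

// Floor x down to the nearest multiple of step, e.g. FloorToStep(1.7888, 0.25) == 1.75
static double FloorToStep(double x, double step)
{
    return Math.Floor(x / step) * step;
}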

Finding the point corresponding to an arc length on the ellipse iteratively

Given a distance (arc length) anticlockwise away from a known point (P_0) on an ellipse, I am trying to find the point at that distance (P_1).
Since I cannot evaluate the t corresponding to a specific arc length analytically, I am forced to iterate through each discrete point until I arrive at an answer.
My initial code is something like this:
// t_0 is the parametric t on the ellipse corresponding to P_0
Point GetPos(double distance, double t_0, int res = 5000, double epsilon = 0.1)
{
    for (int i = 0; i < res; ++i)
    {
        // The minus is to make the point move in a clockwise direction
        double t = t_0 - (double)i / res * t_0;
        // Find the integral from t to t_0 to get the arc length
        // If the arc length is within epsilon of distance, return the corresponding point
    }
}
Unfortunately, this code may not converge if the arc length at some t value just barely overshoots the epsilon tolerance. And since the loop only ever decreases t, the overshoot is never corrected.
I was thinking of modelling this as a control problem, using something like a PID controller. However, I realised that since the set point (which is my desired arc length), and my output (which is essentially the parametric t), are referring to different variables, I do not know how to proceed.
Is there a better method of solving this kind of problem or am I missing something from my current approach?
After some thought I used a binary search method instead, since a PID controller is difficult to tune and usually does not converge fast enough for all cases of the ellipses under consideration.
double max = t_0;
double min = 0;
double result = 0;
double mid = (max + min) / 2.0;
while (Math.Abs(distance - result) > epsilon)
{
    result = // Arc length from t_0 to mid
    if (result > distance)
    {
        min = mid; // overshot: move mid back toward t_0 (shorter arc)
    }
    else
    {
        max = mid; // undershot: move mid toward 0 (longer arc)
    }
    mid = (max + min) / 2.0;
}
// Return the point at t = mid
The binary search works because the arc length is monotonic in t, so the search is always over an ordered range (from t_0 down to 0).
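For the elided arc-length evaluation: if the ellipse is parametrized as (a*cos t, b*sin t) (an assumption, since the question never states its parametrization), the arc length from t1 to t2 is the integral of sqrt(a^2 sin^2 t + b^2 cos^2 t), which has no closed form but is easy to approximate numerically, for example with composite Simpson's rule:

// Arc length of the ellipse (a*cos(t), b*sin(t)) from t1 to t2 (t1 <= t2),
// approximated with composite Simpson's rule over n subintervals (n even).
static double ArcLength(double a, double b, double t1, double t2, int n = 256)
{
    Func<double, double> speed = t =>
        Math.Sqrt(a * a * Math.Sin(t) * Math.Sin(t) + b * b * Math.Cos(t) * Math.Cos(t));
    double h = (t2 - t1) / n;
    double sum = speed(t1) + speed(t2);
    for (int i = 1; i < n; i++)
        sum += speed(t1 + i * h) * (i % 2 == 1 ? 4 : 2);
    return sum * h / 3.0;
}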

Adding an "average" parameter to .NET's Random.Next() to curve results

I'd like to be able to add an "average" parameter to Random.Next(Lower, Upper). This method would have min, max, and average parameters. I created a method like this a while back for testing (it used lists and was horrible), so I'd like some ideas on how to write a correct implementation.
The reason for having this functionality is for many procedural/random events in my game. Say you want trees to be 10 units tall most of the time, but still able to be as low as 5 or as tall as 15. A plain Random.Next(5, 15) would return results spread uniformly, but this method would curve its results into more of a bell shape: 10 would be the most common, and values going out in each direction would be less common. Moving the average down to 7, for example, would make relatively small trees (or whatever this is being used on), while large ones would still be possible, just uncommon.
Previous method (pseudo-code-ish)
Loop from min to max
    Numbers closer to the average are added to the list more times
A random element is then selected from the list; elements closer to the average appear more often, so they are more likely to be chosen.
Okay, so that's like throwing a bunch of candies in a bag and picking a random one. Yeah, slow. What are your thoughts on improving this?
Illustration: (Not exactly accurate but you see the idea)
NOTE: Many people have suggested a bell curve, but the question is how to be able to change the peak of the curve to favor one side in this sense.
I'm expanding on the idea of generating n random numbers, and taking their average to get a bell-curve effect. The "tightness" parameter controls how steep the curve is.
Edit: Summing a set of random points to get a "normal" distribution is supported by the Central Limit Theorem. Using a bias function to sway results in a particular direction is a common technique, but I'm no expert there.
To address the note at the end of your question, I'm skewing the curve by manipulating the "inner" random number. In this example, I'm raising it to the exponent you provide. Since a Random returns values less than one, raising it to any power will still never be more than one. But the average skews towards zero, as squares, cubes, etc of numbers less than one are even smaller than the base number. exp = 1 has no skew, whereas exp = 4 has a pretty significant skew.
private Random r = new Random();

public double RandomDist(double min, double max, int tightness, double exp)
{
    double total = 0.0;
    for (int i = 1; i <= tightness; i++)
    {
        // Skew each uniform sample toward zero before averaging
        total += Math.Pow(r.NextDouble(), exp);
    }
    return ((total / tightness) * (max - min)) + min;
}
I ran trials for different values for exp, generating 100,000 integers between 0 and 99. Here's how the distributions turned out.
I'm not sure how the peak relates to the exp value, but the higher the exp, the lower the peak appears in the range.
You could also reverse the direction of the skew by changing the line in the inside of the loop to:
total += (1 - Math.Pow(r.NextDouble(), exp));
...which would give the bias on the high side of the curve.
Edit: So, how do we know what to make "exp" in order to get the peak where we want it? That's a tricky one, and could probably be worked out analytically, but I'm a developer, not a mathematician. So, applying my trade, I ran lots of trials, gathered peak data for various values of exp, and ran the data through the cubic fit calculator at Wolfram Alpha to get an equation for exp as a function of peak.
Here's a new set of functions which implement this logic. The GetExp(...) function implements the equation found by WolframAlpha.
RandomBiasedPow(...) is the function of interest. It returns a random number in the specified ranges, but tends towards the peak. The strength of that tendency is governed by the tightness parameter.
private Random r = new Random();

public double RandomNormal(double min, double max, int tightness)
{
    double total = 0.0;
    for (int i = 1; i <= tightness; i++)
    {
        total += r.NextDouble();
    }
    return ((total / tightness) * (max - min)) + min;
}

public double RandomNormalDist(double min, double max, int tightness, double exp)
{
    double total = 0.0;
    for (int i = 1; i <= tightness; i++)
    {
        total += Math.Pow(r.NextDouble(), exp);
    }
    return ((total / tightness) * (max - min)) + min;
}

public double RandomBiasedPow(double min, double max, int tightness, double peak)
{
    // Calculate skewed normal distribution, skewed by Math.Pow(...), specifying where in the range the peak is
    // NOTE: This peak will yield unreliable results in the top 20% and bottom 20% of the range.
    // To peak at extreme ends of the range, consider using a different bias function
    double total = 0.0;
    double scaledPeak = (peak - min) / (max - min); // map the peak into the 0..1 range
    if (scaledPeak < 0.2 || scaledPeak > 0.8)
    {
        throw new ArgumentException("Peak cannot be in bottom 20% or top 20% of range.");
    }
    double exp = GetExp(scaledPeak);
    for (int i = 1; i <= tightness; i++)
    {
        // Bias the random number to one side or another, but keep in the range of 0 - 1
        // The exp parameter controls how far to bias the peak from normal distribution
        total += BiasPow(r.NextDouble(), exp);
    }
    return ((total / tightness) * (max - min)) + min;
}

public double GetExp(double peak)
{
    // Get the exponent necessary for BiasPow(...) to result in the desired peak
    // Based on empirical trials, and curve fit to a cubic equation, using WolframAlpha
    return -12.7588 * Math.Pow(peak, 3) + 27.3205 * Math.Pow(peak, 2) - 21.2365 * peak + 6.31735;
}

public double BiasPow(double input, double exp)
{
    return Math.Pow(input, exp);
}
Here is a histogram using RandomBiasedPow(0, 100, 5, peak), with the various values of peak shown in the legend. I rounded down to get integers between 0 and 99, set tightness to 5, and tried peak values between 20 and 80. (Things get wonky at extreme peak values, so I left that out, and put a warning in the code.) You can see the peaks right where they should be.
Next, I tried boosting Tightness to 10...
Distribution is tighter, and the peaks are still where they should be. It's pretty fast too!
Here's a simple way to achieve this. Since you already have answers detailing how to generate normal distributions, and there are plenty of resources on that, I won't reiterate that. Instead I'll refer to a method I'll call GetNextNormal() which should generate a value from a normal distribution with mean 0 and standard deviation 1.
public int Next(int min, int max, int center)
{
    double rand = GetNextNormal(); // standard normal: mean 0, std dev 1
    if (rand >= 0)
        return center + (int)(rand * (max - center));
    return center + (int)(rand * (center - min));
}
(This can be simplified a little, I've written it that way for clarity)
For a rough image of what this is doing, imagine two normal distributions. They're both centered around your center, but for one the min is one standard deviation away, to the left, and for the other, the max is one standard deviation away, to the right. Now imagine chopping them both in half at the center. On the left, you keep the one with the standard deviation corresponding to min, and on the right, the one corresponding to max.
Of course, normal distributions aren't guaranteed to stay within one standard deviation, so there are two things you probably want to do:
Add an extra parameter which controls how tight the distribution is
If you want min and max to be hard limits, you will have to add rejection for values outside those bounds.
A complete method, with those two additions (again keeping everything as ints for now), might look like;
public int Next(int min, int max, int center, int tightness)
{
    int candidate;
    do
    {
        // Draw a fresh sample on each attempt; otherwise rejection would loop forever
        double rand = GetNextNormal();
        if (rand >= 0)
            candidate = center + (int)(rand * (max - center) / tightness);
        else
            candidate = center + (int)(rand * (center - min) / tightness);
    } while (candidate < min || candidate > max);
    return candidate;
}
If you graph the results of this (especially a float/double version), it won't be the most beautiful distribution, but it should be adequate for your purposes.
EDIT
Above I said the results of this aren't particularly beautiful. To expand on that, the most glaring 'ugliness' is a discontinuity at the center point, due to the height of the peak of a normal distribution depending on its standard deviation. Because of this, the distribution you'll end up with will look something like this:
(For min 10, max 100 and center point 70, using a 'tightness' of 3)
So while the probability of a value below the center is equal to the probability above, results will be much more tightly "bunched" around the average on one side than the other. If that's too ugly for you, or you think the results of generating features by a distribution like that will seem too unnatural, we can add an additional modification, weighing which side of the center is picked by the proportions of the range to the left or right of center. Adding that to the code (with the assumption you have access to a Random which I've just called RandomGen) we get:
public int Next(int min, int max, int center, int tightness)
{
    int candidate;
    do
    {
        // Draw a fresh sample on each attempt; otherwise rejection would loop forever
        double rand = Math.Abs(GetNextNormal());
        if (ChooseSide(min, max, center))
            candidate = center + (int)(rand * (max - center) / tightness);
        else
            candidate = center - (int)(rand * (center - min) / tightness);
    } while (candidate < min || candidate > max);
    return candidate;
}

public bool ChooseSide(int min, int max, int center)
{
    return RandomGen.Next(min, max) >= center;
}
For comparison, the distribution this will produce with the same min, max, center and tightness is:
As you can see, this is now continuous in frequency, as well as the first derivative (giving a smooth peak). The disadvantage to this version over the other is now you're more likely to get results on one side of the center than the other. The center is now the modal average, not the mean. So it's up to you whether you prefer a smoother distribution or having the center be the true mean of the distribution.
Since you are looking for a normal-ish distribution with a value around a point, within bounds, why not use Random instead to give you two values that you then use to walk a distance from the middle? The following yields what I believe you need:
// NOTE: scoped outside of the function to be random
Random rnd = new Random();

int GetNormalizedRandomValue(int mid, int maxDistance)
{
    var distance = rnd.Next(0, maxDistance + 1);
    var isPositive = (rnd.Next() % 2) == 0;
    if (!isPositive)
    {
        distance = -distance;
    }
    return mid + distance;
}
Plugging in http://www.codeproject.com/Articles/25172/Simple-Random-Number-Generation makes this easier and correctly normalized:
int GetNormalizedRandomValue(int mid, int maxDistance)
{
    int distance;
    do
    {
        distance = (int)((SimpleRNG.GetNormal() / 5) * maxDistance);
    } while (Math.Abs(distance) > maxDistance); // reject values outside the range on either side
    return mid + distance;
}
I would do something like this:
compute a uniformly distributed double
using that, apply the inverse of the normal cumulative distribution function (the quantile function, the one that maps [0,1] back through the accumulated probabilities) or something similar to compute the desired value; e.g. you can slightly adjust the normal distribution to take not just an average and a stddev/variance, but an average and two such values, to take care of min/max
round to int and clamp to min, max, etc.
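A minimal sketch of that recipe, leaning on MathNet.Numerics for the inverse CDF; the method name and the three-sigma placement of min/max are my assumptions:

using MathNet.Numerics.Distributions;

static int NextCurved(Random rnd, int min, int max, int average)
{
    // Push a uniform draw through the normal quantile function (inverse CDF),
    // with the std dev chosen so min/max sit about three sigma out (an assumption)
    double sigma = Math.Min(average - min, max - average) / 3.0;
    double value = Normal.InvCDF(average, sigma, rnd.NextDouble());
    value = Math.Max(min, Math.Min(max, value)); // clamp the tails back into range
    return (int)Math.Round(value);
}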
You have two choices here:
Sum up N random numbers drawn from (0, 1/N), which gathers the results around 0.5, and then scale the result between x_min and x_max. The count N controls how narrow the distribution is: the higher the count, the narrower the results.
Random rnd = new Random();
int N = 10;
double r = 0;
for (int i = 0; i < N; i++) { r += rnd.NextDouble() / N; }
double x = x_min + (x_max - x_min) * r;
Use the actual normal distribution with a mean and a standard deviation. This will not guarantee a minimum or maximum though.
private Random rnd = new Random();

public double RandomNormal(double mu, double sigma)
{
    return NormalDistribution(rnd.NextDouble(), mu, sigma);
}

public double RandomNormal()
{
    return RandomNormal(0d, 1d);
}

/// <summary>
/// Normal distribution
/// </summary>
/// <param name="probability">probability value 0..1</param>
/// <param name="mean">mean value</param>
/// <param name="sigma">std. deviation</param>
/// <returns>A normally distributed value</returns>
public double NormalDistribution(double probability, double mean, double sigma)
{
    return mean + sigma * NormalDistribution(probability);
}

/// <summary>
/// Normal distribution
/// </summary>
/// <param name="probability">probability value 0.0 to 1.0</param>
/// <see cref="NormalDistribution(double,double,double)"/>
public double NormalDistribution(double probability)
{
    return Math.Sqrt(2) * InverseErrorFunction(2 * probability - 1);
}
public double InverseErrorFunction(double P)
{
double Y, A, B, X, Z, W, WI, SN, SD, F, Z2, SIGMA;
const double A1=-.5751703, A2=-1.896513, A3=-.5496261E-1;
const double B0=-.1137730, B1=-3.293474, B2=-2.374996, B3=-1.187515;
const double C0=-.1146666, C1=-.1314774, C2=-.2368201, C3=.5073975e-1;
const double D0=-44.27977, D1=21.98546, D2=-7.586103;
const double E0=-.5668422E-1, E1=.3937021, E2=-.3166501, E3=.6208963E-1;
const double F0=-6.266786, F1=4.666263, F2=-2.962883;
const double G0=.1851159E-3, G1=-.2028152E-2, G2=-.1498384, G3=.1078639E-1;
const double H0=.9952975E-1, H1=.5211733, H2=-.6888301E-1;
X=P;
SIGMA=Math.Sign(X);
if(P<-1d||P>1d)
throw new System.ArgumentException();
Z=Math.Abs(X);
if(Z>.85)
{
A=1-Z;
B=Z;
W=Math.Sqrt(-Math.Log(A+A*B));
if(W>=2.5)
{
if(W>=4.0)
{
WI=1.0/W;
SN=((G3*WI+G2)*WI+G1)*WI;
SD=((WI+H2)*WI+H1)*WI+H0;
F=W+W*(G0+SN/SD);
}
else
{
SN=((E3*W+E2)*W+E1)*W;
SD=((W+F2)*W+F1)*W+F0;
F=W+W*(E0+SN/SD);
}
}
else
{
SN=((C3*W+C2)*W+C1)*W;
SD=((W+D2)*W+D1)*W+D0;
F=W+W*(C0+SN/SD);
}
}
else
{
Z2=Z*Z;
F=Z+Z*(B0+A1*Z2/(B1+Z2+A2/(B2+Z2+A3/(B3+Z2))));
}
Y=SIGMA*F;
return Y;
}
Here is my solution. The MyRandom class features an equivalent function to Next() with three additional parameters. center and span define the desirable range [center - span, center + span], and retry is the retry count; each retry gives another chance to land inside the desired range, so the probability of ending up outside it shrinks with every additional attempt.
static void Main()
{
    MyRandom myRnd = new MyRandom();
    List<int> results = new List<int>();
    Console.WriteLine("123456789012345\r\n");
    int bnd = 30;
    for (int ctr = 0; ctr < bnd; ctr++)
    {
        int nextAvg = myRnd.NextAvg(5, 16, 10, 2, 2);
        results.Add(nextAvg);
        Console.WriteLine(new string((char)9608, nextAvg)); // draw a bar of block characters
    }
    Console.WriteLine("\r\n" + String.Format("Out of range: {0}%", results.Where(x => x < 8 || x > 12).Count() * 100 / bnd)); // calculate out-of-range percentage
    Console.ReadLine();
}
class MyRandom : Random
{
    public MyRandom() { }

    public int NextAvg(int min, int max, int center, int span, int retry)
    {
        int left = (center - span);
        int right = (center + span);
        if (left < min || right >= max) // the span must fit inside [min, max)
        {
            throw new ArgumentException();
        }
        int next = this.Next(min, max);
        int ctr = 0;
        while (++ctr <= retry && (next < left || next > right))
        {
            next = this.Next(min, max);
        }
        return next;
    }
}
Is there any reason that the distribution must actually be a bell curve? For example, using:
public double RandomDist(double min, double max, double average)
{
    var rnd = new Random(); // ideally reuse a single instance
    double n = rnd.NextDouble();
    if (n < 0.75)
    {
        // Rising (triangular) density from min up to the mode at average
        return Math.Sqrt(n * 4 / 3) * (average - min) + min;
    }
    else
    {
        // Falling density from the mode at average down to max
        return (1 - Math.Sqrt(4 - n * 4)) * (max - average) + average;
    }
}
will give a number between min and max, with the mode at average.
You could use the Normal distribution class from MathNet.Numerics (mathdotnet.com).
An example of its use:
// Distribution with mean = 10, stddev = 1.25 (5 ~ 15 99.993%)
var dist = new MathNet.Numerics.Distributions.Normal(10, 1.25);
var samples = dist.Samples().Take(10000);
Assert.True(samples.Average().AlmostEqualInDecimalPlaces(10, 3));
You can adjust the spread by changing the standard deviation (the 1.25 I used). The only problem is that it will occasionally give you values outside of your desired range, so you'd have to reject those. If you want something more skewed one way or the other, you could try other distribution functions from the library too.
Update - Example class:
public class Random
{
    MathNet.Numerics.Distributions.Normal _dist;
    int _min, _max, _mean;

    public Random(int mean, int min, int max)
    {
        _mean = mean;
        _min = min;
        _max = max;
        var stddev = Math.Min(Math.Abs(mean - min), Math.Abs(max - mean)) / 3.0;
        _dist = new MathNet.Numerics.Distributions.Normal(mean, stddev);
    }

    public int Next()
    {
        int next;
        do
        {
            next = (int)_dist.Sample();
        } while (next < _min || next > _max);
        return next;
    }

    public static int Next(int mean, int min, int max)
    {
        return new Random(mean, min, max).Next();
    }
}
Not sure this is what you want, but here is a way to draw a random number with a distribution which is uniform from min to avg and from avg to max while ensuring that the mean equals avg.
Assume probability p for a draw from [min, avg] and probability 1-p for a draw from [avg, max]. The expected value is p*(min+avg)/2 + (1-p)*(avg+max)/2 = p*min/2 + avg/2 + (1-p)*max/2. Setting this equal to avg and solving for p gives p = (max-avg)/(max-min).
The generator works as follows: draw a random number in [0 1]. If less than p, draw a random number from [min avg]; otherwise, draw one from [avg max].
The plot of the probability is piecewise constant, p from min to avg and 1-p from avg to max. Extreme values are not penalized.
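A minimal sketch of that generator (the method name and signature are mine):

static double NextPiecewiseUniform(Random rnd, double min, double max, double avg)
{
    // Probability of drawing from the lower piece, chosen so the mean equals avg
    double p = (max - avg) / (max - min);
    if (rnd.NextDouble() < p)
        return min + rnd.NextDouble() * (avg - min); // uniform on [min, avg]
    return avg + rnd.NextDouble() * (max - avg);     // uniform on [avg, max]
}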

How to initialize an array with numbers separated by a specific interval in C#

I want to create an array containing values from 0 to 1 with interval of 0.1. I can use:
float[] myArray = new float[10];
float increment = 0.1f;
for (int i = 0; i < 10; i++)
{
    myArray[i] = increment;
    increment += 0.1f;
}
I was wondering whether there is a function like Enumerable.Range that permits to specify also the increment interval.
An interesting fact is that every answer posted so far has fixed the bug in your proposed code, but only one has called out that they've done so.
Binary floating point numbers have representation error when dealing with any quantity that is not a fraction whose denominator is an exact power of two. (3.0/4.0 is representable because the denominator is a power of two; 1.0/10.0 is not.)
Therefore, when you say:
for (int i = 0; i < 10; i++)
{
    myArray[i] = increment;
    increment += 0.1;
}
You are not actually incrementing "increment" by 1.0/10.0. You are incrementing it by the closest representable fraction that has an exact power of two on the bottom. So in fact this is equivalent to:
for (int i = 0; i < 10; i++)
{
    myArray[i] = increment;
    increment += (exactly_one_tenth + small_representation_error);
}
So, what is the value of the tenth increment? Clearly it is 10 * (exactly_one_tenth + small_representation_error) which is obviously equal to exactly_one + 10 * small_representation_error. You have multiplied the size of the representation error by ten.
Any time you repeatedly add together two floating point numbers, each subsequent addition increases the total representation error of the sum slightly and that adds up, literally, to a potentially large error. In some cases where you are summing thousands or millions of small numbers the error can become far larger than the actual total.
The far better solution is to do what everyone else has done. Recompute the fraction from integers every time. That way each result gets its own small representation error; it does not accumulate the representation errors of previously computed results.
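A quick sketch of the difference (the exact accumulated error may vary, but accumulation versus recomputation is the point):

// Accumulating: each += adds another copy of 0.1f's representation error.
float accumulated = 0f;
for (int i = 0; i < 10; i++)
    accumulated += 0.1f;
Console.WriteLine(accumulated == 1.0f); // almost certainly False (prints something like 1.0000001)

// Recomputing from integers: the result carries only a single rounding error.
float recomputed = 10 / 10f;
Console.WriteLine(recomputed == 1.0f);  // True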
Ugly, but...
Enumerable.Range(0,10).Select(i => i/10.0).ToArray();
No, there's no enumerable range that allows you to do that, but you could always divide by 10:
foreach (int i in Enumerable.Range(0, 10))
    array[i] = (i + 1) / 10.0f;
Note that this avoids the error that will accumulate if you repeatedly sum 0.1f. For example, if you sum the 10 elements of myArray from your sample code, you get a value closer to 5.50000048 than to 5.5.
Here is one way:
Enumerable.Range(1,10).Select(i => i /10.0)
Well you could use this:
Enumerable.Range(1,10).Select(x => x / 10.0).ToArray()
Not sure if that's better though.

Division to the nearest 1 decimal place without floating point math?

I am having some speed issues with my C# program and identified that this percentage calculation is causing a slow down. The calculation is simply n/d * 100. Both the numerator and denominator can be any integer number. The numerator can never be greater than the denominator and is never negative. Therefore, the result is always from 0-100. Right now, this is done by simply using floating point math and is somewhat slow, since it's being calculated tens of millions of times. I really don't need anything more accurate than to the nearest 0.1 percent. And, I just use this calculated value to see if it's bigger than a fixed constant value. I am thinking that everything should be kept as an integer, so the range with 0.1 accuracy would be 0-1000. Is there some way to calculate this percentage without floating point math?
Here is the loop that I am using with calculation:
for (int i = 0; i < simulationList.Count; i++)
{
    for (int j = i + 1; j < simulationList.Count; j++)
    {
        int matches = GetMatchCount(simulationList[i], simulationList[j]);
        if ((float)matches / (float)simulationList[j].Catchments.Count > thresPercent)
        {
            simulationList[j].IsOverThreshold = true;
        }
    }
}
Instead of n/d > c, you can use n > d * c (supposing that d > 0).
(c is the constant value you are comparing to.)
This way you don't need division at all.
However, watch out for overflow.
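Applied to the loop from the question, with the threshold expressed in tenths of a percent (0-1000) as the question suggests, a sketch (thresPermille is my name for that integer threshold):

// matches / count > thresPermille / 1000  ==>  matches * 1000 > count * thresPermille
// Casting to long guards against overflow when count is large.
int thresPermille = 905; // e.g. 90.5%
if ((long)matches * 1000 > (long)simulationList[j].Catchments.Count * thresPermille)
{
    simulationList[j].IsOverThreshold = true;
}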
If your units are in tenths instead of ones, then you can get your 0.1 accuracy using integer arithmetic:
Instead of:
for (...)
{
    float n = ...;
    float d = ...;
    if (n / d > 1.4) // greater than 140% ?
...do something like:
for (...)
{
    int n = 10 * ...;
    int d = ...;
    if (n / d > 14) // greater than 140% ?
Instead of writing
if ((float)matches / (float)simulationList[j].Catchments.Count > thresPercent)
write this:
if (matches * thresPercent_Denominator > simulationList[j].Catchments.Count * thresPercent_Numerator)
In this way, you get rid of the floating points.
Note: thresPercent can be expressed as thresPercent_Numerator / thresPercent_Denominator, as long as it is a rational number. I think this is the optimal approach on a PC. On some other platforms, you may be able to optimize further with left or right shifts, if thresPercent_Denominator and/or thresPercent_Numerator are powers of two. (Normally a left shift is enough, but you may need a right shift, rearranging the comparison as a division, to prevent overflow.)
