Adding an "average" parameter to .NET's Random.Next() to curve results - c#

I'd like to be able to add an "average" parameter to Random.Next(Lower, Upper). The method would take min, max, and average parameters. I created a method like this a while back for testing (it used lists and was horribly slow), so I'd like some ideas on how to write a correct implementation.
The reason for having this functionality is for many procedural/random events in my game. Say you want trees to be 10 units tall most of the time, but they can still be as low as 5 or 15. A normal Random.Next(5, 15) would return results spread evenly across the range, but this method would curve its results into more of a bell shape. Meaning 10 would be the most common, and results would become less common going out in each direction. Moving the average down to 7, for example, would make relatively small trees (or whatever this is being used on), while large ones would still be possible, just uncommon.
Previous method (pseudo-code-ish)
Loop from min to max
Numbers closer to the average are added to the list more times
A random element is then selected from the list; since numbers closer to the average appear more often, they are more likely to be chosen
Okay, so that's like throwing a bunch of candies in a bag and picking a random one. Yeah, slow. What are your thoughts on improving this?
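For reference, here is a minimal sketch of that bag-of-candies approach (the linear weighting scheme is my assumption; the original list-based version is described only loosely):

private static readonly Random rng = new Random();

public static int NextWeightedNaive(int min, int max, int average)
{
    // Build the "bag": each value appears more often the closer it is to the average
    var bag = new List<int>();
    int maxDistance = Math.Max(average - min, max - average);
    for (int value = min; value <= max; value++)
    {
        int copies = maxDistance - Math.Abs(value - average) + 1; // at least 1 copy
        for (int i = 0; i < copies; i++)
            bag.Add(value);
    }
    // Pick a random candy from the bag - memory grows with the square of the range, hence "slow"
    return bag[rng.Next(bag.Count)];
}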
Illustration: (Not exactly accurate but you see the idea)
NOTE: Many people have suggested a bell curve, but the question is specifically how to shift the peak of the curve so it favors one side of the range.

I'm expanding on the idea of generating n random numbers, and taking their average to get a bell-curve effect. The "tightness" parameter controls how steep the curve is.
Edit: Summing a set of random points to get a "normal" distribution is supported by the Central Limit Theorem. Using a bias function to sway results in a particular direction is a common technique, but I'm no expert there.
To address the note at the end of your question, I'm skewing the curve by manipulating the "inner" random number. In this example, I'm raising it to the exponent you provide. Since Random.NextDouble() returns values in [0, 1), raising such a value to any positive power can still never produce more than one. But the average skews towards zero, since squares, cubes, etc. of numbers less than one are smaller than the base number. exp = 1 has no skew, whereas exp = 4 has a pretty significant skew.
private Random r = new Random();

public double RandomDist(double min, double max, int tightness, double exp)
{
    double total = 0.0;
    for (int i = 1; i <= tightness; i++)
    {
        total += Math.Pow(r.NextDouble(), exp);
    }
    return ((total / tightness) * (max - min)) + min;
}
I ran trials for different values for exp, generating 100,000 integers between 0 and 99. Here's how the distributions turned out.
I'm not sure exactly how the peak relates to the exp value, but the higher the exp, the lower in the range the peak appears.
You could also reverse the direction of the skew by changing the line in the inside of the loop to:
total += (1 - Math.Pow(r.NextDouble(), exp));
...which would give the bias on the high side of the curve.
Edit: So, how do we know what to make "exp" in order to get the peak where we want it? That's a tricky one, and could probably be worked out analytically, but I'm a developer, not a mathematician. So, applying my trade, I ran lots of trials, gathered peak data for various values of exp, and ran the data through the cubic fit calculator at Wolfram Alpha to get an equation for exp as a function of peak.
Here's a new set of functions which implement this logic. The GetExp(...) function implements the equation found by WolframAlpha.
RandomBiasedPow(...) is the function of interest. It returns a random number in the specified range, but tending towards the peak. The strength of that tendency is governed by the tightness parameter.
private Random r = new Random();

public double RandomNormal(double min, double max, int tightness)
{
    double total = 0.0;
    for (int i = 1; i <= tightness; i++)
    {
        total += r.NextDouble();
    }
    return ((total / tightness) * (max - min)) + min;
}

public double RandomNormalDist(double min, double max, int tightness, double exp)
{
    double total = 0.0;
    for (int i = 1; i <= tightness; i++)
    {
        total += Math.Pow(r.NextDouble(), exp);
    }
    return ((total / tightness) * (max - min)) + min;
}

public double RandomBiasedPow(double min, double max, int tightness, double peak)
{
    // Calculate a skewed normal-ish distribution, skewed by BiasPow(...), specifying where in the range the peak is
    // NOTE: The peak yields unreliable results in the top 20% and bottom 20% of the range.
    // To peak at the extreme ends of the range, consider using a different bias function
    double scaledPeak = (peak - min) / (max - min); // normalize the peak into 0 - 1
    if (scaledPeak < 0.2 || scaledPeak > 0.8)
    {
        throw new ArgumentOutOfRangeException("peak", "Peak cannot be in bottom 20% or top 20% of range.");
    }
    double exp = GetExp(scaledPeak);
    double total = 0.0;
    for (int i = 1; i <= tightness; i++)
    {
        // Bias the random number to one side or another, but keep it in the range of 0 - 1
        // The exp parameter controls how far the peak is biased away from the normal distribution
        total += BiasPow(r.NextDouble(), exp);
    }
    return ((total / tightness) * (max - min)) + min;
}

public double GetExp(double peak)
{
    // Get the exponent necessary for BiasPow(...) to result in the desired peak
    // Based on empirical trials, curve-fit to a cubic equation using WolframAlpha
    return -12.7588 * Math.Pow(peak, 3) + 27.3205 * Math.Pow(peak, 2) - 21.2365 * peak + 6.31735;
}

public double BiasPow(double input, double exp)
{
    return Math.Pow(input, exp);
}
Here is a histogram using RandomBiasedPow(0, 100, 5, peak), with the various values of peak shown in the legend. I rounded down to get integers between 0 and 99, set tightness to 5, and tried peak values between 20 and 80. (Things get wonky at extreme peak values, so I left that out, and put a warning in the code.) You can see the peaks right where they should be.
Next, I tried boosting Tightness to 10...
Distribution is tighter, and the peaks are still where they should be. It's pretty fast too!
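For reference, here is the sort of trial loop that could produce those histograms; a sketch only (the bucket layout and output format are my assumptions):

var counts = new int[100];
for (int i = 0; i < 100000; i++)
{
    // Round down to get integers between 0 and 99; tightness 5, peak at 70
    int value = (int)RandomBiasedPow(0, 100, 5, 70);
    counts[value]++;
}
for (int i = 0; i < counts.Length; i++)
{
    Console.WriteLine("{0,2}: {1}", i, counts[i]);
}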

Here's a simple way to achieve this. Since you already have answers detailing how to generate normal distributions, and there are plenty of resources on that, I won't reiterate it. Instead I'll refer to a method I'll call GetNextNormal(), which generates a value from a normal distribution with mean 0 and standard deviation 1.
public int Next(int min, int max, int center)
{
    double rand = GetNextNormal();
    if (rand >= 0)
        return center + (int)(rand * (max - center));
    return center + (int)(rand * (center - min));
}
(This can be simplified a little, I've written it that way for clarity)
For a rough image of what this is doing, imagine two normal distributions. They're both centered around your center, but for one the min is one standard deviation away, to the left, and for the other, the max is one standard deviation away, to the right. Now imagine chopping them both in half at the center. On the left, you keep the one with the standard deviation corresponding to min, and on the right, the one corresponding to max.
Of course, normal distributions aren't guaranteed to stay within one standard deviation, so there are two things you probably want to do:
Add an extra parameter which controls how tight the distribution is
If you want min and max to be hard limits, you will have to add rejection for values outside those bounds.
A complete method, with those two additions (still returning an int), might look like:
public int Next(int min, int max, int center, int tightness)
{
    double rand;
    int candidate;
    do
    {
        rand = GetNextNormal();
        if (rand >= 0)
            candidate = center + (int)(rand * (max - center) / tightness);
        else
            candidate = center + (int)(rand * (center - min) / tightness);
    } while (candidate < min || candidate > max);
    return candidate;
}
If you graph the results of this (especially a float/double version), it won't be the most beautiful distribution, but it should be adequate for your purposes.
EDIT
Above I said the results of this aren't particularly beautiful. To expand on that, the most glaring 'ugliness' is a discontinuity at the center point, due to the height of the peak of a normal distribution depending on its standard deviation. Because of this, the distribution you'll end up with will look something like this:
(For min 10, max 100 and center point 70, using a 'tightness' of 3)
So while the probability of a value below the center is equal to the probability above it, results will be much more tightly bunched around the average on one side than the other. If that's too ugly for you, or you think features generated by a distribution like that will seem too unnatural, we can add a further modification: weighting which side of the center is picked by the proportion of the range lying to the left or right of the center. Adding that to the code (with the assumption that you have access to a Random, which I've just called RandomGen) we get:
public int Next(int min, int max, int center, int tightness)
{
    double rand;
    int candidate;
    do
    {
        rand = Math.Abs(GetNextNormal());
        if (ChooseSide(min, max, center))
            candidate = center + (int)(rand * (max - center) / tightness);
        else
            candidate = center - (int)(rand * (center - min) / tightness);
    } while (candidate < min || candidate > max);
    return candidate;
}

public bool ChooseSide(int min, int max, int center)
{
    return RandomGen.Next(min, max) >= center;
}
For comparison, the distribution this will produce with the same min, max, center and tightness is:
As you can see, this is now continuous in frequency, as well as the first derivative (giving a smooth peak). The disadvantage to this version over the other is now you're more likely to get results on one side of the center than the other. The center is now the modal average, not the mean. So it's up to you whether you prefer a smoother distribution or having the center be the true mean of the distribution.

Since you are looking for a normal-ish distribution with a value around a point, within bounds, why not use Random instead to give you two values that you then use to walk a distance from the middle? The following yields what I believe you need:
// NOTE: scoped outside of the function to be random
Random rnd = new Random();

int GetNormalizedRandomValue(int mid, int maxDistance)
{
    var distance = rnd.Next(0, maxDistance + 1);
    var isPositive = (rnd.Next() % 2) == 0;
    if (!isPositive)
    {
        distance = -distance;
    }
    return mid + distance;
}
Plugging in http://www.codeproject.com/Articles/25172/Simple-Random-Number-Generation makes this easier and correctly normalized:
int GetNormalizedRandomValue(int mid, int maxDistance)
{
    int distance;
    do
    {
        distance = (int)((SimpleRNG.GetNormal() / 5) * maxDistance);
    } while (Math.Abs(distance) > maxDistance); // reject overshoots on either side
    return mid + distance;
}

I would do something like this:
compute a uniformly distributed double
using that, apply the inverse cumulative distribution function of the normal distribution (the quantile function - the one that maps [0, 1] "back" from the accumulated probabilities) or something similar to compute the desired value - e.g. you can slightly adjust the normal distribution to take not just an average and stddev/variance, but an average and two such values, to take care of min/max
round to int, clamp to min, max, etc. (see the sketch below)
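A sketch of that recipe, using the inverse CDF from MathNet.Numerics (mentioned in another answer here); the 3-sigma mapping of min/max and the helper name are my assumptions:

using System;
using MathNet.Numerics.Distributions;

static int NextSkewedInt(Random rng, int min, int max, int average)
{
    // Use a different standard deviation on each side of the average,
    // treating min and max as roughly 3 sigma out (an assumption)
    double sigmaLeft = (average - min) / 3.0;
    double sigmaRight = (max - average) / 3.0;
    int result;
    do
    {
        double u = rng.NextDouble();                      // uniform in [0, 1)
        double sigma = u < 0.5 ? sigmaLeft : sigmaRight;  // below/above the average
        result = (int)Math.Round(Normal.InvCDF(average, sigma, u));
    } while (result < min || result > max);               // enforce hard bounds
    return result;
}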

You have two choices here:
Sum up N random numbers from (0, 1/N), which gathers the results around 0.5, and then scale the result between x_min and x_max. The value N controls how narrow the distribution is: the higher the count, the narrower the results.
Random rnd = new Random();
int N = 10;
double r = 0;
for (int i = 0; i < N; i++)
{
    r += rnd.NextDouble() / N;
}
double x = x_min + (x_max - x_min) * r;
Use the actual normal distribution with a mean and a standard deviation. This will not guarantee a minimum or maximum though.
public double RandomNormal(double mu, double sigma)
{
    return NormalDistribution(rnd.NextDouble(), mu, sigma);
}

public double RandomNormal()
{
    return RandomNormal(0d, 1d);
}

/// <summary>
/// Normal distribution
/// </summary>
/// <param name="probability">probability value 0..1</param>
/// <param name="mean">mean value</param>
/// <param name="sigma">std. deviation</param>
/// <returns>A normally distributed value</returns>
public double NormalDistribution(double probability, double mean, double sigma)
{
    return mean + sigma * NormalDistribution(probability);
}

/// <summary>
/// Standard normal distribution
/// </summary>
/// <param name="probability">probability value 0.0 to 1.0</param>
/// <see cref="NormalDistribution(double,double,double)"/>
public double NormalDistribution(double probability)
{
    return Math.Sqrt(2) * InverseErrorFunction(2 * probability - 1);
}

public double InverseErrorFunction(double P)
{
    double Y, A, B, X, Z, W, WI, SN, SD, F, Z2, SIGMA;
    const double A1 = -.5751703, A2 = -1.896513, A3 = -.5496261E-1;
    const double B0 = -.1137730, B1 = -3.293474, B2 = -2.374996, B3 = -1.187515;
    const double C0 = -.1146666, C1 = -.1314774, C2 = -.2368201, C3 = .5073975e-1;
    const double D0 = -44.27977, D1 = 21.98546, D2 = -7.586103;
    const double E0 = -.5668422E-1, E1 = .3937021, E2 = -.3166501, E3 = .6208963E-1;
    const double F0 = -6.266786, F1 = 4.666263, F2 = -2.962883;
    const double G0 = .1851159E-3, G1 = -.2028152E-2, G2 = -.1498384, G3 = .1078639E-1;
    const double H0 = .9952975E-1, H1 = .5211733, H2 = -.6888301E-1;

    X = P;
    SIGMA = Math.Sign(X);
    if (P < -1d || P > 1d)
        throw new System.ArgumentException();
    Z = Math.Abs(X);
    if (Z > .85)
    {
        A = 1 - Z;
        B = Z;
        W = Math.Sqrt(-Math.Log(A + A * B));
        if (W >= 2.5)
        {
            if (W >= 4.0)
            {
                WI = 1.0 / W;
                SN = ((G3 * WI + G2) * WI + G1) * WI;
                SD = ((WI + H2) * WI + H1) * WI + H0;
                F = W + W * (G0 + SN / SD);
            }
            else
            {
                SN = ((E3 * W + E2) * W + E1) * W;
                SD = ((W + F2) * W + F1) * W + F0;
                F = W + W * (E0 + SN / SD);
            }
        }
        else
        {
            SN = ((C3 * W + C2) * W + C1) * W;
            SD = ((W + D2) * W + D1) * W + D0;
            F = W + W * (C0 + SN / SD);
        }
    }
    else
    {
        Z2 = Z * Z;
        F = Z + Z * (B0 + A1 * Z2 / (B1 + Z2 + A2 / (B2 + Z2 + A3 / (B3 + Z2))));
    }
    Y = SIGMA * F;
    return Y;
}
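Since this second option does not guarantee a minimum or maximum, a small rejection wrapper can be layered on top; a sketch (treating min and max as roughly 3 standard deviations from the mean is my assumption):

public double RandomNormalClamped(double min, double max, double mean)
{
    // Treat min and max as roughly 3 standard deviations out from the mean
    double sigma = Math.Min(mean - min, max - mean) / 3.0;
    double value;
    do
    {
        value = RandomNormal(mean, sigma);
    } while (value < min || value > max);
    return value;
}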

Here is my solution. The MyRandom class features an equivalent of Next() with three additional parameters: center and span indicate the desired range, and retry is the retry count. Each retry gives the generator another chance to land inside the desired range, so the probability of an in-range result grows with the retry count.
static void Main()
{
    MyRandom myRnd = new MyRandom();
    List<int> results = new List<int>();
    Console.WriteLine("123456789012345\r\n");
    int bnd = 30;
    for (int ctr = 0; ctr < bnd; ctr++)
    {
        int nextAvg = myRnd.NextAvg(5, 16, 10, 2, 2);
        results.Add(nextAvg);
        Console.WriteLine(new string((char)9608, nextAvg));
    }
    Console.WriteLine("\r\n" + String.Format("Out of range: {0}%", results.Where(x => x < 8 || x > 12).Count() * 100 / bnd)); // calculate the out-of-range percentage
    Console.ReadLine();
}
class MyRandom : Random
{
    public MyRandom() { }

    public int NextAvg(int min, int max, int center, int span, int retry)
    {
        int left = center - span;
        int right = center + span;
        if (left < min || right >= max) // center ± span must lie within [min, max)
        {
            throw new ArgumentException();
        }
        int next = this.Next(min, max);
        int ctr = 0;
        while (++ctr <= retry && (next < left || next > right))
        {
            next = this.Next(min, max);
        }
        return next;
    }
}

Is there any reason that the distribution must actually be a bell curve? For example, using:
private static readonly Random rnd = new Random(); // reuse one instance; a new Random() per call can repeat seeds

public double RandomDist(double min, double max, double average)
{
    double n = rnd.NextDouble();
    if (n < 0.75)
    {
        // Maps [0, 0.75) onto [min, average), with density rising towards the average
        return Math.Sqrt(n * 4 / 3) * (average - min) + min;
    }
    else
    {
        // Maps [0.75, 1] onto [average, max], with density falling towards max
        return max - Math.Sqrt(n * 4 - 3) * (max - average);
    }
}
will give a number between min and max, with the mode at average.

You could use the Normal distribution class from MathNet.Numerics (mathdotnet.com).
An example of its use:
// Distribution with mean = 10, stddev = 1.25 (5 ~ 15 99.993%)
var dist = new MathNet.Numerics.Distributions.Normal(10, 1.25);
var samples = dist.Samples().Take(10000);
Assert.True(samples.Average().AlmostEqualInDecimalPlaces(10, 3));
You can adjust the spread by changing the standard deviation (the 1.25 I used). The only problem is that it will occasionally give you values outside of your desired range, so you'd have to reject those. If you want something which is more skewed one way or the other, you could try other distribution functions from the library too.
Update - Example class:
public class Random
{
    MathNet.Numerics.Distributions.Normal _dist;
    int _min, _max, _mean;

    public Random(int mean, int min, int max)
    {
        _mean = mean;
        _min = min;
        _max = max;
        var stddev = Math.Min(Math.Abs(mean - min), Math.Abs(max - mean)) / 3.0;
        _dist = new MathNet.Numerics.Distributions.Normal(mean, stddev);
    }

    public int Next()
    {
        int next;
        do
        {
            next = (int)_dist.Sample();
        } while (next < _min || next > _max);
        return next;
    }

    public static int Next(int mean, int min, int max)
    {
        return new Random(mean, min, max).Next();
    }
}
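A quick usage sketch of that class (note that it deliberately shadows System.Random, so qualify the names if both are in scope):

// Tree heights mostly near 10, hard-clamped to [5, 15]
var heightRnd = new Random(10, 5, 15);
int height = heightRnd.Next();

// Or the static convenience overload (builds a new distribution per call)
int oneOff = Random.Next(10, 5, 15);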

Not sure this is what you want, but here is a way to draw a random number from a distribution that is uniform from min to avg and uniform from avg to max, while ensuring that the mean equals avg.
Assume probability p for a draw from [min, avg] and probability 1 - p for a draw from [avg, max]. The expected value is then p(min + avg)/2 + (1 - p)(avg + max)/2 = p·min/2 + avg/2 + (1 - p)·max/2. Setting this equal to avg and solving for p gives p = (max - avg)/(max - min).
The generator works as follows: draw a random number in [0, 1]. If it is less than p, draw a random number from [min, avg]; otherwise, draw one from [avg, max].
The probability mass is piecewise uniform: p spread over [min, avg] and 1 - p spread over [avg, max]. Extreme values are not penalized.
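A minimal sketch of that generator (the helper name is mine):

private static readonly Random rng = new Random();

public static double NextWithMean(double min, double max, double avg)
{
    double p = (max - avg) / (max - min);            // chance of drawing from [min, avg]
    if (rng.NextDouble() < p)
        return min + rng.NextDouble() * (avg - min); // uniform on [min, avg]
    return avg + rng.NextDouble() * (max - avg);     // uniform on [avg, max]
}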


How can I modify this random double generator to return more zeros

I have this extension method that, given a minimum and maximum double, generates a double between them.
public static double NextDouble(this Random random, double minValue, double maxValue)
{
    return random.NextDouble() * (maxValue - minValue) + minValue;
}
I mainly use this extension method to generate random dollar amounts, and sometimes 0 dollars is an OK value! That being said, I need to increase the odds of returning a 0. More specifically, if I try the following:
Random rando = new Random();
List<double> doubles = new List<double>();
for (int i = 0; i < 100000; i++)
{
    double d = rando.NextDouble(0, .25);
    Console.WriteLine(d.ToString());
}
I don't get a single zero.
A less than ideal solution I thought of is I can just catch every value less than 1 and return 0 instead.
public static double NextDouble(this Random random, double minValue, double maxValue)
{
    double d = random.NextDouble() * (maxValue - minValue) + minValue;
    if (d < 1)
    {
        return 0;
    }
    return d;
}
This obviously removes the ability to return values less than 1 (.25, .50, .125, etc.). I'm looking for some clever ways around this!
A simple way of approaching this is to generate two random numbers: the first to determine whether you return 0, and if not, the second is the number you return. Say for instance you want a 5% chance of returning zero. Generate a random integer between 1 and 100 inclusive, and if it's 5 or less, simply return zero.
if (minValue <= 0.0 && 0.0 <= maxValue)
{
    var shouldWeReturnZero = rando.Next(1, 101) <= 5;
    if (shouldWeReturnZero)
        return 0;
}
Otherwise, generate the actual random number using the code you already have.
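Folded into the existing extension method, that might look like the following sketch (the 5% default and the method name are illustrative):

public static double NextDoubleOrZero(this Random random, double minValue, double maxValue, double zeroChance = 0.05)
{
    // Only special-case zero when it actually lies inside the requested range
    if (minValue <= 0.0 && 0.0 <= maxValue && random.NextDouble() < zeroChance)
        return 0.0;
    return random.NextDouble() * (maxValue - minValue) + minValue;
}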
What you might want to do is, instead of generating a random double, generate a random int and let it represent cents.
That way, 0 will be just as likely as any other amount of cents. When showing the values to the user, you can display them as dollars.
var random = new Random();
for (var i = 0; i < 1000; i++)
{
    var cents = random.Next(0, 200);
    Console.WriteLine("Dollar amount: ${0:0}.{1:00}", cents / 100, cents % 100);
}
Console.ReadKey(false);
The reason you are getting no zeroes is that the probability of getting exactly zero when generating a double is very, very low - for a 32-bit floating-point number, somewhere around 1/2^32. If you want to know more, check out https://en.wikipedia.org/wiki/Single-precision_floating-point_format and how floating-point numbers are constructed from memory.
In your case I would create a floor function that, instead of flooring to integers, does so in steps of 0.25. A normal floor takes any floating-point number and removes the decimals, so 1.7888 becomes 1. You want something a bit less coarse, so an input of 1.7888 would yield 1.75.
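A sketch of such a step-floor (step size as a parameter):

public static double FloorToStep(double value, double step = 0.25)
{
    // e.g. FloorToStep(1.7888) == 1.75; FloorToStep(0.2) == 0.0
    return Math.Floor(value / step) * step;
}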

Finding the point corresponding to an arc length on the ellipse iteratively

Given a distance (arc length) anticlockwise away from a known point (P_0) on an ellipse, I am trying to find the point at that distance (P_1).
Since I cannot evaluate the t corresponding to a specific arc length analytically, I am forced to step through discrete points until I arrive at an answer.
My initial code is something like this:
// t_0 is the parametric t on the ellipse corresponding to P_0
Point GetPos(double distance, double t_0, int res = 5000, double epsilon = 0.1)
{
    for (int i = 0; i < res; ++i)
    {
        // The minus is to make the point move in a clockwise direction
        double t = t_0 - (double)i / res * t_0;
        // Find the integral from t to t_0 to get the arc length
        // If the arc length is within epsilon of the target, return the corresponding point
    }
}
Unfortunately, this code may not converge if the arc length for some t value just overshoots the epsilon window. And since the loop only ever decreases t, the overshoot is never corrected.
I was thinking of modelling this as a control problem, using something like a PID controller. However, I realised that since the set point (which is my desired arc length), and my output (which is essentially the parametric t), are referring to different variables, I do not know how to proceed.
Is there a better method of solving this kind of problem or am I missing something from my current approach?
After some thought I used a binary search instead, since a PID controller is difficult to tune and usually does not converge fast enough for all the ellipses under consideration.
double max = t_0; double min = 0;
double mid = (min + max) / 2.0;
double result = 0;
while (Math.Abs(distance - result) > epsilon)
{
    result = // Arc length from t_0 to mid
    if (result > distance)
    {
        min = mid; // arc too long: t is too far from t_0, move it closer
    }
    else
    {
        max = mid; // arc too short: move t further from t_0
    }
    mid = (min + max) / 2.0;
}
// Return the point at t = mid
The binary search works because the arc length is monotonic in t over the search range (from t_0 down to 0).
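For completeness, here is one way the elided arc-length evaluation could look, assuming a standard axis-aligned ellipse x = a·cos(t), y = b·sin(t) (the signature and the Simpson's-rule choice are mine):

// Arc length from t to t_0 (t <= t_0), via the composite Simpson's rule (n must be even)
static double ArcLength(double t, double t_0, double a, double b, int n = 100)
{
    Func<double, double> speed = u =>
        Math.Sqrt(a * a * Math.Sin(u) * Math.Sin(u) + b * b * Math.Cos(u) * Math.Cos(u));
    double h = (t_0 - t) / n;
    double sum = speed(t) + speed(t_0);
    for (int i = 1; i < n; i++)
        sum += speed(t + i * h) * (i % 2 == 0 ? 2 : 4);
    return sum * h / 3.0;
}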

Multiple iterations of random double numbers tend to get smaller

I am creating a stock trading simulator where the last day's trade price is taken as the opening price and simulated throughout the current day.
For that I am generating random double numbers that should lie between 5% below the lastTradePrice and 5% above it. However, after around 240 iterations I see the produced numbers getting smaller and smaller, closing in on zero.
Random rand = new Random();
Thread.Sleep(rand.Next(0, 10));
Random random = new Random();

double lastTradeMinus5p = model.LastTradePrice - model.LastTradePrice * 0.05;
double lastTradePlus5p = model.LastTradePrice + model.LastTradePrice * 0.05;
model.LastTradePrice = random.NextDouble() * (lastTradePlus5p - lastTradeMinus5p) + lastTradeMinus5p;
As you can see, I am trying to vary the seed by utilising Thread.Sleep(), and yet the output is not truly randomised. Why is there this tendency to always produce smaller numbers?
Update:
The math itself is actually fine; the downwards trend is inherent to it, as Jon has proven below.
Getting random double numbers between range is also explained here.
The real problem was the seeding of Random. I have followed Jon's advice to keep the same Random instance across the thread for all three prices, and this already produces better results; the price actually bounces back upwards. I am still investigating and am open to suggestions on how to improve this. The link Jon gave provides an excellent article on how to produce a random instance per thread.
Btw the whole project is open source if you are interested. (Using WCF, WPF in Browser, PRISM 4.2, .NET 4.5 Stack)
The TransformPrices call is happening here on one separate thread.
This is what happens if I keep the same instance of random:
And this is generated via RandomProvider.GetThreadRandom(); as pointed out in the article:
Firstly, calling Thread.Sleep like this is not a good way of getting a different seed. It would be better to use a single instance of Random per thread. See my article on randomness for some suggested approaches.
However, your code is also inherently biased downwards. Suppose we "randomly" get 0.0 and 1.0 from the random number generator, starting with a price of $100. That will give:
Day 0: $100
Day 1: $95 (-5% = $5)
Day 2: $99.75 (+5% = $4.75)
Now we can equally randomly get 1.0 and 0.0:
Day 0: $100
Day 1: $105 (+5% = $5)
Day 2: $99.75 (-5% = $5.25)
Note how we've gone down in both cases, despite this being "fair". If the value increases, that means it can go down further on the next roll of the dice, so to speak... but if the value decreases, it can't bounce back as far.
EDIT: To give an idea of how a "reasonably fair" RNG is still likely to give a decreasing value, here's a little console app:
using System;

class Test
{
    static void Main()
    {
        Random random = new Random();
        int under100 = 0;
        for (int i = 0; i < 100; i++)
        {
            double price = 100;
            double sum = 0;
            for (int j = 0; j < 1000; j++)
            {
                double lowerBound = price * 0.95;
                double upperBound = price * 1.05;
                double sample = random.NextDouble();
                sum += sample;
                price = sample * (upperBound - lowerBound) + lowerBound;
            }
            Console.WriteLine("Average: {0:f2} Price: {1:f2}", sum / 1000, price);
            if (price < 100)
            {
                under100++;
            }
        }
        Console.WriteLine("Samples with a final price < 100: {0}", under100);
    }
}
On my machine, the "average" value is always very close to 0.5 (rarely less than 0.48 or more than 0.52), but the majority of "final prices" are below 100 - about 65-70% of them.
Quick guess: This is a math-thing, and not really related to the random generator.
When you reduce the trade price by 5%, you get a resulting value that is lower than that which you began with (obviously!).
The problem is that when you then increase the trade price by 5% of that new value, those 5% amount to less than the 5% you previously reduced by, since you started from a smaller value this time. Get it?
I obviously haven't verified this, but I have strong hunch this is your problem. When you repeat these operations a bunch of times, the effect will get noticeable over time.
Your math should be:
double lastTradeMinus5p = model.LastTradePrice * 0.95;
double lastTradePlus5p = model.LastTradePrice * (1/0.95);
UPDATE: As Dialecticus pointed out, you should probably use some other distribution than this one:
random.NextDouble() * (lastTradePlus5p - lastTradeMinus5p)
Also, your range of 5% seems pretty narrow to me.
I think this is mainly because the random number generator you are using is technically pants.
For better 'randomness', use RNGCryptoServiceProvider to generate the random numbers instead. It's still technically a pseudo-random number generator, but the quality of the 'randomness' is much higher (suitable for cryptographic purposes).
Taken from here
// The following sample uses the Cryptography class to simulate the roll of a dice.
using System;
using System.IO;
using System.Text;
using System.Security.Cryptography;

class RNGCSP
{
    private static RNGCryptoServiceProvider rngCsp = new RNGCryptoServiceProvider();

    // Main method.
    public static void Main()
    {
        const int totalRolls = 25000;
        int[] results = new int[6];
        // Roll the dice 25000 times and display
        // the results to the console.
        for (int x = 0; x < totalRolls; x++)
        {
            byte roll = RollDice((byte)results.Length);
            results[roll - 1]++;
        }
        for (int i = 0; i < results.Length; ++i)
        {
            Console.WriteLine("{0}: {1} ({2:p1})", i + 1, results[i], (double)results[i] / (double)totalRolls);
        }
        rngCsp.Dispose();
        Console.ReadLine();
    }

    // This method simulates a roll of the dice. The input parameter is the
    // number of sides of the dice.
    public static byte RollDice(byte numberSides)
    {
        if (numberSides <= 0)
            throw new ArgumentOutOfRangeException("numberSides");
        // Create a byte array to hold the random value.
        byte[] randomNumber = new byte[1];
        do
        {
            // Fill the array with a random value.
            rngCsp.GetBytes(randomNumber);
        }
        while (!IsFairRoll(randomNumber[0], numberSides));
        // Return the random number mod the number
        // of sides. The possible values are zero-
        // based, so we add one.
        return (byte)((randomNumber[0] % numberSides) + 1);
    }

    private static bool IsFairRoll(byte roll, byte numSides)
    {
        // There are MaxValue / numSides full sets of numbers that can come up
        // in a single byte. For instance, if we have a 6 sided die, there are
        // 42 full sets of 1-6 that come up. The 43rd set is incomplete.
        int fullSetsOfValues = Byte.MaxValue / numSides;
        // If the roll is within this range of fair values, then we let it continue.
        // In the 6 sided die case, a roll between 0 and 251 is allowed. (We use
        // < rather than <= since the = portion allows through an extra 0 value).
        // 252 through 255 would provide an extra 0, 1, 2, 3 so they are not fair
        // to use.
        return roll < numSides * fullSetsOfValues;
    }
}
Working from your code, I can derive a simpler version, as below:
Random rand = new Random();
Thread.Sleep(rand.Next(0, 10));
Random random = new Random();

// model.LastTradePrice - model.LastTradePrice * 0.05 => model.LastTradePrice * (1 - 0.05)
double lastTradeMinus5p = model.LastTradePrice * 0.95;
// model.LastTradePrice + model.LastTradePrice * 0.05 => model.LastTradePrice * (1 + 0.05)
double lastTradePlus5p = model.LastTradePrice * 1.05;
// lastTradePlus5p - lastTradeMinus5p => model.LastTradePrice * (1.05 - 0.95) => model.LastTradePrice * 0.1
model.LastTradePrice = model.LastTradePrice * (random.NextDouble() * 0.1 + 0.95);
So you are multiplying model.LastTradePrice by a factor drawn uniformly from [0.95, 1.05]. Although that looks symmetric, repeated multiplication drifts downward: a 5% drop followed by a 5% gain lands at 0.9975 of the starting price, so losses are never fully recovered by equal-sized gains.

Find maximum value of a continuous function at a specific resolution

Imagine having a function that is continuous over a range [0.0, n]. Are there any algorithms to find the maximum value of the function, given a minimum step size s, more quickly than simple iteration? Simple iteration is straightforward to program, but its cost grows when n / s is large.
double maxValue = 0;
double maxValueX = 0;
double s = 0.1 * n;
for (double x = 0.0; x <= n; x += s)
{
    double value = someFunction(x);
    if (value > maxValue)
    {
        maxValue = value;
        maxValueX = x;
    }
}
I have tried this approach, which is much quicker, but I don't know whether it can get stuck on local maxima.
double min = 0;
double max = n;
int steps = 10;
double increment = (max - min) / steps;
while (increment > s)
{
    double maxValue = 0;
    double maxValueX = min;
    for (double x = min; x <= max; x += increment)
    {
        double value = someFunction(x);
        if (value > maxValue)
        {
            maxValue = value;
            maxValueX = x;
        }
    }
    min = Math.Max(maxValueX - increment, 0.0);
    max = Math.Min(maxValueX + increment, n);
    increment = (max - min) / steps;
}
Suppose there was such an algorithm, that is, an algorithm that can find the maximum of an approximation of a continuous function without looking at every point of the approximation.
Now choose a positive integer n and any finite sequence of n doubles you care to name. There are infinitely many continuous functions such that f(i) is equal to the ith double in the sequence, and smaller than or equal to the largest of them everywhere. Choose one of them.
Now use your algorithm to find the largest double of the n doubles. By assumption, it examines fewer than n of the doubles. Let's suppose it examines all of them except the kth double.
Now suppose we create a new sequence identical to the first one except that the kth double is the maximum. Is the algorithm magical, that when given an input that it does not read, it changes its output?
Now is it clear why there is no such algorithm? If you want to find the longest piece of string in the drawer, you're going to have to look at all of them.
The continuity of the function doesn't help you at all. All continuity gives you is a guarantee that given a point on the function, you can find another point on the function that is as close to the first point as you like. That tells you nothing about the maximum value taken on by the function. (Well, OK, it tells you something. On a closed bounded interval it implies that a maximum exists, which is something. But it doesn't help you find it.)
Given that the function you are talking about is code, then no: the function could return an arbitrary maximum at any point.
If you can make assumptions about the function (like a maximum rate of change), then you can optimize; see the sketch below.
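For instance, if the function is Lipschitz - it never changes faster than some known constant L per unit of x - then whole intervals that provably cannot beat the current best can be skipped. A sketch under that assumption (the names are mine):

// requires using System; using System.Collections.Generic;
// Maximum of f on [0, n] to resolution s, assuming |f(x) - f(y)| <= L * |x - y|
static double MaxLipschitz(Func<double, double> f, double n, double s, double L)
{
    double best = double.NegativeInfinity;
    var stack = new Stack<(double lo, double hi)>();
    stack.Push((0.0, n));
    while (stack.Count > 0)
    {
        var (lo, hi) = stack.Pop();
        double fLo = f(lo), fHi = f(hi);
        best = Math.Max(best, Math.Max(fLo, fHi));
        // Highest value any point in [lo, hi] could possibly reach
        double bound = (fLo + fHi) / 2 + L * (hi - lo) / 2;
        if (bound <= best || hi - lo <= s)
            continue; // prune: this interval cannot contain a better value
        double mid = (lo + hi) / 2;
        stack.Push((lo, mid));
        stack.Push((mid, hi));
    }
    return best;
}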

Time to Temperature Calculation

This might not be the correct place for this, so apologies in advance if it isn't.
My situation - I need to come up with a simple formula/method that is given an hour, e.g. 13, 15, 01, etc., and based on that number returns the 'approximate' temperature for that particular time.
This is very approximate and it will not use weather data or anything like that; it will just take the hour of the day and return a value between, say, -6 deg C and 35 deg C (very extreme weather, but you get the idea).
This is the sort of example I would like to know how to do:
Just as a note, I COULD use an ugly array of 24 items, each referencing the temp for that hour, but this needs to be float-based - e.g. 19.76 should return 9.25 deg...
Another note: I don't want a complete solution - I'm a confident programmer in various languages, but the maths have really stumped me on this. I've tried various methods on paper using TimeToPeak (the peak hour being 1pm or around there) but to no avail. Any help would be appreciated at this point.
EDIT
Following your comment, here is a function that provides a sinusoidal distribution with various useful optional parameters.
private static double SinDistribution(
    double value,
    double lowToHighMeanPoint = 0.0,
    double length = 10.0,
    double low = -1.0,
    double high = 1.0)
{
    var amplitude = (high - low) / 2;
    var mean = low + amplitude;
    return mean + (amplitude * Math.Sin(
        (((value - lowToHighMeanPoint) % length) / length) * 2 * Math.PI));
}
You could use it like this, to get the results you desired.
for (double i = 0.0; i < 24.0; i++)
{
    Console.WriteLine("{0}: {1}", i, SinDistribution(i, 6.5, 24.0, -6.0, 35.0));
}
This obviously discounts environmental factors and assumes the day is an equinox but I think it answers the question.
So,
double EstimatedTemperature(double hour, double[] distribution)
{
    var low = Math.Floor(hour);
    var lowIndex = (int)low;
    var highIndex = (int)Math.Ceiling(hour);
    if (highIndex > distribution.Length - 1)
    {
        highIndex = 0; // wrap around midnight
    }
    if (lowIndex < 0)
    {
        lowIndex = distribution.Length - 1;
    }
    var lowValue = distribution[lowIndex];
    var highValue = distribution[highIndex];
    return lowValue + ((hour - low) * (highValue - lowValue));
}
assuming a rather simplistic linear transition between each point in the distribution. You'll get erroneous results if the hour maps to elements that are not present in the distribution.
For arbitrary data points, I would go with one of the other linear interpolation solutions that have been provided.
However, this particular set of data is generated by a triangle wave:
temp = 45*Math.Abs(2*((t-1)/24-Math.Floor((t-1)/24+.5)))-10;
The data in your table is linear up and down from a peak at hour 13 and a minimum at hour 1. If that is the type of model that you want then this is really easy to put into a formulaic solution. You would just simply perform linear interpolation between the two extremes of the temperature based upon the hour value. You would have two data points:
(xmin, ymin) as (hour-min, temp-min)
(xmax, ymax) as (hour-max, temp-max)
You would have two equations of the form:
y = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
The two equations would use the (x0, y0) and (x1, y1) values from the above two data points, but apply them in opposite assignments (i.e. the peak would be (x0, y0) in one equation and (x1, y1) in the other).
You would then select which equation to use based upon the hour value, insert the X value as the hour and compute as Y for the temperature value.
You will want to offset the X values used in the equations so that you take care of the offset between when Hour 0 and where the minimum temperature peak happens.
Here is an example of how you could do this using a simple set of values in the function; if you wish, make these parameters:
public double GetTemp(double hour)
{
    int min = 1;          // hour of the minimum temperature
    int max = min + 12;   // hour of the maximum temperature
    double lowest = -10;
    double highest = 35;
    double change = 3.75; // degrees per hour
    return (hour > max) ? ((max - hour) * change) + highest
         : (hour < min) ? ((min - hour) * change) + lowest
         : ((hour - max) * change) + highest;
}
I have tested this against your example and it works: 19.75 gives 9.6875.
There is no check to see whether the value entered is within 0-24, but that you can probably manage yourself :)
You can use simple two-point linear approximation. Try something like this:
double HourTemp(double hour, double[] data)
{
    // data holds the temperature for each whole hour; it needs an entry
    // for hour 24 (same as hour 0) so idx2 cannot run off the end
    int idx1 = (int)Math.Floor(hour);
    int idx2 = idx1 + 1;
    return (data[idx2] - data[idx1]) * (hour - idx1) + data[idx1];
}
Or use 3, 5 or more points to get polynomial coefficients with the Ordinary Least Squares method.
Your sample data resembles a sine function, so you could also fit a sine approximation.
