Topological sorting: finding dependencies faster - C#

I have been working on a graph problem for months now, and I am at a point where I am looking for new input from others.
I spent hours in the library and hit the books. I am pretty confident that I found a solution, but maybe someone here can give me a new perspective.
Problem:
I have a 2D plane and rectangles on it. Rectangles can overlap each other, and I have to find an order of these rectangles. To visualize, imagine windows on your screen: you have to find a drawing order so that the result looks right.
To give you a picture of it, this may be one output:
Given:
A function that decides whether two rectangles overlap each other
public bool overlap(Rect a, Rect b) {...}
A function that, given two overlapping rectangles, decides which one has to be drawn first
//returns [1 => a before b, -1 => b before a, 0 => a & b have no "before (<)" relation]
public int compare(Rect a, Rect b) {...}
Rectangle entities with
int x,y,width,height
Screen width and height
int screen.width, int screen.height
The runtime complexity of these two functions can be neglected for the solution of this problem.
The problem can be abstracted to a dependency graph in which I want to find a correct evaluation order. The rectangles are nodes, and the isBefore relation specifies arcs between the nodes. The graph can have multiple connected components, as shown in the pictures, so just throwing a sort over all nodes will not do. Note: compare avoids circular dependencies, so the graph is guaranteed to be acyclic. So the good news is: an order actually exists, yay!
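For reference, once the arcs are known, the ordering itself is the easy part. Here is a minimal sketch of Kahn's algorithm over rectangle indices (the edge-list shape is my assumption, not part of the problem statement):
// Minimal sketch of Kahn's algorithm; edges holds (before, after) index pairs.
public static List<int> TopologicalSort(int nodeCount, List<Tuple<int, int>> edges)
{
    var adjacency = new List<int>[nodeCount];
    var inDegree = new int[nodeCount];
    for (int i = 0; i < nodeCount; ++i) adjacency[i] = new List<int>();
    foreach (var e in edges)
    {
        adjacency[e.Item1].Add(e.Item2);
        inDegree[e.Item2]++;
    }
    var queue = new Queue<int>();
    for (int i = 0; i < nodeCount; ++i)
        if (inDegree[i] == 0) queue.Enqueue(i); // roots of every component
    var order = new List<int>(nodeCount);
    while (queue.Count > 0)
    {
        int n = queue.Dequeue();
        order.Add(n);
        foreach (int m in adjacency[n])
            if (--inDegree[m] == 0) queue.Enqueue(m);
    }
    return order; // contains every node because the graph is acyclic
}
Multiple connected components come for free: every node with in-degree zero is enqueued at the start, regardless of which component it belongs to.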
Now here comes the hard part:
How do I find the dependencies as fast as possible in order to build the graph and run a topological sorting algorithm on it.
The most naive and worst way to do it is to just execute compare for every pair of objects, ending up with O(n²) complexity. But that is just not acceptable here, since I may have thousands of these rectangles on the screen.
So how do I minimize the number of nodes I have to compare a node with in order to find all dependencies?
Now here is my solution. Maybe you should read this only after finding something yourself, in order to avoid being biased.
First of all, the problem can be simplified by taking away one dimension. The problem will still be isomorphic but much easier to understand, at least for me.
So let's just take lines (rectangles) on a big line (the screen). A line has a position and a length. Lines that overlap build a connected component.
Since we have a finite number of lines, we can find the smallest line of our set in O(n).
For two lines to overlap, their distance can be at most the length of our smallest line; anything farther away can't overlap with us.
We divide the screen by the size of the smallest line and end up with discrete chunks. We create a HashMap with a bucket for each chunk. We can now sort each line into these buckets.
We run over the set again, O(n), and can decide very easily into which buckets a line has to go: position / smallest.length = i and (position + length) / smallest.length = j (integer division) give the indices into our HashMap. We sort the line into every bucket from bucket[i] to bucket[j].
We have now minimized the set of lines we have to compare a line with in order to find all its dependencies. After doing this for all lines, we only have to compare a line against the lines in bucket[i-1] to bucket[j+1]; any other line would be too far away to overlap anyway. The division is efficient, and the additional memory for the buckets shouldn't be much.
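A minimal sketch of this 1D bucketing, assuming a hypothetical Line class standing in for a rectangle in one dimension:
// Hypothetical 1D stand-in for Rect: a line with a position and a length.
public class Line { public int position; public int length; }

// Sketch: put each line into every bucket its extent covers.
public static Dictionary<int, List<Line>> BuildBuckets(List<Line> lines, int bucketSize)
{
    var buckets = new Dictionary<int, List<Line>>();
    foreach (var line in lines) // one O(n) pass
    {
        int i = line.position / bucketSize;                 // first covered bucket
        int j = (line.position + line.length) / bucketSize; // last covered bucket
        for (int b = i; b <= j; ++b)
        {
            List<Line> list;
            if (!buckets.TryGetValue(b, out list))
                buckets[b] = list = new List<Line>();
            list.Add(line);
        }
    }
    return buckets; // overlap candidates for a line now live in its own buckets
}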
This is the best I came up with. Maybe someone here has a better solution.

Some observations:
dividing the screen by the size of the smallest line would make the algorithm very unpredictable, and it is not even necessary. You can use any bucket size and the algorithm still works.
inspecting bucket[i-1] to bucket[j+1] is not necessary; bucket[i] to bucket[j] is enough
Let A and B be rectangles, with B no wider than A. Then part of the left or right edge of B lies within A, or the two rectangles do not overlap (this will be used later).
So the algorithm I made:
For every rectangle calculate ranges of buckets it belongs to
(bucketXFrom, bucketXTo, bucketYFrom, bucketYTo). This is class
RectangleInBucket.
Sort them by (bucketXTo - bucketXFrom). As there are not many buckets, this is basically a one-step radix sort.
For every rectangle, starting with the narrowest, scan all the buckets it belongs to. If they contain rectangles, compare against them and save the relations that exist. Then save the rectangle into the buckets under its left and right edges.
I made the total number of buckets equal to the number of rectangles; that seems to work best.
It is usually faster than the naive algorithm, but not by as much as one could expect. As rectangles can (and do) belong to many buckets, one relation is re-examined many times, which increases the number of steps. Moreover, it uses not-so-cheap structures for the deduplication. But the number of compare calls is easily reduced severalfold. It pays off even when this call is dirt cheap, and the difference grows when the compare function is not trivial. And finally the code:
public class Rectangle
{
public int x;
public int y;
public int width;
public int height;
}
/// <summary>
/// Creates array of objects
/// </summary>
protected T[] InitializeArray<T>(int length) where T : new()
{
T[] array = new T[length];
for (int i = 0; i < length; ++i)
{
array[i] = new T();
}
return array;
}
/// <summary>
/// Creates array of objects
/// </summary>
protected T[,] InitializeArray<T>(int length, int width) where T : new()
{
T[,] array = new T[length, width];
for (int i = 0; i < length; ++i)
{
for (int j = 0; j < width; ++j)
{
array[i, j] = new T();
}
}
return array;
}
protected class RectangleInBucket
{
public readonly Rectangle Rect;
public readonly int RecNo;
public readonly int bucketXFrom;
public readonly int bucketXTo;
public readonly int bucketYFrom;
public readonly int bucketYTo;
public RectangleInBucket(Rectangle rectangle, int recNo, int bucketSizeX, int bucketSizeY)
{
Rect = rectangle;
RecNo = recNo;// arbitrary number unique for this rectangle
bucketXFrom = Rect.x / bucketSizeX;
bucketXTo = (Rect.x + Rect.width) / bucketSizeX;
bucketYFrom = Rect.y / bucketSizeY;
bucketYTo = (Rect.y + Rect.height) / bucketSizeY;
}
}
/// <summary>
/// Evaluates the rectangle wrapped in a RectangleInBucket object against all rectangles in the bucket.
/// Saves result into tmpResult.
/// </summary>
protected void processBucket(Dictionary<long, int> tmpResult, List<RectangleInBucket> bucket, RectangleInBucket rib)
{
foreach (RectangleInBucket bucketRect in bucket)
{
if (bucketRect.RecNo < rib.RecNo)
{
long actualCouple = bucketRect.RecNo + (((long)rib.RecNo) << 32);
if (tmpResult.ContainsKey(actualCouple)) { continue; }
tmpResult[actualCouple] = overlap(bucketRect.Rect, rib.Rect) ? compare(bucketRect.Rect, rib.Rect) : 0;
}
else
{
long actualCouple = rib.RecNo + (((long)bucketRect.RecNo) << 32);
if (tmpResult.ContainsKey(actualCouple)) { continue; }
tmpResult[actualCouple] = overlap(rib.Rect, bucketRect.Rect) ? compare(rib.Rect, bucketRect.Rect) : 0;
}
}
}
/// <summary>
/// Calculates all couples of rectangles where result of "compare" function is not zero
/// </summary>
/// <param name="ra">Array of all rectangles</param>
/// <param name="screenWidth"></param>
/// <param name="screenHeight"></param>
/// <returns>Couple of rectangles and value of "compare" function</returns>
public List<Tuple<Rectangle, Rectangle, int>> GetRelations(Rectangle[] ra, int screenWidth, int screenHeight)
{
Dictionary<long, int> tmpResult = new Dictionary<long, int>();
// the key represents couple of rectangles. As index of one rectangle is int,
// two indexes can be stored in long. First index must be smaller than second,
// this ensures couple can be inserted only once. Value of dictionary is result
// of "compare" function for this couple.
int bucketSizeX = Math.Max(1, (int)Math.Sqrt(screenWidth * screenHeight / ra.Length));
int bucketSizeY = bucketSizeX;
int bucketsNoX = (screenWidth + bucketSizeX - 1) / bucketSizeX;
int bucketsNoY = (screenHeight + bucketSizeY - 1) / bucketSizeY;
List<RectangleInBucket>[,] buckets = InitializeArray<List<RectangleInBucket>>(bucketsNoX, bucketsNoY);
List<RectangleInBucket>[] sortedRects = InitializeArray<List<RectangleInBucket>>(bucketsNoX);
for (int i = 0; i < ra.Length; ++i)
{
RectangleInBucket rib = new RectangleInBucket(ra[i], i, bucketSizeX, bucketSizeY);
sortedRects[rib.bucketXTo - rib.bucketXFrom].Add(rib);// basically radix sort
}
foreach (List<RectangleInBucket> sorted in sortedRects) // start with most narrow rectangles
{
foreach (RectangleInBucket rib in sorted) // all of one width (measured in buckets)
{
for (int x = rib.bucketXFrom; x <= rib.bucketXTo; ++x)
{
for (int y = rib.bucketYFrom; y <= rib.bucketYTo; ++y)
{
processBucket(tmpResult, buckets[x, y], rib);
}
}
for (int y = rib.bucketYFrom; y <= rib.bucketYTo; ++y)
{
buckets[rib.bucketXFrom, y].Add(rib); // left edge of rectangle
if (rib.bucketXFrom != rib.bucketXTo)
{
buckets[rib.bucketXTo, y].Add(rib); // right edge of rectangle
}
}
}
}
List<Tuple<Rectangle, Rectangle, int>> result = new List<Tuple<Rectangle, Rectangle, int>>(tmpResult.Count);
foreach (var t in tmpResult) // transform dictionary into final list
{
if (t.Value != 0)
{
result.Add(Tuple.Create(ra[(int)t.Key], ra[(int)(t.Key >> 32)], t.Value));
}
}
return result;
}

Related

Trying to do a flexible/generic minimum distance level of detail distance ranges system

I'd like to create a level of detail system for my game to load the terrain.
The point here is that I'd like to use the minimum expression for distances, so all the chunks that I load match at the corners and there are no redundant parts, like here:
So, more something like this:
The purpose here is to have the maximum render distance with the minimum count of game objects to load.
I have two main concerns: the system is not flexible for initial minimum distances that are not powers of 2.
As something like this is browsed:
Another thing is that the distance must be patched:
//// If the number is even, then we must do this patch
if (currentDistanceLevel == 128)
sum += currentDistanceLevel / 2; // TODO, why?
I don't understand why...
This is what happens:
This is my code:
/// <summary>
/// Calculate the the minimum distances between level of detail chunks.
/// </summary>
/// <param name="lodLevels">The number of lod levels to go into.</param>
/// <param name="numberOfChunks">The number of chunks after the view distance from the ChunkGenerator system finishes.</param>
/// <returns>A list of distances to use on a grid of level of detail. (See https://i.gyazo.com/0ceb8ff3a4ebab8391532892fc8b6f82.png)</returns>
public static IEnumerable<float> CalculateOptimumLoDValues(int lodLevels, int numberOfChunks, int chunkSize)
{
var back = false;
var sum = 0;
var currentDistanceLevel = chunkSize;
for (var l = 1; l < lodLevels; l++)
{
// Each step first yields the sum accumulated so far
yield return sum;
// If we didn't go back to the previous level on the last iteration
if (!back)
{
// We half the number of chunks of the previous level
numberOfChunks /= 2;
// And we double distance for the next level
currentDistanceLevel *= 2;
}
else
back = false;
numberOfChunks += 2;
// First, if we want to avoid this effect, we must do the following step.
// https://i.stack.imgur.com/1nfgd.png
// at https://stackoverflow.com/questions/72636128/trying-to-position-all-chunks-for-distinct-lod-levels
// ----------------
// If the following distance level has an odd number of elements, then I try to match the current row/column by doubling it, so that none of the following corners will be half of the previous one.
// The number of chunk on the rows goes, 16-10(8+2)-12(10+2)-8(12/2+2)-6(8/2+2)-8(12/2+2)-6(12/2+2)....
if (numberOfChunks / 2 % 2 != 0)
{
// Then, if the number is odd after summing the two corners and dividing,
// we must go back a level, set the flag to true and add the distance
l--;
back = true;
sum += currentDistanceLevel * 3 / 4;
}
else
{
//// If the number is even, then we must do this patch
if (currentDistanceLevel == 128)
sum += currentDistanceLevel / 2; // TODO, why?
else
sum += currentDistanceLevel; // For the rest of the levels we sum it
}
}
}

Minecraft like terrain in unity3d

I'm a fan of Minecraft's old terrain generation with amazing overhangs, mountains and generally interesting worlds. My problem is that right now I'm using Perlin noise, which, while good for smooth terrain, doesn't really give the sporadic jumps that would allow mountains in a mostly flat area.
On top of that, the method I'm using gets 2D Perlin noise, puts it in an array and then sets every Y value under the sampled height to a block; this prevents the generation of overhangs like this: Old Minecraft Terrain Image
Right now I have this:
public class GenerateIdMap : MonoBehaviour {
[Serializable] public class IBSerDict : SerializableDictionaryBase<int, byte> {};
public int size = 60;
public int worldHeight = 3;
public float perlinScale = 15f;
public int seed;
public int heightScale = 10;
public int maxHeight = 256;
public IBSerDict defaultBlocks = new IBSerDict();
void Start()
{
if (seed == 0) seed = (int)Network.time * 10; // only generate a seed when none was supplied
CreateMap();
}
byte[,,] CreateMap()
{
byte[,,] map = new byte[size, maxHeight, size];
for (int x = 0; x < size; x++)
{
for (int z = 0; z < size; z++)
{
int y = (int)(Mathf.PerlinNoise((x + seed) / perlinScale, (z + seed) / perlinScale) * heightScale) + worldHeight;
y = Mathf.Clamp(y, 0, maxHeight-1);
while (y > 0)
{
map[x, y, z] = GetBlockType(y);
y--;
}
}
}
return map;
}
byte GetBlockType(int y)
{
// SortedDictionary already enumerates in key order, so no extra OrderBy is needed
SortedDictionary<int, byte> s_defaultBlocks = new SortedDictionary<int, byte>(defaultBlocks);
foreach (var item in s_defaultBlocks)
{
if (y <= item.Key)
{
print(item.Value);
return item.Value;
}
}
return 0;
} }
The GetBlockType function is new and for getting the default block at that height, I'll fix it up later but it works for now. If you instantiate a prefab at that vector3 you would see terrain. Can someone help me figure out how to make better terrain? Thanks in advance!
Both of your problems should be tackled individually.
The first issue, regarding the lack of variation in the generated values, can usually be fixed in one of two ways: the first is to modify the input into the Perlin noise, i.e. the octaves and persistence; the second is to mix the output of multiple functions, and even use the output of one function as the input to another. By functions, I mean Perlin/Simplex/Voronoi etc.
With the former method, as you mentioned, it can be pretty difficult to get terrain with interesting features over a large area (the generated values are homogeneous), but by playing with the coordinate range and octaves/persistence it can be possible. The second approach is probably recommended, however, because by mixing the inputs and outputs of different functions you can get some really interesting shapes (Voronoi produces circular crater-like shapes).
In order to fix the problem you are having with the overhangs, you would need to change your approach to generating the world slightly. Currently, you are just generating the height values of the terrain and assigning each of those values to give you the terrain surface only. What you ideally want to do is generate a pseudo-random value to use as a pass flag for each of the blocks in the 3D space (including those underground). The flag indicates whether a block should be placed or not in the 3D world.
This is slower, but would generate caves and overhangs as you need.
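A minimal sketch of that pass-flag idea, assuming a hypothetical Noise3D helper (Unity's Mathf.PerlinNoise is 2D only, so this fakes a 3D sample by averaging three 2D ones):
// Noise3D is an assumption: average three 2D Perlin samples as a cheap 3D stand-in.
static float Noise3D(float x, float y, float z)
{
    float xy = Mathf.PerlinNoise(x, y);
    float yz = Mathf.PerlinNoise(y, z);
    float xz = Mathf.PerlinNoise(x, z);
    return (xy + yz + xz) / 3f;
}

// Sketch: place a block wherever the density clears a threshold.
byte[,,] CreateDensityMap(int size, int maxHeight, float scale, float threshold)
{
    byte[,,] map = new byte[size, maxHeight, size];
    for (int x = 0; x < size; x++)
        for (int y = 0; y < maxHeight; y++)
            for (int z = 0; z < size; z++)
            {
                float density = Noise3D(x / scale, y / scale, z / scale);
                // Bias density downward with height so terrain thins out;
                // caves and overhangs appear wherever the threshold is missed.
                density -= (float)y / maxHeight * 0.5f;
                if (density > threshold)
                    map[x, y, z] = 1; // hypothetical solid block id
            }
    return map;
}
Tuning threshold, scale and the height bias trades cave openness against how solid the ground stays.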

How can I print something that isn't there?

I'm doing a text-based game for a school project and I find myself stuck on a quite stupid problem.
The concept is simple, there's a map, a player, some monsters and some items.
For the map data structure I decided to use a 2d array of char's that have a unicode for content.
On top of this, I have a camera, which has a radius. The player never moves on screen; it has an x and y, but what actually moves on screen is the camera itself. This works quite fine except when I get to the corners or any outside wall.
I get my camera doing this
int size = cameraSize/2;
int top = player.GetY() - size, bottom = player.GetY() + size;
int left = player.GetX() - size, right = player.GetX() + size;
char[,] camera = new char[cameraSize, cameraSize];
Console.SetCursorPosition(0,0);
for (int i = top; i < bottom; i++)
{
for (int j = left; j < right; j++)
{
// camera indices are relative to the camera's own origin
camera[i - top, j - left] = map.ReMapPosition(i, j);
Console.Write(camera[i - top, j - left]);
}
Console.Write("\n");
}
// draw the player at the center of the camera view
Console.SetCursorPosition(cameraSize/2, cameraSize/2);
Console.Write(player.GetPlayerChar());
My 'cameraSize' is declared on the beginning of the class and is filled when the constructor is called
private int cameraSize;
cameraSize = difficulty.GetCameraSize();
The class 'difficulty' is irrelevant for my problem.
My problem itself is that I can't make the player positioned on the center when I get to the border walls as there is nothing to get from the array, since these are negative positions.
There are two approaches to this sort of problem.
1) In your loop, check whether a value is out of range and output a placeholder value by hand.
2) Wrap your array in a custom class which ignores out of range values.
Something like this:
class MyWrapper
{
private readonly char[,] data;
public MyWrapper(char[,] data)
{
this.data=data;
}
private bool InRange(int x, int y)
{
return x >= 0 && y >= 0 && x < data.GetLength(0) && y < data.GetLength(1);
}
public char this[int x, int y]
{
get
{
return InRange(x,y) ? data[x,y] : ' ';
}
set
{
if(InRange(x,y)) data[x,y] = value;
}
}
}
My recommendation is for the setter to throw an exception when called with out-of-range values, but my example swallows the failure instead.
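A short usage sketch of the wrapper inside the camera loop (left/right are assumed bounds computed from GetX, mirroring top/bottom):
// Sketch: render through the wrapper so out-of-range reads become blanks.
var safeMap = new MyWrapper(mapData); // mapData: the underlying char[,]
for (int i = top; i < bottom; i++)
{
    for (int j = left; j < right; j++)
        Console.Write(safeMap[i, j]); // returns ' ' outside the map
    Console.Write("\n");
}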
C# can't "retrieve values that aren't there", but you do have a couple of options:
Check whether you are trying to get a negative position, or a position that's too big, and return a space
Or
Increase the size of the array by cameraSize/2 on all sides, which effectively increases both the width and the height by cameraSize, and then restrict your player to move only between the coordinates (cameraSize/2, cameraSize/2) and (mapWidth - cameraSize/2, mapHeight - cameraSize/2). (The coordinates might be (y,x) because of how 2D arrays work, depending on how your code is written.) That way the camera always has padding around it, so there should never be negative indices.
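A minimal sketch of that padding approach (mapWidth/mapHeight and the source map array are assumptions):
// Sketch: copy the map into a larger array with a blank border of size pad.
int pad = cameraSize / 2;
char[,] padded = new char[mapHeight + cameraSize, mapWidth + cameraSize];
for (int y = 0; y < mapHeight + cameraSize; y++)
    for (int x = 0; x < mapWidth + cameraSize; x++)
        padded[y, x] = ' '; // blank border everywhere first
for (int y = 0; y < mapHeight; y++)
    for (int x = 0; x < mapWidth; x++)
        padded[y + pad, x + pad] = map[y, x]; // original map in the middle
With this, clamping the player to the inner region guarantees every camera read stays inside the padded array.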

Adding an "average" parameter to .NET's Random.Next() to curve results

I'd like to be able to add an "average" parameter to Random.Next(Lower, Upper). This method would have min, max and average parameters. I created a method like this a while back for testing (it used lists and was horrible), so I'd like some ideas on how to write a correct implementation.
The reason for having this functionality is for the many procedural/random events in my game. Say you want trees to be 10 units tall most of the time, but still allow heights as low as 5 or as high as 15. A normal Random.Next(5, 15) would return results spread all over that range, but this method would curve the results into more of a bell shape. Meaning 10 would be the most common, and values going out in each direction would be less common. Moving the average down to 7, for example, would make relatively small trees (or whatever this is being used on), but large ones would still be possible, just uncommon.
Previous method (pseudo-code-ish)
Loop from min to max
Closer to average numbers are added to the list more times
A random element is selected from the list, elements closer to average are added
more, so they will be more likely to be chosen.
Okay, so that's like throwing a bunch of candies in a bag and picking a random one. Yeah, slow. What are your thoughts on improving this?
Illustration: (Not exactly accurate but you see the idea)
NOTE: Many people have suggested a bell curve, but the question is how to move the peak of the curve so that it favors one side.
I'm expanding on the idea of generating n random numbers, and taking their average to get a bell-curve effect. The "tightness" parameter controls how steep the curve is.
Edit: Summing a set of random points to get a "normal" distribution is supported by the Central Limit Theorem. Using a bias function to sway results in a particular direction is a common technique, but I'm no expert there.
To address the note at the end of your question, I'm skewing the curve by manipulating the "inner" random number. In this example, I'm raising it to the exponent you provide. Since a Random returns values less than one, raising it to any power will still never be more than one. But the average skews towards zero, as squares, cubes, etc of numbers less than one are even smaller than the base number. exp = 1 has no skew, whereas exp = 4 has a pretty significant skew.
private Random r = new Random();
public double RandomDist(double min, double max, int tightness, double exp)
{
double total = 0.0;
for (int i = 1; i <= tightness; i++)
{
total += Math.Pow(r.NextDouble(), exp);
}
return ((total / tightness) * (max - min)) + min;
}
I ran trials for different values for exp, generating 100,000 integers between 0 and 99. Here's how the distributions turned out.
I'm not sure exactly how the peak relates to the exp value, but the higher the exp, the lower in the range the peak appears.
You could also reverse the direction of the skew by changing the line in the inside of the loop to:
total += (1 - Math.Pow(r.NextDouble(), exp));
...which would give the bias on the high side of the curve.
Edit: So, how do we know what to make "exp" in order to get the peak where we want it? That's a tricky one, and could probably be worked out analytically, but I'm a developer, not a mathematician. So, applying my trade, I ran lots of trials, gathered peak data for various values of exp, and ran the data through the cubic fit calculator at Wolfram Alpha to get an equation for exp as a function of peak.
Here's a new set of functions which implement this logic. The GetExp(...) function implements the equation found by WolframAlpha.
RandomBiasedPow(...) is the function of interest. It returns a random number in the specified ranges, but tends towards the peak. The strength of that tendency is governed by the tightness parameter.
private Random r = new Random();
public double RandomNormal(double min, double max, int tightness)
{
double total = 0.0;
for (int i = 1; i <= tightness; i++)
{
total += r.NextDouble();
}
return ((total / tightness) * (max - min)) + min;
}
public double RandomNormalDist(double min, double max, int tightness, double exp)
{
double total = 0.0;
for (int i = 1; i <= tightness; i++)
{
total += Math.Pow(r.NextDouble(), exp);
}
return ((total / tightness) * (max - min)) + min;
}
public double RandomBiasedPow(double min, double max, int tightness, double peak)
{
// Calculate skewed normal distribution, skewed by Math.Pow(...), specifying where in the range the peak is
// NOTE: This peak will yield unreliable results in the top 20% and bottom 20% of the range.
// To peak at extreme ends of the range, consider using a different bias function
double total = 0.0;
double scaledPeak = (peak - min) / (max - min); // normalize the peak into the 0..1 range
if (scaledPeak < 0.2 || scaledPeak > 0.8)
{
throw new Exception("Peak cannot be in bottom 20% or top 20% of range.");
}
double exp = GetExp(scaledPeak);
for (int i = 1; i <= tightness; i++)
{
// Bias the random number to one side or another, but keep in the range of 0 - 1
// The exp parameter controls how far to bias the peak from normal distribution
total += BiasPow(r.NextDouble(), exp);
}
return ((total / tightness) * (max - min)) + min;
}
public double GetExp(double peak)
{
// Get the exponent necessary for BiasPow(...) to result in the desired peak
// Based on empirical trials, and curve fit to a cubic equation, using WolframAlpha
return -12.7588 * Math.Pow(peak, 3) + 27.3205 * Math.Pow(peak, 2) - 21.2365 * peak + 6.31735;
}
public double BiasPow(double input, double exp)
{
return Math.Pow(input, exp);
}
Here is a histogram using RandomBiasedPow(0, 100, 5, peak), with the various values of peak shown in the legend. I rounded down to get integers between 0 and 99, set tightness to 5, and tried peak values between 20 and 80. (Things get wonky at extreme peak values, so I left that out, and put a warning in the code.) You can see the peaks right where they should be.
Next, I tried boosting Tightness to 10...
Distribution is tighter, and the peaks are still where they should be. It's pretty fast too!
Here's a simple way to achieve this. Since you already have answers detailing how to generate normal distributions, and there are plenty of resources on that, I won't reiterate that. Instead I'll refer to a method I'll call GetNextNormal() which should generate a value from a normal distribution with mean 0 and standard deviation 1.
public int Next(int min, int max, int center)
{
double rand = GetNextNormal(); // standard normal: mean 0, stddev 1
if(rand >= 0)
return center + (int)(rand*(max-center));
return center + (int)(rand*(center-min));
}
(This can be simplified a little, I've written it that way for clarity)
For a rough image of what this is doing, imagine two normal distributions. They're both centered around your center, but for one the min is one standard deviation away, to the left, and for the other, the max is one standard deviation away, to the right. Now imagine chopping them both in half at the center. On the left, you keep the one with the standard deviation corresponding to min, and on the right, the one corresponding to max.
Of course, normal distributions aren't guaranteed to stay within one standard deviation, so there are two things you probably want to do:
Add an extra parameter which controls how tight the distribution is
If you want min and max to be hard limits, you will have to add rejection for values outside those bounds.
A complete method, with those two additions (again keeping everything as ints for now), might look like;
public int Next(int min, int max, int center, int tightness)
{
double rand;
int candidate;
do
{
rand = GetNextNormal(); // draw a fresh sample each attempt, or rejection would loop forever
if(rand >= 0)
candidate = center + (int)(rand*(max-center)/tightness);
else
candidate = center + (int)(rand*(center-min)/tightness);
} while(candidate < min || candidate > max);
return candidate;
}
If you graph the results of this (especially a float/double version), it won't be the most beautiful distribution, but it should be adequate for your purposes.
EDIT
Above I said the results of this aren't particularly beautiful. To expand on that, the most glaring 'ugliness' is a discontinuity at the center point, due to the height of the peak of a normal distribution depending on its standard deviation. Because of this, the distribution you'll end up with will look something like this:
(For min 10, max 100 and center point 70, using a 'tightness' of 3)
So while the probability of a value below the center is equal to the probability above, results will be much more tightly "bunched" around the average on one side than the other. If that's too ugly for you, or you think the features generated by a distribution like that would seem too unnatural, we can add an additional modification, weighting which side of the center is picked by the proportion of the range to the left or right of the center. Adding that to the code (with the assumption you have access to a Random, which I've just called RandomGen), we get:
public int Next(int min, int max, int center, int tightness)
{
double rand;
int candidate;
do
{
rand = Math.Abs(GetNextNormal()); // magnitude only; the side is chosen below
if(ChooseSide(min, max, center))
candidate = center + (int)(rand*(max-center)/tightness);
else
candidate = center - (int)(rand*(center-min)/tightness);
} while(candidate < min || candidate > max);
return candidate;
}
public bool ChooseSide(int min, int max, int center)
{
return RandomGen.Next(min, max) >= center;
}
For comparison, the distribution this will produce with the same min, max, center and tightness is:
As you can see, this is now continuous in frequency, as well as the first derivative (giving a smooth peak). The disadvantage to this version over the other is now you're more likely to get results on one side of the center than the other. The center is now the modal average, not the mean. So it's up to you whether you prefer a smoother distribution or having the center be the true mean of the distribution.
Since you are looking for a normal-ish distribution with a value around a point, within bounds, why not use Random instead to give you two values that you then use to walk a distance from the middle? The following yields what I believe you need:
// NOTE: scoped outside of the function to be random
Random rnd = new Random();
int GetNormalizedRandomValue(int mid, int maxDistance)
{
var distance = rnd.Next(0, maxDistance + 1);
var isPositive = (rnd.Next() % 2) == 0;
if (!isPositive)
{
distance = -distance;
}
return mid + distance;
}
Plugging in http://www.codeproject.com/Articles/25172/Simple-Random-Number-Generation makes this easier and correctly normalized:
int GetNormalizedRandomValue(int mid, int maxDistance)
{
int distance;
do
{
distance = (int)((SimpleRNG.GetNormal() / 5) * maxDistance);
} while (Math.Abs(distance) > maxDistance); // reject both tails, not just the upper one
return mid + distance;
}
I would do something like this:
compute a uniformly distributed double
using that, apply the inverse cumulative distribution function of the normal distribution (the function that maps [0,1] "back" to the accumulated probabilities) or something similar to compute the desired value; e.g. you can slightly adjust the normal distribution to take not just an average and stddev/variance, but an average plus two such values to take care of min/max
round to int, enforce min, max, etc. (see the sketch below)
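A minimal sketch of that recipe. An inverse normal CDF needs a numeric approximation (one appears further down this page), so this version substitutes the Box-Muller transform, which also turns uniform draws into a normal sample:
// Sketch: uniform doubles -> normal sample (Box-Muller) -> clamped int.
private static readonly Random rng = new Random();

public static int NextCurved(int min, int max, double mean, double stddev)
{
    double u1 = 1.0 - rng.NextDouble(); // keep away from log(0)
    double u2 = rng.NextDouble();
    // standard normal from two uniforms (Box-Muller transform)
    double z = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
    double value = mean + stddev * z;
    // round to int and enforce the bounds
    return Math.Min(max, Math.Max(min, (int)Math.Round(value)));
}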
You have two choices here:
Sum up N random numbers from (0, 1/N), which gathers the results around 0.5, then scale the result between x_min and x_max. The value N controls how narrow the results are: the higher the count, the narrower the distribution.
Random rnd = new Random();
int N=10;
double r = 0;
for(int i=0; i<N; i++) { r+= rnd.NextDouble()/N; }
double x = x_min+(x_max-x_min)*r;
Use the actual normal distribution with a mean and a standard deviation. This will not guarantee a minimum or maximum, though.
public double RandomNormal(double mu, double sigma)
{
return NormalDistribution(rnd.NextDouble(), mu, sigma);
}
public double RandomNormal()
{
return RandomNormal(0d, 1d);
}
/// <summary>
/// Normal distribution
/// </summary>
/// <param name="probability">probability value 0..1</param>
/// <param name="mean">mean value</param>
/// <param name="sigma">std. deviation</param>
/// <returns>A normal distribution</returns>
public double NormalDistribution(double probability, double mean, double sigma)
{
return mean+sigma*NormalDistribution(probability);
}
/// <summary>
/// Normal distribution
/// </summary>
/// <param name="probability">probability value 0.0 to 1.0</param>
/// <see cref="NormalDistribution(double,double,double)"/>
public double NormalDistribution(double probability)
{
return Math.Sqrt(2)*InverseErrorFunction(2*probability-1);
}
public double InverseErrorFunction(double P)
{
double Y, A, B, X, Z, W, WI, SN, SD, F, Z2, SIGMA;
const double A1=-.5751703, A2=-1.896513, A3=-.5496261E-1;
const double B0=-.1137730, B1=-3.293474, B2=-2.374996, B3=-1.187515;
const double C0=-.1146666, C1=-.1314774, C2=-.2368201, C3=.5073975e-1;
const double D0=-44.27977, D1=21.98546, D2=-7.586103;
const double E0=-.5668422E-1, E1=.3937021, E2=-.3166501, E3=.6208963E-1;
const double F0=-6.266786, F1=4.666263, F2=-2.962883;
const double G0=.1851159E-3, G1=-.2028152E-2, G2=-.1498384, G3=.1078639E-1;
const double H0=.9952975E-1, H1=.5211733, H2=-.6888301E-1;
X=P;
SIGMA=Math.Sign(X);
if(P<-1d||P>1d)
throw new System.ArgumentException();
Z=Math.Abs(X);
if(Z>.85)
{
A=1-Z;
B=Z;
W=Math.Sqrt(-Math.Log(A+A*B));
if(W>=2.5)
{
if(W>=4.0)
{
WI=1.0/W;
SN=((G3*WI+G2)*WI+G1)*WI;
SD=((WI+H2)*WI+H1)*WI+H0;
F=W+W*(G0+SN/SD);
}
else
{
SN=((E3*W+E2)*W+E1)*W;
SD=((W+F2)*W+F1)*W+F0;
F=W+W*(E0+SN/SD);
}
}
else
{
SN=((C3*W+C2)*W+C1)*W;
SD=((W+D2)*W+D1)*W+D0;
F=W+W*(C0+SN/SD);
}
}
else
{
Z2=Z*Z;
F=Z+Z*(B0+A1*Z2/(B1+Z2+A2/(B2+Z2+A3/(B3+Z2))));
}
Y=SIGMA*F;
return Y;
}
Here is my solution. The MyRandom class features an equivalent of Next() with three additional parameters: center and span indicate the desired range, and retry is the retry count; each retry gives the result another chance to land in the desired range.
static void Main()
{
MyRandom myRnd = new MyRandom();
List<int> results = new List<int>();
Console.WriteLine("123456789012345\r\n");
int bnd = 30;
for (int ctr = 0; ctr < bnd; ctr++)
{
int nextAvg = myRnd.NextAvg(5, 16, 10, 2, 2);
results.Add(nextAvg);
Console.WriteLine(new string((char)9608, nextAvg));
}
Console.WriteLine("\r\n" + String.Format("Out of range: {0}%", results.Where(x => x < 8 || x > 12).Count() * 100 / bnd)); // calculate out-of-range percentage
Console.ReadLine();
}
class MyRandom : Random
{
public MyRandom() { }
public int NextAvg(int min, int max, int center, int span, int retry)
{
int left = (center - span);
int right = (center + span);
if (left < 0 || right >= max)
{
throw new ArgumentException();
}
int next = this.Next(min, max);
int ctr = 0;
while (++ctr <= retry && (next < left || next > right))
{
next = this.Next(min, max);
}
return next;
}
}
Is there any reason that the distribution must actually be a bell curve? For example, using:
public int RandomDist(int min, int max, int average)
{
Random rnd = new Random(); // better scoped outside the method so repeated calls stay random
double n = rnd.NextDouble();
if (n < 0.75)
{
return (int)(Math.Sqrt(n * 4 / 3) * (average - min)) + min;
} else {
return (int)(Math.Sqrt(n * 4 - 3) * (max - average)) + average;
}
}
will give a number between min and max, with the mode at average.
You could use the Normal distribution class from MathNet.Numerics (mathdotnet.com).
An example of its use:
// Distribution with mean = 10, stddev = 1.25 (5 ~ 15 99.993%)
var dist = new MathNet.Numerics.Distributions.Normal(10, 1.25);
var samples = dist.Samples().Take(10000);
Assert.True(samples.Average().AlmostEqualInDecimalPlaces(10, 3));
You can adjust the spread by changing the standard deviation (the 1.25 I used). The only problem is that it will occasionally give you values outside of your desired range, so you'd have to check for them. If you want something more skewed one way or the other, you could try other distribution functions from the library too.
Update - Example class:
public class Random
{
MathNet.Numerics.Distributions.Normal _dist;
int _min, _max, _mean;
public Random(int mean, int min, int max)
{
_mean = mean;
_min = min;
_max = max;
var stddev = Math.Min(Math.Abs(mean - min), Math.Abs(max - mean)) / 3.0;
_dist = new MathNet.Numerics.Distributions.Normal(mean, stddev);
}
public int Next()
{
int next;
do
{
next = (int)_dist.Sample();
} while (next < _min || next > _max);
return next;
}
public static int Next(int mean, int min, int max)
{
return new Random(mean, min, max).Next();
}
}
Not sure this is what you want, but here is a way to draw a random number with a distribution that is uniform from min to avg and from avg to max, while ensuring that the mean equals avg.
Assume probability p for a draw from [min, avg] and probability 1-p for a draw from [avg, max]. The expected value will be p*(min+avg)/2 + (1-p)*(avg+max)/2 = p*min/2 + avg/2 + (1-p)*max/2 = avg. Solving for p: p = (max-avg)/(max-min).
The generator works as follows: draw a random number in [0, 1]. If it is less than p, draw a random number from [min, avg]; otherwise, draw one from [avg, max].
The plot of the probability is piecewise constant: p from min to avg and 1-p from avg to max. Extreme values are not penalized.
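A minimal sketch of that generator (the names are mine, not from the answer above):
// Sketch: piecewise-uniform draw whose mean lands exactly on avg.
private static readonly Random rng = new Random();

public static double NextWithMean(double min, double max, double avg)
{
    double p = (max - avg) / (max - min); // probability of the [min, avg] side
    if (rng.NextDouble() < p)
        return min + rng.NextDouble() * (avg - min); // uniform on [min, avg]
    return avg + rng.NextDouble() * (max - avg);     // uniform on [avg, max]
}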

Two types of iterators

I apologize for all the code, but I had a hard time describing what I was trying to do. I am creating a 2D grid for a tile map. The tiles (blocks) are broken up into, say, a 10x10 square of tiles called a chunk; chunks form 10x10 squares called regions; and a 10x10 square of regions forms the world. The square side dimension is blockSize.
All coordinates are [X,Y], 0,0 is upper left, all scanning left to right and then top to down.
From Largest to Smallest: World -> Region -> Chunk -> Block.
This image shows how a World with a 2x2 Blocksize would be laid out:
The "Large" addresses show what each blocks address would be if it were in a single giant array instead of broken up into subunits.
At the end of this post I give a cliff notes version of the code I already have working. I can create all the structures. I have iterators set up (see code at end) so that each level can iterate only the level lower than it. My code can do the following:
// Create a world of 10x10 regions, each made up of 10x10 chunks, each made up of 10x10 tiles
World world = new World(blockSize);
// Address the upper left hand corner of the world. The first region's first chunk's first block.
Block block = world.Regions[0,0].Chunks[0,0].Blocks[0,0];
// Address a random chunk
Chunk chunk = world.Regions[1,2].Chunks[6,2];
// Iterate over the Block[,] grid of the given chunk from left to right, up to down
// This will give us every block in Region 1,2 Chunk 6,2
foreach (Block block in chunk) {}
// Address a random region
Region region = world.Regions[4,5];
// Iterate over the Chunk[,] grid of the given region from left to right, up to down
// This will give us every Chunk in Region 4,5
foreach (Chunk chunk in region) {}
// In World, iterate over the Region[,] grid from left to right, up to down
// This will give us every Region in the World
foreach (Region region in world.Regions) {}
...
What I want is for my iterators to be able to iterate across two levels of my data. For example, given a Region, scan over all the chunks in that region and give me a list of all blocks in the whole thing. Or given a world, get all the chunks in the world. Or an enormous list of all the blocks in the entire World.
...
// Given a region, return all chunks in that region
foreach (Chunk chunk in region) {}
// Given a region, return all blocks in all chunks in that region
foreach (Block block in region.GetAllBlocks)
{
// Scan the chunks from left to right.
// In each chunk, scan the blocks from left to right.
// Cover every block in the region.
}
// Given a world, return all the regions
Region[,] regionArray = world.Regions; // For clarity
foreach (Region region in regionArray) {}
// Given a world, return all the chunks in all the regions
Region[,] regionArray = world.Regions; // For clarity
foreach (Chunk chunk in world.GetAllChunks)
{
// Scan the regions from left to right, up to down.
// In each region, scan the chunks from left to right, up to down.
}
// Given a world, return all the blocks in all the chunks in all the regions
Region[,] regionArray = world.Regions; // For clarity
foreach (Block block in world.GetAllBlocks)
{
// Scan the regions from left to right, up to down.
// In each region, scan the chunks from left to right, up to down.
// In each chunk, scan the blocks from left to right, up to down.
}
...
How do I write my iterator code so that I can generate the lists I want? This is the first time I've done anything complex with iterators, and I'm having a lot of trouble figuring out how to do it. Is trying this sort of thing a good idea? Is it efficient or inefficient?
Here is a trimmed down version of what my code looks like to show what I have working:
/* **************************************** */
class Block
{
int blockX, blockY; // Grid X/Y of this block in its chunk
public Block(int blockX, int blockY)
{
this.blockX = blockX;
this.blockY = blockY;
}
} // Block Class
/* **************************************** */
class Chunk : IEnumerator, IEnumerable
{
int chunkX, chunkY; // X/Y of the chunk in its region
int blockSize;
int containsBlocks;
public Block[,] blocks;
int enumeratorIndex = -1;
public Chunk(int chunkX, int chunkY, int blockSize)
{
this.chunkX = chunkX;
this.chunkY = chunkY;
this.blockSize = blockSize;
this.containsBlocks = blockSize * blockSize;
blocks = new Block[blockSize, blockSize];
for (int x = 0; x < blockSize; ++x)
{
for (int y = 0; y < blockSize; ++y)
{
blocks[x, y] = new Block(x, y);
}
}
} // constructor
public IEnumerator GetEnumerator()
{
return (IEnumerator)this;
}
public bool MoveNext()
{
enumeratorIndex++;
return (enumeratorIndex < this.containsBlocks);
}
public void Reset() { enumeratorIndex = -1; } // back to just before the first element
public object Current
{
get
{
int y = enumeratorIndex / blockSize;
int x = enumeratorIndex % blockSize;
return blocks[x, y];
}
}
} // Chunk Class
/* **************************************** */
class Region : IEnumerator, IEnumerable
{
int regionX, regionY; // X/Y of region in the world
int blockSize;
int containsChunks;
public Chunk[,] chunks;
int enumeratorIndex = -1;
...
Same kind of constructor to set up the Chunk[,] array, but this time the iterator is
...
public object Current
{
get
{
int y = enumeratorIndex / blockSize;
int x = enumeratorIndex % blockSize;
return chunks[x, y];
}
}
} // Region Class
/* **************************************** */
class World : IEnumerator, IEnumerable
{
public Region[,] regions; // There is only one world; here are its regions
int blockSize;
...
etc
...
public object Current
{
get
{
int y = enumeratorIndex / blockSize;
int x = enumeratorIndex % blockSize;
return regions[x, y];
}
}
}
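The question goes unanswered here, but for what it's worth, a minimal sketch of the cross-level iteration using IEnumerable<T> and yield return; the method names follow the wished-for API above, and the compiler generates the enumerator state machines, which is simpler and less error-prone than hand-rolling IEnumerator:
// Sketch, assuming these become members of Chunk and Region respectively.
// Chunk: iterate its own blocks left to right, top to bottom.
public IEnumerable<Block> GetBlocks()
{
    for (int y = 0; y < blockSize; ++y)
        for (int x = 0; x < blockSize; ++x)
            yield return blocks[x, y];
}

// Region: one level down...
public IEnumerable<Chunk> GetChunks()
{
    for (int y = 0; y < blockSize; ++y)
        for (int x = 0; x < blockSize; ++x)
            yield return chunks[x, y];
}

// ...and two levels down, by deferring to each chunk in turn.
public IEnumerable<Block> GetAllBlocks()
{
    foreach (Chunk chunk in GetChunks())
        foreach (Block block in chunk.GetBlocks())
            yield return block;
}

// World follows the same pattern: GetAllChunks() walks regions and yields from
// region.GetChunks(); GetAllBlocks() yields from region.GetAllBlocks().
With these in place, foreach (Block block in region.GetAllBlocks()) works as desired.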
