Software Perlin noise implementation - C#

I have written a 2D Perlin noise implementation based on information from here, here, here, and here. However, the output looks like this (image not shown).
public static double Perlin(double X, double XScale, double Y, double YScale, double Persistance, double Octaves) {
    double total = 0.0;
    for (int i = 0; i < Octaves; i++) {
        int frq = (int) Math.Pow(2, i);           // frequency: 2^i
        int amp = (int) Math.Pow(Persistance, i); // amplitude: Persistance^i
        total += InterpolatedSmoothNoise((X / XScale) * frq, (Y / YScale) * frq) * amp;
    }
    return total;
}
private static double InterpolatedSmoothNoise(double X, double Y) {
    int ix = (int) Math.Floor(X);
    double fx = X - ix;
    int iy = (int) Math.Floor(Y);
    double fy = Y - iy;

    double v1 = SmoothPerlin(ix,     iy);     // --
    double v2 = SmoothPerlin(ix + 1, iy);     // +-
    double v3 = SmoothPerlin(ix,     iy + 1); // -+
    double v4 = SmoothPerlin(ix + 1, iy + 1); // ++

    double i1 = Interpolate(v1, v2, fx);
    double i2 = Interpolate(v3, v4, fx);
    return Interpolate(i1, i2, fy);
}
private static double SmoothPerlin(int X, int Y) {
    // 2D smoothing: the posted version referenced a stray Z and called Noise
    // with three arguments even though Noise takes two; this is the
    // two-argument equivalent (weights sum to 1: 4/8 + 1/2).
    double sides  = (Noise(X - 1, Y) + Noise(X + 1, Y) + Noise(X, Y - 1) + Noise(X, Y + 1)) / 8.0;
    double center = Noise(X, Y) / 2.0;
    return sides + center;
}
private static double Noise(int X, int Y) {
    uint m_z = (uint) (36969 * (X & 65535) + (X >> 16));
    uint m_w = (uint) (18000 * (Y & 65535) + (Y >> 16));
    uint ou = (m_z << 16) + m_w;
    return (ou + 1.0) * 2.328306435454494e-10;
}
Any input on what is wrong is appreciated.
EDIT: I found a way to solve this: I now use an array of doubles generated at load time. Suggestions for a good random number generator are still appreciated, though.

I suppose this effect is due to your noise function (all other code looks ok).
The function
private static double Noise(int X, int Y) {
    uint m_z = (uint) (36969 * (X & 65535) + (X >> 16));
    uint m_w = (uint) (18000 * (Y & 65535) + (Y >> 16));
    uint ou = (m_z << 16) + m_w;
    return (ou + 1.0) * 2.328306435454494e-10;
}
isn't very noisy but strongly correlated with your input X and Y variables. Try using any other pseudo-random function which you seed with your input.

I reconstructed your code in C and, following the suggestion from @Howard, this code is working well for me. I am not sure which Interpolate function you used; I used linear interpolation in my code, together with the following noise function:
static double Noise2(int x, int y) {
    int n = x + y * 57;
    n = (n << 13) ^ n;
    return 1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
}
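The question's code calls Interpolate but never defines it, and the answer only says it used linear interpolation. A minimal linear version in C# matching the three-argument calls in InterpolatedSmoothNoise might look like this (a sketch, not code from either post):

private static double Interpolate(double a, double b, double t)
{
    // Linear blend of a and b by t in [0, 1]; a smoother curve
    // (e.g. cosine) would reduce visible grid artifacts.
    return a + (b - a) * t;
}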


Pack/Unpack 4 bytes into Integer

As the title says, I have been trying to pack 4 full 0-255 bytes into one integer in C# using bit shifting. I'm trying to compress some data; it currently uses 40 bytes, but in theory I only need 12, which means packing all my data into 3 integers.
Currently my data is:
float3 pos; // Position Relative to Object
float4 color; // RGB is Compressed into X, Y is MatID, Z and W are unused
float3 normal; // A simple Normal -1 to 1 range
But in theory I can compress it to:
int pos;    // X, Y, Z, MatID - these are never > 200 nor negative
int color;  // R, G, B, unused fourth byte
int normal; // X, Y, Z mapped so [0, 255] = [-1, 1] with 128 being 0; unused fourth byte. Should be plenty accurate for my needs.
So my question is: how would I go about doing this? I'm reasonably new to bit shifting and haven't managed to get much working.
If I understand it right, you need to store 4 values in 4 bytes (one value per byte) and then extract the individual values with bit shift operations.
You can do it like this:
using System;

public class Program
{
    public static void Main()
    {
        uint pos = 0x4a5b6c7d;
        // x is the first (lowest) byte, y the second, z the third, matId the fourth
        uint x = pos & 0xff;
        uint y = (pos & 0xff00) >> 8;
        uint z = (pos & 0xff0000) >> 16;
        uint matId = (pos & 0xff000000) >> 24; // uint mask literal: 0xff << 24 would be a negative int
        Console.WriteLine(x + " " + y + " " + z + " " + matId);
        Console.WriteLine((0x7d) + " " + (0x6c) + " " + (0x5b) + " " + (0x4a));
    }
}
x will be equal to the result of 0x4a5b6c7d & 0x000000ff = 0x7d
y will be equal to the result of 0x4a5b6c7d & 0x0000ff00, right shifted by 8 bits = 0x6c
Similar for z and matId.
Edit
For packing, you need to use the | operator:
Left shift the fourth value by 24, say a
Left shift the third value by 16, say b
Left shift the second value by 8, say c
Nothing for the first value, say d
Do a binary OR of all 4 and store it in an int: int packed = a | b | c | d
using System;

public class Program
{
    static void Unpack(uint pos)
    {
        // x is the first (lowest) byte, y the second, z the third, matId the fourth
        uint x = pos & 0xff;
        uint y = (pos & 0xff00) >> 8;
        uint z = (pos & 0xff0000) >> 16;
        uint matId = (pos & 0xff000000) >> 24;
        Console.WriteLine(x + " " + y + " " + z + " " + matId);
    }

    static uint Pack(int x, int y, int z, int matId)
    {
        // explicit casts: int does not convert to uint implicitly
        uint newX = (uint)x, newY = (uint)y, newZ = (uint)z, newMatId = (uint)matId;
        uint pos2 = (newMatId << 24) | (newZ << 16) | (newY << 8) | newX;
        Console.WriteLine(pos2);
        return pos2;
    }

    public static void Main()
    {
        uint packedInt = Pack(10, 20, 30, 40);
        Unpack(packedInt);
    }
}
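For the normal from the question, which maps [-1, 1] onto [0, 255] with 128 standing for 0, a small helper could sit on top of Pack. This is a hedged sketch (PackNormal is a made-up name, and Math.Clamp assumes .NET Core 2.0 or later):

static uint PackNormal(float nx, float ny, float nz)
{
    // Quantize each component from [-1, 1] to a byte in [0, 255]; 0 maps to ~128.
    int bx = Math.Clamp((int)Math.Round((nx * 0.5f + 0.5f) * 255f), 0, 255);
    int by = Math.Clamp((int)Math.Round((ny * 0.5f + 0.5f) * 255f), 0, 255);
    int bz = Math.Clamp((int)Math.Round((nz * 0.5f + 0.5f) * 255f), 0, 255);
    return Pack(bx, by, bz, 0); // fourth byte left unused, as in the question
}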

Hash function for noise generation

I'm making a value noise generator, and I found that my current hash produces a visible repeating pattern in the image (screenshot omitted). So I am looking for a better hash function that is less predictable/repeatable.
I'm using a hash instead of plain random numbers because I want it to be deterministic: given (x, y) coordinates, it should always produce the same result.
It would also be nice, though not mandatory, if the hash function scaled easily to more parameters, like (x, y, z) or (x, y, z, t), instead of just two.
My current hash is:
public static class Hash
{
    public static float GetHash(int x)
    {
        x = x ^ 61 ^ (x >> 16);
        x += x << 3;
        x ^= x >> 4;
        x *= 0x27d4eb2d;
        x ^= x >> 15;
        return x / (float)int.MaxValue;
    }

    public static float GetHash(int x, int y) => GetHash((y << 8) + x);
}
I added the line x / (float)int.MaxValue because I want a float result from 0 to 1.
But I must admit that I just copy-pasted it from somewhere. Bitwise operations (and hashes) are not my strength.
Original answer:
I would use a library like https://github.com/Auburns/FastNoise_CSharp
Maybe you can learn from the source code that is in that FastNoise.cs file
Modified answer including code; the hashing functions are from the referenced source, "Copyright(c) 2017 Jordan Peck":
private const int X_PRIME = 1619;
private const int Y_PRIME = 31337;
private const int Z_PRIME = 6971;
private const int W_PRIME = 1013;

private static int Hash2D(int seed, int x, int y)
{
    int hash = seed;
    hash ^= X_PRIME * x;
    hash ^= Y_PRIME * y;
    hash = hash * hash * hash * 60493;
    hash = (hash >> 13) ^ hash;
    return hash;
}

private static int Hash3D(int seed, int x, int y, int z)
{
    int hash = seed;
    hash ^= X_PRIME * x;
    hash ^= Y_PRIME * y;
    hash ^= Z_PRIME * z;
    hash = hash * hash * hash * 60493;
    hash = (hash >> 13) ^ hash;
    return hash;
}

private static int Hash4D(int seed, int x, int y, int z, int w)
{
    int hash = seed;
    hash ^= X_PRIME * x;
    hash ^= Y_PRIME * y;
    hash ^= Z_PRIME * z;
    hash ^= W_PRIME * w;
    hash = hash * hash * hash * 60493;
    hash = (hash >> 13) ^ hash;
    return hash;
}
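The question asked for a float from 0 to 1; a thin wrapper over Hash2D can map the integer hash the same way the original GetHash did. A sketch (GetHash01 is a made-up name, not part of FastNoise):

private static float GetHash01(int seed, int x, int y)
{
    // Mask off the sign bit, then scale into [0, 1],
    // mirroring the x / (float)int.MaxValue idea from the question.
    int h = Hash2D(seed, x, y);
    return (h & 0x7fffffff) / (float)int.MaxValue;
}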

How to extract multiple parameters from a binary chromosome

I am trying to use the AForge.Net Genetics library to create a simple application for optimization purposes. I have a scenario with four input parameters, so I tried to modify the "OptimizationFunction2D.cs" class located in the AForge.Genetic project to handle four parameters.
While converting the binary chromosomes into 4 parameters (type = double), I am not sure if my approach is correct, as I don't know how to verify the extracted values. Below is the code segment where my code differs from the original AForge code:
public double[] Translate(IChromosome chromosome)
{
    // get chromosome's value
    ulong val = ((BinaryChromosome) chromosome).Value;
    // chromosome's length
    int length = ((BinaryChromosome) chromosome).Length;
    // length of each component
    int wLength = length / 4;
    int xLength = length / 4;
    int yLength = length / 4;
    int zLength = length / 4;
    // maximum values, which double as bit masks
    ulong wMax = 0xFFFFFFFFFFFFFFFF >> (64 - wLength);
    ulong xMax = 0xFFFFFFFFFFFFFFFF >> (64 - xLength);
    ulong yMax = 0xFFFFFFFFFFFFFFFF >> (64 - yLength);
    ulong zMax = 0xFFFFFFFFFFFFFFFF >> (64 - zLength);
    // W component
    double wPart = val & wMax;
    // X component
    double xPart = (val >> wLength) & xMax;
    // Y component
    double yPart = (val >> (wLength + xLength)) & yMax;
    // Z component
    double zPart = val >> (wLength + xLength + yLength);
    // translate to the optimization function's space
    double[] ret = new double[4];
    ret[0] = wPart * _rangeW.Length / wMax + _rangeW.Min;
    ret[1] = xPart * _rangeX.Length / xMax + _rangeX.Min;
    ret[2] = yPart * _rangeY.Length / yMax + _rangeY.Min;
    ret[3] = zPart * _rangeZ.Length / zMax + _rangeZ.Min;
    return ret;
}
I am not sure if I am correctly separating the chromosome value into four parts (wPart/xPart/yPart/zPart). The original function in the AForge.Genetic library looks like this:
public double[] Translate(IChromosome chromosome)
{
    // get chromosome's value
    ulong val = ((BinaryChromosome) chromosome).Value;
    // chromosome's length
    int length = ((BinaryChromosome) chromosome).Length;
    // length of X component
    int xLength = length / 2;
    // length of Y component
    int yLength = length - xLength;
    // X maximum value - equal to X mask
    ulong xMax = 0xFFFFFFFFFFFFFFFF >> (64 - xLength);
    // Y maximum value
    ulong yMax = 0xFFFFFFFFFFFFFFFF >> (64 - yLength);
    // X component
    double xPart = val & xMax;
    // Y component
    double yPart = val >> xLength;
    // translate to the optimization function's space
    double[] ret = new double[2];
    ret[0] = xPart * rangeX.Length / xMax + rangeX.Min;
    ret[1] = yPart * rangeY.Length / yMax + rangeY.Min;
    return ret;
}
Can someone please confirm if my conversion process is correct or is there a better way of doing it.
It works, but it doesn't need to be so complicated.
ulong wMax = 0xFFFFFFFFFFFFFFFF >> (64 - wLength);
returns the same value for all four of wMax/xMax/yMax/zMax, so compute it once and call it componentMask. Then each component is
part = (val >> (wLength * pos)) & componentMask;
where pos is the 0-based position of the component: 0 for W, 1 for X, and so on.
The rest is OK.
EDIT:
If the length is not divisible by 4, you can make the last part just val >> (wLength * pos) so it picks up the remaining bits.
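Put together, the simplified Translate might look like the sketch below. It assumes the _rangeW.._rangeZ fields from the question's class and a chromosome length divisible by 4:

public double[] Translate(IChromosome chromosome)
{
    ulong val = ((BinaryChromosome) chromosome).Value;
    int length = ((BinaryChromosome) chromosome).Length;

    // all four components have the same length, so one mask suffices
    int componentLength = length / 4;
    ulong componentMask = 0xFFFFFFFFFFFFFFFF >> (64 - componentLength);

    var ranges = new[] { _rangeW, _rangeX, _rangeY, _rangeZ };
    double[] ret = new double[4];
    for (int pos = 0; pos < 4; pos++)
    {
        double part = (val >> (componentLength * pos)) & componentMask;
        ret[pos] = part * ranges[pos].Length / componentMask + ranges[pos].Min;
    }
    return ret;
}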

Concatenate three 4-bit values

I am trying to get the original 12-bit value back from a base26 string. I figured that I need a zero-fill right shift operator, like Java's >>>, to deal with the zero padding. How do I do this?
No luck so far with the following code:
static string chars = "0123456789ABCDEFGHIJKLMNOP";

static int FromStr(string s)
{
    int n = (chars.IndexOf(s[0]) << 4) +
            (chars.IndexOf(s[1]) << 4) +
            (chars.IndexOf(s[2]));
    return n;
}
Edit: I'll post the full code to complete the context:
static string chars = "0123456789ABCDEFGHIJKLMNOP";

static void Main()
{
    int n = FromStr(ToStr(182));
    Console.WriteLine(n);
    Console.ReadLine();
}

static string ToStr(int n)
{
    if (n <= 4095)
    {
        char[] cx = new char[3];
        cx[0] = chars[n >> 8];
        cx[1] = chars[(n >> 4) & 15]; // 15 = 0xF masks one 4-bit nibble
        cx[2] = chars[n & 15];
        return new string(cx);
    }
    return string.Empty;
}

static int FromStr(string s)
{
    int n = (chars.IndexOf(s[0]) << 8) +
            (chars.IndexOf(s[1]) << 4) +
            (chars.IndexOf(s[2]));
    return n;
}
Your representation is base26, so the answer that you are going to get from a three-character value is not going to be 12 bits: it's going to be in the range 0..17575, inclusive, which requires 15 bits.
Recall that shifting left by k bits is the same as multiplying by 2^k. Hence, your x << 4 operations are equivalent to multiplying by 16. Also recall that when you convert a base-X number, you need to multiply its digits by a power of X, so your code should be multiplying by 26, rather than shifting the number left, like this:
int n = (chars.IndexOf(s[0]) * 26 * 26) +
        (chars.IndexOf(s[1]) * 26) +
        (chars.IndexOf(s[2]));
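Going the other way, an encoder matching this base-26 FromStr would use division and modulo instead of shifts. A sketch (ToStr26 is a made-up name), valid for values 0..17575:

static string ToStr26(int n)
{
    char[] cx = new char[3];
    cx[0] = chars[(n / (26 * 26)) % 26]; // most significant base-26 digit
    cx[1] = chars[(n / 26) % 26];
    cx[2] = chars[n % 26];
    return new string(cx);
}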

Saving a vector as a single number?

I was wondering if it would be possible to get a vector with an X and a Y value as a single number, knowing that both X and Y can range from -65000 to +65000.
Is this possible in any way?
Code examples on how to convert from this kind of number and to it would be nice.
Store it in a ulong:
ulong rslt = (uint)x;
rslt = rslt << 32;
rslt |= ((uint)y);
To get it out:
int x = (int)(rslt >> 32);
int y = (int)(rslt & 0xFFFFFFFF);
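A quick round-trip check of that packing, using values at the extremes of the stated range:

int x0 = -65000, y0 = 65000;
ulong rslt = ((ulong)(uint)x0 << 32) | (uint)y0;
int x = (int)(rslt >> 32);        // -65000
int y = (int)(rslt & 0xFFFFFFFF); // 65000
Console.WriteLine(x + ", " + y);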
Assuming X and Y are both integer values and there is no overflow (a 32-bit value is not enough), you can use, e.g. (pseudocode):
V = fromXY(X, Y) = (y+65000)*130001+(x+65000)
(X,Y) = toXY(V) = (V%130001-65000,V/130001-65000) // <= / is integer division
(130001 is the number of distinct values for X or Y)
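A direct C# rendering of that pseudocode (the names FromXY/ToXY follow the pseudocode; a long holds the result, since the maximum is about 1.69e10 and overflows an int):

const int Range = 130001; // number of distinct values per axis
const int Offset = 65000; // shifts [-65000, 65000] to [0, 130000]

static long FromXY(int x, int y)
{
    return (long)(y + Offset) * Range + (x + Offset);
}

static (int x, int y) ToXY(long v)
{
    return ((int)(v % Range) - Offset, (int)(v / Range) - Offset);
}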
To combine:
var limit = 65000;
var x = 1;
var y = 2;
var single = x * (limit + 1) + y;
And then:
y = single % (limit + 1);
x = (single - y) / (limit + 1);
Of course, you have to assume that the maximum value for single fits within the size of the data type that stores it (which in this case it does).
In C/C++, a union does what you want very easily.
See also: http://www.cplusplus.com/doc/tutorial/other_data_types/
#include <iostream>
using namespace std;

typedef long long int64; // 'long' alone is only 32 bits on many platforms
typedef int int32;

union {
    struct { int32 a, b; }; // anonymous struct: a common compiler extension
    int64 a_and_b;
} stacker;

int main()
{
    stacker.a = -1000;
    stacker.b = 2000;
    cout << stacker.a << ", " << stacker.b << endl;
    cout << stacker.a_and_b << endl;
}
this will output:
-1000, 2000 <-- a and b read as two int32
8594229558296 <-- a and b interpreted as a single int64
