Hash function for noise generation - C#

I'm making a value noise generator, and I found that my current hash produces a visible pattern in the generated image.
So I am looking for a better, less predictable/repeatable hash function.
I'm using a hash instead of plain random numbers because I want it to be deterministic: given (x, y) coordinates, it should always produce the same result.
It would also be nice, though not mandatory, if the hash function could easily be scaled to accept more parameters, like (x, y, z) or (x, y, z, t), instead of just two.
My current hash is:
public static class Hash
{
    public static float GetHash(int x)
    {
        x = x ^ 61 ^ (x >> 16);
        x += x << 3;
        x ^= x >> 4;
        x *= 0x27d4eb2d;
        x ^= x >> 15;
        return x / (float)int.MaxValue;
    }

    public static float GetHash(int x, int y) => GetHash((y << 8) + x);
}
I added the division x / (float)int.MaxValue because I want a float result from 0 to 1.
But I must admit that I just copy-pasted it from somewhere. Bitwise operations (and hashes) are not my strength.

Original answer:
I would use a library like https://github.com/Auburns/FastNoise_CSharp
Maybe you can learn from the source code that is in that FastNoise.cs file
Modified answer including code; the hashing functions below come from the referenced source (Copyright (c) 2017 Jordan Peck):
private const int X_PRIME = 1619;
private const int Y_PRIME = 31337;
private const int Z_PRIME = 6971;
private const int W_PRIME = 1013;

private static int Hash2D(int seed, int x, int y)
{
    int hash = seed;
    hash ^= X_PRIME * x;
    hash ^= Y_PRIME * y;

    hash = hash * hash * hash * 60493;
    hash = (hash >> 13) ^ hash;

    return hash;
}

private static int Hash3D(int seed, int x, int y, int z)
{
    int hash = seed;
    hash ^= X_PRIME * x;
    hash ^= Y_PRIME * y;
    hash ^= Z_PRIME * z;

    hash = hash * hash * hash * 60493;
    hash = (hash >> 13) ^ hash;

    return hash;
}

private static int Hash4D(int seed, int x, int y, int z, int w)
{
    int hash = seed;
    hash ^= X_PRIME * x;
    hash ^= Y_PRIME * y;
    hash ^= Z_PRIME * z;
    hash ^= W_PRIME * w;

    hash = hash * hash * hash * 60493;
    hash = (hash >> 13) ^ hash;

    return hash;
}

Related

Custom Random Number Generator

Is it possible to get an extremely fast but reliable (same input = same output, so I can't use time) pseudo-random number generator? I want the end result to be something like float NumGen(int x, int y, int seed); so that it creates a random number between 0 and 1 based on those three values. I found several random number generators, but I can't get them to work, and the random number generator that comes with Unity is far too slow to use. I have to make about 9 calls to the generator per 1 meter of terrain, so I don't really care if it's not perfectly statistically random, just that it works really quickly. Does anyone know of an algorithm that fits my needs? Thanks :)
I think you are underestimating the System.Random class. It is quite speedy. I believe your slowdown is related to creating a new instance of the Random class on each call to your NumGen method.
In my quick test I was able to generate 100,000 random numbers using System.Random in about 1 millisecond.
To avoid the slow down consider seed points in your 2D plane. Disperse the seed points so that they cover a distance no greater than 100,000 meters. Then associate (or calculate) the nearest seed point for each meter, and use that point as your seed to System.Random.
Yes, you will be generating a ton of random numbers you will never use, but they are virtually free.
Pseudo-code:
double NumGen(int x, int y, int distance, int seed) {
    Random random = new Random(seed);
    double result = 0;
    // Advance the sequence to the slot associated with this point.
    for (int i = 0; i < distance; i++) {
        result = random.NextDouble();
    }
    return result;
}
You could modify this simple outline to return a sequence of random numbers (possibly representing a grid), and couple that with a caching mechanism. That would let you conserve memory and improve (lessen) CPU consumption.
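For example, here is a minimal sketch of that caching idea (my own illustration; the names NumGenCached, cache, and gridSize are hypothetical, and System.Collections.Generic is assumed to be in scope):

static readonly Dictionary<int, double[]> cache = new Dictionary<int, double[]>();

static double NumGenCached(int distance, int seed, int gridSize = 1000)
{
    double[] row;
    if (!cache.TryGetValue(seed, out row))
    {
        // Generate and cache one sequence per seed point; later calls reuse it for free.
        var random = new Random(seed);
        row = new double[gridSize];
        for (int i = 0; i < gridSize; i++)
            row[i] = random.NextDouble();
        cache[seed] = row;
    }
    return row[distance % gridSize];
}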
I guess you had to create a Random instance on every call to NumGen. To get the function to return the same number for the same parameters, you could use a hash function instead.
I tested a few things, and this code was about 3 times faster than recreating instances of Random:
// using System.Security.Cryptography;
static MD5 hasher = MD5.Create();
static byte[] outbuf;
static byte[] inbuf = new byte[12];

static float floatHash(uint x, uint y, uint z) {
    inbuf[0] = (byte)(x >> 24);
    inbuf[1] = (byte)(x >> 16);
    inbuf[2] = (byte)(x >> 8);
    inbuf[3] = (byte)(x);
    inbuf[4] = (byte)(y >> 24);
    inbuf[5] = (byte)(y >> 16);
    inbuf[6] = (byte)(y >> 8);
    inbuf[7] = (byte)(y);
    inbuf[8] = (byte)(z >> 24);
    inbuf[9] = (byte)(z >> 16);
    inbuf[10] = (byte)(z >> 8);
    inbuf[11] = (byte)(z);
    outbuf = hasher.ComputeHash(inbuf);
    return ((float)BitConverter.ToUInt64(outbuf, 0)) / ulong.MaxValue;
}
Another method, using modular exponentiation as in RSA, is about 5 times faster than new System.Random(seed):
static uint prime = 4294967291;
static uint ord = 4294967290;
static uint generator = 4294967279;
static uint sy;
static uint xs;
static uint xy;

static float getFloat(uint x, uint y, uint seed) {
    // Returns values 1 >= v > 0; replace 'ord' with 'prime' to get 1 > v > 0.
    // One call to modPow would be enough if all the data fit into an ulong.
    sy = modPow(generator, (((ulong)seed) << 32) + (ulong)y, prime);
    xs = modPow(generator, (((ulong)x) << 32) + (ulong)seed, prime);
    xy = modPow(generator, (((ulong)sy) << 32) + (ulong)xs, prime);
    return ((float)xy) / ord;
}

static ulong b;
static ulong ret;

static uint modPow(uint bb, ulong e, uint m) {
    b = bb;
    ret = 1;
    while (e > 0) {
        if (e % 2 == 1) {
            ret = (ret * b) % m;
        }
        e = e >> 1;
        b = (b * b) % m;
    }
    return (uint)ret;
}
I ran a test generating 100,000 floats. I used the index as the seed for System.Random and as the x parameter of floatHash (y and z were 0).
System.Random: Min: 2.921559E-06 Max: 0.9999979 Repetitions: 0
floatHash MD5: Min: 7.011156E-06 Max: 0.9999931 Repetitions: 210 (values were returned twice)
getFloat RSA: Min: 1.547858E-06 Max: 0.9999989 Repetitions: 190

Big-Endian Conversion from a texture

I am trying to extract the height from a file like this:
http://visibleearth.nasa.gov/view.php?id=73934
The pixels are loaded into an Int32 array
private Int16[] heights;
private int Width, Height;

public TextureData(Texture2D t)
{
    Int32[] data = new Int32[t.Width * t.Height];
    t.GetData<Int32>(data);
    Width = t.Width;
    Height = t.Height;
    t.Dispose();
    heights = new Int16[t.Width * t.Height];
    for (int i = 0; i < data.Length; ++i)
    {
        heights[i] = ReverseBytes(data[i]);
    }
}

// reverse byte order (16-bit)
public static Int16 ReverseBytes(Int32 value)
{
    return (Int16)((value << 8) | (value >> 8));
}
I don't know why, but the heights are not correct...
I think the big-endian conversion is wrong; can you help me please?
This is the result; the heights are higher than expected...
http://i.imgur.com/FukdmLF.png
EDIT:
public static int ReverseBytes(int value)
{
    int sign = (value & 0x8000) >> 15;
    int msb = (value & 0x7F) >> 7;
    int lsb = (value & 0xFF) << 8;
    return (msb | lsb | sign);
}
is this ok? I don't know why but it is still wrong...
int refers to a 32-bit signed integer, but your byte-reverser is written for a 16-bit signed integer, so it will only work for positive values up to 32767. If you have any values higher than that, you will need to shift and then mask one byte at a time before ORing them together.
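For example, a minimal sketch of that mask-then-shift approach (my own illustration, not the poster's code), assuming the 16-bit sample lives in the two low-order bytes of the pixel value:

public static Int16 ReverseBytes16(Int32 value)
{
    int lo = value & 0xFF;          // low byte of the 16-bit sample
    int hi = (value >> 8) & 0xFF;   // high byte of the 16-bit sample
    return (Int16)((lo << 8) | hi); // swap them
}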

Possible data conversion issue (char to unsigned char): a software/firmware CRC32 interaction issue

My current issue is that I am computing a CRC32 hash in software and then checking it in the firmware; however, when I compute the hash in firmware it is double what it is supposed to be.
Software (written in C#):
public string SCRC(string input)
{
    //Calculate CRC-32
    Crc32 crc32 = new Crc32();
    string hash = "";
    byte[] convert = Encoding.ASCII.GetBytes(input);
    MemoryStream ms = new MemoryStream(System.Text.Encoding.Default.GetBytes(input));
    foreach (byte b in crc32.ComputeHash(ms))
        hash += b.ToString("x2").ToLower();
    return hash;
}
Firmware functions used (written in C):
unsigned long chksum_crc32 (unsigned char *block, unsigned int length)
{
    register unsigned long crc;
    unsigned long i;

    crc = 0xFFFFFFFF;
    for (i = 0; i < length; i++)
    {
        crc = ((crc >> 8) & 0x00FFFFFF) ^ crc_tab[(crc ^ *block++) & 0xFF];
    }
    return (crc ^ 0xFFFFFFFF);
}

/* chksum_crc32gentab() -- to a global crc_tab[256], this one will
 * calculate the crcTable for crc32-checksums.
 * it is generated to the polynom [..]
 */
void chksum_crc32gentab ()
{
    unsigned long crc, poly;
    int i, j;

    poly = 0xEDB88320L;
    for (i = 0; i < 256; i++)
    {
        crc = i;
        for (j = 8; j > 0; j--)
        {
            if (crc & 1)
            {
                crc = (crc >> 1) ^ poly;
            }
            else
            {
                crc >>= 1;
            }
        }
        crc_tab[i] = crc;
    }
}
Firmware code where the functions above are called (written in C):
//CommandPtr should now be pointing to the rest of the command
chksum_crc32gentab();
HardCRC = chksum_crc32( (unsigned)CommandPtr, strlen(CommandPtr));
printf("Hardware CRC val is %lu\n", HardCRC);
Note: CommandPtr refers to the same data as the "string input" parameter in the software method.
Does anyone have any idea why I could be getting approximately double the value computed in the software? That is, HardCRC is double what it's supposed to be; I am guessing it has something to do with my unsigned char cast.

Software Perlin noise implementation

I have written a 2D Perlin noise implementation based on information from here, here, here, and here. However, the output looks like this.
public static double Perlin(double X, double XScale, double Y, double YScale, double Persistance, double Octaves) {
    double total = 0.0;
    for (int i = 0; i < Octaves; i++) {
        int frq = (int) Math.Pow(2, i);
        int amp = (int) Math.Pow(Persistance, i);
        total += InterpolatedSmoothNoise((X / XScale) * frq, (Y / YScale) * frq) * amp;
    }
    return total;
}

private static double InterpolatedSmoothNoise (double X, double Y) {
    int ix = (int) Math.Floor(X);
    double fx = X - ix;
    int iy = (int) Math.Floor(Y);
    double fy = Y - iy;

    double v1 = SmoothPerlin(ix, iy);         // --
    double v2 = SmoothPerlin(ix + 1, iy);     // +-
    double v3 = SmoothPerlin(ix, iy + 1);     // -+
    double v4 = SmoothPerlin(ix + 1, iy + 1); // ++

    double i1 = Interpolate(v1, v2, fx);
    double i2 = Interpolate(v3, v4, fx);
    return Interpolate(i1, i2, fy);
}

private static double SmoothPerlin (int X, int Y) {
    double sides = (Noise(X - 1, Y, Z) + Noise(X + 1, Y, Z) + Noise(X, Y - 1, Z) + Noise(X, Y + 1, Z) + Noise(X, Y, Z - 1) + Noise(X, Y, Z + 1)) / 12.0;
    double center = Noise(X, Y, Z) / 2.0;
    return sides + center;
}

private static double Noise (int X, int Y) {
    uint m_z = (uint) (36969 * (X & 65535) + (X >> 16));
    uint m_w = (uint) (18000 * (Y & 65535) + (Y >> 16));
    uint ou = (m_z << 16) + m_w;
    return ((ou + 1.0) * 2.328306435454494e-10);
}
Any input on what is wrong is appreciated.
EDIT: I found a way to solve this: I used an array of doubles generated at load time. Any way to implement a good random number generator is still appreciated, though.
I suppose this effect is due to your noise function (all other code looks ok).
The function
private static double Noise (int X, int Y) {
    uint m_z = (uint) (36969 * (X & 65535) + (X >> 16));
    uint m_w = (uint) (18000 * (Y & 65535) + (Y >> 16));
    uint ou = (m_z << 16) + m_w;
    return ((ou + 1.0) * 2.328306435454494e-10);
}
isn't very noisy but strongly correlated with your input X and Y variables. Try using any other pseudo-random function which you seed with your input.
I reconstructed your code in C, and following the suggestion from @Howard, this code is working well for me. I am not sure which Interpolate function you used; I used linear interpolation in my code. I used the following noise function:
static double Noise2(int x, int y) {
    int n = x + y * 57;
    n = (n << 13) ^ n;
    return (1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0);
}
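For reference, a minimal sketch of the linear interpolation I mean (my own illustration, written in C# to match the question's code):

static double Interpolate(double a, double b, double t) {
    return a + (b - a) * t;
}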

Does anyone have CRC128 and CRC256 Code in C++ and C#?

I am learning and trying to understand the ideas behind CRC. I can't find CRC128 and CRC256 code anywhere. If any of you have C++ or C# code for them, please share it with me, and also provide links to relevant websites. I am a newbie and can't code it myself, nor convert the theory and mathematics into code, so I am asking for your help. It would be very kind of you to provide proper, simple code. If anyone provides these, please also provide the CRC table generator functions. Thank you.
I agree with you, except that the accidental collision rate is higher than 1 in 2^32 or 1 in 2^64 for 32-bit and 64-bit CRCs respectively.
I wrote an app that kept track of items by their CRC values. We needed to track potentially millions of items, and we started with a CRC32, which in real-world practice has a collision rate of around 1 in 2^16, which was an unpleasant surprise. We then re-coded to use a CRC64, which had a real-world collision rate of about 1 in 2^23. We tested this after the unpleasant surprise of the 32-bit one we started with and accepted the small error rate of the 64-bit one.
I can't really explain the statistics behind the expected collision rate, but it makes sense that you would experience a collision much sooner than the bit width suggests (this is the birthday paradox: with a b-bit hash, you expect the first collision after roughly 2^(b/2) items). Just like a hashtable: some hash buckets are empty and others have more than one entry...
Even for a 256-bit CRC the first two CRCs could be the same; it would be almost incredible, but it is possible.
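As a rough sketch of that birthday-bound estimate (my own illustration, in C#): the approximate probability of at least one accidental collision among n items hashed into a b-bit space.

static double CollisionProbability(double n, int bits)
{
    double space = Math.Pow(2.0, bits);
    // Standard birthday-problem approximation: 1 - e^(-n(n-1) / (2 * 2^b)).
    return 1.0 - Math.Exp(-n * (n - 1.0) / (2.0 * space));
}

// Example: 10 million items against a 32-bit hash gives a probability of essentially 1,
// while against a 64-bit hash it is still only about 0.0000027.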
Though CRC-128 and CRC-256 were defined, I don't know of anyone who actually uses them.
Most of the time, developers who think they want a CRC should really be using a cryptographic hash function, which has succeeded CRCs for many applications. It would be a rare case indeed where CRC-128 or CRC-256 would be a superior choice to even the broken MD5, much less the SHA-2 family.
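For example, a minimal C# sketch of that suggestion (my own illustration; Sha256Hex is a hypothetical helper, and System.Security.Cryptography and System.Text are assumed to be in scope):

static string Sha256Hex(string input)
{
    using (var sha = SHA256.Create())
    {
        // Hash the UTF-8 bytes and return the digest as lowercase hex.
        byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(input));
        return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
    }
}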
Here is a Java class I wrote recently for playing with CRCs. Beware that changing the bit order is implemented only for bitwise computation.
/**
 * A CRC algorithm for computing check values.
 */
public class Crc
{
    public static final Crc CRC_16_CCITT =
        new Crc(16, 0x1021, 0xffff, 0xffff, true);

    public static final Crc CRC_32 =
        new Crc(32, 0x04c11db7, 0xffffffffL, 0xffffffffL, true);

    private final int _width;
    private final long _polynomial;
    private final long _mask;
    private final long _highBitMask;
    private final long _preset;
    private final long _postComplementMask;
    private final boolean _msbFirstBitOrder;
    private final int _shift;
    private final long[] _crcs;

    /**
     * Constructs a CRC specification.
     *
     * @param width
     * @param polynomial
     */
    public Crc(
        int width,
        long polynomial)
    {
        this(width, polynomial, 0, 0, true);
    }

    /**
     * Constructs a CRC specification.
     *
     * @param width
     * @param polynomial
     * @param preset
     * @param postComplementMask
     * @param msbFirstBitOrder
     */
    public Crc(
        int width,
        long polynomial,
        long preset,
        long postComplementMask,
        boolean msbFirstBitOrder)
    {
        super();
        _width = width;
        _polynomial = polynomial;
        _mask = (1L << width) - 1;
        _highBitMask = (1L << (width - 1));
        _preset = preset;
        _postComplementMask = postComplementMask;
        _msbFirstBitOrder = msbFirstBitOrder;
        _shift = _width - 8;
        _crcs = new long[256];
        for (int i = 0; i < 256; i++)
        {
            _crcs[i] = crcForByte(i);
        }
    }

    /**
     * Gets the width.
     *
     * @return The width.
     */
    public int getWidth()
    {
        return _width;
    }

    /**
     * Gets the polynomial.
     *
     * @return The polynomial.
     */
    public long getPolynomial()
    {
        return _polynomial;
    }

    /**
     * Gets the mask.
     *
     * @return The mask.
     */
    public long getMask()
    {
        return _mask;
    }

    /**
     * Gets the preset.
     *
     * @return The preset.
     */
    public long getPreset()
    {
        return _preset;
    }

    /**
     * Gets the post-complement mask.
     *
     * @return The post-complement mask.
     */
    public long getPostComplementMask()
    {
        return _postComplementMask;
    }

    /**
     * @return True if this CRC uses MSB first bit order.
     */
    public boolean isMsbFirstBitOrder()
    {
        return _msbFirstBitOrder;
    }

    public long computeBitwise(byte[] message)
    {
        long result = _preset;
        for (int i = 0; i < message.length; i++)
        {
            for (int j = 0; j < 8; j++)
            {
                final int bitIndex = _msbFirstBitOrder ? 7 - j : j;
                final boolean messageBit = (message[i] & (1 << bitIndex)) != 0;
                final boolean crcBit = (result & _highBitMask) != 0;
                result <<= 1;
                if (messageBit ^ crcBit)
                {
                    result ^= _polynomial;
                }
                result &= _mask;
            }
        }
        return result ^ _postComplementMask;
    }

    public long compute(byte[] message)
    {
        long result = _preset;
        for (int i = 0; i < message.length; i++)
        {
            final int b = (int) (message[i] ^ (result >>> _shift)) & 0xff;
            result = ((result << 8) ^ _crcs[b]) & _mask;
        }
        return result ^ _postComplementMask;
    }

    private long crcForByte(int b)
    {
        long result1 = (b & 0xff) << _shift;
        for (int j = 0; j < 8; j++)
        {
            final boolean crcBit = (result1 & (1L << (_width - 1))) != 0;
            result1 <<= 1;
            if (crcBit)
            {
                result1 ^= _polynomial;
            }
            result1 &= _mask;
        }
        return result1;
    }

    public String crcTable()
    {
        final int digits = (_width + 3) / 4;
        final int itemsPerLine = (digits + 4) * 8 < 72 ? 8 : 4;
        final String format = "0x%0" + digits + "x, ";
        final StringBuilder builder = new StringBuilder();
        builder.append("{\n");
        for (int i = 0; i < _crcs.length; i += itemsPerLine)
        {
            builder.append(" ");
            for (int j = i; j < i + itemsPerLine; j++)
            {
                builder.append(String.format(format, _crcs[j]));
            }
            builder.append("\n");
        }
        builder.append("}\n");
        return builder.toString();
    }
}
CRC-128 and CRC-256 only make sense if the three following points are all true:
You are CPU constrained to the point where a crypto hash would significantly slow you down
Accidental collisions must statistically never happen; 1 in 2^64 is still too high
On the other hand, deliberate collisions are not a problem
A typical case where 2 and 3 can be true together is if an accidental collision would create a data loss that only affects the sender of the data, and not the platform.
