I have the following code, which receives bytes and presumably has to convert them to floats and display the converted values:
public float DecodeFloat(byte[] data)
{
    float x = data[3] | data[2] << 8 | data[1] << 16 | data[0] << 24;
    return x;
}
// receive thread
private void ReceiveData()
{
    int count = 0;
    IPEndPoint remoteIP = new IPEndPoint(IPAddress.Parse("10.0.2.213"), port);
    client = new UdpClient(remoteIP);
    while (true)
    {
        try
        {
            IPEndPoint anyIP = new IPEndPoint(IPAddress.Any, 0);
            byte[] data = client.Receive(ref anyIP);
            Vector3 vec, rot;
            float x = DecodeFloat(data);
            float y = DecodeFloat(data + 4);
            float z = DecodeFloat(data + 8);
            float alpha = DecodeFloat(data + 12);
            float theta = DecodeFloat(data + 16);
            float phi = DecodeFloat(data + 20);
            vec.Set(x, y, z);
            rot.Set(alpha, theta, phi);
            print(">> " + x.ToString() + ", " + y.ToString() + ", " + z.ToString() + ", "
                + alpha.ToString() + ", " + theta.ToString() + ", " + phi.ToString());
            // latest UDP packet
            lastReceivedUDPPacket = x.ToString() + " Packet#: " + count.ToString();
            count = count + 1;
        }
Can anyone point me in the right direction, please?
Given 4 bytes, you would normally only "shift" (<<) if it is integer data. The code in the question basically reads the data as an int (via the shifts), then casts that int to a float, which is almost certainly not what was intended.
Since you want to interpret it as a float, you should probably use:
float val = BitConverter.ToSingle(data, offset);
where offset is the 0, 4, 8, 12, etc. shown in your data + 4, data + 8, and so on. This treats the 4 bytes (relative to offset) as raw IEEE 754 floating-point data. For example:
float x = BitConverter.ToSingle(data, 0);
float y = BitConverter.ToSingle(data, 4);
float z = BitConverter.ToSingle(data, 8);
float alpha = BitConverter.ToSingle(data, 12);
float theta = BitConverter.ToSingle(data, 16);
float phi = BitConverter.ToSingle(data, 20);
Note that this makes assumptions about "endianness" - see BitConverter.IsLittleEndian.
Edit: from comments, it sounds like the data is other-endian; try:
public static float ReadSingleBigEndian(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
    {
        byte tmp = data[offset];
        data[offset] = data[offset + 3];
        data[offset + 3] = tmp;
        tmp = data[offset + 1];
        data[offset + 1] = data[offset + 2];
        data[offset + 2] = tmp;
    }
    return BitConverter.ToSingle(data, offset);
}
public static float ReadSingleLittleEndian(byte[] data, int offset)
{
    if (!BitConverter.IsLittleEndian)
    {
        byte tmp = data[offset];
        data[offset] = data[offset + 3];
        data[offset + 3] = tmp;
        tmp = data[offset + 1];
        data[offset + 1] = data[offset + 2];
        data[offset + 2] = tmp;
    }
    return BitConverter.ToSingle(data, offset);
}
...
float x = ReadSingleBigEndian(data, 0);
float y = ReadSingleBigEndian(data, 4);
float z = ReadSingleBigEndian(data, 8);
float alpha = ReadSingleBigEndian(data, 12);
float theta = ReadSingleBigEndian(data, 16);
float phi = ReadSingleBigEndian(data, 20);
If you need to optimize this massively, there are also things you can do with unsafe code to build an int from shifting (picking the endianness when shifting), then do an unsafe coerce to get the int as a float; for example (noting that I haven't checked endianness here - it might misbehave on a big-endian machine, but most people don't have those):
public static unsafe float ReadSingleBigEndian(byte[] data, int offset)
{
    int i = (data[offset++] << 24) | (data[offset++] << 16) |
            (data[offset++] << 8) | data[offset];
    return *(float*)&i;
}
public static unsafe float ReadSingleLittleEndian(byte[] data, int offset)
{
    int i = (data[offset++]) | (data[offset++] << 8) |
            (data[offset++] << 16) | (data[offset] << 24);
    return *(float*)&i;
}
Or crazier, and CPU-safer:
public static float ReadSingleBigEndian(byte[] data, int offset)
{
    return ReadSingle(data, offset, false);
}
public static float ReadSingleLittleEndian(byte[] data, int offset)
{
    return ReadSingle(data, offset, true);
}
private static unsafe float ReadSingle(byte[] data, int offset, bool littleEndian)
{
    fixed (byte* ptr = &data[offset])
    {
        if (littleEndian != BitConverter.IsLittleEndian)
        { // other-endian; swap
            byte b = ptr[0];
            ptr[0] = ptr[3];
            ptr[3] = b;
            b = ptr[1];
            ptr[1] = ptr[2];
            ptr[2] = b;
        }
        return *(float*)ptr;
    }
}
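As a side note for readers on newer frameworks: .NET 5 and later expose the same conversions without unsafe code through System.Buffers.Binary.BinaryPrimitives. A minimal sketch, assuming .NET 5+ and the 24-byte payload from the question:

using System;
using System.Buffers.Binary;

// Reads the six big-endian floats (x, y, z, alpha, theta, phi) of the payload.
static void DecodePayload(byte[] data)
{
    Span<byte> span = data;
    float x = BinaryPrimitives.ReadSingleBigEndian(span.Slice(0, 4));
    float y = BinaryPrimitives.ReadSingleBigEndian(span.Slice(4, 4));
    float z = BinaryPrimitives.ReadSingleBigEndian(span.Slice(8, 4));
    float alpha = BinaryPrimitives.ReadSingleBigEndian(span.Slice(12, 4));
    float theta = BinaryPrimitives.ReadSingleBigEndian(span.Slice(16, 4));
    float phi = BinaryPrimitives.ReadSingleBigEndian(span.Slice(20, 4));
    Console.WriteLine($"{x}, {y}, {z}, {alpha}, {theta}, {phi}");
}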
As the title says, I have been trying to pack 4 full 0-255 bytes into 1 integer in C# using bit shifting. I'm trying to compress some data; it currently uses 40 bytes, but in theory I only need 12 bytes, if I can pack all my data into 3 integers.
Currently my data is:
float3 pos; // Position Relative to Object
float4 color; // RGB is Compressed into X, Y is MatID, Z and W are unused
float3 normal; // A simple Normal -1 to 1 range
But in theory I can compress it to:
int pos; // X, Y, Z, MatID - These are never > 200 nor negative
int color; // R, G, B, Unused Fourth Byte
int normal; // X, Y, Z: [0, 255] maps to [-1, 1] with 128 being 0; fourth byte unused. Should be plenty accurate for my needs
So my question is: how would I go about doing this? I'm reasonably new to bit shifting and haven't managed to get much working.
If I understand it right, you need to store 4 values in 4 bytes (one value per byte) and then extract the individual values with bit-shift operations.
You can do it like this:
using System;
public class Program
{
    public static void Main()
    {
        uint pos = 0x4a5b6c7d;
        // x is first byte, y is second byte, z is third byte, matId is fourth byte
        uint x = pos & 0xff;
        uint y = (pos & 0xff00) >> 8;
        uint z = (pos & 0xff0000) >> 16;
        uint matId = (pos & 0xff000000) >> 24;
        Console.WriteLine(x + " " + y + " " + z + " " + matId);
        Console.WriteLine((0x7d) + " " + (0x6c) + " " + (0x5b) + " " + (0x4a));
    }
}
x will be equal to the result of 0x4a5b6c7d & 0x000000ff = 0x7d.
y will be equal to the result of 0x4a5b6c7d & 0x0000ff00, right-shifted by 8 bits = 0x6c.
Similar for z and matId.
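As a side note, shifting first and masking afterwards gives the same results without building the wider masks (a small sketch of the equivalent extraction):

uint pos = 0x4a5b6c7d;
uint x = pos & 0xff;          // 0x7d
uint y = (pos >> 8) & 0xff;   // 0x6c
uint z = (pos >> 16) & 0xff;  // 0x5b
uint matId = pos >> 24;       // 0x4a (top byte, no mask needed)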
Edit
For packing, you need to use the | operator:
Left-shift the fourth value by 24, say a
Left-shift the third value by 16, say b
Left-shift the second value by 8, say c
Nothing for the first value, say d
Do a binary OR of all 4 and store it in an int: int packed = a | b | c | d
using System;
public class Program
{
    static void Unpack(uint pos)
    {
        // x is first byte, y is second byte, z is third byte, matId is fourth byte
        uint x = pos & 0xff;
        uint y = (pos & 0xff00) >> 8;
        uint z = (pos & 0xff0000) >> 16;
        uint matId = (pos & 0xff000000) >> 24;
        Console.WriteLine(x + " " + y + " " + z + " " + matId);
    }
    static uint Pack(int x, int y, int z, int matId)
    {
        // Cast to uint so the shifts and ORs happen on unsigned values
        uint newX = (uint)x, newY = (uint)y, newZ = (uint)z, newMatId = (uint)matId;
        uint packed = (newMatId << 24) | (newZ << 16) | (newY << 8) | newX;
        Console.WriteLine(packed);
        return packed;
    }
    public static void Main()
    {
        uint packedInt = Pack(10, 20, 30, 40);
        Unpack(packedInt);
    }
}
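For the normal field described in the question ([0, 255] mapped to [-1, 1] with 128 being 0), here is a hedged sketch of the remapping before packing; the helper names are mine, not from the original post:

// Hypothetical helpers: map a [-1, 1] normal component to [0, 255] and back,
// with 128 sitting approximately at 0.
static byte NormalToByte(float n) => (byte)Math.Round((n + 1f) * 127.5f);
static float ByteToNormal(byte b) => b / 127.5f - 1f;

// Packing a normal (nx, ny, nz) with the Pack method above:
// uint packedNormal = Pack(NormalToByte(nx), NormalToByte(ny), NormalToByte(nz), 0);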
I have an RGB image (RGB 4:4:4 colorspace, 24 bits per pixel), captured from a camera. I use the Gorgon 2D library (built on SharpDX) to display this image as a texture, so I have to convert it to ARGB. I use this code (not my code) to convert from the RGB camera image to RGBA.
[StructLayout(LayoutKind.Sequential)]
public struct RGBA
{
    public byte r;
    public byte g;
    public byte b;
    public byte a;
}
[StructLayout(LayoutKind.Sequential)]
public struct RGB
{
    public byte r;
    public byte g;
    public byte b;
}
unsafe void internalCvt(long pixelCount, byte* rgbP, byte* rgbaP)
{
    for (long i = 0, offsetRgb = 0; i < pixelCount; i += 4, offsetRgb += 12)
    {
        uint c1 = *(uint*)(rgbP + offsetRgb);
        uint c2 = *(uint*)(rgbP + offsetRgb + 3);
        uint c3 = *(uint*)(rgbP + offsetRgb + 6);
        uint c4 = *(uint*)(rgbP + offsetRgb + 9);
        ((uint*)rgbaP)[i] = c1 | 0xff000000;
        ((uint*)rgbaP)[i + 1] = c2 | 0xff000000;
        ((uint*)rgbaP)[i + 2] = c3 | 0xff000000;
        ((uint*)rgbaP)[i + 3] = c4 | 0xff000000;
    }
}
public unsafe void RGB2RGBA(int pixelCount, byte[] rgbData, byte[] rgbaData)
{
    if ((pixelCount & 3) != 0) throw new ArgumentException();
    fixed (byte* rgbP = &rgbData[0], rgbaP = &rgbaData[0])
    {
        internalCvt(pixelCount, rgbP, rgbaP);
    }
}
Then convert RGB to RGBA like this:
byte[] rgb = new byte[800 * 600 * 3]; // Stored data
byte[] rgba = new byte[800 * 600 * 4];
RGB2RGBA(800 * 600, rgb, rgba);
And I use rgba as the data for the Gorgon texture:
unsafe
{
    fixed (void* rgbaPtr = rgba)
    {
        var buff = new GorgonNativeBuffer<byte>(rgbaPtr, 800 * 600 * 4);
        GorgonImageBuffer imb = new GorgonImageBuffer(buff, 800, 600, BufferFormat.R8G8B8A8_UNorm);
        // Set texture data on the GorgonTexture2D
        Texture.SetData(imb, new SharpDX.Rectangle(0, 0, 800, 600), 0, 0, CopyMode.NoOverwrite);
    }
}
But the color of the texture image does not match the color of the camera image, so I think I have to convert the camera image to ARGB (not RGBA) so that it displays correctly with the Gorgon texture, but I don't know how to do it with the code above. Can you please point me to some hints?
Thanks!
Below are the links to the Gorgon library types I used in the code above:
GorgonTexture2D
GorgonNativeBuffer
GorgonImageBuffer
The byte order of the target is, in memory, R G B A. The byte order of the source is, in memory, B G R. So in addition to expanding every 3 bytes to 4 bytes and putting FF in the new A channel, the R and B channel need to swap position. For example,
unsafe void internalCvt(long pixelCount, byte* rgbP, uint* rgbaP)
{
    for (long i = 0, offsetRgb = 0; i < pixelCount; i += 4, offsetRgb += 12)
    {
        uint c1 = *(uint*)(rgbP + offsetRgb);
        uint c2 = *(uint*)(rgbP + offsetRgb + 3);
        uint c3 = *(uint*)(rgbP + offsetRgb + 6);
        uint c4 = *(uint*)(rgbP + offsetRgb + 9);
        // swap R and B
        c1 = (c1 << 16) | (c1 & 0xFF00) | ((c1 >> 16) & 0xFF);
        c2 = (c2 << 16) | (c2 & 0xFF00) | ((c2 >> 16) & 0xFF);
        c3 = (c3 << 16) | (c3 & 0xFF00) | ((c3 >> 16) & 0xFF);
        c4 = (c4 << 16) | (c4 & 0xFF00) | ((c4 >> 16) & 0xFF);
        // set alpha to FF
        rgbaP[i] = c1 | 0xff000000;
        rgbaP[i + 1] = c2 | 0xff000000;
        rgbaP[i + 2] = c3 | 0xff000000;
        rgbaP[i + 3] = c4 | 0xff000000;
    }
}
Or a simpler version that processes one pixel per iteration and doesn't use unsafe (this also avoids the 4-byte reads above, whose final read touches one byte past the end of the source buffer on the last pixel):
void internalCvt(long pixelCount, byte[] bgr, byte[] rgba)
{
    long byteCount = pixelCount * 4;
    for (long i = 0, offsetBgr = 0; i < byteCount; i += 4, offsetBgr += 3)
    {
        // R
        rgba[i] = bgr[offsetBgr + 2];
        // G
        rgba[i + 1] = bgr[offsetBgr + 1];
        // B
        rgba[i + 2] = bgr[offsetBgr];
        // A
        rgba[i + 3] = 0xFF;
    }
}
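Usage would mirror the call from the question (a sketch using the question's 800x600 buffers):

byte[] bgr = new byte[800 * 600 * 3];  // camera frame, B G R per pixel
byte[] rgba = new byte[800 * 600 * 4]; // target buffer, R G B A per pixel
internalCvt(800 * 600, bgr, rgba);     // expand and swap into RGBA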
I have this function in a Java program.
private static byte[] converToByte(String s)
{
    byte[] output = new byte[s.length() / 2];
    for (int i = 0, j = 0; i < s.length(); i += 2, j++)
    {
        output[j] = (byte)(Integer.parseInt(s.substring(i, i + 2), 16));
    }
    return output;
}
I am trying to create the same thing in C#, but I'm having trouble. I tried this:
output[j] = (byte)(Int16.Parse(str.Substring(i, i + 2)));
But after a couple of iterations I got a System.OverflowException. What would be the equivalent instruction in C#?
Thanks.
private static sbyte[] converToByte(string s)
{
    sbyte[] output = new sbyte[s.Length / 2];
    for (int i = 0, j = 0; i < s.Length; i += 2, j++)
    {
        output[j] = (sbyte)(Convert.ToInt32(s.Substring(i, 2), 16));
    }
    return output;
}
You are using the wrong data type in your line:
output[j] = (byte)(Int16.Parse(str.Substring(i, i + 2)));
Short Name   .NET Class   Type               Width (bits)   Range
byte         Byte         Unsigned integer   8              0 to 255
short        Int16        Signed integer     16             -32,768 to 32,767
You are getting an overflow exception because an Int16 (short) is far too big to fit into a byte.
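A minimal sketch of a direct fix that keeps byte as the element type, assuming the input string is hex as in the Java original (the method name here is mine):

using System.Globalization;

private static byte[] ConvertToBytes(string s)
{
    byte[] output = new byte[s.Length / 2];
    for (int i = 0, j = 0; i < s.Length; i += 2, j++)
    {
        // Two hex characters per byte; values 00..FF always fit a byte,
        // so no overflow is possible.
        output[j] = byte.Parse(s.Substring(i, 2), NumberStyles.HexNumber);
    }
    return output;
}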
After struggling with this problem myself, I realised the real problem is that Java's substring method is:
substring(int beginIndex, int endIndex)
While C#'s implementation takes:
Substring(int startIndex, int length)
This means that in C# the same code grabs larger and larger chunks of characters, causing an overflow.
@Dave Doknjas was on the right track, but you can still convert to a byte once you use the smaller chunk size:
output[j] = Convert.ToByte(str.Substring(i, 2), 16);
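To illustrate the difference with a quick example (hypothetical input, not from the original post):

// Java: "1A2B".substring(2, 4)  -> "2B"    (second argument is an exclusive end index)
// C#:   "1A2B".Substring(2, 4)  -> throws  (second argument is a length; only 2 chars remain)
// C#:   "1A2B".Substring(2, 2)  -> "2B"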
I'm currently trying to do pitch shifting of a wave file using this algorithm:
https://sites.google.com/site/mikescoderama/pitch-shifting
Here is my code, which uses the above implementation, but with no luck. The output wave file seems to be corrupted or invalid.
The code is quite simple, except for the pitch shift algorithm :)
It loads a wave file, reads the wave file data, and puts it in a byte[] array.
Then it "normalizes" the byte data into -1.0f to 1.0f format (as requested by the creator of the pitch shift algorithm).
It applies the pitch shift algorithm and then converts the normalized data back into a byte[] array.
Finally it saves a wave file with the same header as the original wave file and the pitch-shifted data.
Am I missing something?
static void Main(string[] args)
{
    // Read the wave file data bytes
    byte[] waveheader = null;
    byte[] wavedata = null;
    using (BinaryReader reader = new BinaryReader(File.OpenRead("sound.wav")))
    {
        // Read first 44 bytes (header)
        waveheader = reader.ReadBytes(44);
        // Read data
        wavedata = reader.ReadBytes((int)reader.BaseStream.Length - 44);
    }
    short nChannels = BitConverter.ToInt16(waveheader, 22);
    int sampleRate = BitConverter.ToInt32(waveheader, 24);
    short bitRate = BitConverter.ToInt16(waveheader, 34);
    // Normalized data store. Store values in the format -1.0 to 1.0
    float[] in_data = new float[wavedata.Length / 2];
    // Normalize wave data into -1.0 to 1.0 values
    using (BinaryReader reader = new BinaryReader(new MemoryStream(wavedata)))
    {
        for (int i = 0; i < in_data.Length; i++)
        {
            if (bitRate == 16)
                in_data[i] = reader.ReadInt16() / 32768f;
            if (bitRate == 8)
                in_data[i] = (reader.ReadByte() - 128) / 128f;
        }
    }
    //PitchShifter.PitchShift(1f, in_data.Length, (long)1024, (long)32, sampleRate, in_data);
    // Backup wave data
    byte[] copydata = new byte[wavedata.Length];
    Array.Copy(wavedata, copydata, wavedata.Length);
    // Revert data to byte format
    Array.Clear(wavedata, 0, wavedata.Length);
    using (BinaryWriter writer = new BinaryWriter(new MemoryStream(wavedata)))
    {
        for (int i = 0; i < in_data.Length; i++)
        {
            if (bitRate == 16)
                writer.Write((short)(in_data[i] * 32768f));
            if (bitRate == 8)
                writer.Write((byte)((in_data[i] * 128f) + 128));
        }
    }
    // Compare new wavedata with copydata
    if (wavedata.SequenceEqual(copydata))
    {
        Console.WriteLine("Data has no changes");
    }
    else
    {
        Console.WriteLine("Data has changed!");
    }
    // Save modified wavedata
    string targetFilePath = "sound_low.wav";
    if (File.Exists(targetFilePath))
        File.Delete(targetFilePath);
    using (BinaryWriter writer = new BinaryWriter(File.OpenWrite(targetFilePath)))
    {
        writer.Write(waveheader);
        writer.Write(wavedata);
    }
    Console.ReadLine();
}
The algorithm here works fine:
https://sites.google.com/site/mikescoderama/pitch-shifting
My mistake was in how I was reading the wave header and wave data. I post here the fully working code.
WARNING: this code works only for PCM 16-bit (stereo/mono) waves. It can easily be adapted to work with PCM 8-bit.
static void Main(string[] args)
{
    // Read header, data and channels as separated data
    // Normalized data stores. Store values in the format -1.0 to 1.0
    byte[] waveheader = null;
    byte[] wavedata = null;
    int sampleRate = 0;
    float[] in_data_l = null;
    float[] in_data_r = null;
    GetWaveData("sound.wav", out waveheader, out wavedata, out sampleRate, out in_data_l, out in_data_r);
    //
    // Apply Pitch Shifting
    //
    if (in_data_l != null)
        PitchShifter.PitchShift(2f, in_data_l.Length, (long)1024, (long)10, sampleRate, in_data_l);
    if (in_data_r != null)
        PitchShifter.PitchShift(2f, in_data_r.Length, (long)1024, (long)10, sampleRate, in_data_r);
    //
    // Time to save the processed data
    //
    // Backup wave data
    byte[] copydata = new byte[wavedata.Length];
    Array.Copy(wavedata, copydata, wavedata.Length);
    GetWaveData(in_data_l, in_data_r, ref wavedata);
    //
    // Check if data actually changed
    //
    bool noChanges = true;
    for (int i = 0; i < wavedata.Length; i++)
    {
        if (wavedata[i] != copydata[i])
        {
            noChanges = false;
            Console.WriteLine("Data has changed!");
            break;
        }
    }
    if (noChanges)
        Console.WriteLine("Data has no changes");
    // Save modified wavedata
    string targetFilePath = "sound_low.wav";
    if (File.Exists(targetFilePath))
        File.Delete(targetFilePath);
    using (BinaryWriter writer = new BinaryWriter(File.OpenWrite(targetFilePath)))
    {
        writer.Write(waveheader);
        writer.Write(wavedata);
    }
    Console.ReadLine();
}
// Returns left and right float arrays. 'right' will be null if sound is mono.
public static void GetWaveData(string filename, out byte[] header, out byte[] data, out int sampleRate, out float[] left, out float[] right)
{
    byte[] wav = File.ReadAllBytes(filename);
    // Determine if mono or stereo
    int channels = wav[22]; // Forget byte 23 as 99.999% of WAVs are 1 or 2 channels
    // Get sample rate
    sampleRate = BitConverter.ToInt32(wav, 24);
    int pos = 12;
    // Keep iterating until we find the data chunk (i.e. 64 61 74 61, i.e. 100 97 116 97 in decimal)
    while (!(wav[pos] == 100 && wav[pos + 1] == 97 && wav[pos + 2] == 116 && wav[pos + 3] == 97))
    {
        pos += 4;
        int chunkSize = wav[pos] + wav[pos + 1] * 256 + wav[pos + 2] * 65536 + wav[pos + 3] * 16777216;
        pos += 4 + chunkSize;
    }
    pos += 4;
    int subchunk2Size = BitConverter.ToInt32(wav, pos);
    pos += 4;
    // Pos is now positioned to start of actual sound data.
    int samples = subchunk2Size / 2; // 2 bytes per sample (16 bit sound mono)
    if (channels == 2)
        samples /= 2; // 4 bytes per sample (16 bit stereo)
    // Allocate memory (right will be null if only mono sound)
    left = new float[samples];
    if (channels == 2)
        right = new float[samples];
    else
        right = null;
    header = new byte[pos];
    Array.Copy(wav, header, pos);
    data = new byte[subchunk2Size];
    Array.Copy(wav, pos, data, 0, subchunk2Size);
    // Write to float array/s. Note: the loop bound is the absolute end of the
    // data chunk (start position + size), not just subchunk2Size.
    int dataEnd = pos + subchunk2Size;
    int i = 0;
    while (pos < dataEnd)
    {
        left[i] = BytesToNormalized_16(wav[pos], wav[pos + 1]);
        pos += 2;
        if (channels == 2)
        {
            right[i] = BytesToNormalized_16(wav[pos], wav[pos + 1]);
            pos += 2;
        }
        i++;
    }
}
// Return byte data from left and right float data. Ignore right when sound is mono
public static void GetWaveData(float[] left, float[] right, ref byte[] data)
{
    // Calculate k
    // This value will be used to convert float to Int16
    // We are not using Int16.Max to avoid peaks due to overflow conversions
    float k = (float)Int16.MaxValue / left.Select(x => Math.Abs(x)).Max();
    // Revert data to byte format
    Array.Clear(data, 0, data.Length);
    int dataLength = left.Length;
    using (BinaryWriter writer = new BinaryWriter(new MemoryStream(data)))
    {
        for (int i = 0; i < dataLength; i++)
        {
            byte byte1 = 0;
            byte byte2 = 0;
            NormalizedToBytes_16(left[i], k, out byte1, out byte2);
            writer.Write(byte1);
            writer.Write(byte2);
            if (right != null)
            {
                NormalizedToBytes_16(right[i], k, out byte1, out byte2);
                writer.Write(byte1);
                writer.Write(byte2);
            }
        }
    }
}
// Convert two bytes to one float in the range -1 to 1
static float BytesToNormalized_16(byte firstByte, byte secondByte)
{
    // convert two bytes to one short (little endian)
    short s = (short)((secondByte << 8) | firstByte);
    // convert to range from -1 to (just below) 1
    return s / 32768f;
}
// Convert a float value into two bytes (use k as conversion value and not Int16.MaxValue to avoid peaks)
static void NormalizedToBytes_16(float value, float k, out byte firstByte, out byte secondByte)
{
    short s = (short)(value * k);
    firstByte = (byte)(s & 0x00FF);
    secondByte = (byte)(s >> 8);
}
Sorry to revive this, but I tried that PitchShifter class and, while it works, I get crackles in the audio when pitching down (0.5f). Did you work out a way around that?
I have written a 2D Perlin noise implementation based on information from here, here, here, and here. However, the output looks like this.
public static double Perlin(double X, double XScale, double Y, double YScale, double Persistance, double Octaves) {
    double total = 0.0;
    for (int i = 0; i < Octaves; i++) {
        int frq = (int) Math.Pow(2, i);
        int amp = (int) Math.Pow(Persistance, i);
        total += InterpolatedSmoothNoise((X / XScale) * frq, (Y / YScale) * frq) * amp;
    }
    return total;
}
private static double InterpolatedSmoothNoise (double X, double Y) {
    int ix = (int) Math.Floor(X);
    double fx = X - ix;
    int iy = (int) Math.Floor(Y);
    double fy = Y - iy;
    double v1 = SmoothPerlin(ix, iy);         // --
    double v2 = SmoothPerlin(ix + 1, iy);     // +-
    double v3 = SmoothPerlin(ix, iy + 1);     // -+
    double v4 = SmoothPerlin(ix + 1, iy + 1); // ++
    double i1 = Interpolate(v1, v2, fx);
    double i2 = Interpolate(v3, v4, fx);
    return Interpolate(i1, i2, fy);
}
private static double SmoothPerlin (int X, int Y) {
    // 2D smoothing: average the four axis neighbours plus the centre (weights sum to 1)
    double sides = (Noise(X - 1, Y) + Noise(X + 1, Y) + Noise(X, Y - 1) + Noise(X, Y + 1)) / 8.0;
    double center = Noise(X, Y) / 2.0;
    return sides + center;
}
private static double Noise (int X, int Y) {
    uint m_z = (uint) (36969 * (X & 65535) + (X >> 16));
    uint m_w = (uint) (18000 * (Y & 65535) + (Y >> 16));
    uint ou = (m_z << 16) + m_w;
    return ((ou + 1.0) * 2.328306435454494e-10);
}
Any input on what is wrong is appreciated.
EDIT: I found a way to solve this: I used an array of doubles generated at load time. Any way to implement a good random number generator is still appreciated, though.
I suppose this effect is due to your noise function (all other code looks ok).
The function
private static double Noise (int X, int Y) {
    uint m_z = (uint) (36969 * (X & 65535) + (X >> 16));
    uint m_w = (uint) (18000 * (Y & 65535) + (Y >> 16));
    uint ou = (m_z << 16) + m_w;
    return ((ou + 1.0) * 2.328306435454494e-10);
}
isn't very noisy but is strongly correlated with your input X and Y variables. Try using any other pseudo-random function which you seed with your input.
I reconstructed your code in C, and following the suggestion from @Howard this code is working well for me. I am not sure which Interpolate function you used; I used linear interpolation in my code. I used the following noise function:
static double Noise2(int x, int y) {
    int n = x + y * 57;
    n = (n << 13) ^ n;
    return (1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0);
}