How to change doubles to a greyscale image? - C#

I've created a 2d array of 1024 x 1024 values ranging from -1 to 1, but I do not know how I am supposed to change this to a greyscale image.
What I have been doing is assigning a certain color to certain values, but this is not what I was going for.
What I have:
Specific ranges of values between -1 and 1 are mapped to distinct colors in a noncontinuous way (see the code snippet below)
What I want:
Values between -1 and 1 are mapped to greyscale varying uniformly from black at -1 to white at 1 or vice-versa
Code for the current version
private void button1_Click(object sender, EventArgs e)
{
    sw.Start();
    LibNoise.Perlin perlinMap = new LibNoise.Perlin();
    perlinMap.Lacunarity = lacunarity + 0.01d;
    perlinMap.NoiseQuality = LibNoise.NoiseQuality.High;
    perlinMap.OctaveCount = octaveCount;
    perlinMap.Persistence = persistence;
    perlinMap.Frequency = frequency;
    perlinMap.Seed = 1024;
    if (radioButton1.Checked)
        perlinMap.NoiseQuality = LibNoise.NoiseQuality.Low;
    else if (radioButton2.Checked)
        perlinMap.NoiseQuality = LibNoise.NoiseQuality.Standard;
    else if (radioButton3.Checked)
        perlinMap.NoiseQuality = LibNoise.NoiseQuality.High;
    double sample = trackBar6.Value * 10;
    double[,] perlinArray = new double[resolutieX, resolutieY];
    for (int x = 0; x < resolutieX; x++)
    {
        for (int y = 0; y < resolutieY; y++)
        {
            perlinArray[x, y] = perlinMap.GetValue(x / sample, y / sample, 1d);
        }
    }
    draw(perlinArray);
    textBox12.Text = sw.ElapsedMilliseconds.ToString() + "ms";
    sw.Reset();
}
public void draw(double[,] array)
{
    Color color = Color.DarkBlue;
    // Bitmap b = new Bitmap(1, 1);
    Color[,] colorArray = new Color[resolutieX, resolutieY];
    Bitmap afbeelding = new Bitmap(1024, 1024);
    // int tileSize = 1024 / resolutieY;
    for (int y = 1; y < resolutieY; y++)
    {
        for (int x = 1; x < resolutieX; x++)
        {
            colorArray[x, y] = array[x, y] <= 0.0 ? Color.DarkBlue :
                               array[x, y] <= 0.1 ? Color.Blue :
                               array[x, y] <= 0.2 ? Color.Beige :
                               array[x, y] <= 0.22 ? Color.LightGreen :
                               array[x, y] <= 0.40 ? Color.Green :
                               array[x, y] <= 0.75 ? Color.DarkGreen :
                               array[x, y] <= 0.8 ? Color.LightSlateGray :
                               array[x, y] <= 0.9 ? Color.Gray :
                               array[x, y] <= 1.0 ? Color.DarkSlateGray :
                               Color.DarkSlateGray;
            // colorArray[]
            // afbeelding.SetPixel(x, y, color);
        }
    }
    for (int y = 1; y < resolutieY; y++)
    {
        for (int x = 1; x < resolutieX; x++)
        {
            afbeelding.SetPixel(x, y, colorArray[x, y]);
        }
    }
    pictureBox1.Image = afbeelding;
}

Ohhh, lovely fractals... :)
As you are working with a 2D array of values from -1 to 1, you have to rescale them to 0..255. Your mapping function is
f(x) = 255 * (x + 1) / 2
Then all you have to do is create a 2D Color array using f(x):
foreach (double value in 2dVector)
{
    2dColorVector.Add(Color.FromArgb(255, f(value), f(value), f(value)));
}
This is pseudocode, but I think you can understand it clearly :)
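To make that concrete in C#, here is a minimal sketch of applying f(x) directly to the question's double[,] array (it assumes the array dimensions match the bitmap size and uses System.Drawing):
// Maps each [-1, 1] value to a grey level in [0, 255] and builds a grayscale bitmap.
public Bitmap ToGrayscaleBitmap(double[,] array, int width, int height)
{
    var bmp = new Bitmap(width, height);
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            // f(x) = 255 * (x + 1) / 2, clamped to guard against rounding
            int grey = (int)Math.Round(255 * (array[x, y] + 1) / 2);
            grey = Math.Max(0, Math.Min(255, grey));
            bmp.SetPixel(x, y, Color.FromArgb(grey, grey, grey));
        }
    }
    return bmp;
}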

You can get a grayscale image by setting the primary colors (RGB) to equal values.
One way of achieving this is to make a function that calculates the average of the RGB components of the colors you have, and then set each of the components to the average value.
Example - Average method:
Color ToGrayscale(Color c)
{
    int avg = (c.R + c.G + c.B) / 3;
    return Color.FromArgb(avg, avg, avg);
}
Then apply that function for each output pixel:
for (int y = 1; y < resolutieY; y++)
{
    for (int x = 1; x < resolutieX; x++)
    {
        afbeelding.SetPixel(x, y, ToGrayscale(colorArray[x, y]));
    }
}
Luminosity
A more sophisticated version of grayscaling is the Luminosity method. It also averages the values, but it is a weighted average that takes human perception into account.
The formula is: 0.2126*Red + 0.7152*Green + 0.0722*Blue.
You can see how the formula is weighted towards the colors the human eye is most sensitive to.
To see if this alternate approach looks better for your project, simply use the luminosity formula to calculate the average instead:
Color ToGrayscaleLuminosity(Color c)
{
    var avg = (int)Math.Round(0.2126 * c.R + 0.7152 * c.G + 0.0722 * c.B);
    return Color.FromArgb(avg, avg, avg);
}

Related

How to set the SetAlphamaps to one certain texture?

I want to change the texture of my terrain to a certain texture. I'm confused about how to set the splatmap data, can anyone help me out?
private void ChangeTexture(Vector3 WorldPos)
{
    print("changeTexture");
    int mapX = (int)(((WorldPos.x - terrainPos.x) / terrainData.size.x) * terrainData.alphamapWidth);
    int mapZ = (int)(((WorldPos.z - terrainPos.z) / terrainData.size.z) * terrainData.alphamapHeight);
    float[,,] splatmapData = terrainData.GetAlphamaps(3, 3, 15, 15);
    terrainData.SetAlphamaps(mapX, mapZ, splatmapData);
    terrain.Flush();
}
The data returned by GetAlphamaps
The returned array is three-dimensional - the first two dimensions represent x and y coordinates on the map, while the third denotes the splatmap texture to which the alphamap is applied.
Or in simple words a float[x, y, l] where
x = width in pixels
y = height in pixels
l = Texture-Layer
So let's say you want to set a certain texture at given pixel coordinates; what you do is:
set the weight for that texture's layer to 1
set all other layers' weights to 0
So let's say you have e.g. 3 layers and you want the second one (= index 1) to be the fully weighted texture:
float[,,] splatmapData = terrainData.GetAlphamaps(mapX, mapZ, 15, 15);
// Iterate over x-y coordinates within the array
for (var y = 0; y < 15; y++)
{
    for (var x = 0; x < 15; x++)
    {
        // Set first layer's weight to 0
        splatmapData[x, y, 0] = 0;
        // Set second layer's weight to 1
        splatmapData[x, y, 1] = 1;
        // Set third layer's weight to 0
        splatmapData[x, y, 2] = 0;
    }
}
terrainData.SetAlphamaps(mapX, mapZ, splatmapData);
I would then implement an enum for the layers, let's say
public enum TerrainLayer
{
    Default = 0,
    Green,
    Red
}
so you can simply pass the according layer index as a parameter - a bit more secure than passing in the int values themselves:
private void ChangeTexture(Vector3 worldPos, TerrainLayer toLayer)
{
    print("changeTexture");
    int mapX = (int)(((worldPos.x - terrainPos.x) / terrainData.size.x) * terrainData.alphamapWidth);
    int mapZ = (int)(((worldPos.z - terrainPos.z) / terrainData.size.z) * terrainData.alphamapHeight);
    float[,,] splatmapData = terrainData.GetAlphamaps(mapX, mapZ, 15, 15);
    for (var z = 0; z < 15; z++)
    {
        for (var x = 0; x < 15; x++)
        {
            // This of course would be more efficient if you did it only once,
            // e.g. in Awake, since the enum won't change at runtime
            var values = (TerrainLayer[])Enum.GetValues(typeof(TerrainLayer));
            // Iterate through the enum and
            for (var l = 0; l < values.Length; l++)
            {
                // set all layers to 0 except the toLayer
                splatmapData[x, z, l] = values[l] == toLayer ? 1 : 0;
            }
        }
    }
    terrainData.SetAlphamaps(mapX, mapZ, splatmapData);
    terrain.Flush();
}
Now you would simply call it e.g.
ChangeTexture(somePosition, TerrainLayer.Green);
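As the comment in the loop notes, Enum.GetValues could be cached once rather than queried for every cell; a minimal sketch of that idea (the field name is just for illustration):
// Cached once; the enum's values cannot change at runtime.
private TerrainLayer[] _terrainLayers;

private void Awake()
{
    _terrainLayers = (TerrainLayer[])Enum.GetValues(typeof(TerrainLayer));
}
ChangeTexture would then read _terrainLayers inside the loop instead of calling Enum.GetValues on every iteration.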

Algorithm to scale down YUV 4:2:2

I'm trying to write an efficient algorithm to scale down YUV 4:2:2 by a factor of 2, one which doesn't require a conversion to RGB (which is CPU intensive).
I've seen plenty of code on Stack Overflow for YUV to RGB conversion, but only an example of scaling for YUV 4:2:0 here, which I have based my code on. However, this produces an image which is effectively 3 columns of the same image with corrupt colours, so something is wrong with the algorithm when applied to 4:2:2.
Can anybody see what is wrong with this code?
public static byte[] HalveYuv(byte[] data, int imageWidth, int imageHeight)
{
    byte[] yuv = new byte[imageWidth / 2 * imageHeight / 2 * 3 / 2];
    int i = 0;
    for (int y = 0; y < imageHeight; y += 2)
    {
        for (int x = 0; x < imageWidth; x += 2)
        {
            yuv[i] = data[y * imageWidth + x];
            i++;
        }
    }
    for (int y = 0; y < imageHeight / 2; y += 2)
    {
        for (int x = 0; x < imageWidth; x += 4)
        {
            yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + x];
            i++;
            yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + (x + 1)];
            i++;
        }
    }
    return yuv;
}
A fast way to generate a low quality thumbnail would be to discard half of the data in each dimension.
We break the image into a 4x2 grid of pixels; each pair of pixels in the grid is represented by 4 bytes. In the down-scaled image, we take the color values for the first 2 pixels in the grid by copying the first 4 bytes, whilst discarding the other 12 bytes worth of data.
This scaling can be generalized to any power of 2 (1/2, 1/4, 1/8, ...), and the method is quick because it doesn't use any interpolation. However, it gives a lower-quality image which appears blocky; for better results consider some sampling approach.
public static byte[] FastResize(
    byte[] data,
    int imageWidth,
    int imageHeight,
    int scaleDownExponent)
{
    var scaleDownFactor = (uint)Math.Pow(2, scaleDownExponent);
    var outputImageWidth = imageWidth / scaleDownFactor;
    var outputImageHeight = imageHeight / scaleDownFactor;
    // 2 bytes per pixel.
    byte[] yuv = new byte[outputImageWidth * outputImageHeight * 2];
    var pos = 0;
    // Process every other line.
    for (uint pixelY = 0; pixelY < imageHeight; pixelY += scaleDownFactor)
    {
        // Work in blocks of 2 pixels, we discard the second.
        for (uint pixelX = 0; pixelX < imageWidth; pixelX += 2 * scaleDownFactor)
        {
            // Position of pixel bytes.
            var start = ((pixelY * imageWidth) + pixelX) * 2;
            yuv[pos] = data[start];
            yuv[pos + 1] = data[start + 1];
            yuv[pos + 2] = data[start + 2];
            yuv[pos + 3] = data[start + 3];
            pos += 4;
        }
    }
    return yuv;
}
I assume that the original data is in the following order (as it seems to be from your example code): first come the luminance (Y) values for the pixels of the image (size = imageWidth*imageHeight bytes). After that come the chrominance components UV, such that the values for a single pixel are given one after the other. This means that the total size of the original image is 3*size.
Now, 4:2:2 subsampling means that every other value of the horizontal chrominance components is discarded. This reduces the data to size + 0.5*size + 0.5*size = 2*size, i.e., luminance is kept completely and both chrominance components are halved. Therefore, the result image should be allocated as:
byte[] yuv = new byte[2*imageWidth*imageHeight];
As the first part of the image is copied in full the first loop becomes:
int i = 0;
for (int y = 0; y < imageHeight; y++)
{
    for (int x = 0; x < imageWidth; x++)
    {
        yuv[i] = data[y * imageWidth + x];
        i++;
    }
}
Because this just copies the beginning of data this can be simplified to
int size = imageHeight * imageWidth;
int i = 0;
for (; i < size; i++)
{
    yuv[i] = data[i];
}
Now, to copy the rest, we need to skip every other horizontal coordinate:
for (int y = 0; y < imageHeight; y++)
{
    for (int x = 0; x < imageWidth; x += 2) // +2 skips every other horizontal component
    {
        yuv[i] = data[size + y * 2 * imageWidth + 2 * x];
        i++;
        yuv[i] = data[size + y * 2 * imageWidth + 2 * x + 1];
        i++;
    }
}
The factor of two in the data-array index is needed because there are 2 bytes for each pixel (both chrominance components), so each "row" has 2*imageWidth bytes of data.
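Putting those pieces together, the whole method would look roughly like this; it simply assembles the snippets above under the layout assumption described, and the method name is only for illustration:
public static byte[] SubsampleChroma(byte[] data, int imageWidth, int imageHeight)
{
    int size = imageHeight * imageWidth;
    // Luminance kept completely, both chrominance components halved: 2 * size bytes.
    byte[] yuv = new byte[2 * imageWidth * imageHeight];

    // Copy the luminance plane unchanged.
    int i = 0;
    for (; i < size; i++)
    {
        yuv[i] = data[i];
    }

    // Copy the chrominance components of every other pixel on each row.
    for (int y = 0; y < imageHeight; y++)
    {
        for (int x = 0; x < imageWidth; x += 2)
        {
            yuv[i] = data[size + y * 2 * imageWidth + 2 * x];
            i++;
            yuv[i] = data[size + y * 2 * imageWidth + 2 * x + 1];
            i++;
        }
    }
    return yuv;
}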

Mandelbrot fractal error in the plotted function

Hello guys, I am trying to plot the Mandelbrot fractal but the result is very far from it, can you help me find out why?
Here is the code:
void Button1Click(object sender, EventArgs e)
{
    Graphics g = pbx.CreateGraphics();
    Pen p = new Pen(Color.Black);
    double xmin = -2.0;
    double ymin = -1.2;
    double xmax = 0;
    double ymax = 0;
    double x = xmin, y = ymin;
    int MAX = 1000;
    double stepY = Math.Abs(ymin - ymax) / (pbx.Height);
    double stepX = Math.Abs(xmin - xmax) / (pbx.Width);
    for (int i = 0; i < pbx.Width; i++)
    {
        y = ymin;
        for (int j = 0; j < pbx.Height; j++)
        {
            double rez = x;
            double imz = y;
            int iter = 0;
            while (rez * rez + imz * imz <= 4 && iter < MAX)
            {
                rez = rez * rez - imz * imz + x;
                imz = 2 * rez * imz + y;
                iter++;
            }
            if (iter == MAX)
            {
                p.Color = Color.Black;
                g.DrawRectangle(p, i, j, 1, 1);
            }
            else
            {
                p.Color = Color.White;
                g.DrawRectangle(p, i, j, 1, 1);
            }
            y += stepY;
        }
        x += stepX;
    }
}
Please help me, my mind is getting crushed thinking about how to get the beautiful Mandelbrot set...
And sorry if I made some mistakes, but English is not my native language!
You have some irregularities elsewhere. The range you're plotting isn't the entire set, and I would calculate x and y directly for each pixel rather than using increments, so as to avoid accumulating rounding error (see the sketch after the corrected loop below).
But it looks to me as though your main error is in the iterative computation: you are modifying the rez variable before you use it in the computation of the new imz value. Your loop should look more like this:
while (rez * rez + imz * imz <= 4 && iter < MAX)
{
    double rT = rez * rez - imz * imz + x;
    imz = 2 * rez * imz + y;
    rez = rT;
    iter++;
}
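And for the point about computing coordinates directly, a minimal sketch using the question's variable names:
// Compute x and y from the pixel indices each time instead of accumulating steps,
// so rounding error does not build up across the image.
for (int i = 0; i < pbx.Width; i++)
{
    double x = xmin + i * (xmax - xmin) / pbx.Width;
    for (int j = 0; j < pbx.Height; j++)
    {
        double y = ymin + j * (ymax - ymin) / pbx.Height;
        // ... iterate z -> z^2 + c for this (x, y) as above ...
    }
}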
In addition to Peter's answer, you should use a color palette instead of drawing just black and white pixels.
Create an array of colors, like this (very simple example):
Color[] colors = new Color[768];
for (int i = 0; i < 256; i++) {
    colors[i      ] = Color.FromArgb(i, 0, 0);
    colors[i + 256] = Color.FromArgb(255 - i, i, 0);
    colors[i + 512] = Color.FromArgb(0, 255 - i, i);
}
Then use the iter value to pull a color and draw it:
int index = (int)((double)iter / MAX * 767);
p.Color = colors[index];
g.DrawRectangle(p, i, j, 1, 1);
Replace the entire if (iter == MAX) ... else ... statement with this last step.
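In context, the end of the inner pixel loop would then look roughly like this (keeping a separate black case for points that never escape is an optional variation, not part of the replacement described above):
if (iter == MAX)
{
    // Point never escaped: conventionally drawn black (inside the set).
    p.Color = Color.Black;
}
else
{
    // Escape-time palette lookup from the colors array above.
    int index = (int)((double)iter / MAX * 767);
    p.Color = colors[index];
}
g.DrawRectangle(p, i, j, 1, 1);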

Retinex algorithm implementation

I need to implement the Single Scale Retinex and Multiscale Retinex algorithms in C#.
I searched a bit but couldn't find any useful practice projects or articles with code.
If I understood correctly, I should:
Convert RGB to YUV
Blur the image using Gaussian blur filter
Use I'(x, y) = 255*log10( I(x, y)/G(x, y) ) + 127.5
where I is the illumination, G is the Gaussian kernel, and I' is the result image
Convert back from YUV to RGB
This code is not working correctly
public static Image<Bgr, byte> SingleScaleRetinex(this Image<Bgr, byte> img, int gaussianKernelSize, double sigma)
{
    var radius = gaussianKernelSize / 2;
    var kernelSize = 2 * radius + 1;
    var ycc = img.Convert<Ycc, byte>();
    var sum = 0f;
    var gaussKernel = new float[kernelSize * kernelSize];
    for (int i = -radius, k = 0; i <= radius; i++, k++)
    {
        for (int j = -radius; j <= radius; j++)
        {
            var val = (float)Math.Exp(-(i * i + j * j) / (sigma * sigma));
            gaussKernel[k] = val;
            sum += val;
        }
    }
    for (int i = 0; i < gaussKernel.Length; i++)
        gaussKernel[i] /= sum;
    var gray = new Image<Gray, byte>(ycc.Size);
    CvInvoke.cvSetImageCOI(ycc, 1);
    CvInvoke.cvCopy(ycc, gray, IntPtr.Zero);
    // Image dimensions
    var width = img.Width;
    var height = img.Height;
    var bmp = gray.Bitmap;
    var bitmapData = bmp.LockBits(new Rectangle(Point.Empty, gray.Size), ImageLockMode.ReadWrite, PixelFormat.Format8bppIndexed);
    unsafe
    {
        for (var y = 0; y < height; y++)
        {
            var row = (byte*)bitmapData.Scan0 + y * bitmapData.Stride;
            for (var x = 0; x < width; x++)
            {
                var color = row + x;
                float val = 0;
                for (int i = -radius, k = 0; i <= radius; i++, k++)
                {
                    var ii = y + i;
                    if (ii < 0) ii = 0; if (ii >= height) ii = height - 1;
                    var row2 = (byte*)bitmapData.Scan0 + ii * bitmapData.Stride;
                    for (int j = -radius; j <= radius; j++)
                    {
                        var jj = x + j;
                        if (jj < 0) jj = 0; if (jj >= width) jj = width - 1;
                        val += *(row2 + jj) * gaussKernel[k];
                    }
                }
                var newColor = 127.5 + 255 * Math.Log(*color / val);
                if (newColor > 255)
                    newColor = 255;
                else if (newColor < 0)
                    newColor = 0;
                *color = (byte)newColor;
            }
        }
    }
    bmp.UnlockBits(bitmapData);
    CvInvoke.cvCopy(gray, ycc, IntPtr.Zero);
    CvInvoke.cvSetImageCOI(ycc, 0);
    return ycc.Convert<Bgr, byte>();
}
Look at:
http://www.fer.unizg.hr/ipg/resources/color_constancy
These algorithms are modifications of the Retinex algorithm (with speed improvement) although the author gave them funny names :)
There is full source code (C++, but it is written very nicely).
Sorry for necro-posting, but it seems that there's a mistake in step 3 of your procedure that can mislead someone passing by.
In order to apply the correction, you want to divide the source image by a Gauss-filtered copy of it, not by the Gaussian kernel itself. Approximately, in pseudo-code:
I_filtered(x,y) = G(x,y) * I(x,y)
I'(x,y) = log(I(x,y) / I_filtered(x,y))
And then cast I'(x, y) to the required numeric type (uint8, as I infer from the original post).
More on that topic can be found in this paper:
Ri(x, y) = log(Ii(x, y)) − log(Ii(x, y) ∗ F(x, y))
where Ii is the input image on the i-th color channel, Ri is the retinex output image on the i-th channel, and F is the normalized surround function.
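Put differently, relative to step 3 in the question, the denominator should be the Gauss-filtered image value at that pixel, not the kernel weight. A minimal per-pixel sketch of that correction, assuming blurred already holds the Gauss-filtered value of the channel at this pixel (the helper name and the epsilon guard are just illustrative):
// newValue = 127.5 + 255 * log10(I / I_filtered), clamped to the byte range.
static byte RetinexPixel(byte original, double blurred)
{
    // Small epsilon guards against division by zero / log of zero.
    double ratio = (original + 1e-6) / (blurred + 1e-6);
    double v = 127.5 + 255.0 * Math.Log10(ratio);
    return (byte)Math.Max(0, Math.Min(255, Math.Round(v)));
}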

Compare the similarity of two 16x16 pixel images

I would like to compare the similarity of 2 images as a percentage. I want to detect images that are 90% the same. Each image is 16x16 pixels. I need some clues or help with this. Right now I am able to detect 100% identical images with the code below:
for (; x < irMainX; x++)
{
    for (; y < irMainY; y++)
    {
        Color pixelColor = image.GetPixel(x, y);
        if (pixelColor.A.ToString() != srClickedArray[x % 16, y % 16, 0])
        {
            blSame = false;
            y = 16;
            break;
        }
        if (pixelColor.R.ToString() != srClickedArray[x % 16, y % 16, 1])
        {
            blSame = false;
            y = 16;
            break;
        }
        if (pixelColor.G.ToString() != srClickedArray[x % 16, y % 16, 2])
        {
            blSame = false;
            y = 16;
            break;
        }
        if (pixelColor.B.ToString() != srClickedArray[x % 16, y % 16, 3])
        {
            blSame = false;
            y = 16;
            break;
        }
    }
    y = y - 16;
    if (blSame == false)
        break;
}
For example, I would like to recognize these 2 images as the same. Currently the software recognizes them as different images since they are not exactly the same.
Use a count for the number of pixels that don't match:
public const double PERCENT_MATCH = 0.9;
int noMatchCount = 0;
for (int x = 0; x < irMainX; x++)
{
    for (int y = 0; y < irMainY; y++)
    {
        if (!pixelsMatch(image.GetPixel(x, y), srClickedArray[x % 16, y % 16]))
        {
            noMatchCount++;
            if (noMatchCount > (16 * 16 * (1.0 - PERCENT_MATCH)))
                goto matchFailed;
        }
    }
}
Console.WriteLine("images are >=90% identical");
return;
matchFailed:
Console.WriteLine("images are <90% identical");
You could also count matching pixels, but that will be slower. Consider instead measuring how much two pixels differ: for most purposes you could have ALL the pixels differ slightly and yet have the images look visually identical.
I wouldn't use image.GetPixel(x, y), as it's a lot slower than using unsafe code to read the bytes of each image directly.
Check out LockBits.
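As a rough illustration of that suggestion, here is a sketch of comparing two 16x16 32-bpp bitmaps via LockBits; the method name and the 90% threshold are just for illustration, and the project must allow unsafe code:
using System;
using System.Drawing;
using System.Drawing.Imaging;

static bool RoughlyEqual(Bitmap a, Bitmap b, double percentMatch = 0.9)
{
    var rect = new Rectangle(0, 0, 16, 16);
    BitmapData da = a.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    BitmapData db = b.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        int mismatches = 0;
        unsafe
        {
            for (int y = 0; y < 16; y++)
            {
                byte* rowA = (byte*)da.Scan0 + y * da.Stride;
                byte* rowB = (byte*)db.Scan0 + y * db.Stride;
                for (int x = 0; x < 16 * 4; x += 4)
                {
                    // Compare the 4 bytes (B, G, R, A) of each pixel.
                    if (rowA[x] != rowB[x] || rowA[x + 1] != rowB[x + 1] ||
                        rowA[x + 2] != rowB[x + 2] || rowA[x + 3] != rowB[x + 3])
                    {
                        mismatches++;
                    }
                }
            }
        }
        return mismatches <= 16 * 16 * (1.0 - percentMatch);
    }
    finally
    {
        a.UnlockBits(da);
        b.UnlockBits(db);
    }
}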
