Unity: GetPixels() always results in the colour black across whole image - c#

I have an image (attached) which I'm using as a test. I'm trying to get and store the colour of each pixel in an array.
I use the code below to do this:
Texture2D tex = mapImage.mainTexture as Texture2D;
int w = tex.width;
int h = tex.height;
Vector4[,] vals = new Vector4[w, h];
Color[] cols = tex.GetPixels();

for (int y = 0; y < h; y++)
{
    for (int x = 0; x < w; x++)
    {
        if (cols[y + x] != Color.black)
        {
            Debug.Break();
        }
        vals[x, y] = cols[(y + x)];
    }
}
Where mapImage is a public Material variable which I drag onto the prefab in the scene. As you can see, I've added a debug test there to pause the editor if a non-black colour is reached. This never gets hit.
Interestingly, I've got another script which runs and tells me the colour value (GetPixel()) at the click position using the same image. It works fine (different methods, but both ultimately use the same material).
I'm at a loss as to why GetPixels() is always coming out black.
I've also been considering just loading the image data into a byte array, then parsing the values into a Vector4, but hoping this will work eventually.

You aren't indexing into the Color array properly. With the index you are using, y + x, you only ever read the first width + height - 1 entries of the array, so you keep re-checking the same values from the lowest rows of the texture and never get past a certain point.
Instead, when calculating the index, you need to multiply the row that you are on by the row length and add that to the column you are on:
Texture2D tex = mapImage.mainTexture as Texture2D;
int w = tex.width;
int h = tex.height;
Vector4[,] vals = new Vector4[w, h];
Color[] cols = tex.GetPixels();

for (int y = 0; y < h; y++)
{
    for (int x = 0; x < w; x++)
    {
        int index = y * w + x;
        vals[x, y] = cols[index];
    }
}
From the documentation on GetPixels:
The returned array is a flattened 2D array, where pixels are laid out left to right, bottom to top (i.e. row after row). Array size is width by height of the mip level used. The default mip level is zero (the base texture) in which case the size is just the size of the texture. In general case, mip level size is mipWidth=max(1,width>>miplevel) and similarly for height.
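For large textures it can also be worth switching to GetPixels32, which returns Color32 values and avoids the per-channel float conversion until you actually need it. A minimal sketch of the same copy, assuming the same mapImage field and that the texture has Read/Write enabled in its import settings (GetPixels and GetPixels32 throw otherwise):
Texture2D tex = mapImage.mainTexture as Texture2D;
int w = tex.width;
int h = tex.height;
Color32[] cols = tex.GetPixels32();      // flattened: left to right, bottom to top
Vector4[,] vals = new Vector4[w, h];
for (int y = 0; y < h; y++)
{
    for (int x = 0; x < w; x++)
    {
        Color32 c = cols[y * w + x];     // row offset is y * width
        vals[x, y] = new Vector4(c.r, c.g, c.b, c.a) / 255f;
    }
}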

Related

Split Audio Waveform sprite that it width is out of range in a Scroll Rect

I'm new to Unity 3D and trying to split a Texture2D sprite that contains an audio waveform inside a Scroll Rect. The waveform comes from an audio source imported by the user and is added to the Scroll Rect horizontally, like a timeline. The script that creates the waveform works, but the width variable (which comes from another script; that part is not the problem) exceeds the limits of a Texture2D. Only if I manually set a width below 16000 does the waveform appear, and then it doesn't fill the Scroll Rect. A song of 3-4 minutes usually needs a width of 55000-60000, and that can't be rendered. I need to split that waveform sprite horizontally into multiple parts (or children) laid out together, and render them only when they appear on screen. How can I do that? Thank you in advance.
This creates the waveform sprite; it is here that the sprite should be split into multiple sprites placed together horizontally and rendered only when they appear on screen:
public void LoadWaveform(AudioClip clip)
{
    Texture2D texwav = waveformSprite.GetWaveform(clip);
    Rect rect = new Rect(Vector2.zero, new Vector2(Realwidth, 180));
    waveformImage.sprite = Sprite.Create(texwav, rect, Vector2.zero);
    waveformImage.SetNativeSize();
}
This creates the waveform from an audio clip (found on the internet and modified for my project):
public class WaveformSprite : MonoBehaviour
{
    private int width = 16000; // This should be the variable from another script
    private int height = 180;
    public Color background = Color.black;
    public Color foreground = Color.yellow;
    private int samplesize;
    private float[] samples = null;
    private float[] waveform = null;
    private float arrowoffsetx;

    public Texture2D GetWaveform(AudioClip clip)
    {
        int halfheight = height / 2;
        float heightscale = (float)height * 0.75f;

        // get the sound data
        Texture2D tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
        waveform = new float[width];

        Debug.Log("NUMERO DE SAMPLES: " + clip.samples);
        var clipSamples = clip.samples;
        samplesize = clipSamples * clip.channels;
        samples = new float[samplesize];
        clip.GetData(samples, 0);

        int packsize = (samplesize / width);
        for (int w = 0; w < width; w++)
        {
            waveform[w] = Mathf.Abs(samples[w * packsize]);
        }

        // map the sound data to texture
        // 1 - clear
        for (int x = 0; x < width; x++)
        {
            for (int y = 0; y < height; y++)
            {
                tex.SetPixel(x, y, background);
            }
        }

        // 2 - plot
        for (int x = 0; x < width; x++)
        {
            for (int y = 0; y < waveform[x] * heightscale; y++)
            {
                tex.SetPixel(x, halfheight + y, foreground);
                tex.SetPixel(x, halfheight - y, foreground);
            }
        }

        tex.Apply();
        return tex;
    }
}
Instead of reading all the samples in one loop to populate waveform[], read only the amount needed for the current texture (utilizing an offset to track position in the array).
Calculate the number of textures your function will output.
var textureCount = Mathf.CeilToInt(totalWidth / (float)maxTextureWidth); // max texture width 16,000
Create an outer loop to generate each texture.
for (int i = 0; i < textureCount; i++)
Calculate the current texture's width (used for the waveform array and the drawing loops).
var textureWidth = Mathf.CeilToInt(Mathf.Min(totalWidth - (maxTextureWidth * i), maxTextureWidth));
Utilize an offset for populating the waveform array.
for (int w = 0; w < textureWidth; w++)
{
    waveform[w] = Mathf.Abs(samples[(w + offset) * packSize]);
}
With offset increasing at the end of the texture loop by the number of waveform entries used for that texture (i.e. the texture width).
offset += textureWidth;
In the end the function will return an array of Texture2D instead of a single texture.
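Putting those pieces together, the generator could look roughly like the sketch below. It is untested and carries over names from the snippets above (height, background, foreground, samples, samplesize); totalWidth and maxTextureWidth are assumed parameters rather than anything from the original script:
public Texture2D[] GetWaveforms(AudioClip clip, int totalWidth, int maxTextureWidth = 16000)
{
    int halfheight = height / 2;
    float heightscale = height * 0.75f;

    samplesize = clip.samples * clip.channels;
    samples = new float[samplesize];
    clip.GetData(samples, 0);
    int packsize = samplesize / totalWidth;

    int textureCount = Mathf.CeilToInt(totalWidth / (float)maxTextureWidth);
    var textures = new Texture2D[textureCount];
    int offset = 0;

    for (int i = 0; i < textureCount; i++)
    {
        int textureWidth = Mathf.Min(totalWidth - maxTextureWidth * i, maxTextureWidth);
        var tex = new Texture2D(textureWidth, height, TextureFormat.RGBA32, false);

        for (int x = 0; x < textureWidth; x++)
        {
            float value = Mathf.Abs(samples[(x + offset) * packsize]);
            for (int y = 0; y < height; y++)
                tex.SetPixel(x, y, background);              // 1 - clear this column
            for (int y = 0; y < value * heightscale; y++)
            {
                tex.SetPixel(x, halfheight + y, foreground); // 2 - plot upwards
                tex.SetPixel(x, halfheight - y, foreground); //     and downwards
            }
        }

        tex.Apply();
        textures[i] = tex;
        offset += textureWidth;                              // advance into the waveform
    }
    return textures;
}
Each returned texture can then be wrapped in its own Sprite and laid out horizontally in the Scroll Rect, which also makes it straightforward to enable only the images that are currently visible.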

Find last drawn pixel of C# Metafile

I have a Metafile object. For reasons outside of my control, it has been provided much larger (thousands of times larger) than what would be required to fit the image drawn inside it.
For example, it could be 40 000 x 40 000, yet only contains "real" (non-transparent) pixels in an area 2000 x 1600.
Originally, this metafile was simply drawn to a control, and the control bounds limited the area to a reasonable size.
Now I am trying to split it into different chunks of dynamic size, depending on user input. What I want to do is count how many of those chunks there will be (in x and in y, since the splitting produces a two-dimensional grid of chunks).
I am aware that, technically, I could go the O(N²) way, and just check the pixels one by one to find the "real" bounds of the drawn image.
But this will be painfully slow.
I am looking for a way of getting the position (x,y) of the very last drawn pixel in the entire metafile, without iterating through every single one of them.
Since the DrawImage method is not painfully slow, at least not N² slow, I assume the metafile object has some internal optimisations that would allow something like this. Just as the List object has a .Count property that is much faster than actually counting the objects, is there some way of getting the practical bounds of a metafile?
The drawn content, in this scenario, will always be rectangular. I can safely assume that the last pixel will be the same, whether I loop in x then y, or in y then x.
How can I find the coordinates of this "last" pixel?
Finding the bounding rectangle of the non-transparent pixels for such a large image is indeed an interesting challenge.
The most direct approach would be tackling the WMF content but that is also by far the hardest to get right.
Let's instead render the image to a bitmap and look at the bitmap.
First the basic approach, then a few optimizations.
To get the bounds one needs to find the left, top, right and bottom borders.
Here is a simple function to do that:
Rectangle getBounds(Bitmap bmp)
{
    int l, r, t, b; l = t = r = b = 0;

    for (int x = 0; x < bmp.Width - 1; x++)
        for (int y = 0; y < bmp.Height - 1; y++)
            if (bmp.GetPixel(x, y).A > 0) { l = x; goto l1; }
    l1:
    for (int x = bmp.Width - 1; x > l; x--)
        for (int y = 0; y < bmp.Height - 1; y++)
            if (bmp.GetPixel(x, y).A > 0) { r = x; goto l2; }
    l2:
    for (int y = 0; y < bmp.Height - 1; y++)
        for (int x = l; x < r; x++)
            if (bmp.GetPixel(x, y).A > 0) { t = y; goto l3; }
    l3:
    for (int y = bmp.Height - 1; y > t; y--)
        for (int x = l; x < r; x++)
            if (bmp.GetPixel(x, y).A > 0) { b = y; goto l4; }
    l4:
    return Rectangle.FromLTRB(l, t, r, b);
}
Note that it optimizes the last, vertical loops a little to look only at the portion not already tested by the horizontal loops.
It uses GetPixel, which is painfully slow; even LockBits only gains about 10x or so. So we need to reduce the sheer numbers; we need to do that anyway, because 40k x 40k pixels is too large for a Bitmap.
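For reference, a LockBits-based alpha test looks roughly like the sketch below; it assumes 32bpp ARGB data, needs System.Drawing.Imaging and System.Runtime.InteropServices, and replaces the GetPixel calls rather than the loop structure above:
static bool[,] GetAlphaMask(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        var mask = new bool[bmp.Width, bmp.Height];
        byte[] row = new byte[data.Stride];
        for (int y = 0; y < bmp.Height; y++)
        {
            // copy one scan line at a time; pixels are laid out as B, G, R, A
            Marshal.Copy(IntPtr.Add(data.Scan0, y * data.Stride), row, 0, data.Stride);
            for (int x = 0; x < bmp.Width; x++)
                mask[x, y] = row[x * 4 + 3] > 0;   // byte 3 is the alpha channel
        }
        return mask;
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}
That said, reducing the number of pixels helps far more, which is what the scaling below does.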
Since WMF is usually filled with vector data we probably can scale it down a lot. Here is an example:
string fn = "D:\\_test18b.emf";
Image img = Image.FromFile(fn);
int w = img.Width;
int h = img.Height;
float scale = 100;
Rectangle rScaled = Rectangle.Empty;
using (Bitmap bmp = new Bitmap((int)(w / scale), (int)(h / scale)))
using (Graphics g = Graphics.FromImage(bmp))
{
    g.ScaleTransform(1f / scale, 1f / scale);
    g.Clear(Color.Transparent);
    g.DrawImage(img, 0, 0);
    rScaled = getBounds(bmp);
    Rectangle rUnscaled = Rectangle.Round(
        new RectangleF(rScaled.Left * scale, rScaled.Top * scale,
                       rScaled.Width * scale, rScaled.Height * scale));
}
Note that to properly draw the wmf file one may need to adapt the resolutions. Here is an example I used for testing:
using (Graphics g2 = pictureBox.CreateGraphics())
{
    float scaleX = g2.DpiX / img.HorizontalResolution / scale;
    float scaleY = g2.DpiY / img.VerticalResolution / scale;
    g2.ScaleTransform(scaleX, scaleY);
    g2.DrawImage(img, 0, 0);   // draw the original emf image.. (*)
    g2.ResetTransform();
    // g2.DrawImage(bmp, 0, 0); // .. it will look the same as (*)
    g2.DrawRectangle(Pens.Black, rScaled);
}
I left this out above, but for full control over the rendering it ought to have been included in the previous snippet as well.
This may or may not be good enough, depending on the accuracy needed.
To measure the bounds perfectly one can do this trick: use the bounds from the scaled-down test and measure unscaled, but only a tiny stripe around each of the four bound values. When creating the render bitmap we move the origin accordingly.
Example for the right bound:
Rectangle rScaled2 = Rectangle.Empty;
int delta = 80;
int right = (int)(rScaled.Right * scale);

using (Bitmap bmp = new Bitmap(delta * 2, h))
using (Graphics g = Graphics.FromImage(bmp))
{
    g.Clear(Color.Transparent);
    g.DrawImage(img, delta - right, 0);   // shift so the stripe around the right bound lands in the bitmap
    rScaled2 = getBounds(bmp);
}
I could have optimized by not going over the full height but only the portion (plus delta) we already found.
Further optimization can be achieved if one can use knowledge about the data. If we know that the image data is connected, we could use larger steps in the loops until a pixel is found and then trace back one step.
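For example, a coarse scan for the left border could look like this sketch, which assumes the drawn content is connected and at least step pixels wide:
// Jump `step` columns at a time; once a column with content is found,
// walk back to the exact first non-empty column.
static int FindLeftCoarse(Bitmap bmp, int step)
{
    for (int x = 0; x < bmp.Width; x += step)
    {
        if (!ColumnHasPixel(bmp, x)) continue;
        int left = x;
        while (left > 0 && ColumnHasPixel(bmp, left - 1))
            left--;                     // trace back at most step - 1 columns
        return left;
    }
    return -1;                          // nothing found
}

static bool ColumnHasPixel(Bitmap bmp, int x)
{
    for (int y = 0; y < bmp.Height; y++)
        if (bmp.GetPixel(x, y).A > 0)
            return true;
    return false;
}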

Why does this code not create checker board pattern?

I am wondering why would this piece of code NOT generate a checkerboard pattern?
pbImage.Image = new Bitmap(8, 8);
Bitmap bmp = ((Bitmap)pbImage.Image);
byte[] bArr = new byte[64];
int currentX = 0;
int currentY = 0;
Color color = Color.Black;

do
{
    currentY = 0;
    do
    {
        bmp.SetPixel(currentX, currentY, color);
        if (color == Color.Black) color = Color.White; else color = Color.Black;
        currentY++;
    } while (currentY < bitmapHeight);
    currentX++;
} while (currentX < bitmapWidth);

pbImage.Refresh();
Edit: I realized that I need to change the Bitmap constructor to
new Bitmap(bitmapWidth, bitmapHeight, PixelFormat.Format8bppIndexed)
and it seems SetPixel does not support indexed images and expects a Color.
My point is: I want to create raw (pure byte array) grayscale images and show them in a picture box, while keeping it as simple as possible, without using any external libraries.
Your calculation fails because, if you switch at every pixel, then a line with an even number of pixels that starts with colour 0 will end on colour 1, meaning the next line will once again start with colour 0 and every line ends up identical:
0101010101010101
0101010101010101
0101010101010101
0101010101010101
etc...
But since, in X and Y coordinates, any horizontal or vertical movement by 1 pixel across the pattern changes the colour, the actual calculation of whether you want a filled or non-filled pixel can be simplified to (x + y) % 2 == 0.
The checkerboard generating function I put below takes an array of colours as colour palette, and allows you to specify which specific indices from that palette to use as the two colours to use on the pattern. If you just want an image with nothing but a 2-colour palette containing black and white, you can just call it like this:
Bitmap check = GenerateCheckerboardImage(8, 8, new Color[]{Color.Black, Color.White}, 0,1);
The generating function:
public static Bitmap GenerateCheckerboardImage(Int32 width, Int32 height, Color[] colors, Byte color1, Byte color2)
{
    Byte[] patternArray = new Byte[width * height];
    for (Int32 y = 0; y < height; y++)
    {
        for (Int32 x = 0; x < width; x++)
        {
            Int32 offset = x + y * width; // row offset is y * width (width == height here, but don't rely on it)
            patternArray[offset] = (((x + y) % 2 == 0) ? color1 : color2);
        }
    }
    return BuildImage(patternArray, width, height, width, PixelFormat.Format8bppIndexed, colors, Color.Black);
}
The BuildImage function I used is a general-purpose function I made to convert a byte array to an image. You can find it in this answer.
As explained in the rest of that question and the answers on it, the stride argument is the amount of bytes on each line of the image data. For the constructed 8-bit array we got here, that's simply identical to the width, but when loading it's generally rounded to a multiple of 4, and can contain unused padding bytes. (The function takes care of all that, so the input byte array has no such requirements.)
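The linked BuildImage helper isn't reproduced here, but for this particular case (8bpp indexed data whose stride equals the width) a minimal stand-in could look like the following sketch; BuildIndexedImage is a hypothetical name, not the helper from that answer:
// Wrap an 8bpp byte array in an indexed Bitmap and apply a palette.
// Needs System.Drawing.Imaging and System.Runtime.InteropServices.
public static Bitmap BuildIndexedImage(byte[] pixels, int width, int height, Color[] palette)
{
    var bmp = new Bitmap(width, height, PixelFormat.Format8bppIndexed);
    BitmapData data = bmp.LockBits(new Rectangle(0, 0, width, height),
                                   ImageLockMode.WriteOnly, bmp.PixelFormat);
    try
    {
        // copy row by row so the destination stride (a multiple of 4) is respected
        for (int y = 0; y < height; y++)
            Marshal.Copy(pixels, y * width, IntPtr.Add(data.Scan0, y * data.Stride), width);
    }
    finally
    {
        bmp.UnlockBits(data);
    }
    ColorPalette pal = bmp.Palette;      // the palette must be copied out, edited, then reassigned
    for (int i = 0; i < palette.Length && i < pal.Entries.Length; i++)
        pal.Entries[i] = palette[i];
    bmp.Palette = pal;
    return bmp;
}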

How can I save the location of every single pixel of my image in c#?

I am writing a programme in which I need to save the location of every single pixel of my bitmap image in an array, and later on I need to, for example, randomly turn off 300 of the black pixels. However I am not sure how to do that. I have written the following code but of course it does not work. Can anyone please tell me the right way of doing that?
The locations of every pixel are constant (every pixel has exactly one x and one y coordinate), so the requirement to save the location of every single pixel is vague.
I guess what you are trying to do is: turn 300 pixels in an image black, but save the previous color so you can restore single pixels?
You could try this:
class PixelHelper
{
    public Point Coordinate;
    public Color PixelColor;
}

PixelHelper[] pixelBackup = new PixelHelper[300];
Random r = new Random();
for (int i = 0; i < 300; i++)
{
    int xRandom = r.Next(bmp.Width);
    int yRandom = r.Next(bmp.Height);
    Color c = bmp.GetPixel(xRandom, yRandom);
    pixelBackup[i] = new PixelHelper() { Coordinate = new Point(xRandom, yRandom), PixelColor = c };
}
After that the pixelBackup array contains 300 objects that contain a coordinate and the previous color.
EDIT: I guess from the comment that you want to turn 300 random black pixels white and then save the result as an image again?
Random r = new Random();
int n = 0;
while (n < 300)
{
    int xRandom = r.Next(bmp.Width);
    int yRandom = r.Next(bmp.Height);
    // compare ARGB values; Color's == operator also compares the "known colour" state,
    // so GetPixel results never compare equal to Color.Black directly
    if (bmp.GetPixel(xRandom, yRandom).ToArgb() == Color.Black.ToArgb())
    {
        bmp.SetPixel(xRandom, yRandom, Color.White);
        n++;
    }
}
bmp.Save(<filename>);
This turns 300 distinct pixels in your image from black to white. The while loop is used so that n only increases when a black pixel is hit; if the random coordinate hits a non-black pixel, another pixel is picked.
Please note that this code loops forever in case there are fewer than 300 black pixels in your image in total.
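If the image might contain fewer than 300 black pixels, or you want to avoid re-rolling coordinates, an alternative is to collect the black pixels first and then pick from that list. A sketch (again comparing with ToArgb for the reason noted above):
// Gather all black pixels, then turn up to 300 randomly chosen ones white.
var blackPixels = new List<Point>();
for (int y = 0; y < bmp.Height; y++)
    for (int x = 0; x < bmp.Width; x++)
        if (bmp.GetPixel(x, y).ToArgb() == Color.Black.ToArgb())
            blackPixels.Add(new Point(x, y));

var rnd = new Random();
int count = Math.Min(300, blackPixels.Count);
for (int i = 0; i < count; i++)
{
    int pick = rnd.Next(blackPixels.Count);   // pick one of the remaining black pixels
    Point p = blackPixels[pick];
    blackPixels.RemoveAt(pick);               // never pick the same pixel twice
    bmp.SetPixel(p.X, p.Y, Color.White);
}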
The following will open an image into memory and then copy the pixel data into a 2D array. It then randomly converts 300 black pixels in the 2D array to white. As an added bonus, it then saves the pixel data back into the bitmap object and saves the file back to disk.
I edited the code to ensure 300 distinct pixels were selected.
int x = 0, y = 0;

// Get the data
Bitmap myBitmap = new Bitmap("mold.jpg");
Color[,] pixelData = new Color[myBitmap.Width, myBitmap.Height];
for (y = 0; y < myBitmap.Height; y++)
    for (x = 0; x < myBitmap.Width; x++)
        pixelData[x, y] = myBitmap.GetPixel(x, y);

// Randomly convert 300 black pixels to white
Random rand = new Random();
List<Point> Used = new List<Point>();
for (int i = 0; i < 300; i++)
{
    x = rand.Next(0, myBitmap.Width);
    y = rand.Next(0, myBitmap.Height);
    // Ensure we use 300 distinct black pixels (compare ARGB values, not Color instances)
    while (Used.Contains(new Point(x, y)) || pixelData[x, y].ToArgb() != Color.Black.ToArgb())
    {
        x = rand.Next(0, myBitmap.Width);
        y = rand.Next(0, myBitmap.Height);
    }
    Used.Add(new Point(x, y)); // Store the pixel we have used
    pixelData[x, y] = Color.White;
}

// Save the new image
for (y = 0; y < myBitmap.Height; y++)
    for (x = 0; x < myBitmap.Width; x++)
        myBitmap.SetPixel(x, y, pixelData[x, y]);
myBitmap.Save("mold2.jpg");

XNA Getting colour data from a spritesheet is crashing the game

I'm building a small top down shooter in XNA using C#, and I am trying to implement per-pixel collision detection. I have the following code to do that, alongside a standard bounding box detection that returns the rectangle containing the collision.
private bool perPixel(Rectangle object1, Color[] dataA, Rectangle object2, Color[] dataB)
{
    // Bounds of collision
    int top = Math.Max(object1.Top, object2.Top);
    int bottom = Math.Min(object1.Bottom, object2.Bottom);
    int left = Math.Max(object1.Left, object2.Left);
    int right = Math.Min(object1.Right, object2.Right);

    // Check every pixel
    for (int y = top; y < bottom; y++)
    {
        for (int x = left; x < right; x++)
        {
            // Check alpha values
            Color colourA = dataA[(x - object1.Left) + (y - object1.Top) * object1.Width];
            Color colourB = dataB[(x - object2.Left) + (y - object2.Top) * object2.Width];
            if (colourA.A != 0 && colourB.A != 0)
            {
                return true;
            }
        }
    }
    return false;
}
I'm pretty sure that will work, but I am also trying to get some of the objects to check against from a sprite sheet, and I'm trying to use this code to get the colour data, but I am getting an error saying "The size of the data passed in is too large or small for this resource".
Color[] pacmanColour = new Color[frameSize.X * frameSize.Y];
pacman.GetData(0, new Rectangle(currentFrame.X * frameSize.X, currentFrame.Y * frameSize.Y, frameSize.X, frameSize.Y),
    pacmanColour, currentFrame.X * currentFrame.Y, (sheetSize.X * sheetSize.Y));
What am I doing wrong?
Let me show you my method for dealing with Texture2D Colors
I used the following technique for loading premade structures from files
// Load the texture from the content pipeline
Texture2D texture = Content.Load<Texture2D>("Your Texture Name and Directory");
// Convert the 1D array to a 2D array for accessing data easily
// (much easier to do Colors[x, y] than Colors[i], because it specifies a pixel directly)
Color[,] Colors = TextureTo2DArray(texture);
And the function...
Color[,] TextureTo2DArray(Texture2D texture)
{
    Color[] colors1D = new Color[texture.Width * texture.Height]; // the hard-to-read 1D array
    texture.GetData(colors1D);                                    // get the colors and add them to the array
    Color[,] colors2D = new Color[texture.Width, texture.Height]; // the new, easy-to-read 2D array
    for (int x = 0; x < texture.Width; x++)                       // convert!
        for (int y = 0; y < texture.Height; y++)
            colors2D[x, y] = colors1D[x + y * texture.Width];
    return colors2D; // done!
}
It will return a simple-to-use 2D array of colors, so you can simply check whether Colors[1, 1] (pixel 1,1) equals whatever you need.
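As for the error in the GetData call from the question: the elementCount argument has to match the number of pixels covered by the source rectangle, and startIndex is an offset into the destination array, not into the sheet. So the per-frame read probably needs to look something like this sketch (using the names from the question):
Color[] pacmanColour = new Color[frameSize.X * frameSize.Y];
Rectangle frameRect = new Rectangle(currentFrame.X * frameSize.X,
                                    currentFrame.Y * frameSize.Y,
                                    frameSize.X, frameSize.Y);
pacman.GetData(0, frameRect, pacmanColour,
               0,                             // start writing at the beginning of pacmanColour
               frameSize.X * frameSize.Y);    // exactly one frame's worth of pixels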
