Is it possible to crop the captured image based on the shape that I want? I'm using a RawImage + WebCamTexture to activate the camera and save the image, and I'm using a UI Image overlay as a mask to cover the unwanted parts. I will attach the picture to the character model later on. Sorry, I am new to Unity. Grateful for your help!
Below is what I have in my code:
// start cam
void Start () {
    devices = WebCamTexture.devices;
    background = GetComponent<RawImage> ();
    devCam = new WebCamTexture ();
    background.texture = devCam;
    devCam.deviceName = devices [0].name;
    devCam.Play ();
}

void OnGUI()
{
    GUI.skin = skin;

    // swap front and back camera
    if (GUI.Button (new Rect ((Screen.width / 2) - 1200, Screen.height - 650, 250, 250), "", GUI.skin.GetStyle ("btn1"))) {
        devCam.Stop ();
        devCam.deviceName = (devCam.deviceName == devices [0].name) ? devices [1].name : devices [0].name;
        devCam.Play ();
    }

    // snap picture
    if (GUI.Button (new Rect ((Screen.width / 2) - 1200, Screen.height - 350, 250, 250), "", GUI.skin.GetStyle ("btn2"))) {
        OnSelectCapture ();
        //freeze cam here?
    }
}

public void OnSelectCapture()
{
    imgID++;
    string fileName = imgID.ToString () + ".png";

    Texture2D snap = new Texture2D (devCam.width, devCam.height);
    Color[] c = devCam.GetPixels ();
    snap.SetPixels (c);
    snap.Apply ();

    // Save created Texture2D (snap) into disk as .png
    System.IO.File.WriteAllBytes (Application.persistentDataPath + "/" + fileName, snap.EncodeToPNG ());
}
Unless I am not understanding your question correctly, you can just call `devCam.Pause()`!
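For example, in your snap-picture button (a minimal sketch; it assumes your Unity version has WebCamTexture.Pause(), otherwise you would have to work around it with devCam.Stop()):

// snap picture
if (GUI.Button (new Rect ((Screen.width / 2) - 1200, Screen.height - 350, 250, 250), "", GUI.skin.GetStyle ("btn2"))) {
    OnSelectCapture ();
    devCam.Pause ();   // freeze the preview after the capture; call devCam.Play() to resume
}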
Update
What you're looking for is basically to copy the pixels from the screen onto a separate image under some condition. So you could use something like this: https://docs.unity3d.com/ScriptReference/Texture2D.EncodeToPNG.html
I'm not 100% sure what exactly you want to do with it, but if you want an image that you can use as a sprite, for instance, you can scan each pixel and, if its colour matches the blue background, swap it for a fully transparent pixel (0 in the alpha channel). That will give you just the face with the black hair and the ears.
Update 2
The link that I referred you to copies all pixels from the camera view, so you don't have to worry about your source image. Here is the untested method; it should work plug and play as long as there is only one background colour, otherwise you will need to modify it slightly to test for the different colours.
IEnumerator GetPNG()
{
    // Create a texture the size of the screen, RGB24 format
    yield return new WaitForEndOfFrame();
    int width = Screen.width;
    int height = Screen.height;
    Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);

    // Read screen contents into the texture
    tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    tex.Apply();

    // Create second texture to copy the first texture into minus the background colour.
    // RGBA32 needed for the alpha channel.
    Texture2D CroppedTexture = new Texture2D(tex.width, tex.height, TextureFormat.RGBA32, false);
    Color BackGroundCol = Color.white; // This is your background colour/s

    // Height of image in pixels
    for (int y = 0; y < tex.height; y++) {
        // Width of image in pixels
        for (int x = 0; x < tex.width; x++) {
            Color cPixelColour = tex.GetPixel(x, y);
            if (cPixelColour != BackGroundCol) {
                CroppedTexture.SetPixel(x, y, cPixelColour);
            } else {
                CroppedTexture.SetPixel(x, y, Color.clear);
            }
        }
    }

    // Encode your cropped texture into PNG
    byte[] bytes = CroppedTexture.EncodeToPNG();
    Object.Destroy(CroppedTexture);
    Object.Destroy(tex);

    // For testing purposes, also write to a file in the project folder
    File.WriteAllBytes(Application.dataPath + "/../CroppedImage.png", bytes);
}
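Since GetPNG is a coroutine (it has to wait for the end of the frame before ReadPixels is allowed), start it with StartCoroutine, e.g. from your capture button; note that File.WriteAllBytes also needs a using System.IO; at the top of the file:

StartCoroutine(GetPNG());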
Related
As the title suggests, I have a problem with the error occurring at the line
targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
Error:
ReadPixels was called to read pixels from system frame buffer, while not inside drawing frame. UnityEngine.Texture2D:ReadPixels(Rect, Int32, Int32)
As I have understood from other posts, one way to solve this issue is to make an IEnumerator method that yield returns new WaitForSeconds or something similar, and call it with StartCoroutine(methodName) so that the frame gets time to render and there are pixels to read.
What I don't get is where in the following code this method would make the most sense. Which part does not get to load in time?
PhotoCapture photoCaptureObject = null;
Texture2D targetTexture = null;
public string path = "";
CameraParameters cameraParameters = new CameraParameters();

private void Awake()
{
    var cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
    targetTexture = new Texture2D(cameraResolution.width, cameraResolution.height);

    // Create a PhotoCapture object
    PhotoCapture.CreateAsync(false, captureObject =>
    {
        photoCaptureObject = captureObject;
        cameraParameters.hologramOpacity = 0.0f;
        cameraParameters.cameraResolutionWidth = cameraResolution.width;
        cameraParameters.cameraResolutionHeight = cameraResolution.height;
        cameraParameters.pixelFormat = CapturePixelFormat.BGRA32;
    });
}

private void Update()
{
    // if not initialized yet don't take input
    if (photoCaptureObject == null) return;

    if (Input.GetKey("k") || Input.GetKey("k"))
    {
        Debug.Log("k was pressed");

        VuforiaBehaviour.Instance.gameObject.SetActive(false);

        // Activate the camera
        photoCaptureObject.StartPhotoModeAsync(cameraParameters, result =>
        {
            if (result.success)
            {
                // Take a picture
                photoCaptureObject.TakePhotoAsync(OnCapturedPhotoToMemory);
            }
            else
            {
                Debug.LogError("Couldn't start photo mode!", this);
            }
        });
    }
}

private static string FileName(int width, int height)
{
    return $"screen_{width}x{height}_{DateTime.Now:yyyy-MM-dd_HH-mm-ss}.png";
}

private void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    // Copy the raw image data into the target texture
    photoCaptureFrame.UploadImageDataToTexture(targetTexture);

    Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
    targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
    targetTexture.Apply();

    byte[] bytes = targetTexture.EncodeToPNG();

    string filename = FileName(Convert.ToInt32(targetTexture.width), Convert.ToInt32(targetTexture.height));

    // save to folder under assets
    File.WriteAllBytes(Application.streamingAssetsPath + "/Snapshots/" + filename, bytes);
    Debug.Log("The picture was uploaded");

    // Deactivate the camera
    photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}

private void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
{
    // Shutdown the photo capture resource
    VuforiaBehaviour.Instance.gameObject.SetActive(true);
    photoCaptureObject.Dispose();
    photoCaptureObject = null;
}
Sorry if this counts as a duplicate of this one, for example.
Edit
And this one might be useful when I get to that point.
Does that mean I don't need these three lines at all?
Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
targetTexture.Apply();
As written in the comments, the difference between using these three lines and not using them is that the saved photo has a black background plus the AR GUI. Without the second line above, I get a photo with the AR GUI, but the background is a live stream from my computer's webcam. And I really don't want to see the computer webcam but what the HoloLens sees.
Your three lines
Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
targetTexture.Apply();
don't make much sense to me. Texture2D.ReadPixels is for creating a screenshot, so you would overwrite the texture you just received from PhotoCapture with a screenshot? (Also with incorrect dimensions, since the camera resolution is very probably not equal to the screen resolution.)
That's also the reason for
As written in the comments the difference between using these three lines and not is that the photo saved has a black background + the AR-GUI.
After doing
photoCaptureFrame.UploadImageDataToTexture(targetTexture);
you already have the Texture2D received from the PhotoCapture in the targetTexture.
I think you probably confused it with Texture2D.GetPixels which is used to get the pixel data of a given Texture2D.
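To illustrate the difference (just a small sketch):

// ReadPixels copies pixels FROM the screen / active RenderTexture INTO the texture
tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
tex.Apply();

// GetPixels reads the pixel data the texture itself already contains
Color[] pixels = tex.GetPixels();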
I would like to crop the captured photo from the center in the end, and am thinking that maybe that is possible with this code line? Beginning the new Rect at other pixels than (0, 0).
What you actually want is to crop the received Texture2D from the center, as you mentioned in the comments. You can do that using GetPixels(int x, int y, int blockWidth, int blockHeight, int miplevel), which is used to cut out a certain area of a given Texture2D:
public static Texture2D CropAroundCenter(Texture2D input, Vector2Int newSize)
{
    if (input.width < newSize.x || input.height < newSize.y)
    {
        // note: no "this" argument here, since this method is static
        Debug.LogError("You can't cut out an area of an image which is bigger than the image itself!");
        return null;
    }

    // get the pixel coordinate of the center of the input texture
    var center = new Vector2Int(input.width / 2, input.height / 2);

    // Get pixels around center
    // GetPixels starts with 0,0 in the bottom left corner,
    // so as the name says, center.x, center.y would get the pixel in the center.
    // We want to start getting pixels from center - half of the newSize
    //
    // then, starting there, we want to read newSize pixels in both dimensions
    var pixels = input.GetPixels(center.x - newSize.x / 2, center.y - newSize.y / 2, newSize.x, newSize.y, 0);

    // Create a new texture with newSize
    var output = new Texture2D(newSize.x, newSize.y);
    output.SetPixels(pixels);
    output.Apply();

    return output;
}
For (hopefully) better understanding, here is an illustration of what that GetPixels overload does with the given values:
and then use it in
private void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    // Copy the raw image data into the target texture
    photoCaptureFrame.UploadImageDataToTexture(targetTexture);

    // for example take only half of the texture's width and height
    targetTexture = CropAroundCenter(targetTexture, new Vector2Int(targetTexture.width / 2, targetTexture.height / 2));

    byte[] bytes = targetTexture.EncodeToPNG();

    string filename = FileName(Convert.ToInt32(targetTexture.width), Convert.ToInt32(targetTexture.height));

    // save to folder under assets
    File.WriteAllBytes(Application.streamingAssetsPath + "/Snapshots/" + filename, bytes);
    Debug.Log("The picture was uploaded");

    // Deactivate the camera
    photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}
Or you could make it an extension method in a separate static class like
public static class Texture2DExtensions
{
    public static void CropAroundCenter(this Texture2D input, Vector2Int newSize)
    {
        if (input.width < newSize.x || input.height < newSize.y)
        {
            Debug.LogError("You can't cut out an area of an image which is bigger than the image itself!");
            return;
        }

        // get the pixel coordinate of the center of the input texture
        var center = new Vector2Int(input.width / 2, input.height / 2);

        // Get pixels around center
        // GetPixels starts with 0,0 in the bottom left corner,
        // so as the name says, center.x, center.y would get the pixel in the center.
        // We want to start getting pixels from center - half of the newSize
        //
        // then, starting there, we want to read newSize pixels in both dimensions
        var pixels = input.GetPixels(center.x - newSize.x / 2, center.y - newSize.y / 2, newSize.x, newSize.y, 0);

        // Resize the texture (creating a new one didn't work)
        input.Resize(newSize.x, newSize.y);
        input.SetPixels(pixels);
        input.Apply(true);
    }
}
and use it instead like
targetTexture.CropAroundCenter(new Vector2Int(targetTexture.width / 2, targetTexture.height / 2));
Note:
UploadImageDataToTexture: You may only use this method if you specified the BGRA32 format in your CameraParameters.
Luckily you use that anyway ;)
Keep in mind that this operation will happen on the main thread and therefore be slow.
However, the only alternative would be CopyRawImageDataIntoBuffer and generating the texture in another thread or externally, so I'd say it is ok to stay with UploadImageDataToTexture ;)
and
The captured image will also appear flipped on the HoloLens. You can reorient the image by using a custom shader.
By flipped they actually mean that the Y axis of the texture is upside down; the X axis is correct.
For flipping the Texture vertically you can use a second extension method:
public static class Texture2DExtensions
{
    public static void CropAroundCenter(){....}

    public static void FlipVertically(this Texture2D texture)
    {
        var pixels = texture.GetPixels();
        var flippedPixels = new Color[pixels.Length];

        // These for loops run through each individual pixel and
        // write it with an inverted Y coordinate into flippedPixels
        for (var x = 0; x < texture.width; x++)
        {
            for (var y = 0; y < texture.height; y++)
            {
                var pixelIndex = x + y * texture.width;
                var flippedIndex = x + (texture.height - 1 - y) * texture.width;

                flippedPixels[flippedIndex] = pixels[pixelIndex];
            }
        }

        texture.SetPixels(flippedPixels);
        texture.Apply();
    }
}
and use it like
targetTexture.FlipVertically();
Result: (I used FlipVertically and cropped to half of the size every second for this example with a given texture, but it should work the same for a taken picture.)
Image source: http://developer.vuforia.com/sites/default/files/sample-apps/targets/imagetargets_targets.pdf
Update
To your button problem:
Don't use
if (Input.GetKey("k") || Input.GetKey("k"))
First of all, you are checking the exact same condition twice. Also, GetKey fires every frame while the key stays pressed. Instead, rather use
if (Input.GetKeyDown("k"))
which fires only a single time. I guess there was an issue with Vuforia and PhotoCapture since your original version fired so often, and maybe you had some concurrent PhotoCapture processes...
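If you want to be extra safe against overlapping captures, you could additionally guard the call with a flag (a sketch; isCapturing would be a bool field you add yourself and reset in OnStoppedPhotoMode):

if (Input.GetKeyDown("k") && !isCapturing)
{
    isCapturing = true;
    Debug.Log("k was pressed");
    // ... start photo mode and take the photo as before ...
}

// and in OnStoppedPhotoMode:
// isCapturing = false;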
I am trying to add a watermark to my image, and this is the code I have for taking a screenshot. Can someone show me how to add a watermark to my image? I want a small logo at the top right-hand side of the image.
I have been trying to find out whether I could make what I have in the canvas stay when a screenshot is taken (in real life), but with no luck. I would really appreciate it if someone could help me out here!
public string MakePhoto(bool openIt)
{
    int resWidth = Screen.width;
    int resHeight = Screen.height;

    Texture2D screenShot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false); // Create new texture
    RenderTexture rt = new RenderTexture(resWidth, resHeight, 24);

    // hide the info-text, if any
    if (infoText)
    {
        infoText.text = string.Empty;
    }

    // render background and foreground cameras
    if (backroundCamera && backroundCamera.enabled)
    {
        backroundCamera.targetTexture = rt;
        backroundCamera.Render();
        backroundCamera.targetTexture = null;
    }

    if (backroundCamera2 && backroundCamera2.enabled)
    {
        backroundCamera2.targetTexture = rt;
        backroundCamera2.Render();
        backroundCamera2.targetTexture = null;
    }

    if (foreroundCamera && foreroundCamera.enabled)
    {
        foreroundCamera.targetTexture = rt;
        foreroundCamera.Render();
        foreroundCamera.targetTexture = null;
    }

    // get the screenshot
    RenderTexture prevActiveTex = RenderTexture.active;
    RenderTexture.active = rt;
    screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);

    // clean-up
    RenderTexture.active = prevActiveTex;
    Destroy(rt);

    byte[] btScreenShot = screenShot.EncodeToJPG();
    Destroy(screenShot);

#if !UNITY_WSA
    // save the screenshot as jpeg file
    string sDirName = Application.persistentDataPath + "/Screenshots";
    if (!Directory.Exists(sDirName))
        Directory.CreateDirectory(sDirName);

    string sFileName = sDirName + "/" + string.Format("{0:F0}", Time.realtimeSinceStartup * 10f) + ".jpg";
    File.WriteAllBytes(sFileName, btScreenShot);

    Debug.Log("Photo saved to: " + sFileName);
    if (infoText)
    {
        infoText.text = "Saved to: " + sFileName;
    }

    // open file
    if (openIt)
    {
        System.Diagnostics.Process.Start(sFileName);
    }

    return sFileName;
PS: I found this which might be useful?
public Texture2D AddWatermark(Texture2D background, Texture2D watermark)
{
    int startX = 0;
    int startY = background.height - watermark.height;

    for (int x = startX; x < background.width; x++)
    {
        for (int y = startY; y < background.height; y++)
        {
            Color bgColor = background.GetPixel(x, y);
            Color wmColor = watermark.GetPixel(x - startX, y - startY);

            Color final_color = Color.Lerp(bgColor, wmColor, wmColor.a / 1.0f);

            background.SetPixel(x, y, final_color);
        }
    }

    background.Apply();
    return background;
}
Select the imported image in the Project view, set its Texture Type to Sprite (2D and UI) in the Inspector (see the Sprites Manual) and hit Apply. (For GetPixels/SetPixels to work, Read/Write Enabled also has to be checked.)
add a field for it to your class like
public Texture2D watermark;
Reference the watermark in the Inspector
You could simply add the watermark as an overlay by adding the Color values from both textures for each pixel (assuming here that they have the same size!).
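A minimal sketch of that simple full-size overlay (assuming both textures have exactly the same dimensions and are readable; the method name is just an example):

private static void AddFullSizeWatermark(Texture2D texture, Texture2D watermarkTexture)
{
    Color[] pixels = texture.GetPixels();
    Color[] watermarkPixels = watermarkTexture.GetPixels();

    for (int i = 0; i < pixels.Length; i++)
    {
        // simply add the watermark pixel on top of the original pixel
        pixels[i] += watermarkPixels[i];
    }

    texture.SetPixels(pixels);
    texture.Apply();
}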
If you want a watermark only in a certain rect of the texture, you have to scale it accordingly and use Texture2D.SetPixels(int x, int y, int blockWidth, int blockHeight, Color[] colors) (this assumes the watermark image is smaller in pixels than the screenshot!) like
private static void AddWaterMark(Texture2D texture, Texture2D watermarkTexture)
{
    int watermarkWidth = watermarkTexture.width;
    int watermarkHeight = watermarkTexture.height;

    // In Unity, different to most expectations, the pixel coordinate
    // 0,0 is not the top-left corner but the bottom-left,
    // so since you want the watermark in the top-right corner do
    int startx = texture.width - watermarkWidth;
    // optionally you could also still leave a border of e.g. 10 pixels by using
    // int startx = texture.width - watermarkWidth - 10;

    // same for the y axis
    int starty = texture.height - watermarkHeight;

    Color[] watermarkPixels = watermarkTexture.GetPixels();

    // get the texture pixels for the given rect
    Color[] originalPixels = texture.GetPixels(startx, starty, watermarkWidth, watermarkHeight);

    for (int i = 0; i < watermarkPixels.Length; i++)
    {
        var pixel = watermarkPixels[i];

        // adjust the alpha value of the watermark
        pixel.a *= 0.5f;

        // add watermark pixel to original pixel
        originalPixels[i] += pixel;
    }

    // write back the changed texture data
    texture.SetPixels(startx, starty, watermarkWidth, watermarkHeight, originalPixels);
    texture.Apply();
}
call it like
screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
AddWaterMark(screenShot, watermark);
I have a gallery scene and I want to load PNGs from the persistent data path.
The thing is that I want them as thumbnails; there's no need for me to load the full-size file.
How can I define the scale of the sprite?
This is the relevant line for creating the sprite:
Sprite sp1 = Sprite.Create(texture1, new Rect(0, 0, texture1.width, texture1.height), new Vector2(0.5f, 0.5f), 100, 0, SpriteMeshType.FullRect);
and this is the texture creation code:
Texture2D takeScreenShotImage(string filePath)
{
    Texture2D texture = null;
    byte[] fileBytes;

    if (File.Exists(filePath))
    {
        fileBytes = File.ReadAllBytes(filePath);

        texture = new Texture2D(1, 1, TextureFormat.ETC2_RGB, false);
        texture.LoadImage(fileBytes);
    }

    return texture;
}
The proper place to make that change is on the Texture2D, after loading the texture. If you really want to load them as thumbnails and want the size to be smaller to save memory, resize it with the Texture2D.Resize function after loading the texture. A height and width of 60 should be fine. You can then create a Sprite from it with Sprite.Create.
Texture2D takeScreenShotImage(string filePath)
{
    Texture2D texture = null;
    byte[] fileBytes;

    if (File.Exists(filePath))
    {
        fileBytes = File.ReadAllBytes(filePath);

        texture = new Texture2D(1, 1, TextureFormat.ETC2_RGB, false);
        texture.LoadImage(fileBytes);

        // RE-SIZE THE Texture2D (do it only when the file was actually loaded,
        // otherwise texture is still null here)
        texture.Resize(60, 60);
        texture.Apply();
    }

    return texture;
}
Note that TextureFormat.ETC2_RGB is compressed. To resize it you may have to create a copy of it. See #1 solution from my other post on how to do this.
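Creating the thumbnail Sprite from the resized texture could then look like this (a small sketch; 60x60 and the pixelsPerUnit of 100 are just example values, matching the line from the question):

Texture2D thumbTexture = takeScreenShotImage(filePath);
Sprite thumbnail = Sprite.Create(thumbTexture,
    new Rect(0, 0, thumbTexture.width, thumbTexture.height),
    new Vector2(0.5f, 0.5f), 100, 0, SpriteMeshType.FullRect);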
The reason I wish to do so is that Unity has a nice DXT5 format that reduces the file size by a lot. But to get that, I need a sprite whose height and width are both a multiple of 4.
So I thought I would create a new texture with the desired size, load its pixels with the original's pixels, and make a sprite out of it that I save as an asset.
The issue is that while saving the texture works (I get the same texture with the proper size), saving the sprite doesn't work. It spits out something, but it isn't even close to what I need.
Here is the code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ResizeSprites
{
    public void Resize(Sprite sprite)
    {
        int _hei, _wid;

        //getting the closest higher values that are a multiple of 4.
        for (_hei = sprite.texture.height; _hei % 4 != 0; _hei++) ;
        for (_wid = sprite.texture.width; _wid % 4 != 0; _wid++) ;

        //creating the new texture.
        Texture2D tex = new Texture2D(_wid, _hei, TextureFormat.RGBA32, false);
        //tex.alphaIsTransparency = true;
        //tex.EncodeToPNG();

        //giving the new texture the "improper" ratio sprite texture's pixel info
        //pixel by pixel.
        for (int wid = 0; wid < sprite.texture.width; wid++)
        {
            for (int hei = 0; hei < sprite.texture.height; hei++)
            {
                tex.SetPixel(wid, hei, sprite.texture.GetPixel(wid, hei));
            }
        }

        //saving the asset. the save works, was used for both meshes as well as textures.
        Sprite n_spr = Sprite.Create(tex,
            new Rect(0, 0, tex.width, tex.height),
            new Vector2(0.5f, 0.5f), 100.0f);

        AssetSaver.CreateAsset(n_spr, sprite.name + "_dtx5");
    }
}
And here are my results:
The first one is the original sprite, and the second is what I was given.
Edit: Even if I don't save my creation and just instantiate it as a GameObject, the result is still the same ugly one.
You really don't need all this code. Texture2D has a Resize function, so just pull the Texture2D from the Sprite, then call the Resize function to resize it. That's it.
Something like this:
public void Resize(Sprite sprite)
{
    Texture2D tex = sprite.texture;
    tex.Resize(100, 100, TextureFormat.RGBA32, false);

    Sprite n_spr = Sprite.Create(tex,
        new Rect(0, 0, tex.width, tex.height),
        new Vector2(0.5f, 0.5f), 100.0f);

    AssetSaver.CreateAsset(n_spr, sprite.name + "_dtx5");
}
As for your original problem, that's because you did not call the Apply function. Each time you modify the pixels, you are supposed to call the Apply function. Finally, always use GetPixels32, not GetPixel or GetPixels. The reason is that GetPixels32 is extremely fast compared to the other functions.
public void Resize(Sprite sprite)
{
    int _hei, _wid;

    //getting the closest higher values that are a multiple of 4.
    for (_hei = sprite.texture.height; _hei % 4 != 0; _hei++) ;
    for (_wid = sprite.texture.width; _wid % 4 != 0; _wid++) ;

    //creating the new texture.
    Texture2D tex = new Texture2D(_wid, _hei, TextureFormat.RGBA32, false);
    //tex.alphaIsTransparency = true;
    //tex.EncodeToPNG();

    //giving the new texture the "improper" ratio sprite texture's pixel info
    //(copied as one block; the block overload of SetPixels32 is used because
    // the new texture can be slightly larger than the source texture)
    Color32[] color = sprite.texture.GetPixels32();
    tex.SetPixels32(0, 0, sprite.texture.width, sprite.texture.height, color);
    tex.Apply();

    //saving the asset. the save works, was used for both meshes as well as textures.
    Sprite n_spr = Sprite.Create(tex,
        new Rect(0, 0, tex.width, tex.height),
        new Vector2(0.5f, 0.5f), 100.0f);

    AssetSaver.CreateAsset(n_spr, sprite.name + "_dtx5");
}
I have an image that contains a layout for a level, and I want to load the level in the game by reading each pixel's color from the image and drawing the corresponding block. I am using this code:
public void readLevel(string path, GraphicsDevice graphics)
{
    //GET AN ARRAY OF COLORS
    Texture2D level = Content.Load<Texture2D>(path);
    Color[] colors = new Color[level.Width * level.Height];
    level.GetData(colors);

    //READ EACH PIXEL AND DRAW LEVEL
    Vector3 brickRGB = new Vector3(128, 128, 128);

    int placeX = 0;
    int placeY = 0;

    foreach (Color pixel in colors)
    {
        SpriteBatch spriteBatch = new SpriteBatch(graphics);
        spriteBatch.Begin();

        if (pixel == new Color(brickRGB))
        {
            Texture2D brick = Content.Load<Texture2D>("blocks/brick");
            spriteBatch.Draw(brick, new Rectangle(placeX, placeY, 40, 40), Color.White);
        }

        if (placeX == 22)
        {
            placeX = 0;
            placeY++;
        }
        else
            spriteBatch.End();
    }
}
But it just shows a blank screen. Help would be appreciated!
EDIT: PROBLEM FIXED! (Read htmlcoderexe's answer below.) Also, there was another problem with this code; read here.
Your code seems to draw each sprite at a one-pixel offset from the previous one, but your other parameter suggests they are 40 pixels wide. placeX and placeY will need to be multiplied by the stride of your tiles (40).
Also, in the bit where you compare colours, you might be having a problem with floating-point colour values (0.0f to 1.0f) and byte colours being used together.
new Color(brickRGB)
This translates to:
new Color(new Vector3(128f,128f,128f))
So it tries constructing a colour from the 0.0f to 1.0f range, clips it down to 1f (the allowed maximum for float input to Color), and you end up with white (255, 255, 255), which is not equal to your target colour (128, 128, 128).
To get around this, try changing
Vector3 brickRGB = new Vector3(128, 128, 128);
to
Color brickRGB = new Color(128, 128, 128);
and this part
if (pixel == new Color(brickRGB))
to just
if (pixel == brickRGB)
You will also need to create your drawing rectangle with placeX and placeY multiplied by 40, but do not write that back to the variables; just use placeX * 40 for now and replace it with a constant later.
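In other words, the draw call would become something like this (just that one change sketched):

spriteBatch.Draw(brick, new Rectangle(placeX * 40, placeY * 40, 40, 40), Color.White);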