I'm trying to create a Bitmap from a byte array using this code:
var b = new Bitmap(pervoe, vtoroe, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
ColorPalette ncp = b.Palette;
for (int i = 0; i < 256; i++)
ncp.Entries[i] = System.Drawing.Color.FromArgb(255, i, i, i);
b.Palette = ncp;
var BoundsRect = new Rectangle(0, 0, Width, Height);
BitmapData bmpData = b.LockBits(BoundsRect, ImageLockMode.WriteOnly, b.PixelFormat);
IntPtr ptr = bmpData.Scan0;
int bytes = bmpData.Stride * b.Height;
var rgbValues = new byte[bytes];
// filling values
Marshal.Copy(rgbValues, 0, ptr, bytes);
b.UnlockBits(bmpData);
return b;
The problem is that in the output image each row, starting from the first, is shifted to the right, so the whole image doesn't look right.
The problem is not in rgbValues - I've tried using it with the SetPixel method and it works perfectly.
Any help with the Marshal class, or what do I do to prevent that shifting?
The code you are showing looks fine and should work as intended. The problem is most likely related to how you've implemented the // filling values part. My guess is that the width of your image is such that bmpData.Stride != b.Width (and thus bytes != Width * Height) but you do not account for it and fill the first Width * Height bytes of the rgbValues array.
For optimization purposes, the Bitmap implementation may choose to add padding bytes to each row of image data that are not actually part of the image. If it does so, you should account for this and write your image data to the first Width bytes of each row, where each row starts at index rowIndex * BitmapData.Stride.
You should thus fill your buffer along the lines of:
for (int row = 0; row < b.Height; ++row)
{
int rowStart = row * bmpData.Stride;
for (int column = 0; column < b.Width; ++column)
{
rgbValues[rowStart + column] = GetColorForPixel(column, row);
}
}
Related
I have (2448*2048) 5-megapixel image data, but the picturebox is only (816*683), about 500,000 pixels, so I downsampled the image. I only need a black-and-white image, so I used the G value to create it, but the output I get is shown in the figure below. Where is my mistake?
public int[,] lowered(int[,] greenar)
{
int[,] Sy = new int[816, 683];
int x = 0;
int y = 0;
for (int i = 1; i < 2448; i += 3)
{
for (int j = 1; j < 2048; j += 3)
{
Sy[x, y] = greenar[i, j];
y++;
}
y = 0;
x++;
}
return Sy;
}
static Bitmap Create(int[,] R, int[,] G, int[,] B)
{
int iWidth = G.GetLength(1);
int iHeight = G.GetLength(0);
Bitmap Result = new Bitmap(iWidth, iHeight,
System.Drawing.Imaging.PixelFormat.Format24bppRgb);
Rectangle rect = new Rectangle(0, 0, iWidth, iHeight);
System.Drawing.Imaging.BitmapData bmpData = Result.LockBits(rect,
System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
IntPtr iPtr = bmpData.Scan0;
int iStride = bmpData.Stride;
int iBytes = iWidth * iHeight * 3;
byte[] PixelValues = new byte[iBytes];
int iPoint = 0;
for (int i = 0; i < iHeight; i++)
{
for (int j = 0; j < iWidth; j++)
{
int iG = G[i, j];
int iB = G[i, j];
int iR = G[i, j];
PixelValues[iPoint] = Convert.ToByte(iB);
PixelValues[iPoint + 1] = Convert.ToByte(iG);
PixelValues[iPoint + 2] = Convert.ToByte(iR);
iPoint += 3;
}
}
System.Runtime.InteropServices.Marshal.Copy(PixelValues, 0, iPtr, iBytes);
Result.UnlockBits(bmpData);
return Result;
}
https://upload.cc/i1/2018/04/26/WHOXTJ.png
You don't need to downsample your image; you can do it this way instead. Set the picturebox's BackgroundImageLayout property to either Zoom or Stretch and assign the image:
picturebox.BackgroundImageLayout = System.Windows.Forms.ImageLayout.Zoom;
picturebox.BackgroundImage = bitmap;
System.Windows.Forms.ImageLayout.Zoom will automatically adjust your bitmap to the size of the picturebox.
You seem to be constantly mixing up your x and y offsets, which can easily be avoided simply by actually calling your loop variables x and y whenever you loop through image data. Also, image data is generally saved line by line, so your outer loop should be the Y loop going over the height, and the inner loop should process the X coordinates on one line, and should thus loop over the width.
Also, I'm not sure where your original data comes from, but in most of the cases I've seen where the image data is in multidimensional arrays like this, the Y is actually the first index in the array. Your actual image building function also assumes this, since it uses G.GetLength(0) to get the height of the image. But your channel resize function doesn't; it makes a multidimensional array as new int[816, 683], which would be a 683*816 image, not 816*683 as you said. So that certainly seems wrong.
Since you confirmed it to be [x,y], I adapted this solution to use it like that.
That aside, you hardcoded a lot of values in your functions, which is very bad practice. If you know you will reduce the image to 1/3rd by taking only one in three pixels, just pass that 3 as a parameter.
The reduction code:
public static Int32[,] ResizeChannel(Int32[,] origChannel, Int32 lossfactor)
{
Int32 newWidth = origChannel.GetLength(0) / lossfactor;
Int32 newHeight = origChannel.GetLength(1) / lossfactor;
// to avoid rounding errors
Int32 origHeight = newHeight * lossfactor;
Int32 origWidth = newWidth * lossfactor;
Int32[,] newChannel = new Int32[newWidth, newHeight];
Int32 newX = 0;
Int32 newY = 0;
for (Int32 y = 1; y < origHeight; y += lossfactor)
{
newX = 0;
for (Int32 x = 1; x < origWidth; x += lossfactor)
{
newChannel[newX, newY] = origChannel[x, y];
newX++;
}
newY++;
}
return newChannel;
}
The actual build code, as was remarked by GSerg in the comments, is wrong because you don't take the stride into account. The stride is the actual byte length of each line of pixels, and this is not just width * BytesPerPixel, since it gets rounded up to the next multiple of 4 bytes.
So you need to initialize your array as height * stride, not as height * width * 3, and you need to skip your write offset to the next multiple of the stride whenever you go to a lower Y line, rather than assuming it will just get there automatically because your X processing adds 3 for each pixel. Because it will not get there automatically, unless, by pure coincidence, your image width happens to be a multiple of 4 pixels.
Also, if you only use one channel for this, there is no reason to give it all three channels. Just give a single one.
public static Bitmap CreateGreyImage(Int32[,] greyChannel)
{
Int32 width = greyChannel.GetLength(0);
Int32 height = greyChannel.GetLength(1);
Bitmap result = new Bitmap(width, height, PixelFormat.Format24bppRgb);
Rectangle rect = new Rectangle(0, 0, width, height);
BitmapData bmpData = result.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
Int32 stride = bmpData.Stride;
// stride is the actual line width in bytes.
Int32 bytes = stride * height;
Byte[] pixelValues = new Byte[bytes];
Int32 offset = 0;
for (Int32 y = 0; y < height; y++)
{
Int32 workOffset = offset;
for (Int32 x = 0; x < width; x++)
{
pixelValues[workOffset + 0] = (Byte)greyChannel[x, y];
pixelValues[workOffset + 1] = (Byte)greyChannel[x, y];
pixelValues[workOffset + 2] = (Byte)greyChannel[x, y];
workOffset += 3;
}
// Add stride to get the start offset of the next line
offset += stride;
}
Marshal.Copy(pixelValues, 0, bmpData.Scan0, bytes);
result.UnlockBits(bmpData);
return result;
}
Now, this works as expected if your R, G and B channels are indeed identical. But if they are not, you have to realize there is a difference between reducing the image to grayscale and just building a grey image from the green channel. On a colour image, you will get totally different results if you take the blue or red channel instead.
This was the code I executed for this:
Int32[,] greyar = ResizeChannel(greenar, 3);
Bitmap newbm = CreateGreyImage(greyar);
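If the three channels do differ and you actually want a grayscale reduction rather than just the green channel, one option is to merge them with the usual luma weights before resizing. This is a sketch of my own (the helper name CombineToGrey and the 0.299/0.587/0.114 weights are not part of the fix above; the channels are assumed to use the same [x, y] layout as the rest of this answer):
public static Int32[,] CombineToGrey(Int32[,] r, Int32[,] g, Int32[,] b)
{
    Int32 width = r.GetLength(0);
    Int32 height = r.GetLength(1);
    Int32[,] grey = new Int32[width, height];
    for (Int32 y = 0; y < height; y++)
    {
        for (Int32 x = 0; x < width; x++)
        {
            // Standard luma weights; inputs are already 0-255, so no clamping needed.
            grey[x, y] = (Int32)(0.299 * r[x, y] + 0.587 * g[x, y] + 0.114 * b[x, y]);
        }
    }
    return grey;
}
The combined channel can then be passed through ResizeChannel and CreateGreyImage unchanged.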
I need to convert a System.Drawing.Bitmap to a DlibDotNet Array2D - in this case, a grayscale Array2D<byte> for ShapePredictor.
Here is what I am trying, without much success.
using DlibDotNet;
using Rectangle = System.Drawing.Rectangle;
using System.Runtime.InteropServices;
public static class Extension
{
public static Array2D<byte> ToArray2D(this Bitmap bitmap)
{
var bits = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
var length = bits.Stride * bits.Height;
var data = new byte[length];
Marshal.Copy(bits.Scan0, data, 0, length);
bitmap.UnlockBits(bits);
var array = new Array2D<byte>(bitmap.Width, bitmap.Height);
for (var x = 0; x < bitmap.Width; x++)
for (var y = 0; y < bitmap.Height; y++)
{
var offset = x * 4 + y * bitmap.Width * 4;
array[x][y] = data[offset];
}
return array;
}
}
I've searched and have not yet found a clear answer.
As noted before, you first need to convert your image to grayscale. There are plenty of answers here on StackOverflow to help you with that. I advise the ColorMatrix method used in this answer:
A: Convert an image to grayscale
I'll be using the MakeGrayscale3(Bitmap original) method shown in that answer in my code below.
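For reference, that MakeGrayscale3 helper is essentially the following ColorMatrix-based conversion (reproduced from memory of the linked answer, so check the original; the .3/.59/.11 values are the usual luma weights):
public static Bitmap MakeGrayscale3(Bitmap original)
{
    // Create a blank bitmap the same size as the original.
    Bitmap newBitmap = new Bitmap(original.Width, original.Height);
    using (Graphics g = Graphics.FromImage(newBitmap))
    {
        // Grayscale ColorMatrix: every output channel is a weighted sum of R, G and B.
        ColorMatrix colorMatrix = new ColorMatrix(new float[][]
        {
            new float[] {.3f, .3f, .3f, 0, 0},
            new float[] {.59f, .59f, .59f, 0, 0},
            new float[] {.11f, .11f, .11f, 0, 0},
            new float[] {0, 0, 0, 1, 0},
            new float[] {0, 0, 0, 0, 1}
        });
        using (ImageAttributes attributes = new ImageAttributes())
        {
            attributes.SetColorMatrix(colorMatrix);
            // Redraw the original image through the grayscale matrix.
            g.DrawImage(original, new Rectangle(0, 0, original.Width, original.Height),
                0, 0, original.Width, original.Height, GraphicsUnit.Pixel, attributes);
        }
    }
    return newBitmap;
}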
Typically, images are looped through line by line for processing, so for clarity, you should put your Y loop as outer loop. It also makes the calculation of the data offsets a lot more efficient.
As for the actual data, if the image is grayscale, the R, G and B bytes should all be the same. The order "ARGB" in 32-bit pixel data refers to one UInt32 value, but those are little-endian, meaning the actual order of the bytes is [B, G, R, A]. This means that in each loop iteration we can just take the first of the four bytes, and it'll be the blue component.
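A quick way to see that byte order for yourself (purely illustrative, not part of the conversion):
// 0xAARRGGBB packed as a little-endian UInt32 comes out byte-reversed in memory:
byte[] raw = BitConverter.GetBytes(0xFF885511u);   // A=FF, R=88, G=55, B=11
// raw is { 0x11, 0x55, 0x88, 0xFF } on little-endian hardware,
// i.e. [B, G, R, A] - so the first byte of each 4-byte pixel is the blue component.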
public static Array2D<Byte> ToArray2D(this Bitmap bitmap)
{
Int32 stride;
Byte[] data;
// Removes unnecessary getter calls.
Int32 width = bitmap.Width;
Int32 height = bitmap.Height;
// 'using' block to properly dispose temp image.
using (Bitmap grayImage = MakeGrayscale3(bitmap))
{
BitmapData bits = grayImage.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
stride = bits.Stride;
Int32 length = stride*height;
data = new Byte[length];
Marshal.Copy(bits.Scan0, data, 0, length);
grayImage.UnlockBits(bits);
}
// Constructor is (rows, columns), so (height, width)
Array2D<Byte> array = new Array2D<Byte>(height, width);
Int32 offset = 0;
for (Int32 y = 0; y < height; y++)
{
// Offset variable for processing one line
Int32 curOffset = offset;
// Get row in advance
Array2D<Byte>.Row<Byte> curRow = array[y];
for (Int32 x = 0; x < width; x++)
{
curRow[x] = data[curOffset]; // Should be the Blue component.
curOffset += 4;
}
// Stride is the actual data length of one line. No need to calculate that;
// not only is it already given by the BitmapData object, but in some situations
// it may differ from the actual data length. This also saves processing time
// by avoiding multiplications inside each loop.
offset += stride;
}
return array;
}
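Hypothetical usage of the extension method (the file name is just a placeholder):
using (Bitmap bitmap = new Bitmap("face.png"))
{
    // Grayscale pixel data in row-major (y, x) order, ready for DlibDotNet.
    Array2D<Byte> grayImage = bitmap.ToArray2D();
}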
I have been trying to implement the image comparing algorithm seen here: http://www.dotnetexamples.com/2012/07/fast-bitmap-comparison-c.html
The problem I have been having is that when I try to compare a large number of images one after another using the method pasted below (a slightly modified version from the link above), my results seem to be inaccurate. In particular, if I compare too many different images, even ones that are the same will occasionally be detected as different. The problem seems to be that certain bytes in the arrays differ, as you can see in the screenshot I have included of two identical images being compared (this occurs when I repeatedly compare images from an array of about 100 images, but there are actually only 3 unique images in the array):
private bool byteCompare(Bitmap image1, Bitmap image2) {
if (object.Equals(image1, image2))
return true;
if (image1 == null || image2 == null)
return false;
if (!image1.Size.Equals(image2.Size) || !image1.PixelFormat.Equals(image2.PixelFormat))
return false;
#region Optimized code for performance
int bytes = image1.Width * image1.Height * (Image.GetPixelFormatSize(image1.PixelFormat) / 8);
byte[] b1bytes = new byte[bytes];
byte[] b2bytes = new byte[bytes];
Rectangle rect = new Rectangle(0, 0, image1.Width - 1, image1.Height - 1);
BitmapData bmd1 = image1.LockBits(rect, ImageLockMode.ReadOnly, image1.PixelFormat);
BitmapData bmd2 = image2.LockBits(rect, ImageLockMode.ReadOnly, image2.PixelFormat);
try
{
Marshal.Copy(bmd1.Scan0, b1bytes, 0, bytes);
Marshal.Copy(bmd2.Scan0, b2bytes, 0, bytes);
for (int n = 0; n < bytes; n++)
{
if (b1bytes[n] != b2bytes[n]) //This line is where error occurs
return false;
}
}
finally
{
image1.UnlockBits(bmd1);
image2.UnlockBits(bmd2);
}
#endregion
return true;
}
I've added a comment to show where in the method this error is occurring. I assume it has something to do with the memory not being allocated properly, but I haven't been able to figure out what the source of the error is.
I should probably also mention that I don't get any issues when I convert the image to a byte array like so:
ImageConverter converter = new ImageConverter();
byte[] b1bytes = (byte[])converter.ConvertTo(image1, typeof(byte[]));
However, this approach is far slower.
If (Width * bytesperpixel) != Stride, then there will be unused bytes at the end of each line that are not guaranteed to have any particular value and in practice can be filled with random garbage.
You need to iterate line by line, increment by Stride each time, and only checking the bytes that actually correspond to pixels on each line.
Once you got the BitmapData object, the Stride can be found in that BitmapData object's Stride property. Make sure to extract that for both images.
Then, you have to loop over all pixels in the data so you can accurately determine where the image width for each line ends and the leftover data of the stride begins.
Also note this only works for high-colour images. Comparing 8-bit images is still possible (though you need to compare their palettes as well), but for lower than 8 you need to go bit-shifting to get the actual palette offset out of the image.
A simple workaround for that is to just paint your image on a new 32bpp image, effectively converting it to high colour.
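That repaint could be as simple as the following sketch (the helper name ConvertTo32Bpp is mine):
public static Bitmap ConvertTo32Bpp(Image image)
{
    // Draw the source onto a fresh 32bpp canvas; GDI+ handles the format
    // (and palette) conversion during the DrawImage call.
    Bitmap converted = new Bitmap(image.Width, image.Height, PixelFormat.Format32bppArgb);
    using (Graphics g = Graphics.FromImage(converted))
        g.DrawImage(image, new Rectangle(0, 0, image.Width, image.Height));
    return converted;
}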
public static Boolean CompareHiColorImages(Byte[] imageData1, Int32 stride1, Byte[] imageData2, Int32 stride2, Int32 width, Int32 height, PixelFormat pf)
{
Int32 byteSize = Image.GetPixelFormatSize(pf) / 8;
for (Int32 y = 0; y < height; y++)
{
for (Int32 x = 0; x < width; x++)
{
Int32 offset1 = y * stride1 + x * byteSize;
Int32 offset2 = y * stride2 + x * byteSize;
for (Int32 n = 0; n < byteSize; n++)
if (imageData1[offset1 + n] != imageData2[offset2 + n])
return false;
}
}
return true;
}
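For completeness, a sketch of how the raw bytes and strides could be pulled out with LockBits and fed into that method (the wrapper name CompareImages is mine; error handling kept minimal):
private static Boolean CompareImages(Bitmap image1, Bitmap image2)
{
    if (image1.Size != image2.Size || image1.PixelFormat != image2.PixelFormat)
        return false;
    Rectangle rect = new Rectangle(0, 0, image1.Width, image1.Height);
    BitmapData bmd1 = image1.LockBits(rect, ImageLockMode.ReadOnly, image1.PixelFormat);
    BitmapData bmd2 = image2.LockBits(rect, ImageLockMode.ReadOnly, image2.PixelFormat);
    try
    {
        // Copy the full stride * height block for each image, padding included.
        Byte[] data1 = new Byte[bmd1.Stride * bmd1.Height];
        Byte[] data2 = new Byte[bmd2.Stride * bmd2.Height];
        Marshal.Copy(bmd1.Scan0, data1, 0, data1.Length);
        Marshal.Copy(bmd2.Scan0, data2, 0, data2.Length);
        // Only the real pixel bytes of each line are compared inside this call.
        return CompareHiColorImages(data1, bmd1.Stride, data2, bmd2.Stride,
                                    image1.Width, image1.Height, image1.PixelFormat);
    }
    finally
    {
        image1.UnlockBits(bmd1);
        image2.UnlockBits(bmd2);
    }
}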
I have 2 images of 1280x800 each that I need to compare, and I'm worried about efficiency because I'll have to repeat the same operations, including the looping, every second.
I can think of many ways to loop through a Bitmap object's pixels, but I don't know which is most efficient. For now I am using a simple for loop, but it uses more memory and processing than I can afford at this point.
Some tweaks here and there, and all they do is trade less memory for more processing, or the other way around.
Any tips, information or experience is highly appreciated; I'm also open to using external libraries if they are much more efficient.
From https://stackoverflow.com/a/6094092/1856345:
Bitmap bmp = new Bitmap("SomeImage");
// Lock the bitmap's bits.
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
BitmapData bmpData = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
// Get the address of the first line.
IntPtr ptr = bmpData.Scan0;
// Declare an array to hold the bytes of the bitmap.
int bytes = bmpData.Stride * bmp.Height;
byte[] rgbValues = new byte[bytes];
byte[] r = new byte[bytes / 3];
byte[] g = new byte[bytes / 3];
byte[] b = new byte[bytes / 3];
// Copy the RGB values into the array.
Marshal.Copy(ptr, rgbValues, 0, bytes);
int count = 0;
int stride = bmpData.Stride;
for (int y = 0; y < bmpData.Height; y++)
{
    for (int x = 0; x < bmpData.Width; x++)
    {
        b[count] = rgbValues[(y * stride) + (x * 3)];
        g[count] = rgbValues[(y * stride) + (x * 3) + 1];
        r[count++] = rgbValues[(y * stride) + (x * 3) + 2];
    }
}
}
I have a drawing application developed in WinForms C# which uses many System.Drawing.Bitmap objects throughout the code.
Now I am rewriting it in WPF with C#; I have done almost 90% of the conversion.
Coming to the problem: I have the following code, which is used to traverse the image pixel by pixel.
Bitmap result = new Bitmap(img); // img is of System.Drawing.Image
result.SetResolution(img.HorizontalResolution, img.VerticalResolution);
BitmapData bmpData = result.LockBits(new Rectangle(0, 0, result.Width, result.Height), ImageLockMode.ReadWrite, img.PixelFormat);
int pixelBytes = System.Drawing.Image.GetPixelFormatSize(img.PixelFormat) / 8;
System.IntPtr ptr = bmpData.Scan0;
int size = bmpData.Stride * result.Height;
byte[] pixels = new byte[size];
int index = 0;
double R = 0;
double G = 0;
double B = 0;
System.Runtime.InteropServices.Marshal.Copy(ptr, pixels, 0, size);
for (int row = 0; row <= result.Height - 1; row++)
{
for (int col = 0; col <= result.Width - 1; col++)
{
index = (row * bmpData.Stride) + (col * pixelBytes);
R = pixels[index + 2];
G = pixels[index + 1];
B = pixels[index + 0];
.
.// logic code
.
}
}
result.UnlockBits(bmpData);
It uses System.Drawing for the purpose.
Is it possible to achieve the same thing in WPF while keeping it as simple as it is?
In addition to Chris's answer, you might want to look at WriteableBitmap; it's another way to manipulate image pixels.
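A minimal sketch of that route, combining CopyPixels with WritePixels ('source' is a placeholder for whatever BitmapSource you already have; I'm converting to Bgra32 first so the 4-bytes-per-pixel math holds):
// Assumes: using System.Windows; using System.Windows.Media; using System.Windows.Media.Imaging;
BitmapSource bgra = new FormatConvertedBitmap(source, PixelFormats.Bgra32, null, 0);
WriteableBitmap wb = new WriteableBitmap(bgra);

int stride = wb.PixelWidth * 4;                  // Bgra32 = 4 bytes per pixel
byte[] pixels = new byte[wb.PixelHeight * stride];
wb.CopyPixels(pixels, stride, 0);

for (int row = 0; row < wb.PixelHeight; row++)
{
    for (int col = 0; col < wb.PixelWidth; col++)
    {
        int index = (row * stride) + (col * 4);  // bytes are ordered B, G, R, A
        byte b = pixels[index];
        byte g = pixels[index + 1];
        byte r = pixels[index + 2];
        // ... logic code, optionally writing modified values back into 'pixels' ...
    }
}

// Push the (possibly modified) buffer back into the bitmap.
wb.WritePixels(new Int32Rect(0, 0, wb.PixelWidth, wb.PixelHeight), pixels, stride, 0);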
You can use BitmapImage.CopyPixels to copy the image into your pixel buffer.
BitmapImage img = new BitmapImage(...); // This is your image
int bytesPerPixel = (img.Format.BitsPerPixel + 7) / 8;
int stride = img.PixelWidth * bytesPerPixel;
int size = img.PixelHeight * stride;
byte[] pixels = new byte[size];
img.CopyPixels(pixels, stride, 0);
// Now you can access 'pixels' to perform your logic
for (int row = 0; row < img.PixelHeight; row++)
{
for (int col = 0; col < img.PixelWidth; col++)
{
int index = (row * stride) + (col * bytesPerPixel);
...
}
}