I have a problem regarding some pixel-based operations in C#.
I wrote a class that serves as an image shell around a Bitmap. It can give you the RGB values of a pixel at a certain (x,y) location in the image much faster than Bitmap.GetPixel(x,y), by using BitmapData and LockBits to get direct access to the image buffer and read the bytes from there. I added this function to get the RGB value as an 0x00RRGGBB int for the pixel at (x,y).
public unsafe int getPixel(int x, int y)
{
    byte* imgPointer = (byte*)bmpData.Scan0;

    // Offset into the locked buffer: whole rows first (Stride bytes each), then columns.
    int pixelPos = y * bmpData.Stride;
    pixelPos += x * (hasAlpha ? 4 : 3);

    // Pixel bytes are laid out as BGR(A) in memory.
    int blue = *(imgPointer + pixelPos);
    int green = *(imgPointer + pixelPos + 1);
    int red = *(imgPointer + pixelPos + 2);

    return (red << 16) | (green << 8) | blue;
}
This works flawlessly for all the images I've worked with so far, except for any image I generate using MS Paint. For example, I made a 5x1 image in Paint containing 5 shades of yellow. When I load this image into my program, however, the image stride is 16! I expected it to be 15 (3 bytes per pixel, 5 pixels), but for some reason after the first three bytes (first pixel) there is an extra byte, and then the rest of the pixels follow in the array.
I have only seen this with images saved by MS Paint, and I was hoping someone could explain to me what that extra byte is for and how to detect it.
From MSDN:
The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary. If the stride is positive, the bitmap is top-down. If the stride is negative, the bitmap is bottom-up.
So the stride is always a multiple of 4; your row of 5 pixels at 3 bytes each is 15 bytes, which rounds up to 16.
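To make that concrete, here is a small sketch of the rounding (width and hasAlpha stand in for your own fields):

int bytesPerPixel = hasAlpha ? 4 : 3;
int unpadded = width * bytesPerPixel;   // 5 * 3 = 15 for your 5x1 image
int stride = ((unpadded + 3) / 4) * 4;  // rounds 15 up to 16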
Related
So I want to grab a partial image from a byte array of colors. The image is the Unity logo, which is 64x64 pixels, and I want to grab a third of it. How would I traverse the byte array to get this partial image?
Unity Byte Array
Assuming each byte is a single pixel (which is only true for 8-bit images), bytes 0-63 are the first row, 64-127 the second row, and so on.
That means that to find the position of a pixel in the one-dimensional array, based on its two-dimensional coordinates in the image, you do:
int oneDimPos = (y*64) + x;
If each pixel were 3 bytes (24-bit color depth), the conversion from two-dimensional to one-dimensional coordinates would be:
int oneDimPos = (y * 64 * 3) + (x * 3);
So the most generic equation is:
int oneDimPos = (y * imageWidth * colorDepth) + (x * colorDepth);
where colorDepth is the number of bytes per pixel. You need to keep this in mind and adjust the code accordingly, or better yet, use this generic version and actually read the image width and color depth from the asset you're using as source.
BEWARE: if the image is anything other than 8 bits per pixel, this equation will naturally only give you the first byte belonging to that pixel, and you still need to read the other bytes that belong to it.
I'm going to finish the answer assuming 8-bit color depth, for simplicity, and also so that you can't just copy-paste the answer but have to understand it and reshape it to your specific needs ;)
Meaning you can now do the classic two nested loops over x and y:
//Using a List so we can just .Add each byte instead of having to calculate and
//allocate the final size in advance and recalculate indexes into the destination array.
List<byte> result = new List<byte>();
for (int y = 0; y < 64; y++) { //we go all the way to the bottom; rows first so the result stays in scanline order
    //No way to grab a precise third, since that boundary falls in the middle of a
    //pixel for an image 64 pixels wide; 22 columns is the closest.
    for (int x = 0; x < 22; x++) {
        result.Add(sourceAsset.bytes[(y * 64) + x]);
    }
}
//now just convert the list to an actual byte array
byte[] resultBytes = result.ToArray();
The original issue I was having was not exactly the same as the question; I wanted to simplify it to a byte array that everyone could take a look at, and the byte array from Unity's website wasn't exactly what I was getting.
So I have three 1080p portrait screens (1080 x 1920 pixels each) with RGBA channels. I grabbed a screenshot of them and got a byte array of 24,883,200 bytes.
Note: 3 screens * width (1080) * height (1920) * channels (4) = 24,883,200.
byte[] colors = new byte[24883200]; // Screenshot of 3x1080p screen.
byte[] leftThird = new byte[colors.Length / 3];
Array.Copy(colors, 0, leftThird, 0, colors.Length / 3); // Grab the first third of array
This is an issue because the colors array is stored row by row across all three screens, so the first third of the array is a horizontal band of the combined image, not the left screen. Instead, you should copy the 1080-pixel x 4-channel portion of each row that belongs to the screen you want:
int width = 1080 * 4;      // one screen's row length in bytes (4 channels, RGBA)
int fullWidth = width * 3; // three screens side by side
int height = 1920;
// 'offset' is the x position, in pixels, where the slice starts (0 for the left screen).
byte[] leftScreen = new byte[screenShotByteArray.Length / 3];
for (int i = 0; i < height; i++)
{
    Array.Copy(screenShotByteArray, (i * fullWidth) + (offset * 4), leftScreen, i * width, width);
}
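If it helps, the same loop can be wrapped into a small helper (my own sketch, not part of the original code; screenIndex * 1080 plays the role of offset above):

static byte[] ExtractScreen(byte[] screenshot, int screenIndex)
{
    const int width = 1080 * 4;       // one screen's row length in bytes (RGBA)
    const int fullWidth = width * 3;  // all three screens
    const int height = 1920;
    byte[] screen = new byte[width * height];
    for (int y = 0; y < height; y++)
        Array.Copy(screenshot, y * fullWidth + screenIndex * width, screen, y * width, width);
    return screen;
}

Calling ExtractScreen(colors, 0), ExtractScreen(colors, 1) or ExtractScreen(colors, 2) then grabs the left, middle or right screen respectively.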
I am sorry if the question in the header is not descriptive enough, but basically my problem is the following.
I am taking a Bitmap and making it grayscale. It works fine if I do not reduce the number of bits and still use all 8 bits. However, the point of the homework I have is to show how the image changes when I reduce the number of bits holding the information. In the example below I am reducing the binary string to 4 bits and then rebuilding the image again. The problem is that the image becomes black. I think this is because the image has mostly gray values (in the 80s range), and when I reduce the binary string I am left with an almost black image. It seems to me that I have to check for low and high grayscale values and then make the lighter grays go to white and the darker grays go to black. In the end, with a 1-bit representation, I should only have a black and white image. Any idea how I can do that separation?
Thanks
Bitmap bmpIn = (Bitmap)Bitmap.FromFile("c:\\test.jpg");
var grayscaleBmp = MakeGrayscale(bmpIn);
public Bitmap MakeGrayscale(Bitmap original)
{
    //make an empty bitmap the same size as original
    Bitmap newBitmap = new Bitmap(original.Width, original.Height);

    for (int i = 0; i < original.Width; i++)
    {
        for (int j = 0; j < original.Height; j++)
        {
            //get the pixel from the original image
            Color originalColor = original.GetPixel(i, j);

            //create the grayscale version of the pixel
            int grayScale = (int)((originalColor.R * .3) + (originalColor.G * .59)
                + (originalColor.B * .11));

            //now turn it into binary and reduce the number of bits that hold information
            byte test = (byte)grayScale;
            string binary = Convert.ToString(test, 2).PadLeft(8, '0');
            string cuted = binary.Remove(4);
            var converted = Convert.ToInt32(cuted, 2);

            //create the color object
            Color newColor = Color.FromArgb(converted, converted, converted);

            //set the new image's pixel to the grayscale version
            newBitmap.SetPixel(i, j, newColor);
        }
    }
    return newBitmap;
}
As mbeckish said, it is easier and much faster to use ImageAttributes.SetThreshold.
One way to do it manually is to get the median value of the grayscale pixels in the image, and use that for the threshold between black and white.
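For reference, a minimal sketch of the SetThreshold route (my own example, using the grayscaleBmp from the question; the threshold is in the 0..1 range, so lower it if your image is mostly dark grays):

// Requires System.Drawing and System.Drawing.Imaging.
Bitmap blackAndWhite = new Bitmap(grayscaleBmp.Width, grayscaleBmp.Height);
using (Graphics g = Graphics.FromImage(blackAndWhite))
using (ImageAttributes attributes = new ImageAttributes())
{
    // Snaps every channel to 0 or 255, giving a pure black/white result.
    attributes.SetThreshold(0.5f);
    g.DrawImage(grayscaleBmp,
        new Rectangle(0, 0, grayscaleBmp.Width, grayscaleBmp.Height),
        0, 0, grayscaleBmp.Width, grayscaleBmp.Height,
        GraphicsUnit.Pixel, attributes);
}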
I am printing a monochrome bitmap image on a thermal printer. I am able to print the image, but at the rightmost edge an extra vertical line gets printed (the line runs from top right to bottom right and is nearly 2 mm thick).
Bitmap image = new Bitmap(imagePath, false);
int imageDepth = System.Drawing.Bitmap.GetPixelFormatSize(image.PixelFormat);
Rectangle monoChromeBitmapRectangle = new Rectangle(0, 0, image.Width, image.Height);
BitmapData monoChromebmpData = null;
int stride = 0;
monoChromebmpData = image.LockBits(monoChromeBitmapRectangle, ImageLockMode.ReadOnly, image.PixelFormat);
IntPtr ptr = monoChromebmpData.Scan0;
stride = monoChromebmpData.Stride;
int numbytes = stride * image.Height;
byte[] bitmapFileData = new byte[numbytes];
Marshal.Copy(ptr, bitmapFileData, 0, numbytes);
image.UnlockBits(monoChromebmpData);

//Invert bitmap colors
for (int i = 0; i < bitmapFileData.Length; i++)
{
    bitmapFileData[i] ^= 0xFF;
}

StringBuilder hexaDecimalImageDataString = new StringBuilder(bitmapFileData.Length * 2);
foreach (byte b in bitmapFileData)
    hexaDecimalImageDataString.AppendFormat("{0:X2}", b);
return hexaDecimalImageDataString;
Here I am converting the monochrome bitmap image to a byte array and from the byte array to a hexadecimal string.
I googled in forums but this kind of error is not discussed (maybe I am making a silly mistake).
Can anyone suggest where exactly I am going wrong?
Thanks in advance.
Cheers,
Siva.
You are returning monoChromebmpData.Stride * image.Height bytes, i.e. each line in the image will be exactly monoChromebmpData.Stride * 8 pixels wide - but the original image probably has a pixel width that is less than that, hence the extra vertical line on the right.
Try something like this:
byte[] masks = new byte[]{0xff, 0x01, 0x03, 0x07, 0x0f, 0x1f, 0x3f, 0x7f};
int byteWidth = (image.Width + 7) / 8;  // bytes actually needed per row
int nBits = image.Width % 8;            // pixels used in the last byte of each row
byte[] actualBitmapFileData = new byte[byteWidth * image.Height];
int yFrom = 0;
for (int y = 0; y < image.Height; y++) {
    for (int x = 0; x < byteWidth - 1; x++) {
        actualBitmapFileData[y * byteWidth + x] = (byte)(bitmapFileData[yFrom + x] ^ 0xFF);
    }
    int lastX = byteWidth - 1;
    actualBitmapFileData[y * byteWidth + lastX] = (byte)((bitmapFileData[yFrom + lastX] ^ 0xFF) & masks[nBits]);
    yFrom += stride;
}
It creates an actualBitmapFileData array from bitmapFileData with the correct size.
Note that the last byte of every line contains only nBits pixels, and so needs to be 'masked' to clear out the extra bits not corresponding to any pixel. This is done by & masks[nBits], where masks is an array of 8 bytes with the 8 masks to use. The actual mask values depend on how the printer works: you might need to set the extra bits to 0 or to 1, and the extra bits can be the most significant or the least significant ones. The mask values used above assume that the most significant bits are rendered to the right and that the masked bits should be set to 0. Depending on how the printer works it might be necessary to swap the bits and/or set the masked bits to 1 instead of zero (complementing the mask and using | instead of &).
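For example, the set-the-padding-bits-to-1 variant would change the last-byte line to something like this (just a sketch of that option):

actualBitmapFileData[y * byteWidth + lastX] = (byte)((bitmapFileData[yFrom + lastX] ^ 0xFF) | (~masks[nBits] & 0xFF));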
For performance reasons each horizontal row in a Bitmap is padded to a DWORD boundary (see this answer for more details). So if your Bitmap's width multiplied by its bits-per-pixel (bpp) is not divisible by 32 (DWORD = 32 bits), it is padded with extra bits. A 238x40 1bpp Bitmap therefore has a memory footprint of 8 DWORDs (256 bits) per row.
The BitmapData object's Stride property is the number of bytes that each row of your bitmap consumes in memory. When you capture the byte array, you're capturing that padding as well.
Before you convert the byte array to hex you need to trim that padding off the end of each row. The following function should do that nicely.
public static byte[] TruncatePadding(byte[] PaddedImage, int Width, int Stride, int BitsPerPixel)
{
    //Stride values can be negative
    Stride = Math.Abs(Stride);

    //Get the actual number of bytes each row contains (rounded up to whole bytes).
    int shortStride = (Width * BitsPerPixel + 7) / 8;

    //Figure out the height of the image from the array data
    int height = PaddedImage.Length / Stride;
    if (height < 1)
        return null;

    //Allocate the new array based on the image width
    byte[] truncatedImage = new byte[shortStride * height];

    //Copy the data minus the padding to a new array
    for (int i = 0; i < height; i++)
        Buffer.BlockCopy(PaddedImage, i * Stride, truncatedImage, i * shortStride, shortStride);

    return truncatedImage;
}
The comments from MiMo and MyItchyChin helped me a lot in resolving the issue.
The problem was the extra line at the end: technically, when printing each row of the image, the last few bytes of information were incorrect.
The reason for this is that the image size can be anything, but when it is sent to the printer the byte width should be divisible by eight. In my case the printer expects the byte width as input, so I must be careful about which image I pass.
Assume I have an image of size 168x168.
byteWidth = Math.Ceiling(bitmapDataWidth / 8.0);
So the byteWidth is 21 here. To meet the printer's expectation I rounded it up to 24, which is divisible by 8, so I virtually increased the size of the image by 3 bytes per row and then started reading the byte information. The line I was talking about is those extra 3 bytes: since there is no real data there, a black line was getting printed.
I wrote the logic in such a way that the byte array is not affected by the rounding, and it worked for me.
These are still early days for me in image processing, so please forgive me if I made a silly mistake in explaining the solution here.
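Roughly, the rounding and padding step looks like this (a simplified sketch, not my actual code; rowBytes stands for the already de-padded 1bpp rows, e.g. the output of TruncatePadding above, and whether the filler bytes should be 0x00 or 0xFF depends on what the printer treats as blank):

int byteWidth = (image.Width + 7) / 8;        // 168 px -> 21 bytes per row
int paddedWidth = ((byteWidth + 7) / 8) * 8;  // 21 -> 24, divisible by 8
byte[] padded = new byte[paddedWidth * image.Height];
for (int y = 0; y < image.Height; y++)
    Array.Copy(rowBytes, y * byteWidth, padded, y * paddedWidth, byteWidth);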
I am writing an application that requires me to take a proprietary bitmap format (an MVTec Halcon HImage) and convert it into a System.Drawing.Bitmap in C#.
The only proprietary functions given to me to help me do this involve me writing to file, except for the use of a "get pointer" function.
This function is great, it gives me a pointer to the pixel data, the width, the height, and the type of the image.
My issue is that when I create my System.Drawing.Bitmap using the constructor:
new System.Drawing.Bitmap(width, height, stride, format, scan)
I need to specify a "stride" that is a multiple of 4.
This may be a problem as I am unsure what size bitmap my function will be hit with.
Supposing I end up with a bitmap that is 111x111 pixels, I have no way to run this function other than adding a bogus column to my image or subtracting 3 columns.
Is there a way I can sneak around this limitation?
This goes back to early CPU designs. The fastest way to crunch through the bits of the bitmap is by reading them 32-bits at a time, starting at the start of a scan line. That works best when the first byte of the scan line is aligned on a 32-bit address boundary. In other words, an address that's a multiple of 4. On early CPUs, having that first byte mis-aligned would cost extra CPU cycles to read two 32-bit words from RAM and shuffle the bytes to create the 32-bit value. Ensuring each scan line starts at an aligned address (automatic if the stride is a multiple of 4) avoids that.
This isn't a real concern anymore on modern CPUs; now alignment to the cache line boundary is much more important. Nevertheless, the multiple-of-4 requirement for stride stuck around for appcompat reasons.
Btw, you can easily calculate the stride from the format and width with this:
int bitsPerPixel = ((int)format & 0xff00) >> 8;
int bytesPerPixel = (bitsPerPixel + 7) / 8;
int stride = 4 * ((width * bytesPerPixel + 3) / 4);
A much easier way is to just make the image with the (width, height, pixelformat) constructor. Then it takes care of the stride itself.
Then, you can just use LockBits to copy your image data into it, line by line, without bothering with the Stride stuff yourself; you can literally just request that from the BitmapData object. For the actual copy operation, for each scanline, you just increase the target pointer by the stride, and the source pointer by your line data width.
Here's an example where I got the image data in a byte array. If that's completely compact data, your input stride is normally just the image width multiplied by the number of bytes per pixel. If it's 8-bit paletted data, it's simply exactly the width.
If the image data was extracted from an image object, you should've stored the original stride from that extraction process in exactly the same way, by getting it out of the BitmapData object.
/// <summary>
/// Creates a bitmap based on data, width, height, stride and pixel format.
/// </summary>
/// <param name="sourceData">Byte array of raw source data</param>
/// <param name="width">Width of the image</param>
/// <param name="height">Height of the image</param>
/// <param name="stride">Scanline length inside the data</param>
/// <param name="pixelFormat">Pixel format</param>
/// <param name="palette">Color palette</param>
/// <param name="defaultColor">Default color to fill in on the palette if the given colors don't fully fill it.</param>
/// <returns>The new image</returns>
public static Bitmap BuildImage(Byte[] sourceData, Int32 width, Int32 height, Int32 stride, PixelFormat pixelFormat, Color[] palette, Color? defaultColor)
{
    Bitmap newImage = new Bitmap(width, height, pixelFormat);
    BitmapData targetData = newImage.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, newImage.PixelFormat);
    Int32 newDataWidth = ((Image.GetPixelFormatSize(pixelFormat) * width) + 7) / 8;
    // Compensate for possible negative stride on BMP format.
    Boolean isFlipped = stride < 0;
    stride = Math.Abs(stride);
    // Cache these to avoid unnecessary getter calls.
    Int32 targetStride = targetData.Stride;
    Int64 scan0 = targetData.Scan0.ToInt64();
    for (Int32 y = 0; y < height; y++)
        Marshal.Copy(sourceData, y * stride, new IntPtr(scan0 + y * targetStride), newDataWidth);
    newImage.UnlockBits(targetData);
    // Fix negative stride on BMP format.
    if (isFlipped)
        newImage.RotateFlip(RotateFlipType.Rotate180FlipX);
    // For indexed images, set the palette.
    if ((pixelFormat & PixelFormat.Indexed) != 0 && palette != null)
    {
        ColorPalette pal = newImage.Palette;
        for (Int32 i = 0; i < pal.Entries.Length; i++)
        {
            if (i < palette.Length)
                pal.Entries[i] = palette[i];
            else if (defaultColor.HasValue)
                pal.Entries[i] = defaultColor.Value;
            else
                break;
        }
        newImage.Palette = pal;
    }
    return newImage;
}
As has been stated before by Jake, you calculate the stride by finding the bytes per pixel (2 for 16-bit, 4 for 32-bit) and then multiplying it by the width. So if you have a width of 111 and a 32-bit image, you get 444, which is a multiple of 4.
However, let's say for a minute that you have a 24-bit image. 24 bits is equal to 3 bytes, so with a 111-pixel width you would have 333 as your stride. This is, obviously, not a multiple of 4, so you round up to 336 (the next highest multiple of 4). Even though you have a bit of extra space, it is not significant enough to really make much of a difference in most applications.
Unfortunately, there is no way around this restriction (unless you always use 32-bit or 64-bit images, whose strides are always multiples of 4).
Remember that stride is different from width. You can have an image that has 111 (8-bit) pixels per line, but each line is stored in memory as 112 bytes.
This is done to make efficient use of memory and, as @Ian said, it's storing the data in int32.
Because it's using int32 to store each pixel.
Sizeof(int32) = 4
But don't worry: when the image is saved from memory to a file it will use the most efficient storage possible. Internally it uses 24 bits per pixel (8 bits red, 8 green and 8 blue) and leaves the remaining 8 bits unused.
Correct code:
public static void GetStride(int width, PixelFormat format, ref int stride, ref int bytesPerPixel)
{
    //int bitsPerPixel = ((int)format & 0xff00) >> 8;
    int bitsPerPixel = System.Drawing.Image.GetPixelFormatSize(format);
    bytesPerPixel = (bitsPerPixel + 7) / 8;
    stride = 4 * ((width * bytesPerPixel + 3) / 4);
}
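For example, calling it for the 111-pixel-wide 24bpp case discussed above:

int stride = 0, bytesPerPixel = 0;
GetStride(111, PixelFormat.Format24bppRgb, ref stride, ref bytesPerPixel);
// bytesPerPixel == 3, stride == 336 (333 rounded up to the next multiple of 4)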
I don't know a better title, but I'll describe the problem.
A piece of hardware we use has the ability to display images.
It can display a black and white image with a resolution of 64 x 256.
The problem is the format of the image we have to send to the device.
It is not a standard bitmap format, but instead it is simply an array of
bytes representing each pixel of the image.
0 = black, 1 = white.
So if we had an image with the size: 4 x 4 the byte array might look something like:
1000 0100 0010 0001
And the image would look like this (original post had an illustration: http://www.mediafire.com/imgbnc.php/6ee6a28148d0170708cb10ec7ce6512e4g.jpg).
The problem is that we need to create this image by creating a monochrome bitmap
in C# and then convert it to the file format understood by the device.
For example, one might want to display text on the device. In order to do so, he would have to create a bitmap and write text to it:
var bitmap = new Bitmap(256, 64);
using (var graphics = Graphics.FromImage(bitmap))
{
    graphics.DrawString("Hello World", new Font("Courier", 10, FontStyle.Regular), new SolidBrush(Color.White), 1, 1);
}
There are 2 problems here:
The generated bitmap isn't monochrome
The generated bitmap has a different binary format
So I need a way to:
Generate a monochrome bitmap in .NET
Read the individual pixel colors for each pixel in the bitmap
I have found that you can set the pixel depth to 16, 24, or 32 bits, but haven't found monochrome and I have no idea how to read the pixel data.
Suggestions are welcome.
UPDATE: I cannot use Win32 PInvokes... has to be platform neutral!
FOLLOW UP: The following code works for me now. (Just in case anybody needs it)
private static byte[] GetLedBytes(Bitmap bitmap)
{
    int threshold = 127;
    int index = 0;
    int dimensions = bitmap.Height * bitmap.Width;
    BitArray bits = new BitArray(dimensions);

    //Vertically
    for (int y = 0; y < bitmap.Height; y++)
    {
        //Horizontally
        for (int x = 0; x < bitmap.Width; x++)
        {
            Color c = bitmap.GetPixel(x, y);
            int luminance = (int)(c.R * 0.3 + c.G * 0.59 + c.B * 0.11);
            bits[index] = (luminance > threshold);
            index++;
        }
    }

    byte[] data = new byte[dimensions / 8];
    bits.CopyTo(data, 0);
    return data;
}
I'd compute the luminance of each pixel and then compare it to some threshold value:
Y = 0.3*R + 0.59*G + 0.11*B
Say the threshold value is 127:
const int threshold = 127;
Bitmap bm = { some source bitmap };
byte[,] buffer = new byte[64, 256]; // [row, column], i.e. [y, x]
for (int y = 0; y < bm.Height; y++)
{
    for (int x = 0; x < bm.Width; x++)
    {
        Color c = bm.GetPixel(x, y);
        int luminance = (int)(c.R * 0.3 + c.G * 0.59 + c.B * 0.11);
        buffer[y, x] = (byte)((luminance > threshold) ? 1 : 0);
    }
}
I don't know C#. There are possibly many ways to do it. Here is a simple way.
Create a blank black bitmap image of size equal to your device requirement. Draw on it whatever you wish to draw like text, figures etc.
Now threshold the image, i.e. set the pixels of the image below a chosen intensity value to zero and the rest to one (e.g. set all intensity values > 0 to 1).
Now convert to the format required by your device. Create a byte array of size (64 * 256) / 8, and set the corresponding bits to 1 where the corresponding pixel values in the earlier bitmap are 1, otherwise leave them at 0.
Edit: Step 3. Use bitwise operators to set the bits.
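A rough sketch of step 3, assuming a byte[64,256] buffer of 0/1 values like the one in the previous answer and most-significant-bit-first packing (the bit order is a guess, so check what the device expects):

byte[] packed = new byte[(64 * 256) / 8];
for (int y = 0; y < 64; y++)
{
    for (int x = 0; x < 256; x++)
    {
        if (buffer[y, x] != 0)
        {
            int bitIndex = y * 256 + x;                             // position of this pixel in the bit stream
            packed[bitIndex / 8] |= (byte)(0x80 >> (bitIndex % 8)); // set its bit, MSB first within each byte
        }
    }
}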
You shouldn't use the GetPixel method of your bitmap to convert the entire bitmap from one format to another! That would be inefficient. Instead, you should use the LockBits method to get access to a copy of the image buffer and convert it into the desired format. I'm not completely sure about converting it to monochrome, but there is a Format1bppIndexed value in the PixelFormat enumeration which may help you.
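As an illustration of that idea, something along these lines should work (my own sketch, not tested against the device format; it needs System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices):

static byte[] ToPackedMonochrome(Bitmap source)
{
    // Convert to 1 bit per pixel; GDI+ chooses the palette/threshold itself,
    // so the result may differ from a hand-rolled luminance threshold.
    using (Bitmap mono = source.Clone(new Rectangle(0, 0, source.Width, source.Height),
                                      PixelFormat.Format1bppIndexed))
    {
        BitmapData data = mono.LockBits(new Rectangle(0, 0, mono.Width, mono.Height),
                                        ImageLockMode.ReadOnly, PixelFormat.Format1bppIndexed);
        byte[] rows = new byte[Math.Abs(data.Stride) * mono.Height];
        Marshal.Copy(data.Scan0, rows, 0, rows.Length);
        mono.UnlockBits(data);
        // Each row occupies data.Stride bytes (padded to a multiple of 4), 8 pixels per byte, MSB first.
        return rows;
    }
}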
You may try to supply a pixelformat in the constructor:
var bitmap = new Bitmap(256, 64, PixelFormat.Format1bppIndexed);
When I drew monochrome bitmaps on other platforms, I sometimes had to disable antialiasing or the rendered text would not show up:
graphics.SmoothingMode=SmoothingMode.None;
YMMV.
Bitmap has a GetPixel method that you can use. This will let you draw on the Bitmap and later convert it to the format that you need.
Bitmaps in Windows Forms (i.e., accessed through Graphics.FromImage) are 24 bpp (maybe 32? It's too early and I honestly forget). Nonetheless, GetPixel returns a Color object, so the bit depth of the bitmap is immaterial. I suggest you write your code like this:
MyBitmapFormat ToMyBitmap(Bitmap b)
{
    MyBitmapFormat mine = new MyBitmapFormat(b.Width, b.Height);
    for (int y = 0; y < b.Height; y++) {
        for (int x = 0; x < b.Width; x++) {
            mine.SetPixel(x, y, ColorIsBlackish(b.GetPixel(x, y)));
        }
    }
    return mine;
}

bool ColorIsBlackish(Color c)
{
    return Luminance(c) < 128; // 128 is midline
}

int Luminance(Color c)
{
    return (int)(0.299 * c.R + 0.587 * c.G + 0.114 * c.B);
}
This process is called simple thresholding. It's braindead, but it will work as a first cut.
Thanks for the above code. I'm trying to convert a monochrome image into a 2D array where 1 = black and 0 = white, but I'm having some trouble. I used your code to load an 8x8 BMP image, and I'm outputting its contents to a textbox using:
myGrid = GetLedBytes(myBmp);
for (int x = 1; x < 8; x++)
{
    textBox1.Text = textBox1.Text + Convert.ToString(myGrid[x]) + " ";
}
However, I get this as a result in the textbox:
225 231 231 231 231 129 255
How do I get it so it's 0s and 1s?
This chap has some code that creates a mono bitmap. The SaveImage sample is the one of interest.