MSDN reference: [1] http://msdn.microsoft.com/en-us/library/5ey6h79d.aspx#Y1178
According to the link, the first argument "specifies the portion of the Bitmap to lock", which I set to a smaller part of the Bitmap (the Bitmap is 500x500, my rectangle is (0,0,50,50)). However, the returned BitmapData has a stride of 1500 (= 500*3), so every scan line still spans the whole picture horizontally, while what I want is only the top-left 50x50 part of the bitmap.
How does this work out?
The stride will always be that of the full bitmap, but the Scan0 property will point to the start of the lock rectangle, and the Height and Width of the BitmapData will match the rectangle.
The reason is that you still need to know the real scan width of the bitmap in order to step from one row to the next (add the stride to the address).
A simple way to go about it would be:
// Note: this uses raw pointers, so it needs an unsafe context (compile with /unsafe).
var bitmap = new Bitmap(100, 100);
var data = bitmap.LockBits(new Rectangle(0, 0, 10, 10),
                           ImageLockMode.ReadWrite,
                           bitmap.PixelFormat);
var pt = (byte*)data.Scan0;
// Bytes per pixel. Dividing the stride by the *full* bitmap width works here,
// but see the note below the code for a padding caveat.
var bpp = data.Stride / bitmap.Width;
for (var y = 0; y < data.Height; y++)
{
    // This is why the real scan width is important to have!
    var row = pt + (y * data.Stride);
    for (var x = 0; x < data.Width; x++)
    {
        var pixel = row + x * bpp;
        for (var bit = 0; bit < bpp; bit++)
        {
            var pixelComponent = pixel[bit];
        }
    }
}
bitmap.UnlockBits(data);
So it basically just locks the whole bitmap, but gives you a pointer to the top-left pixel of the rectangle in the bitmap, and sets the scan's Width and Height appropriately.
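One caveat about the bytes-per-pixel calculation above: Stride can include padding at the end of each row, so dividing it by the width is not always exact. A safer way (for formats of at least 8 bits per pixel) is to derive it from the pixel format itself, for example:
// Bytes per pixel derived from the pixel format, independent of any row padding in Stride.
int bpp = Image.GetPixelFormatSize(bitmap.PixelFormat) / 8;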
I have this code but it doesn't work. I'm trying to extract an image from a site that contains a captcha.
var width = Images.First().Image.Width; // all images in the list have the same width, so take the first
var height = 0;
for (int i = 104; i < 140; i++) // the list has 300 images; I have to get the 36 that contain the captcha split into pieces
{
    height += Images[i].Image.Height;
}
var bitmap2 = new Bitmap(width, height);
var g = Graphics.FromImage(bitmap2);
height = 0;
for (int i = 104; i < 140; i++)
{
    Image image = Images[i].Image;
    g.DrawImage(image, 0, height);
    height += image.Height;
}
bitmap2.Save(@"C:\Users\user\Desktop\test\test.png", ImageFormat.Png);
With this code I get this result:
[image showing the resulting bitmap]
I don't know why the quality is so poor. It looks as if the images are being repeated in the resulting bitmap.
I can see a few suboptimal things in the code but, to be honest, not a single thing that could give that result. The only way you get problems like that is if you go messing with the raw back end and perform operations that mess up how the data is interpreted as an image.
The only two specific things that need fixing in the code seem to be:
Setting the resolution of all images to the same value. This affects how large they are drawn, and thus can mess up positioning.
Disposing of the Graphics object after you're done with it, so all changes are confirmed to be finished before you attempt to save anything.
Note that in my adjusted code, Images is just a List<Bitmap>, and the for-loop simply goes over all of them. You never specified what type your Images collection was, and this was much easier for me to test.
Int32 width = Images.First().Width;
Int32 height = 0;
for (Int32 i = 0; i < Images.Count; i++)
{
    height += Images[i].Height;
}
Bitmap bitmap2 = new Bitmap(width, height);
bitmap2.SetResolution(72, 72); // <-- Set explicit resolution on bitmap2
// Always put Graphics objects in a 'using' block.
using (Graphics g = Graphics.FromImage(bitmap2))
{
    height = 0;
    for (Int32 i = 0; i < Images.Count; i++)
    {
        Bitmap image = Images[i];
        image.SetResolution(72, 72); // <-- Set resolution equal to bitmap2
        g.DrawImage(image, 0, height);
        height += image.Height;
    }
}
bitmap2.Save(@"C:\Users\user\Desktop\test\test.png", ImageFormat.Png);
I need to convert a Bitmap to a DlibDotNet Array2D: in this case, a grayscale Array2D<byte> for a ShapePredictor.
Here is what I am trying, without much success.
using DlibDotNet;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using Rectangle = System.Drawing.Rectangle;

public static class Extension
{
    public static Array2D<byte> ToArray2D(this Bitmap bitmap)
    {
        var bits = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
        var length = bits.Stride * bits.Height;
        var data = new byte[length];
        Marshal.Copy(bits.Scan0, data, 0, length);
        bitmap.UnlockBits(bits);
        var array = new Array2D<byte>(bitmap.Width, bitmap.Height);
        for (var x = 0; x < bitmap.Width; x++)
            for (var y = 0; y < bitmap.Height; y++)
            {
                var offset = x * 4 + y * bitmap.Width * 4;
                array[x][y] = data[offset];
            }
        return array;
    }
}
I've searched and have not yet found a clear answer.
As noted before, you first need to convert your image to grayscale. There are plenty of answers here on StackOverflow to help you with that. I advise the ColorMatrix method used in this answer:
A: Convert an image to grayscale
I'll be using the MakeGrayscale3(Bitmap original) method shown in that answer in my code below.
Typically, images are looped through line by line for processing, so for clarity, you should put your Y loop as the outer loop. It also makes the calculation of the data offsets a lot more efficient.
As for the actual data, if the image is grayscale, the R, G and B bytes should all be the same. The order "ARGB" in 32-bit pixel data refers to one UInt32 value, but those are little-endian, meaning the actual order of the bytes is [B, G, R, A]. This means that in each loop iteration we can just take the first of the four bytes, and it'll be the blue component.
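As a quick standalone illustration of that byte order (this snippet is just a check, not part of the conversion code below):
// 0xAARRGGBB as a single UInt32 is laid out in memory as B, G, R, A on a little-endian system.
uint argb = 0xFF336699; // A=FF, R=33, G=66, B=99
byte[] bytes = BitConverter.GetBytes(argb); // { 0x99, 0x66, 0x33, 0xFF }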
public static Array2D<Byte> ToArray2D(this Bitmap bitmap)
{
    Int32 stride;
    Byte[] data;
    // Removes unnecessary getter calls.
    Int32 width = bitmap.Width;
    Int32 height = bitmap.Height;
    // 'using' block to properly dispose of the temp image.
    using (Bitmap grayImage = MakeGrayscale3(bitmap))
    {
        BitmapData bits = grayImage.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
        stride = bits.Stride;
        Int32 length = stride * height;
        data = new Byte[length];
        Marshal.Copy(bits.Scan0, data, 0, length);
        grayImage.UnlockBits(bits);
    }
    // Constructor is (rows, columns), so (height, width).
    Array2D<Byte> array = new Array2D<Byte>(height, width);
    Int32 offset = 0;
    for (Int32 y = 0; y < height; y++)
    {
        // Offset variable for processing one line.
        Int32 curOffset = offset;
        // Get the row in advance.
        Array2D<Byte>.Row<Byte> curRow = array[y];
        for (Int32 x = 0; x < width; x++)
        {
            curRow[x] = data[curOffset]; // Should be the Blue component.
            curOffset += 4;
        }
        // Stride is the actual data length of one line. No need to calculate that;
        // not only is it already given by the BitmapData object, but in some situations
        // it may differ from the actual data length. This also saves processing time
        // by avoiding multiplications inside each loop.
        offset += stride;
    }
    return array;
}
I'm having problems converting a grayscale array of ints (int32[,]) into BMP format in C#.
I tried cycling through the array and setting each pixel's colour in the bitmap; it works, but it ends up being really slow and practically unusable.
I did a lot of googling but I cannot find the answer to my question.
I need to put that image in a PictureBox in real time so the method needs to be fast.
Relevant discussion here
Edit: the array has 8-bit depth but is stored as int32.
Edit 2: I just found this code:
private unsafe Task<Bitmap> BitmapFromArray(Int32[,] pixels, int width, int height)
{
    return Task.Run(() =>
    {
        Bitmap bitmap = new Bitmap(width, height, PixelFormat.Format24bppRgb);
        BitmapData bitmapData = bitmap.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
        for (int y = 0; y < height; y++)
        {
            byte* row = (byte*)bitmapData.Scan0 + bitmapData.Stride * y;
            for (int x = 0; x < width; x++)
            {
                byte grayShade8bit = (byte)(pixels[x, y] >> 4);
                row[x * 3 + 0] = grayShade8bit;
                row[x * 3 + 1] = grayShade8bit;
                row[x * 3 + 2] = grayShade8bit;
            }
        }
        bitmap.UnlockBits(bitmapData);
        return bitmap;
    });
}
It seems to work fast enough, but the image is almost black. If I remove the top of the camera, the image should be completely white, but it just displays a really dark grey. I guess it's interpreting the pixel value as 32-bit, not 8-bit. I then tried casting with (ushort)pixels[x, y], but that doesn't work either.
I actually wrote a universally usable BuildImage function here on SO to build an image out of a byte array, but of course, you're not starting from a byte array; you're starting from a two-dimensional Int32 array. The easy way to get around that is simply to transform it in advance.
Your array of bytes-as-integers is a rather odd thing. If this was read from a grayscale image, I'd rather assume it is 32-bit ARGB data and you're just using the lowest component of each value (which would be the blue one), but if downshifting the values by 4 bits produced uniformly dark values, I'm inclined to take your word for it; otherwise the bits of the next colour component (green) would bleed in, giving bright colours as well.
Anyway, musing and second-guessing aside, here's my actual answer.
You may think each of your values, when poured into an 8-bit image, is simply the brightness, but this is actually false. There is no specific type in the System.Drawing pixel formats to indicate 8-bit grayscale, and 8-bit images are paletted, which means that each value on the image refers to a colour on the colour palette. So, to actually make an 8-bit grayscale image where your byte values indicate the pixel's brightness, you'll need to explicitly define a colour palette where the indices of 0 to 255 on the palette contain gray colours going from (0,0,0) to (255,255,255). Of course, this is pretty easy to generate.
This code will transform your array into an 8-bit image. It uses the aforementioned BuildImage function. Note that that function uses no unsafe code. The use of Marshal.Copy means raw pointers are never handled directly, making the code completely managed.
public static Bitmap FromTwoDimIntArrayGray(Int32[,] data)
{
    // Transform the 2-dimensional Int32 array into a 1-byte-per-pixel byte array.
    Int32 width = data.GetLength(0);
    Int32 height = data.GetLength(1);
    Int32 byteIndex = 0;
    Byte[] dataBytes = new Byte[height * width];
    for (Int32 y = 0; y < height; y++)
    {
        for (Int32 x = 0; x < width; x++)
        {
            // Logical AND to be 100% sure the int32 value fits inside
            // the byte even if it contains more data (like full ARGB).
            dataBytes[byteIndex] = (Byte)(((UInt32)data[x, y]) & 0xFF);
            // More efficient than multiplying.
            byteIndex++;
        }
    }
    // Generate the grayscale palette.
    Color[] palette = new Color[256];
    for (Int32 b = 0; b < 256; b++)
        palette[b] = Color.FromArgb(b, b, b);
    // Build the image.
    return BuildImage(dataBytes, width, height, width, PixelFormat.Format8bppIndexed, palette, null);
}
Note that even if the integers were full ARGB values, the above code would still work exactly the same; if you only use the lowest of the four bytes inside the integer, that, as I said, will simply be the blue component of the full ARGB value. If the image is grayscale, all three colour components should be identical, so you'll still get the same result.
Assuming you ever find yourself with the same kind of integer array where the values actually do contain full 32bpp ARGB data, you'd have to shift out all four byte values, and there would be no generated gray palette, but besides that, the code would be pretty similar; it just handles 4 bytes per X iteration.
public static Bitmap FromTwoDimIntArrayArgb(Int32[,] data)
{
    Int32 width = data.GetLength(0);
    Int32 height = data.GetLength(1);
    Int32 stride = width * 4;
    Int32 byteIndex = 0;
    Byte[] dataBytes = new Byte[height * stride];
    for (Int32 y = 0; y < height; y++)
    {
        for (Int32 x = 0; x < width; x++)
        {
            // UInt32 0xAARRGGBB == Byte[] { BB, GG, RR, AA }
            UInt32 val = (UInt32)data[x, y];
            // This code clears out everything but a specific part of the value
            // and then shifts the remaining piece down to the lowest byte.
            dataBytes[byteIndex + 0] = (Byte)(val & 0x000000FF);         // B
            dataBytes[byteIndex + 1] = (Byte)((val & 0x0000FF00) >> 8);  // G
            dataBytes[byteIndex + 2] = (Byte)((val & 0x00FF0000) >> 16); // R
            dataBytes[byteIndex + 3] = (Byte)((val & 0xFF000000) >> 24); // A
            // More efficient than multiplying.
            byteIndex += 4;
        }
    }
    return BuildImage(dataBytes, width, height, stride, PixelFormat.Format32bppArgb, null, null);
}
Of course, if you want this without transparency, you can either go with three bytes as you did, or simply change PixelFormat.Format32bppArgb in the final call to PixelFormat.Format32bppRgb, which changes the meaning of the fourth byte from alpha to mere padding.
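For reference, since BuildImage itself is not reproduced here, a minimal sketch of a helper matching the calls above could look like the following. This is a reconstruction based purely on how the function is called in this answer (data, width, height, stride, pixel format, palette, optional default palette colour), not necessarily the original implementation:
public static Bitmap BuildImage(Byte[] sourceData, Int32 width, Int32 height,
    Int32 stride, PixelFormat pixelFormat, Color[] palette, Color? defaultColor)
{
    Bitmap newImage = new Bitmap(width, height, pixelFormat);
    BitmapData targetData = newImage.LockBits(new Rectangle(0, 0, width, height),
        ImageLockMode.WriteOnly, newImage.PixelFormat);
    // Actual number of data bytes in one row, without padding.
    Int32 newDataWidth = ((Image.GetPixelFormatSize(pixelFormat) * width) + 7) / 8;
    // Copy row by row, since the stride of the new image may differ from the given stride.
    for (Int32 y = 0; y < height; y++)
        Marshal.Copy(sourceData, y * stride, targetData.Scan0 + y * targetData.Stride, newDataWidth);
    newImage.UnlockBits(targetData);
    // For indexed formats, apply the given palette, padding with the default colour if needed.
    if ((pixelFormat & PixelFormat.Indexed) != 0 && palette != null)
    {
        ColorPalette pal = newImage.Palette;
        for (Int32 i = 0; i < pal.Entries.Length; i++)
        {
            if (i < palette.Length)
                pal.Entries[i] = palette[i];
            else if (defaultColor.HasValue)
                pal.Entries[i] = defaultColor.Value;
            else
                break;
        }
        newImage.Palette = pal;
    }
    return newImage;
}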
Solved (I had to remove the four-bit shift):
private unsafe Task<Bitmap> BitmapFromArray(Int32[,] pixels, int width, int height)
{
    return Task.Run(() =>
    {
        Bitmap bitmap = new Bitmap(width, height, PixelFormat.Format24bppRgb);
        BitmapData bitmapData = bitmap.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
        for (int y = 0; y < height; y++)
        {
            byte* row = (byte*)bitmapData.Scan0 + bitmapData.Stride * y;
            for (int x = 0; x < width; x++)
            {
                byte grayShade8bit = (byte)(pixels[x, y]);
                row[x * 3 + 0] = grayShade8bit;
                row[x * 3 + 1] = grayShade8bit;
                row[x * 3 + 2] = grayShade8bit;
            }
        }
        bitmap.UnlockBits(bitmapData);
        return bitmap;
    });
}
Still not sure why substituting Format24bppRgb with Format8bppIndexed doesn't work. Any clue?
I have a bitmap extracted from a BitmapSource (RenderTargetBitmap) with a blue circle in it. The RenderTargetBitmap is created with PixelFormats.Pbgra32.
PixelFormats.Pbgra32 pre-multiplies each colour channel with the alpha value, so when I tried to convert the bitmap to a cursor, I got a less opaque image than it should have been.
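For illustration, this is what the pre-multiplication does to a single pixel (made-up values, straight versus pre-multiplied):
// A half-opaque pure blue pixel in straight (non-premultiplied) BGRA.
byte a = 128, r = 0, g = 0, b = 255;
// Pbgra32 stores each colour channel already multiplied by the alpha value:
byte pb = (byte)(b * a / 255); // 128
byte pg = (byte)(g * a / 255); // 0
byte pr = (byte)(r * a / 255); // 0
// Stored pixel bytes (B, G, R, A): 128, 0, 0, 128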
I found a solution to the problem here, which clones the bitmap to Format24bppRgb and manually sets the R, G, B and alpha values. The solution works fine, but in the cloned bitmap I see a black border around the visual.
Can I get rid of that black border in the cloned bitmap? (I suspect it's something inside the SafeCopy method.)
Methods used from the link are:
private static void SafeCopy(BitmapData srcData, BitmapData dstData, byte alphaLevel)
{
    for (int y = 0; y < srcData.Height; y++)
        for (int x = 0; x < srcData.Width; x++)
        {
            byte b = Marshal.ReadByte(srcData.Scan0, y * srcData.Stride + x * 3);
            byte g = Marshal.ReadByte(srcData.Scan0, y * srcData.Stride + x * 3 + 1);
            byte r = Marshal.ReadByte(srcData.Scan0, y * srcData.Stride + x * 3 + 2);
            Marshal.WriteByte(dstData.Scan0, y * dstData.Stride + x * 4, b);
            Marshal.WriteByte(dstData.Scan0, y * dstData.Stride + x * 4 + 1, g);
            Marshal.WriteByte(dstData.Scan0, y * dstData.Stride + x * 4 + 2, r);
            Marshal.WriteByte(dstData.Scan0, y * dstData.Stride + x * 4 + 3, alphaLevel);
        }
}
private static Cursor CreateCustomCursorInternal(Bitmap bitmap, double opacity)
{
    Bitmap cursorBitmap = null;
    IconInfo iconInfo = new IconInfo();
    Rectangle rectangle = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
    try
    {
        byte alphaLevel = System.Convert.ToByte(byte.MaxValue * opacity);
        // Here, the pre-multiplied alpha channel is specified.
        cursorBitmap = new Bitmap(bitmap.Width, bitmap.Height,
            PixelFormat.Format32bppPArgb);
        // Assuming the source bitmap can be locked in a 24 bits per pixel format.
        BitmapData bitmapData = bitmap.LockBits(rectangle, ImageLockMode.ReadOnly,
            PixelFormat.Format24bppRgb);
        BitmapData cursorBitmapData = cursorBitmap.LockBits(rectangle,
            ImageLockMode.WriteOnly, cursorBitmap.PixelFormat);
        // Use SafeCopy() to set the bitmap contents.
        SafeCopy(bitmapData, cursorBitmapData, alphaLevel);
        cursorBitmap.UnlockBits(cursorBitmapData);
        bitmap.UnlockBits(bitmapData);
        .......
}
Original bitmap:
Cloned bitmap:
The simplest way to convert a WPF 32-bit PBGRA bitmap to a WinForms PARGB bitmap, and at the same time apply a global opacity, seems to be to just multiply all A, R, G and B values with the opacity factor (a value between 0 and 1), as in the method shown below. I would have expected that it would also be necessary to swap the bytes, but apparently it isn't: both PixelFormats.Pbgra32 and PixelFormat.Format32bppPArgb lay out each pixel's bytes in B, G, R, A order in memory, so the buffer can be copied as-is.
private static void CopyBufferWithOpacity(byte[] sourceBuffer,
    System.Drawing.Imaging.BitmapData targetBuffer, double opacity)
{
    for (int i = 0; i < sourceBuffer.Length; i++)
    {
        sourceBuffer[i] = (byte)Math.Round(opacity * sourceBuffer[i]);
    }
    Marshal.Copy(sourceBuffer, 0, targetBuffer.Scan0, sourceBuffer.Length);
}
Given a 32bit PBGRA bitmap pbgraBitmap (e.g. a RenderTargetBitmap), you would use the method like this:
var width = pbgraBitmap.PixelWidth;
var height = pbgraBitmap.PixelHeight;
var stride = width * 4;
var buffer = new byte[stride * height];
pbgraBitmap.CopyPixels(buffer, stride, 0);
var targetFormat = System.Drawing.Imaging.PixelFormat.Format32bppPArgb;
var bitmap = new System.Drawing.Bitmap(width, height, targetFormat);
var bitmapData = bitmap.LockBits(
    new System.Drawing.Rectangle(0, 0, width, height),
    System.Drawing.Imaging.ImageLockMode.WriteOnly,
    targetFormat);
CopyBufferWithOpacity(buffer, bitmapData, 0.6);
bitmap.UnlockBits(bitmapData);
How do you create a 1 bit per pixel mask from an image using GDI in C#? The image I am trying to create the mask from is held in a System.Drawing.Graphics object.
I have seen examples that use Get/SetPixel in a loop, which are too slow. The method that interests me is one that uses only BitBlts, like this. I just can't get it to work in C#; any help is much appreciated.
Try this:
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
...
public static Bitmap BitmapTo1Bpp(Bitmap img) {
    int w = img.Width;
    int h = img.Height;
    Bitmap bmp = new Bitmap(w, h, PixelFormat.Format1bppIndexed);
    BitmapData data = bmp.LockBits(new Rectangle(0, 0, w, h), ImageLockMode.ReadWrite, PixelFormat.Format1bppIndexed);
    for (int y = 0; y < h; y++) {
        byte[] scan = new byte[(w + 7) / 8];
        for (int x = 0; x < w; x++) {
            Color c = img.GetPixel(x, y);
            if (c.GetBrightness() >= 0.5) scan[x / 8] |= (byte)(0x80 >> (x % 8));
        }
        // IntPtr arithmetic instead of an int cast, so this also works in 64-bit processes.
        Marshal.Copy(scan, 0, data.Scan0 + data.Stride * y, scan.Length);
    }
    bmp.UnlockBits(data);
    return bmp;
}
GetPixel() is slow; you can speed it up with an unsafe byte* instead, as in the sketch below.
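A minimal sketch of what that could look like (my own variant, using the same usings as above plus an unsafe context; it assumes the source can be locked as 32bppArgb, and the luma threshold is a stand-in for Color.GetBrightness(), not an exact equivalent):
public static unsafe Bitmap BitmapTo1BppUnsafe(Bitmap img) {
    int w = img.Width;
    int h = img.Height;
    Bitmap bmp = new Bitmap(w, h, PixelFormat.Format1bppIndexed);
    BitmapData dst = bmp.LockBits(new Rectangle(0, 0, w, h), ImageLockMode.WriteOnly, PixelFormat.Format1bppIndexed);
    BitmapData src = img.LockBits(new Rectangle(0, 0, w, h), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try {
        for (int y = 0; y < h; y++) {
            byte* srcRow = (byte*)src.Scan0 + y * src.Stride;
            byte* dstRow = (byte*)dst.Scan0 + y * dst.Stride;
            for (int x = 0; x < w; x++) {
                byte b = srcRow[x * 4];
                byte g = srcRow[x * 4 + 1];
                byte r = srcRow[x * 4 + 2];
                // Rough luminance threshold; a fresh 1bpp bitmap starts out all zero,
                // so only the 'white' bits need to be set.
                if ((r * 299 + g * 587 + b * 114) / 1000 >= 128)
                    dstRow[x / 8] |= (byte)(0x80 >> (x % 8));
            }
        }
    } finally {
        img.UnlockBits(src);
        bmp.UnlockBits(dst);
    }
    return bmp;
}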
In the Win32 C API the process to create a mono mask is simple:
Create an uninitialized 1bpp bitmap as big as the source bitmap.
Select it into a DC.
Select the source bitmap into a DC.
SetBkColor on the source DC to the mask color of the source bitmap.
BitBlt the source onto the destination using SRCCOPY.
For bonus points, it is then usually desirable to blit the mask back onto the source bitmap (using SRCAND) to zero out the mask color there.
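A rough C# P/Invoke sketch of those steps might look like this. The helper name and the maskColor parameter are my own, the gdi32 declarations are the standard ones, and error handling is omitted:
using System;
using System.Drawing;
using System.Runtime.InteropServices;

public static class MonoMask
{
    const uint SRCCOPY = 0x00CC0020;
    const uint SRCAND = 0x008800C6;

    [DllImport("gdi32.dll")] static extern IntPtr CreateCompatibleDC(IntPtr hdc);
    [DllImport("gdi32.dll")] static extern IntPtr CreateBitmap(int width, int height, uint planes, uint bitsPerPixel, IntPtr bits);
    [DllImport("gdi32.dll")] static extern IntPtr SelectObject(IntPtr hdc, IntPtr hgdiobj);
    [DllImport("gdi32.dll")] static extern uint SetBkColor(IntPtr hdc, uint color);
    [DllImport("gdi32.dll")] static extern bool BitBlt(IntPtr hdcDest, int x, int y, int cx, int cy, IntPtr hdcSrc, int x1, int y1, uint rop);
    [DllImport("gdi32.dll")] static extern bool DeleteDC(IntPtr hdc);
    [DllImport("gdi32.dll")] static extern bool DeleteObject(IntPtr hObject);

    // maskColor: the colour in 'source' that should become white (1) in the mask.
    public static Bitmap CreateMask(Bitmap source, Color maskColor)
    {
        IntPtr hSrcBmp = source.GetHbitmap();
        IntPtr srcDc = CreateCompatibleDC(IntPtr.Zero);
        IntPtr maskDc = CreateCompatibleDC(IntPtr.Zero);
        // Uninitialized 1bpp bitmap, same size as the source.
        IntPtr hMaskBmp = CreateBitmap(source.Width, source.Height, 1, 1, IntPtr.Zero);
        IntPtr oldSrc = SelectObject(srcDc, hSrcBmp);
        IntPtr oldMask = SelectObject(maskDc, hMaskBmp);
        try
        {
            // COLORREF byte order is 0x00BBGGRR.
            SetBkColor(srcDc, (uint)(maskColor.R | (maskColor.G << 8) | (maskColor.B << 16)));
            // Colour-to-mono blit: pixels matching the source background colour become 1 (white).
            BitBlt(maskDc, 0, 0, source.Width, source.Height, srcDc, 0, 0, SRCCOPY);
            // Optional bonus step: BitBlt(srcDc, ..., maskDc, ..., SRCAND) to zero out
            // the mask colour in the source bitmap.
            SelectObject(maskDc, oldMask);
            // Copy the GDI mask bitmap back into a managed Bitmap.
            return Image.FromHbitmap(hMaskBmp);
        }
        finally
        {
            SelectObject(srcDc, oldSrc);
            DeleteObject(hSrcBmp);
            DeleteObject(hMaskBmp);
            DeleteDC(srcDc);
            DeleteDC(maskDc);
        }
    }
}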
Do you mean LockBits? Bob Powell has an overview of LockBits here; this should provide access to the RGB values, to do what you need. You might also want to look at ColorMatrix, like so.