I use this method to get thumbnails of files (keeping transparency...):
public static Image GetIcon(string fileName, int size)
{
IShellItem shellItem;
Shell32.SHCreateItemFromParsingName(fileName, IntPtr.Zero, Shell32.IShellItem_GUID, out shellItem);
IntPtr hbitmap;
((IShellItemImageFactory)shellItem).GetImage(new SIZE(size, size), 0x0, out hbitmap);
// get the info about the HBITMAP returned by the shell
DIBSECTION dibsection = new DIBSECTION();
Gdi32.GetObjectDIBSection(hbitmap, Marshal.SizeOf(dibsection), ref dibsection);
int width = dibsection.dsBm.bmWidth;
int height = dibsection.dsBm.bmHeight;
// create the destination Bitmap object
Bitmap bitmap = new Bitmap(width, height, PixelFormat.Format32bppArgb);
unsafe
{
// get a pointer to the raw bits
RGBQUAD* pBits = (RGBQUAD*)(void*)dibsection.dsBm.bmBits;
// copy each pixel manually
for (int x = 0; x < dibsection.dsBmih.biWidth; x++)
{
for (int y = 0; y < dibsection.dsBmih.biHeight; y++)
{
int offset = y * dibsection.dsBmih.biWidth + x;
if (pBits[offset].rgbReserved != 0)
{
bitmap.SetPixel(x, y, Color.FromArgb(pBits[offset].rgbReserved, pBits[offset].rgbRed, pBits[offset].rgbGreen, pBits[offset].rgbBlue));
}
}
}
}
Gdi32.DeleteObject(hbitmap);
return bitmap;
}
But sometimes the image is upside down. When I get the same image a 2nd or 3rd time, it is not upside down. Is there any way to determine whether it is upside down or not? If there were a way to detect it, the code below should work:
if (isUpsideDown)
{
int offset = (dibsection.dsBmih.biHeight - y - 1) * dibsection.dsBmih.biWidth + x;
}
else
{
int offset = y * dibsection.dsBmih.biWidth + x;
}
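One idea I have not verified (it is based on how BITMAPINFOHEADER documents biHeight, so treat it as an assumption): a bottom-up DIB reports a positive biHeight and a top-down DIB a negative one, so the flag might come from the sign:
// Unverified sketch: positive biHeight = bottom-up DIB (rows stored bottom first),
// which would appear upside down when copied in memory order.
bool isUpsideDown = dibsection.dsBmih.biHeight > 0;
int rows = Math.Abs(dibsection.dsBmih.biHeight);
int offset = isUpsideDown
    ? (rows - y - 1) * dibsection.dsBmih.biWidth + x
    : y * dibsection.dsBmih.biWidth + x;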
I came across the same problem. Images from the clipboard were upside down. I found out that you can check the Stride value to see if the image is reversed:
BitmapData d = bmp.LockBits(rect, ImageLockMode.ReadWrite, bmp.PixelFormat);
bmp.UnlockBits(d);
if (d.Stride > 0)
{
bmp.RotateFlip(RotateFlipType.Rotate180FlipNone);
}
If the Stride value is greater than zero, the image is reversed.
Andy
Related
I have (2448*2048) 5-megapixel image data, but the picturebox only has (816*683), about 500,000 pixels, so I lowered the resolution. I only need a black and white image, so I used the G value to create the image, but the image I output is shown in the following figure. Where is my mistake?
public int[,] lowered(int[,] greenar)
{
int[,] Sy = new int[816, 683];
int x = 0;
int y = 0;
for (int i = 1; i < 2448; i += 3)
{
for (int j = 1; j < 2048; j += 3)
{
Sy[x, y] = greenar[i, j];
y++;
}
y = 0;
x++;
}
return Sy;
}
static Bitmap Create(int[,] R, int[,] G, int[,] B)
{
int iWidth = G.GetLength(1);
int iHeight = G.GetLength(0);
Bitmap Result = new Bitmap(iWidth, iHeight,
System.Drawing.Imaging.PixelFormat.Format24bppRgb);
Rectangle rect = new Rectangle(0, 0, iWidth, iHeight);
System.Drawing.Imaging.BitmapData bmpData = Result.LockBits(rect,
System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
IntPtr iPtr = bmpData.Scan0;
int iStride = bmpData.Stride;
int iBytes = iWidth * iHeight * 3;
byte[] PixelValues = new byte[iBytes];
int iPoint = 0;
for (int i = 0; i < iHeight; i++)
{
for (int j = 0; j < iWidth; j++)
{
int iG = G[i, j];
int iB = G[i, j];
int iR = G[i, j];
PixelValues[iPoint] = Convert.ToByte(iB);
PixelValues[iPoint + 1] = Convert.ToByte(iG);
PixelValues[iPoint + 2] = Convert.ToByte(iR);
iPoint += 3;
}
}
System.Runtime.InteropServices.Marshal.Copy(PixelValues, 0, iPtr, iBytes);
Result.UnlockBits(bmpData);
return Result;
}
https://upload.cc/i1/2018/04/26/WHOXTJ.png
You don't need to downsample your image; you can do it this way. Set the picturebox's BackgroundImageLayout property to either zoom or stretch and assign the image:
picturebox.BackgroundImageLayout = System.Windows.Forms.ImageLayout.Zoom;
picturebox.BackgroundImage = bitmap;
System.Windows.Forms.ImageLayout.Zoom will automatically adjust your bitmap to the size of the picturebox.
You seem to be constantly mixing up your x and y offsets, which can easily be avoided simply by actually calling your loop variables x and y whenever you loop through image data. Also, image data is generally saved line by line, so your outer loop should be the Y loop going over the height, and the inner loop should process the X coordinates on one line, and should thus loop over the width.
Also, I'm not sure where your original data comes from, but in most of the cases I've seen where the image data is in multidimensional arrays like this, the Y is actually the first index in the array. Your actual image building function also assumes this, since it uses G.GetLength(0) to get the height of the image. But your channel resize function doesn't; it makes a multidimensional array as new int[816, 683], which would be a 683*816 image, not 816*683 as you said. So that certainly seems wrong.
Since you confirmed it to be [x,y], I adapted this solution to use it like that.
That aside, you hardcoded a lot of values in your functions, which is very bad practice. If you know you will reduce the image to 1/3rd by taking only one in three pixels, just pass that 3 as a parameter.
The reduction code:
public static Int32[,] ResizeChannel(Int32[,] origChannel, Int32 lossfactor)
{
Int32 newWidth = origChannel.GetLength(0) / lossfactor;
Int32 newHeight = origChannel.GetLength(1) / lossfactor;
// to avoid rounding errors
Int32 origHeight = newHeight * lossfactor;
Int32 origWidth = newWidth * lossfactor;
Int32[,] newChannel = new Int32[newWidth, newHeight];
Int32 newX = 0;
Int32 newY = 0;
for (Int32 y = 1; y < origHeight; y += lossfactor)
{
newX = 0;
for (Int32 x = 1; x < origWidth; x += lossfactor)
{
newChannel[newX, newY] = origChannel[x, y];
newX++;
}
newY++;
}
return newChannel;
}
The actual build code, as was remarked by GSerg in the comments, is wrong because you don't take the stride into account. The stride is the actual byte length of each line of pixels, and this is not just width * BytesPerPixel, since it gets rounded up to the next multiple of 4 bytes.
So you need to initialize your array as height * stride, not as height * width * 3, and you need to skip your write offset to the next multiple of the stride whenever you go to a lower Y line, rather than assuming it will just get there automatically because your X processing adds 3 for each pixel. Because it will not get there automatically, unless, by pure coincidence, your image width happens to be a multiple of 4 pixels.
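As an illustration of that rounding (the code below simply reads bmpData.Stride, so this is only to show where the number comes from), for 24 bits per pixel the stride is the pixel width times 3, rounded up to the next multiple of 4:
// Illustration only; real code should keep using bmpData.Stride.
int bytesPerPixel = 3;                          // Format24bppRgb
int stride = (width * bytesPerPixel + 3) & ~3;  // round up to the next multiple of 4
// e.g. a 683 pixel wide line: 683 * 3 = 2049 bytes of pixel data -> stride = 2052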
Also, if you only use one channel for this, there is no reason to give it all three channels. Just give a single one.
public static Bitmap CreateGreyImage(Int32[,] greyChannel)
{
Int32 width = greyChannel.GetLength(0);
Int32 height = greyChannel.GetLength(1);
Bitmap result = new Bitmap(width, height, PixelFormat.Format24bppRgb);
Rectangle rect = new Rectangle(0, 0, width, height);
BitmapData bmpData = result.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
Int32 stride = bmpData.Stride;
// stride is the actual line width in bytes.
Int32 bytes = stride * height;
Byte[] pixelValues = new Byte[bytes];
Int32 offset = 0;
for (Int32 y = 0; y < height; y++)
{
Int32 workOffset = offset;
for (Int32 x = 0; x < width; x++)
{
pixelValues[workOffset + 0] = (Byte)greyChannel[x, y];
pixelValues[workOffset + 1] = (Byte)greyChannel[x, y];
pixelValues[workOffset + 2] = (Byte)greyChannel[x, y];
workOffset += 3;
}
// Add stride to get the start offset of the next line
offset += stride;
}
Marshal.Copy(pixelValues, 0, bmpData.Scan0, bytes);
result.UnlockBits(bmpData);
return result;
}
Now, this works as expected if your R, G and B channels are indeed identical. But if they are not, you have to realize there is a difference between reducing the image to grayscale and just building a grey image from the green channel. On a colour image, you will get totally different results if you take the blue or red channel instead.
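For reference, a proper grayscale reduction weighs all three channels; the usual BT.601 coefficients (my addition, not something in the question's code) would turn the per-pixel sampling into:
// Weighted grayscale instead of copying the green channel (BT.601 coefficients).
Int32 grey = (Int32)(0.299 * R[x, y] + 0.587 * G[x, y] + 0.114 * B[x, y]);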
This was the code I executed for this:
Int32[,] greyar = ResizeChannel(greenar, 3);
Bitmap newbm = CreateGreyImage(greyar);
I need to convert a Bitmap to a DlibDotNet Array2D; in this case, a grayscale Array2D<byte> for ShapePredictor.
Here is what I am trying, without much success.
using DlibDotNet;
using Rectangle = System.Drawing.Rectangle;
using System.Runtime.InteropServices;
public static class Extension
{
public static Array2D<byte> ToArray2D(this Bitmap bitmap)
{
var bits = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
var length = bits.Stride * bits.Height;
var data = new byte[length];
Marshal.Copy(bits.Scan0, data, 0, length);
bitmap.UnlockBits(bits);
var array = new Array2D<byte>(bitmap.Width, bitmap.Height);
for (var x = 0; x < bitmap.Width; x++)
for (var y = 0; y < bitmap.Height; y++)
{
var offset = x * 4 + y * bitmap.Width * 4;
array[x][y] = data[offset];
}
return array;
}
}
I've searched and have not yet found a clear answer.
As noted before, you first need to convert your image to grayscale. There are plenty of answers here on StackOverflow to help you with that. I advise the ColorMatrix method used in this answer:
A: Convert an image to grayscale
I'll be using the MakeGrayscale3(Bitmap original) method shown in that answer in my code below.
Typically, images are looped through line by line for processing, so for clarity, you should put your Y loop as outer loop. It also makes the calculation of the data offsets a lot more efficient.
As for the actual data, if the image is grayscale, the R, G and B bytes should all be the same. The order "ARGB" in 32-bit pixel data refers to one UInt32 value, but those are little-endian, meaning the actual order of the bytes is [B, G, R, A]. This means that in each loop iteration we can just take the first of the four bytes, and it'll be the blue component.
public static Array2D<Byte> ToArray2D(this Bitmap bitmap)
{
Int32 stride;
Byte[] data;
// Removes unnecessary getter calls.
Int32 width = bitmap.Width;
Int32 height = bitmap.Height;
// 'using' block to properly dispose temp image.
using (Bitmap grayImage = MakeGrayscale3(bitmap))
{
BitmapData bits = grayImage.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
stride = bits.Stride;
Int32 length = stride*height;
data = new Byte[length];
Marshal.Copy(bits.Scan0, data, 0, length);
grayImage.UnlockBits(bits);
}
// Constructor is (rows, columns), so (height, width)
Array2D<Byte> array = new Array2D<Byte>(height, width);
Int32 offset = 0;
for (Int32 y = 0; y < height; y++)
{
// Offset variable for processing one line
Int32 curOffset = offset;
// Get row in advance
Array2D<Byte>.Row<Byte> curRow = array[y];
for (Int32 x = 0; x < width; x++)
{
curRow[x] = data[curOffset]; // Should be the Blue component.
curOffset += 4;
}
// Stride is the actual data length of one line. No need to calculate that;
// not only is it already given by the BitmapData object, but in some situations
// it may differ from the actual data length. This also saves processing time
// by avoiding multiplications inside each loop.
offset += stride;
}
return array;
}
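Usage is then a one-liner (bitmap here is just a placeholder name for whatever Bitmap you load):
// The extension method handles the grayscale conversion internally.
Array2D<Byte> grayData = bitmap.ToArray2D();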
I used the example called "SimpleViewer.net" from the OpenNI library to display images from a Kinect device.
Now my aim is to save all the images that I display. I think this is the place:
lock (this)
{
Rectangle rect = new Rectangle(0, 0, this.bitmap.Width, this.bitmap.Height);
BitmapData data = this.bitmap.LockBits(rect, ImageLockMode.WriteOnly, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
ushort* pDepth = (ushort*)this.depth.DepthMapPtr.ToPointer();
// set pixels
for (int y = 0; y < depthMD.YRes; ++y)
{
byte* pDest = (byte*)data.Scan0.ToPointer() + y * data.Stride;
for (int x = 0; x < depthMD.XRes; ++x, ++pDepth, pDest += 3)
{
byte pixel = (byte)this.histogram[*pDepth];
pDest[0] = 0;
pDest[1] = pixel;
pDest[2] = pixel;
}
}
this.bitmap.UnlockBits(data);
}
Before that, I unlock the BitmapData.
I have not been able to save a bmp image containing the depth data.
Thanks in advance
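A rough sketch of the saving step (assuming this.bitmap is the Bitmap the viewer draws into; frameIndex is a hypothetical counter used only to build the file name): saving can happen right after the bits are unlocked.
// Sketch only: write each frame to disk once UnlockBits has been called.
this.bitmap.UnlockBits(data);
this.bitmap.Save(string.Format("depth_{0:D6}.bmp", frameIndex++),
    System.Drawing.Imaging.ImageFormat.Bmp);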
I'm currently working on a small library that enables you to get icons from files and folders. Now, I don't care if it only works on Win8+ (since that's where I'm going to use it); however, I've run into a tiny problem with regard to transparency. If you take a look at the following image:
The one I generate (from my library) is on the left; Windows Explorer's is on the right.
Now, as you might see, first off there are two black lines in the upper right of the one I generate; second, there is a difference in the background color. So what I'm wondering is this: is there no way to get the exact same image used by Windows Explorer, or am I simply doing it wrong?
My code (structs/externs etc. omitted for brevity) is below; the entire code is here.
public static class Icon
{
public static Image GetIcon(string fileName, int size)
{
IShellItem shellItem;
Shell32.SHCreateItemFromParsingName(fileName, IntPtr.Zero, Shell32.IShellItem_GUID, out shellItem);
IntPtr hbitmap;
((IShellItemImageFactory)shellItem).GetImage(new SIZE(size, size), 0x0, out hbitmap);
// get the info about the HBITMAP returned by the shell
DIBSECTION dibsection = new DIBSECTION();
Gdi32.GetObjectDIBSection(hbitmap, Marshal.SizeOf(dibsection), ref dibsection);
int width = dibsection.dsBm.bmWidth;
int height = dibsection.dsBm.bmHeight;
// zero out the RGB values for all pixels with A == 0
// (AlphaBlend expects them to all be zero)
for (int i = 0; i < dibsection.dsBmih.biWidth * dibsection.dsBmih.biHeight; i++)
{
IntPtr ptr = dibsection.dsBm.bmBits + (i * Marshal.SizeOf(typeof(RGBQUAD)));
var rgbquad = (RGBQUAD)Marshal.PtrToStructure(ptr, typeof(RGBQUAD));
if (rgbquad.rgbReserved == 0)
{
rgbquad.rgbBlue = 0;
rgbquad.rgbGreen = 0;
rgbquad.rgbRed = 0;
}
Marshal.StructureToPtr(rgbquad, ptr, false);
}
// create the destination Bitmap object
Bitmap bitmap = new Bitmap(width, height, PixelFormat.Format32bppArgb);
// get the HDCs and select the HBITMAP
Graphics graphics = Graphics.FromImage(bitmap);
IntPtr hdcDest = graphics.GetHdc();
IntPtr hdcSrc = Gdi32.CreateCompatibleDC(hdcDest);
IntPtr hobjOriginal = Gdi32.SelectObject(hdcSrc, hbitmap);
// render the bitmap using AlphaBlend
BLENDFUNCTION blendfunction = new BLENDFUNCTION(BLENDFUNCTION.AC_SRC_OVER, 0, 0xFF, BLENDFUNCTION.AC_SRC_ALPHA);
Gdi32.AlphaBlend(hdcDest, 0, 0, width, height, hdcSrc, 0, 0, width, height, blendfunction);
// clean up
Gdi32.SelectObject(hdcSrc, hobjOriginal);
Gdi32.DeleteDC(hdcSrc);
graphics.ReleaseHdc(hdcDest);
graphics.Dispose();
Gdi32.DeleteObject(hbitmap);
return bitmap;
}
}
It seems copying pixel by pixel was the solution. The following seems to be pixel-perfect equal to the explorer one.
public static Image GetIcon(string fileName, int size)
{
IShellItem shellItem;
Shell32.SHCreateItemFromParsingName(fileName, IntPtr.Zero, Shell32.IShellItem_GUID, out shellItem);
IntPtr hbitmap;
((IShellItemImageFactory)shellItem).GetImage(new SIZE(size, size), 0x0, out hbitmap);
// get the info about the HBITMAP returned by the shell
DIBSECTION dibsection = new DIBSECTION();
Gdi32.GetObjectDIBSection(hbitmap, Marshal.SizeOf(dibsection), ref dibsection);
int width = dibsection.dsBm.bmWidth;
int height = dibsection.dsBm.bmHeight;
// create the destination Bitmap object
Bitmap bitmap = new Bitmap(width, height, PixelFormat.Format32bppArgb);
for (int x = 0; x < dibsection.dsBmih.biWidth; x++)
{
for (int y = 0; y < dibsection.dsBmih.biHeight; y++)
{
int i = y * dibsection.dsBmih.biWidth + x;
IntPtr ptr = dibsection.dsBm.bmBits + (i * Marshal.SizeOf(typeof(RGBQUAD)));
var rgbquad = (RGBQUAD)Marshal.PtrToStructure(ptr, typeof(RGBQUAD));
if (rgbquad.rgbReserved != 0)
bitmap.SetPixel(x, y, Color.FromArgb(rgbquad.rgbReserved, rgbquad.rgbRed, rgbquad.rgbGreen, rgbquad.rgbBlue));
}
}
Gdi32.DeleteObject(hbitmap);
return bitmap;
}
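SetPixel in a double loop is correct but gets slow for larger sizes. A faster variant (just a sketch, assuming the DIB bits are top-down 32bpp BGRA as the fields above suggest; a bottom-up DIB would need the rows reversed) copies whole rows into a locked Bitmap instead:
// Sketch: bulk-copy the 32bpp BGRA rows instead of calling SetPixel per pixel.
// Fully transparent pixels keep alpha 0, so the explicit alpha check is not needed.
BitmapData bmpData = bitmap.LockBits(new Rectangle(0, 0, width, height),
    ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
byte[] row = new byte[width * 4];
for (int y = 0; y < height; y++)
{
    Marshal.Copy(dibsection.dsBm.bmBits + y * width * 4, row, 0, row.Length);
    Marshal.Copy(row, 0, bmpData.Scan0 + y * bmpData.Stride, row.Length);
}
bitmap.UnlockBits(bmpData);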
public unsafe Bitmap MedianFilter(Bitmap Img)
{
int Size =2;
List<byte> R = new List<byte>();
List<byte> G = new List<byte>();
List<byte> B = new List<byte>();
int ApetureMin = -(Size / 2);
int ApetureMax = (Size / 2);
BitmapData imageData = Img.LockBits(new Rectangle(0, 0, Img.Width, Img.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppRgb);
byte* start = (byte*)imageData.Scan0.ToPointer ();
for (int x = 0; x < imageData.Width; x++)
{
for (int y = 0; y < imageData.Height; y++)
{
for (int x1 = ApetureMin; x1 < ApetureMax; x1++)
{
int valx = x + x1;
if (valx >= 0 && valx < imageData.Width)
{
for (int y1 = ApetureMin; y1 < ApetureMax; y1++)
{
int valy = y + y1;
if (valy >= 0 && valy < imageData.Height)
{
Color tempColor = Img.GetPixel(valx, valy);// error come from here
R.Add(tempColor.R);
G.Add(tempColor.G);
B.Add(tempColor.B);
}
}
}
}
}
}
R.Sort();
G.Sort();
B.Sort();
Img.UnlockBits(imageData);
return Img;
}
I tried to do this, but I got the error "Bitmap region is already locked". Can anyone help me solve this? (The error position is highlighted.)
GetPixel is the slow way to access the image, and (as you noticed) it no longer works once the bitmap has been locked with LockBits. Why would you want to do that?
Check Using the LockBits method to access image data for some good insight into fast image manipulation.
In this case, use something like this instead:
int pixelSize = 4; /* Check below or the site I linked to and make sure this is correct */
byte* color = (byte*)imageData.Scan0.ToPointer() + (y * imageData.Stride) + x * pixelSize;
Note that this gives you the first byte for that pixel. Depending on the color format you are looking at (ARGB? RGB? ...) you need to access the following bytes as well. That seems to suit your use case anyway, since you just care about byte values, not the Color value.
So, after having some spare minutes, this is what I came up with (please take your time to understand and check it, I just made sure it compiles):
public void SomeStuff(Bitmap image)
{
var imageWidth = image.Width;
var imageHeight = image.Height;
var imageData = image.LockBits(new Rectangle(0, 0, imageWidth, imageHeight), ImageLockMode.ReadOnly, PixelFormat.Format32bppRgb);
var imageByteCount = imageData.Stride*imageData.Height;
var imageBuffer = new byte[imageByteCount];
Marshal.Copy(imageData.Scan0, imageBuffer, 0, imageByteCount);
for (int x = 0; x < imageWidth; x++)
{
for (int y = 0; y < imageHeight; y++)
{
var pixelColor = GetPixel(imageBuffer, imageData.Stride, x, y);
// Do your stuff
}
}
}
private static Color GetPixel(byte[] imageBuffer, int imageStride, int x, int y)
{
int pixelBase = y * imageStride + x * 4; // 4 bytes per pixel for Format32bppRgb
byte blue = imageBuffer[pixelBase];
byte green = imageBuffer[pixelBase + 1];
byte red = imageBuffer[pixelBase + 2];
return Color.FromArgb(red, green, blue);
}
This:
- Relies on the PixelFormat you used in your sample (regarding both the pixel size/bytes per pixel and the order of the values). If you change the PixelFormat, this will break.
- Doesn't need the unsafe keyword. I doubt that it makes a lot of difference, but you are free to use the pointer-based access instead; the method would be the same.
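For completeness, a pointer-based equivalent of the GetPixel helper would look roughly like this (same Format32bppRgb assumption, so 4 bytes per pixel in B, G, R, unused order; it needs the project compiled with unsafe enabled):
// Sketch of the same lookup via pointers, skipping the managed copy.
private static unsafe Color GetPixelUnsafe(BitmapData imageData, int x, int y)
{
    byte* pixel = (byte*)imageData.Scan0.ToPointer() + y * imageData.Stride + x * 4;
    return Color.FromArgb(pixel[2], pixel[1], pixel[0]);
}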