AForge: converting pixel data to UnmanagedImage in C#

I'm trying to use AForge in Unity and I'm having trouble converting the input data.
I have a 2-dimensional array storing pixel values that I need to convert to an UnmanagedImage. I came up with the following code, but I am not sure if it's the most efficient way:
img = UnmanagedImage.Create(sizx, sizy, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
for (int i = 0; i < sizx; i++)
    for (int j = 0; j < sizy; j++)
    {
        int index = i + j * sizx;
        img.SetPixel(i, j, new AForge.Imaging.RGB(t[index].r, t[index].g, t[index].b).Color);
    }
Any help appreciated!

I ended up using unsafe code and accessing UnmanagedImage.ImageData directly:
unsafe
{
    // data, src, input, pixelSize and offset come from the surrounding (modified) AForge code:
    // data exposes the destination buffer (Scan0), src indexes the flat source pixel
    // array 'input', pixelSize is bytes per pixel and offset is the per-line stride padding.
    byte* dst = (byte*)data.Scan0.ToPointer();
    for (int y = 0; y < Height; y++)
    {
        for (int x = 0; x < Width; x++, src++, dst += pixelSize)
        {
            dst[RGB.A] = input[src].A;
            dst[RGB.R] = input[src].R;
            dst[RGB.G] = input[src].G;
            dst[RGB.B] = input[src].B;
        }
        dst += offset;
    }
}
Since this can't be done from Unity directly, I had to modify the AForge library itself.
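If unsafe code is not an option in your Unity setup, a managed alternative is to fill a byte[] and copy it into the image's unmanaged memory with Marshal.Copy. This is only a sketch under the question's assumptions: a 24bpp image, sizx/sizy dimensions, and a flat source array t whose elements expose byte r, g, b fields.
// Sketch: fill an UnmanagedImage without unsafe code (24bpp RGB assumed).
var img = AForge.Imaging.UnmanagedImage.Create(
    sizx, sizy, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
int stride = img.Stride;                  // bytes per scan line, may include padding
byte[] buffer = new byte[stride * sizy];  // managed staging buffer
for (int y = 0; y < sizy; y++)
{
    for (int x = 0; x < sizx; x++)
    {
        int src = x + y * sizx;           // index into the flat source array
        int dst = y * stride + x * 3;     // 3 bytes per pixel for 24bpp
        buffer[dst + AForge.Imaging.RGB.R] = t[src].r;
        buffer[dst + AForge.Imaging.RGB.G] = t[src].g;
        buffer[dst + AForge.Imaging.RGB.B] = t[src].b;
    }
}
// Copy the whole buffer into the unmanaged image memory in one call.
System.Runtime.InteropServices.Marshal.Copy(buffer, 0, img.ImageData, buffer.Length);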

If you have your image represented as a byte[] (where each value is a pixel color value from 0 to 255), then you can use UnmanagedImage.FromByteArray to create an UnmanagedImage from those values.
This method should work in Unity.

Related

EMGU Gray TColor with UInt16 depth

I have a 12-bit grayscale camera and I want to use EMGU to process the image.
My problem is that I want to process the image at "UInt16" TDepth and not the usual "Byte".
So initially I create an empty 2D image:
Image<Gray, UInt16> OnImage = new Image<Gray, UInt16>(960, 1280);
Then I use a for loop to transfer my image from its 1D vector form into the 2D image:
for (int i = 1; i < 960; i++)
{
    for (int j = 1; j < 1280; j++)
    {
        OnImage[i, j] = MyImageVector[Counter];
        Counter++;
    }
}
where:
int[] MyImageVector = new int[1228800];
The problem is at the line:
OnImage[i, j] = MyImageVector[Counter];
where I get the following error message:
Cannot implicitly convert type "int" to "Emgu.CV.Structure.Gray"
Why is this happening?
Do you know any way that I can store int values in an Emgu Image object?
Any alternative workaround would also be helpful.
Thank you
I found an alternative solution to pass a 1D vector into a 2D EMGU Image:
Image<Gray, Single> _myImage = new Image<Gray, Single>(Width, Height);
Buffer.BlockCopy(MyVector, 0, _myImage.Data, 0, MyVector.Length * sizeof(Single));
This works much faster than the two for loops...
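For completeness, the compile error itself happens because Gray is a struct that wraps the intensity value, so the int has to be wrapped explicitly. And if you want to keep UInt16 depth instead of switching to Single, the same BlockCopy trick works against a ushort[] source. A sketch, assuming the values fit in 16 bits and with width/height standing in for your actual dimensions:
// Direct fix for the compile error: wrap the value in a Gray structure.
OnImage[i, j] = new Gray(MyImageVector[Counter]);
// Bulk alternative keeping UInt16 depth: Image<Gray, UInt16>.Data is a ushort[,,],
// so convert the vector to ushort[] and block-copy it in one call.
ushort[] shorts = Array.ConvertAll(MyImageVector, v => (ushort)v);
Image<Gray, UInt16> image16 = new Image<Gray, UInt16>(width, height);
Buffer.BlockCopy(shorts, 0, image16.Data, 0, shorts.Length * sizeof(ushort));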

Performing multiple image averaging in C#

I wish to calculate an average image from 3 different images of the same size that I have. I know this can be done with ease in MATLAB, but how do I go about it in C#? Also, is there an AForge.NET tool I can use directly for this purpose?
I have found an article on SO which might point you in the right direction. Here is the code (unsafe):
BitmapData srcData = bm.LockBits(
    new Rectangle(0, 0, bm.Width, bm.Height),
    ImageLockMode.ReadOnly,
    PixelFormat.Format32bppArgb);
int stride = srcData.Stride;
IntPtr Scan0 = srcData.Scan0;
long[] totals = new long[] { 0, 0, 0 };
int width = bm.Width;
int height = bm.Height;
unsafe
{
    byte* p = (byte*)(void*)Scan0;
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            for (int color = 0; color < 3; color++)
            {
                int idx = (y * stride) + x * 4 + color;
                totals[color] += p[idx];
            }
        }
    }
}
bm.UnlockBits(srcData);
int avgB = (int)(totals[0] / (width * height));
int avgG = (int)(totals[1] / (width * height));
int avgR = (int)(totals[2] / (width * height));
Here is the link to the article:
How to calculate the average rgb color values of a bitmap
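That article computes the average color of a single bitmap; for the actual goal of averaging three same-size images into one, a straightforward (if slow) GetPixel/SetPixel sketch could look like this, with img1, img2 and img3 standing in for your three bitmaps:
// Sketch: average three same-size bitmaps pixel by pixel.
Bitmap Average(Bitmap img1, Bitmap img2, Bitmap img3)
{
    var result = new Bitmap(img1.Width, img1.Height);
    for (int y = 0; y < img1.Height; y++)
    {
        for (int x = 0; x < img1.Width; x++)
        {
            Color c1 = img1.GetPixel(x, y);
            Color c2 = img2.GetPixel(x, y);
            Color c3 = img3.GetPixel(x, y);
            result.SetPixel(x, y, Color.FromArgb(
                (c1.R + c2.R + c3.R) / 3,
                (c1.G + c2.G + c3.G) / 3,
                (c1.B + c2.B + c3.B) / 3));
        }
    }
    return result;
}
If this turns out to be too slow, the same LockBits approach from the snippet above can be applied to all three images.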
ImageMagick can do this for you very simply - at the command-line you could type this:
convert *.bmp -evaluate-sequence mean output.jpg
You could equally calculate the median (instead of the mean) of a bunch of TIFF files and save in a PNG output file like this:
convert *.tif -evaluate-sequence median output.png
Easy, eh?
There are bindings for C/C++, Perl, PHP and all sorts of other language interfaces, including (as kindly pointed out by @dlemstra) the Magick.NET binding available here.
ImageMagick is powerful, free and available here.

How to convert from System.Drawing.Bitmap to grayscale, then to array of doubles (Double[,]), then back to grayscale Bitmap?

I need to perform some mathematical operations on photographs, and for that I need the floating-point grayscale version of an image (which might come from JPG, PNG or BMP files with various color depths).
I used to do that in Python using PIL and scipy.ndimage, and it was very straightforward to convert to grayscale with PIL and then to an array of floating-point numbers with numpy, but now I need to do something similar in C#, and I'm confused about how to do so.
I have read this very nice tutorial, which seems to be a recurring reference, but it only covers the "convert to grayscale" part. I am not sure how to get an array of doubles from a Bitmap, and then (at some point) convert it back to a System.Drawing.Bitmap for viewing.
I'm sure there are more optimal ways to do this.
As @Groo points out in the comments section, one could for instance use the LockBits method to write and read pixel colors to and from a Bitmap instance.
Going even further, one could use the graphics card of the computer to do the actual computations.
Furthermore, the method Color ToGrayscaleColor(Color color), which turns a color into its grayscale version, is not perceptually correct. There is a set of weights that actually need to be applied to the color components; I just used 1, 1, 1 ratios. That's acceptable for me and probably horrible for an artist or a scientist.
In the comments section, @plinth was nice enough to point to this question you should look at if you want to make a perceptually correct conversion: Converting RGB to grayscale/intensity
Just wanted to share this really easy to understand and implement solution:
First, a little helper to turn a Color into its grayscale version:
public static Color ToGrayscaleColor(Color color) {
    var level = (byte)((color.R + color.G + color.B) / 3);
    var result = Color.FromArgb(level, level, level);
    return result;
}
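If you want the perceptually weighted conversion mentioned above, a variant of the helper using the common BT.601 luma weights (the same ones that appear in the final code further down) might look like this sketch:
// Weighted grayscale helper using the common BT.601 luma coefficients.
public static Color ToGrayscaleColorWeighted(Color color) {
    var level = (byte)Math.Round(0.299 * color.R + 0.587 * color.G + 0.114 * color.B);
    return Color.FromArgb(level, level, level);
}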
Then for the color bitmap to grayscale bitmap conversion:
public static Bitmap ToGrayscale(Bitmap bitmap) {
    var result = new Bitmap(bitmap.Width, bitmap.Height);
    for (int x = 0; x < bitmap.Width; x++)
        for (int y = 0; y < bitmap.Height; y++) {
            var grayColor = ToGrayscaleColor(bitmap.GetPixel(x, y));
            result.SetPixel(x, y, grayColor);
        }
    return result;
}
The doubles part is quite easy. The Bitmap object is an in-memory representation of the actual image which you can use in various operations. The color depth and image format details only matter when loading and saving Bitmap instances to streams or files, so we needn't care about those at this point:
public static double[,] FromGrayscaleToDoubles(Bitmap bitmap) {
    var result = new double[bitmap.Width, bitmap.Height];
    for (int x = 0; x < bitmap.Width; x++)
        for (int y = 0; y < bitmap.Height; y++)
            result[x, y] = (double)bitmap.GetPixel(x, y).R / 255;
    return result;
}
And turning a double array back into a grayscale image:
public static Bitmap FromDoublesToGrayscale(double[,] doubles) {
    var result = new Bitmap(doubles.GetLength(0), doubles.GetLength(1));
    for (int x = 0; x < result.Width; x++)
        for (int y = 0; y < result.Height; y++) {
            int level = (int)Math.Round(doubles[x, y] * 255);
            if (level > 255) level = 255; // just to be sure
            if (level < 0) level = 0; // just to be sure
            result.SetPixel(x, y, Color.FromArgb(level, level, level));
        }
    return result;
}
The following lines:
if (level > 255) level = 255; // just to be sure
if (level < 0) level = 0; // just to be sure
are really there in case you operate on the doubles and want to allow room for small rounding errors.
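Putting the pieces together, a typical round trip might look like this sketch (the file paths are just placeholders):
// Sketch of the round trip: load, convert to grayscale, get the doubles,
// operate on them, then convert back and save. Paths are placeholders.
var original = new Bitmap(@"input.png");
var gray = ToGrayscale(original);
double[,] values = FromGrayscaleToDoubles(gray);
// ... do the math on 'values' here ...
var output = FromDoublesToGrayscale(values);
output.Save(@"output.png");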
The final code, based mostly on tips taken from the comments, specifically the LockBits part (blog post here) and the perceptual weighting of the R, G and B values (not paramount here, but something to know about):
private double[,] TransformaImagemEmArray(System.Drawing.Bitmap imagem) {
    // Converts the input image into an array of doubles
    // holding the grayscale values of the image.
    BitmapData bitmap_data = imagem.LockBits(
        new System.Drawing.Rectangle(0, 0, imagem.Width, imagem.Height),
        ImageLockMode.ReadOnly, imagem.PixelFormat);
    int pixelsize = System.Drawing.Image.GetPixelFormatSize(bitmap_data.PixelFormat) / 8;
    IntPtr pointer = bitmap_data.Scan0;
    int nbytes = bitmap_data.Height * bitmap_data.Stride;
    byte[] imagebytes = new byte[nbytes];
    System.Runtime.InteropServices.Marshal.Copy(pointer, imagebytes, 0, nbytes);
    double red;
    double green;
    double blue;
    double gray;
    var _grayscale_array = new Double[bitmap_data.Height, bitmap_data.Width];
    if (pixelsize >= 3) {
        for (int I = 0; I < bitmap_data.Height; I++) {
            for (int J = 0; J < bitmap_data.Width; J++) {
                int position = (I * bitmap_data.Stride) + (J * pixelsize);
                blue = imagebytes[position];
                green = imagebytes[position + 1];
                red = imagebytes[position + 2];
                gray = 0.299 * red + 0.587 * green + 0.114 * blue;
                _grayscale_array[I, J] = gray;
            }
        }
    }
    imagem.UnlockBits(bitmap_data);
    return _grayscale_array;
}
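For the opposite direction, a LockBits-based counterpart that writes a [height, width] array of 0-255 gray values back into a 24bpp Bitmap could look like this sketch (the method name is just a placeholder in the same style):
// Sketch: write an array of 0-255 gray values back into a 24bpp Bitmap.
private System.Drawing.Bitmap ArrayParaImagem(double[,] grays) {
    int height = grays.GetLength(0);
    int width = grays.GetLength(1);
    var bitmap = new System.Drawing.Bitmap(width, height,
        System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    BitmapData data = bitmap.LockBits(
        new System.Drawing.Rectangle(0, 0, width, height),
        ImageLockMode.WriteOnly, bitmap.PixelFormat);
    byte[] bytes = new byte[data.Height * data.Stride];
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            int level = (int)Math.Round(grays[i, j]);
            if (level > 255) level = 255;
            if (level < 0) level = 0;
            int position = i * data.Stride + j * 3;
            bytes[position] = (byte)level;     // blue
            bytes[position + 1] = (byte)level; // green
            bytes[position + 2] = (byte)level; // red
        }
    }
    System.Runtime.InteropServices.Marshal.Copy(bytes, 0, data.Scan0, bytes.Length);
    bitmap.UnlockBits(data);
    return bitmap;
}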

Converting a Bitmap image to a matrix

In C#, I need to convert an image that I have already loaded into a Bitmap into a matrix of the image's width and height that holds the Bitmap data as uint8 values. In other words, I want to place the Bitmap data inside a matrix and convert it to uint8, so I can do the calculations I intend to do on the matrix rows and columns.
Try something like this:
public Color[][] GetBitMapColorMatrix(string bitmapFilePath)
{
    bitmapFilePath = @"C:\9673780.jpg"; // example path; remove this line to use the parameter
    Bitmap b1 = new Bitmap(bitmapFilePath);
    int height = b1.Height;
    int width = b1.Width;
    Color[][] colorMatrix = new Color[width][];
    for (int i = 0; i < width; i++)
    {
        colorMatrix[i] = new Color[height];
        for (int j = 0; j < height; j++)
        {
            colorMatrix[i][j] = b1.GetPixel(i, j);
        }
    }
    return colorMatrix;
}
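Since the question asks for uint8 values rather than Color structs, a grayscale byte-matrix variant could look like the sketch below (GetPixel for clarity; the weights are the usual BT.601 luma coefficients):
// Sketch: build a [width, height] matrix of grayscale (uint8) values.
public byte[,] GetGrayscaleMatrix(Bitmap bitmap)
{
    var matrix = new byte[bitmap.Width, bitmap.Height];
    for (int x = 0; x < bitmap.Width; x++)
    {
        for (int y = 0; y < bitmap.Height; y++)
        {
            Color c = bitmap.GetPixel(x, y);
            matrix[x, y] = (byte)Math.Round(0.299 * c.R + 0.587 * c.G + 0.114 * c.B);
        }
    }
    return matrix;
}
For anything bigger than a thumbnail, swap the GetPixel calls for LockBits as shown in the other answers on this page.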

Locate pixel by color and return its point

I need to locate the point/coordinates (x, y) of the first pixel found with the specified color.
I have used the GetPixel() method, but it's a bit too slow, so I am looking into LockBits. However, I can't figure out if this actually solves my problem. Can I return the point of the found pixel using LockBits?
Here is my current code:
public Point FindPixel(Image Screen, Color ColorToFind)
{
    Bitmap bit = new Bitmap(Screen);
    BitmapData bmpData = bit.LockBits(new Rectangle(0, 0, bit.Width, bit.Height),
        ImageLockMode.ReadWrite,
        PixelFormat.Format32bppPArgb);
    unsafe
    {
        byte* ptrSrc = (byte*)bmpData.Scan0;
        for (int y = 0; y < bmpData.Height; y++)
        {
            for (int x = 0; x < bmpData.Width; x++)
            {
                Color c = bit.GetPixel(x, y);
                if (c == ColorToFind)
                    return new Point(x, y);
            }
        }
    }
    bit.UnlockBits(bmpData);
    return new Point(0, 0);
}
You didn't stop using GetPixel() so you are not ahead. Write it like this instead:
int IntToFind = ColorToFind.ToArgb();
int height = bmpData.Height; // These properties are slow so read them only once
int width = bmpData.Width;
unsafe
{
    for (int y = 0; y < height; y++)
    {
        int* pline = (int*)bmpData.Scan0 + y * bmpData.Stride / 4;
        for (int x = 0; x < width; x++)
        {
            if (pline[x] == IntToFind)
                return new Point(x, bit.Height - y - 1);
        }
    }
}
The odd-looking Point constructor is necessary because lines are stored upside-down in a bitmap. And don't return new Point(0, 0) on failure; that's a valid pixel.
There are a few things wrong with your code:
You are using PixelFormat.Format32bppPArgb - you should use the pixel format of the image; if they don't match, all pixels will be copied under the hood anyway.
You are still using GetPixel, so all this hassle will not give you any advantage.
To use LockBits efficiently, you basically want to lock your image and then use unsafe pointers to read the pixel values. The code will vary a bit for different pixel formats; assuming you really do have a 32bpp format with blue in the LSB, your code could look like this:
for (int y = 0; y < bmpData.Height; ++y)
{
    byte* ptrSrc = (byte*)(bmpData.Scan0 + y * bmpData.Stride);
    int* pixelPtr = (int*)ptrSrc;
    for (int x = 0; x < bmpData.Width; ++x)
    {
        Color col = Color.FromArgb(*pixelPtr);
        if (col == ColorToFind) return new Point(x, y);
        ++pixelPtr; // Increment the pointer by one int (4 bytes)
    }
}
A few remarks:
For each line, a new ptrSrc is computed from Scan0 plus the stride. This is because just incrementing the pointer might fail if Stride != bpp * width, which may be the case.
I assumed that blue is in the LSB and alpha in the MSB, which I think was not the case, because those GDI pixel formats were... strange ;). Just make sure you check it; if it's the other way around, reverse the bytes before using the FromArgb() method.
If your pixel format is 24bpp, it's a bit more tricky, because you can't use an int pointer and increment it by 1 (4 bytes), for obvious reasons; see the sketch below.
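A minimal sketch of the 24bpp case, stepping 3 bytes per pixel and comparing the channels directly (bmpData and ColorToFind as in the code above; the B, G, R byte order is the usual GDI+ layout):
// Sketch: search a 24bpp bitmap for a color, stepping 3 bytes per pixel.
unsafe
{
    for (int y = 0; y < bmpData.Height; ++y)
    {
        byte* row = (byte*)bmpData.Scan0 + y * bmpData.Stride;
        for (int x = 0; x < bmpData.Width; ++x)
        {
            byte b = row[x * 3];     // blue
            byte g = row[x * 3 + 1]; // green
            byte r = row[x * 3 + 2]; // red
            if (r == ColorToFind.R && g == ColorToFind.G && b == ColorToFind.B)
                return new Point(x, y);
        }
    }
}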
