Emgucv crop detected shape automatically - c#

I have an application which is going to be used to crop the blank space from scanned documents, for example this image. What I want to do is extract only the card and remove all the white/blank area. I'm using Emgu CV's FindContours to do this, and at the moment I'm able to find the card contour plus some noise captured by the scanner, as you can see below.
My question is: how can I crop the largest contour found, or how can I extract it by removing the other contours and the blank/white space? Or maybe it is possible with the contour index?
Edit: Maybe another possible solution is if it is possible to draw the contour to another pictureBox.
Here is the code that I'm using:
Image<Bgr, byte> imgInput;
Image<Bgr, byte> imgCrop;

private void abrirToolStripMenuItem_Click(object sender, EventArgs e)
{
    try
    {
        OpenFileDialog dialog = new OpenFileDialog();
        if (dialog.ShowDialog() == DialogResult.OK)
        {
            imgInput = new Image<Bgr, byte>(dialog.FileName);
            pictureBox1.Image = imgInput.Bitmap;
            imgCrop = imgInput;
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

private void shapeToolStripMenuItem_Click(object sender, EventArgs e)
{
    if (imgCrop == null)
    {
        return;
    }
    try
    {
        var temp = imgCrop.SmoothGaussian(5).Convert<Gray, byte>().ThresholdBinaryInv(new Gray(230), new Gray(255));
        VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
        Mat m = new Mat();
        CvInvoke.FindContours(temp, contours, m, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
        for (int i = 0; i < contours.Size; i++)
        {
            double perimeter = CvInvoke.ArcLength(contours[i], true);
            VectorOfPoint approx = new VectorOfPoint();
            CvInvoke.ApproxPolyDP(contours[i], approx, 0.04 * perimeter, true);
            CvInvoke.DrawContours(imgCrop, contours, i, new MCvScalar(0, 0, 255), 2);
            pictureBox2.Image = imgCrop.Bitmap;
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

I'll give you my answer in C++, but the same operations should be available in Emgu CV.
I propose the following approach:
1. Segment (that is, separate) the target object using the HSV color space.
2. Calculate a binary mask for the object of interest.
3. Get the biggest blob in the binary mask; this should be the card.
4. Compute the bounding box of the card.
5. Crop the card out of the input image.
Ok, first get (or read) the input image. Apply a median blur filter; it will help get rid of the high-frequency noise (the little grey blobs) that you see on the input. The main parameter to adjust is the size of the kernel (or filter aperture). Be careful, though – a high value will result in an aggressive effect and will likely destroy your image:
//read input image:
std::string imageName = "C://opencvImages//yoshiButNotYoshi.png";
cv::Mat imageInput = cv::imread( imageName );
//apply a median blur filter, the size of the kernel is 5 x 5:
cv::Mat blurredImage;
cv::medianBlur ( imageInput, blurredImage, 5 );
This is the result of the blur filter (The embedded image is resized):
Next, segment the image. Exploit the fact that the background is white, and everything else (the object of interest, mainly) has some color information. You can use the HSV color space. First, convert the BGR image into HSV:
//BGR to HSV conversion:
cv::Mat hsvImg;
cv::cvtColor( blurredImage, hsvImg, CV_BGR2HSV );
The HSV color space encodes color information differently than the typical BGR/RGB color space. Its advantage over other color models pretty much depends on the application, but in general, it is more robust while working with hue gradients. I'll try to get an HSV-based binary mask for the object of interest.
In a binary mask, everything you are interested in on the input image is colored white, and everything else is black (or vice versa). You can obtain this mask using the inRange function. However, you must specify the color ranges that will be rendered white (or black) in the output mask. For your image, and using the HSV color model, those values are:
cv::Scalar minColor( 0, 0, 100 ); //the lower range of colors
cv::Scalar maxColor( 0, 0, 255 ); //the upper range of colors
Now, get the binary mask:
//prepare the binary mask:
cv::Mat binaryMask;
//create the binary mask using the specified range of color
cv::inRange( hsvImg, minColor, maxColor, binaryMask );
//invert the mask:
binaryMask = 255 - binaryMask;
You get this image:
Now, you can get rid of some of the noise (that survived the blur filter) via morphological filtering. Morphological filters are, essentially, logical rules applied on binary (or gray) images. They take a "neighborhood" of pixels in the input and apply logical functions to get an output. They are quite handy while cleaning up binary images. I'll apply a series of logical filters to achieve just that.
I'll first erode the image and then dilate it using 3 iterations. The structuring element is a rectangle of size 3 x 3:
//apply some morphology to clean the binary mask a little bit:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
int morphIterations = 3;
cv::morphologyEx( binaryMask, binaryMask, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphIterations );
cv::morphologyEx( binaryMask, binaryMask, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphIterations );
You get this output. Check out how the noisy blobs are mostly gone:
Now, comes the cool part. You can loop through all the contours in this image and get the biggest of them all. That's a typical operation that I constantly perform, so, I've written a function that does that. It is called findBiggestBlob. I'll present the function later. Check out the result you get after finding and extracting the biggest blob:
//find the biggest blob in the binary image:
cv::Mat biggestBlob = findBiggestBlob( binaryMask );
You get this:
Now, you can get the bounding box of the biggest blob using boundingRect:
//Get the bounding box of the biggest blob:
cv::Rect bBox = cv::boundingRect( biggestBlob );
Let's draw the bounding box on the input image:
cv::Mat imageClone = imageInput.clone();
cv::rectangle( imageClone, bBox, cv::Scalar(255,0,0), 2 );
Finally, let's crop the card out of the input image:
cv::Mat croppedImage = imageInput( bBox );
This is the cropped output:
This is the code for the findBiggestBlob function. The idea is just to compute all the contours in the binary input, calculate their area and store the contour with the largest area of the bunch:
//Function to get the largest blob in a binary image:
cv::Mat findBiggestBlob( cv::Mat &inputImage ){

    cv::Mat biggestBlob = inputImage.clone();

    double largest_area = 0;
    int largest_contour_index = 0;

    std::vector< std::vector<cv::Point> > contours; // Vector for storing contours
    std::vector< cv::Vec4i > hierarchy;

    // Find the contours in the image
    cv::findContours( biggestBlob, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );

    for( int i = 0; i < (int)contours.size(); i++ ) {
        //Find the area of the contour
        double a = cv::contourArea( contours[i], false );
        //Store the index of the largest contour:
        if( a > largest_area ){
            largest_area = a;
            largest_contour_index = i;
        }
    }

    //Once you get the biggest blob, paint it black:
    cv::Mat tempMat = biggestBlob.clone();
    cv::drawContours( tempMat, contours, largest_contour_index, cv::Scalar(0),
                      CV_FILLED, 8, hierarchy );

    //Erase the smaller blobs:
    biggestBlob = biggestBlob - tempMat;
    tempMat.release();

    return biggestBlob;
}
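If you want to stay in Emgu CV on the C# side, a rough equivalent of the crop step could look like the sketch below. This is only an illustration, not tested against your images; it reuses the imgInput / pictureBox2 names and the threshold values from your own code:
// Sketch only: threshold as in the question, keep the largest contour, crop its bounding box.
var binary = imgInput.SmoothGaussian(5)
                     .Convert<Gray, byte>()
                     .ThresholdBinaryInv(new Gray(230), new Gray(255));

using (var contours = new VectorOfVectorOfPoint())
using (var hierarchy = new Mat())
{
    CvInvoke.FindContours(binary, contours, hierarchy,
        Emgu.CV.CvEnum.RetrType.External,
        Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);

    double largestArea = 0;
    int largestIndex = -1;
    for (int i = 0; i < contours.Size; i++)
    {
        double area = CvInvoke.ContourArea(contours[i]);
        if (area > largestArea)
        {
            largestArea = area;
            largestIndex = i;
        }
    }

    if (largestIndex >= 0)
    {
        // Bounding box of the biggest contour, then crop through an ROI copy.
        Rectangle box = CvInvoke.BoundingRectangle(contours[largestIndex]);
        Image<Bgr, byte> cropped = imgInput.Copy(box);
        pictureBox2.Image = cropped.Bitmap;
    }
}
If you want to preview the crop area, you can also draw the same bounding box on the original image before copying the ROI.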

Related

Morphological Operations On Image

I am currently doing a project in which I am trying to identify humans based on their hand vascular pattern in C# using Emgu CV.
The gray-scale image of the hand was first processed using the Adaptive Threshold function.
Now I want to create a mask of the image using the morphological operations.
The purpose is to remove the noise from the image.
This is the adaptive-thresholded image:
Kindly guide me on which function I should use and how to use it.
The code here is in C++. It shouldn't be difficult to port to C#, since it's mostly OpenCV function calls. You can use it as a guideline. Sorry about that.
You can apply an open operation with a small kernel to remove most of the noise:
Mat1b opened;
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(thresholded, opened, MORPH_OPEN, kernel);
As you can see, some noise is still present, and you can't remove it with other morphological operations. You can simply consider the largest blob as the correct one (in green here):
Then you can floodfill the inside of the hand (in gray here):
And set to 0 all the values in the original image where the corresponding mask pixel is not the same color as the inside of the hand:
This is the full code (again, it's C++):
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main(int, char**)
{
    // Load grayscale image
    Mat1b thresholded = imread("path_to_image", IMREAD_GRAYSCALE);

    // Get rid of JPEG compression artifacts
    thresholded = thresholded > 100;

    // Needed so findContours handles border contours correctly
    Mat1b bin;
    copyMakeBorder(thresholded, bin, 1, 1, 1, 1, BORDER_CONSTANT, 0);

    // Apply morphological operation "open"
    Mat1b opened;
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
    morphologyEx(bin, opened, MORPH_OPEN, kernel);

    // Find contours
    vector<vector<Point>> contours;
    findContours(bin.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_NONE, Point(-1, -1)); // Point(-1,-1) accounts for previous copyMakeBorder

    // Keep largest contour
    int size_largest = 0;
    int idx_largest = -1;
    for (int i = 0; i < (int)contours.size(); ++i)
    {
        if ((int)contours[i].size() > size_largest)
        {
            size_largest = (int)contours[i].size();
            idx_largest = i;
        }
    }

    Mat3b dbg;
    cvtColor(opened, dbg, COLOR_GRAY2BGR);

    // Black initialized mask
    Mat1b mask(thresholded.rows, thresholded.cols, uchar(0));
    if (idx_largest >= 0)
    {
        drawContours(dbg, contours, idx_largest, Scalar(0, 255, 0), CV_FILLED);
        // Draw the outline of the largest contour on the mask
        drawContours(mask, contours, idx_largest, Scalar(255), 1);
    }

    // Get a point inside the contour
    Moments m = moments(contours[idx_largest]);
    Point2f inside(m.m10 / m.m00, m.m01 / m.m00);
    floodFill(mask, inside, Scalar(127));

    Mat3b result;
    cvtColor(thresholded, result, COLOR_GRAY2BGR);
    result.setTo(Scalar(0), mask != 127);

    imshow("Opened", opened);
    imshow("Contour", dbg);
    imshow("Result", result);
    waitKey();

    return 0;
}
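As a rough idea of how the key step could be ported to Emgu CV in C# (this is only a sketch under the assumption that thresholded is your 8 bpp adaptive-thresholded image, not part of the tested C++ answer above):
// Sketch: morphological "open" in Emgu CV. "thresholded" stands for the 8 bpp
// adaptive-thresholded image from the question.
Mat kernel = CvInvoke.GetStructuringElement(
    Emgu.CV.CvEnum.ElementShape.Ellipse, new Size(3, 3), new Point(-1, -1));
Mat opened = new Mat();
CvInvoke.MorphologyEx(thresholded, opened, Emgu.CV.CvEnum.MorphOp.Open, kernel,
    new Point(-1, -1), 1, Emgu.CV.CvEnum.BorderType.Constant, new MCvScalar(0));
// From here, FindContours plus keeping the largest contour works the same way as in
// the C# sketch shown for the previous question.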

C# Convert an image from color to a black and white

I've tried several examples from here but have not found anything that works.
I need to be able to convert a color image to black and white so that I can take that data and send it to a thermal printer.
Getting the image from color to black and white seems to be the trouble, as I've found no methods in the C# libraries.
The image I'm testing with is PixelFormat.Format32bppArgb and I believe I want to convert that down to PixelFormat.Format1bppIndexed.
EDIT: Not sure how I can make it any clearer. I don't want greyscale, I want black and white, which is what PixelFormat.Format1bppIndexed is.
You can do it with ImageMagick which is installed on most Linux distros and is available for free for OSX (ideally via homebrew) and also for Windows from here.
If you start with this smooth greyscale ramp:
At the command line, you can use this for Floyd-Steinberg dithering:
convert grey.png -dither FloydSteinberg -monochrome fs.bmp
or, this for Riemersma dithering:
convert grey.png -dither Riemersma -monochrome riem.bmp
The Ordered Dither that Glenn was referring to is available like this with differing tile options:
convert grey.png -ordered-dither o8x8 -monochrome od8.bmp
convert grey.png -ordered-dither o2x2 -monochrome od2.bmp
A check of the format shows it is 1bpp with a 2-colour palette:
identify -verbose riem.bmp
Image: riem.bmp
  Format: BMP (Microsoft Windows bitmap image)
  Class: PseudoClass
  Geometry: 262x86+0+0
  Units: PixelsPerCentimeter
  Type: Bilevel
  Base type: Bilevel                         <--- 1 bpp
  Endianess: Undefined
  Colorspace: Gray
  Depth: 1-bit                               <--- 1 bpp
  Channel depth:
    gray: 1-bit
  Channel statistics:
    Pixels: 22532
    Gray:
      min: 0 (0)
      max: 1 (1)
      mean: 0.470486 (0.470486)
      standard deviation: 0.499128 (0.499128)
      kurtosis: -1.98601
      skewness: 0.118261
      entropy: 0.997485
  Colors: 2
  Histogram:
    11931: (  0,  0,  0) #000000 gray(0)
    10601: (255,255,255) #FFFFFF gray(255)
  Colormap entries: 2
  Colormap:
    0: (  0,  0,  0) #000000 gray(0)         <--- colourmap has only black...
    1: (255,255,255) #FFFFFF gray(255)       <--- ... and white
If you start with a colour image like this:
and process like this:
convert colour.png -ordered-dither o8x8 -monochrome od8.bmp
you will get this
As Glenn says, there are C# bindings for ImageMagick - or you can just use the above commands in a batch file, or use the C# equivalent of the system() call to execute the above ImageMagick commands.
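If you go the batch-file/system() route from C#, the call could look roughly like this (a sketch only; file names are placeholders, and on ImageMagick 7 the executable may be called magick rather than convert):
// Sketch: run the same ImageMagick command from C#.
var psi = new System.Diagnostics.ProcessStartInfo
{
    FileName = "convert",   // "magick" on ImageMagick 7
    Arguments = "colour.png -ordered-dither o8x8 -monochrome od8.bmp",
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var proc = System.Diagnostics.Process.Start(psi))
{
    proc.WaitForExit();
}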
Imagemagick can convert images to black-and-white by various methods including dithering (-dither) and ordered-dithering (-ordered-dither). I generally use the command-line interface, but there is a C# binding called magick.net that you might try. See magick.codeplex.com.
For some examples, see this Q&A at codegolf.stackexchange.com
You can use this code to convert an image to black and white.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace Converting_Image_to__Black_and_White
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        public static int siyahbeyazsinirnoktasi = 0;   // black/white threshold value
        public static string DosyaYolu = "";            // selected file path

        #region operations on the image (black and white)
        Bitmap BlackandWhite(Bitmap Goruntu)
        {
            // create the Bitmap that will hold the result
            Bitmap yeniGoruntu = new Bitmap(Goruntu.Width, Goruntu.Height);
            double toplampikselsayisi = Goruntu.Width * Goruntu.Height;
            int GriTonlama;
            for (int i = 0; i < Goruntu.Width; i++)        // scan the image horizontally
            {
                for (int j = 0; j < Goruntu.Height; j++)   // scan the image vertically
                {
                    // read the pixel color through the Color class
                    Color Pixel = Goruntu.GetPixel(i, j);
                    // formula to convert the sampled color value to a gray tone
                    GriTonlama = (Pixel.R + Pixel.G + Pixel.B) / 3;
                    if (GriTonlama < siyahbeyazsinirnoktasi)
                    {
                        // below the threshold: write a black pixel into the new image
                        yeniGoruntu.SetPixel(i, j, Color.FromArgb(0, 0, 0));
                    }
                    if (GriTonlama >= siyahbeyazsinirnoktasi)
                    {
                        // at or above the threshold: write a white pixel into the new image
                        yeniGoruntu.SetPixel(i, j, Color.FromArgb(255, 255, 255));
                    }
                }
            }
            return yeniGoruntu;
        }
        #endregion

        private void btnLoadImage_Click(object sender, EventArgs e)
        {
            FolderBrowserDialog Klasor = new FolderBrowserDialog();
            openFileDialog1.Title = "Select an image file.";
            openFileDialog1.Filter = "Image files (*.jpg)|*.jpg|All files (*.*)|*.*";
            if (openFileDialog1.ShowDialog() == DialogResult.OK)
            {
                DosyaYolu = openFileDialog1.FileName;
                var dosyaboyutu = new FileInfo(DosyaYolu).Length;
                if (dosyaboyutu <= 500000)
                {
                    pictureBox1.Image = new Bitmap(openFileDialog1.OpenFile());
                    btnConvertBlackandWhite.Enabled = true;
                    label1.Visible = true;
                    label2.Visible = true;
                    label3.Visible = true;
                    label4.Visible = true;
                    label5.Visible = true;
                    label6.Visible = true;
                    trackBar1.Visible = true;
                }
                else
                {
                    MessageBox.Show("The selected image must be smaller than 500 KB.");
                }
            }
        }

        private void btnConvertBlackandWhite_Click(object sender, EventArgs e)
        {
            pictureBox1.Image = BlackandWhite(new Bitmap(DosyaYolu));
            btnSave.Enabled = true;
        }

        private void trackBar1_Scroll(object sender, EventArgs e)
        {
            siyahbeyazsinirnoktasi = trackBar1.Value;
            label3.Text = Convert.ToString(siyahbeyazsinirnoktasi);
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            label3.Text = "130";
            siyahbeyazsinirnoktasi = 130;
        }

        private void btnSave_Click(object sender, EventArgs e)
        {
            Image pngoptikform = new Bitmap(pictureBox1.Image);
            // create a new save dialog
            SaveFileDialog sf = new SaveFileDialog();
            sf.Filter = "Image file (*.jpg)|*.jpg";   // save as .jpg
            sf.Title = "Save";                        // dialog title
            sf.CheckPathExists = true;
            sf.DefaultExt = "jpg";
            sf.FilterIndex = 1;
            DialogResult sonuc = sf.ShowDialog();
            if (sonuc == DialogResult.OK)
            {
                if (sf.FilterIndex == 1)
                {
                    pngoptikform.Save(sf.FileName);
                    System.Diagnostics.Process.Start(sf.FileName);
                }
            }
        }
    }
}
I believe you can get fairly good results with a decent dithering algorithm. There's a similar post here; please see if it helps you:
Converting a bitmap to monochrome
I'd personally do this in two steps:
Convert the image to an 8-bit image with a palette of two colours.
Convert the 8-bit image to a 1-bit image.
The first step I already explained before, in this answer. (You may want to use more detailed dithering methods as described in the other answers here, but even then the answer is needed for its methods to convert and manipulate images as byte arrays.)
The basic method detailed there is:
Paint the image on a new 32bpp ARGB image so you have a predictable four-byte-per-pixel data structure.
Extract the image's bytes.
Make a colour out of every block of 4 bytes, match that to the closest match on a given palette, and store the result of that matching in a byte array (with exactly width * height bytes).
Make a new image using the 8-bit data array and the used palette.
That'll give you your picture converted to a black and white 8-bit image. Now, all we need to do to get a 1-bit image is add a new step before the last one, where we compact the 8-bit data to 1-bit data, and then make the final call to the BuildImage function with PixelFormat.Format1bppIndexed instead.
Here is the function to reduce the image to a lower bit length. It requires the original image data and stride, and will return the converted image data and the new stride.
Note: I'm not sure what the bit order inside the data bytes is for normal .NET 1-bit images, since I have only used this function to convert custom game file formats, so you'll just need to test it to see what you need to pass in the bigEndian parameter. If you give the wrong value, each block of 8 pixels will be left-right mirrored, so it should be obvious in the result.
/// <summary>
/// Converts given raw image data for a paletted 8-bit image to a lower amount of bits per pixel.
/// </summary>
/// <param name="data8bit">The eight bit per pixel image data</param>
/// <param name="width">The width of the image</param>
/// <param name="height">The height of the image</param>
/// <param name="bitsLength">The new amount of bits per pixel</param>
/// <param name="bigEndian">True if the bits in the new image data are to be stored as big-endian.</param>
/// <param name="stride">Stride used in the original image data. Will be adjusted to the new stride value.</param>
/// <returns>The image data converted to the requested amount of bits per pixel.</returns>
public static Byte[] ConvertFrom8Bit(Byte[] data8bit, Int32 width, Int32 height, Int32 bitsLength, Boolean bigEndian, ref Int32 stride)
{
    Int32 parts = 8 / bitsLength;
    // Amount of bytes to write per width. This rounds the bits up to the nearest byte.
    Int32 newStride = ((bitsLength * width) + 7) / 8;
    // Bit mask for reducing original data to actual bits maximum.
    // Should not be needed if data is correct, but eh.
    Int32 bitmask = (1 << bitsLength) - 1;
    Byte[] dataXbit = new Byte[newStride * height];
    // Actual conversion process.
    for (Int32 y = 0; y < height; y++)
    {
        for (Int32 x = 0; x < width; x++)
        {
            // This will hit the same byte multiple times
            Int32 indexXbit = y * newStride + x / parts;
            // This will always get a new index
            Int32 index8bit = y * stride + x;
            // Amount of bits to shift the data to get to the current pixel data
            Int32 shift = (x % parts) * bitsLength;
            // Reversed for big-endian
            if (bigEndian)
                shift = 8 - shift - bitsLength;
            // Get data, reduce to bit rate, shift it and store it.
            dataXbit[indexXbit] |= (Byte)((data8bit[index8bit] & bitmask) << shift);
        }
    }
    stride = newStride;
    return dataXbit;
}
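Purely as an illustration of the final step (not part of the answer above), a usage sketch might look like this. It assumes data8bit already holds one palette index per pixel (0 = black, 1 = white) from the earlier matching step, and it writes the packed rows into a Format1bppIndexed bitmap directly instead of going through the BuildImage helper from the linked answer:
// Needs System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices.
// Assumption: data8bit holds one palette index per pixel, 0 = black, 1 = white.
Int32 stride = width;   // 8-bit source data: one byte per pixel
// bigEndian = true matches GDI+'s most-significant-bit-first layout for 1bpp,
// but as noted above it is worth verifying on your own data.
Byte[] data1bit = ConvertFrom8Bit(data8bit, width, height, 1, true, ref stride);

Bitmap result = new Bitmap(width, height, PixelFormat.Format1bppIndexed);
ColorPalette pal = result.Palette;
pal.Entries[0] = Color.Black;
pal.Entries[1] = Color.White;
result.Palette = pal;

BitmapData bd = result.LockBits(new Rectangle(0, 0, width, height),
    ImageLockMode.WriteOnly, PixelFormat.Format1bppIndexed);
// Copy row by row, since the Bitmap's stride may be padded beyond the packed stride.
for (Int32 y = 0; y < height; y++)
    Marshal.Copy(data1bit, y * stride, IntPtr.Add(bd.Scan0, y * bd.Stride), stride);
result.UnlockBits(bd);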
The term for an image displayed without color information is grayscale.
A quick search for "color to gray scale converter c#" yielded the following web site: http://www.codeproject.com/Questions/315939/How-To-Convert-Grayscale-Image-to-Color-Image-in-c

Fillholes function in Aforge

I need to use the FillHoles function of AForge, which accepts a binary image. I set every pixel to black or white using the following code in C#:
bitmapimage.SetPixel(i, j, Color.FromArgb(255,255,255)); // for white pixel
bitmapimage.SetPixel(i, j, Color.FromArgb(0,0,0)); // for black pixel
But when I apply the FillHoles function to the bitmap image, I get this exception:
"Source pixel format is not supported by the filter"
Could anyone kindly help explain why I am getting this exception? Is the bitmap not converted to binary just by using SetPixel?
Just changing the pixel colors will not change the pixel format of your image.
You first need to make sure that you have a gray scale image using some gray scale filter, then make sure that the gray scale image is binary through some threshold filter. Once the image has been pre-processed using these steps, you may apply the FillHoles filter.
AForge.NET offers helper classes to merge several filters, so you can combine all three filters into one total filter using the FiltersSequence class.
Assuming that your original Bitmap image is named bitmap, you can then apply the fill holes filter for example like this:
var filter = new FiltersSequence(Grayscale.CommonAlgorithms.BT709,
                                 new Threshold(100), new FillHoles());
var newBitmap = filter.Apply(bitmap);
AForge FillHoles Class
The filter allows to fill black holes in white object in a binary image. It is possible to specify maximum holes' size to fill using MaxHoleWidth and MaxHoleHeight properties.
The filter accepts binary image only, which are represented as 8 bpp images.
Sample usage:
C#
// create and configure the filter
FillHoles filter = new FillHoles( );
filter.MaxHoleHeight = 20;
filter.MaxHoleWidth = 20;
filter.CoupledSizeFiltering = false;
// apply the filter
Bitmap result = filter.Apply( image );
The above was found at http://www.aforgenet.com/framework/docs/html/68bd57bd-1fd6-6c4e-4500-ed4726bc836e.htm
You have to convert your bitmapImage to a binary image represented as an 8 bpp image. Here is one way to do it.
UnmanagedImage grayImage;
if (bitmapImage.PixelFormat == PixelFormat.Format8bppIndexed)
{
    // already an 8 bpp indexed image, just wrap it
    grayImage = UnmanagedImage.FromManagedImage(bitmapImage);
}
else
{
    // convert to an 8 bpp grayscale image first
    grayImage = Grayscale.CommonAlgorithms.BT709.Apply(UnmanagedImage.FromManagedImage(bitmapImage));
}

Color tracking using EMGUcv

I am trying to make a colored object tracker which uses a binary image and a blob detector to follow the target, sort of like this: https://www.youtube.com/watch?v=9qky6g8NRmI . However, I cannot figure out how the ThresholdBinary() method works, or whether it is even the right one.
Here is a relevant bit of the code:
cam._SmoothGaussian(3);
blobDetector.Update(cam);
Image<Bgr, byte> binaryImage = cam.ThresholdBinary(new Bgr(145, 0, 145), new Bgr(0, 0, 0));
Image<Gray, byte> binaryImageGray = binaryImage.Convert<Gray, byte>();
blobTracker.Process(cam, binaryImageGray);
foreach (MCvBlob blob in blobTracker)
{
    cam.Draw((Rectangle)blob, new Bgr(0, 0, 255), 2);
}
When I display the binaryImage I do not even get blobs. I just get a black image.
Typically, the colored blob detection part of such an application works along the lines of:
1. Convert the image to HSV (hue, saturation, value) color space.
2. Filter the hue channel for all pixels with a hue value near the target value. Thresholding will typically give you all pixels with a value above or below the threshold; you are interested in the pixels near some target value.
3. Filter the obtained mask some more, possibly using the saturation/value channels or by removing small blobs. Ideally only the target blob remains.
Some sample code that aims to find a green object (hue ~50) such as the green ball shown in the video:
// 1. Convert the image to HSV
using (Image<Hsv, byte> hsv = original.Convert<Hsv, byte>())
{
    // 2. Obtain the 3 channels (hue, saturation and value) that compose the HSV image
    Image<Gray, byte>[] channels = hsv.Split();
    try
    {
        // 3. Remove all pixels from the hue channel that are not in the range [40, 60]
        CvInvoke.cvInRangeS(channels[0], new Gray(40).MCvScalar, new Gray(60).MCvScalar, channels[0]);
        // 4. Display the result
        imageBox1.Image = channels[0];
    }
    finally
    {
        channels[1].Dispose();
        channels[2].Dispose();
    }
}
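For what it's worth, on newer Emgu CV releases (3.x and later) the deprecated cvInRangeS call maps to CvInvoke.InRange. A rough sketch of the same hue filtering, assuming the same original image and imageBox1:
// Sketch: filter hue to [40, 60] with the newer API; everything outside becomes black.
using (Image<Hsv, byte> hsv = original.Convert<Hsv, byte>())
using (Mat mask = new Mat())
{
    CvInvoke.InRange(hsv,
        new ScalarArray(new MCvScalar(40, 0, 0)),
        new ScalarArray(new MCvScalar(60, 255, 255)),
        mask);
    imageBox1.Image = mask.ToImage<Gray, byte>();
}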

How to render an image with a color-keyed green mask in C#?

Trying to figure out the most elegant way to render an image inside of a specific color of mask in C# (via System.Drawing or equivalent that will work in both desktop and ASP.NET applications).
The mask image will contain green keys where the image should be 'painted'.
(Desired Result image below is not perfect, hand lasso'd...)
There are various techniques for this:
Scan pixel data and build a mask image (as already suggested by itsme86 and Moby Disk)
A variant of scanning that builds a clipping region from the mask and uses that when drawing (refer to this article by Bob Powell)
Use color keys to mask in the Graphics.DrawImage call.
I'll focus on the third option.
Assuming that the image color that you want to eliminate from your mask is Color.Lime, we can use ImageAttributes.SetColorKey to stop any of that color from being drawn during a call to Graphics.DrawImage like this:
using (Image background = Bitmap.FromFile("tree.png"))
using (Image masksource = Bitmap.FromFile("mask.png"))
using (var imgattr = new ImageAttributes())
{
    // set color key to Lime
    imgattr.SetColorKey(Color.Lime, Color.Lime);

    // Draw non-lime portions of mask onto original
    using (var g = Graphics.FromImage(background))
    {
        g.DrawImage(
            masksource,
            new Rectangle(0, 0, masksource.Width, masksource.Height),
            0, 0, masksource.Width, masksource.Height,
            GraphicsUnit.Pixel, imgattr
        );
    }

    // Do something with the composited image here...
    background.Save("Composited.png");
}
And the results:
You can use the same technique (with color key on Color.Fuchsia) if you want to put those bits of tree into another image.
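A minimal sketch of that variant (file names are placeholders, and it assumes the areas you want to drop are fuchsia in the composited image):
// Sketch: draw only the non-fuchsia pixels of the composited image onto another background.
using (Image otherBackground = Bitmap.FromFile("otherBackground.png"))
using (Image composited = Bitmap.FromFile("Composited.png"))
using (var imgattr = new ImageAttributes())
{
    imgattr.SetColorKey(Color.Fuchsia, Color.Fuchsia);
    using (var g = Graphics.FromImage(otherBackground))
    {
        g.DrawImage(
            composited,
            new Rectangle(0, 0, composited.Width, composited.Height),
            0, 0, composited.Width, composited.Height,
            GraphicsUnit.Pixel, imgattr
        );
    }
    otherBackground.Save("Composited2.png");
}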
You want something like this:
Bitmap original = new Bitmap(@"tree.jpg");
Bitmap mask = new Bitmap(@"mask.jpg");
int width = original.Width;
int height = original.Height;

// This is the color that will be replaced in the mask
Color key = Color.FromArgb(0, 255, 0);

// Processing one pixel at a time is slow, but easy to understand
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        // Is this pixel "green"?
        if (mask.GetPixel(x, y) == key)
        {
            // Copy the pixel color from the original
            Color c = original.GetPixel(x, y);
            // Into the mask
            mask.SetPixel(x, y, c);
        }
    }
}
You could probably read in the mask and translate it into an image that has the alpha channel set to 0 when the pixel is green and the alpha channel set to 0xFF when the pixel is any other color. Then you could draw the mask image over the original image.
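A quick sketch of that idea, using the slow but simple GetPixel/SetPixel approach (the file names just reuse the ones from the answer above):
// Sketch: make the green parts of the mask transparent, everything else opaque,
// then draw that over the original so it shows through the green areas.
Bitmap original = new Bitmap(@"tree.jpg");
Bitmap mask = new Bitmap(@"mask.jpg");
Bitmap overlay = new Bitmap(mask.Width, mask.Height, PixelFormat.Format32bppArgb);
Color key = Color.FromArgb(0, 255, 0);

for (int y = 0; y < mask.Height; y++)
{
    for (int x = 0; x < mask.Width; x++)
    {
        Color c = mask.GetPixel(x, y);
        // Alpha 0 where the mask is green, alpha 255 elsewhere.
        overlay.SetPixel(x, y, c.ToArgb() == key.ToArgb()
            ? Color.FromArgb(0, c)
            : Color.FromArgb(255, c));
    }
}

using (var g = Graphics.FromImage(original))
{
    g.DrawImage(overlay, new Rectangle(0, 0, original.Width, original.Height));
}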
