Color tracking using EMGUcv - c#

I am trying to make a colored object tracker that uses a binary image and a blob detector to follow a target, somewhat like this: https://www.youtube.com/watch?v=9qky6g8NRmI . However, I cannot figure out how the ThresholdBinary() method works, or whether it is even the right one to use.
Here is a relevant bit of the code:
cam._SmoothGaussian(3);
blobDetector.Update(cam);
Image<Bgr, byte> binaryImage = cam.ThresholdBinary(new Bgr(145, 0, 145), new Bgr(0, 0, 0));
Image<Gray, byte> binaryImageGray = binaryImage.Convert<Gray, byte>();
blobTracker.Process(cam, binaryImageGray);
foreach (MCvBlob blob in blobTracker)
{
    cam.Draw((Rectangle)blob, new Bgr(0, 0, 255), 2);
}
When I display the binaryImage I do not even get blobs. I just get a black image.

Typically, the colored blob detection part of such an application works along the lines of:
Convert the image to HSV (hue, saturation, value) color space.
Filter the hue channel for all pixels with a hue value near the target value. Thresholding will typically give you all pixels with a value above or below the threshold. You are interested in the pixels near some target value.
Filter the obtained mask some more, possibly using the saturation/value channels or by removing small blobs. Ideally only the target blob remains.
Some sample code that aims to find a green object (hue ~50) such as the green ball shown in the video:
// 1. Convert the image to HSV
using (Image<Hsv, byte> hsv = original.Convert<Hsv, byte>())
{
    // 2. Obtain the 3 channels (hue, saturation and value) that compose the HSV image
    Image<Gray, byte>[] channels = hsv.Split();
    try
    {
        // 3. Remove all pixels from the hue channel that are not in the range [40, 60]
        CvInvoke.cvInRangeS(channels[0], new Gray(40).MCvScalar, new Gray(60).MCvScalar, channels[0]);
        // 4. Display the result
        imageBox1.Image = channels[0];
    }
    finally
    {
        channels[1].Dispose();
        channels[2].Dispose();
    }
}
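To tie this back to the question's code, the hue mask (channels[0]) could then be handed to the blob tracker in place of the ThresholdBinary output, inside the using block above. This is only a sketch, reusing the question's cam, blobDetector and blobTracker objects:
// Sketch: feed the hue-range mask to the tracker instead of the
// ThresholdBinary output from the question.
blobDetector.Update(cam);
blobTracker.Process(cam, channels[0]);
foreach (MCvBlob blob in blobTracker)
{
    cam.Draw((Rectangle)blob, new Bgr(0, 0, 255), 2);
}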


threshold depth distance in image opencvsharp c# intel realsense

This may be a stupid question, but how can you make a threshold so that the depth distance from the camera can be changed? At the moment I am using Cv2.Threshold to do that, but with the Otsu method the whole picture changes to one color instead of different shades of a color.
The code used:
var colorizedDepth = colorizer.Process<VideoFrame>(depthFrame).DisposeWith(frames);
Mat testcd = new Mat(colorizedDepth.Height, colorizedDepth.Width, MatType.CV_8UC3, colorizedDepth.Data);
Mat testgd = new Mat();
Cv2.CvtColor(testcd, testgd, ColorConversionCodes.RGBA2GRAY);
Mat testbd = new Mat();
Cv2.Threshold(testgd, testbd, 0, 255, ThresholdTypes.Otsu | ThresholdTypes.Binary);
Cv2.ImShow("camera", testgd);
Cv2.WaitKey(0);
The code to get the colorized depth is from the librealsense C# wrapper:
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/csharp
Does anyone know what I am doing wrong for the threshold so that the depth distances get changed?
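For reference, ThresholdTypes.Otsu makes OpenCV compute the threshold value itself and ignore the value you pass in, which is why the whole picture collapses to only two levels. A minimal OpenCvSharp sketch of a fixed, adjustable depth cutoff, reusing testgd from the code above (the cutoff of 128 is only an illustrative starting point):
// Sketch only: plain binary threshold with a fixed cutoff instead of Otsu.
// Raise or lower the cutoff value to move the depth distance being selected.
Mat depthMask = new Mat();
Cv2.Threshold(testgd, depthMask, 128, 255, ThresholdTypes.Binary);
Cv2.ImShow("camera", depthMask); // show the thresholded image, not testgd
Cv2.WaitKey(0);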

Emgucv crop detected shape automatically

I have an application that is going to be used to crop blank space from scanned documents, for example this image. What I want to do is extract only the card and remove all the white/blank area. I'm using Emgu CV's FindContours to do this, and at the moment I'm able to find the card contour plus some noise captured by the scanner, as you can see below.
My question is: how can I crop the largest contour found, or extract it by removing the other contours and the blank/white space? Or maybe it is possible with the contour index?
Edit: Another possible solution would be to draw the contour to another pictureBox, if that is possible.
Here is the code that I'm using:
Image<Bgr, byte> imgInput;
Image<Bgr, byte> imgCrop;

private void abrirToolStripMenuItem_Click(object sender, EventArgs e)
{
    try
    {
        OpenFileDialog dialog = new OpenFileDialog();
        if (dialog.ShowDialog() == DialogResult.OK)
        {
            imgInput = new Image<Bgr, byte>(dialog.FileName);
            pictureBox1.Image = imgInput.Bitmap;
            imgCrop = imgInput;
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

private void shapeToolStripMenuItem_Click(object sender, EventArgs e)
{
    if (imgCrop == null)
    {
        return;
    }
    try
    {
        var temp = imgCrop.SmoothGaussian(5).Convert<Gray, byte>().ThresholdBinaryInv(new Gray(230), new Gray(255));
        VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
        Mat m = new Mat();
        CvInvoke.FindContours(temp, contours, m, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
        for (int i = 0; i < contours.Size; i++)
        {
            double perimeter = CvInvoke.ArcLength(contours[i], true);
            VectorOfPoint approx = new VectorOfPoint();
            CvInvoke.ApproxPolyDP(contours[i], approx, 0.04 * perimeter, true);
            CvInvoke.DrawContours(imgCrop, contours, i, new MCvScalar(0, 0, 255), 2);
            pictureBox2.Image = imgCrop.Bitmap;
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
I'll give you my answer in C++, but the same operations should be available in Emgu CV.
I propose the following approach:
Segment (that is, separate) the target object using the HSV color space.
Calculate a binary mask for the object of interest.
Get the biggest blob in the binary mask; this should be the card.
Compute the bounding box of the card.
Crop the card out of the input image.
OK, first get (or read) the input image. Apply a median blur filter; it will help in getting rid of the high-frequency noise (the little grey blobs) that you see on the input. The main parameter to adjust is the size of the kernel (or filter aperture). Be careful, though: a high value will result in an aggressive effect and will likely destroy your image:
//read input image:
std::string imageName = "C://opencvImages//yoshiButNotYoshi.png";
cv::Mat imageInput = cv::imread( imageName );
//apply a median blur filter, the size of the kernel is 5 x 5:
cv::Mat blurredImage;
cv::medianBlur ( imageInput, blurredImage, 5 );
This is the result of the blur filter (The embedded image is resized):
Next, segment the image. Exploit the fact that the background is white, and everything else (the object of interest, mainly) has some color information. You can use the HSV color space. First, convert the BGR image into HSV:
//BGR to HSV conversion:
cv::Mat hsvImg;
cv::cvtColor( blurredImage, hsvImg, CV_BGR2HSV );
The HSV color space encodes color information differently from the typical BGR/RGB color space. Its advantage over other color models pretty much depends on the application, but in general it is more robust when working with hue gradients. I'll try to get an HSV-based binary mask for the object of interest.
In a binary mask, everything you are interested in on the input image is colored white, and everything else is black (or vice versa). You can obtain this mask using the inRange function. However, you must specify the color ranges that will be rendered white (or black) in the output mask. For your image, using the HSV color model, those values are:
cv::Scalar minColor( 0, 0, 100 ); //the lower range of colors
cv::Scalar maxColor( 0, 0, 255 ); //the upper range of colors
Now, get the binary mask:
//prepare the binary mask:
cv::Mat binaryMask;
//create the binary mask using the specified range of color
cv::inRange( hsvImg, minColor, maxColor, binaryMask );
//invert the mask:
binaryMask = 255 - binaryMask;
You get this image:
Now you can get rid of some of the noise (that survived the blur filter) via morphological filtering. Morphological filters are, essentially, logical rules applied on binary (or gray) images. They take a "neighborhood" of pixels in the input and apply logical functions to get an output. They are quite handy for cleaning up binary images. I'll apply a series of these filters to achieve just that.
I'll first erode the image and then dilate it using 3 iterations. The structuring element is a rectangle of size 3 x 3:
//apply some morphology to clean the binary mask a little bit:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
int morphIterations = 3;
cv::morphologyEx( binaryMask, binaryMask, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphIterations );
cv::morphologyEx( binaryMask, binaryMask, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphIterations );
You get this output. Check out how the noisy blobs are mostly gone:
Now comes the cool part. You can loop through all the contours in this image and get the biggest of them all. That's a typical operation that I constantly perform, so I've written a function that does it. It is called findBiggestBlob. I'll present the function later. Check out the result you get after finding and extracting the biggest blob:
//find the biggest blob in the binary image:
cv::Mat biggestBlob = findBiggestBlob( binaryMask );
You get this:
Now, you can get the bounding box of the biggest blob using boundingRect:
//Get the bounding box of the biggest blob:
cv::Rect bBox = cv::boundingRect( biggestBlob );
Let's draw the bounding box on the input image:
cv::Mat imageClone = imageInput.clone();
cv::rectangle( imageClone, bBox, cv::Scalar(255,0,0), 2 );
Finally, let's crop the card out of the input image:
cv::Mat croppedImage = imageInput( bBox );
This is the cropped output:
This is the code for the findBiggestBlob function. The idea is just to compute all the contours in the binary input, calculate their area and store the contour with the largest area of the bunch:
//Function to get the largest blob in a binary image:
cv::Mat findBiggestBlob( cv::Mat &inputImage ){

    cv::Mat biggestBlob = inputImage.clone();

    double largest_area = 0;
    int largest_contour_index = 0;

    std::vector< std::vector<cv::Point> > contours; // Vector for storing contours
    std::vector< cv::Vec4i > hierarchy;

    // Find the contours in the image
    cv::findContours( biggestBlob, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );

    for( int i = 0; i < (int)contours.size(); i++ ) {
        //Find the area of the contour
        double a = cv::contourArea( contours[i], false );
        //Store the index of the largest contour:
        if( a > largest_area ){
            largest_area = a;
            largest_contour_index = i;
        }
    }

    //Once you get the biggest blob, paint it black:
    cv::Mat tempMat = biggestBlob.clone();
    cv::drawContours( tempMat, contours, largest_contour_index, cv::Scalar(0),
                      CV_FILLED, 8, hierarchy );

    //Erase the smaller blobs:
    biggestBlob = biggestBlob - tempMat;
    tempMat.release();

    return biggestBlob;
}
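For completeness, a rough Emgu CV (C#) sketch of the same pipeline could look something like the following. It is untested; the file name is a placeholder, and the enum spellings and overloads assume an Emgu CV 3.x/4.x-style CvInvoke API, so they may need adjusting for your version:
// Assumes: using Emgu.CV; using Emgu.CV.CvEnum; using Emgu.CV.Structure;
// using Emgu.CV.Util; using System.Drawing;
Image<Bgr, byte> input = new Image<Bgr, byte>(@"card.png"); // placeholder file name

// 1. Median blur to remove the small scanner noise:
Mat blurred = new Mat();
CvInvoke.MedianBlur(input, blurred, 5);

// 2. Convert to HSV, mask the white background, then invert so the card is white:
Mat hsv = new Mat();
CvInvoke.CvtColor(blurred, hsv, ColorConversion.Bgr2Hsv);
Mat mask = new Mat();
CvInvoke.InRange(hsv,
    new ScalarArray(new MCvScalar(0, 0, 100)),
    new ScalarArray(new MCvScalar(0, 0, 255)),
    mask);
CvInvoke.BitwiseNot(mask, mask);

// 3. Erode + dilate (3 iterations each) to clean up the mask:
Mat se = CvInvoke.GetStructuringElement(ElementShape.Rectangle, new Size(3, 3), new Point(-1, -1));
CvInvoke.MorphologyEx(mask, mask, MorphOp.Erode, se, new Point(-1, -1), 3, BorderType.Constant, new MCvScalar(0));
CvInvoke.MorphologyEx(mask, mask, MorphOp.Dilate, se, new Point(-1, -1), 3, BorderType.Constant, new MCvScalar(0));

// 4. Find the biggest contour and take its bounding box:
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(mask, contours, hierarchy, RetrType.External, ChainApproxMethod.ChainApproxSimple);
double largestArea = 0;
Rectangle cardBox = Rectangle.Empty;
for (int i = 0; i < contours.Size; i++)
{
    double area = CvInvoke.ContourArea(contours[i]);
    if (area > largestArea)
    {
        largestArea = area;
        cardBox = CvInvoke.BoundingRectangle(contours[i]);
    }
}

// 5. Crop the card out of the input image via the ROI:
input.ROI = cardBox;
Image<Bgr, byte> cropped = input.Copy();
input.ROI = Rectangle.Empty;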

Change image size (width, height) to have defined dpi

I will try to explain what I need.
1 - My first goal is to transform the black pixels of the image into Cartesian points without resizing the image. (OK, done.)
2 - My second goal is to resize the image and redo step 1. (OK, I have normalized the points against the original image size, and the job is done when the width or height is changed!)
3 - Now I need to reduce the number of pixels in the image so that it has a defined DPI, and then redo step 1. How? I have found the method SetResolution(..,..), but how must I change the width and height of my image to obtain the correct resolution in terms of DPI? (See [?????] in the code.)
var image2 = new Bitmap(canvasWidth, canvasHeight);
image2.SetResolution(200.0f, 200.0f); // I need this for example!
using (System.Drawing.Graphics gr = System.Drawing.Graphics.FromImage(image2)) {
    gr.SmoothingMode = SmoothingMode.HighSpeed;
    gr.InterpolationMode = InterpolationMode.Low;
    gr.PixelOffsetMode = PixelOffsetMode.None;
    gr.Clear(Color.White);
    gr.DrawImage(this.LoadedImage, new System.Drawing.RectangleF(new PointF((float)this.Centre.x, (float)this.Centre.y), new Size(canvasWidth [?????], canvasHeight [??????])));
    return image2;
}
If I loop over every pixel in my new image2, I get the same result as looping over OriginalImage. Well, in the end I need to reduce the number of pixels of my image to obtain a defined dots-per-inch result.
I hope I was clear.
Thanks.
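For reference, the usual relationship is pixels = inches x DPI, so keeping the physical size fixed while changing the resolution means scaling the pixel dimensions by targetDpi / currentDpi. A minimal sketch of that computation (the 200 DPI target is taken from the code above; the variable names are only illustrative):
// Sketch only: compute the pixel size that keeps the image's physical size
// (in inches) the same at a new target resolution of 200 DPI.
float targetDpi = 200.0f;
float scaleX = targetDpi / this.LoadedImage.HorizontalResolution; // current DPI of the source
float scaleY = targetDpi / this.LoadedImage.VerticalResolution;

int newWidth = (int)Math.Round(this.LoadedImage.Width * scaleX);
int newHeight = (int)Math.Round(this.LoadedImage.Height * scaleY);

var image2 = new Bitmap(newWidth, newHeight);
image2.SetResolution(targetDpi, targetDpi);
// ...then draw LoadedImage scaled into the (newWidth, newHeight) rectangle,
// as in the Graphics code above.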

Fillholes function in Aforge

I need to use the FillHoles function of AForge; it accepts a binary image. I set all pixels to black or white using the following code in C#:
bitmapimage.SetPixel(i, j, Color.FromArgb(255,255,255)); // for white pixel
bitmapimage.SetPixel(i, j, Color.FromArgb(0,0,0)); // for black pixel
But when I apply the FillHoles function to the bitmap image, I get this exception:
"Source pixel format is not supported by the filter"
Can anyone help me understand why I am getting this exception? Is the bitmap image not converted to binary just by using SetPixel?
Just changing the pixel colors will not change the pixel format of your image.
You first need to make sure that you have a gray scale image using some gray scale filter, then make sure that the gray scale image is binary through some threshold filter. Once the image has been pre-processed using these steps, you may apply the FillHoles filter.
AForge.NET offers helper classes to merge several filters, so you can combine all three filters into one total filter using the FiltersSequence class.
Assuming that your original Bitmap image is named bitmap, you can then apply the fill holes filter for example like this:
var filter = new FiltersSequence(Grayscale.CommonAlgorithms.BT709,
new Threshold(100), new FillHoles());
var newBitmap = filter.Apply(bitmap);
AForge FillHoles Class
The filter allows to fill black holes in white object in a binary image. It is possible to specify maximum holes' size to fill using MaxHoleWidth and MaxHoleHeight properties.
The filter accepts binary image only, which are represented as 8 bpp images.
Sample usage:
// create and configure the filter
FillHoles filter = new FillHoles( );
filter.MaxHoleHeight = 20;
filter.MaxHoleWidth = 20;
filter.CoupledSizeFiltering = false;
// apply the filter
Bitmap result = filter.Apply( image );
The above was found at http://www.aforgenet.com/framework/docs/html/68bd57bd-1fd6-6c4e-4500-ed4726bc836e.htm
You have to convert your bitmapImage to a binary image represented as an 8 bpp image. Here is one way to do it.
UnmanagedImage grayImage = null;
if (bitmapImage.PixelFormat == PixelFormat.Format8bppIndexed)
{
    // already an 8 bpp image, just wrap it
    grayImage = UnmanagedImage.FromManagedImage(bitmapImage);
}
else
{
    // convert to an 8 bpp grayscale image first
    grayImage = Grayscale.CommonAlgorithms.BT709.Apply(UnmanagedImage.FromManagedImage(bitmapImage));
}
// grayImage still needs a threshold filter (see the answer above) before FillHoles will accept it as binary.

How to render an image with a color-keyed green mask in C#?

Trying to figure out the most elegant way to render an image inside of a specific color of mask in C# (via System.Drawing or equivalent that will work in both desktop and ASP.NET applications).
The mask image will contain green keys where the image should be 'painted'.
(Desired Result image below is not perfect, hand lasso'd...)
There are various techniques for this:
Scan pixel data and build a mask image (as already suggested by itsme86 and Moby Disk)
A variant of scanning that builds a clipping region from the mask and uses that when drawing (refer to this article by Bob Powell)
Use color keys to mask in the Graphics.DrawImage call.
I'll focus on the third option.
Assuming that the image color that you want to eliminate from your mask is Color.Lime, we can use ImageAttributes.SetColorKey to stop any of that color from being drawn during a call to Graphics.DrawImage like this:
using (Image background = Bitmap.FromFile("tree.png"))
using (Image masksource = Bitmap.FromFile("mask.png"))
using (var imgattr = new ImageAttributes())
{
    // set color key to Lime
    imgattr.SetColorKey(Color.Lime, Color.Lime);

    // Draw non-lime portions of mask onto original
    using (var g = Graphics.FromImage(background))
    {
        g.DrawImage(
            masksource,
            new Rectangle(0, 0, masksource.Width, masksource.Height),
            0, 0, masksource.Width, masksource.Height,
            GraphicsUnit.Pixel, imgattr
        );
    }

    // Do something with the composited image here...
    background.Save("Composited.png");
}
And the results:
You can use the same technique (with color key on Color.Fuchsia) if you want to put those bits of tree into another image.
You want something like this:
Bitmap original = new Bitmap(@"tree.jpg");
Bitmap mask = new Bitmap(@"mask.jpg");

int width = original.Width;
int height = original.Height;

// This is the color that will be replaced in the mask
Color key = Color.FromArgb(0, 255, 0);

// Processing one pixel at a time is slow, but easy to understand
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        // Is this pixel "green"?
        if (mask.GetPixel(x, y) == key)
        {
            // Copy the pixel color from the original
            Color c = original.GetPixel(x, y);
            // Into the mask
            mask.SetPixel(x, y, c);
        }
    }
}
You could probably read in the mask and translate it into an image that has the alpha channel set to 0 when the pixel is green and the alpha channel set to 0xFF when the pixel is any other color. Then you could draw the mask image over the original image.
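A minimal sketch of that idea follows; the file names are placeholders, and Bitmap.MakeTransparent is used here to do the per-pixel alpha keying described above:
// Sketch only: make the green key fully transparent in the mask, then draw
// the mask over the original so the original shows through wherever the
// mask was green. Assumes: using System.Drawing;
using (Bitmap original = new Bitmap(@"tree.png"))
using (Bitmap mask = new Bitmap(@"mask.png"))
{
    mask.MakeTransparent(Color.FromArgb(0, 255, 0)); // alpha = 0 where green
    using (Graphics g = Graphics.FromImage(original))
    {
        g.DrawImage(mask, new Rectangle(0, 0, original.Width, original.Height));
    }
    original.Save(@"composited.png");
}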
