Finding contour points in Emgu CV - C#

I am working with Emgu CV to find the essential points of contours, save those points to a file, and let the user redraw the shape later. So my goal is this image:
example
my solution is this:
1. import image to picturebox
2. edge detection with canny algorithm
3. finding contours and save points
I found a lot of points with the code below, but I can't redraw the original shape from those points!
using Emgu.CV;
using Emgu.CV.Structure; // Bgr, Gray, MCvScalar
using Emgu.Util;
using System.Drawing;    // Bitmap, Point
private void button1_Click(object sender, EventArgs e)
{
    Bitmap bmp = new Bitmap(pictureBox1.Image);
    Image<Bgr, Byte> img = new Image<Bgr, byte>(bmp);
    Image<Gray, Byte> gray = img.Convert<Gray, Byte>().PyrDown().PyrUp();
    Gray cannyThreshold = new Gray(80);
    Gray cannyThresholdLinking = new Gray(120);
    Gray circleAccumulatorThreshold = new Gray(120);
    Image<Gray, Byte> cannyEdges = gray.Canny(cannyThreshold, cannyThresholdLinking).Not();
    Bitmap color;
    Bitmap bgray;
    IdentifyContours(cannyEdges.Bitmap, 50, true, out bgray, out color);
    pictureBox1.Image = color;
}
public void IdentifyContours(Bitmap colorImage, int thresholdValue, bool invert, out Bitmap processedGray, out Bitmap processedColor)
{
    Image<Gray, byte> grayImage = new Image<Gray, byte>(colorImage);
    Image<Bgr, byte> color = new Image<Bgr, byte>(colorImage);
    grayImage = grayImage.ThresholdBinary(new Gray(thresholdValue), new Gray(255));
    if (invert)
    {
        grayImage._Not();
    }
    using (MemStorage storage = new MemStorage())
    {
        for (Contour<Point> contours = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_LIST, storage); contours != null; contours = contours.HNext)
        {
            Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * 0.015, storage);
            if (currentContour.BoundingRectangle.Width > 20)
            {
                CvInvoke.cvDrawContours(color, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
                color.Draw(currentContour.BoundingRectangle, new Bgr(0, 255, 0), 1);
            }
            Point[] pts = currentContour.ToArray();
            foreach (Point p in pts)
            {
                // add points to listbox
                listBox1.Items.Add(p);
            }
        }
    }
    processedColor = color.ToBitmap();
    processedGray = grayImage.ToBitmap();
}

In your code you have added a contour approximation step:
Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * 0.015, storage);
ApproxPoly approximates the contour with the nearest polygon, so your actual points get shifted. If you want to reproduce the same image, don't do any approximation; store the raw contour points instead.
Refer this thread.
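For intuition, ApproxPoly corresponds to OpenCV's Douglas-Peucker polygon approximation. A minimal pure-Python sketch (not the Emgu CV API, just the underlying idea, with a made-up toy contour) shows how interior points are discarded once they fall within the tolerance, which is exactly why the redrawn shape no longer matches:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: drop points closer than epsilon to the chord."""
    if len(points) < 3:
        return points[:]
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1)
    # find the interior point farthest from the start-end chord
    best_i, best_d = 0, 0.0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        d = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / chord
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= epsilon:
        return [points[0], points[-1]]   # everything in between is dropped
    left = rdp(points[:best_i + 1], epsilon)
    right = rdp(points[best_i:], epsilon)
    return left[:-1] + right

# A shallow arc: with a large tolerance, the whole bulge is thrown away,
# so redrawing from the stored points gives a straight line instead.
arc = [(0, 0), (1, 1), (2, 1), (3, 1), (4, 0)]
print(rdp(arc, 1.5))   # [(0, 0), (4, 0)]
```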

Related

How to perform image lighting correction with OpenCV?

I have an image which I grab using a camera. Sometimes the lighting in the image is uneven; there are dark shades. This causes incorrect optimal thresholding in Emgu CV as well as AForge when processing the image for OCR.
This is the image:
This is what I get after thresholding:
How do I correct the lighting? I tried adaptive thresholding, which gives about the same result. I also tried gamma correction using the code below:
ImageAttributes attributes = new ImageAttributes();
attributes.SetGamma(10);
// Draw the image onto the new bitmap
// while applying the new gamma value.
System.Drawing.Point[] points =
{
    new System.Drawing.Point(0, 0),
    new System.Drawing.Point(image.Width, 0),
    new System.Drawing.Point(0, image.Height),
};
Rectangle rect = new Rectangle(0, 0, image.Width, image.Height);
// Make the result bitmap.
Bitmap bm = new Bitmap(image.Width, image.Height);
using (Graphics gr = Graphics.FromImage(bm))
{
    gr.DrawImage(HSICONV.Bitmap, points, rect, GraphicsUnit.Pixel, attributes);
}
Same result. Please help.
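For reference, GDI+'s SetGamma applies a power-law mapping per pixel. A pure-Python sketch of that curve (an illustration of the formula, not the GDI+ internals) shows why a large gamma lifts dark regions far more than mid-tones, which flattens shading but cannot separate text from an uneven background on its own:

```python
# Gamma correction as a per-pixel lookup table: out = 255 * (in / 255) ** (1 / gamma).
# gamma > 1 brightens shadows, which is why it gets tried for uneven lighting.
def gamma_lut(gamma):
    return [round(255 * (v / 255) ** (1.0 / gamma)) for v in range(256)]

lut = gamma_lut(10)        # the gamma value used in the question
print(lut[10], lut[128])   # dark pixels are lifted far more than mid-tones
```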
UPDATE:
As per nathancy's suggestion, I converted his code to C# for uneven-lighting correction, and it works:
Image<Gray, byte> smoothedGrayFrame = grayImage.PyrDown();
smoothedGrayFrame = smoothedGrayFrame.PyrUp();
//canny
Image<Gray, byte> cannyFrame = null;
cannyFrame = smoothedGrayFrame.Canny(50, 50);
//smoothing
grayImage = smoothedGrayFrame;
//binarize
Image<Gray, byte> grayout = grayImage.Clone();
CvInvoke.AdaptiveThreshold(grayImage, grayout, 255, AdaptiveThresholdType.GaussianC, ThresholdType.BinaryInv, Convert.ToInt32(numericmainthreshold.Value) + Convert.ToInt32(numericmainthreshold.Value) % 2 + 1, 1.2d);
grayout._Not();
Mat kernelCl = CvInvoke.GetStructuringElement(ElementShape.Rectangle, new Size(3, 3), new System.Drawing.Point(-1, -1));
CvInvoke.MorphologyEx(grayout, grayout, MorphOp.Close, kernelCl, new System.Drawing.Point(-1, -1), 1, BorderType.Default, new MCvScalar());
Here's an approach:
Convert the image to grayscale and Gaussian blur to smooth it
Adaptive threshold to obtain a binary image
Perform morphological transformations to smooth the image
Dilate to enhance the text
Invert the image
After converting to grayscale and blurring, we adaptively threshold. There are small holes and imperfections, so we perform a morphological close to smooth the image. From here we can optionally dilate to enhance the text. Now we invert the image to get our result.
I implemented this method in OpenCV and Python, but you can adapt the same strategy to C#.
import cv2

image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5,5), 0)
thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 9, 11)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
dilate = cv2.dilate(close, kernel, iterations=1)
result = 255 - dilate
cv2.imshow('thresh', thresh)
cv2.imshow('close', close)
cv2.imshow('dilate', dilate)
cv2.imshow('result', result)
cv2.waitKey()
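The key step is the adaptive threshold. A pure-Python sketch (a plain box mean rather than cv2's Gaussian-weighted mean, with made-up pixel values) shows why it copes with uneven lighting where a single global threshold cannot:

```python
# cv2.adaptiveThreshold with THRESH_BINARY_INV compares each pixel against the mean
# of its neighbourhood minus a constant C, so dark "ink" is found on both bright and
# dim backgrounds.
def adaptive_threshold_inv(img, block=3, C=5):
    h, w = len(img), len(img[0])
    r = block // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # clamp the neighbourhood at the image borders
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            vals = [img[j][i] for j in ys for i in xs]
            local_mean = sum(vals) / len(vals)
            out[y][x] = 255 if img[y][x] <= local_mean - C else 0
    return out

# Bright background on the left, dim background on the right, one dark glyph on each.
# No single global threshold separates both glyphs (120 and 10) from both backgrounds.
img = [[200, 205, 198, 60, 62, 58],
       [201, 120, 199, 61, 10, 59],
       [199, 202, 200, 58, 61, 60]]
binary = adaptive_threshold_inv(img)
```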

Emgu CV image sharpening and contour detection

I am working on a project where I need to identify dots from IR lasers on a surface. For that I use a camera with an IR filter.
Some input images:
There can be several dots, too. So I tried to sharpen the image from the webcam and then use the FindContours method of Emgu CV.
Here is my code:
public static Image<Gray, byte> Sharpen(Image<Gray, byte> image, int w, int h, double sigma1, double sigma2, int k)
{
    w = (w % 2 == 0) ? w - 1 : w;
    h = (h % 2 == 0) ? h - 1 : h;
    // apply gaussian smoothing using w, h and sigma
    var gaussianSmooth = image.SmoothGaussian(w, h, sigma1, sigma2);
    // obtain the mask by subtracting the gaussian smoothed image from the original one
    var mask = image - gaussianSmooth;
    // add a weighted value k to the obtained mask
    mask *= k;
    // sum with the original image
    image += mask;
    return image;
}

private void ProcessFrame(object sender, EventArgs arg)
{
    Mat frame = new Mat();
    if (_capture.Retrieve(frame, CameraDevice))
    {
        Image<Bgr, byte> original = frame.ToImage<Bgr, byte>();
        Image<Gray, byte> img = Sharpen(frame.ToImage<Gray, byte>(), 100, 100, 100, 100, 30);
        Image<Gray, byte> thresh = new Image<Gray, byte>(img.Size);
        CvInvoke.PyrDown(img, thresh);
        CvInvoke.PyrUp(thresh, thresh);
        Image<Gray, byte> mask = new Image<Gray, byte>(thresh.Size);
        Image<Gray, byte> cannyImg = thresh.Canny(10, 50);
        VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
        Mat hierarchy = new Mat();
        CvInvoke.FindContours(
            cannyImg,
            contours,
            hierarchy,
            RetrType.External,
            ChainApproxMethod.ChainApproxSimple
        );
        Image<Bgr, byte> resultImage = img.Copy().Convert<Bgr, byte>();
        int contCount = contours.Size;
        for (int i = 0; i < contCount; i++)
        {
            using (VectorOfPoint contour = contours[i])
            {
                resultImage.Draw(CvInvoke.BoundingRectangle(contour), new Bgr(255, 0, 0), 5);
            }
        }
        captureBox.Image = original.Bitmap;
        cvBox.Image = resultImage.Bitmap;
    }
}
Example of result image:
So it works as I expect almost all the time, but the framerate is very low. I'm getting about 10-15 fps at a resolution of 640x480, and I need to do the same thing at 1920x1080 with at least 30 fps. It's my first time with OpenCV and Emgu CV. What can I do to make it perform better?
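The Sharpen method above is classic unsharp masking: original + k * (original - blurred). A pure-Python 1-D sketch of the same idea (a toy signal and a box blur standing in for the Gaussian) shows the overshoot that makes edges pop:

```python
def box_blur(sig):
    # 3-tap box blur with clamped borders, standing in for the Gaussian smooth
    n = len(sig)
    return [(sig[max(i - 1, 0)] + sig[i] + sig[min(i + 1, n - 1)]) / 3 for i in range(n)]

def sharpen(sig, k):
    # unsharp mask: add back k times the detail removed by the blur
    blurred = box_blur(sig)
    return [s + k * (s - b) for s, b in zip(sig, blurred)]

edge = [10, 10, 10, 200, 200, 200]   # a step edge
result = sharpen(edge, 1.5)
# the values just before and after the step overshoot below 10 and above 200,
# exaggerating the edge exactly as the Emgu CV Sharpen method does in 2-D
```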
I solved this by just setting a threshold, so that the image turns black and white only. By adjusting the threshold I was able to achieve the same results, if not better in terms of clarity, and performance drastically improved since there is no heavy processing going on.
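A global threshold is just one comparison per pixel; a pure-Python sketch (toy values) of why that is so much cheaper than the sharpen/pyramid/Canny pipeline:

```python
# One comparison per pixel and nothing else: no Gaussian blur, no pyramid, no Canny.
def threshold(img, t):
    return [[255 if v > t else 0 for v in row] for row in img]

frame = [[10, 30, 250],
         [20, 240, 15]]          # bright IR dots on a dark background (toy values)
print(threshold(frame, 128))    # [[0, 0, 255], [0, 255, 0]]: only the dots survive
```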
Here is a snippet using the AForge.NET library instead of Emgu CV:
var bitmap = eventArgs.Frame;
var filter = new Grayscale(0.2125, 0.7154, 0.0721);
var grayImage = filter.Apply(bitmap);
var thresholdFilter = new Threshold(CurrentThreshold);
thresholdFilter.ApplyInPlace(grayImage);
var blobCounter = new BlobCounter();
blobCounter.ProcessImage(grayImage);
var rectangles = blobCounter.GetObjectsRectangles();

C# EmguCV - Circle line thickness calculation

I want to calculate the line thickness of a circle, like the one below:
Which method can help me to do so?
Thanks for your reply, David. I'm new to Emgu CV, so I don't know where to start. I can produce the following image using Canny edge detection, but I can't calculate the distance because I don't know which code to use. Which code can I use?
private void button1_Click(object sender, EventArgs e)
{
    string strFileName = string.Empty;
    OpenFileDialog ofd = new OpenFileDialog();
    if (ofd.ShowDialog() == DialogResult.OK)
    {
        // Load image
        Image<Bgr, Byte> img1 = new Image<Bgr, Byte>(ofd.FileName);
        // Convert img1 to grayscale and then filter out the noise
        Image<Gray, Byte> gray1 = img1.Convert<Gray, Byte>().PyrDown().PyrUp();
        // Canny edge detector
        Image<Gray, Byte> cannyGray = gray1.Canny(120, 180);
        pictureBox1.Image = cannyGray.ToBitmap();
    }
}
Let me guide you a little bit further.
//load image
Image<Gray, Byte> loaded_img = new Image<Gray, byte>(Filename);
Image<Gray, Byte> Thresh_img = loaded_img.CopyBlank();
//threshold to make it binary if necessaray
CvInvoke.cvThreshold(loaded_img.Ptr, Thresh_img.Ptr, 0, 255, Emgu.CV.CvEnum.THRESH.CV_THRESH_OTSU | Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY);
//get contours of circle
Contour<Point> Circle_cont = Thresh_img.FindContours();
//get height & width of the bounding rectangle
int height_Circle_1 = Circle_cont.BoundingRectangle.Height;
int width_circle_1 = Circle_cont.BoundingRectangle.Width;
Circle_cont = Circle_cont.HNext;
int height_Circle_2 = Circle_cont.BoundingRectangle.Height;
int width_Circle_2 = Circle_cont.BoundingRectangle.Width;
//get ring thicknes in Px
double ring_thickness_1 = Math.Abs(height_Circle_1 - height_Circle_2) / 2;
double ring_thickness_2 = Math.Abs(width_circle_1 - width_Circle_2) / 2;
This should get you the thickness of your ring in pixels. If you want it in cm, you need to scale the pixel value by the real-world size of a pixel. I applied this code snippet to your example image and got 56 pixels for both thickness values.
I hope this will help you.
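The bounding-box arithmetic works because the outer and inner contours differ by one ring width on each side. A quick pure-Python check with hypothetical bounding-box sizes (chosen so the result matches the 56 px figure above; the real values come from FindContours):

```python
# Hypothetical bounding boxes: outer circle 300x300 px, inner circle 188x188 px.
outer_h, outer_w = 300, 300
inner_h, inner_w = 188, 188

# The size difference spans the ring twice (once per side), hence the division by 2.
ring_thickness_1 = abs(outer_h - inner_h) / 2
ring_thickness_2 = abs(outer_w - inner_w) / 2
print(ring_thickness_1, ring_thickness_2)   # 56.0 56.0
```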

Add two sub-images into one new image using Emgu CV

I have two images with different dimensions, and I want to create another, larger image that includes them stacked vertically.
private Image<Gray, Byte> newImage(Image<Gray, Byte> image1, Image<Gray, Byte> image2)
{
    // use the larger of the two widths
    int ImageWidth = Math.Max(image1.Width, image2.Width);
    // calculate the new height
    int ImageHeight = image1.Height + image2.Height;
    // declare the new (large) image
    Image<Gray, Byte> imageResult = new Image<Gray, Byte>(ImageWidth, ImageHeight);
    imageResult.ROI = new Rectangle(0, 0, image1.Width, image1.Height);
    image1.CopyTo(imageResult);
    imageResult.ROI = new Rectangle(0, image1.Height, image2.Width, image2.Height);
    image2.CopyTo(imageResult);
    return imageResult;
}
The returned image is black and doesn't contain the two images. Please help me find the problem.
Thanks.
Your approach was correct. You simply had to remove the ROI. Just add at the end:
imageResult.ROI = Rectangle.Empty;
The final result should look like this:
imageResult.ROI = new Rectangle(0, 0, image1.Width, image1.Height);
image1.CopyTo(imageResult);
imageResult.ROI = new Rectangle(0, image1.Height, image2.Width, image2.Height);
image2.CopyTo(imageResult);
imageResult.ROI = Rectangle.Empty;
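The ROI trick amounts to: copy image1's rows into the top of the buffer, copy image2's rows below them, then address the whole buffer again. A pure-Python sketch of the same stacking (nested lists standing in for grayscale images):

```python
def stack_vertical(img1, img2, fill=0):
    # output is as wide as the wider input; narrower rows are padded with `fill`,
    # mirroring the black margin left beside the narrower image in Emgu CV
    width = max(len(img1[0]), len(img2[0]))
    out = []
    for row in img1 + img2:   # image1's rows first, then image2's
        out.append(row + [fill] * (width - len(row)))
    return out

a = [[1, 1], [1, 1]]          # 2x2 image
b = [[2, 2, 2]]               # 1x3 image
print(stack_vertical(a, b))   # [[1, 1, 0], [1, 1, 0], [2, 2, 2]]
```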
The solution is the following:
private Image<Gray, Byte> newImage(Image<Gray, Byte> image1, Image<Gray, Byte> image2)
{
    // the new bitmap is as wide as the wider input and as tall as both inputs stacked
    Bitmap bitmap = new Bitmap(Math.Max(image1.Width, image2.Width), image1.Height + image2.Height);
    using (Graphics g = Graphics.FromImage(bitmap))
    {
        g.DrawImage(image1.Bitmap, 0, 0);
        g.DrawImage(image2.Bitmap, 0, image1.Height);
    }
    return new Image<Gray, byte>(bitmap);
}

Emgu CV EigenObjectRecognizer not working

I've tried to code a face recognition program and need some help from the community.
The code posted below compiles with no errors, but the recognizer doesn't seem to be working.
Basically, target.jpg contains a person cropped out of pic1.jpg (which has 3 people in it), so the recognizer should be able to detect that person easily.
The code runs with no errors, but all 3 people in pic1.jpg get boxed, and GetEigenDistances returns 0 for all 3 faces. By rights, only the person from target.jpg should be boxed.
Any idea where I have gone wrong? Thanks in advance.
I'm using Emgu CV 2.4 with C# 2010 Express.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Emgu.CV;
using Emgu.Util;
using Emgu.CV.Structure;
using Emgu.CV.UI;
using Emgu.CV.CvEnum;

namespace FaceReco
{
    public partial class Form1 : Form
    {
        private HaarCascade haar;
        List<Image<Gray, byte>> trainingImages = new List<Image<Gray, byte>>();
        Image<Gray, byte> TrainedFace, UnknownFace = null;
        MCvFont font = new MCvFont(FONT.CV_FONT_HERSHEY_TRIPLEX, 0.5d, 0.5d);

        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            // adjust path to find your XML file
            haar = new HaarCascade("haarcascade_frontalface_alt_tree.xml");

            // Read the target image
            Image TargetImg = Image.FromFile(Environment.CurrentDirectory + "\\target\\target.jpg");
            Image<Bgr, byte> TargetFrame = new Image<Bgr, byte>(new Bitmap(TargetImg));

            // FACE DETECTION FOR TARGET FACE
            if (TargetImg != null) // confirm that the image is valid
            {
                // convert the image to gray scale
                Image<Gray, byte> grayframe = TargetFrame.Convert<Gray, byte>();
                var faces = grayframe.DetectHaarCascade(haar, 1.4, 4,
                    HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
                    new Size(25, 25))[0];
                foreach (var face in faces)
                {
                    // add into the training array
                    TrainedFace = TargetFrame.Copy(face.rect).Convert<Gray, byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
                    trainingImages.Add(TrainedFace);
                    break;
                }
                TargetImageBox.Image = TrainedFace;
            }

            // Read an unknown image
            Image UnknownImg = Image.FromFile(Environment.CurrentDirectory + "\\img\\pic1.jpg");
            Image<Bgr, byte> UnknownFrame = new Image<Bgr, byte>(new Bitmap(UnknownImg));

            // FACE DETECTION PROCESS
            if (UnknownFrame != null) // confirm that the image is valid
            {
                // convert the image to gray scale
                Image<Gray, byte> grayframe = UnknownFrame.Convert<Gray, byte>();
                // Detect faces from the gray-scale image and store them in an array of type 'var', i.e. 'MCvAvgComp[]'
                var faces = grayframe.DetectHaarCascade(haar, 1.4, 4,
                    HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
                    new Size(25, 25))[0];
                // draw a green rectangle on each detected face in the image
                foreach (var face in faces)
                {
                    UnknownFace = UnknownFrame.Copy(face.rect).Convert<Gray, byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
                    MCvTermCriteria termCrit = new MCvTermCriteria(16, 0.001);
                    // Eigen face recognizer
                    EigenObjectRecognizer recognizer = new EigenObjectRecognizer(trainingImages.ToArray(), ref termCrit);
                    // if the face is recognized, draw a green box
                    if (recognizer.Recognize(UnknownFace) != null)
                    {
                        UnknownFrame.Draw(face.rect, new Bgr(Color.Green), 3);
                    }
                    float f = recognizer.GetEigenDistances(UnknownFace)[0];
                    // display the eigen distance
                    UnknownFrame.Draw(f.ToString("R"), ref font, new Point(face.rect.X - 3, face.rect.Y - 3), new Bgr(Color.Red));
                }
                // Display the image
                CamImageBox.Image = UnknownFrame;
            }
        }
    }
}
This area is not yet my specialty, but I will try to help if I can. This is what I am using, and it's working quite nicely.
Try to do all your work on the GPU; it's a lot faster than the CPU for this kind of work!
List<Rectangle> faces = new List<Rectangle>();
List<Rectangle> eyes = new List<Rectangle>();
// Read the frame as an 8-bit Bgr image
RightCameraImage = RightCameraImageCapture.QueryFrame().Resize(480, 360, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);

//Emgu.CV.GPU.GpuInvoke.HasCuda
if (GpuInvoke.HasCuda)
{
    Video.DetectFace.UsingGPU(RightCameraImage, Main.FaceGpuCascadeClassifier, Main.EyeGpuCascadeClassifier, faces, eyes, out detectionTime);
}
else
{
    Video.DetectFace.UsingCPU(RightCameraImage, Main.FaceCascadeClassifier, Main.EyeCascadeClassifier, faces, eyes, out detectionTime);
}

string PersonsName = string.Empty;
Image<Gray, byte> GreyScaleFaceImage;
foreach (Rectangle face in faces)
{
    RightCameraImage.Draw(face, new Bgr(Color.Red), 2);
    GreyScaleFaceImage = RightCameraImage.Copy(face).Convert<Gray, byte>().Resize(200, 200, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
    if (KnownFacesList.Count > 0)
    {
        // MCvTermCriteria for face recognition...
        MCvTermCriteria mCvTermCriteria = new MCvTermCriteria(KnownFacesList.Count, 0.001);
        // Recognize known faces with the Eigen Object Recognizer...
        EigenObjectRecognizer recognizer = new EigenObjectRecognizer(KnownFacesList.ToArray(), KnownNamesList.ToArray(), eigenDistanceThreashhold, ref mCvTermCriteria);
        EigenObjectRecognizer.RecognitionResult recognitionResult = recognizer.Recognize(GreyScaleFaceImage);
        if (recognitionResult != null)
        {
            // Set the person's name...
            PersonsName = recognitionResult.Label;
            // Draw the label for each face detected and recognized...
            RightCameraImage.Draw(PersonsName, ref mCvFont, new Point(face.X - 2, face.Y - 2), new Bgr(Color.LightGreen));
        }
        else
        {
            // Draw the label for each face NOT recognized...
            RightCameraImage.Draw(FaceUnknown, ref mCvFont, new Point(face.X - 2, face.Y - 2), new Bgr(Color.LightGreen));
        }
    }
}
My code in the Video.DetectFace class:
using System;
using Emgu.CV;
using Emgu.CV.GPU;
using System.Drawing;
using Emgu.CV.Structure;
using System.Diagnostics;
using System.Collections.Generic;

namespace Video
{
    //-----------------------------------------------------------------------------------
    // Copyright (C) 2004-2012 by EMGU. All rights reserved. Modified by Chris Sykes.
    //-----------------------------------------------------------------------------------
    public static class DetectFace
    {
        // Use me like this:
        /*
        //Emgu.CV.GPU.GpuInvoke.HasCuda
        if (GpuInvoke.HasCuda)
        {
            DetectUsingGPU(...);
        }
        else
        {
            DetectUsingCPU(...);
        }
        */
        private static Stopwatch watch;

        public static void UsingGPU(Image<Bgr, Byte> image, GpuCascadeClassifier face, GpuCascadeClassifier eye, List<Rectangle> faces, List<Rectangle> eyes, out long detectionTime)
        {
            watch = Stopwatch.StartNew();
            using (GpuImage<Bgr, Byte> gpuImage = new GpuImage<Bgr, byte>(image))
            using (GpuImage<Gray, Byte> gpuGray = gpuImage.Convert<Gray, Byte>())
            {
                Rectangle[] faceRegion = face.DetectMultiScale(gpuGray, 1.1, 10, Size.Empty);
                faces.AddRange(faceRegion);
                foreach (Rectangle f in faceRegion)
                {
                    using (GpuImage<Gray, Byte> faceImg = gpuGray.GetSubRect(f))
                    {
                        // For some reason a clone is required.
                        // Might be a bug of GpuCascadeClassifier in opencv
                        using (GpuImage<Gray, Byte> clone = faceImg.Clone())
                        {
                            Rectangle[] eyeRegion = eye.DetectMultiScale(clone, 1.1, 10, Size.Empty);
                            foreach (Rectangle e in eyeRegion)
                            {
                                Rectangle eyeRect = e;
                                eyeRect.Offset(f.X, f.Y);
                                eyes.Add(eyeRect);
                            }
                        }
                    }
                }
            }
            watch.Stop();
            detectionTime = watch.ElapsedMilliseconds;
        }

        public static void UsingCPU(Image<Bgr, Byte> image, CascadeClassifier face, CascadeClassifier eye, List<Rectangle> faces, List<Rectangle> eyes, out long detectionTime)
        {
            watch = Stopwatch.StartNew();
            using (Image<Gray, Byte> gray = image.Convert<Gray, Byte>()) // Convert it to grayscale
            {
                // normalizes brightness and increases contrast of the image
                gray._EqualizeHist();
                // Detect the faces from the gray scale image and store the locations as rectangles
                // The first dimension is the channel
                // The second dimension is the index of the rectangle in the specific channel
                Rectangle[] facesDetected = face.DetectMultiScale(gray, 1.1, 10, new Size(20, 20), Size.Empty);
                faces.AddRange(facesDetected);
                foreach (Rectangle f in facesDetected)
                {
                    // Set the region of interest on the faces
                    gray.ROI = f;
                    Rectangle[] eyesDetected = eye.DetectMultiScale(gray, 1.1, 10, new Size(20, 20), Size.Empty);
                    gray.ROI = Rectangle.Empty;
                    foreach (Rectangle e in eyesDetected)
                    {
                        Rectangle eyeRect = e;
                        eyeRect.Offset(f.X, f.Y);
                        eyes.Add(eyeRect);
                    }
                }
            }
            watch.Stop();
            detectionTime = watch.ElapsedMilliseconds;
        }
    } // END of CLASS...
} // END of NAMESPACE...
