Imgproc.FindContours returns empty contours - C#

I use OpenCV 2.4.11 for Xamarin.Android with OpenCvBinding. I'm trying to find the largest color area in an image.
static public Tuple<Bitmap, double> GetArea(Bitmap srcBitmap)
{
    Mat mat = new Mat();
    Mat gray = new Mat();
    Mat mat2 = new Mat();
    double max = 0;
    Mat Hierarchy = new Mat();
    List<MatOfPoint> contours = new List<MatOfPoint>();

    Utils.BitmapToMat(srcBitmap, mat);
    Imgproc.CvtColor(mat, gray, Imgproc.ColorRgba2gray);
    Imgproc.AdaptiveThreshold(gray, mat2, 255, Imgproc.AdaptiveThreshGaussianC, Imgproc.ThreshBinaryInv, 1111, 0);
    Imgproc.FindContours(mat2, contours, Hierarchy, Imgproc.RetrTree, Imgproc.ChainApproxSimple);

    foreach (MatOfPoint contour in contours)
    { // never goes here
        if (max < Imgproc.ContourArea(contour)) max = Imgproc.ContourArea(contour);
    }

    Utils.MatToBitmap(mat2, srcBitmap);
    return new Tuple<Bitmap, double>(srcBitmap, max);
}
Input Image
If I comment out the line with FindContours, I get an excellent picture for finding contours.
Thresholded image
FindContours returns the correct image (my reputation doesn't allow me to add another link), but(!!) the list of contours stays empty, so I can't get the area of these contours.
I would be glad of any help. Thanks!

Use `IList<MatOfPoint> contours = new JavaList<MatOfPoint>();` instead of a plain .NET `List<MatOfPoint>`. The Java side of the binding can only fill a `JavaList`, so with a managed `List` the call succeeds but the list stays empty.

Related

EmguCV PCACompute InputData - Convert VectorOfPoint to Mat

The CvInvoke.PCACompute method expects an IInputArray of data to do the analysis.
I tried using the source image as the input Mat, but the eigenvectors computed are abnormal, as far as I understand them. And I am not able to convert my contour VectorOfPoint to a Mat that can be fed in.
I could also not find good literature online about implementing PCA analysis in EmguCV / C#.
Can someone please point me in the right direction?
Below is my code -
public static void getOrientation(Image<Gray, byte> inputImage)
{
    Image<Gray, Byte> cannyGray = inputImage.Canny(85, 255);
    VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
    Mat eigen_vectors = new Mat(inputImage.Size, DepthType.Cv8U, 1);
    Mat mean_mat = new Mat(inputImage.Size, DepthType.Cv8U, 1);
    CvInvoke.FindContours(cannyGray, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
    Point[][] cont_points = contours.ToArrayOfArray();
    Mat contour_mat = new Mat();
    contour_mat.SetTo(cont_points[0]);
    //CvInvoke.PCACompute(cannyGray.Mat, mean_mat, eigen_vectors, 2);
    CvInvoke.PCACompute(contours, mean_mat, eigen_vectors);
}
You have to convert each of your contours to a Mat containing its coordinates.
Here is an example of how you can do it:
// points are the points of one contour
var pointList = points.ToArray();

// use DepthType.Cv64F to allow values > 255
Mat dataPoints = new Mat(pointList.Length, 2, DepthType.Cv64F, 1);
double[] pointsData = new double[(int)dataPoints.Total * dataPoints.NumberOfChannels];

// store the point coordinates in the Mat
for (int i = 0; i < dataPoints.Rows; i++)
{
    pointsData[i * dataPoints.Cols] = pointList[i].X;
    pointsData[i * dataPoints.Cols + 1] = pointList[i].Y;
}

// copy the coordinate array into dataPoints
dataPoints.SetTo(pointsData);

// compute PCA
Mat mean = new Mat();
Mat eigenvectors = new Mat();
Mat eigenvalues = new Mat();
CvInvoke.PCACompute(dataPoints, mean, eigenvectors);
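Once PCACompute returns, the orientation of the contour can be read off the first eigenvector: its (x, y) components give the direction of the principal axis. A minimal sketch of that last step in plain C# (the values `0.707, 0.707` below are hypothetical stand-ins for row 0 of `eigenvectors`):

```csharp
using System;

public class OrientationDemo
{
    // Angle of the principal axis, given the first eigenvector (vx, vy).
    public static double OrientationDegrees(double vx, double vy)
    {
        // Atan2 returns the vector's angle in radians, range (-pi, pi].
        return Math.Atan2(vy, vx) * 180.0 / Math.PI;
    }

    static void Main()
    {
        // A contour elongated along the diagonal yields an eigenvector
        // close to (0.707, 0.707), i.e. an orientation near 45 degrees.
        Console.WriteLine(OrientationDegrees(0.707, 0.707));
    }
}
```

In the Emgu case, the two components would be read from the first row of the `eigenvectors` Mat before being passed to a helper like this.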

How to detect bullet holes on a target using OpenCV in C#

I am trying to identify the holes in the target and score them accordingly. I have tried finding contours, and that does a lot of the work, but it does not give me a 100% result: sometimes it is accurate and sometimes it misses some bullets. I do not know how to fix this; I am new to OpenCV and image processing. Maybe it is due to the live stream from the camera and the light frequency. Kindly help me solve this problem.
Details of my target
top is 6 feet from the ground surface
camera is 1 foot from the ground level
Target image
Image With Holes
Gray Scale Image
Here is my code to get video from the camera:
private void button1_Click(object sender, EventArgs e)
{
    if (capture == null)
    {
        Cursor.Current = Cursors.WaitCursor;
        //capture = new Capture(0);
        capture = new Capture("rtsp://admin:admin123@192.168.1.64:554/live.avi");
        capture.ImageGrabbed += Capture_ImageGrabbed;
        capture.Start();
        Cursor.Current = Cursors.Default;
    }
    index = 0;
    if (index < panlist.Count)
    {
        panlist[++index].BringToFront();
    }
    CamPnelList[0].BackColor = Color.Red;
    Rifle = true;
}
private void Capture_ImageGrabbed(object sender, EventArgs e)
{
    try
    {
        Mat m = new Mat();
        capture.Retrieve(m);
        imginpt = m.ToImage<Gray, byte>();
        RecImg = m.ToImage<Rgb, byte>();
        if (rec.X != 0 && rec.Y != 0 && CamPnelList[0].BackColor == Color.LightGreen)
        {
            imginpt.ROI = rec;
            RecImg.ROI = rec;
            imgout1 = new Image<Gray, byte>(imginpt.Width, imginpt.Height, new Gray(0));
            imgout1 = imginpt.Convert<Gray, byte>().ThresholdBinary(new Gray(100), new Gray(255));
            imginpt.ROI = Rectangle.Empty;
            tempimg1 = imgout1.CopyBlank();
            imgout1.CopyTo(tempimg1);
            cam1pictureBox.Image = imgout1.Bitmap;
            //Application.DoEvents();
        }
        else
        {
            cam1pictureBox.Image = imginpt.Bitmap;
        }
        //System.Threading.Thread.Sleep(50);
    }
    catch (Exception x)
    {
        // MessageBox.Show(x.ToString());
    }
}
Here is how I am extracting contours:
contoursimg1 = new Image<Gray, byte>(tempimg1.Width, tempimg1.Height, new Gray(0));
Emgu.CV.Util.VectorOfVectorOfPoint contours = new Emgu.CV.Util.VectorOfVectorOfPoint();
Mat Hier = new Mat();
CvInvoke.FindContours(tempimg1, contours, Hier, Emgu.CV.CvEnum.RetrType.Tree, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
CvInvoke.DrawContours(contoursimg1, contours, -1, new MCvScalar(255, 0, 0));
I've done a few similar projects using video as a source. When the target objects are small but of a fairly well defined size, I've taken the difference between frames and used blob detection, which is a good, fast algorithm for real-time video. I noticed the perspective seems to have changed a little between your two example shots, so rather than do that, I tried the following code:
const int blobSizeMin = 1;
const int blobSizeMax = 5;
var white = new Bgr(255, 255, 255).MCvScalar;

Mat frame = CvInvoke.Imread(@"e:\temp\Frame.jpg", ImreadModes.Grayscale);
Mat mask = CvInvoke.Imread(@"e:\temp\Mask.jpg", ImreadModes.Grayscale);
frame.CopyTo(frame = new Mat(), mask);
CvInvoke.BitwiseNot(frame, frame);
CvInvoke.Threshold(frame, frame, 128, 255, ThresholdType.ToZero);

var blobs = new Emgu.CV.Cvb.CvBlobs();
var blobDetector = new Emgu.CV.Cvb.CvBlobDetector();
Image<Gray, Byte> img = frame.ToImage<Gray, Byte>();
blobDetector.Detect(img, blobs);

int bulletNumber = 0;
foreach (var blob in blobs.Values)
{
    if (blob.BoundingBox.Width >= blobSizeMin && blob.BoundingBox.Width <= blobSizeMax
        && blob.BoundingBox.Height >= blobSizeMin && blob.BoundingBox.Height <= blobSizeMax)
    {
        bulletNumber++;
        Point textPos = new Point((int)blob.Centroid.X - 1, (int)blob.Centroid.Y - 1);
        CvInvoke.PutText(frame, bulletNumber.ToString(), textPos, FontFace.HersheyPlain,
            fontScale: 1, color: white);
    }
}
CvInvoke.Imwrite(@"e:\temp\Out.png", frame);
It inverts the frame so the holes are white, discards values below 50%, and then does blob detection, only taking notice of blobs between one and five pixels in size. That was close to working, but it picked up a few extra points at the top left, top right, and bottom left that look pretty similar to bullet holes even to the eye. What I've done in the past, and what works well when the camera is mounted in a fixed location, is to use a black & white mask image to remove anything outside the area of interest:
Mask.jpg
Once that was added I detected a total of 21 bullet holes which looks correct:
Out.png
But assuming you're detecting the shots in real time, I think you should have good luck looking at the difference between frames, which should also remove the need for a mask image. Take a look at the CvInvoke.Subtract method; starting from some existing code, you can use something like the following:
CvInvoke.Subtract(frame, lastFrame, diff);
CvInvoke.CvtColor(diff, gray, ColorConversion.Bgr2Gray);
CvInvoke.Threshold(gray, gray, detectThreshold, 255, ThresholdType.ToZero);
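The frame-difference idea itself doesn't depend on Emgu. Here is a self-contained sketch of the same two steps (absolute difference, then threshold-to-zero) on two grayscale frames stored as plain byte arrays; the frame contents and the threshold of 50 are made up for illustration:

```csharp
using System;

public class FrameDiffDemo
{
    // Absolute difference of two grayscale frames, zeroing pixels below
    // detectThreshold (the equivalent of Subtract + ThresholdType.ToZero).
    public static byte[] DiffAndThreshold(byte[] frame, byte[] lastFrame, byte detectThreshold)
    {
        var diff = new byte[frame.Length];
        for (int i = 0; i < frame.Length; i++)
        {
            int d = Math.Abs(frame[i] - lastFrame[i]);
            diff[i] = (byte)(d >= detectThreshold ? d : 0);
        }
        return diff;
    }

    static void Main()
    {
        byte[] last = { 10, 10, 10, 10 };
        byte[] cur  = { 10, 10, 200, 10 };  // one new "hole" appeared
        byte[] diff = DiffAndThreshold(cur, last, 50);

        int changed = 0;
        foreach (byte b in diff) if (b > 0) changed++;
        Console.WriteLine(changed); // only the new hole survives the threshold
    }
}
```

Anything that survives the threshold is a candidate new hole, which is why differencing removes the need for a static mask: unchanged background cancels itself out.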

Logo recognition using EmguCV

I have implemented this code and have detected the logo in a couple of images.
I was able to get some results like this, but I need to count how many images contain this logo, maybe by finding all keypoints of the logo inside the big image, or something else.
I can see I have found the logo inside the big image, but I want to confirm it programmatically, using EmguCV.
Please help.
-- edited
This is the piece of code with the homography. Can you guide me a bit here? I am totally new to EmguCV and OpenCV; please help me count these inliers.
public static Mat Draw(Mat modelImage, Mat observedImage, out long matchTime)
{
    Mat homography;
    VectorOfKeyPoint modelKeyPoints;
    VectorOfKeyPoint observedKeyPoints;
    using (VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch())
    {
        Mat mask;
        FindMatch(modelImage, observedImage, out matchTime, out modelKeyPoints, out observedKeyPoints, matches,
            out mask, out homography);

        //Draw the matched keypoints
        Mat result = new Mat(); // new Size(400,400), modelImage.Depth, modelImage.NumberOfChannels);
        Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
            matches, result, new MCvScalar(255, 255, 255), new MCvScalar(255, 255, 255), mask);

        #region draw the projected region on the image
        if (homography != null)
        {
            //draw a rectangle along the projected model
            Rectangle rect = new Rectangle(Point.Empty, modelImage.Size);
            PointF[] pts = new PointF[]
            {
                new PointF(rect.Left, rect.Bottom),
                new PointF(rect.Right, rect.Bottom),
                new PointF(rect.Right, rect.Top),
                new PointF(rect.Left, rect.Top)
            };
            pts = CvInvoke.PerspectiveTransform(pts, homography);
            Point[] points = Array.ConvertAll<PointF, Point>(pts, Point.Round);
            using (VectorOfPoint vp = new VectorOfPoint(points))
            {
                CvInvoke.Polylines(result, vp, true, new MCvScalar(255, 0, 0, 255), 5);
            }
        }
        #endregion

        return result;
    }
}
I think my answer is a bit too late, but maybe I can help someone else. With the following code snippet you can count the matching feature points, which answers your question (counting the lines). The important variable is the mask: it contains the information about which matches are inliers.
private int CountHowManyPairsExist(Mat mask)
{
    Matrix<Byte> matrix = new Matrix<Byte>(mask.Rows, mask.Cols);
    mask.CopyTo(matrix);
    var matched = matrix.ManagedArray;
    var list = matched.OfType<byte>().ToList();
    var count = list.Count(a => a.Equals(1));
    return count;
}
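The counting step itself does not depend on Emgu at all: after CopyTo, the managed array is just a sequence of bytes in which an inlier is marked with 1. A self-contained sketch of the same LINQ count over a made-up mask array:

```csharp
using System;
using System.Linq;

public class InlierCountDemo
{
    // Count mask entries equal to 1 (the matches kept as homography inliers).
    public static int CountInliers(byte[] mask)
    {
        return mask.Count(b => b == 1);
    }

    static void Main()
    {
        // Hypothetical mask: 1 marks a match consistent with the homography.
        byte[] mask = { 1, 0, 1, 1, 0 };
        Console.WriteLine(CountInliers(mask)); // prints 3
    }
}
```

A common follow-up is to treat the logo as "found" only when this count exceeds some minimum (say, 8 to 10 inliers), since a homography fitted to very few matches is unreliable.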

Detecting threshold area(s)

I have a thresholded image:
I want to know: can I detect the "white zones" and draw rectangles around them (saving the data is also wanted)?
Or can I draw a parallelepiped (polygon) and "say" the area inside it is white?
Thanks.
So in order to detect the white zones, just get the contours of the image. This can be done with:
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(blackWhiteImage,
             contours,
             hierarchy,
             CV_RETR_TREE,
             CV_CHAIN_APPROX_SIMPLE,
             Point(0, 0));
Then, you can generate the approximate bounding box that models every contour you extracted with:
vector<vector<Point>> contours_poly(contours.size());
vector<Rect> boundRect(contours.size());
for (size_t i = 0; i < contours.size(); i++) {
    approxPolyDP(Mat(contours[i]),
                 contours_poly[i],
                 3,      // epsilon: tuning parameter for the bias-variance trade-off
                 true);  // true denotes that contours are closed
    boundRect[i] = boundingRect(Mat(contours_poly[i]));
}
Once this is done, you can access the Rect objects in your boundRect vector just like you would access any other array.
Similar code for EmguCV (C#), without the approximation step:
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(binMat, contours, hierarchy, RetrType.External, ChainApproxMethod.ChainApproxSimple);
for (int i = 0; i < contours.Size; ++i)
{
    if (CvInvoke.ContourArea(contours[i]) > 8)
    {
        Rectangle rc = CvInvoke.BoundingRectangle(contours[i]);
        CvInvoke.Rectangle(srcMat, rc, new MCvScalar(0, 0, 255));
    }
}
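For saving the rectangle data, it helps to know that the bounding rectangle is nothing more than the min/max of the contour's coordinates. A self-contained sketch of that computation on a hypothetical point set (the three points below are made up):

```csharp
using System;
using System.Drawing;

public class BoundingBoxDemo
{
    // Axis-aligned bounding box of a contour: min/max over X and Y.
    public static Rectangle BoundingBox(Point[] contour)
    {
        int minX = int.MaxValue, minY = int.MaxValue;
        int maxX = int.MinValue, maxY = int.MinValue;
        foreach (var p in contour)
        {
            if (p.X < minX) minX = p.X;
            if (p.Y < minY) minY = p.Y;
            if (p.X > maxX) maxX = p.X;
            if (p.Y > maxY) maxY = p.Y;
        }
        // +1 so the box includes the max pixel, matching OpenCV's boundingRect.
        return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }

    static void Main()
    {
        var contour = new[] { new Point(3, 4), new Point(10, 4), new Point(7, 9) };
        Console.WriteLine(BoundingBox(contour)); // {X=3,Y=4,Width=8,Height=6}
    }
}
```

The resulting X, Y, Width, and Height are exactly the four numbers worth persisting for each white zone.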

Emgu CV - How can I get all occurrences of a pattern in an image

Hi, I already have a working solution, but with one issue:
// The screenshot will be stored in this bitmap.
Bitmap capture = new Bitmap(rec.Width, rec.Height, PixelFormat.Format24bppRgb);
using (Graphics g = Graphics.FromImage(capture))
{
    g.CopyFromScreen(rec.Location, new System.Drawing.Point(0, 0), rec.Size);
}

MCvSURFParams surfParam = new MCvSURFParams(500, false);
SURFDetector surfDetector = new SURFDetector(surfParam);

// Template image
Image<Gray, Byte> modelImage = new Image<Gray, byte>("template.jpg");
// Extract features from the object image
ImageFeature[] modelFeatures = surfDetector.DetectFeatures(modelImage, null);

// Prepare current frame
Image<Gray, Byte> observedImage = new Image<Gray, byte>(capture);
ImageFeature[] imageFeatures = surfDetector.DetectFeatures(observedImage, null);

// Create a SURF Tracker using a k-d tree
Features2DTracker tracker = new Features2DTracker(modelFeatures);
Features2DTracker.MatchedImageFeature[] matchedFeatures = tracker.MatchFeature(imageFeatures, 2);
matchedFeatures = Features2DTracker.VoteForUniqueness(matchedFeatures, 0.8);
matchedFeatures = Features2DTracker.VoteForSizeAndOrientation(matchedFeatures, 1.5, 20);
HomographyMatrix homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(matchedFeatures);

// Merge the object image and the observed image into one image for display
Image<Gray, Byte> res = modelImage.ConcateVertical(observedImage);

#region draw lines between the matched features
foreach (Features2DTracker.MatchedImageFeature matchedFeature in matchedFeatures)
{
    PointF p = matchedFeature.ObservedFeature.KeyPoint.Point;
    p.Y += modelImage.Height;
    res.Draw(new LineSegment2DF(matchedFeature.SimilarFeatures[0].Feature.KeyPoint.Point, p), new Gray(0), 1);
}
#endregion

#region draw the projected region on the image
if (homography != null)
{
    // draw a rectangle along the projected model
    Rectangle rect = modelImage.ROI;
    PointF[] pts = new PointF[] {
        new PointF(rect.Left, rect.Bottom),
        new PointF(rect.Right, rect.Bottom),
        new PointF(rect.Right, rect.Top),
        new PointF(rect.Left, rect.Top)
    };
    homography.ProjectPoints(pts);
    for (int i = 0; i < pts.Length; i++)
        pts[i].Y += modelImage.Height;
    res.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Gray(255.0), 2);
}
#endregion

pictureBoxScreen.Image = res.ToBitmap();
The result is:
My problem is that homography.ProjectPoints(pts) only captures the first occurrence of the pattern (the white rectangle in the picture above).
How can I project all occurrences of the template, i.e. how can I get a rectangle for each occurrence of the template in the image?
I faced a problem similar to yours in my master's thesis. Basically you have two options:
1. Use a clustering algorithm such as hierarchical k-means, or a point-density one such as DBSCAN (it depends on two parameters, but you can make it threshold-free in the bidimensional R^2 space).
2. Use a multiple robust model fitting estimation technique such as J-Linkage. In this more advanced technique you cluster points that share a homography, instead of clustering points that are close to each other in Euclidean space.
Once you have partitioned your matches into clusters, you can estimate a homography from the matches belonging to each cluster.
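As an illustration of the first option, here is a minimal DBSCAN-style grouping of match locations in R^2, in plain C#. The points and the parameters (eps, minPts) below are made up; in practice the points would be the observed-image coordinates of the matched keypoints, and each resulting cluster would get its own homography estimate:

```csharp
using System;
using System.Collections.Generic;

public class MatchClusterDemo
{
    // Minimal DBSCAN over 2-D points: returns a cluster id per point
    // (0 = unvisited while running, -1 = noise, >0 = cluster id).
    public static int[] Dbscan((double X, double Y)[] pts, double eps, int minPts)
    {
        int[] label = new int[pts.Length]; // all start at 0 (unvisited)
        int cluster = 0;
        for (int i = 0; i < pts.Length; i++)
        {
            if (label[i] != 0) continue;
            var seeds = Neighbors(pts, i, eps);
            if (seeds.Count < minPts) { label[i] = -1; continue; } // noise for now
            label[i] = ++cluster;
            var queue = new Queue<int>(seeds);
            while (queue.Count > 0)
            {
                int j = queue.Dequeue();
                if (label[j] == -1) label[j] = cluster; // noise becomes border point
                if (label[j] != 0) continue;            // already assigned
                label[j] = cluster;
                var jn = Neighbors(pts, j, eps);
                if (jn.Count >= minPts)                 // core point: keep expanding
                    foreach (int k in jn) queue.Enqueue(k);
            }
        }
        return label;
    }

    // Indices of points within eps of point i (excluding i itself).
    static List<int> Neighbors((double X, double Y)[] pts, int i, double eps)
    {
        var result = new List<int>();
        for (int j = 0; j < pts.Length; j++)
        {
            double dx = pts[j].X - pts[i].X, dy = pts[j].Y - pts[i].Y;
            if (j != i && dx * dx + dy * dy <= eps * eps) result.Add(j);
        }
        return result;
    }

    static void Main()
    {
        // Two hypothetical groups of matched keypoints, one per logo occurrence.
        var pts = new (double X, double Y)[]
        {
            (10, 10), (11, 10), (10, 11),       // occurrence A
            (100, 100), (101, 101), (100, 101)  // occurrence B
        };
        int[] labels = Dbscan(pts, eps: 5, minPts: 2);
        Console.WriteLine(string.Join(",", labels)); // two distinct cluster ids
    }
}
```

Each cluster's matches would then be fed separately to the homography estimation, yielding one projected rectangle per occurrence instead of a single one.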
