I have a thresholded image:
I want to know: can I detect the "white zones" and draw rectangles around them (I also want to save the data)?
Or can I draw a polygon (parallelepiped) and "say" that the area inside it is white?
Thanks.
So in order to detect the white zones, just get the contours of the image. This can be done with:
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(blackWhiteImage,
             contours,
             hierarchy,
             CV_RETR_TREE,
             CV_CHAIN_APPROX_SIMPLE,
             Point(0,0));
Then, you can generate the approximate bounding box that models every contour you extracted with:
vector<vector<Point>> contours_poly( contours.size() );
vector<Rect> boundRect( contours.size() );
for( size_t i = 0; i < contours.size(); i++ ){
    // 3 is the epsilon tuning parameter (bias-variance trade-off)
    // true denotes that the contours are closed
    approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
    boundRect[i] = boundingRect( Mat(contours_poly[i]) );
}
Once this is done, you can access the Rect objects in your boundRect vector just like you would access any other array.
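For example, a minimal sketch that draws each box and prints its data so it can be saved (assuming image is the Mat you want to draw on; that name is hypothetical):
// requires <opencv2/opencv.hpp> and <iostream>, using namespace cv
for( size_t i = 0; i < boundRect.size(); i++ ){
    // draw the box and print x, y, width, height
    rectangle( image, boundRect[i].tl(), boundRect[i].br(), Scalar(0, 255, 0), 2 );
    std::cout << boundRect[i].x << " " << boundRect[i].y << " "
              << boundRect[i].width << " " << boundRect[i].height << std::endl;
}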
Similar code for EmguCV (C#), without the approximation step:
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(binMat, contours, hierarchy, RetrType.External, ChainApproxMethod.ChainApproxSimple);
for (int i = 0; i < contours.Size; ++i)
{
    // ignore tiny contours (noise)
    if (CvInvoke.ContourArea(contours[i]) > 8)
    {
        Rectangle rc = CvInvoke.BoundingRectangle(contours[i]);
        CvInvoke.Rectangle(srcMat, rc, new MCvScalar(0, 0, 255));
    }
}
I would like to do GrabCut using a depth map that cuts away far objects, for use in a mixed reality application. The goal is to show just the foreground of what I see, with the background replaced by a virtual reality scene.
The problem right now: I tried to adapt some code, and what I get is the foreground cut out, but in black; it is actually the mask. I don't know where the problem lies.
The input is a depth map from a ZED camera.
Here is a picture of the behaviour:
My trial:
private void convertToGrayScaleValues(Mat mask)
{
    int width = mask.rows();
    int height = mask.cols();
    byte[] buffer = new byte[width * height];
    mask.get(0, 0, buffer);
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            int value = buffer[y * width + x];
            if (value == Imgproc.GC_BGD)
            {
                buffer[y * width + x] = 0; // for sure background
            }
            else if (value == Imgproc.GC_PR_BGD)
            {
                buffer[y * width + x] = 85; // probably background
            }
            else if (value == Imgproc.GC_PR_FGD)
            {
                buffer[y * width + x] = (byte)170; // probably foreground
            }
            else
            {
                buffer[y * width + x] = (byte)255; // for sure foreground
            }
        }
    }
    mask.put(0, 0, buffer);
}
For each depth frame from the camera:
Mat erodeElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(4, 4));
Mat dilateElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(7, 7));
depth.copyTo(maskFar);
Core.normalize(maskFar, maskFar, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
Imgproc.cvtColor(maskFar, maskFar, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_BINARY);
Imgproc.dilate(maskFar, maskFar, erodeElement);
Imgproc.erode(maskFar, maskFar, dilateElement);
Mat bgModel = new Mat();
Mat fgModel = new Mat();
Imgproc.grabCut(image, maskFar, new OpenCVForUnity.CoreModule.Rect(), bgModel, fgModel, 1, Imgproc.GC_INIT_WITH_MASK);
convertToGrayScaleValues(maskFar); // back to grayscale values
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_TOZERO);
Mat foreground = new Mat(image.size(), CvType.CV_8UC4, new Scalar(0, 0, 0));
image.copyTo(foreground, maskFar);
Utils.fastMatToTexture2D(foreground, texture);
In this case, graph cut on the depth image might not be the right method to solve your whole issue.
If you insist that the processing be done on the depth image, i.e. find everything that is not on the table and filter the table part out, you may first apply a disparity-based approach to find the objects that are not on the ground. Reference: https://github.com/windowsub0406/StereoVision
Then, based on the V-disparity output image, find the locally connected components that are grouped together. You may follow this link, which asks about a similar way to use a disparity map in OpenCV to find the objects that are not on the ground.
If you are OK with RGB-based approaches, then using any deep-learning-based method to recognize the monitor should be the correct approach; it can directly detect the monitor's bounding box. By applying this bounding box to the depth image, you may get what you want. There are many deep-learning packages available, such as the YOLO series; you may find one that suits you. Reference: https://medium.com/@dvshah13/project-image-recognition-1d316d04cb4c
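A minimal C++ sketch of that last step (monitorBox and depth are hypothetical names for the detector's bounding box and the aligned depth map):
// requires <opencv2/opencv.hpp>, using namespace cv
// keep only the depth pixels that fall inside the detected bounding box
Mat keepInsideBox(const Mat& depth, const Rect& monitorBox)
{
    Rect box = monitorBox & Rect(0, 0, depth.cols, depth.rows); // clip to image bounds
    Mat masked = Mat::zeros(depth.size(), depth.type());
    depth(box).copyTo(masked(box));
    return masked;
}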
I have a colour image of type Image<Hsv, Byte>, and another image of type Image<Gray, Byte> of the same size that is all black with some all-white shapes. From the black-and-white image, I found the contours of the shapes using findContours(). What I want is to create a new image, or modify the original colour image, so that it shows only what is inside the contours, with everything else transparent, without checking the two images' values pixel by pixel (I did this, and it takes too long). Is there any way to do this?
For example, I have the original image, the black and white image, and the final product I need.
I'm completely new to EmguCV, so I'm not saying this is the best approach, but it seems to work:
Create a new draw surface
Draw the original image
Change the white pixels in the mask image to transparent pixels
Draw the transparent mask on top of the original image
The result image looks like your desired outcome.
void Main()
{
    var path = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.Desktop),
        "images");
    var original = new Image<Bgr, Byte>(Path.Combine(path, "vz7Oo1W.png"));
    var mask = new Image<Bgr, Byte>(Path.Combine(path, "vIQUvUU.png"));

    var bitmap = new Bitmap(original.Width, original.Height);
    using (Graphics g = Graphics.FromImage(bitmap))
    {
        g.DrawImage(original.Bitmap, 0, 0);
        g.DrawImage(MakeTransparent(mask.Bitmap), 0, 0);
    }
    bitmap.Save(Path.Combine(path, "new.png"));
}
public static Bitmap MakeTransparent(Bitmap image)
{
    Bitmap b = new Bitmap(image);
    var tolerance = 10;
    for (int i = b.Size.Width - 1; i >= 0; i--)
    {
        for (int j = b.Size.Height - 1; j >= 0; j--)
        {
            var col = b.GetPixel(i, j);
            // pixels close enough to white become fully transparent
            if (255 - col.R < tolerance &&
                255 - col.G < tolerance &&
                255 - col.B < tolerance)
            {
                b.SetPixel(i, j, Color.Transparent);
            }
        }
    }
    return b;
}
I'm trying to get the entire dark "D" from this image:
With this code, I get this result:
```
Cv2.Threshold(im_gray, threshImage, 80, 255, ThresholdTypes.BinaryInv); // Threshold to find contour
Cv2.FindContours(
    threshImage,
    out contours,
    out hierarchyIndexes,
    mode: RetrievalModes.Tree,
    method: ContourApproximationModes.ApproxSimple
);

double largest_area = 0;
int largest_contour_index = 0;
Rect rect = new Rect();

// Search for the biggest contour
for (int i = 0; i < contours.Length; i++)
{
    double area = Cv2.ContourArea(contours[i]); // Find the area of the contour
    if (area > largest_area)
    {
        largest_area = area;
        largest_contour_index = i; // Store the index of the largest contour
        rect = Cv2.BoundingRect(contours[i]); // Find the bounding rectangle for the biggest contour
    }
}
Cv2.DrawContours(finalImage, contours, largest_contour_index, new Scalar(0, 0, 0), -1, LineTypes.Link8, hierarchyIndexes, int.MaxValue);
```
But when I save finalImage, I get the result below:
How can I get the entire black letter?
Apart from my comment, I tried out another way, using the statistical properties of the grayscale image.
For this image, I used the mean value of the image to select an optimal threshold value.
The threshold is set at 1.33 times the mean, so every pixel value under 133% of the mean is treated as part of the dark letter:
med = np.mean(gray)          # mean intensity of the grayscale image
optimal = int(med * 1.33)    # selection of the optimal threshold
ret, th = cv2.threshold(gray, optimal, 255, cv2.THRESH_BINARY)
This is what I got as a result:
Now invert this image and obtain the largest contour, and there you go: you will have the big D.
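A minimal C++ sketch of that last step (assuming th is the thresholded image from above; names are illustrative):
// requires <opencv2/opencv.hpp>, using namespace cv and std
Mat inverted;
bitwise_not(th, inverted);   // the dark letter becomes white

vector<vector<Point>> contours;
findContours(inverted, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

if (!contours.empty())
{
    // keep only the largest contour: the "D"
    size_t largest = 0;
    for (size_t i = 1; i < contours.size(); i++)
        if (contourArea(contours[i]) > contourArea(contours[largest]))
            largest = i;

    Mat result = Mat::zeros(inverted.size(), CV_8U);
    drawContours(result, contours, (int)largest, Scalar(255), FILLED);
}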
EDIT:
For some images with a heavily skewed histogram, you can use the median of the grayscale image instead of the mean.
I need help from any C# and or OpenCV experts in making my circle detection script more accurate.
In OpenCV, circle detection is accomplished by something called the HoughCircles algorithm or framework:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
I am using a C# wrapper of OpenCV for Unity (OpenCVforUnity HoughCircles), which in turn is directly based on the official Java wrapper of OpenCV.
My circle detection code is as follows (without the OpenCV dependencies, of course).
I've also attached 2 images so you can see the results.
What changes are needed to improve the results? I've also included the original 2 images for reference.
using UnityEngine;
using System.Collections;
using System;
using OpenCVForUnity;

public class HoughCircleSample : MonoBehaviour
{
    Point pt;

    // Use this for initialization
    void Start()
    {
        Texture2D imgTexture = Resources.Load("balls2_bw") as Texture2D;
        Mat imgMat = new Mat(imgTexture.height, imgTexture.width, CvType.CV_8UC3);
        Utils.texture2DToMat(imgTexture, imgMat);
        //Debug.Log("imgMat dst ToString " + imgMat.ToString());

        Mat grayMat = new Mat();
        Imgproc.cvtColor(imgMat, grayMat, Imgproc.COLOR_RGB2GRAY);
        Imgproc.Canny(grayMat, grayMat, 50, 200);

        Mat circles = new Mat();
        int minRadius = 0;
        int maxRadius = 0;

        // Apply the Hough Transform to find the circles
        Imgproc.HoughCircles(grayMat, circles, Imgproc.CV_HOUGH_GRADIENT, 3, grayMat.rows() / 8, 200, 100, minRadius, maxRadius);
        Debug.Log("circles toString " + circles.ToString());
        Debug.Log("circles dump " + circles.dump());

        if (circles.cols() > 0)
        {
            for (int x = 0; x < Math.Min(circles.cols(), 10); x++)
            {
                double[] vCircle = circles.get(0, x);
                if (vCircle == null)
                    break;

                pt = new Point(Math.Round(vCircle[0]), Math.Round(vCircle[1]));
                int radius = (int)Math.Round(vCircle[2]);

                // draw the found circle
                Core.circle(imgMat, pt, radius, new Scalar(255, 0, 0), 1);
            }
        }

        Texture2D texture = new Texture2D(imgMat.cols(), imgMat.rows(), TextureFormat.RGBA32, false);
        Utils.matToTexture2D(imgMat, texture);
        gameObject.GetComponent<Renderer>().material.mainTexture = texture;
    }
}
This code is in C++, but you can easily convert it to C#.
I needed to change param2 of HoughCircles to 200, resulting in:
HoughCircles(grayMat, circles, CV_HOUGH_GRADIENT, 3, grayMat.rows / 8, 200, 200, 0, 0);
which is
the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first.
You also shouldn't feed HoughCircles a "Canny-ed" image, since it already performs edge detection internally. Use grayMat without the Canny edge detection step applied.
Results are shown below. The second one is more tricky, because of the light conditions.
Here is the whole code. Again, it's C++, but may be useful as a reference.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat3b src = imread("path_to_image");
    Mat1b src_gray;
    cvtColor(src, src_gray, CV_BGR2GRAY);

    vector<Vec3f> circles;
    HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 3, src_gray.rows / 8, 200, 200, 0, 0);

    /// Draw the circles detected
    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // circle center
        circle(src, center, 3, Scalar(0, 255, 0), -1, 8, 0);
        // circle outline
        circle(src, center, radius, Scalar(0, 0, 255), 3, 8, 0);
    }

    imshow("src", src);
    waitKey();
    return 0;
}
In the fourth parameter you have set 3, but most of your images have a ratio close to 1, so that could be a probable improvement. You also have to try other sets of values for parameters 6 and 7, because those values depend on the contours extracted by the Canny edge detector. I hope this helps; a hedged starting point along those lines is sketched below.
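The values for parameters 6 and 7 here are illustrative starting points to tune, not known-good ones:
// requires <opencv2/opencv.hpp>, using namespace cv
// dp = 1: accumulator at full image resolution
HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 1, src_gray.rows / 8, 100, 50, 0, 0);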
I'm getting much closer now, with 2 overlapping circles for each ball object. If I can correct for this, it is basically solved.
Imgproc.Canny(grayMat, grayMat, 500, 200);
Mat circles = new Mat();
int minRadius = 50;
int maxRadius = 200;
Imgproc.HoughCircles(grayMat, circles, Imgproc.CV_HOUGH_GRADIENT, 1, grayMat.rows() / 4, 1000, 1, minRadius, maxRadius);
I would like to do motion detection in C# (using EmguCV 3.0) to remove objects in motion or in the foreground, in order to draw an overlay.
Here is a sample test I did with a Kinect (because it's a depth camera).
How can I get started with EmguCV 3.0?
I tried many background removal examples that do not work.
It seems OpticalFlow is a good start, but there is no example for EmguCV 3.0.
If I find the largest blob, how can I find its contours?
Can someone help me get started?
EDIT: 17/06/2015
In EmguCV 3.0.0 RC I don't see OpticalFlow in the package or the documentation:
http://www.emgu.com/wiki/files/3.0.0-rc1/document/html/b72c032d-59ae-c36f-5e00-12f8d621dfb8.htm
There are only DenseOpticalFlow and OpticalFlowDualTVL1???
This is the AbsDiff code:
var grayFrame = frame.Convert<Gray, Byte>();
var motionFrame = grayFrame.AbsDiff(backFrame)
                           .ThresholdBinary(new Gray(20), new Gray(255))
                           .Erode(2)
                           .Dilate(2);
Result:
I don't know how to get only the motion in white.
This is the blob code:
Image<Bgr, Byte> smoothedFrame = new Image<Bgr, byte>(frame.Size);
CvInvoke.GaussianBlur(frame, smoothedFrame, new Size(3, 3), 1); // filter out noise

Mat forgroundMask = new Mat();
fgDetector.Apply(smoothedFrame, forgroundMask);

CvBlobs blobs = new CvBlobs();
blobDetector.Detect(forgroundMask.ToImage<Gray, byte>(), blobs);
blobs.FilterByArea(400, int.MaxValue);
blobTracker.Update(blobs, 1.0, 0, 1);

foreach (var pair in blobs)
{
    CvBlob b = pair.Value;
    CvInvoke.Rectangle(frame, b.BoundingBox, new MCvScalar(255.0, 255.0, 255.0), 2);
}
Result:
Why are there so many false positives?
This is the MOG2 code:
forgroundDetector.Apply(frame, forgroundMask);
motionHistory.Update(forgroundMask);

var motionMask = GetMotionMask();
Image<Bgr, Byte> motionImage = new Image<Bgr, byte>(motionMask.Size);
CvInvoke.InsertChannel(motionMask, motionImage, 0);

Rectangle[] rects;
using (VectorOfRect boundingRect = new VectorOfRect())
{
    motionHistory.GetMotionComponents(segMask, boundingRect);
    rects = boundingRect.ToArray();
}
foreach (Rectangle comp in rects) { ...
Result:
If I select the biggest area, how can I get the contour of the object?
First, I can give you some example Optical Flow code.
Let oldImage and newImage be variables that hold the previous and current frames. In my code, they're of type Image<Gray, Byte>.
// prep containers for x and y vectors
Image<Gray, float> velx = new Image<Gray, float>(newImage.Size);
Image<Gray, float> vely = new Image<Gray, float>(newImage.Size);

// use the Horn and Schunck dense optical flow algorithm
OpticalFlow.HS(oldImage, newImage, true, velx, vely, 0.1d, new MCvTermCriteria(100));

// color each pixel
Image<Hsv, Byte> coloredMotion = new Image<Hsv, Byte>(newImage.Size);
for (int i = 0; i < coloredMotion.Width; i++)
{
    for (int j = 0; j < coloredMotion.Height; j++)
    {
        // Pull the relevant intensities from the velx and vely matrices
        double velxHere = velx[j, i].Intensity;
        double velyHere = vely[j, i].Intensity;

        // Determine the color (i.e., the angle)
        double degrees = Math.Atan(velyHere / velxHere) / Math.PI * 90 + 45;
        if (velxHere < 0)
        {
            degrees += 90;
        }
        coloredMotion.Data[j, i, 0] = (Byte)degrees;
        coloredMotion.Data[j, i, 1] = 255;

        // Determine the intensity (i.e., the distance)
        double intensity = Math.Sqrt(velxHere * velxHere + velyHere * velyHere) * 10;
        coloredMotion.Data[j, i, 2] = (Byte)((intensity > 255) ? 255 : intensity);
    }
}
// coloredMotion is now an image that shows intensity of motion by lightness
// and direction by color.
Regarding the larger question of how to remove the foreground:
If you have a way to get a static background image, that's the best way to start. Then the foreground can be detected with the AbsDiff method, using Erode and Dilate or a Gaussian blur to smooth the image, followed by blob detection, as sketched below.
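A minimal C++ sketch of that pipeline (assuming background and frame are grayscale Mats of the same size; the threshold of 20 and the iteration counts are illustrative):
// requires <opencv2/opencv.hpp>, using namespace cv and std
// difference against the static background, then binarize
Mat diff, mask;
absdiff(frame, background, diff);
threshold(diff, mask, 20, 255, THRESH_BINARY);

// erode then dilate to remove speckle noise before blob detection
erode(mask, mask, Mat(), Point(-1, -1), 2);
dilate(mask, mask, Mat(), Point(-1, -1), 2);

// each external contour of the mask is one moving blob
vector<vector<Point>> contours;
findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);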
For simple foreground detection, I found Optical Flow to be way too much processing (8fps max), whereas the AbsDiff method was just as accurate but had no effect on framerate.
Regarding contours: if you're merely looking to find the size, position, and other moments, then the blob detection in the AbsDiff tutorial above, which uses Image.FindContours(...), seems to be sufficient.
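For the moments themselves, a minimal C++ sketch (assuming contour is one of the contours found above):
// requires <opencv2/opencv.hpp>, using namespace cv
Moments m = moments(contour);
double area = m.m00;                              // size
Point2d centroid(m.m10 / m.m00, m.m01 / m.m00);   // position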
If not, I would start looking at the CvBlobDetector class as used in this tutorial. There's a built-in DrawBlob function that might come in handy.