OpenCV circle detection C# implementation

I need help from any C# and/or OpenCV experts in making my circle detection script more accurate.
In OpenCV, circle detection is accomplished with the HoughCircles algorithm (the circle Hough transform):
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
I am using a C# wrapper of OpenCV for Unity (OpenCVforUnity HoughCircles), which in turn is directly based on the official Java wrapper of OpenCV.
My circle detection code is as follows (without the OpenCV dependencies, of course).
I've attached the two result images, plus the two original images for reference.
What changes are needed to improve the results?
using UnityEngine;
using System.Collections;
using System;
using OpenCVForUnity;

public class HoughCircleSample : MonoBehaviour
{
    Point pt;

    // Use this for initialization
    void Start ()
    {
        Texture2D imgTexture = Resources.Load ("balls2_bw") as Texture2D;
        Mat imgMat = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC3);
        Utils.texture2DToMat (imgTexture, imgMat);
        //Debug.Log ("imgMat dst ToString " + imgMat.ToString ());

        // convert to grayscale and run edge detection
        Mat grayMat = new Mat ();
        Imgproc.cvtColor (imgMat, grayMat, Imgproc.COLOR_RGB2GRAY);
        Imgproc.Canny (grayMat, grayMat, 50, 200);

        Mat circles = new Mat ();
        int minRadius = 0;
        int maxRadius = 0;

        // Apply the Hough Transform to find the circles
        Imgproc.HoughCircles (grayMat, circles, Imgproc.CV_HOUGH_GRADIENT, 3, grayMat.rows () / 8, 200, 100, minRadius, maxRadius);
        Debug.Log ("circles toString " + circles.ToString ());
        Debug.Log ("circles dump" + circles.dump ());

        if (circles.cols () > 0)
        {
            for (int x = 0; x < Math.Min (circles.cols (), 10); x++)
            {
                double[] vCircle = circles.get (0, x);
                if (vCircle == null)
                    break;

                pt = new Point (Math.Round (vCircle [0]), Math.Round (vCircle [1]));
                int radius = (int)Math.Round (vCircle [2]);

                // draw the found circle
                Core.circle (imgMat, pt, radius, new Scalar (255, 0, 0), 1);
            }
        }

        Texture2D texture = new Texture2D (imgMat.cols (), imgMat.rows (), TextureFormat.RGBA32, false);
        Utils.matToTexture2D (imgMat, texture);
        gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
    }
}

This code is in C++, but you can easily convert it to C#.
I needed to change param2 of HoughCircles to 200, resulting in:
HoughCircles(grayMat, circles, CV_HOUGH_GRADIENT, 3, grayMat.rows / 8, 200, 200, 0, 0);
param2 is
the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first.
You also shouldn't feed HoughCircles a "Canny-ed" image, since it already performs edge detection internally. Use grayMat without the Canny edge detection step applied.
Results are shown below. The second image is trickier because of the lighting conditions.
Here is the whole code. Again, it's C++, but may be useful as a reference.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat3b src = imread("path_to_image");

    Mat1b src_gray;
    cvtColor(src, src_gray, CV_BGR2GRAY);

    vector<Vec3f> circles;
    HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 3, src_gray.rows / 8, 200, 200, 0, 0);

    /// Draw the circles detected
    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // circle center
        circle(src, center, 3, Scalar(0, 255, 0), -1, 8, 0);
        // circle outline
        circle(src, center, radius, Scalar(0, 0, 255), 3, 8, 0);
    }

    imshow("src", src);
    waitKey();
    return 0;
}

You have set the fourth parameter (dp, the inverse ratio of the accumulator resolution) to 3, but most of your images have a ratio close to 1, so lowering it could be an improvement. You should also try other values for parameters 6 and 7 (the Canny and accumulator thresholds), because those values depend on the contours extracted by the Canny edge detector. I hope this helps.
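For illustration, here is a hedged sketch of what that adjusted call might look like in the OpenCVForUnity wrapper used above; the threshold values are starting points to tune, not values tested against the poster's images:

Imgproc.HoughCircles(
    grayMat, circles, Imgproc.CV_HOUGH_GRADIENT,
    1,                   // dp: accumulator resolution ratio; try ~1 instead of 3
    grayMat.rows() / 8,  // minimum distance between detected centers
    200,                 // param1: upper threshold of the internal Canny detector
    100,                 // param2: accumulator threshold; raise to reject false circles
    0, 0);               // minRadius, maxRadius: 0 = unconstrained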

I'm getting much closer now, with 2 overlapping circles for each ball object. If I can correct for this, it is basically solved.
Imgproc.Canny (grayMat, grayMat, 500, 200);

Mat circles = new Mat ();
int minRadius = 50;
int maxRadius = 200;
Imgproc.HoughCircles (grayMat, circles, Imgproc.CV_HOUGH_GRADIENT, 1, grayMat.rows () / 4, 1000, 1, minRadius, maxRadius);
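One way to collapse the duplicate detections is a minimal de-duplication pass (a hypothetical helper, not from the original post): since HoughCircles returns circles sorted by accumulator score, keep a circle only if its center is farther than some distance from every circle already accepted, so the stronger of two overlapping detections wins.

double minCenterDist = 40.0; // illustrative value; tune to your ball size
var accepted = new System.Collections.Generic.List<double[]>();
for (int x = 0; x < circles.cols(); x++)
{
    double[] c = circles.get(0, x);
    if (c == null)
        break;

    bool duplicate = false;
    foreach (double[] a in accepted)
    {
        double dx = c[0] - a[0], dy = c[1] - a[1];
        if (Math.Sqrt(dx * dx + dy * dy) < minCenterDist)
        {
            duplicate = true; // too close to a stronger detection
            break;
        }
    }
    if (!duplicate)
        accepted.Add(c); // accepted ends up holding one circle per ball
}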

Related

Cut faraway objects based on depth map

I would like to do a GrabCut that uses a depth map to cut away far objects, for use in a mixed reality application: show just the foreground of what I see, with the background replaced by a virtual reality scene.
The problem is that I tried to adapt some code, and what I get is the foreground cut out but rendered in black; it is actually the mask.
I don't know where the problem lies.
The input is a depth map from a ZED camera.
Here is a picture of the behaviour:
My trial:
private void convertToGrayScaleValues(Mat mask)
{
    // Note: rows() is really the height and cols() the width, but since the
    // same per-value mapping is applied to every pixel, the swapped naming
    // does not change the result.
    int width = mask.rows();
    int height = mask.cols();
    byte[] buffer = new byte[width * height];
    mask.get(0, 0, buffer);

    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            int value = buffer[y * width + x];
            if (value == Imgproc.GC_BGD)
            {
                buffer[y * width + x] = 0;          // for sure background
            }
            else if (value == Imgproc.GC_PR_BGD)
            {
                buffer[y * width + x] = 85;         // probably background
            }
            else if (value == Imgproc.GC_PR_FGD)
            {
                buffer[y * width + x] = (byte)170;  // probably foreground
            }
            else
            {
                buffer[y * width + x] = (byte)255;  // for sure foreground
            }
        }
    }
    mask.put(0, 0, buffer);
}
For each depth frame from the camera:
// Note: the element names are swapped relative to how they are used below
// (erodeElement is passed to dilate and dilateElement to erode).
Mat erodeElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(4, 4));
Mat dilateElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(7, 7));

depth.copyTo(maskFar);
Core.normalize(maskFar, maskFar, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
Imgproc.cvtColor(maskFar, maskFar, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_BINARY);
Imgproc.dilate(maskFar, maskFar, erodeElement);
Imgproc.erode(maskFar, maskFar, dilateElement);

Mat bgModel = new Mat();
Mat fgModel = new Mat();
Imgproc.grabCut(image, maskFar, new OpenCVForUnity.CoreModule.Rect(), bgModel, fgModel, 1, Imgproc.GC_INIT_WITH_MASK);

convertToGrayScaleValues(maskFar); // back to grayscale values
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_TOZERO);

Mat foreground = new Mat(image.size(), CvType.CV_8UC4, new Scalar(0, 0, 0));
image.copyTo(foreground, maskFar);
Utils.fastMatToTexture2D(foreground, texture);
In this case, GrabCut on the depth image might not be the right method to solve all of your issues.
If you insist that the processing be done on the depth image, to find everything that is not on the table and filter out the table part, you may first apply a disparity-based approach for finding objects that are not on the ground. Reference: https://github.com/windowsub0406/StereoVision
Then, based on the V-disparity output image, find the locally connected components that are grouped together. You may follow this question about disparity maps in OpenCV, which asks about a similar way to find the objects that are not on the ground.
If you are OK with RGB-based approaches, then using a deep-learning-based method to recognize the monitor should be the correct approach. It can directly detect the monitor's bounding box; by applying this bounding box to the depth image, you get what you want. There are many packages available for deep-learning-based detection, such as the YOLO series; you may find one that suits you. Reference: https://medium.com/@dvshah13/project-image-recognition-1d316d04cb4c
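As a rough illustration of that last suggestion, here is a minimal sketch in the OpenCVForUnity style used above; the bounding box values are placeholders standing in for whatever an RGB detector returns:

// Hypothetical detection result; replace with the box from your detector.
OpenCVForUnity.CoreModule.Rect box = new OpenCVForUnity.CoreModule.Rect(100, 80, 320, 240);

// Build a binary mask that is white inside the detected box.
Mat boxMask = Mat.zeros(depth.size(), CvType.CV_8UC1);
Imgproc.rectangle(boxMask, box.tl(), box.br(), new Scalar(255), -1); // filled rectangle

// Keep only the depth pixels inside the detection.
Mat objectDepth = new Mat();
depth.copyTo(objectDepth, boxMask);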

MonoGame VertexPositionColor draws in incorrect place

I'm trying to make a rendering library for MonoGame, and I'm currently working on drawing 2D polygons. However, the positions don't make any sense: somehow, drawing them at (0, 0, 0), (100, 0, 0), (0, 100, 0), and (100, 100, 0) doesn't reach the top-left coordinate (0, 0). How do I fix this?
My Code:
BasicEffect basicEffect = new BasicEffect(GraphicsDevice);

VertexPositionColor[] vert = new VertexPositionColor[4];
vert[0].Position = new Vector3(0, 0, 0);
vert[1].Position = new Vector3(100, 0, 0);
vert[2].Position = new Vector3(0, 100, 0);
vert[3].Position = new Vector3(100, 100, 0);

short[] ind = new short[6];
ind[0] = 0;
ind[1] = 2;
ind[2] = 1;
ind[3] = 1;
ind[4] = 2;
ind[5] = 3;

foreach (EffectPass effectPass in basicEffect.CurrentTechnique.Passes)
{
    effectPass.Apply();
    GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColor>(
        PrimitiveType.TriangleList, vert, 0, vert.Length, ind, 0, ind.Length / 3);
}
RESULT: https://imgur.com/GkyqmlY
MonoGame uses different origins for its 2D and 3D coordinate systems. In 2D, (0, 0) is the top-left corner and Y increases toward the bottom of the screen. In 3D, (0, 0, 0) is the center of the screen, and the coordinate grid works much like it does in mathematics: think of the four quadrants, with the z-axis "flattened".
You're drawing in quadrant I. If you want the drawing to be based on the top-left corner, you need to translate your vertices by -1/2 the viewport width and +1/2 the viewport height.
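Alternatively, instead of translating every vertex, you can give BasicEffect an off-center orthographic projection so screen pixels map directly to world units; a minimal sketch, assuming the default viewport:

// Map (0, 0) to the top-left corner and make Y grow downward by swapping
// the bottom/top arguments of the orthographic projection.
basicEffect.World = Matrix.Identity;
basicEffect.View = Matrix.Identity;
basicEffect.Projection = Matrix.CreateOrthographicOffCenter(
    0, GraphicsDevice.Viewport.Width,   // left, right
    GraphicsDevice.Viewport.Height, 0,  // bottom, top (swapped: Y points down)
    0, 1);                              // near, far planes
basicEffect.VertexColorEnabled = true;  // use the per-vertex colors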

OpenTK - Top Left Origin and pixel co-ordinates

I'm trying to learn how to use OpenGL in a 2D application using OpenTK, and have read that the built-in calls like glMatrixMode are not modern. I want to use a top-left origin and pixel coordinates as my shader inputs, and assumed I could define a matrix to do this translation.
I am trying to do this with my own matrix using the OpenTK matrix classes. However, I think I have made a mistake in setting up the projection matrix and want to verify what I should be doing:
TranslationMatrix = Matrix4.Identity * Matrix4.CreateScale(1, -1, 1);
TranslationMatrix = TranslationMatrix * Matrix4.CreateOrthographicOffCenter(0, bounds.Width, 0, bounds.Height, -1, 1);
var TranslatedPoint = TranslationMatrix * new Vector4(new Vector3(1024, 768, 0), 1); // bounds = { 0, 0, 1024, 768 }
This results in TranslatedPoint.Xyz == { 2, -2, 0 }. I thought the x and y coordinates used in gl_Position in the vertex shader should range from -1 to 1.
I guess I've got a major misunderstanding somewhere; what should I be looking at?
OpenTK stores matrices in transposed (row-major) form. This means you have to write everything in reversed order: the vector goes on the left, and matrices are applied left to right.
var TranslationMatrix = Matrix4.CreateOrthographicOffCenter(0, bounds.Width, 0, bounds.Height, -1, 1);
TranslationMatrix = TranslationMatrix * Matrix4.CreateScale(1, -1, 1);
var TranslatedPoint = new Vector4(1024, 768, 0, 1) * TranslationMatrix;
The result should now be [1, -1, 0, 1].
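If you then pass this matrix to a shader, the same convention has to carry over; a hedged sketch, assuming a shader program handle and a uniform named uProjection (both hypothetical names):

// Upload without transposing, then multiply with the vector on the LEFT
// in GLSL to match the row-vector convention used above:
//     gl_Position = vec4(aPosition, 1.0) * uProjection;
int loc = GL.GetUniformLocation(shaderProgram, "uProjection");
GL.UniformMatrix4(loc, false, ref TranslationMatrix);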

How to calculate "Arctan" and "Pow" of IplImage?

I'm trying to calculate the gradient magnitude and orientation of a grayscale image using OpenCvSharp. The problem is that the "Pow" function does not seem to work as expected for an IplImage.
I also want to know how I can calculate tan^-1 (arctan) of the feature image.
Thank you
using (IplImage cvImage = new IplImage("grayImage.png", LoadMode.AnyDepth | LoadMode.GrayScale))
using (IplImage dstXImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels))
using (IplImage dstYImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels))
{
    float[] data = { 0, -1, -1, 2 };
    CvMat kernel = new CvMat(2, 2, MatrixType.F32C1, data);

    Cv.Sobel(cvImage, dstXImage, 1, 0, ApertureSize.Size1);
    Cv.Sobel(cvImage, dstYImage, 0, 1, ApertureSize.Size1);
    Cv.Normalize(dstXImage, dstXImage, 1.0, 0, NormType.L1);
    Cv.Filter2D(cvImage, dstXImage, kernel, new CvPoint(0, 0));
    Cv.Normalize(dstYImage, dstYImage, 1.0, 0, NormType.L1);
    Cv.Filter2D(cvImage, dstYImage, kernel, new CvPoint(0, 0));

    // to calculate gradient magnitude: sqrt[(dx)^2 + (dy)^2]
    dstXImage.Mul(dstXImage, dstXImage);
    dstYImage.Mul(dstYImage, dstYImage);
    IplImage dstXYImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels);
    dstXImage.Add(dstYImage, dstXYImage);
    dstXYImage.Pow(dstXYImage, 1 / 2); // this line not working, output image is black
                                       // (note: 1/2 is integer division and evaluates to 0; use 0.5)

    // to calculate gradient orientation: arctan(dy/dx)
    IplImage thetaImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels);
    dstYImage.Div(dstXImage, thetaImage); // afterwards, need help to calculate arctan

    using (new CvWindow("SrcImage", cvImage))
    using (new CvWindow("DstXImage", dstXImage))
    using (new CvWindow("DstYImage", dstYImage))
    using (new CvWindow("DstXYImage", dstXYImage))
    using (new CvWindow("thetaImage", thetaImage))
    {
        Cv.WaitKey(0);
    }
}
You can use the CartToPolar function for this purpose.
This function calculates the magnitude and angle of 2D vectors:
magnitude(I) = sqrt(x(I)^2 + y(I)^2),
angle(I) = atan2(y(I), x(I)) [ * 180 / pi ]
For example (note the images must be allocated, not just declared):
IplImage dstXYImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels);
IplImage thetaImage = new IplImage(cvImage.Size, cvImage.Depth, cvImage.NChannels);
Cv.CartToPolar(dstXImage, dstYImage, dstXYImage, thetaImage, true); // true = angles in degrees
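If the result windows still look black, it may just be a display-range issue; a hedged sketch (assuming the same 5-argument Normalize overload already used in your code) that rescales both outputs into a visible range before showing them:

// Rescale magnitude and angle into [0, 255] purely for display.
Cv.Normalize(dstXYImage, dstXYImage, 255, 0, NormType.MinMax);
Cv.Normalize(thetaImage, thetaImage, 255, 0, NormType.MinMax);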

Plot 3d surface with ILnumerics and c#

I have used this code:
private void ilPanel1_Load(object sender, EventArgs e)
{
    using (ILScope.Enter())
    {
        ILArray<float> X = new float[] { 0, 0, 1, 1, 2.5F, -2.6F, 5, 9, 1, 38 };
        ILArray<float> Y = new float[] { 1, 0, 1, 0, 1.5F, 0.5F, 5, 9, 1, 39 };
        ILArray<float> Z = new float[] { 0, 0, 1, 1, 0.4F, -0.2F, 5, 9, 1, 39 };

        X = X.Reshape(2, 5);
        Y = Y.Reshape(2, 5);
        Z = Z.Reshape(2, 5);

        ilPanel1.Scene.Add(new ILPlotCube(twoDMode: false) {
            new ILSurface(Z, colormap: Colormaps.Cool) {
                Colors = 1.4f,
                Children = { new ILColorbar() }
            }
        });
    }
}
This produces:
However, I checked this question and tried to adapt the (deprecated) ILNumerics solution, as I did not find other C# code, but I still don't get it: every coordinate (X, Y and Z) corresponds to one slice (m x n) in the array, so it is necessary to reshape the data.
This part is the problem:
X = X.Reshape(2,5);
Y = Y.Reshape(2,5);
Z = Z.Reshape(2,5);
If I do not give the correct size, the program fails. In the example I have 10 elements in each vector, so when reshaping I put (2, 5), which multiply to 10. But what about the case where I have 11 elements? If I put (2, 5) in Reshape I get an error.
What should I do?
I have tried using X = X.Reshape(11); but it fails, and if I use X = X.Reshape(10); it just does not draw anything.
Surfaces plot meshes. One must provide a mesh in order to give the surface a chance to understand how to connect the points, i.e. which points are meant to be neighbors.
The reshape should not be a problem, since the original data must represent the points of a mesh/matrix anyway; so the reshape to that matrix will certainly work.
Reshape(10) creates a vector of length 10. Since vectors represent at most a line, not an area, nothing is drawn. Remember: surfaces draw meshes or matrices.
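To make the constraint concrete, here is a minimal sketch (values are illustrative): surface data must form an m-by-n grid, so 10 points can become a 2x5 mesh, while 11 points cannot fill any rectangular mesh.

ILArray<float> z = new float[] { 0, 0, 1, 1, 0.4F, -0.2F, 5, 9, 1, 39 };
z = z.Reshape(2, 5);    // 2 rows x 5 columns = 10 elements: a drawable mesh
// z = z.Reshape(2, 6); // would throw: 2 * 6 = 12 != 10 elements
// With 11 points the only m x n factorization is 1 x 11, which is a
// vector (a line), so ILSurface would have nothing to draw.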
