C# EmguCV - Circle line thickness calculation

I want to calculate the line thickness of a circle, like the example below:
Which method can help me do this?
Thanks for your reply, David. I'm new to EmguCV, so I don't know where to start. I can produce the following image using Canny edge detection, but I can't calculate the distance because I don't know which code to use. Which code can I use?
private void button1_Click(object sender, EventArgs e)
{
    string strFileName = string.Empty;
    OpenFileDialog ofd = new OpenFileDialog();
    if (ofd.ShowDialog() == DialogResult.OK)
    {
        //Load the image
        Image<Bgr, Byte> img1 = new Image<Bgr, Byte>(ofd.FileName);
        //Convert img1 to grayscale and filter out the noise
        Image<Gray, Byte> gray1 = img1.Convert<Gray, Byte>().PyrDown().PyrUp();
        //Canny edge detector
        Image<Gray, Byte> cannyGray = gray1.Canny(120, 180);
        pictureBox1.Image = cannyGray.ToBitmap();
    }
}

Let me guide you a little bit further.
//load the image
Image<Gray, Byte> loaded_img = new Image<Gray, byte>(Filename);
Image<Gray, Byte> Thresh_img = loaded_img.CopyBlank();
//threshold to make it binary if necessary (Otsu picks the threshold automatically)
CvInvoke.cvThreshold(loaded_img.Ptr, Thresh_img.Ptr, 0, 255, Emgu.CV.CvEnum.THRESH.CV_THRESH_OTSU | Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY);
//get the contours of the circles
Contour<Point> Circle_cont = Thresh_img.FindContours();
//height & width of the bounding rectangle of the first (outer) contour
int height_Circle_1 = Circle_cont.BoundingRectangle.Height;
int width_Circle_1 = Circle_cont.BoundingRectangle.Width;
//move to the next contour (the inner circle)
Circle_cont = Circle_cont.HNext;
int height_Circle_2 = Circle_cont.BoundingRectangle.Height;
int width_Circle_2 = Circle_cont.BoundingRectangle.Width;
//ring thickness in px: half the difference between the outer and inner bounding boxes
double ring_thickness_1 = Math.Abs(height_Circle_1 - height_Circle_2) / 2.0;
double ring_thickness_2 = Math.Abs(width_Circle_1 - width_Circle_2) / 2.0;
This should give you the thickness of your ring in pixels. If you want it in cm, you need to scale the pixel value by the real-world length of a pixel. I applied this code snippet to your example image and got 56 pixels for both thickness values.
I hope this helps.
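If your Emgu CV version no longer exposes Contour<Point> and CvInvoke.cvThreshold, here is a rough sketch of the same idea with the newer API (3.x and later). This is my own adaptation, not the code from the answer above, and it assumes the binary image contains exactly two circle contours (the outer and inner edge of the ring); in practice you may need to sort or filter the contours first.

using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

public static double RingThicknessPx(string filename)
{
    using (Image<Gray, byte> loaded = new Image<Gray, byte>(filename))
    using (Image<Gray, byte> thresh = loaded.CopyBlank())
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    using (Mat hierarchy = new Mat())
    {
        // Otsu picks the binarization threshold automatically
        CvInvoke.Threshold(loaded, thresh, 0, 255, ThresholdType.Otsu | ThresholdType.Binary);

        // find the outer and inner circle contours
        CvInvoke.FindContours(thresh, contours, hierarchy, RetrType.List, ChainApproxMethod.ChainApproxSimple);
        if (contours.Size < 2)
            throw new InvalidOperationException("Expected at least two contours (outer and inner circle).");

        Rectangle first = CvInvoke.BoundingRectangle(contours[0]);
        Rectangle second = CvInvoke.BoundingRectangle(contours[1]);

        // ring thickness is half the difference between the two bounding boxes
        return Math.Abs(first.Width - second.Width) / 2.0;
    }
}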

Related

TemplateMatching using EMGU CV

I'm trying to use TemplateMatching from EMGU CV in order to find (match) a template image in a source image.
Below is the code (sample taken from Stack Overflow):
public static void TemplateMatch()
{
    Image<Bgr, Byte> sourceImg = new Image<Bgr, Byte>(@"D:\ImageA.png");
    Image<Bgr, Byte> templateImg = new Image<Bgr, Byte>(@"D:\ImageB.png");
    Image<Bgr, byte> lastImage = sourceImg.Copy();
    using (Image<Gray, float> resultImg = sourceImg.MatchTemplate(templateImg, Emgu.CV.CvEnum.TemplateMatchingType.CcorrNormed))
    {
        double[] minVal, maxVal;
        System.Drawing.Point[] minLocations, maxLocations;
        resultImg.MinMax(out minVal, out maxVal, out minLocations, out maxLocations);
        if (maxVal[0] > 0.9)
        {
            Rectangle match = new Rectangle(maxLocations[0], templateImg.Size);
            lastImage.Draw(match, new Bgr(Color.Red), 3);
        }
        ImageViewer.Show(lastImage);
    }
}
But the disadvantage is that if the template image has a different size than its occurrence in the source image, there is no match.
It was suggested that I loop over the scales of the image in order to find the match, but I'm not sure how to do that in C#.
My aim is to find image A (the template image) in image B (the source image) using template matching in C#, irrespective of scaling and resolution.
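One possible way to handle the scaling, as a sketch only: loop over a range of scales, resize the template at each scale, run MatchTemplate, and keep the location with the best score. The scale range, the step, and names like FindBestMatch and bestScore are my own illustrative assumptions, not part of the original question or an accepted answer.

using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

public static Rectangle? FindBestMatch(Image<Bgr, byte> source, Image<Bgr, byte> template, double minScore = 0.9)
{
    double bestScore = double.MinValue;
    Rectangle? bestMatch = null;

    // try template scales from 50% to 150% in 10% steps (arbitrary illustrative range)
    for (double scale = 0.5; scale <= 1.5; scale += 0.1)
    {
        int w = (int)(template.Width * scale);
        int h = (int)(template.Height * scale);

        // skip scales where the resized template no longer fits inside the source
        if (w < 1 || h < 1 || w > source.Width || h > source.Height)
            continue;

        using (Image<Bgr, byte> scaled = template.Resize(w, h, Inter.Linear))
        using (Image<Gray, float> result = source.MatchTemplate(scaled, TemplateMatchingType.CcorrNormed))
        {
            double[] minVal, maxVal;
            Point[] minLoc, maxLoc;
            result.MinMax(out minVal, out maxVal, out minLoc, out maxLoc);

            // remember the scale with the highest matching score
            if (maxVal[0] > bestScore)
            {
                bestScore = maxVal[0];
                bestMatch = new Rectangle(maxLoc[0], new Size(w, h));
            }
        }
    }

    return bestScore >= minScore ? bestMatch : null;
}

With the images above you would call FindBestMatch(sourceImg, templateImg) and, if it returns a rectangle, draw it on lastImage the same way as in the single-scale code.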

Emgu CV image sharpening and contour detection

I am working on a project where I need to identify dots from IR lasers on a surface. For that I use a camera with an IR filter.
Some input images:
There can be several dots, too. So I tried to sharpen this image from the webcam and then use the FindContours method of Emgu CV.
Here is my code:
public static Image<Gray, byte> Sharpen(Image<Gray, byte> image, int w, int h, double sigma1, double sigma2, int k)
{
    //kernel dimensions must be odd
    w = (w % 2 == 0) ? w - 1 : w;
    h = (h % 2 == 0) ? h - 1 : h;
    //apply Gaussian smoothing using w, h and sigma
    var gaussianSmooth = image.SmoothGaussian(w, h, sigma1, sigma2);
    //obtain the mask by subtracting the Gaussian-smoothed image from the original one
    var mask = image - gaussianSmooth;
    //weight the obtained mask by k
    mask *= k;
    //add the mask back onto the original image
    image += mask;
    return image;
}
private void ProcessFrame(object sender, EventArgs arg)
{
    Mat frame = new Mat();
    if (_capture.Retrieve(frame, CameraDevice))
    {
        Image<Bgr, byte> original = frame.ToImage<Bgr, byte>();
        Image<Gray, byte> img = Sharpen(frame.ToImage<Gray, byte>(), 100, 100, 100, 100, 30);
        Image<Gray, byte> thresh = new Image<Gray, byte>(img.Size);
        CvInvoke.PyrDown(img, thresh);
        CvInvoke.PyrUp(thresh, thresh);
        Image<Gray, byte> mask = new Image<Gray, byte>(thresh.Size);
        Image<Gray, byte> cannyImg = thresh.Canny(10, 50);
        VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
        Mat hierarchy = new Mat();
        CvInvoke.FindContours(
            cannyImg,
            contours,
            hierarchy,
            RetrType.External,
            ChainApproxMethod.ChainApproxSimple
        );
        Image<Bgr, byte> resultImage = img.Copy().Convert<Bgr, byte>();
        int contCount = contours.Size;
        for (int i = 0; i < contCount; i++)
        {
            using (VectorOfPoint contour = contours[i])
            {
                resultImage.Draw(CvInvoke.BoundingRectangle(contour), new Bgr(255, 0, 0), 5);
            }
        }
        captureBox.Image = original.Bitmap;
        cvBox.Image = resultImage.Bitmap;
    }
}
Example of result image:
So it works as I expect almost all the time, but the framerate is very low: I'm getting about 10-15 fps at a resolution of 640x480. I need to be able to do the same thing at 1920x1080 with at least 30 fps. It's my first time with OpenCV and Emgu.CV. What can I do to make it perform better?
I solved this by just setting a threshold, so that the image turns black and white only. By adjusting the threshold I was able to achieve the same results, if not better in terms of clarity, and performance also improved drastically since there is no heavy processing going on.
Here is a snippet using the AForge.NET library instead of EmguCV:
var bitmap = eventArgs.Frame;
var filter = new Grayscale(0.2125, 0.7154, 0.0721);
var grayImage = filter.Apply(bitmap);
var thresholdFilter = new Threshold(CurrentThreshold);
thresholdFilter.ApplyInPlace(grayImage);
var blobCounter = new BlobCounter();
blobCounter.ProcessImage(grayImage);
var rectangles = blobCounter.GetObjectsRectangles();
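For the EmguCV route described in the answer above, a minimal sketch of the simplified threshold-plus-contours pipeline could look like the following. This is my own guess at what the poster meant, assuming frame is the Mat captured in ProcessFrame and that the IR dots are much brighter than the background; the threshold value 200 is an arbitrary starting point to tune.

Image<Gray, byte> gray = frame.ToImage<Gray, byte>();
// everything brighter than the threshold becomes white, the rest black
Image<Gray, byte> binary = gray.ThresholdBinary(new Gray(200), new Gray(255));
// find the blobs (dots) directly on the binary image, with no sharpening or Canny step
VectorOfVectorOfPoint dotContours = new VectorOfVectorOfPoint();
CvInvoke.FindContours(binary, dotContours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);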

Disparity Map in Emgu.CV

I'm trying to compute a disparity map in C# using Emgu.CV.
I read the images from this article as bitmapLeft and bitmapRight. For reference I used the example code from here.
Here is my source code:
bitmapLeft = (Bitmap) mainForm.pictureBoxLeft.Image;
bitmapRight = (Bitmap)mainForm.pictureBoxRight.Image;
Image<Gray, Byte> imageLeft = new Image<Gray, Byte>(bitmapLeft);
Image<Gray, Byte> imageRight = new Image<Gray, Byte>(bitmapRight);
Image<Gray, Byte> imageDisparity = new Image<Gray, Byte>(bitmapLeft.Width, bitmapLeft.Height);
StereoBM stereoBM = new StereoBM(16, 15);
StereoMatcherExtensions.Compute(stereoBM, imageLeft, imageRight, imageDisparity);
Image bitmapDisparity = imageDisparity.ToBitmap();
However, the resulting bitmap is all black.
I think your problem is at the end. The result of calling StereoMatcherExtensions.Compute is a Mat/Image with a depth of Cv16S; I converted that back to Cv8U and was able to display it. Here is my example, using the same two images.
Mat leftImage = new Mat(@"C:\Users\jones_d\Desktop\Disparity\LeftImage.png", ImreadModes.Grayscale);
Mat rightImage = new Mat(@"C:\Users\jones_d\Desktop\Disparity\RightImage.png", ImreadModes.Grayscale);
CvInvoke.Imshow("Left", leftImage);
CvInvoke.Imshow("Right", rightImage);
Mat imageDisparity = new Mat();
StereoBM stereoBM = new StereoBM(16, 15);
StereoMatcherExtensions.Compute(stereoBM, leftImage, rightImage, imageDisparity);
Mat show = new Mat();
imageDisparity.ConvertTo(show, DepthType.Cv8U);
CvInvoke.Imshow("Disparity", show);
CvInvoke.WaitKey(0);
Here are the images:
Which seems to match the result at: Depth Map from Image
Doug
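As a side note (my own addition, not part of Doug's answer): StereoBM produces 16-bit fixed-point disparities, so a plain ConvertTo to Cv8U can still look quite dark. Normalizing the disparity into the 0-255 range before displaying it often gives a clearer preview:

Mat display = new Mat();
// scale the min..max disparity range to 0..255 for display purposes only
CvInvoke.Normalize(imageDisparity, display, 0, 255, NormType.MinMax, DepthType.Cv8U);
CvInvoke.Imshow("Disparity (normalized)", display);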

How to compare two images and extract its difference?

I have two sets of images which have the same size and pixels. Now I have to compare selectedFrame, which is the first image, to backImageFrame, which is the second image. I need to get the difference between the images and extract it so I can output it in an ImageBox. I am using the AbsDiff function of EmguCV:
selectedFrame.ROI = recArray[random];
backImageFrame.ROI = recArray[random];
// backImageFrame = selectedFrame.AbsDiff(backImageFrame);
CvInvoke.AbsDiff(selectedFrame, backImageFrame, backImageFrame);
imgTry.Image = backImageFrame;
imageBox1.Image = selectedFrame;
The imgTry ImageBox doesn't show anything.
You can use the Image API to find the difference between one image and the other, then you can define a threshold for the difference to be considered and apply that.
The code will be something like:
Image<Bgr, Byte> Frame; //current frame from the camera
Image<Bgr, Byte> Previous_Frame; //previous frame acquired
Image<Bgr, Byte> Difference; //difference between the two frames
int Threshold = 60; //stores the threshold for thread access
Difference = Previous_Frame.AbsDiff(Frame); //find the absolute difference
/*Play with the value 60 to set a threshold for movement*/
Difference = Difference.ThresholdBinary(new Bgr(Threshold, Threshold, Threshold), new Bgr(255, 255, 255)); //if value > 60 set to 255, 0 otherwise
Follow up with this example to understand it better.
This works for me.
Image<Gray, Byte> img1 = picPrev.Convert<Gray, Byte>();
Image<Gray, Byte> img2 = picCurrent.Convert<Gray, Byte>();
Image<Gray, Byte> img3;
img3 = img1 - img2; //Here the difference is applied.
pictureBox3.Image = img3.ToBitmap();
EmguCV AbsDiff based comparison
Bitmap inputMap = //bitmap source image
Image<Gray, Byte> sourceImage = new Image<Gray, Byte>(inputMap);
Bitmap tempBitmap = //Bitmap template image
Image<Gray, Byte> templateImage = new Image<Gray, Byte>(tempBitmap);
Image<Gray, byte> resultImage = new Image<Gray, byte>(templateImage.Width, templateImage.Height);
CvInvoke.AbsDiff(sourceImage, templateImage, resultImage);
double diff = CvInvoke.CountNonZero(resultImage);
diff = (diff / (templateImage.Width * templateImage.Height)) * 100; // this will give you the difference in percentage
In my experience, this works better than MatchTemplate-based comparison. MatchTemplate fails to capture very minimal changes between two images, but AbsDiff is able to capture very small differences as well.

Finding contour points in emgucv

I am working with EmguCV to find a contour's essential points, then save those points to a file so a user can redraw the shape in the future. So, my goal is this image:
example
My solution is this:
1. import the image into a PictureBox
2. edge detection with the Canny algorithm
3. find the contours and save the points
I found a lot of points with the code below, but I can't redraw the original shape from these points!
using Emgu.CV;
using Emgu.Util;

private void button1_Click(object sender, EventArgs e)
{
    Bitmap bmp = new Bitmap(pictureBox1.Image);
    Image<Bgr, Byte> img = new Image<Bgr, byte>(bmp);
    Image<Gray, Byte> gray = img.Convert<Gray, Byte>().PyrDown().PyrUp();
    Gray cannyThreshold = new Gray(80);
    Gray cannyThresholdLinking = new Gray(120);
    Gray circleAccumulatorThreshold = new Gray(120);
    Image<Gray, Byte> cannyEdges = gray.Canny(cannyThreshold, cannyThresholdLinking).Not();
    Bitmap color;
    Bitmap bgray;
    IdentifyContours(cannyEdges.Bitmap, 50, true, out bgray, out color);
    pictureBox1.Image = color;
}
public void IdentifyContours(Bitmap colorImage, int thresholdValue, bool invert, out Bitmap processedGray, out Bitmap processedColor)
{
    Image<Gray, byte> grayImage = new Image<Gray, byte>(colorImage);
    Image<Bgr, byte> color = new Image<Bgr, byte>(colorImage);
    grayImage = grayImage.ThresholdBinary(new Gray(thresholdValue), new Gray(255));
    if (invert)
    {
        grayImage._Not();
    }
    using (MemStorage storage = new MemStorage())
    {
        for (Contour<Point> contours = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_LIST, storage); contours != null; contours = contours.HNext)
        {
            Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * 0.015, storage);
            if (currentContour.BoundingRectangle.Width > 20)
            {
                CvInvoke.cvDrawContours(color, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
                color.Draw(currentContour.BoundingRectangle, new Bgr(0, 255, 0), 1);
            }
            Point[] pts = currentContour.ToArray();
            foreach (Point p in pts)
            {
                //add the points to the listbox
                listBox1.Items.Add(p);
            }
        }
    }
    processedColor = color.ToBitmap();
    processedGray = grayImage.ToBitmap();
}
In your code you have added a contour approximation operation:
Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * 0.015, storage);
This approximation fits your contour to the nearest polygon, so the actual points get shifted. If you want to reproduce the same image, you should not do any approximation.
Refer to this thread.
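To illustrate the point, here is a minimal sketch of redrawing a stored contour without any approximation. The names savedPoints, width and height are placeholders for the points you saved from currentContour.ToArray() and the size of the original image; they are my assumptions, not variables from the code above.

// blank white canvas the same size as the original image
Image<Bgr, byte> canvas = new Image<Bgr, byte>(width, height, new Bgr(255, 255, 255));
// draw the raw contour points as a closed polyline; no ApproxPoly, so the shape is unchanged
canvas.DrawPolyline(savedPoints, true, new Bgr(0, 0, 0), 1);
pictureBox1.Image = canvas.ToBitmap();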
