I would like to recognize the position (center) and the angle of some small components with OpenCV in C#. To achieve that, I grab pictures from a webcam and process them with the Canny algorithm. Unfortunately, the results are not as good as expected: sometimes they are OK, sometimes they are not.
I have attached an example image from the cam and the corresponding output of OpenCV.
I hope someone can give me hints, or maybe some code snippets, on how to achieve my desired results. Is this something that is usually done with AI?
Example images:
Input:
Output 1:
Output 2:
Expected:
Thanks.
My current code:
Mat src = BitmapConverter.ToMat(lastFrame);
Mat dst = new Mat();

// Edge detection; the two scrollbars supply the Canny thresholds
Cv2.Canny(src, dst, hScrollBar1.Value, hScrollBar2.Value);

// Find contours
OpenCvSharp.Point[][] contours; //vector<vector<Point>> contours;
HierarchyIndex[] hierarchyIndexes; //vector<Vec4i> hierarchy;
Cv2.FindContours(dst, out contours, out hierarchyIndexes,
    RetrievalModes.External, ContourApproximationModes.ApproxTC89L1);

// Draw the bounding rectangle of each contour
foreach (OpenCvSharp.Point[] element in contours)
{
    var boundingRect = Cv2.BoundingRect(element);
    Cv2.Rectangle(dst,
        new OpenCvSharp.Point(boundingRect.X, boundingRect.Y),
        new OpenCvSharp.Point(boundingRect.X + boundingRect.Width,
                              boundingRect.Y + boundingRect.Height),
        new Scalar(255, 0, 0), 3);
}

// Show the results and wait for a key so the windows stay open
new Window("dst image", dst);
new Window("src image", src);
Cv2.WaitKey();
If you already have a ROI (the box) and you just want to compute its actual orientation, you could take the contour inside the right box and compute its moments. A tutorial on how to do this is here (sorry, C++ only).
Once you have the moments, computing the orientation is easy; to do this, follow the solution here.
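For reference, a minimal OpenCvSharp sketch of that step, to stay consistent with the code above (assuming `contour` is one of the OpenCvSharp.Point[] arrays returned by FindContours):

// Minimal sketch: center and orientation of one contour via image moments.
Moments m = Cv2.Moments(contour);

// Centroid from the raw moments.
double cx = m.M10 / m.M00;
double cy = m.M01 / m.M00;

// Orientation of the principal axis from the central moments,
// in radians, measured from the x-axis.
double angle = 0.5 * Math.Atan2(2.0 * m.Mu11, m.Mu20 - m.Mu02);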
If you have trouble figuring out the right box itself, you are actually halfway there with the Canny boxes. You could then further try:
Equalize source image:
Posterize next (to 2 levels):
Threshold (255):
Then you can use all the Canny boxes you found in the centre as masks to get the right contour in the thresholded image. You can then find the biggest contour there and compute its orientation with image moments. Hope this helps!
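In OpenCvSharp the three steps above might look roughly like this (a sketch, assuming `src` is the grabbed BGR frame from the question; posterizing to 2 levels and thresholding collapse into a single binary threshold, with Otsu picking the split point):

Mat gray = new Mat();
Cv2.CvtColor(src, gray, ColorConversionCodes.BGR2GRAY);

// Equalize the source image to spread out the intensity range.
Cv2.EqualizeHist(gray, gray);

// Posterize to 2 levels / threshold in one step.
Mat binary = new Mat();
Cv2.Threshold(gray, binary, 0, 255, ThresholdTypes.Binary | ThresholdTypes.Otsu);

// One of the canny boxes found above can then act as a mask via its ROI.
Mat roi = new Mat(binary, boundingRect);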
I'm trying to inspect the connector pins in a charger. The job is to inspect these parameters:
Both pins are present
Both are of defined height
Both are straight
I used template matching in C# & EMGU to extract a template by creating an ROI; after matching, the following code checks whether each pin is present:
Image<Bgr, Byte> templateImage = pintofind;
Image<Bgr, Byte> sourceImage = new Image<Bgr, Byte>(GrabImage.Bitmap);

using (Image<Gray, float> imgMatch = sourceImage.MatchTemplate(
    templateImage, Emgu.CV.CvEnum.TM_TYPE.CV_TM_CCOEFF_NORMED))
{
    Point[] MAX_Loc, Min_Loc;
    double[] min, max;
    imgMatch.MinMax(out min, out max, out Min_Loc, out MAX_Loc);

    using (Image<Gray, double> RG_Image = imgMatch.Convert<Gray, double>().Copy())
    {
        // a normalized correlation above 0.75 counts as "pin present"
        if (max[0] > 0.75)
        {
            Rectangle match = new Rectangle(MAX_Loc[0], templateImage.Size);
            sourceImage.Draw(match, new Bgr(Color.LimeGreen), 2);
            lblresulttext.Text = "OK";
            lbindgood.BackColor = Color.LimeGreen;
        }
        else
        {
            Rectangle match = new Rectangle(MAX_Loc[0], templateImage.Size);
            sourceImage.Draw(match, new Bgr(Color.Red), 2);
            lblresulttext.Text = "NG";
            lbindbad.BackColor = Color.Red;
        }
    }
    ibresult.Image = sourceImage;
}
This is the result I get:
It works well for checking the presence of the pins, but now I need to check whether both are of the same height and whether both are straight, like in this image below:
Please help.
I'm not sure MatchTemplate is the correct approach if you want to identify the specific failure. It might be usable if you are guaranteed a consistent rotation and only need to check whether the actual image matches the template. But if you need to measure the length or identify specific failures, you might need a template for each kind of failure, and that may not be feasible.
I would approach this problem by thresholding the image to separate the background from the foreground; presumably you have control over the lighting, which makes this fairly simple. You should then be able to use the contour features to find the position and rotation of the charger and compare it to a reference contour, see MatchShapes. You will probably need some way to isolate the pins if they are the important part, for example by finding the rotated bounding box of the charger and ignoring everything except the top part that contains the pins.
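A rough sketch of that pipeline in OpenCvSharp (the equivalent calls exist in EmguCV under slightly different names; `src` and the stored `referenceContour` are assumptions):

// Threshold the frame to separate the charger from the background.
Mat gray = new Mat(), binary = new Mat();
Cv2.CvtColor(src, gray, ColorConversionCodes.BGR2GRAY);
Cv2.Threshold(gray, binary, 0, 255, ThresholdTypes.Binary | ThresholdTypes.Otsu);

// Find external contours and keep the largest one as the charger outline.
Cv2.FindContours(binary, out OpenCvSharp.Point[][] contours, out HierarchyIndex[] hierarchy,
    RetrievalModes.External, ContourApproximationModes.ApproxSimple);
OpenCvSharp.Point[] charger = contours[0];
foreach (var c in contours)
    if (Cv2.ContourArea(c) > Cv2.ContourArea(charger)) charger = c;

// The rotated bounding box gives position and rotation in one step:
// box.Center, box.Size and box.Angle describe the charger's pose.
RotatedRect box = Cv2.MinAreaRect(charger);

// Similarity to a stored reference contour (0 means identical shapes).
double score = Cv2.MatchShapes(charger, referenceContour, ShapeMatchModes.I1);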
If you can compensate for the rotation of the charger, I think using two templates is better than one.
The first template might be the top part of the pin and the second the bottom part. After detecting the two pairs of top and bottom parts, you can measure the height and direction of each pin.
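Once the two parts are located, the measurement itself is just geometry. A sketch, where `top` and `bottom` are the hypothetical match locations for one pin and `expectedLengthPx` and the tolerances are values you would calibrate:

double dx = bottom.X - top.X;
double dy = bottom.Y - top.Y;

double lengthPx = Math.Sqrt(dx * dx + dy * dy);          // pin height in pixels
double tiltDeg = Math.Atan2(dx, dy) * 180.0 / Math.PI;   // 0 = perfectly vertical

bool isStraight = Math.Abs(tiltDeg) < 2.0;                        // tolerance is a guess
bool isRightHeight = Math.Abs(lengthPx - expectedLengthPx) < 5.0; // ditto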
I am using EmguCV 2.3.0.1416 from a simple console application (.NET 4.0 and C#) and I have a question about Canny, edge detection, etc. Given the following code:
var colours = new[]
{
new Bgr(Color.YellowGreen),
new Bgr(Color.Turquoise),
new Bgr(Color.Blue),
new Bgr(Color.DeepPink)
};
// Convert to grayscale, remove noise and get the canny
using (var image = new Image<Bgr, byte>(fileName)
.Convert<Gray, byte>()
.PyrDown()
.PyrUp()
.Canny(new Gray(180),
new Gray(90)))
{
// Save the canny out to a file and then get each contour within
// the canny and get the polygon for it, colour each a different
// colour from a selection so we can easily see if they join up
image.Save(cannyFileName);
var contours = image
.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
RETR_TYPE.CV_RETR_EXTERNAL);
using (var debug = new Image<Bgr, byte>(image.Size))
{
int colIndex = 0;
for (; contours != null; contours = contours.HNext)
{
Contour<Point> poly = contours
.ApproxPoly(contours.Perimeter*0.05,
contours.Storage);
debug.Draw(poly, colours[colIndex], 1);
colIndex++;
if (colIndex > 3) colIndex = 0;
}
debug.Save(debugFileName);
}
}
I get this output (this is actually just a part of the image but it shows what I am asking about):
As you can see, it has a blue line with a little bit of pink and then a green line. The real object has a single solid edge here, so I want this to be detected as a single line so that I can be sure it is the edge of what I am looking at.
The original image looks like this (I have zoomed in, but you can see it has a very distinctive edge that I was expecting to be able to find easily).
If I look at just the canny I can see the gap there, so I tried adjusting the parameters for creating the canny (the threshold and linking threshold), but they made no difference.
I also dilated and then eroded the canny (using the same value, 10 incidentally, for the iterations parameter) and that seemed to do the trick, but could I lose accuracy by doing this? It just feels a bit wrong somehow.
So, how should I ensure that I get a single line in this instance?
Did you try smoothing before the Canny? See the sketch below.
I found this link, which may be useful for you:
http://www.indiana.edu/~dll/B657/B657_lec_hough.pdf
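On the smoothing suggestion: a Gaussian blur could be dropped into the existing chain before the Canny call. A sketch using the same EmguCV fluent API as the question; the kernel size of 5 is just a starting point:

using (var image = new Image<Bgr, byte>(fileName)
    .Convert<Gray, byte>()
    .SmoothGaussian(5)   // 5x5 Gaussian kernel; smooths noise before edge detection
    .Canny(new Gray(180), new Gray(90)))
{
    image.Save(cannyFileName);
}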
What exactly do you mean by single line? Perhaps you are trying to thicken your line:
debug.Draw(poly, colours[colIndex], 2);
Instead of:
debug.Draw(poly, colours[colIndex], 1);
Or whatever thickness of line you want.
Here's the emgucv Draw Method for polygon.
Perhaps look at this link too.
The first argument of the ApproxPoly() function is exactly what you are looking for. Just fiddle with that and you will get exactly what you want.
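For example, compared with the `Perimeter * 0.05` used in the question, a smaller fraction follows the contour more closely:

// Larger accuracy values merge short segments into longer straight lines;
// smaller values keep the polygon closer to the raw contour.
Contour<Point> rough = contours.ApproxPoly(contours.Perimeter * 0.05, contours.Storage);
Contour<Point> tight = contours.ApproxPoly(contours.Perimeter * 0.01, contours.Storage);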
Is there any way to make this go faster? Right now it takes about 6 seconds on a 1024x768 source image with a template of around 50x50. This is using AForge; if anyone knows other, faster and fairly simple ways, please post them.
The task I'm trying to do is to find a smaller image within a screenshot, and preferably fast: my limit is 1 second. The image I'm looking for is a simple red rectangle, and the screenshot is more complex.
System.Drawing.Bitmap sourceImage = (Bitmap)Bitmap.FromFile(@"C:\SavedBMPs\1.jpg");
System.Drawing.Bitmap template = (Bitmap)Bitmap.FromFile(@"C:\SavedBMPs\2.jpg");
// create template matching algorithm's instance
// (set similarity threshold to 92.1%)
ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0.921f);

// find all matchings with the similarity specified above
TemplateMatch[] matchings = tm.ProcessImage(sourceImage, template);

// highlight found matchings
BitmapData data = sourceImage.LockBits(
    new Rectangle(0, 0, sourceImage.Width, sourceImage.Height),
    ImageLockMode.ReadWrite, sourceImage.PixelFormat);
foreach (TemplateMatch m in matchings)
{
    Drawing.Rectangle(data, m.Rectangle, Color.White);
    MessageBox.Show(m.Rectangle.Location.ToString());
    // do something else with the matching
}
sourceImage.UnlockBits(data);
http://opencv.willowgarage.com/wiki/FastMatchTemplate - here you can find an interesting idea for speeding up template matching in two steps: first match the downsampled images, and once a match is found, match the originals within a smaller search region.
There is also an OpenCV implementation of template matching in the matchTemplate function. This function has been ported to the GPU, which can give a significant speed-up.
See the following
http://opencv.willowgarage.com/documentation/cpp/object_detection.html - matchTemplate function.
http://opencv.willowgarage.com/wiki/OpenCV_GPU - about OpenCV functionality ported to GPU.
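A rough AForge sketch of the two-step idea (the scale factor, threshold, and padding are guesses you would tune):

// Step 1: match on images downscaled by 4x.
int factor = 4;
Bitmap smallSource = new ResizeNearestNeighbor(
    sourceImage.Width / factor, sourceImage.Height / factor).Apply(sourceImage);
Bitmap smallTemplate = new ResizeNearestNeighbor(
    template.Width / factor, template.Height / factor).Apply(template);

ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0.85f);
TemplateMatch[] rough = tm.ProcessImage(smallSource, smallTemplate);

if (rough.Length > 0)
{
    // Step 2: scale the hit back up, pad it a little, and re-match at
    // full resolution only inside that zone.
    Rectangle r = rough[0].Rectangle;
    Rectangle zone = new Rectangle(
        r.X * factor - factor, r.Y * factor - factor,
        r.Width * factor + 2 * factor, r.Height * factor + 2 * factor);
    zone.Intersect(new Rectangle(0, 0, sourceImage.Width, sourceImage.Height));

    TemplateMatch[] exact = tm.ProcessImage(sourceImage, template, zone);
}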
I use this code in C# to decode (not detect) a QR code and it works:
LuminanceSource ls = new RGBLuminanceSource(image, image.Width, image.Height);
Result result = new QRCodeReader().decode(new BinaryBitmap(new HybridBinarizer(ls)));
Now I would like to detect a QR code in a more complex image containing a lot of other stuff such as images and text. I'm not able to understand how to accomplish this because I cannot find any sample, and transforming a Bitmap (C#) into a BitMatrix for the Detector (zxing) is not so direct.
Does anyone have a piece of code to give me?
Thanks a lot.
UPDATE
I tried this code but I get a ReaderException:
The code:
LuminanceSource ls = new RGBLuminanceSource(bitmap, bitmap.Width, bitmap.Height);
QRCodeMultiReader multiReader = new QRCodeMultiReader();
Result[] rs = multiReader.decodeMultiple(new BinaryBitmap(new HybridBinarizer(ls)), hints);
return rs[0].Text;
The exception
com.google.zxing.ReaderException:
in com.google.zxing.qrcode.detector.FinderPatternFinder.selectBestPatterns()
in com.google.zxing.qrcode.detector.FinderPatternFinder.find(Hashtable hints)
in com.google.zxing.qrcode.detector.Detector.detect(Hashtable hints)
in com.google.zxing.qrcode.QRCodeReader.decode(BinaryBitmap image, Hashtable hints)
in com.google.zxing.qrcode.QRCodeReader.decode(BinaryBitmap image)
in ...Logic.BarCodeManager.QRCodeReader(Bitmap bitmap) in
UPDATE 02/12/2011
I have just tried to scan the printed QR code (the one used with the piece of code at the top of the post) with an app on my iPhone, and it works well! So the problem is surely in the detection/decode phase.
QR codes always have three squares in the top-left, top-right, and bottom-left corners. Knowing this, you should be able to search for that square pattern within the pixel data of the image you are parsing and figure out the top-left corner, width, and height of the QR code with a bit of simple parsing logic.
Though it's old, I still want to post this in case someone needs it.
Image noise makes it difficult for zxing to detect QR codes; the results are much better if the images are noise-free. I use a simple method to reduce the noise of scanned images: shrinking the image. The shrink factor may vary with the noise level; I found that a factor of 3 works fine in my case.
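The shrinking itself can be done with plain System.Drawing before handing the bitmap to zxing (a sketch; the factor of 3 is what worked for me):

private Bitmap Shrink(Bitmap src, int factor)
{
    // The Bitmap(Image, Size) constructor rescales while copying.
    return new Bitmap(src, new Size(src.Width / factor, src.Height / factor));
}

// usage: string text = Qrreader(Shrink(scanned, 3));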
private string Qrreader(Bitmap x)
{
    // TryHarder makes zxing search in the whole picture
    BarcodeReader reader = new BarcodeReader { AutoRotate = true, TryHarder = true };
    Result result = reader.Decode(x);
    // Decode returns null when no QR code is found
    return result == null ? null : result.Text.Trim();
}
Works for me! TryHarder makes it search in the whole picture.
I am simulating a thermal camera effect. I have a webcam at a party pointed at people in front of a wall. I went with a background subtraction technique, and using the AForge BlobCounter I get the blobs that I want to fill with gradient coloring. My problem: GetBlobsEdgePoints doesn't return a sorted point cloud, so I can't use it with, for example, PathGradientBrush from GDI+ to simply draw gradients.
I'm looking for a simple, fast algorithm to trace blobs into a path (it can make mistakes).
Or a way to track the blobs returned by the BlobCounter.
Or a suggestion for some other way to simulate the effect.
I took a quick look at Emgu.CV.VideoSurveillance but didn't get it to work (the examples are for v1.5 and I went with v2+), and I gave up because people on forums say it's slow.
Thanks for reading.
sample code of aforge background removal
Bitmap bmp =(Bitmap)e.VideoFrame.Clone();
if (backGroundFrame == null)
{
backGroundFrame = (Bitmap)e.VideoFrame.Clone();
difference.OverlayImage = backGroundFrame;
}
difference.ApplyInPlace(bmp);
bmp = grayscale.Apply(bmp);
threshold.ApplyInPlace(bmp);
Well, could you post a sample image of the result of GetBlobsEdgePoints? Then it might be easier to understand what kinds of image processing algorithms are needed.
1) You could try a greedy algorithm: first pick a point at random and mark it as "taken", then pick the closest point not marked as "taken", and so on.
You need to find suitable termination conditions. If there can be several disjoint paths, you need to define how far apart points must be to belong to disjoint paths.
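A minimal sketch of that greedy trace over AForge's edge points (O(n²), which is usually fine for blob outlines; a distance cutoff in the inner loop would be one way to split disjoint paths):

static List<IntPoint> TracePath(List<IntPoint> points)
{
    var remaining = new List<IntPoint>(points);
    var path = new List<IntPoint>();

    // Start anywhere, then always hop to the nearest point not yet "taken".
    IntPoint current = remaining[0];
    remaining.RemoveAt(0);
    path.Add(current);

    while (remaining.Count > 0)
    {
        int best = 0;
        int bestDist = int.MaxValue;
        for (int i = 0; i < remaining.Count; i++)
        {
            int dx = remaining[i].X - current.X;
            int dy = remaining[i].Y - current.Y;
            int d = dx * dx + dy * dy;   // squared distance is enough for comparison
            if (d < bestDist) { bestDist = d; best = i; }
        }
        current = remaining[best];
        remaining.RemoveAt(best);
        path.Add(current);
    }
    return path;
}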
3) If you have a static background, you can create a difference between two time-shifted images, say 200 ms apart. Just do a pixel-by-pixel difference and use abs(diff) as the index into your heat color map. That gives more of an edge-glow effect on moving objects.
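A sketch of 3) with the AForge filters already used in the question (`frameEarlier` is assumed to be a frame grabbed ~200 ms before `frameNow`):

// |now - earlier| per pixel; moving edges light up, static background goes black.
Bitmap glow = new Difference(frameEarlier).Apply(frameNow);

// Reduce to one intensity channel and feed it through a heat palette
// (e.g. the color remapping step described further down).
glow = new Grayscale(0.2125, 0.7154, 0.0721).Apply(glow);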
This is the direction I'm going to take (it looks best for now):
Define a set of points on the blob by my own logic (skin-colored blobs should be warmer, etc.)
Draw gradients around those points:
GraphicsPath gp = new GraphicsPath();
var rect = new Rectangle(CircumferencePoint.X - radius, CircumferencePoint.Y - radius,
                         radius * 2, radius * 2);
gp.AddEllipse(rect);
GradientShaper = new PathGradientBrush(gp);
GradientShaper.CenterColor = Color.White;
GradientShaper.SurroundColors = surroundingColors;
drawBmp.FillPath(GradientShaper, gp);
Mask those gradients with the blob shape:
blobCounter.ExtractBlobsImage(bmp, blob, true);
mask.OverlayImage = blob.Image;
mask.ApplyInPlace(rslt);
Colorize with color remapping (sketched below):
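AForge's ColorRemapping filter fits here. A sketch that builds 256-entry lookup tables turning gray intensity into a black-red-yellow-white heat ramp (the exact ramp values are a matter of taste, and the input must be a 24bpp image, e.g. after a GrayscaleToRGB pass):

byte[] rMap = new byte[256], gMap = new byte[256], bMap = new byte[256];
for (int i = 0; i < 256; i++)
{
    rMap[i] = (byte)Math.Min(255, i * 3);                       // red ramps up first
    gMap[i] = (byte)Math.Min(255, Math.Max(0, (i - 85) * 3));   // then green
    bMap[i] = (byte)Math.Max(0, (i - 170) * 3);                 // blue last -> white hot
}
new ColorRemapping(rMap, gMap, bMap).ApplyInPlace(rslt);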
Thanks for the help @Albin.