I am using EmguCV 2.3.0.1416 from a simple console application (.NET 4.0, C#) and I have a question about Canny edge detection. Given the following code:
var colours = new[]
{
    new Bgr(Color.YellowGreen),
    new Bgr(Color.Turquoise),
    new Bgr(Color.Blue),
    new Bgr(Color.DeepPink)
};

// Convert to grayscale, remove noise and get the canny
using (var image = new Image<Bgr, byte>(fileName)
    .Convert<Gray, byte>()
    .PyrDown()
    .PyrUp()
    .Canny(new Gray(180), new Gray(90)))
{
    // Save the canny out to a file and then get each contour within
    // the canny and get the polygon for it, colour each a different
    // colour from a selection so we can easily see if they join up
    image.Save(cannyFileName);
    var contours = image
        .FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                      RETR_TYPE.CV_RETR_EXTERNAL);
    using (var debug = new Image<Bgr, byte>(image.Size))
    {
        int colIndex = 0;
        for (; contours != null; contours = contours.HNext)
        {
            Contour<Point> poly = contours
                .ApproxPoly(contours.Perimeter * 0.05, contours.Storage);
            debug.Draw(poly, colours[colIndex], 1);
            colIndex = (colIndex + 1) % colours.Length;
        }
        debug.Save(debugFileName);
    }
}
I get this output (this is actually just a part of the image but it shows what I am asking about):
As you can see, it has a blue line with a little bit of pink, and then a green line. The real object has a single solid edge here, so I want this to be a single line so that I can be sure it is the edge of what I am looking at.
The original image looks like this (I have zoomed it but you can see it has a very distinctive edge that I was expecting to be able to find easily).
If I look at just the canny I can see the gap there, so I tried adjusting the parameters for creating the canny (the threshold and linking threshold), but they made no difference.
I also dilated and then eroded the canny (using the same value, 10 incidentally, for the iterations parameter) and that seemed to do the trick, but could I lose accuracy by doing this? It just feels a bit wrong somehow.
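For reference, the dilate/erode sequence I tried looks like this (a minimal sketch; the fluent `Dilate`/`Erode` calls are from the Emgu 2.x `Image` API):

```csharp
// Close small gaps in the canny: dilate grows the edges so that nearby
// segments merge, and erode then shrinks them back to roughly their
// original width. Fewer iterations (e.g. 2-3) disturb the geometry less
// than 10 does, so it is worth trying the smallest value that closes the gap.
Image<Gray, byte> closed = image.Dilate(10).Erode(10);
```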
So, how should I ensure that I get a single line in this instance?
Did you try smoothing the image before running Canny?
I found this link, which may be useful for you:
http://www.indiana.edu/~dll/B657/B657_lec_hough.pdf
What exactly do you mean by single line? Perhaps you are trying to thicken your line:
debug.Draw(poly, colours[colIndex], 2);
Instead of:
debug.Draw(poly, colours[colIndex], 1);
Or whatever thickness of line you want.
Here's the EmguCV Draw method for polygons.
Perhaps look at this link too.
The first argument to ApproxPoly() is exactly what you are looking for. Just fiddle with that and you will get exactly what you want.
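For example, tightening the epsilon from 5% to 1% of the perimeter keeps much more of the edge detail (a sketch against the loop in the question):

```csharp
// Epsilon is the maximum deviation (in pixels) allowed between the contour
// and its polygonal approximation; a smaller fraction of the perimeter
// follows the real edge more closely at the cost of more vertices.
Contour<Point> poly = contours.ApproxPoly(contours.Perimeter * 0.01,
                                          contours.Storage);
```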
Related
I'm trying to inspect the connector pins in a charger. The job is to inspect these parameters:
Both pins are present
Both are of the defined height
Both are straight
I used template matching in C# & EMGU to extract a template by creating an ROI; after matching, it checks whether both pins are present, using the following code for each pin:
Image<Bgr, Byte> templateImage = pintofind;
Image<Bgr, Byte> sourceImage = new Image<Bgr, Byte>(GrabImage.Bitmap);
using (Image<Gray, float> imgMatch = sourceImage.MatchTemplate(templateImage,
           Emgu.CV.CvEnum.TM_TYPE.CV_TM_CCOEFF_NORMED))
{
    Point[] MAX_Loc, Min_Loc;
    double[] min, max;
    imgMatch.MinMax(out min, out max, out Min_Loc, out MAX_Loc);
    using (Image<Gray, double> RG_Image = imgMatch.Convert<Gray, double>().Copy())
    {
        if (max[0] > 0.75)
        {
            Rectangle match = new Rectangle(MAX_Loc[0], templateImage.Size);
            sourceImage.Draw(match, new Bgr(Color.LimeGreen), 2);
            lblresulttext.Text = "OK";
            lbindgood.BackColor = Color.LimeGreen;
        }
        else
        {
            Rectangle match = new Rectangle(MAX_Loc[0], templateImage.Size);
            sourceImage.Draw(match, new Bgr(Color.Red), 2);
            lblresulttext.Text = "NG";
            lbindbad.BackColor = Color.Red;
        }
    }
    ibresult.Image = sourceImage;
}
This is the result I get:
It works well for checking the presence of the pins, but now I need to check whether both are of the same height and whether both are straight, like in the image below:
Please help.
I'm not sure MatchTemplate is the correct approach if you want to identify the specific failure. It might be usable if you are guaranteed a consistent rotation, and only need to check if the actual image is the same as the template. But if you need to measure the length or identify specific failures you might need a template for each kind of failure, and that might not be feasible.
I would approach this problem by thresholding the image to separate the background from the foreground. Presumably you have control over the lighting, which makes this fairly simple. You should then be able to use the contour features to find the position and rotation of the charger, and compare it to a reference contour (see MatchShapes). You will probably need some way to isolate the pins if they are the important part, for example by finding the rotated bounding box of the charger and ignoring everything except the top part that contains the pins.
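A minimal sketch of that pipeline, assuming the Emgu 2.x contour API (the 128 threshold and the 0.1 match tolerance are placeholders to tune against real images):

```csharp
// Threshold the scene, take the largest contour as the charger, and compare
// its shape to a reference contour. MatchShapes returns 0 for identical
// shapes; anything under a small tolerance counts as a match here.
bool ShapeMatchesReference(Image<Gray, byte> scene, Contour<Point> reference)
{
    using (Image<Gray, byte> binary =
               scene.ThresholdBinary(new Gray(128), new Gray(255)))
    {
        Contour<Point> largest = null;
        for (Contour<Point> c = binary.FindContours(); c != null; c = c.HNext)
            if (largest == null || c.Area > largest.Area)
                largest = c;

        if (largest == null)
            return false;

        double distance = largest.MatchShapes(reference,
            CONTOURS_MATCH_TYPE.CV_CONTOURS_MATCH_I1);
        return distance < 0.1; // tolerance is an assumption
    }
}
```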
If you can correct for the rotation of the charger, I think that using two templates is better than using one.
The first template might be the top part of the pin and the second template might be the bottom part of the pin. After detecting the 2 pairs of top and bottom part of the pins, you can measure the height and direction of the pins.
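A sketch of that two-template idea, reusing the MatchTemplate/MinMax pattern from the question (the tip/base templates and the 0.75 acceptance score are assumptions):

```csharp
// Match a "tip" and a "base" template separately, then derive the pin
// height from the vertical distance between the two best match locations.
double? MeasurePinHeight(Image<Bgr, byte> scene,
                         Image<Bgr, byte> tipTemplate,
                         Image<Bgr, byte> baseTemplate)
{
    Point? tip = BestMatch(scene, tipTemplate);
    Point? root = BestMatch(scene, baseTemplate);
    if (tip == null || root == null)
        return null;                          // one of the parts not found
    return Math.Abs(root.Value.Y - tip.Value.Y); // height in pixels
}

Point? BestMatch(Image<Bgr, byte> scene, Image<Bgr, byte> template)
{
    using (Image<Gray, float> result = scene.MatchTemplate(template,
               Emgu.CV.CvEnum.TM_TYPE.CV_TM_CCOEFF_NORMED))
    {
        double[] min, max;
        Point[] minLoc, maxLoc;
        result.MinMax(out min, out max, out minLoc, out maxLoc);
        return max[0] > 0.75 ? maxLoc[0] : (Point?)null;
    }
}
```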
I would like to be able to recognize the position (center) and the angle of some small components with OpenCV in C#. To achieve that, I am grabbing pictures from a webcam and trying to process them with the Canny algorithm. Unfortunately, the results are not as good as expected. Sometimes they are OK, sometimes they are not.
I have attached an example image from the cam and the corresponding output of OpenCV.
I hope that someone could give me hints or maybe some code snippets, how to achieve my desired results. Is this something that is usually done with AI?
Example images:
Input:
Output 1:
Output 2:
Expected:
Thanks.
Actual code:
Mat src = BitmapConverter.ToMat(lastFrame);
Mat dst = new Mat();
Mat dst2 = new Mat();
Cv2.Canny(src, dst, hScrollBar1.Value, hScrollBar2.Value);

// Find contours
OpenCvSharp.Point[][] contours;      // vector<vector<Point>> contours;
HierarchyIndex[] hierarchyIndexes;   // vector<Vec4i> hierarchy;
Cv2.FindContours(dst, out contours, out hierarchyIndexes,
                 RetrievalModes.External,
                 ContourApproximationModes.ApproxTC89L1);

foreach (OpenCvSharp.Point[] element in contours)
{
    var biggestContourRect = Cv2.BoundingRect(element);
    Cv2.Rectangle(dst,
        new OpenCvSharp.Point(biggestContourRect.X, biggestContourRect.Y),
        new OpenCvSharp.Point(biggestContourRect.X + biggestContourRect.Width,
                              biggestContourRect.Y + biggestContourRect.Height),
        new Scalar(255, 0, 0), 3);
}

using (new Window("dst image", dst)) ;
using (new Window("src image", src)) ;
If you already have an ROI (the box) and you just want to compute its actual orientation, you could use the contour inside the right box and compute its moments. A tutorial on how to do this is here (sorry, C++ only).
Once you have the moments you can compute the orientation easily. To do this follow the solution here.
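Putting both steps together, a sketch with OpenCvSharp (the question's library): the centroid comes from the raw moments and the major-axis angle from the central moments:

```csharp
// Centroid from the raw moments (m10/m00, m01/m00); orientation of the
// major axis from the central moments: theta = 0.5 * atan2(2*mu11, mu20 - mu02).
static double Orientation(OpenCvSharp.Point[] contour, out Point2d center)
{
    Moments m = Cv2.Moments(contour);
    center = new Point2d(m.M10 / m.M00, m.M01 / m.M00);
    return 0.5 * Math.Atan2(2.0 * m.Mu11, m.Mu20 - m.Mu02); // radians
}
```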
If you have trouble figuring out the right box itself, you are actually halfway there with the canny boxes. You could then further try:
Equalize the source image:
Posterize next (to 2 levels):
Threshold (255):
Then you can use all the canny boxes you found in the centre as masks to get the right contour in the thresholded image. You can then find the biggest contour there and compute its orientation with image moments. Hope this helps!
My goal is to detect the different regions within a simple drawing constructed of various lines. Please click the following link to view a visual example of my goal for clarification. I am of course able to get the position of the drawn lines, but since one line can cross multiple 'regions' I don't think this information alone will be sufficient.
Any ideas, suggestions or points to other websites are welcome. I am using C# in combination with WPF - I am not certain which search words might lead to an answer to this problem. I did come across this shape checker article from AForge, but it seems to focus on detecting shapes that are already there, not so much on regions that still have to be 'discovered'. As a side note, I hope to find a solution that works not only with rectangles but also with other types of shapes.
Thank you very much in advance.
Update:
foreach (Line canvasObject in DrawingCanvas.Children.OfType<Line>())
{
    LineGeometry lineGeometry1 = new LineGeometry();
    lineGeometry1.StartPoint = new Point(canvasObject.X1, canvasObject.Y1);
    lineGeometry1.EndPoint = new Point(canvasObject.X2, canvasObject.Y2);
    if (canvasObject.X1 != canvasObject.X2)
    {
        foreach (Line canvasObject2 in DrawingCanvas.Children.OfType<Line>())
        {
            if (canvasObject.X1 == canvasObject2.X1 && canvasObject.X2 == canvasObject2.X2 &&
                canvasObject2.Y1 == canvasObject2.Y2 && canvasObject.Y2 == canvasObject2.Y2)
            {
                // prevent the system from 'colliding' the same two lines
                return;
            }
            LineGeometry lineGeometry2 = new LineGeometry
            {
                StartPoint = new Point(canvasObject2.X1, canvasObject2.Y1),
                EndPoint = new Point(canvasObject2.X2, canvasObject2.Y2)
            };
            if (lineGeometry1.FillContainsWithDetail(lineGeometry2).ToString() != "Empty")
            {
                // collision detected
                Rectangle rectangle = new Rectangle
                {
                    Width = Math.Abs(canvasObject.X2 - canvasObject.X1),
                    Height = 20,
                    Fill = Brushes.Red
                };
                //rectangle.Height = Math.Abs(canvasObject.Y2 - canvasObject.Y1);
                DrawingCanvas2.Children.Add(rectangle);
                Canvas.SetTop(rectangle, canvasObject.Y1);
                Canvas.SetLeft(rectangle, canvasObject.X1);
            }
        }
    }
}
I have experimented with the code above to give you an impression of how I tried to tackle this problem. Initially I thought I had found a partial solution by checking for collisions between lines. Unfortunately, each line simply collided 'with itself'. After I added a simple if check (see above) this no longer occurs, but now I don't get any collisions at all, so I will probably need a new technique.
Update 2:
After some more digging and searching the internet for solutions, I have a new potential solution in mind. Hopefully this can also be of use to anyone looking for answers in the future. Using a flood-fill algorithm I am able to 'fill' each region with a specific color, much like the paint bucket tool in an image editing application. Summarized, this is done by taking a 'screenshot' of the Canvas element, starting at a certain pixel and expanding over and over until a pixel with a different color is found (these would be the lines). It works pretty well and is able to return an image with the various regions. However, my current problem is accessing these regions as 'objects' in C#/WPF. I would like to draw the regions myself (using a polygon object or something similar?), making it possible to use the objects for further calculations or interactions.
I have tried saving the smallest and largest X and Y positions in the flood-fill algorithm after each pixel check, but this makes the algorithm very, very slow. If anyone has an idea, I would love to know. :)
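For what it's worth, tracking the extremes doesn't have to be slow: updating four integers per filled pixel is O(1), and the real cost usually comes from per-pixel `Bitmap.GetPixel` calls. A sketch of a queue-based fill over a plain array (copied once from the bitmap) that records the region's bounding box as it goes; the `int[,]` pixel representation is an assumption:

```csharp
// Queue-based flood fill that labels one region and returns its bounding
// box. Comparing/updating four ints per visited pixel adds negligible cost;
// operate on a pre-extracted buffer rather than calling GetPixel per pixel.
static Rectangle FloodFill(int[,] pixels, Point seed,
                           int boundaryColor, int fillLabel)
{
    int w = pixels.GetLength(0), h = pixels.GetLength(1);
    int minX = seed.X, maxX = seed.X, minY = seed.Y, maxY = seed.Y;
    var queue = new Queue<Point>();
    queue.Enqueue(seed);

    while (queue.Count > 0)
    {
        Point p = queue.Dequeue();
        if (p.X < 0 || p.Y < 0 || p.X >= w || p.Y >= h) continue;
        if (pixels[p.X, p.Y] == boundaryColor ||
            pixels[p.X, p.Y] == fillLabel) continue;

        pixels[p.X, p.Y] = fillLabel;
        minX = Math.Min(minX, p.X); maxX = Math.Max(maxX, p.X);
        minY = Math.Min(minY, p.Y); maxY = Math.Max(maxY, p.Y);

        queue.Enqueue(new Point(p.X + 1, p.Y));
        queue.Enqueue(new Point(p.X - 1, p.Y));
        queue.Enqueue(new Point(p.X, p.Y + 1));
        queue.Enqueue(new Point(p.X, p.Y - 1));
    }
    return Rectangle.FromLTRB(minX, minY, maxX + 1, maxY + 1);
}
```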
This question is quite difficult for me to explain, so I'll be illustrating with some images as well as text.
For a steel engraving machine I need to use .NET's normal graphics framework to create a "document" that is sent to the engraving machine - it is treated just like a normal printer. The machine in question is this one:
http://www.rolanddga.com/products/impactprinters/mpx90/features.asp
I can print a text-outline on it in C# with this:
// ALL UNITS ARE SET IN MILLIMETERS (MM)
Graphics g = <instantiated from my printer's PrintPage event>;

// The following values are set as "constants" here for the purpose of my
// question; they are normally passed as parameters
string text = "ABC";
float fontSize = 4.0F;
FontFamily fFamily = new FontFamily("Arial"); // any installed family
RectangleF r = new RectangleF(0, 30.0F, 100.0F, 40.0F);
StringFormat sfDraw = new StringFormat();
sfDraw.Alignment = StringAlignment.Center;
FontStyle fStyle = FontStyle.Regular;

using (var gpDraw = new GraphicsPath())
{
    gpDraw.AddString(text, fFamily, (int)fStyle, fontSize, r, sfDraw);
    SolidBrush brushFG = new SolidBrush(Color.Black);
    Pen pen = new Pen(brushFG, 0.01F);
    g.DrawPath(pen, gpDraw);
}
It gives an output similar to this: http://i47.tinypic.com/mruu4j.jpg
What I want now is to fill this outline. Not simply with a brush fill (as can easily be accomplished with g.FillPath(brushFG, gpDraw)).
It should instead be "filled" with smaller and smaller outlines, like shown on this image: http://i46.tinypic.com/b3kb29.png
(the different line colors are only used to make the example clearer).
As I made the example in Photoshop, I realized that what I am actually trying to do is mimic the functionality found in Photoshop's Select/Modify/Contract.
But I am at my wit's end as to how I accomplish this.
Any help? I'm not looking for a complete solution, but I am completely stuck at the moment. I've tried simple scaling, which is probably the wrong way to go (since it does not produce the right result...).
UPDATE 2012-07-16: I am now using the Clipper Library http://www.angusj.com/delphi/clipper.php which has a wonderful function called OffsetPolygons.
My test-code is shown here: http://pastie.org/4264890
It works fine with "single" polygons - e.g. a "C" since it only consists of a single polygon. An "O" consist of two polygons - an inside and outside. Likewise with "A". And these give me some trouble. See these images:
C: http://i46.tinypic.com/ap304.png
O: http://i45.tinypic.com/35k60xg.jpg
A: http://i50.tinypic.com/1zyaibm.png
B: http://i49.tinypic.com/5lbb40.png
You get the picture (heh heh... ;-)
I think the problem is, that I extract everything from GraphicsPath as a single polygon, when there are actually 2 (in the case of A and O), and 3 in the case of a B.
Clipper's OffsetPolygons actually takes an array of polygons, so I guess it is able to do this right. But I don't know how to extract my paths from GraphicsPath as separate polygons.
UPDATE 2012-07-16 (later in the day):
Okay I've actually managed to pull it off now, and will explain it in an answer, in the hope that it might help others with similar problems.
And a big thank you to everybody who helped along the way! Only reason that I accept my own answer is so that others might benefit from this question with a full-baked solution.
Take a look at An algorithm for inflating/deflating (offsetting, buffering) polygons -- the questioner there is actually asking about the reverse operation, but the answers there apply to your case as well. One of them (the highest rated) has a pointer to an open source library that has a C# version.
The usual name for the operation you describe is "polygon offsetting", by the way.
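With that library, contracting a polygon set is essentially a one-liner; a sketch assuming the Clipper 4.x C# API, where coordinates are pre-scaled integers:

```csharp
// Offset a set of polygons by 'delta' units: a negative delta contracts
// (Photoshop's Contract), a positive delta expands. Clipper works on
// integer coordinates, so points are typically scaled up beforehand.
static List<List<IntPoint>> Contract(List<List<IntPoint>> polygons, double delta)
{
    return Clipper.OffsetPolygons(polygons, -delta);
}
```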
Using the Clipper library was only half of the battle.
I extracted all points from GraphicsPath into a single array, thus inadvertently creating a misshapen polygon from what were actually 2 separate polygons (in the case of "A").
Instead I needed to examine the PathTypes array property on GraphicsPath. Every time a point has a path type of 0 it marks the beginning of a new polygon. So the extraction method should use this and return an array of polygons instead of just a single polygon:
private ClipperPolygons graphicsPathToPolygons(GraphicsPath gp)
{
    ClipperPolygons polyList = new ClipperPolygons();
    ClipperPolygon poly = null;

    for (int i = 0; i < gp.PointCount; i++)
    {
        PointF p = gp.PathPoints[i];
        byte pType = gp.PathTypes[i];

        // A path type of 0 marks the start of a new figure (polygon)
        if (pType == 0)
        {
            if (poly != null)
                polyList.Add(poly);
            poly = new ClipperPolygon();
        }

        IntPoint ip = new IntPoint();
        ip.X = (int)(p.X * pointScale);
        ip.Y = (int)(p.Y * pointScale);
        poly.Add(ip);
    }

    if (poly != null)
        polyList.Add(poly);

    return polyList;
}
Clipper's OffsetPolygons actually WANTS a list of polygons, so this ought to have been obvious to me earlier.
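With the polygons separated, the nested "fill" outlines from the original question fall out of offsetting inward repeatedly until nothing is left; a sketch using the same ClipperPolygons alias and assuming the Clipper 4.x OffsetPolygons signature:

```csharp
// Contract the glyph's polygons by one pen width per pass and collect each
// generation of outlines; the loop stops when the offset collapses to nothing.
private List<ClipperPolygons> nestedOutlines(ClipperPolygons glyph, double step)
{
    var generations = new List<ClipperPolygons>();
    ClipperPolygons current = glyph;
    while (current.Count > 0)
    {
        generations.Add(current);
        current = Clipper.OffsetPolygons(current, -step); // shrink inward
    }
    return generations;
}
```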
The entire code can be seen here: http://pastie.org/4265265
And if you're curious, I've zipped the entire test-project here to open in Visual Studio and compile.
http://gehling.dk/wp-content/uploads/2012/07/TestClipper.zip
It has not been optimized for speed in any way.
/ Carsten
I am simulating a thermal camera effect. I have a webcam at a party pointed at people in front of a wall. I went with a background subtraction technique, and using the AForge blob counter I get blobs that I want to fill with gradient coloring. My problem is that GetBlobsEdgePoints doesn't return a sorted point cloud, so I can't use it with, for example, PathGradientBrush from GDI+ to simply draw gradients. I'm looking for:
a simple, fast algorithm to trace blobs into a path (it can make mistakes),
a way to track the blobs received by the blob counter, or
a suggestion for some other way to simulate the effect.
I took a quick look at Emgu.CV.VideoSurveillance but didn't get it to work (the examples are for v1.5 and I went with v2+), and I gave up because people on the forums say it's slow.
Thanks for reading.
Sample code of AForge background removal:
Bitmap bmp = (Bitmap)e.VideoFrame.Clone();
if (backGroundFrame == null)
{
    backGroundFrame = (Bitmap)e.VideoFrame.Clone();
    difference.OverlayImage = backGroundFrame;
}
difference.ApplyInPlace(bmp);
bmp = grayscale.Apply(bmp);
threshold.ApplyInPlace(bmp);
Well, could you post a sample image of the result of GetBlobsEdgePoints? Then it might be easier to understand what types of image processing algorithms are needed.
1) You may try a greedy algorithm: first pick a point at random and mark it as "taken", then pick the closest point not marked as "taken", and so on.
You need to find suitable termination conditions. If there can be several disjoint paths, you need a definition of how far apart points must be to belong to disjoint paths.
3) If you have a static background, you can create a difference between two time-shifted images, e.g. 200 ms apart. Just do a pixel-by-pixel difference and use abs(diff) as an index into your heat colour map. That will give more of an edge-glow effect on moving objects.
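Suggestion 1) above can be sketched like this: a greedy nearest-neighbour ordering that starts a new path whenever the jump exceeds a threshold (`maxJump` is the disjoint-path cutoff you would have to define):

```csharp
// Greedy nearest-neighbour ordering of an unsorted edge-point cloud:
// start anywhere, repeatedly jump to the closest unvisited point. If the
// closest point is further than maxJump, begin a new disjoint path.
static List<List<Point>> TracePaths(List<Point> cloud, double maxJump)
{
    var remaining = new List<Point>(cloud);
    var paths = new List<List<Point>>();
    if (remaining.Count == 0) return paths;

    var path = new List<Point> { remaining[0] };
    remaining.RemoveAt(0);

    while (remaining.Count > 0)
    {
        Point last = path[path.Count - 1];
        int best = 0;
        double bestDist = double.MaxValue;
        for (int i = 0; i < remaining.Count; i++)
        {
            double dx = remaining[i].X - last.X, dy = remaining[i].Y - last.Y;
            double d = dx * dx + dy * dy;   // squared distance is enough
            if (d < bestDist) { bestDist = d; best = i; }
        }

        if (Math.Sqrt(bestDist) > maxJump)  // too far: start a new path
        {
            paths.Add(path);
            path = new List<Point>();
        }
        path.Add(remaining[best]);
        remaining.RemoveAt(best);
    }
    paths.Add(path);
    return paths;
}
```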
This is the direction I'm going to take (it looks best for now):
Define a set of points on the blob using my own logic (skin-coloured blobs should be warmer, etc.)
Draw gradients around those points:
GraphicsPath gp = new GraphicsPath();
var rect = new Rectangle(CircumferencePoint.X - radius, CircumferencePoint.Y - radius,
                         radius * 2, radius * 2);
gp.AddEllipse(rect);
GradientShaper = new PathGradientBrush(gp);
GradientShaper.CenterColor = Color.White;
GradientShaper.SurroundColors = surroundingColors;
drawBmp.FillPath(GradientShaper, gp);
Mask those gradients with the blob shape:
blobCounter.ExtractBlobsImage(bmp,blob,true);
mask.OverlayImage = blob.Image;
mask.ApplyInPlace(rslt);
Colorize with color remapping.
Thanks for the help, @Albin.