I have worked with OpenCV in Python before, but I am having a hard time using OpenCV with Unity.
I have trained data for specific points on the face. I can find the landmark points and show them on the Unity WebCamTexture, but I want to draw contours on landmark points that I choose. First I need to convert the landmark points to convex hull points so I can draw a contour around the existing points. How can I do this conversion?
I tried
List<List<Vector2>> landmarkPoints = new List<List<Vector2>>();
OpenCVForUnityUtils.ConvertVector2ListToArray(landmarkPoints);
But the landmark points are not converted. I need to convert the landmark points to hull points.
Imgproc.drawContours (rgbaMat, hullPoints, -1, new Scalar (0, 255, 0), 2);
Could you help me, please?
I finally found this solution. If you have the same problem, this answer could help you.
// Detect face landmark points.
OpenCVForUnityUtils.SetImage(faceLandmarkDetector, rgbaMat);
for (int i = 0; i < trackedRects.Count; i++)
{
    List<Vector2> points = faceLandmarkDetector.DetectLandmark(trackedRects[i]); // rect of the i-th tracked face

    // Draw the contour: convert the landmark points to an OpenCV point list,
    // wrap them in a MatOfPoint, and pass that to drawContours.
    List<Point> pointList = OpenCVForUnityUtils.ConvertVector2ListToPointList(points);
    MatOfPoint hullPointMat = new MatOfPoint();
    hullPointMat.fromList(pointList);
    List<MatOfPoint> hullPoints = new List<MatOfPoint>();
    hullPoints.Add(hullPointMat);
    Imgproc.drawContours(rgbaMat, hullPoints, -1, new Scalar(150, 100, 5, 255), -1);
}
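Note this draws the raw landmark outline. If you specifically want the convex hull of the landmark points, as asked above, a minimal sketch using the OpenCV convexHull binding could look like this, placed inside the same loop after DetectLandmark (variable names are illustrative, and it assumes the same OpenCVForUnity types as the code above):

MatOfPoint pointMat = new MatOfPoint();
pointMat.fromList(OpenCVForUnityUtils.ConvertVector2ListToPointList(points));

// convexHull returns the indices of the points that form the hull.
MatOfInt hullIndices = new MatOfInt();
Imgproc.convexHull(pointMat, hullIndices);

// Collect the hull vertices referenced by those indices.
List<Point> allPoints = pointMat.toList();
List<Point> hullVertices = new List<Point>();
foreach (int idx in hullIndices.toList())
    hullVertices.Add(allPoints[idx]);

MatOfPoint hullMat = new MatOfPoint();
hullMat.fromList(hullVertices);
Imgproc.drawContours(rgbaMat, new List<MatOfPoint> { hullMat }, -1, new Scalar(0, 255, 0, 255), 2);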
I'm quite new to the Ocean Framework. I have an issue with copying a SeismicCube object to a different size: I need to resize the K index of the cube for time/depth resampling. All I know is how to clone a cube with exactly the same properties, something like this:
Template template = source.Template;
clone = collection.CreateSeismicCube(source, template);
where source is the original cube and clone is the result. Is it possible to resize clone to a different size, particularly the size of index K (the trace length)? I've explored the overloads of CreateSeismicCube but still can't understand how to fill in the correct parameters. Do you have a solution for this issue? Thanks in advance.
When you create a seismic cube using the overload that clones from another seismic cube you do not have the ability to resize it in any direction (I, J, or K). If you desire a different K dimension for your new cube, then you have to create it providing the long list of arguments that includes the vectors describing its rotation and spacing. You can generate the vectors from the original cube using the samples nearest the origin sample (0,0,0) of the original seismic cube.
Suppose you have locations in the cube expressed by their I, J, K indices. Since the K vector is easy to generate, needing only the sample rate, I'll focus on the I and J vectors here.
First, get the positions at the origin and at two neighboring traces.
Point3 I0J0 = inputCube.PositionAtIndex( new IndexDouble3( 0, 0, 0 ) );
Point3 I1J0 = inputCube.PositionAtIndex( new IndexDouble3( 1, 0, 0 ) );
Point3 I0J1 = inputCube.PositionAtIndex( new IndexDouble3( 0, 1, 0 ) );
Now build segments in the I and J directions and use them to create the vectors.
Vector3 iVector = new Vector3( new Segment3( I0J0, I1J0 ) );
Vector3 jVector = new Vector3( new Segment3( I0J0, I0J1 ) );
Now create the K vector from the input cube sampling. Note that you have to negate the value.
Vector3 kVector = new Vector3( 0, 0, -inputCube.SampleSpacingIJK.Z );
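Putting the pieces together, a rough sketch might look like the following. The exact argument list depends on which CreateSeismicCube overload you pick, so treat the call below as illustrative and check the Ocean API documentation for the real signature; numI, numJ, and newNumK are assumed names:

// Origin of the new cube: the position of sample (0, 0, 0) in the input cube.
Point3 origin = inputCube.PositionAtIndex(new IndexDouble3(0, 0, 0));

// Keep the original I/J extents and change only the K (trace length) extent.
int numI = inputCube.NumSamplesIJK.I;
int numJ = inputCube.NumSamplesIJK.J;
int newNumK = 2001; // illustrative: the resampled trace length you want

// Illustrative call: supply the extents plus the geometry (origin and the
// three vectors built above) to the long CreateSeismicCube overload.
clone = collection.CreateSeismicCube(numI, numJ, newNumK,
    origin, iVector, jVector, kVector, source.Template);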
I am testing to determine whether two polygons overlap. I have developed a first version that does a simple point-in-polygon test (Fig 1). However, I am looking to revamp that method to deal with situations where no vertices of polygon A are in polygon B but their line segments overlap (Fig 2).
Any help getting started would be greatly appreciated.
Here is an example using Region:
GraphicsPath grp = new GraphicsPath();
// Create an open figure from two edges...
grp.AddLine(10, 10, 10, 50); // edge a of the polygon
grp.AddLine(10, 50, 50, 50); // edge b of the polygon
grp.CloseFigure(); // ...then close it to form the polygon
// Create a Region from grp
Region reg = new Region(grp);
Now you can use the method Region.IsVisible to determine whether a Rectangle or Point lies inside the region.
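To test two polygons against each other rather than a point, one option is to intersect their regions and check whether anything is left. A minimal sketch, assuming both polygons are available as Point arrays (names are illustrative):

using System.Drawing;
using System.Drawing.Drawing2D;

static bool PolygonsOverlap(Point[] polyA, Point[] polyB)
{
    using (GraphicsPath pathA = new GraphicsPath())
    using (GraphicsPath pathB = new GraphicsPath())
    using (Bitmap bmp = new Bitmap(1, 1))
    using (Graphics g = Graphics.FromImage(bmp))
    {
        pathA.AddPolygon(polyA);
        pathB.AddPolygon(polyB);
        using (Region region = new Region(pathA))
        {
            region.Intersect(pathB);
            // A non-empty intersection means the polygons overlap, even when
            // no vertex of one lies inside the other.
            return !region.IsEmpty(g);
        }
    }
}

This covers the Fig 2 case because the intersection is computed on the filled shapes, not just the vertices.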
The solution:
I modified some code found here.
private Region FindIntersections(List<PolyRegion> regions)
{
    if (regions.Count < 1) return null;
    Region region = new Region();
    for (int i = 0; i < regions.Count; i++)
    {
        using (GraphicsPath path = new GraphicsPath())
        {
            path.AddPath(regions[i].Path, false);
            region.Intersect(path);
        }
    }
    return region;
}
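For illustration, the returned Region could be painted in a Paint handler like this (the handler and field names are assumptions):

private void panel_Paint(object sender, PaintEventArgs e)
{
    Region intersection = FindIntersections(myRegions); // myRegions: List<PolyRegion>
    if (intersection != null)
        e.Graphics.FillRegion(Brushes.LightGreen, intersection);
}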
The result:
I have some images like this where I need to find the central rectangle.
I'm using a variation of the EmguCV examples to find rectangles and came up with this:
using (MemStorage storage = new MemStorage()) // allocate storage for contour approximation
{
    //Contour<Point> contours = gray.FindContours()
    Contour<Point> contours = gray.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                                                Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_LIST,
                                                storage);
    for (; contours != null; contours = contours.HNext)
    {
        Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * 0.05, storage);
        //Seq<Point> currentContour = contours.GetConvexHull(Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        if (contours.Area > MinRectangleArea) // only consider contours with area greater than MinRectangleArea (e.g. 20000)
        {
            if (currentContour.Total == 4) // the contour has 4 vertices
            {
                bool isRectangle = true;
                Point[] pts = currentContour.ToArray();
                LineSegment2D[] edges = PointCollection.PolyLine(pts, true);
                for (int i = 0; i < edges.Length; i++)
                {
                    double angle = Math.Abs(edges[(i + 1) % edges.Length].GetExteriorAngleDegree(edges[i]));
                    if (angle < 90 - RectangleAngleMargin || angle > RectangleAngleMargin + 90)
                    {
                        isRectangle = false;
                        break;
                    }
                }
                if (isRectangle)
                {
                    boxList.Add(currentContour.GetMinAreaRect());
                }
            }
        }
    }
}
And the result of executing that over those images sometimes finds these two rectangles:
The orange rectangle is OK; that's what I need. But I don't want the blue one. Sometimes its four vertices are on the border of the image, and usually one of them is outside.
Changing the RETR_TYPE of the FindContours function to CV_RETR_EXTERNAL, I only get the blue rectangle, so I wonder if there is an option to NOT get the contours with external points.
The real image can actually have smaller rectangles inside the orange one (or a line can appear splitting the rectangle), so afterwards I'm selecting the bigger rectangle as the one I want, but I can't do it that way with that blue one.
Taking a look at your sample image, I would choose another approach.
Instead of classical contour detection, if you perform Hough line detection and then compute the intersections of the lines found, you will find exactly the four vertices of the rectangle you are searching for...
If you need some help coding it, let me know and I will edit my answer.
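As a minimal sketch of that idea with the Emgu CV 2.x API (all thresholds are illustrative and will need tuning for your images):

// Detect long line segments on the gray/binarized image.
LineSegment2D[] lines = gray.HoughLinesBinary(
    1,             // rho resolution in pixels
    Math.PI / 180, // theta resolution in radians
    100,           // accumulator threshold
    100,           // minimum line length
    10             // maximum gap between collinear segments
)[0];              // [0] = result for the first image channel

// Intersection of the two infinite lines through segments a and b
// (returns null when the lines are parallel).
static PointF? Intersect(LineSegment2D a, LineSegment2D b)
{
    float x1 = a.P1.X, y1 = a.P1.Y, x2 = a.P2.X, y2 = a.P2.Y;
    float x3 = b.P1.X, y3 = b.P1.Y, x4 = b.P2.X, y4 = b.P2.Y;
    float d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
    if (Math.Abs(d) < 1e-6f) return null; // parallel
    float px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d;
    float py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d;
    return new PointF(px, py);
}

Intersecting roughly horizontal lines with roughly vertical ones, and keeping only intersections that fall inside the image, should leave you with the four corner candidates.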
I am trying to extract the 3D distance in mm between two known points in a 2D image. I am using square AR markers in order to get the camera coordinates relative to the markers in the scene. The points are the corners of these markers.
An example is shown below:
The code is written in C# and I am using XNA. I am using AForge.NET for the coplanar POSIT.
The steps I take in order to work out the distance:
1. Mark corners on screen. Corners are represented in 2D vector form; the image centre is (0,0). Up is positive in the Y direction, right is positive in the X direction.
2. Use AForge.net Co-Planar POSIT algorithm to get pose of each marker:
float focalLength = 640; //Needed for POSIT
float halfCornerSize = 50; //Represents 1/2 an edge i.e. 50mm
AVector3[] modelPoints = new AVector3[]
{
    new AVector3( -halfCornerSize, 0, halfCornerSize ),
    new AVector3( halfCornerSize, 0, halfCornerSize ),
    new AVector3( halfCornerSize, 0, -halfCornerSize ),
    new AVector3( -halfCornerSize, 0, -halfCornerSize ),
};
CoplanarPosit coPosit = new CoplanarPosit(modelPoints, focalLength);
coPosit.EstimatePose(cornersToEstimate, out marker1Rot, out marker1Trans);
3. Convert to XNA rotation/translation matrix (AForge uses OpenGL matrix form):
float yaw, pitch, roll;
marker1Rot.ExtractYawPitchRoll(out yaw, out pitch, out roll);
Matrix xnaRot = Matrix.CreateFromYawPitchRoll(-yaw, -pitch, roll);
Matrix xnaTranslation = Matrix.CreateTranslation(marker1Trans.X, marker1Trans.Y, -marker1Trans.Z);
Matrix transform = xnaRot * xnaTranslation;
4. Find 3D coordinates of the corners:
//Model corner points
cornerModel = new Vector3[]
{
    new Vector3(halfCornerSize, 0, -halfCornerSize),
    new Vector3(-halfCornerSize, 0, -halfCornerSize),
    new Vector3(halfCornerSize, 0, halfCornerSize),
    new Vector3(-halfCornerSize, 0, halfCornerSize)
};
for (int i = 0; i < 4; i++) // loop added for completeness; the original snippet used i without showing it
{
    Matrix markerTransform = Matrix.CreateTranslation(cornerModel[i].X, cornerModel[i].Y, cornerModel[i].Z);
    cornerPositions3d1[i] = (markerTransform * transform).Translation;
    //DEBUG: project corner onto screen - represented by brown dots
    Vector3 t3 = viewPort.Project(markerTransform.Translation, projectionMatrix, viewMatrix, transform);
    cornersProjected1[i].X = t3.X; cornersProjected1[i].Y = t3.Y;
}
5. Look at the 3D distance between two corners on a marker; this represents 100mm. Find the scaling factor needed to convert this 3D distance to 100mm. (I actually use the average scaling factor):
for (int i = 0; i < 4; i++)
{
    // Accumulate the ratio of the known edge length (100mm) to the computed 3D edge length.
    distanceScale1 += (halfCornerSize * 2) / Vector3.Distance(cornerPositions3d1[i], cornerPositions3d1[(i + 1) % 4]);
}
distanceScale1 /= 4; // average over the four edges
6. Finally I find the 3D distance between related corners and multiply by the scaling factor to get distance in mm:
for (int i = 0; i < 4; i++)
{
    distance[i] = Vector3.Distance(cornerPositions3d1[i], cornerPositions3d2[i]) * scalingFactor;
}
The distances acquired are never truly correct. I used the cutting board as it allowed me easy calculation of what the distances should be. The above image calculated a distance of 147mm (expected 150mm) for corner 1 (red to purple). The image below shows 188mm (expected 200mm).
What is also worrying is the fact that when measuring the distance between marker corners sharing an edge on the same marker, the 3D distances obtained are never the same. Another thing I noticed is that the brown dots never seem to exactly match up with the colored dots. The colored dots are the coordinates used as input to the CoPlanar posit. The brown dots are the calculated positions from the center of the marker calculated via POSIT.
Does anyone have any idea what might be wrong here? I am pulling out my hair trying to figure it out. The code should be quite simple, I don't think I have made any obvious mistakes with the code. I am not great at maths so please point out where my basic maths might be wrong as well...
You are using way too many black boxes in your question. What is the focal length in the second step? Why go through yaw/pitch/roll in step 3? How do you calibrate? I recommend starting over from scratch without using libraries that you do not understand.
Step 1: Create a camera model. Understand the errors, build a projection. If needed, apply a 2D filter for lens distortion. This might be hard.
Step 2: Find your markers in 2D, after removing lens distortion. Make sure you know the error and that you get the center. Maybe over multiple frames.
Step 3: Un-project to 3D. After 1 and 2 this should be easy (see the sketch after this list).
Step 4: ???
Step 5: Profit! (Measure distance in 3d and know your error)
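For step 3, as a reference point, here is a minimal pinhole-camera un-projection sketch. It assumes the pixel coordinates are already relative to the image centre, the focal length is in pixels, and the depth Z of the point is known; the helper name is made up:

// Pinhole model: a centred pixel (u, v) with focal length f (in pixels) and
// known depth z maps back to camera space as X = u*z/f, Y = v*z/f.
static Vector3 Unproject(float u, float v, float z, float focalLengthPixels)
{
    return new Vector3(u * z / focalLengthPixels,
                       v * z / focalLengthPixels,
                       z);
}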
I think you need 3D photos (two photos taken a set distance apart) so you can get the parallax distance from the image differences.
I have a laser scanner application where I want to find the difference between two plots: one the reference plot without an object, and the other with the object in view. I am plotting the graphs with x, y coordinates. Currently I have plotted the graphs and filled them with different colors so that I can view the subtracted part clearly. But now I want only the difference area to show up. I thought finding the area under the curve would solve the issue, but I think it will only give a numerical value and not the exact position of the subtracted area.
So I searched the internet looking for solutions in C# where I can do this in the plot itself. Hope I made myself clear.
Can someone guide me in the search? I am giving my C# code here:
// PointPairList holds the data for plotting, X and Y arrays (one can use other types of objects as well)
PointPairList spl1 = new PointPairList(x1, y1);
PointPairList spl2 = new PointPairList(x2, y2);
PointPairList spl3 = new PointPairList(x, y);
// Add curves to myPane object
LineItem myCurve1 = myPane.AddCurve("LIDAR Data Scanner-Measurement-Normal", spl1, Color.Blue, SymbolType.None);
LineItem myCurve2 = myPane.AddCurve("LIDAR Data Scanner-Measurement-with object", spl2, Color.Red, SymbolType.None);
LineItem myCurve3 = myPane.AddCurve("LIDAR Data Scanner-Measurement-Subtracted curve", spl3, Color.Green, SymbolType.None);
// myCurve1.Line.Width = 3.0F;
//myCurve2.Line.Width = 3.0F;
myCurve1.Line.Fill = new Fill(Color.White, Color.FromArgb(16, 155, 0, 0), 90F);
myCurve2.Line.Fill = new Fill(Color.Black, Color.FromArgb(143, 55, 6, 0), 90F);
I want to display only the white rectangular part in the figure...
I am not sure about the data structures you have cited. However, generally speaking, if you are dealing with polygons (closed curves specified by a set of x, y points) then you can do polygon clipping to find the difference. See
Algorithm to Compute the Remaining Polygon After Subtraction
How to intersect two polygons?
If you can represent your two plots, i.e. the reference plot and the supplied plot, as polygons, then the above algorithms should allow you to compute the difference, as in the sketch below.
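As a starting point in plain System.Drawing, here is a hedged sketch using the same Region approach as the earlier answer; the point arrays are illustrative placeholders for your two scans converted to closed polygons:

using System.Drawing;
using System.Drawing.Drawing2D;

// Build closed paths from the two plots' (x, y) points.
GraphicsPath referencePath = new GraphicsPath();
referencePath.AddPolygon(referencePoints); // PointF[] from the reference scan

GraphicsPath objectPath = new GraphicsPath();
objectPath.AddPolygon(objectPoints);       // PointF[] from the scan with the object

// Region difference: the area the object scan covers that the reference does not.
Region difference = new Region(objectPath);
difference.Exclude(referencePath);

// Paint just the difference, e.g. in a Paint handler:
// e.Graphics.FillRegion(Brushes.White, difference);

Note this produces the geometric difference of the filled shapes, so only the region between the two curves gets painted.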