Trying to find an algorithm to detect rectangles in images - c#

At the moment I have detected edges in an image, and I plan to extract line segments from the edges using a Hough transform. Once I have the segments, I plan to find corners where two segments cross. Is there an algorithm that can detect rectangles from the corners? Say I have four corners detected: is it possible to get the relative lengths of the sides of the rectangle that the four corners make up, knowing a rectangle has four right angles?
The reason I want to do this is so I can extract the texture bound by the rectangle and draw it as a flat rectangle on the screen.
Edit:
Thanks for the answers so far. I think I should explain my problem more clearly, as I think I was slightly misinterpreted. I am actually trying to transform a warped rectangle into a flat rectangle. I read through some of the AForge articles and saw this function: link. I was wondering if it is possible to determine the ratio between the sides of the rectangle just from the 4 corners?

You're already using the tool you need - the Hough transform.
The standard formulation of the Hough transform is used to identify lines within an image, by translating from the (x,y) space of the image to the (theta,d) solution space of possible lines.
You can do the same thing to identify candidate rectangles by translating from the (x,y) space of the image to the solution space of possible rectangles (theta,d,width,height,rotation).
Taking this approach retains the Hough transform's strength of working with partially visible features in your image. By contrast, a two-step approach that uses the Hough transform to identify edges and then combines those edges into rectangles will fail to identify a rectangle if one edge or corner is sufficiently obscured.
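To make the (theta,d) formulation concrete, here is a minimal, self-contained sketch of the standard line-detecting Hough transform in C#. The voting scheme is standard, but the bin sizes, names, and test line are my own assumptions, not taken from this answer; the rectangle variant applies the same vote-and-find-peak idea to the larger parameter space.

```csharp
using System;
using System.Collections.Generic;

static class HoughDemo
{
    // Vote every edge pixel into a (theta, d) accumulator; the strongest
    // bin is the best-supported line d = x*cos(theta) + y*sin(theta).
    public static (double thetaDeg, double d) FindStrongestLine(
        List<(int x, int y)> edgePixels, int width, int height)
    {
        int thetaBins = 180;                      // 1 degree per bin
        double dMax = Math.Sqrt(width * width + height * height);
        int dBins = (int)(2 * dMax) + 1;          // d can be negative, so offset by dMax
        var acc = new int[thetaBins, dBins];

        foreach (var (x, y) in edgePixels)
            for (int t = 0; t < thetaBins; t++)
            {
                double theta = t * Math.PI / 180.0;
                double d = x * Math.Cos(theta) + y * Math.Sin(theta);
                acc[t, (int)Math.Round(d + dMax)]++;
            }

        // find the accumulator bin with the most votes
        int bestT = 0, bestD = 0, bestVotes = -1;
        for (int t = 0; t < thetaBins; t++)
            for (int di = 0; di < dBins; di++)
                if (acc[t, di] > bestVotes)
                { bestVotes = acc[t, di]; bestT = t; bestD = di; }

        return (bestT, bestD - dMax);
    }

    static void Main()
    {
        // pixels on the line x + y = 100, i.e. theta = 45°, d = 100/sqrt(2) ≈ 70.71
        var pts = new List<(int, int)>();
        for (int x = 0; x <= 100; x++) pts.Add((x, 100 - x));
        var (theta, d) = FindStrongestLine(pts, 150, 150);
        Console.WriteLine($"theta={theta} d={d:F2}");
    }
}
```

A real implementation would apply non-maximum suppression to extract several peaks rather than just the single strongest bin.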

Here is some code you can use to detect quadrilateral shapes in an image using the AForge.NET Framework:
// assumes: corners[0..3] are the quadrilateral's corners in order around the
// perimeter; angleError and maxLengthDiff are the allowed tolerances
PolygonSubType subType = PolygonSubType.Unknown;

// get angles between the 2 pairs of opposite sides
float angleBetween1stPair = Tools.GetAngleBetweenLines(corners[0], corners[1], corners[2], corners[3]);
float angleBetween2ndPair = Tools.GetAngleBetweenLines(corners[1], corners[2], corners[3], corners[0]);

// check 1st pair for parallelism
if (angleBetween1stPair <= angleError)
{
    subType = PolygonSubType.Trapezoid;

    // check 2nd pair for parallelism
    if (angleBetween2ndPair <= angleError)
    {
        subType = PolygonSubType.Parallelogram;

        // check the angle between adjacent sides
        if (Math.Abs(Tools.GetAngleBetweenVectors(corners[1], corners[0], corners[2]) - 90) <= angleError)
            subType = PolygonSubType.Rectangle;

        // get the lengths of 2 adjacent sides
        float side1Length = (float)corners[0].DistanceTo(corners[1]);
        float side2Length = (float)corners[0].DistanceTo(corners[3]);
        if (Math.Abs(side1Length - side2Length) <= maxLengthDiff)
            subType = (subType == PolygonSubType.Parallelogram) ? PolygonSubType.Rhombus : PolygonSubType.Square;
    }
}
else
{
    // check 2nd pair for parallelism - last chance to detect a trapezoid
    if (angleBetween2ndPair <= angleError)
    {
        subType = PolygonSubType.Trapezoid;
    }
}
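The snippet above depends on AForge's Tools helpers and on variables defined elsewhere (corners, angleError, maxLengthDiff). For readers without the framework, here is a self-contained sketch of the same classification logic in plain C#; the Point representation, helper names, enum, and default tolerances are mine, not AForge's.

```csharp
using System;

enum QuadType { Quadrilateral, Trapezoid, Parallelogram, Rectangle, Rhombus, Square }

static class QuadClassifier
{
    // angle (0..90 degrees) between the lines p1-p2 and p3-p4
    static double AngleBetweenLines((double x, double y) p1, (double x, double y) p2,
                                    (double x, double y) p3, (double x, double y) p4)
    {
        double a1 = Math.Atan2(p2.y - p1.y, p2.x - p1.x) * 180 / Math.PI;
        double a2 = Math.Atan2(p4.y - p3.y, p4.x - p3.x) * 180 / Math.PI;
        double diff = Math.Abs(a1 - a2) % 180;
        return diff > 90 ? 180 - diff : diff;
    }

    // angle in degrees at vertex v between the rays v->a and v->b
    static double AngleAtVertex((double x, double y) v, (double x, double y) a, (double x, double y) b)
    {
        double ax = a.x - v.x, ay = a.y - v.y, bx = b.x - v.x, by = b.y - v.y;
        double cos = (ax * bx + ay * by) /
                     (Math.Sqrt(ax * ax + ay * ay) * Math.Sqrt(bx * bx + by * by));
        return Math.Acos(Math.Clamp(cos, -1.0, 1.0)) * 180 / Math.PI;
    }

    static double Dist((double x, double y) p, (double x, double y) q)
        => Math.Sqrt((p.x - q.x) * (p.x - q.x) + (p.y - q.y) * (p.y - q.y));

    // c: four corners in order around the perimeter
    public static QuadType Classify((double x, double y)[] c,
                                    double angleError = 7, double maxLengthDiff = 0.1)
    {
        var type = QuadType.Quadrilateral;
        double pair1 = AngleBetweenLines(c[0], c[1], c[2], c[3]); // opposite sides 0-1 / 2-3
        double pair2 = AngleBetweenLines(c[1], c[2], c[3], c[0]); // opposite sides 1-2 / 3-0

        if (pair1 <= angleError)
        {
            type = QuadType.Trapezoid;
            if (pair2 <= angleError)
            {
                type = QuadType.Parallelogram;
                if (Math.Abs(AngleAtVertex(c[1], c[0], c[2]) - 90) <= angleError)
                    type = QuadType.Rectangle;
                if (Math.Abs(Dist(c[0], c[1]) - Dist(c[0], c[3])) <= maxLengthDiff)
                    type = type == QuadType.Parallelogram ? QuadType.Rhombus : QuadType.Square;
            }
        }
        else if (pair2 <= angleError)
        {
            type = QuadType.Trapezoid;
        }
        return type;
    }
}
```

For example, `QuadClassifier.Classify(new (double, double)[] { (0,0), (1,0), (1,1), (0,1) })` classifies a unit square as `QuadType.Square`.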
See this article for examples of how to detect various shapes:
http://www.aforgenet.com/articles/shape_checker/
Here's a link to download the AForge.NET Framework:
http://www.aforgenet.com/framework/downloads.html

Try this:
http://www.emgu.com/wiki/index.php/Shape_(Triangle,_Rectangle,_Circle,_Line)_Detection_in_CSharp
Have fun coding :)

Related

Stretch the corners of a plane to stretch a contained quadrilateral into a rectangle? (Image given) [duplicate]

I need an inverse perspective transform written in Pascal/Delphi/Lazarus. See the following image:
I think I need to walk through the destination pixels and then calculate the corresponding position in the source image (to avoid problems with rounding errors, etc.).
function redraw_3d_to_2d(sourcebitmap: tbitmap; sourceaspect: extended; point_a, point_b, point_c, point_d: tpoint; megapixelcount: integer): tbitmap;
var
  destinationbitmap: tbitmap;
  x, y, sx, sy: integer;
begin
  destinationbitmap := tbitmap.create;
  destinationbitmap.width := megapixelcount*sourceaspect*???;  // I don't know how to calculate this
  destinationbitmap.height := megapixelcount*sourceaspect*???; // I don't know how to calculate this
  for x := 0 to destinationbitmap.width-1 do
    for y := 0 to destinationbitmap.height-1 do
    begin
      sx := ??;
      sy := ??;
      destinationbitmap.canvas.pixels[x,y] := sourcebitmap.canvas.pixels[sx,sy];
    end;
  result := destinationbitmap;
end;
I need the real formula... So an OpenGL solution would not be ideal...
Note: There is a version of this with proper math typesetting on the Math SE.
Computing a projective transformation
A perspective is a special case of a projective transformation, which in turn is determined by four point correspondences.
Step 1: Starting with the 4 positions in the source image, named (x1,y1) through (x4,y4), you solve the following system of linear equations:
[x1 x2 x3]   [λ]   [x4]
[y1 y2 y3] ∙ [μ] = [y4]
[ 1  1  1]   [τ]   [ 1]
The columns form homogeneous coordinates: one dimension more, created by adding a 1 as the last entry. In subsequent steps, multiples of these vectors will be used to denote the same points. See the last step for an example of how to turn these back into two-dimensional coordinates.
Step 2: Scale the columns by the coefficients you just computed:
    [λ∙x1 μ∙x2 τ∙x3]
A = [λ∙y1 μ∙y2 τ∙y3]
    [λ    μ    τ   ]
This matrix will map (1,0,0) to a multiple of (x1,y1,1), (0,1,0) to a multiple of (x2,y2,1), (0,0,1) to a multiple of (x3,y3,1) and (1,1,1) to (x4,y4,1). So it will map these four special vectors (called basis vectors in subsequent explanations) to the specified positions in the image.
Step 3: Repeat steps 1 and 2 for the corresponding positions in the destination image, in order to obtain a second matrix called B.
This is a map from basis vectors to destination positions.
Step 4: Invert B to obtain B⁻¹.
B maps from basis vectors to the destination positions, so the inverse matrix maps in the reverse direction.
Step 5: Compute the combined Matrix C = A∙B⁻¹.
B⁻¹ maps from destination positions to basis vectors, while A maps from there to source positions. So the combination maps destination positions to source positions.
Step 6: For every pixel (x,y) of the destination image, compute the product
[x']     [x]
[y'] = C∙[y]
[z']     [1]
These are the homogeneous coordinates of your transformed point.
Step 7: Compute the position in the source image like this:
sx = x'/z'
sy = y'/z'
This is called dehomogenization of the coordinate vector.
All this math would be so much easier to read and write if SO were to support MathJax… ☹
Choosing the image size
The above approach assumes that you know the location of your corners in the destination image. For these you have to know the width and height of that image, which is marked by question marks in your code as well. So let's assume the height of your output image is 1, and the width is sourceaspect. In that case, the overall area is sourceaspect as well. You have to scale that area by a factor of pixelcount/sourceaspect to achieve an area of pixelcount, which means that you have to scale each edge length by the square root of that factor. So in the end, you have
pixelcount = 1000000.*megapixelcount;
width = round(sqrt(pixelcount*sourceaspect));
height = round(sqrt(pixelcount/sourceaspect));
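The seven steps can be sketched in code. The question asks for Pascal, but here is a compact, self-contained C# version to match the rest of this page (helper names such as Solve3x3 and BasisToPoints are mine); the structure translates to Delphi/Lazarus directly.

```csharp
using System;

static class Projective
{
    // Step 1: solve m * coeffs = rhs for a 3x3 system via Cramer's rule
    static double[] Solve3x3(double[,] m, double[] rhs)
    {
        double Det(double[,] a) =>
            a[0,0]*(a[1,1]*a[2,2]-a[1,2]*a[2,1])
          - a[0,1]*(a[1,0]*a[2,2]-a[1,2]*a[2,0])
          + a[0,2]*(a[1,0]*a[2,1]-a[1,1]*a[2,0]);
        double d = Det(m);
        var res = new double[3];
        for (int c = 0; c < 3; c++)
        {
            var t = (double[,])m.Clone();
            for (int r = 0; r < 3; r++) t[r, c] = rhs[r];
            res[c] = Det(t) / d;
        }
        return res;
    }

    // Steps 1+2: matrix mapping the basis vectors to the four given points
    static double[,] BasisToPoints((double x, double y)[] p)
    {
        var m = new double[,] { { p[0].x, p[1].x, p[2].x },
                                { p[0].y, p[1].y, p[2].y },
                                { 1, 1, 1 } };
        var k = Solve3x3(m, new[] { p[3].x, p[3].y, 1.0 });
        return new double[,] { { k[0]*p[0].x, k[1]*p[1].x, k[2]*p[2].x },
                               { k[0]*p[0].y, k[1]*p[1].y, k[2]*p[2].y },
                               { k[0],        k[1],        k[2]        } };
    }

    // Step 4: 3x3 inverse via the adjugate divided by the determinant
    static double[,] Invert(double[,] a)
    {
        double det =
            a[0,0]*(a[1,1]*a[2,2]-a[1,2]*a[2,1])
          - a[0,1]*(a[1,0]*a[2,2]-a[1,2]*a[2,0])
          + a[0,2]*(a[1,0]*a[2,1]-a[1,1]*a[2,0]);
        return new double[,] {
            {  (a[1,1]*a[2,2]-a[1,2]*a[2,1])/det, -(a[0,1]*a[2,2]-a[0,2]*a[2,1])/det,  (a[0,1]*a[1,2]-a[0,2]*a[1,1])/det },
            { -(a[1,0]*a[2,2]-a[1,2]*a[2,0])/det,  (a[0,0]*a[2,2]-a[0,2]*a[2,0])/det, -(a[0,0]*a[1,2]-a[0,2]*a[1,0])/det },
            {  (a[1,0]*a[2,1]-a[1,1]*a[2,0])/det, -(a[0,0]*a[2,1]-a[0,1]*a[2,0])/det,  (a[0,0]*a[1,1]-a[0,1]*a[1,0])/det } };
    }

    static double[,] Multiply(double[,] a, double[,] b)
    {
        var c = new double[3, 3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    c[i, j] += a[i, k] * b[k, j];
        return c;
    }

    // Steps 3-7 combined: map one destination pixel back into the source image
    public static (double sx, double sy) DestToSource(
        (double x, double y)[] source, (double x, double y)[] dest, double x, double y)
    {
        var A = BasisToPoints(source);        // step 2
        var B = BasisToPoints(dest);          // step 3
        var C = Multiply(A, Invert(B));       // steps 4+5
        double xp = C[0,0]*x + C[0,1]*y + C[0,2];   // step 6
        double yp = C[1,0]*x + C[1,1]*y + C[1,2];
        double zp = C[2,0]*x + C[2,1]*y + C[2,2];
        return (xp / zp, yp / zp);            // step 7: dehomogenize
    }
}
```

For the actual redraw you would compute C once, then apply only the step-6/7 mapping per destination pixel, exactly as the question's loop intends. For instance, with source corners (0,0),(2,0),(2,2),(0,2) and destination corners (0,0),(1,0),(1,1),(0,1), the destination centre (0.5, 0.5) maps back to (1, 1) in the source.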
Use Graphics32, specifically TProjectiveTransformation (to use with the Transform method). Don't forget to leave some transparent margin in your source image so you don't get jagged edges.

Detect junction angles in a given binary image in c#

I would like to detect all the angles in a given binary image.
The image contains a handwritten character (black on a white background).
Is there a way that I can get the angles at the line junctions with 100% accuracy?
My current solution (below) does find the angles, but sometimes it finds unwanted angles - that is, angles near the junction and not exactly on it (there's an example below).
In this implementation I use Magick.NET.
Because I can't post more than two links, I'll post the input letter image with blue marks at the locations of the angles I want to detect - to get the input binary image, the marks would need to be deleted (sorry).
letter A
My code:
var image = new MagickImage(@"xImg.jpg");
const int Radius = 3; // radius of the circle surrounding each angle point
image.Grayscale(PixelIntensityMethod.Average); // redundant
var lineJunctionsImage = image.Clone(); // an image that will contain only the line junctions - the angle points
// detect all line junction points (black pixels) in the image
// with the morphology method HitAndMiss + the LineJunctions kernel
lineJunctionsImage.Negate();
lineJunctionsImage.Morphology(MorphologyMethod.HitAndMiss, Kernel.LineJunctions);
lineJunctionsImage.Negate();
resulting image
The resulting points are supposed to be in the middle of the junctions, but some are not accurate, and that is critical for me: I want to draw a circle that surrounds each point and then take the angle between the point and the two points that intersect the circle. The next code does this, but it's too complicated and long, so I'll just write the algorithm here:
for each junction point p:
    detect all black pixels bi (i >= 0) that intersect a circle
        of the above radius (3) surrounding p
    for each pair of bi, calculate the angle between p and the pair
    print the angles found with the following protocol:
        {point1} {angle point} {point2} angle
The angles found (the angle points - the middle junction points - are marked):
{11,19} {8,17} {5,19} 112.619
{11,19} {8,17} {9,14} 105.255
{5,19} {8,17} {9,14} 142.12
{24,17} {21,20} {18,19} 116.56
{24,17} {21,20} {20,23} 90
{21,1} {24,0} {27,2} 127.87
{24,0} {27,2} {27,5} 123.7
{26,12} {27,9} {27,6} 161.56
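As a sanity check, the angle values above can be reproduced with a small self-contained helper (my own code, using plain dot-product math; the point triples come from the table above):

```csharp
using System;

static class JunctionAngles
{
    // angle in degrees at vertex v between the rays v->a and v->b
    public static double AngleAt((int x, int y) v, (int x, int y) a, (int x, int y) b)
    {
        double ax = a.x - v.x, ay = a.y - v.y;
        double bx = b.x - v.x, by = b.y - v.y;
        double cos = (ax * bx + ay * by) /
                     (Math.Sqrt(ax * ax + ay * ay) * Math.Sqrt(bx * bx + by * by));
        return Math.Acos(Math.Clamp(cos, -1.0, 1.0)) * 180.0 / Math.PI;
    }

    static void Main()
    {
        // first row of the table: {11,19} {8,17} {5,19} -> prints ~112.62
        Console.WriteLine(AngleAt((8, 17), (11, 19), (5, 19)).ToString("F3"));
    }
}
```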
I think the main problem is that the angle points are sometimes not the correct points but a close neighbor.
Maybe someone has a better, more accurate idea for finding the correct angles.

Make a sphere with equidistant vertices

I'm trying to make a spherical burst of rays for the purpose of checking collision, but with specific interactions happening based upon what or where each ray hit. Hence I'm using rays rather than something simpler such as OverlapSphere.
The reason I'm looking for how to make a sphere is that I can use the same math for my rays, by having them go to the vertices of where the sphere would be. But every way I can find for making a sphere has the lines get closer near the poles, which makes sense, as it's pretty easy to do. But as you can imagine, that's not very useful for my current project.
TL;DR:
How do I make a sphere with equidistant vertices? If it's not perfectly equidistant that's fine; it just needs to be pretty close. If there is a difference, it would be great if you could say how much the difference would be, and where, if applicable.
Extra notes:
I've looked at this and this, but the math is way over my head, so what I've been looking for might've just been staring me in the face this whole time.
You could use an icosphere. As the vertices are distributed over equilateral triangles, they are very close to equidistant.
To construct the icosphere, first you make an icosahedron and then recursively split the faces into smaller triangles, as explained in this article.
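As a starting point, here is a minimal sketch of the first step - the 12 icosahedron vertices, built from three golden-ratio rectangles and projected onto the unit sphere (self-contained C#; the names are mine, not from the linked article):

```csharp
using System;
using System.Collections.Generic;

static class Icosahedron
{
    // the 12 vertices of a regular icosahedron, normalized to the unit sphere
    public static List<(double x, double y, double z)> UnitVertices()
    {
        double t = (1 + Math.Sqrt(5)) / 2; // golden ratio
        var raw = new List<(double x, double y, double z)>
        {
            (-1,  t,  0), ( 1,  t,  0), (-1, -t,  0), ( 1, -t,  0),
            ( 0, -1,  t), ( 0,  1,  t), ( 0, -1, -t), ( 0,  1, -t),
            ( t,  0, -1), ( t,  0,  1), (-t,  0, -1), (-t,  0,  1)
        };
        var verts = new List<(double x, double y, double z)>();
        foreach (var (x, y, z) in raw)
        {
            double len = Math.Sqrt(x * x + y * y + z * z); // = sqrt(1 + t*t) for all
            verts.Add((x / len, y / len, z / len));        // project onto the unit sphere
        }
        return verts;
    }
}
```

Each subdivision pass then splits every face into four triangles and re-normalizes the new midpoints; after a pass or two the vertex spacing is close to, though no longer exactly, uniform.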
Are you aware that the sphere given to you by Unity is in fact designed with this exact goal in mind?
I.e., the entire raison d'être of the built-in Unity sphere is that the points are fairly smoothly spaced - roughly equidistant, as you phrase it.
To bring up such a sphere in Unity, just do this:
You can then instantly get access to the verts, as you know:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vv = mesh.vertices;
int kVerts = vv.Length;
for (int i = 0; i < kVerts; ++i)
    Debug.Log(vv[i]);
Note you can easily check "which part of the sphere" they are on by (for example) checking how far they are from your "cities" (or whatever) or just check (for example) the z values to see which hemisphere they are in .. et cetera.
Furthermore...
Please note. Regarding your overall reason for wanting to do this:
but having specific interactions happen based upon what or where each ray hit
Note that it could not be easier to do this using PhysX. (The completely built-in game physics in Unity.) Indeed, I have never, ever, looked at a collision without doing something "specific" depending on "where it hit!"
You can for example get the point where the contact was with http://docs.unity3d.com/ScriptReference/RaycastHit-point.html
It's worth noting it is absolutely inconceivable one could write something approaching the performance of PhysX in casual programming.
I hope this makes things easier!
slice the sphere into N circles
compute the perimeter of each circle
divide it by the same angle that created the slices
this gives you the number of vertices
and also the angle step inside the circle
cast rays
This is how I coded it in C++ + OpenGL:
// draw unit sphere points (r=1, center=(0,0,0)) ... your ray directions
int ia, na, ib, nb;
double x, y, z, r;
double a, b, da, db;
na = 16;                    // number of slices
da = M_PI/double(na-1);     // latitude angle step
for (a = -0.5*M_PI, ia = 0; ia < na; ia++, a += da) // slice sphere into circles in xy planes
{
    r = cos(a);             // radius of actual circle in xy plane
    z = sin(a);             // height of actual circle in xy plane
    nb = ceil(2.0*M_PI*r/da);
    db = 2.0*M_PI/double(nb);   // longitude angle step
    if ((ia == 0) || (ia == na-1)) { nb = 1; db = 0.0; } // poles: just a single vertex
    for (b = 0.0, ib = 0; ib < nb; ib++, b += db)   // cut circle into vertices
    {
        x = r*cos(b);       // compute x,y of vertex
        y = r*sin(b);
        // this just draws the ray direction (x,y,z) as a line in OpenGL,
        // so you can ignore it and add your ray cast instead
        double w = 1.2;
        glBegin(GL_LINES);
        glColor3f(1.0,1.0,1.0); glVertex3d(x,y,z);
        glColor3f(0.0,0.0,0.0); glVertex3d(w*x,w*y,w*z);
        glEnd();
    }
}
This is how it looks:
The R,G,B lines are the sphere coordinate system axes X,Y,Z.
The whitish lines are your vertices (white) + directions (gray).
[Notes]
do not forget to include math.h
and replace the OpenGL stuff with yours
If you want 4, 6, 8, 12 or 20 vertices, then you can have exactly equidistant vertices with a Platonic solid, all of which fit inside a sphere. The actual coordinates of these should be easy to get. For other numbers of vertices you can use other polyhedra and scale the vertices so they lie on a sphere. If you need lots of points, then a geodesic dome might be a good base. The C60 bucky-ball could be a good base with 60 points. For most of these you should be able to find 3D models from which you can extract coordinates.
I think the easiest way to control points on a sphere is by using spherical coordinates. Then you can control position of points around the sphere by using two angles (rho and phi) and the radius.
Example code for filling points uniformly around a rotating sphere (for fun):
var time = 1;     // increment this variable every frame to see the rotation
var count = 1000;
var radius = 1.0; // sphere radius (assumed here - not declared in the original snippet)
for (int i = 0; i < count; i++)
{
    var rho = time + i;
    var phi = 2 * Math.PI * i / count;
    var x = (float)(radius * Math.Sin(phi) * Math.Cos(rho));
    var z = (float)(radius * Math.Sin(phi) * Math.Sin(rho));
    var y = (float)(radius * Math.Cos(phi));
    Draw(x, y, z); // your drawing code for rendering the point
}
As some answers have already suggested, use an icosahedron-based solution. The source for this is quite easy to come by (and I have written my own several times), but I find the excellent Primitives Pro plugin extremely handy in many other circumstances, and always use its sphere instead of the built-in Unity one.
Link to Primitives Pro component
Primitives Pro options

a texture that repeats across the world, based on X, Y coordinates

I'm using XNA/MonoGame to draw some 2D polygons for me. I'd like a Texture I have to repeat on multiple polygons, based on their X and Y coordinates.
here's an example of what I mean:
I had thought that doing something like this would work (assuming a 256x256 pixel texture)
verticies[0].TextureCoordinate = new Vector2(blockX / 256f, (blockY + blockHeight) / 256f);
verticies[1].TextureCoordinate = new Vector2(blockX / 256f, blockY / 256f);
verticies[2].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, (blockY + blockHeight) / 256f);
verticies[3].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, blockY / 256f);
// each block is draw with a TriangleStrip, hence the odd ordering of coordinates.
// the blocks I'm drawing are not on a fixed grid; their coordinates and dimensions are in pixels.
but the blocks end up "textured" with long horizontal lines that look like the texture has been extremely stretched.
(to check if the problem had to do with TriangleStrips, I tried removing the last vertex and drawing a TriangleList of 1 - this had the same result on the texture, and the expected result of drawing only one half of my blocks.)
what's the correct way to achieve this effect?
My math was correct, but it seems that other code was wrong, and I was missing at least one important thing.
Maybe-helpful hints for other people trying to achieve this effect and running into trouble:
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
^ you need that code. But importantly, your SamplerState and other settings will get reset when you draw sprites (SpriteBatch's Begin()), so especially if you're abstracting your polygon-rendering code into little helper functions, be mindful of when and where you call them! Ex:
spriteBatch.Begin();
// drawing sprites
MyFilledPolygonDrawer(args);
// drawing sprites
spriteBatch.End();
If you do this (assuming MyFilledPolygonDrawer uses 3D methods), you'll need to change all the settings (such as SamplerState) before you draw in 3D, and possibly restore them afterwards (depending on what settings you use for 2D rendering). All of this comes with a little overhead (and makes your code more fragile - you're more likely to screw up :P).
One way to avoid this is to draw all your 3D stuff and 2D stuff separately (all of one, then all of the other).
(In my case, I haven't got my code completely separated out in this way, but I was able to at least reduce some switching between 2D and 3D by using 2D methods to draw solid-color rectangles - Draw Rectangle in XNA using SpriteBatch - and 3D methods only for less-regular and/or textured shapes.)

Creating an equilateral triangular grid over geometry

I need to create an equilateral triangular grid that fits a given geometry.
I have an image containing the geometry; it might include holes or thin paths. I need to create a grid similar to this image:
The circles are variable in diameter and need to cover the entire geometry; the points do not have to be on the geometry.
You can think of the triangular grid as an oblique rectangular grid.
This enables you to store the state of each circle in a 2-dimensional matrix, for instance, and to use simple nested loops for processing. Of course, you will then have to translate these logical coordinates to geometry-plane coordinates for drawing.
const double Sin30 = 0.5;
static readonly double Cos30 = Math.Cos(30 * Math.PI / 180);

for (int xLogical = 0; xLogical < NX; xLogical++) {
    for (int yLogical = 0; yLogical < NY; yLogical++) {
        double xGeo = GridDistance * xLogical * Cos30;
        double yGeo = GridDistance * (yLogical + xLogical * Sin30);
        ...
    }
}
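As a quick sanity check of the oblique mapping, the three logical neighbours (0,0), (1,0) and (0,1) should map to the corners of an equilateral triangle with side GridDistance. A self-contained sketch (the GridDistance value here is arbitrary, chosen just for the check):

```csharp
using System;

static class ObliqueGridCheck
{
    const double Sin30 = 0.5;
    static readonly double Cos30 = Math.Cos(30 * Math.PI / 180);
    const double GridDistance = 10.0; // arbitrary spacing for the check

    // logical (column, row) -> geometry-plane coordinates, as in the loop above
    public static (double x, double y) ToGeo(int xLogical, int yLogical) =>
        (GridDistance * xLogical * Cos30,
         GridDistance * (yLogical + xLogical * Sin30));

    public static double Dist((double x, double y) p, (double x, double y) q) =>
        Math.Sqrt((p.x - q.x) * (p.x - q.x) + (p.y - q.y) * (p.y - q.y));

    static void Main()
    {
        var o = ToGeo(0, 0);
        var a = ToGeo(1, 0);
        var b = ToGeo(0, 1);
        // all three pairwise distances equal GridDistance -> equilateral triangle
        Console.WriteLine($"{Dist(o, a):F6} {Dist(o, b):F6} {Dist(a, b):F6}");
    }
}
```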
I am assuming this is to create a 2D meshing tool. If it is, and it is homework, I suggest doing it yourself, as you will get a lot out of it. If it is not a meshing problem, what I have to say should help you regardless...
To do this, use the grid node centres to generate your equilaterals. If you don't have the centre points to start with, you will need to first select an orientation for your object and then create these (rectangular-based) grid nodes (you will have to work out a way of testing whether these points actually lie inside your object boundaries). You can then construct your equilateral triangles using these points. Note: you will again have to deal with edge detection to get half-decent accuracy.
To go a bit further than just equilaterals and get a more accurate mesh, you will have to look into anisotropic mesh adaptation (AMA) using triangulation. This will be a lot harder than the basic approach outlined above - but fun!
Check out this link to a 2D tet-mesh generator using AMA. The paper this code is based on is:
V. Dolejsi: Anisotropic mesh adaptation for finite volume and finite element methods on triangular meshes
Computing and Visualisation in Science, 1:165-178, 1998.
