I have an image like this:
Next I use some techniques to get the contours of the sudoku grid. For demonstration, here's a picture (green = bounding box, red = contour):
Now my question: how can I warp this to a perfect square? I have tried to use cvWarpPerspective, but I can't seem to figure out how to get the corners from the contour (red line).
Any help would be greatly appreciated!
OK. You have the contour of the puzzle trapezoid. You have the bounding box. You want a nice-looking square. This is a subjective part of your system. You can, for example, just snap the target height to the width:
Rect target(0,0,boundbox.width,boundbox.width);
How to actually transform the trapezoid into a square?
First, crop/set the ROI to the target:
Mat cropped = source_image(target);
Then, you find the homography matrix, using the source and target quadrangles:
Point2f SourceTrapezoid[4]; // the trapezoid corners, with boundbox.x and boundbox.y subtracted
Point2f TargetSquare[4];    // (0,0), (target.width,0), (target.width,target.height), (0,target.height)
Mat homography_mat = findHomography(
    vector<Point2f>(SourceTrapezoid, SourceTrapezoid + 4),
    vector<Point2f>(TargetSquare, TargetSquare + 4));
Note that the points in TargetSquare must be listed in the same order as in SourceTrapezoid (for example, all in clockwise direction).
Next, apply the transformation (note: warpPerspective warps an image; perspectiveTransform is for sparse lists of points):
Mat transformed;
warpPerspective(cropped, transformed, homography_mat, target.size());
And copy the transformed box into its place:
transformed.copyTo(source_image(target));
I gave you an example with the C++ OpenCV API, but the C API is equivalent; check the OpenCV documentation for the equivalent functions.
And as user LSA added in the comments, there are a ton of examples on the web. But the steps for "transform quadrilateral into rectangle" are very simple:
Decide on the target rectangle's aspect ratio and placement (if applicable)
Order the points of the target rectangle in the same order as the source quadrangle
Calculate the homography
Apply the perspective transform
Put the resulting image where you want it (if applicable)
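The steps above can be sketched in plain Python. This is a minimal illustration of the math findHomography reduces to in the exact four-correspondence case, plus the per-point mapping that the warp applies; the corner values are made up for the example, and in real code you would just call OpenCV:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def find_homography(src, dst):
    """3x3 homography mapping the 4 src points onto the 4 dst points (h22 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, x, y):
    """Map one point through the homography (what the warp does per pixel)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Made-up trapezoid corners, listed clockwise from the top-left:
trapezoid = [(30, 10), (250, 25), (270, 280), (10, 260)]
square = [(0, 0), (300, 0), (300, 300), (0, 300)]
H = find_homography(trapezoid, square)
# Each trapezoid corner now lands on the matching square corner:
print(warp_point(H, *trapezoid[0]))  # close to (0.0, 0.0)
```

Note how the source and target corner lists are in the same (clockwise) order, which is exactly the ordering requirement mentioned above.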
I have searched far and wide for an answer to this problem, and I cannot find one so I am asking here.
The Problem:
I have a laser projecting down on a surface from overhead and I want to project some specific size shapes on this surface. In order to do this I need to 'calibrate' the laser to ground it in the real world.
The laser projects in its own coordinate system, ranging from -32000 to 32000 in the x and y directions. I have targets set up on my surface in a rough rectangle (see the image below for more details). The targets are measured in millimeters and form their own coordinate system.
I need to be able to take points in millimeters and get them into this range of -32000 to 32000 accurately in an array of scenarios.
Example:
What is the most accurate way of determining the laser space coordinates of the desired point?
Problem 2:
The projection space is not guaranteed to be flat. It could be tilted in any direction. For example, if the bottom (in relation to the example picture) is raised, the real-world coordinates stay the same in 2-D, but the measured laser coordinates become more of a trapezoid. See the image below.
If anyone has encountered/solved a similar problem or can even begin to point me in the right direction for a solution, it would be greatly appreciated.
Thank you!
I had the same issue on my post right here: https://stackoverflow.com/a/52480400/9130280
I asked my question in terms of pictures because it was easier to explain, but I applied the solution to device positioning on a surface. It is close to what you are trying to do.
Basically, you have to use the OpenCvSharp 3 library (from NuGet).
First you have to get a homography matrix. The only coordinates you have to know are the corners. So you fill up two arrays with the corner points and then you use:
homographyMatrix = OpenCvSharp.Cv2.FindHomography(originalPointsList, targetPointsList);
And then to get any point in "millimeters" to its equivalent in laser coordinates:
targetPoint = OpenCvSharp.Cv2.PerspectiveTransform(originalPoint, homographyMatrix);
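For reference, the math PerspectiveTransform applies per point is tiny. Here is a sketch in Python with a made-up homography matrix for a perfectly flat, axis-aligned setup (your real matrix would come from FindHomography with your measured target coordinates, and would absorb the trapezoid distortion from a tilted surface):

```python
def mm_to_laser(H, x_mm, y_mm):
    """Apply a 3x3 homography: multiply by (x, y, 1), then divide by w."""
    w = H[2][0] * x_mm + H[2][1] * y_mm + H[2][2]
    u = (H[0][0] * x_mm + H[0][1] * y_mm + H[0][2]) / w
    v = (H[1][0] * x_mm + H[1][1] * y_mm + H[1][2]) / w
    return u, v

# Made-up example: targets span 0..1000 mm and map to -32000..32000.
# For a flat, axis-aligned setup the homography degenerates to scale + offset:
H = [[64.0, 0.0, -32000.0],
     [0.0, 64.0, -32000.0],
     [0.0, 0.0, 1.0]]
print(mm_to_laser(H, 500, 500))  # -> (0.0, 0.0), the center of laser space
print(mm_to_laser(H, 1000, 0))   # -> (32000.0, -32000.0)
```

The division by w is what makes this a projective (not merely affine) mapping, which is exactly what handles the tilted-surface case in Problem 2.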
Let me know if you need more details.
I've got a problem. I am taking pictures of a common solar module with a camera flash. I need to detect the frame of the module to cut the module out and undistort it (I only need the cell area, i.e. the dark area inside the frame).
sample image - direct flash --> problems with a big reflection (I think I can reduce it with a good diffuser)
sample image - flash from angle
Does anybody have a recommendation for a robust method to detect the frame? I need something that works with various image angles and lighting conditions.
processed sample image 2
The last picture is processed: I blurred the image, grayscaled it, and inverted it. After that I thresholded the image and tried to detect contours (I got some problems with the shadow at the bottom of the image).
Thanks for your time.
Chris
As mentioned in:
Rectangle recognition with perspective projection
the Hough transform should work well for rectangle detection, provided you can assume that the sides of the rectangle are the most prominent lines in your image. Then you can simply detect the 4 biggest peaks in Hough space and you have your rectangle.
This works for example with a photo of a white sheet of paper in front of a dark background.
Ideally you would preprocess the image with blur, threshold, and morphological operators to remove any small-scale structures before the Hough transform.
If there are multiple smaller rectangles or other sorts of prominent lines in your images, contour detection might be the better choice.
Some general advantages for the hough transform off the top of my head:
Hough transform can still work if part of the rectangle is obstructed or out of the frame.
Hough transform should be faster than contour detection, I guess?
Hough transform will ignore anything that is not a straight line, so you may have greater success with cluttered images. (if the rectangle sides are the most prominent lines)
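To illustrate the peak-picking idea, here is a toy Python sketch of the Hough voting scheme on a synthetic edge map; a real pipeline would run cv2.HoughLines on the thresholded image instead. Each edge point votes for every (rho, theta) line that could pass through it, and collinear points pile their votes into one bin:

```python
import math

def hough_lines(points, width, height, theta_steps=180):
    """Vote each edge point into (rho, theta) space; rho = x*cos(t) + y*sin(t)."""
    acc = {}
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

# Made-up edge map: a horizontal line y = 20 plus one noise point.
points = [(x, 20) for x in range(100)] + [(5, 70)]
acc = hough_lines(points, 100, 100)
(rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
# The strongest peak is theta index 90 (90 degrees = horizontal line) at rho = 20:
print(rho, t, votes)  # -> 20 90 100
```

For the module frame you would take the 4 strongest peaks instead of just the maximum, then intersect the 4 lines to recover the corners.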
For a research project, I have to find the ellipses in a fossil image.
For each fossil image, I also have a CSV file containing the contour of the fossil, in cartesian coordinates.
I need help determining the starting and ending points of each ellipse present in the fossil, so that I can apply an ellipse-fitting algorithm to them.
I started to look at the possibility of studying the variations in the different slopes of the contour.
It somehow worked, until I tried it on fossils that have very low curvature variation. As you can see in the image below (click the link), the pink points are where the variation of the contour slope is highest. However, it doesn't work for the bottom ellipse.
So I need a new approach. Do you have any hints or ideas where I can look?
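As a concrete baseline for the slope-variation idea, here is a small Python sketch that measures the turning angle at each contour point; indices with a large angle are candidate ellipse endpoints. The square contour is a made-up example with obvious corners. For real fossils with low-curvature joins you would need to smooth the contour first and use a larger neighbour step, which is exactly where this simple approach starts to struggle:

```python
import math

def turning_angles(contour, step=1):
    """Exterior angle at each contour point, from the two neighbouring segments.
    Sharp corners (candidate ellipse endpoints) give large absolute angles."""
    n = len(contour)
    angles = []
    for i in range(n):
        x0, y0 = contour[(i - step) % n]
        x1, y1 = contour[i]
        x2, y2 = contour[(i + step) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = a2 - a1
        while d <= -math.pi:  # wrap into (-pi, pi]
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        angles.append(d)
    return angles

# Made-up contour: a square sampled at 8 points has four sharp corners.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
angles = turning_angles(square)
corners = [i for i, a in enumerate(angles) if abs(a) > math.radians(45)]
print(corners)  # -> [0, 2, 4, 6]
```

An alternative worth exploring when the turning angle is too flat: fit ellipse candidates to sliding windows of the contour and split where the fit residual jumps.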
I need to write a program that uses matrix multiplication to rotate an image (a simple square) about the center of the square by a given number of degrees. Any help on this would be greatly appreciated. I have almost no clue what I'm doing, because I have not taken so much as a glance at calculus.
Take a look at http://www.aforgenet.com/framework/. This is a complete image-processing framework in C# that I'm using on a project. I just checked their help, and they have a function that does what you want:
// create filter - rotate for 30 degrees keeping original image size
RotateBicubic filter = new RotateBicubic( 30, true );
// apply the filter
Bitmap newImage = filter.Apply( image );
It is an LGPL library, so if licensing is an issue, you will have no problems as long as you link against their binaries. There are also other libraries out there.
If you do decide to write it yourself, be careful about speed, as C# is not great at number crunching. But there are ways to work around that.
Here's a good CodeProject article discussing just what you want:
http://www.codeproject.com/KB/GDI-plus/matrix_transformation.aspx
Rotating a digital image in the plane boils down to a lot of 2×2 matrix multiplications. There's no calculus involved! You don't need an entire image-processing framework to rotate a square image, unless this is really performance-sensitive in terms of image quality and speed.
Go and read the first half of Wikipedia's article on the rotation matrix and that should get you off to a good start.
In a nutshell: establish your origin (perhaps the center of the image, if that's what you want to rotate around), compute in pixel space the coordinates of a pixel you'd like to rotate, and multiply by your rotation matrix (see the article). Once you've done the multiplication, you'll have the new coordinates of the pixel in pixel space. Write that pixel out to another image buffer and you'll be off and rotating. Repeat. Note that once you know your angle of rotation, you only need to compute the rotation matrix once!
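Here is a minimal Python sketch of exactly that per-pixel step; the image buffers are omitted, this is just the coordinate math with the 2×2 rotation matrix:

```python
import math

def rotate_pixel(x, y, cx, cy, degrees):
    """Rotate pixel (x, y) about center (cx, cy) using the 2x2 rotation matrix."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    dx, dy = x - cx, y - cy      # translate the center to the origin
    rx = c * dx - s * dy         # [ cos -sin ] [dx]
    ry = s * dx + c * dy         # [ sin  cos ] [dy]
    return rx + cx, ry + cy      # translate back

# Rotate the pixel one step right of a 100x100 image's center by 90 degrees:
print(rotate_pixel(51, 50, 50, 50, 90))  # -> (50.0, 51.0)
```

In a real rotation loop you would iterate over the *destination* pixels and apply the inverse rotation to find the source pixel (plus interpolation), so the output image has no holes.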
Have fun,
Paul
I am stuck on a simple yet vexing problem with basic geometry. Too bad I don't remember my high-school co-ordinate geometry; I'm looking for some help.
My problem is illustrated in this diagram: A rectangle rotated, scaled, and warped into a parallelogram http://img248.imageshack.us/img248/8011/transform.png
I am struggling with transforming a co-ordinate from the rectangle to the resized parallelogram. Any tips, pointers and/or code examples would be wonderful!
Thanks,
M.
There are several steps in this transformation.
Scale about (x, y) to adjust to the final size W', H' (possibly unequal scaling on the X and Y axes).
Apply a shear transform to convert the rectangle to a parallelogram (keeping (x, y) invariant).
Rotate about (x, y) to align to the final coordinate orientation.
Translate to the new location.
Create the coordinate matrices for each of these and composite (multiply) them together to create the overall transform. Wikipedia could be your starting point to know about these transformation matrices.
Tip: Might be simplest to apply a translation to move (x,y) to the origin first. Then, the shear, scaling and rotation are a lot simpler to do. Then move it to the new location.
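That composition can be sketched in a few lines of Python with 3×3 homogeneous matrices. The numbers here are made up, and rotation slots into the chain the same way as the other matrices; note the matrices are multiplied in reverse of the order they are applied to points:

```python
def mat_mul(A, B):
    """3x3 matrix product A * B (B is applied to points first)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty): return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
def scale(sx, sy):     return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]
def shear_x(k):        return [[1, k, 0], [0, 1, 0], [0, 0, 1]]

def apply(M, x, y):
    """Map the point (x, y, 1) through the composite transform."""
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])

# Composite per the tip: move (x, y) to the origin, scale, shear, then
# move everything to the new location. Values are made up for illustration.
x, y = 10, 20                      # the invariant corner of the rectangle
M = translate(100, 50)             # 4. final placement
M = mat_mul(M, shear_x(0.5))       # 3. rectangle -> parallelogram
M = mat_mul(M, scale(2, 3))        # 2. resize to W', H'
M = mat_mul(M, translate(-x, -y))  # 1. bring (x, y) to the origin
print(apply(M, 10, 20))  # the invariant corner lands at (100, 50)
```

Once M is built, every rectangle co-ordinate maps to the parallelogram with one multiply, which is the whole point of compositing first.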