I am stuck on a simple yet vexing problem with basic geometry. Too bad I don't remember my high-school co-ordinate geometry, so I am looking for some help.
My problem is illustrated in this diagram: A rectangle rotated, scaled, and warped into a parallelogram http://img248.imageshack.us/img248/8011/transform.png
I am struggling with transforming a co-ordinate from the rectangle to a resized parallelogram. Any tips, pointers and/or code-examples would be wonderful!
Thanks,
M.
There are several steps in this transformation.
Scale about (x,y) to adjust to the final size W', H' (possibly unequal scaling on the X and Y axes).
Apply a shear transform to convert the rectangle to a parallelogram (keeping x,y invariant).
Rotate about (x,y) to align to the final coordinate orientation.
Translate to the new location.
Create the transformation matrix for each of these steps and compose (multiply) them together to get the overall transform. Wikipedia is a good starting point for reading about these transformation matrices.
Tip: it might be simplest to apply a translation that moves (x,y) to the origin first. The shear, scaling and rotation are then a lot simpler to express. Finally, translate to the new location.
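A minimal sketch of that composition in C++, using 3x3 homogeneous matrices; the helper functions and all the numeric parameters below are purely illustrative assumptions, not values from the question:

#include <array>
#include <cmath>
#include <cstdio>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Multiply two 3x3 homogeneous 2D transforms.
Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Mat3 translate(double tx, double ty) { return {{{1, 0, tx}, {0, 1, ty}, {0, 0, 1}}}; }
Mat3 scale(double sx, double sy)     { return {{{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}}}; }
Mat3 shearX(double k)                { return {{{1, k, 0},  {0, 1, 0},  {0, 0, 1}}}; }  // shear along X
Mat3 rotate(double t) {
    double c = std::cos(t), s = std::sin(t);
    return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
}

int main() {
    double x = 10, y = 20;   // the rectangle corner kept invariant (illustrative values)

    // Rightmost factor is applied first: move (x,y) to the origin, then scale,
    // shear, rotate, and finally translate to the new location (60, 80).
    Mat3 m = mul(translate(60, 80),
             mul(rotate(0.3),
             mul(shearX(0.5),
             mul(scale(2.0, 1.5),
                 translate(-x, -y)))));

    // Map one point from the original rectangle through the composite transform.
    double px = 15, py = 25;
    double nx = m[0][0] * px + m[0][1] * py + m[0][2];
    double ny = m[1][0] * px + m[1][1] * py + m[1][2];
    std::printf("(%.3f, %.3f)\n", nx, ny);
    return 0;
}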
Related
I would like to ask if it is possible to generate and draw a circle or ellipse using a LineRenderer based on two points in the scene. Specifically, these are vertices of the mesh. For example, I would like to draw a circle/ellipse around the neck of a given humanoid.
I need something like this:
I need this circle to enlarge and shrink based on the properties of the blendshapes. If I widen the neck with a blendshape, I need this circle to enlarge as well. I can draw this circle by finding the IDs of all the vertices around the neck and feeding those vertices into the LineRenderer, but that is a very impractical approach that I don't want to use. So I would like to ask whether it is possible to draw such a circle connected to only two points rather than to all/many of them.
I will be very grateful for any advice.
EDIT:
I was hoping that it would be enough to connect 4 vertices in world space using the LineRenderer, which would give me a drawn square, and that some rounding could then be applied to the edges of this square to create an ellipse or circle. For a better idea, I have also added a picture describing what I thought would be enough to use.
I have an image like this:
Next I use some techniques to get the contours of the sudoku grid. For demonstration, here's a picture: (green = bounding box, red = contour)
Now my question: how can I warp this to a perfect square? I have tried to use cvWarpPerspective but I can't seem to figure out how to get the corners from the contour (red line).
Any help would be greatly appreciated!
OK. You have the contour of the puzzle trapezoid. You have the bounding box. You want a nice looking square. This is a subjective part of your system. You can, for example, just snap the target height to the width:
Rect target(0,0,boundbox.width,boundbox.width);
How to actually transform the trapezoid into a square?
First, crop/set the roi to the target:
Mat cropped = source_image(target);
Then, you find the homography matrix, using the source and target quadrangles:
std::vector<Point2f> SourceTrapezoid(4); // the trapezoid corners, with boundbox.x and boundbox.y subtracted
std::vector<Point2f> TargetSquare(4);    // (0,0), (target.width,0), (target.width,target.height), (0,target.height)
Mat homography_mat = findHomography(SourceTrapezoid, TargetSquare);
Note that the points in TargetSquare must be listed in the same order as in SourceTrapezoid (for example, all in clockwise direction).
Next, just apply the transformation (note that warpPerspective is the call that warps a whole image; perspectiveTransform operates on lists of points):
Mat transformed;
warpPerspective(cropped, transformed, homography_mat, target.size());
And copy the transformed box into its place:
transformed.copyTo(source_image(target));
I gave you an example with the C++ OpenCV API, but the C API is equivalent. Check the OpenCV documentation for the methods' equivalents.
And as user LSA added in the comments, there are a ton of examples on the web. But the steps for "transform a quadrilateral into a rectangle" are very simple (a minimal end-to-end sketch follows the list):
Decide on the target rectangle's aspect ratio and placement (if applicable)
Order the points of the target rectangle in the same order as the source quadrangle
Calculate the homography
Apply the perspective transform
Put the resultant image where you want it (if applicable)
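Here is a minimal sketch of those steps, assuming the OpenCV C++ API; the file names, bounding box and trapezoid corner values are placeholders, since the real ones come from your own contour detection:

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

int main() {
    Mat source_image = imread("sudoku.png");

    // 1. Pick the target rectangle: here, a square with side = bounding box width.
    Rect boundbox(50, 40, 400, 420);                 // placeholder bounding box
    Rect target(0, 0, boundbox.width, boundbox.width);

    // 2. Source trapezoid corners (relative to boundbox) and target corners,
    //    listed in the same clockwise order.
    std::vector<Point2f> SourceTrapezoid = {
        {12, 8}, {385, 20}, {395, 410}, {5, 400}     // placeholder corners
    };
    std::vector<Point2f> TargetSquare = {
        {0, 0},
        {(float)target.width, 0},
        {(float)target.width, (float)target.height},
        {0, (float)target.height}
    };

    // 3. Homography mapping the trapezoid onto the square.
    Mat homography_mat = findHomography(SourceTrapezoid, TargetSquare);

    // 4. Warp the cropped region into the target square.
    Mat cropped = source_image(boundbox);
    Mat transformed;
    warpPerspective(cropped, transformed, homography_mat, target.size());

    imwrite("sudoku_square.png", transformed);
    return 0;
}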
So I have a seamless texture. The black lines in the picture represent the repeating texture. Then I have a series of blocks that have known x,y coordinates and known height and width. In Unity, textures have a 'scale' and 'offset'.
Scale is the amount of the texture that will show on the block. So the block starting at 0,0 will have a scale of about (.2, 1.3): its width is .2x the width of a single texture (for the sake of simple numbers) and its height is about 1.3x the height of the texture.
Offset then moves the starting point of the texture. The block at 0,0 will have an offset of 0,0 because it is perfectly aligned with a corner, but the block immediately to its right would have an offset of about (.2, 0) because the texture needs to start about .2 units in to align properly. This is as far as I understand offset. I am pretty sure this is correct, but feel free to correct me if I am wrong.
Now when you apply a texture to a block, Unity automatically scales the texture to start at the top left corner of the block and stretches it to fit one full iteration inside that space. I obviously don't want this.
My question concerns the three blocks labeled with (x,y) coordinates. I have tried for several hours over a few weeks to get this right, unsuccessfully.
So how do I take in the x,y position and width/height to create a correct scale and offset so that those blocks will look like they are exactly where they are supposed to be in the texture?
It is not a particularly difficult concept but after staring at it I have no more ideas.
For the sake of the question, assume a single texture is 12x12. The x,y and width/height are known values but arbitrary.
I know it's normally good practice to post attempted code, but I would rather see a good way of doing this than answers that try to fix my failed attempts. I will post code if people want to see that I did try on my own, or how I initially went about it.
What is a UV Map
Textures are applied to models via what is known as a UV map. The idea is that each (x,y,z) vertex has two (u,v) coordinates assigned. The UV coordinates define which point on the texture should correspond to that vertex. If you want to know more, I suggest the awesome (and free) Udacity 3D graphics course. Unit 8 talks about UV mapping.
How to solve your problem using UV Maps
Let's ignore all the vertices that are not visible - your blocks are basically rectangles. You can assign a UV mapping where the world position of each vertex is turned into its UV coordinate. This way, all the blocks will have the same origin point in texture space (0,0 in world position corresponds to 0,0 on the texture). The code looks like this:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vertices = mesh.vertices;
Vector2[] uvs = new Vector2[vertices.Length];
for (int i = 0; i < uvs.Length; i++)
{
    // Find world position of each point
    Vector3 vertexPos = transform.TransformPoint(vertices[i]);
    // And assign its x,y coordinates to u,v.
    uvs[i] = new Vector2(vertexPos.x, vertexPos.y);
}
mesh.uv = uvs;
You have to do this each time your block position changes.
I have an animated model that's spinning.
I want to hide (not draw) any part of the model where y < 0.
What are the ways I can do it?
Ideas:
1) Draw a giant rectangular box right below y = 0.
2) Tweak the camera matrix so that y < 0 falls outside the clipping planes (but I have no idea how).
Can someone point me in the right direction? =)
A purely mathematical approach:
Don't draw the polygons whose y's are all less than 0.
Draw the polygons whose y's are all greater than or equal to 0.
Clip the rest of the polygons with the y=0 plane and draw them.
If the polygons making up the model are triangles, clipping them is pretty trivial. You need to clip the two sides intersecting with the y=0 plane and replace the original vertices whose y's are less than 0 with the intersection points of those two sides with the clipping plane.
Use the line equations:
(x-x1) = (x2-x1)*(y-y1)/(y2-y1)
(z-z1) = (z2-z1)*(y-y1)/(y2-y1)
where 1 and 2 are the vertices of the side being clipped by the y=0 plane. Substitute their coordinates (x1, y1, z1, x2, y2, z2) and y=0 into the equations to get x and z of the intersection point. Use this point's coordinates instead of vertex 1's or 2's (whichever has y < 0).
If the polygons are texture-mapped, you'll need to recalculate the texture coordinates for the vertices that you got from the clipping. You do that in the same fashion.
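As an illustration, here is a small C++ sketch of that clipping step for a single triangle; the Vertex struct (position plus texture coordinates) is my own assumption for the example, and the UVs are interpolated with the same factor as the positions:

#include <vector>

struct Vertex {
    double x, y, z;   // position
    double u, v;      // texture coordinates
};

// Intersection of edge (a,b) with the plane y = 0, interpolating position and UVs.
Vertex intersectY0(const Vertex& a, const Vertex& b) {
    double t = (0.0 - a.y) / (b.y - a.y);   // same factor as (y - y1) / (y2 - y1)
    Vertex r;
    r.x = a.x + (b.x - a.x) * t;
    r.y = 0.0;
    r.z = a.z + (b.z - a.z) * t;
    r.u = a.u + (b.u - a.u) * t;
    r.v = a.v + (b.v - a.v) * t;
    return r;
}

// Clip a triangle against y >= 0. Returns 0, 3 or 4 vertices (triangle or quad).
std::vector<Vertex> clipTriangle(const Vertex tri[3]) {
    std::vector<Vertex> out;
    for (int i = 0; i < 3; ++i) {
        const Vertex& cur = tri[i];
        const Vertex& nxt = tri[(i + 1) % 3];
        if (cur.y >= 0.0)
            out.push_back(cur);                     // keep vertices above the plane
        if ((cur.y >= 0.0) != (nxt.y >= 0.0))
            out.push_back(intersectY0(cur, nxt));   // edge crosses the plane
    }
    return out;   // empty if the whole triangle is below y = 0
}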
It sounds like you need to look into the MSDN BoundingFrustum class.
Here is a good tutorial from Nic's GameDev Site.
I am developing an image analysis app and I need to calculate the aspect ratio of a segmented particle.
According to
http://www.sympatec.com/Science/Characterisation/05_ParticleShape.html
the AR is given by (FIG 1) Xfmin/Xfmax.
Any suggestion of an algorithm to get these values (Xf)?
You seem to want the width and diameter of a concave polygon. Maybe you can use the rotating calipers algorithm for that, possibly after splitting your concave polygon into a number of convex polygons.
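If rotating calipers feels like overkill, a brute-force approximation (my own suggestion, not part of the answer above) is to project the contour points onto a set of sampled directions and take the smallest and largest projected extents as Xfmin and Xfmax:

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Pt { double x, y; };

// Approximate the minimum and maximum Feret diameters (Xfmin, Xfmax) of a
// particle outline by measuring its projected extent along sampled directions.
void feretDiameters(const std::vector<Pt>& contour, double& xfMin, double& xfMax,
                    int angleSteps = 180) {
    const double kPi = 3.14159265358979323846;
    xfMin = std::numeric_limits<double>::max();
    xfMax = 0.0;
    for (int a = 0; a < angleSteps; ++a) {
        double theta = kPi * a / angleSteps;          // directions over [0, pi)
        double dx = std::cos(theta), dy = std::sin(theta);
        double lo = std::numeric_limits<double>::max();
        double hi = std::numeric_limits<double>::lowest();
        for (const Pt& p : contour) {
            double proj = p.x * dx + p.y * dy;        // projection onto this direction
            lo = std::min(lo, proj);
            hi = std::max(hi, proj);
        }
        double extent = hi - lo;                      // Feret diameter in this direction
        xfMin = std::min(xfMin, extent);
        xfMax = std::max(xfMax, extent);
    }
}

// Aspect ratio as defined on the linked Sympatec page: Xfmin / Xfmax.
double aspectRatio(const std::vector<Pt>& contour) {
    double xfMin, xfMax;
    feretDiameters(contour, xfMin, xfMax);
    return xfMin / xfMax;
}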