I have already read some articles and even questions on Stack Overflow, but I didn't find what I wanted. I may not have looked carefully, so point me to the right articles/questions if you know any.
Anyway, what I want to do is clear. I know the camera's position (x',y',z') and I have the camera's rotation matrix (Matrix3). I also have the camera's aspect ratio (W/H) and output size (W,H). My point is at (x,y,z), and I want code (or an algorithm, so I can write the code) to calculate its position on screen (the screen's size is the same as the camera output's size) as (x'',y'').
Do you know of any useful articles? It is important that the article or algorithm supports the camera's rotation matrix.
Thank you all.
Well, you need to specify the projection type (orthogonal, perspective, ...?) first.
transform any point (x,y,z) to camera space
Subtract the camera position, then apply the inverse of the camera direction (coordinate system) matrix. The Z axis of the camera is usually the viewing direction. If you use a 4x4 homogeneous matrix then the subtraction is already in it, so do not do it twice!
apply projection
An orthogonal projection is just a scale matrix. Perspective projections are more complex, so google for them. This is where the aspect ratio is applied, and also the FOV (field of view) angles.
clip to screen and Z-buffer space
Now you have x,y,z in projected camera space. To actually obtain screen coordinates with perspective, you have to divide by the z or w coordinate (depending on the math and projection used), so for your 3x3 matrices:
xscr=x/z;
yscr=y/z;
That is why z-near for projections must be > 0! (Otherwise it could cause division by zero.)
render or process pixel (x,y)
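Putting the steps above together, here is a minimal Python sketch. The vertical FOV value and the convention that the rotation matrix's rows are the camera's right/up/forward axes are assumptions, since the question pins down neither:

```python
import math

def world_to_screen(point, cam_pos, cam_rot, fov_deg, width, height):
    """Project a world-space point to pixel coordinates.

    cam_rot is a 3x3 row-major rotation matrix whose rows are assumed to be
    the camera's right, up, and forward axes; fov_deg is the vertical field
    of view. Returns None if the point is behind the camera.
    """
    # 1) transform into camera space: subtract the camera position,
    #    then apply the inverse rotation (transpose of a rotation matrix;
    #    with rows as axes this is a dot product per axis)
    d = [point[i] - cam_pos[i] for i in range(3)]
    x = sum(cam_rot[0][i] * d[i] for i in range(3))
    y = sum(cam_rot[1][i] * d[i] for i in range(3))
    z = sum(cam_rot[2][i] * d[i] for i in range(3))
    if z <= 1e-6:  # behind (or on) the camera plane: z-near must be > 0
        return None
    # 2) perspective divide, scaled by a focal factor derived from the FOV
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    aspect = width / height
    ndc_x = (f / aspect) * x / z   # normalized device coords in [-1, 1]
    ndc_y = f * y / z
    # 3) map NDC to pixel coordinates (origin at top-left, y pointing down)
    sx = (ndc_x + 1.0) * 0.5 * width
    sy = (1.0 - ndc_y) * 0.5 * height
    return sx, sy
```

With an identity rotation and the camera at the origin, a point straight ahead on the view axis lands at the center of the screen, which is a quick sanity check for the conventions chosen.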
For more info see: Mathematically compute a simple graphics pipeline
[Notes]
If you look at OpenGL tutorials/references, or any 3D vector math for rendering, you will find tons of stuff. Google homogeneous transform matrices or homogeneous coordinates.
I'm not entirely sure what it is that you are trying to achieve; however, I think you are attempting to make the surface of one plane (a screen) line up at a size relative to another plane. To calculate this ratio you should look into Gaussian surfaces, and a lot of trig. Hope this helps.
I do not think you have enough information to perform the calculation!
You can think of your camera as a pinhole camera. It consists of an image plane, and a point through which all light striking the image plane comes. Usually, the image plane is rectangular, and the point through which all incoming light comes is on the normal of the image plane starting from the center of the image plane.
With these restrictions you need the following:
- center position of the camera
- two vectors (perpendicular to each other) defining the size and attitude of the image plane
- distance of the point from the camera plane (this could be called the focal distance, even though it strictly speaking is not the same thing)
(There are really several ways to express these quantities.)
If you only have the position and size of the image plane in pixels and the camera rotation, you are missing the scale factor. In the real world this is equivalent to knowing where you hold a camera and where you point it, but not knowing the focal length (zoom setting).
There is a lot of literature available, this one popped up first with a search:
http://www.cse.psu.edu/~rcollins/CSE486/lecture12_6pp.pdf
Maybe that helps you to find the correct search terms.
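To see concretely why the missing scale factor matters, here is a tiny pinhole-projection sketch in Python (the function name and the f, cx, cy values are illustrative, not from the question):

```python
def pinhole_project(point, f, cx, cy):
    """Project a camera-space point (X, Y, Z) through a pinhole camera with
    focal length f (in pixels); (cx, cy) is the principal point, i.e. the
    image centre."""
    X, Y, Z = point
    return (cx + f * X / Z, cy + f * Y / Z)
```

The same 3D point projects to different pixels under different focal lengths, so without f (or an equivalent quantity such as the field of view) the pixel position of a known world point cannot be computed.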
Duplicate of: How can I tell if a point is on a quadratic Bézier curve within a given tolerance?
I'm doing some computer graphics programming. I need to draw a Bézier curve and then determine whether a point is on this curve. I've defined 4 points (2 endpoints, 2 control points) and plotted a Bézier curve using DrawBeziers. So how can I determine if a point is on the drawn curve?
Should I get all the points on the Bézier curve and check whether the point is among them?
Should I get the equation of the curve and check whether the point satisfies the equation?
Should I use DrawBeziers method to draw it?
How can I implement this feature?
I'm looking forward to everyone's answers, it would be better if the answers could be explained in detail. Thanks in advance.
I would assume that the goal is to check if the user clicked on the curve or not. The mathematical curve will be infinitely thin, so what you really are after is to compute the distance to the curve. The link provides the math. Once you have the distance, just check if it is lower than the line thickness.
A potentially more accurate way would be to draw the curve, and any other graphics, to an internal bitmap, using a unique color for each object and no anti-aliasing. That gives you an easy way to look up the clicked object, while supporting overlapping shapes in a natural way. Also consider using a wider line thickness when drawing, to make it easier to select objects. But it will be somewhat costly to redraw the internal bitmap.
Another alternative would be to convert your spline into a polyline and check the distance to each line segment. This is probably the least accurate, but it should be very simple to implement if you already have types for line segments.
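The polyline approach might look like this in Python (the sample count and tolerance are illustrative; in C# the same structure carries over directly):

```python
import math

def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at parameter t (Bernstein form)."""
    u = 1.0 - t
    bx = u**3 * p0[0] + 3*u*u*t * p1[0] + 3*u*t*t * p2[0] + t**3 * p3[0]
    by = u**3 * p0[1] + 3*u*u*t * p1[1] + 3*u*t*t * p2[1] + t**3 * p3[1]
    return bx, by

def point_segment_dist(p, a, b):
    """Distance from point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx*dx + dy*dy
    if seg_len2 == 0:
        return math.hypot(px - ax, py - ay)
    # clamp the projection of p onto the segment to [0, 1]
    t = max(0.0, min(1.0, ((px - ax)*dx + (py - ay)*dy) / seg_len2))
    return math.hypot(px - (ax + t*dx), py - (ay + t*dy))

def is_on_bezier(p, p0, p1, p2, p3, tolerance, samples=100):
    """True if p lies within `tolerance` of the curve, approximated
    by a polyline of `samples` segments."""
    pts = [bezier_point(p0, p1, p2, p3, i / samples) for i in range(samples + 1)]
    return any(point_segment_dist(p, pts[i], pts[i + 1]) <= tolerance
               for i in range(samples))
```

Setting `tolerance` to half the drawn line thickness makes a click on the visible stroke register as a hit.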
Given a completely black-and-white picture (so the only colors are black or white, no shades of gray), with a lot of circles of different sizes on it (in black), what is a fast way to find the center coordinates and sizes of the circles and store them as dictionary entries? By fast I mean that if I were to call this circle-finding function 10 or 20 times per second, it wouldn't lag too much. Also, I did some research and found out that I can find the center or radius of a circle from three points on it; can that help?
The first thing that comes to mind is to scan the image pixel by pixel, looking for black pixels. When you find one, start flood filling. Once the flood fill finishes, measure the left, right, top, and bottom extents, then derive the center point by dividing by 2. This assumes that none of the circles overlap. I'm sure there are more optimized methods that avoid flooding the entire circle.
optimization #1:
Instead of flood filling, when you find the first black pixel, look for adjacent pixels along all 8 neighbors that have at least 1 white neighbor. Continue following those pixels until you hit the first pixel again. That should give you the outline, from there you can derive the width, height, and center point the same way as in my original answer.
optimization #2: If the circles have a minimum size you don't have to scan pixel by pixel. You can scan horizontal and vertical lines spaced by the minimum size looking for black pixels.
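A sketch of the basic flood-fill approach in Python (assuming the image is a 2D list of 0 = white / 1 = black pixels and the circles do not overlap):

```python
from collections import deque

def find_circles(img):
    """Return a dict mapping (cx, cy) centre -> approximate diameter
    for each black blob, assuming non-overlapping filled circles."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    circles = {}
    for y in range(h):
        for x in range(w):
            if img[y][x] == 1 and not seen[y][x]:
                # flood fill the blob, tracking its bounding box
                q = deque([(x, y)])
                seen[y][x] = True
                minx = maxx = x
                miny = maxy = y
                while q:
                    px, py = q.popleft()
                    minx, maxx = min(minx, px), max(maxx, px)
                    miny, maxy = min(miny, py), max(maxy, py)
                    for nx, ny in ((px+1, py), (px-1, py), (px, py+1), (px, py-1)):
                        if 0 <= nx < w and 0 <= ny < h \
                           and img[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                # centre = middle of the bounding box, size = its width
                circles[((minx + maxx) / 2, (miny + maxy) / 2)] = maxx - minx + 1
    return circles
```

The breadth-first queue avoids the stack-depth problems a recursive flood fill would hit on large circles.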
I think you cannot avoid having to check all the white pixels, so the algorithm's complexity will always be O(width*height). However, you definitely don't need to check all the black pixels.
Once you find an edge of a circle you can just walk horizontally until next edge, and then vertically until another edge. With these 3 points on the edge you can build a rectangle which will have the same center as the circle. Then just walk vertically or horizontally to find the circle's radius.
You need a simple ray tracer that scans your space left to right, top to bottom, pixel by pixel. When you hit a black pixel you have some work to do. Here is an algorithm.
while current pixel is white, advance to the next one (left to right, or next row of pixels)
when you hit a black pixel, move downward in a vertical line until you hit a white pixel. That's your diameter; use it to calculate your circle size (whatever you mean by size). To mark this circle as visited, color the perimeter in, say, green (a simple edge-detection algorithm); this way, if you hit a green pixel first as you cruise along, keep cruising through the black pixels until you hit the green on the other, symmetric side of the circle. Therefore, there is no need for a complete flood-fill algorithm, and you get away with calculating the size, or skipping visited circles, at a fraction of the cost of flood fill.
Note 1. Screens are rasterized, which means the first pixel you hit is not necessarily the top one; it could be one of the pixels to the right of it. Because of antialiasing, you have to navigate a bit right and left to determine the true top pixel and shoot down from it.
Note 2. If your circles overlap, say so, and let's dump this algorithm and think of something else :-)
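A rough Python sketch of this scanline idea. It is simplified: instead of painting the perimeter green, it records each detected circle and skips black pixels that fall inside one, and it ignores the rasterization caveat from Note 1 by taking the first pixel hit as the topmost. It assumes filled, non-overlapping circles:

```python
def find_circles_scanline(img):
    """Scan top-to-bottom, left-to-right over a 2D list of 0/1 pixels.
    On the first black pixel of an unvisited circle, walk straight down to
    measure the diameter, then record the circle so later black pixels
    inside it are skipped without flood filling."""
    h, w = len(img), len(img[0])
    circles = []  # list of (cx, cy, r)
    for y in range(h):
        for x in range(w):
            if img[y][x] != 1:
                continue
            # skip pixels belonging to an already-detected circle
            if any((x - cx)**2 + (y - cy)**2 <= (r + 1)**2
                   for cx, cy, r in circles):
                continue
            # walk down until we leave the black region: that chord,
            # dropped from the topmost pixel, is the diameter
            yy = y
            while yy < h and img[yy][x] == 1:
                yy += 1
            d = yy - y
            circles.append((x, y + (d - 1) / 2, (d - 1) / 2))
    return circles
```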
Duplicate of: Polygon enclosing a set of points
I have been working on a huge map project in .NET. While working on that project, I encountered a problem. I have plenty of points with coordinates (latitude, longitude), and I need to draw a polygon that "fills" my points. My question is: how do I choose the outermost points to draw the polygon through? I attached an image file to clarify my problem -> http://postimg.org/image/fwsf0v285/
It sounds like you want a polygon that surrounds the data points. There are two algorithms available.
First is a "Convex Hull". This is a classic algorithm of computational geometry and will produce the polygon you have drawn. Imagine your data points are nails in a board: it produces a polygon resembling an elastic band that has been stretched around the outer data points. Convex hulls are fast (or should be).
Convex hulls really are common; I know I've coded them up to work on the Earth's surface (i.e. taking into account the Earth's curvature). Here's a simpler Euclidean example (from Stack Overflow) that should get you started:
Translating concave hull algorithm to c#
The alternative is an "Alpha Shape", sometimes known as a "Concave Hull". Basically it allows for concave (inward) curves in the sides of the polygon that are larger than a specified dimension ('alpha'). For example, a 'C' of data points should produce a 'C' shape, whilst a convex hull would give you something more like an 'O', as it will cut off the concave hollow.
Alpha shapes are much more involved, as they require the calculation of a Delaunay triangulation.
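For the convex-hull option, a compact implementation is Andrew's monotone chain. A Python sketch in plain Euclidean coordinates (ignoring Earth curvature, which is fine for small map extents):

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull. Takes a list of (x, y)
    tuples and returns the hull vertices in counter-clockwise order,
    in O(n log n) time (dominated by the sort)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the last point of each half (it repeats the other half's start)
    return lower[:-1] + upper[:-1]
```

Interior points (like the nail in the middle of the board) simply never survive the left-turn test, so only the outermost points end up in the polygon.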
I am working on a college project using two Kinect sensors. We are taking the X and Z coordinates from both Kinects and converting them into "real world" X and Z coordinates with an offset and some basic math. Everything works great without the zoom lens, but when the zoom lens is added the coordinate system gets distorted.
We are using this product http://www.amazon.com/Zoom-Kinect-Xbox-360/dp/B0050SYS5A
We seem to be going from a 57-degree view to a 113-degree view when switching to the zoom lens. What would be a good way to calculate this change in the coordinate system? How can we convert these distorted X and Z coordinates to the "real world" coordinates?
The sensors are placed next to each other, at a 0-degree angle, looking at the same wall with some of their fields of view overlapping. The overlap gets greater with the zoom lenses.
Thanks for any answers or ideas!
If you can take pictures via the kinect, you should be able to use a checkerboard pattern and camera calibration tool (e.g. GML calibration toolbox) to deduce the camera parameters/distortion of each lens system.
You should then be able to transform the data from each camera and its respective lens system to world coordinates.
If your measurements of the relative orientation and position of the cameras (and math) are correct the coordinates should roughly agree.
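As a back-of-the-envelope sketch of how the field of view enters the conversion (Python; the 57°/113° figures come from the question, and a proper checkerboard calibration would replace this idealized pinhole model, which ignores lens distortion):

```python
import math

def pixel_to_world_x(px, depth_z, width, fov_h_deg):
    """Convert a horizontal pixel coordinate plus a depth reading into a
    real-world X offset from the optical axis (same unit as depth_z),
    given the lens system's horizontal field of view."""
    half = math.tan(math.radians(fov_h_deg) / 2.0)
    # map the pixel to [-1, 1] across the image,
    # then scale by depth * tan(FOV/2)
    ndc = 2.0 * px / width - 1.0
    return depth_z * half * ndc
```

Swapping 57 for 113 in `fov_h_deg` shows why the old conversion breaks with the zoom lens: at the image edge the same pixel now corresponds to roughly 2.8x the lateral offset (tan(56.5°)/tan(28.5°)).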
I have two line segments with points
Line1 = (x1,y1) , ( x2,y2) --- smaller
Line2 = (x3,y3) , (x4,y4) --- bigger
How can I rotate Line1 (smaller) to make it parallel to Line2 (bigger)
using either
1) (x1,y1) as fixed point of rotation or
2) (x2,y2) as fixed point of rotation or
3) center point as fixed point of rotation
I am using C#.NET.
And Aforge.NET Library.
Thanks
All operations described below can be expressed as affine transformation matrices.
Move desired rotation center into the origin.
Compute either angle of rotation or directly the rotation matrix. See below.
Apply that rotation, as a rotation around the origin.
Apply the reverse translation to move the rotation center back to its original position.
You can multiply these three matrices to obtain a single matrix for the whole operation. You can even do so with pen and paper, and hardcode the result into your application.
As to how you compute the rotation matrix: the dot product of the two vectors spanning the lines, divided by the product of their lengths, is cos(φ), i.e. the cosine of the angle between them. The sine is ±sqrt(1-cos(φ)²). You only need these two numbers in the rotation matrix, so in terms of performance there is no need to compute any angles. Getting the sign right might be tricky, though, so in terms of easy programming you might be better off with two calls to atan2, a difference, and subsequent calls to sin and cos.
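A sketch of the whole recipe in Python, using the easier atan2 route described at the end (the function and parameter names are mine; in C# the same code works with Math.Atan2 and AForge.NET's matrix types):

```python
import math

def rotate_parallel(line1, line2, pivot):
    """Rotate line1 about `pivot` so it becomes parallel to line2.
    Lines are ((x1, y1), (x2, y2)); pivot is any point, e.g. line1's
    first endpoint, second endpoint, or midpoint."""
    (x1, y1), (x2, y2) = line1
    (x3, y3), (x4, y4) = line2
    # angle between the two direction vectors; atan2 gets the sign right
    a1 = math.atan2(y2 - y1, x2 - x1)
    a2 = math.atan2(y4 - y3, x4 - x3)
    phi = a2 - a1
    c, s = math.cos(phi), math.sin(phi)
    px, py = pivot

    def rot(p):
        # translate pivot to the origin, rotate, translate back --
        # the three affine transforms from the steps above, composed
        dx, dy = p[0] - px, p[1] - py
        return (px + dx * c - dy * s, py + dx * s + dy * c)

    return (rot(line1[0]), rot(line1[1]))
```

Passing `line1[0]`, `line1[1]`, or the midpoint of line1 as `pivot` covers the three cases listed in the question.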