Rotating and making two lines parallel [closed] - c#

I have two line segments with points:
Line1 = (x1,y1), (x2,y2) --- smaller
Line2 = (x3,y3), (x4,y4) --- bigger
How can I rotate Line1 (smaller) to make it parallel to Line2 (bigger), using either
1) (x1,y1) as the fixed point of rotation, or
2) (x2,y2) as the fixed point of rotation, or
3) the center point as the fixed point of rotation?
I am using C#.NET and the AForge.NET library.
Thanks

All operations described below can be expressed as affine transformation matrices.
1) Move the desired rotation center to the origin.
2) Compute either the angle of rotation or the rotation matrix directly. See below.
3) Apply that rotation, as a rotation around the origin.
4) Apply the reverse translation to move the rotation center back to its original position.
You can multiply these three matrices to obtain a single matrix for the whole operation. You can even do so with pen and paper, and hardcode the result into your application.
As to how you compute the rotation matrix: the dot product of the two vectors spanning the lines, divided by the product of their lengths, is cos(φ), i.e. the cosine of the angle between them. The sine is ±sqrt(1 - cos(φ)²). You only need these two numbers in the rotation matrix, so there is no need to compute the angle itself in terms of performance. Getting the sign right can be tricky, though, so for ease of programming you may be better off with two calls to atan2, taking the difference, and then calling sin and cos.
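For illustration, here is a minimal C# sketch of the atan2 approach (using System.Drawing.PointF for brevity; the LineRotation and MakeParallel names are illustrative, and AForge.NET users can substitute their own point type):

using System;
using System.Drawing;

static class LineRotation
{
    // Rotate segment (p1, p2) about the given pivot so that it becomes
    // parallel to segment (p3, p4).
    public static (PointF, PointF) MakeParallel(
        PointF p1, PointF p2, PointF p3, PointF p4, PointF pivot)
    {
        // Angle of each segment's direction, then the difference between them.
        double a1 = Math.Atan2(p2.Y - p1.Y, p2.X - p1.X);
        double a2 = Math.Atan2(p4.Y - p3.Y, p4.X - p3.X);
        double delta = a2 - a1;
        double cos = Math.Cos(delta), sin = Math.Sin(delta);

        PointF Rotate(PointF p)
        {
            // Translate the pivot to the origin, rotate, translate back.
            double x = p.X - pivot.X, y = p.Y - pivot.Y;
            return new PointF(
                (float)(pivot.X + x * cos - y * sin),
                (float)(pivot.Y + x * sin + y * cos));
        }

        return (Rotate(p1), Rotate(p2));
    }
}

Pass (x1,y1), (x2,y2), or the midpoint new PointF((x1 + x2) / 2, (y1 + y2) / 2) as the pivot to get the three variants from the question.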

Related

What happens when you apply force to the edge of an object? [closed]

I'm trying to build a 2D physics engine (rigid body dynamics simulation) in C#. So far I have simulated boxes (squares) of different sizes which are fixed in position but can rotate around their centroids (centres of mass) when a force is applied to them. This is the Box.ApplyForce() method that is called when a force is applied:
public void ApplyForce(double x, double y, Vector force)
{
    // angular acceleration = torque (angular force) / moment of inertia
    Force tempForce = new Force(x, y, force.X, force.Y);
    forceList.Add(tempForce);

    // 2D cross product r × F: torque about the centroid
    Vector displacement = new Vector(x, y);
    double torque = displacement.X * tempForce.yForce - displacement.Y * tempForce.xForce;

    // moment of inertia of a square of side `size` about its centre: m(s² + s²)/12
    double momentOfInertia = (mass * (size * size * 2)) / 12;
    angularAcceleration += torque / momentOfInertia;
}
This seems to be working correctly so far, but I now need to include translational acceleration in my simulation, so my question is: what happens when you apply force to the edge (or any non-centre-of-mass point) of the object? Will the translational acceleration be the same as if it were applied to the centre of mass?
Are these simulated squares on a flat surface when simulating translational motion? Can they rotate when the force is applied, and also translate?
If the box is on a flat surface and cannot rotate, then when you are simulating translational acceleration it does not matter where the force is applied. What matters is the angle at which the force is applied, or rather the component of the force vector in the direction of the translational motion. The component of the vector perpendicular to the surface will affect the magnitude of the normal force, which in turn affects the frictional force if that is something you are taking into consideration.
If the box is not on a surface and just floating in space when the force is applied the answer gets a bit more complicated. Maybe clarify what you are simulating a bit more.
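For the free-floating case, here is a minimal sketch of how ApplyForce might also accumulate translational acceleration (the acceleration field is an assumption about the Box class): for a rigid body the linear part is F/m no matter where the force is applied; the application point only changes the torque.

public void ApplyForce(double x, double y, Vector force)
{
    forceList.Add(new Force(x, y, force.X, force.Y));

    // Translational part: a += F / m, independent of the application point.
    acceleration = new Vector(acceleration.X + force.X / mass,
                              acceleration.Y + force.Y / mass);

    // Rotational part: torque = r × F about the centre of mass.
    double torque = x * force.Y - y * force.X;
    double momentOfInertia = (mass * (size * size * 2)) / 12;
    angularAcceleration += torque / momentOfInertia;
}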

How to find rectangle's corner coordinate with few given information [closed]

I know the coordinates of the red points and the center point of the rectangle, and I also know the width and height of the rectangle (2D world).
So how do I find the coordinate of point X (the bottom-left corner)?
Let me summarize your problem to make sure we are on the same page. You have an arbitrary rectangle with known center, width, and height. And you know two arbitrary points on the left edge and on the bottom edge.
If you had the midpoints of the two edges, the problem would be easy:
BottomLeft = CenterBottom + (CenterLeft - Center)
So the question is how to calculate these points.
I will explain this for one edge (in this case the bottom edge). The same holds for the left edge. Let's call the center of the rectangle C, the midpoint of the edge M, and the arbitrary point on the edge E.
You can calculate the distance between C and E. If E were the midpoint, this distance would be exactly half the rectangle's height. But it is not. What we can do with this information is calculate the angle MCE:
cos MCE = h / (2 * |C - E|)
So all we have to do to find M is rotate the direction vector by this angle and re-scale:
M = C + rotate(E - C, MCE) * h / (2 * |C - E|)
There are two solutions for that. One with a positive angle and one with a negative angle.
So just calculate the two possible midpoints for the two edges. If you have these, you need to check which of the four pairs are valid. To do so, simply check if the angle between C - MLeft and C - MBottom is 90° (i.e. their dot product is close to zero).
Once you have a valid pair, you can calculate the corner as described above. Note that there may be more than one valid solution.
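Here is a minimal C# sketch of this procedure (the Vec2 helper, the method names, and the perpendicularity tolerance are all illustrative assumptions):

using System;

struct Vec2
{
    public double X, Y;
    public Vec2(double x, double y) { X = x; Y = y; }
    public static Vec2 operator +(Vec2 a, Vec2 b) => new Vec2(a.X + b.X, a.Y + b.Y);
    public static Vec2 operator -(Vec2 a, Vec2 b) => new Vec2(a.X - b.X, a.Y - b.Y);
    public static Vec2 operator *(Vec2 a, double s) => new Vec2(a.X * s, a.Y * s);
    public double Length => Math.Sqrt(X * X + Y * Y);
    public static double Dot(Vec2 a, Vec2 b) => a.X * b.X + a.Y * b.Y;
    public Vec2 Rotated(double angle)
    {
        double c = Math.Cos(angle), s = Math.Sin(angle);
        return new Vec2(X * c - Y * s, X * s + Y * c);
    }
}

static class RectangleCorner
{
    // Candidate midpoints M of an edge whose distance from the center C is
    // halfExtent, given an arbitrary point e on that edge: rotate the vector
    // C->e by the angle MCE in both directions and rescale to halfExtent.
    static Vec2[] EdgeMidpoints(Vec2 center, Vec2 e, double halfExtent)
    {
        Vec2 d = e - center;
        double angle = Math.Acos(halfExtent / d.Length);
        double scale = halfExtent / d.Length;
        return new[] { center + d.Rotated(+angle) * scale,
                       center + d.Rotated(-angle) * scale };
    }

    // Returns the first valid bottom-left corner (as noted above, there may
    // be more than one valid solution).
    public static Vec2 BottomLeft(Vec2 center, double width, double height,
                                  Vec2 onBottomEdge, Vec2 onLeftEdge)
    {
        var bottoms = EdgeMidpoints(center, onBottomEdge, height / 2);
        var lefts = EdgeMidpoints(center, onLeftEdge, width / 2);
        double eps = 1e-6 * (width / 2) * (height / 2);
        foreach (var mb in bottoms)
            foreach (var ml in lefts)
                if (Math.Abs(Vec2.Dot(mb - center, ml - center)) < eps)
                    return mb + (ml - center); // CenterBottom + (CenterLeft - Center)
        throw new ArgumentException("No consistent rectangle found.");
    }
}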

Mapping a 3D point to 2D context [closed]

I have already read some articles and even questions on Stack Overflow, but I didn't find what I wanted. I may not have looked carefully, so point me to the correct articles/questions if you know any.
Anyway, what I want to do is clear. I know the camera's position (x',y',z') and I have the camera's rotation matrix (Matrix3). I also have the camera's aspect ratio (W/H) and output size (W,H). My point is at (x,y,z) and I want code (or an algorithm, so I can write the code) to calculate its position on screen (the screen's size is the same as the camera output's size) as (x'',y'').
Do you know any useful articles? It is important that the article or algorithm supports the camera's rotation matrix.
Thank you all.
Well, you need to specify the projection type (orthogonal, perspective...?) first.
1) Transform any point (x,y,z) to camera space
Subtract the camera position, then apply the inverse of the camera direction (coordinate system) matrix. The Z axis of the camera is usually the viewing direction. If you use a 4x4 homogeneous matrix then the subtraction is already in it, so do not do it twice!
2) Apply projection
An orthogonal projection is just a scale matrix. Perspective projections are more complex, so google for them. This is where the aspect ratio is applied, and also the FOV (field of view) angles.
3) Clip to screen and Z-buffer space
Now you have x,y,z in projected camera space. To actually obtain screen coordinates with perspective you have to divide by the z or w coordinate (depending on the math and projection used), so for your 3x3 matrices:
xscr = x / z;
yscr = y / z;
That is why z-near for projections must be > 0 (otherwise it could cause division by zero).
4) Render or process pixel (x,y)
For more info see: Mathematically compute a simple graphics pipeline
[Notes]
If you look at OpenGL tutorials/references or any 3D vector math for rendering you will find tons of stuff. Google homogeneous transform matrices or homogeneous coordinates.
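To make the perspective case concrete, here is a minimal C# sketch, assuming a pure 3x3 camera-to-world rotation matrix stored row-major and a symmetric pinhole model (the method name and the FOV parametrization are illustrative):

using System;

static class Projector
{
    // Projects world point (px,py,pz) to screen coordinates, or returns null
    // if the point is behind the camera (clipped).
    public static (double xScr, double yScr)? Project(
        double[,] rot, double camX, double camY, double camZ,
        double px, double py, double pz,
        double fovYDegrees, int screenW, int screenH)
    {
        // 1) Translate into camera-centered coordinates.
        double dx = px - camX, dy = py - camY, dz = pz - camZ;

        // 2) Rotate into camera space; for a pure rotation the inverse is the transpose.
        double cx = rot[0, 0] * dx + rot[1, 0] * dy + rot[2, 0] * dz;
        double cy = rot[0, 1] * dx + rot[1, 1] * dy + rot[2, 1] * dz;
        double cz = rot[0, 2] * dx + rot[1, 2] * dy + rot[2, 2] * dz;

        if (cz <= 0) return null; // behind the camera: this is the z-near issue above

        // 3) Perspective divide; focal length derived from the vertical field of view.
        double f = (screenH / 2.0) / Math.Tan(fovYDegrees * Math.PI / 360.0);
        double xScr = screenW / 2.0 + f * cx / cz;
        double yScr = screenH / 2.0 - f * cy / cz; // screen Y usually grows downward

        return (xScr, yScr);
    }
}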
I'm not entirely sure what you are trying to achieve; however, I think you are attempting to make the surface of one plane (a screen) line up to be a relative size to another plane. To calculate this ratio you should look into Gaussian surfaces. And a lot of trig. Hope this helps.
I do not think you have enough information to perform the calculation!
You can think of your camera as a pinhole camera. It consists of an image plane, and a point through which all light striking the image plane comes. Usually, the image plane is rectangular, and the point through which all incoming light comes is on the normal of the image plane starting from the center of the image plane.
With these restrictions you need the following:
- center position of the camera
- two vectors (perpendicular to each other) defining the size and attitude of the image plane
- distance of the point from the camera plane (this could be called the focal distance, even though it strictly speaking is not the same thing)
(There are really several ways to express these quantities.)
If you only have the position and size of the image plane in pixels and the camera rotation, you are missing the scale factor. In the real world this is equivalent to knowing where you hold a camera and where you point it, but not knowing the focal length (zoom setting).
There is a lot of literature available, this one popped up first with a search:
http://www.cse.psu.edu/~rcollins/CSE486/lecture12_6pp.pdf
Maybe that helps you to find the correct search terms.

Get xyz coordinates from starting point, quaternion and moved distance [closed]

In my case I have a starting coordinate (x,y,z), an orientation as a quaternion, and I know the distance moved.
Basically I would like to know the (x',y',z') after applying the rotation and the forward movement. So I am trying to move a point in 3D using a quaternion. I guess it should be a simple calculation, but for some reason I cannot find the solution that easily.
Previously I converted the quaternion to Euler angles and used them to calculate (x',y',z'). Unfortunately, because of gimbal lock, that solution is no longer suitable for me.
I have found a few examples, for example this one in Python and another in C#, but I still did not get the formula from them, as they discuss the rotation rather than the movement itself; the C# example just changes the middle point of the cube and then redraws it with the rotation.
Why reinvent the wheel? This kind of operation is best handled via matrices, and C# even has support for it:
// PresentationCore.dll
using System.Windows.Media.Media3D;

Matrix3D matrix = Matrix3D.Identity;
matrix.Translate(new Vector3D(x, y, z));
matrix.Rotate(quaternion);
var newPoint = matrix.Transform(point);
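For the forward movement itself, here is a minimal sketch in the same namespace (the choice of (0,0,1) as the local forward axis is an assumption; adjust it to your own convention):

using System.Windows.Media.Media3D;

static Point3D MoveForward(Point3D start, Quaternion orientation, double distance)
{
    // Rotate the local forward axis into world space...
    var rotation = new RotateTransform3D(new QuaternionRotation3D(orientation));
    Vector3D forward = rotation.Transform(new Vector3D(0, 0, 1));

    // ...then advance along it by the travelled distance.
    return start + forward * distance;
}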

Calculate Kinect X,Y,Z position when a zoom lens is applied [closed]

I am working on a college project using two Kinect sensors. We are taking the X and Z coordinates from both Kinects and converting them into "real world" X and Z coordinates with an offset and some basic math. Everything works great without the zoom lens, but when the zoom lens is added the coordinate system gets distorted.
We are using this product: http://www.amazon.com/Zoom-Kinect-Xbox-360/dp/B0050SYS5A
We seem to be going from a 57-degree field of view to a 113-degree field of view when switching to the zoom lens. What would be a good way of calculating this change in the coordinate system? How can we convert these distorted X and Z coordinates to the "real world" coordinates?
The sensors are placed next to each other, at a 0-degree angle, looking at the same wall with some of their fields of view overlapping. The overlap gets larger with the zoom lenses.
Thanks for any answers or ideas!
If you can take pictures via the Kinect, you should be able to use a checkerboard pattern and a camera calibration tool (e.g. the GML calibration toolbox) to deduce the camera parameters/distortion of each lens system.
You should then be able to transform the data from each camera and its respective lens system to world coordinates.
If your measurements of the relative orientation and position of the cameras (and your math) are correct, the coordinates should roughly agree.
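As a rough illustration of why the lens changes the mapping (ignoring the lens distortion that the checkerboard calibration would model): the world offset of a pixel scales with tan(FOV/2), so going from 57 to 113 degrees roughly triples that scale. The function below is a simplified pinhole sketch, not calibrated Kinect math:

using System;

static class KinectGeometry
{
    // Convert a pixel column plus a depth reading into a world X offset
    // (same units as depth) relative to the camera axis.
    static double PixelToWorldX(int pixelX, int imageWidth,
                                double depth, double horizontalFovDegrees)
    {
        double halfWidth = imageWidth / 2.0;
        double tanHalfFov = Math.Tan(horizontalFovDegrees * Math.PI / 360.0);
        return depth * tanHalfFov * (pixelX - halfWidth) / halfWidth;
    }
}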
