Given a triangle defined by three 3D points, how do you calculate the angle between it and a given point?
class Point3D
{
    double x, y, z;
}

class Vertex
{
    Point3D P1, P2, P3;
}
Point3D LightPoint;
http://www.ianquigley.com/iq/RandomImages/vextor.png
Light point in Red. Blue points - triangle with the surface normal shown.
I need to calculate the surface normal, and the angle between that and the LightPoint. I've found a few bits and pieces, but nothing that puts it all together.
OK, here goes...
You have points A, B, and C, each of which has coordinates x, y, and z. You want the length of the normal, as Matias said, so that you can calculate the angle that a vector between your point and the origin of the normal makes with the normal itself. It may help you to realize that your image is misleading for the purposes of our calculations; the normal (blue line) should be emanating from one of the vertices of the triangle. To turn your point into a vector it has to go somewhere, and you only know the points of the vertices (while you can interpolate any point within the triangle, the whole point of vertex shading is to not have to).
Anyway, first step is to turn your Point3Ds into Vector3Ds. This is accomplished simply by taking the difference between each of the origin and destination points' coordinates. Use one point as the origin for both vectors, and the other two points as the destination of each. So if A is your origin, subtract A from B, then A from C. You now have a vector that describes the magnitude of the movement in X, Y, and Z axes to go from Point A to point B, and likewise from A to C. It is worth noting that a theoretical vector has no start point of its own; to get to point B, you have to start at A and apply the vector.
The System.Windows.Media.Media3D namespace has a Vector3D struct you can use, and handily enough, the Point3D in the same namespace has a static Subtract() method that returns a Vector3D:
Vector3D vectorAB = Point3D.Subtract(pointB, pointA);
Vector3D vectorAC = Point3D.Subtract(pointC, pointA);
Now, the normal is the cross product of the two vectors. Use the following formula:
v1 x v2 = [ y1*z2 - y2*z1 , z1*x2 - z2*x1 , x1*y2 - x2*y1 ]
This is based on matrix math you don't strictly have to know to implement it. The three terms in the matrix are the X, Y, and Z of the normal vector. Luckily enough, if you use the Media3D namespace, the Vector3D structure has a CrossProduct() method that will do this for you:
Vector3D vectorNormal = Vector3D.CrossProduct(vectorAB, vectorAC);
Now, you need a third vector, between LightPoint and A:
Vector3D vectorLight = Point3D.Subtract(pointA, LightPoint);
This is the direction that light will travel to get to PointA from your source.
Now, to find the angle between them, you compute the dot product of these two and the length of these two:
|v| = sqrt(x^2 + y^2 + z^2)
v1 * v2 = x1*x2 + y1*y2 + z1*z2
Or, if you're using Media3D, Vector3D has a Length property and a DotProduct static method:
double lengthLight = vectorLight.Length;
double lengthNormal = vectorNormal.Length;
double dotProduct = Vector3D.DotProduct(vectorNormal, vectorLight);
Finally, the formula Matias mentioned:
v1 * v2 = |v1||v2|cos(theta)
rearranging and substituting variable names:
double theta = Math.Acos(dotProduct / (lengthNormal * lengthLight));
Or, if you were smart enough to use Media3D objects, forget all the length and dot product stuff:
double theta = Vector3D.AngleBetween(vectorNormal, vectorLight);
Theta is now the angle in degrees. Multiply it by pi/180 to convert to radians if you need it that way.
The moral of the story is, use what the framework gives you unless you have a good reason to do otherwise; using the Media3D namespace, all the vector algebra goes away and you can find the answer in 5 easy-to-read lines [I edited this, adding the code I used -- Ian]:
Vector3D vectorAB = Point3D.Subtract(pointB, pointA);
Vector3D vectorAC = Point3D.Subtract(pointC, pointA);
Vector3D vectorNormal = Vector3D.CrossProduct(vectorAB, vectorAC);
Vector3D vectorLight = Point3D.Subtract(pointA, LightPoint);
double lengthLight = vectorLight.Length;
double lengthNormal = vectorNormal.Length;
double dotProduct = Vector3D.DotProduct(vectorNormal, vectorLight);
double theta = Math.Acos(dotProduct / (lengthNormal * lengthLight));
// Convert to an intensity in the 0..255 range for 'Color.FromArgb(...'
//int intensity = (int)(120 + Math.Cos(theta) * 90);
To get the normal vector, calculate the cross product between P2 - P1 and P3 - P1.
To get the angle, use the dot product between the normal and the lightPoint. Remember that dot(a, b) = |a||b| * cos(theta), so since you can calculate the length of both, you can get theta (the angle between them).
Only tangentially related, but most graphics systems also allow for surface normal interpolation, where each vertex has an associated normal. The effective normal at a given U,V then is interpolated from the vertex normals. This allows for smoother shading and much better representation of 'curvature' in lighting without having to rely on a very large number of triangles. Many systems do similar tricks with 'bump mapping' to introduce perturbations in the normal and get a sense of texture without modeling the individual small facets.
Now you can compute the angle between the interpolated normal and the light vector, as others have already described.
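As a concrete illustration of the interpolation step, here is a minimal sketch; the `double[]` representation and the method name are our own choices, not part of any framework. It blends three vertex normals with barycentric weights and renormalizes:

```csharp
using System;

static class NormalInterp
{
    // Blend three unit vertex normals with barycentric weights (1-u-v, u, v)
    // and renormalize, since the blend is generally not unit length.
    public static double[] InterpolateNormal(
        double[] n1, double[] n2, double[] n3, double u, double v)
    {
        double w = 1 - u - v;
        var n = new double[3];
        for (int i = 0; i < 3; i++)
            n[i] = w * n1[i] + u * n2[i] + v * n3[i];

        double len = Math.Sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        for (int i = 0; i < 3; i++)
            n[i] /= len;
        return n;
    }
}
```

At (u, v) = (0, 0) this returns the first vertex normal; in between, the shading direction varies smoothly across the face.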
One other point of interest is to consider whether you have an ambient light source. If you can pretend that the point light is infinitely far away (effectively true for a source like direct sunlight on a small surface like the Earth), you can save all the trouble of calculating vector differences and simply assume the incoming angle of the light is constant. Now you have a single constant light vector for your dot product.
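With that directional-light shortcut, the per-triangle work reduces to a single dot product against a precomputed unit vector. A minimal sketch, with helper names of our own invention:

```csharp
using System;

static class DirectionalLight
{
    // Constant unit vector pointing toward the light (the "sun").
    // With a directional light this never changes per triangle,
    // so no per-triangle vector differences are needed.
    public static readonly double[] LightDir = { 0, 0, 1 };

    // cos(theta) between a unit surface normal and the constant light direction.
    public static double CosIncidence(double[] unitNormal)
    {
        return unitNormal[0] * LightDir[0]
             + unitNormal[1] * LightDir[1]
             + unitNormal[2] * LightDir[2];
    }
}
```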
Related
I have a vector with 3 components (X, Y, Z) and I want to find a vector orthogonal to it. Since there are infinitely many vectors orthogonal to any given vector, I just need one chosen at random.
I've tried solving an equation based on the dot product formula, since the dot product between two orthogonal vectors is always 0, and I managed to write a bit of code that works only when the given vector is axis-aligned, probably because the randomized components of the vector are X and Y. I really can't get my head around this.
I wrote my code on the Unity3D engine in order to be able to easily visualize it:
Vector3 GenerateOrthogonal(Vector3 normal)
{
    float x = Random.Range(1f, -1f);
    float y = Random.Range(1f, -1f);
    float total = normal.x * x + normal.y * y;
    float z = -total / -normal.z;
    return new Vector3(x, y, z).normalized;
}
There are a few methods for doing this. I'll provide two. The first is a one-liner that uses Quaternions to generate a random vector and then rotate it into place:
Vector3 RandomTangent(Vector3 vector)
{
    return Quaternion.FromToRotation(Vector3.forward, vector)
         * (Quaternion.AngleAxis(Random.Range(0f, 360f), Vector3.forward) * Vector3.right);
}
The second is longer, more mathematically rigorous, and less platform dependent:
Vector3 RandomTangent(Vector3 vector)
{
    var normal = vector.normalized;
    var tangent = Vector3.Cross(normal, new Vector3(-normal.z, normal.x, normal.y));
    var bitangent = Vector3.Cross(normal, tangent);
    var angle = Random.Range(-Mathf.PI, Mathf.PI);
    return tangent * Mathf.Sin(angle) + bitangent * Mathf.Cos(angle);
}
Here are some notes on their differences:
Both of these functions generate a random perpendicular vector (or "tangent") with a uniform distribution.
You can measure the accuracy of these functions by getting the angle between the input and the output. While most of the time it will be exactly 90, there will sometimes be very minor deviations, owing mainly to floating point rounding errors.
While neither of these functions generates large errors, the second generates them far less frequently.
Initial experimentation suggests the performance of these functions is close enough that the faster function may vary depending on platform. The first one was actually faster on my machine for standard Windows builds, which caught me off guard.
If you're prepared to assume that the input to the second function is a normalized vector, you can remove the explicit normalization of the input and get a performance increase. If you do this and then pass it a non-normalized vector, you'll still get a perpendicular vector as a result, but its length and distribution will no longer be reliably uniform.
In the degenerate case of passing a zero vector, the first function will generate random vectors on the XY plane, while the second function will propagate the error and return a zero vector itself.
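Since the second function uses only elementary vector operations, it ports away from Unity easily. Here is a framework-free sketch of the same idea; the helper names, the `double[]` representation, and the use of `System.Random` are our own choices, and the output is normalized here (the Unity version's output length depends on the input):

```csharp
using System;

static class Tangents
{
    static readonly Random Rng = new Random();

    static double[] Cross(double[] a, double[] b) => new[]
    {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0]
    };

    static double[] Normalize(double[] v)
    {
        double len = Math.Sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new[] { v[0] / len, v[1] / len, v[2] / len };
    }

    // Random unit vector perpendicular to 'vector': build a tangent/bitangent
    // basis, then pick a random direction within that plane.
    public static double[] RandomTangent(double[] vector)
    {
        var n = Normalize(vector);
        // Component shuffle with a sign flip; like the Unity version, this is
        // degenerate for inputs proportional to (1, -1, 1).
        var other = new[] { -n[2], n[0], n[1] };
        var tangent = Normalize(Cross(n, other));
        var bitangent = Cross(n, tangent);
        double angle = (Rng.NextDouble() * 2 - 1) * Math.PI;
        var t = new double[3];
        for (int i = 0; i < 3; i++)
            t[i] = tangent[i] * Math.Sin(angle) + bitangent[i] * Math.Cos(angle);
        return t;
    }
}
```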
I have an application where I want to scale, rotate, and translate some 3D points.
I'm used to seeing 3x3 rotation matrices, and storing translation in a separate array of values. But the .Net Matrix3D structure ( https://msdn.microsoft.com/en-us/library/System.Windows.Media.Media3D.Matrix3D%28v=vs.110%29.aspx) is 4x4 and has a row of "offsets" - OffsetX, OffsetY, OffsetZ, which are apparently used for translation. But how, exactly, are they intended to be applied?
Say I have a Vector3D with X, Y, Z values, say 72, 24, 4. And say my Matrix3D has
 .707    0     -.707    0
 0       1      0       0
 .707    0      .707    0
 100     100    0       1
i.e., so the OffsetX and OffsetY values are 100.
Is there any method or operator for Matrix3D that will apply this as a translation to my points? Transform() doesn't seem to. If my code has...
Vector3D v = new Vector3D(72, 24, 0);
Vector3D vectorResult = new Vector3D();
vectorResult = MyMatrix.Transform(v);
vectorResult has 8.484, 24, -8.484, and it has the same values if the offsets are 0.
Obviously I can manually apply the translation individually for each axis, but I thought since it's part of the structure there might be some method or operator where you give it a point and it applies the entire matrix including translation. Is there?
A 4x4 matrix represents a transform in 3D space using homogeneous coordinates. In this representation, a w component is added to the vector. This component differs based on what a vector should represent. Transforming a vector with the matrix is simple multiplication:
transformed = vector * matrix
(where both vectors are row-vectors).
The w component is only considered by the matrix's last row, which is where the translation is stored. So if you want to transform points, this component needs to be 1. If you want to transform directions, it needs to be 0 (because direction vectors do not change when translated).
This difference is expressed with two different structures in WPF. The Vector3D represents a direction (with w component 0) and the Point3D represents a point (with w component 1). So if you make v a Point3D, everything should work as you expect.
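To see the w-component difference concretely, here is a small self-contained sketch of a row-vector-times-4x4 multiply (our own helper, not the Media3D API) with the translation in the last row, as in Matrix3D's layout. A point (w = 1) picks up the translation; a direction (w = 0) does not:

```csharp
using System;

static class Homogeneous
{
    // Multiply a row vector (x, y, z, w) by a 4x4 matrix stored row-major,
    // with the translation in the last row.
    public static double[] Transform(double[] v, double[,] m)
    {
        var r = new double[4];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                r[col] += v[row] * m[row, col];
        return r;
    }
}
```

With an identity matrix whose offsets are (100, 100, 0), the point (72, 24, 0, 1) lands on (172, 124, 0), while the direction (72, 24, 0, 0) comes back unchanged.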
I have a 3d point, defined by [x0, y0, z0].
This point belongs to a plane, defined by [a, b, c, d].
normal = [a, b, c], and ax + by + cz + d = 0
How can I convert or map the 3D point to a pair of (u, v) coordinates?
This must be something really simple, but I can't figure it out.
First of all, you need to compute your u and v vectors. u and v shall be orthogonal to the normal of your plane, and orthogonal to each other. There is no unique way to define them, but a convenient and fast way may be something like this:
n = [a, b, c]
u = normalize([b, -a, 0]) // Assuming a and b are not both zero; otherwise use c, e.g. normalize([c, 0, -a]).
v = cross(n, u) // If n was normalized, v is already normalized. Otherwise normalize it.
Now a simple dot product will do:
u_coord = dot(u,[x0 y0 z0])
v_coord = dot(v,[x0 y0 z0])
Notice that this assumes that the origin of the u-v coordinates is the world origin (0,0,0).
This will work even if your vector [x0 y0 z0] does not lie exactly on the plane. If that is the case, it will just project it to the plane.
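The steps above can be sketched in a few lines of C#; the class and helper names here are our own:

```csharp
using System;

static class PlaneUV
{
    static double Dot(double[] a, double[] b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

    static double[] Cross(double[] a, double[] b) => new[]
    {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0]
    };

    static double[] Normalize(double[] v)
    {
        double len = Math.Sqrt(Dot(v, v));
        return new[] { v[0] / len, v[1] / len, v[2] / len };
    }

    // Returns the (u, v) coordinates of point p for the plane with normal (a, b, c),
    // with the u-v origin at the world origin. Assumes a and b are not both zero.
    public static (double U, double V) Project(double[] p, double a, double b, double c)
    {
        var n = Normalize(new[] { a, b, c });
        var u = Normalize(new[] { b, -a, 0.0 });
        var v = Cross(n, u); // already unit length, since n and u are orthonormal
        return (Dot(u, p), Dot(v, p));
    }
}
```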
Assuming you want to find the coordinates of any point in the plane, in terms of coordinates (u,v)...
If the point [x0,y0,z0] lies in the plane, then we know that
dot([a,b,c],[x0,y0,z0]) = -d
Where dot is the dot product between two vectors. This is simply rewriting the plane equation.
The trick is to find two vectors that span the planar subspace. To do this, choose a random 3-element vector; call it V0. I'll call the planar normal vector
N = [a,b,c]
Next, use the cross product of the normal vector N with V0.
V1 = cross(N,V0)
This vector will be orthogonal to the normal vector, unless we were extremely unlucky and N and V0 were collinear. In that case, simply choose another random vector V0. We can tell if the two vectors were collinear, because then V1 will be the vector [0 0 0].
So, if V1 is not the zero vector, then divide each element by the norm of V1. The norm of a vector is simply the square root of the sum of the squares of the elements.
V1 = V1/norm(V1)
Next, we choose a second vector V2 that is orthogonal to both N and V1. Again, a vector cross product does this trivially. Normalize that vector to have unit length also. (Since we know now that V1 is a vector with unit norm, we could just have divided by norm(N).)
V2 = cross(N,V1)
V2 = V2/norm(V2)
ANY point in the plane can now be described trivially as a function of (u,v), as:
[x0,y0,z0] + u*V1 + v*V2
For example, when (u,v) = (0,0), clearly we get [x0,y0,z0] back, so we can think of that point as the "origin" in (u,v) coordinates.
Likewise, we can do things like recover u and v from any point [x,y,z] that is known to lie in the plane, or we can find the normal projection for a point that is not in the plane, projected into that plane.
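Recovering u and v for a point P known to lie in the plane is just two dot products against the orthonormal basis built above, since P = P0 + u*V1 + v*V2. A sketch with our own helper names:

```csharp
using System;

static class PlaneCoords
{
    static double Dot(double[] a, double[] b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

    // Given the plane "origin" p0 and the orthonormal in-plane basis (v1, v2):
    // p = p0 + u*v1 + v*v2  =>  u = dot(p - p0, v1), v = dot(p - p0, v2).
    public static (double U, double V) Recover(double[] p, double[] p0, double[] v1, double[] v2)
    {
        var d = new[] { p[0] - p0[0], p[1] - p0[1], p[2] - p0[2] };
        return (Dot(d, v1), Dot(d, v2));
    }
}
```

For a point not in the plane, the same two dot products give the coordinates of its normal projection into the plane.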
I have been trying to get this problem resolved for a week and have yet to come to a solution. What I have is 2 points in a 2D space; what I need to work out is the rotation of one around the other. With luck the attached diagram will help: what I need to be able to calculate is the rotational value of b around a.
I have found lots of stuff that points to finding the dot product etc but I am still searching for that golden solution :o(
Thanks!
Vector2 difference = pointB - pointA;
double rotationInRadians = Math.Atan2(difference.Y, difference.X);
See http://msdn.microsoft.com/en-us/library/system.math.atan2.aspx for reference.
A guess:
1.) Find the slope m of the line A, B.
2.) Convert slope to angle theta = arctan(m)
3.) Project the angle to a quadrant in a cartesian coordinate system centered at point A to get the normalized angle
I'll assume that due east (along the X axis, to the right) is zero radians, and that +x points to the right and +y points down.
The bearing of B with respect to A is
angle = Atan2(A_y - B_y, B_x - A_x)
Note that Atan2 takes the y and x components as two separate arguments rather than their quotient; that is how it resolves the proper quadrant (in .NET, Math.Atan2).
I'm currently trying to wrap my head around WPF, and I'm at the stage of converting between coordinate spaces with Matrix3D structures.
After realising WPF has differences between Point3D and Vector3D (that's two hours I'll never get back...) I can get a translation matrix set up. I'm now trying to introduce a rotation matrix, but it seems to be giving inaccurate results. Here is my code for my world coordinate transformation...
private Point3D toWorldCoords(int x, int y)
{
    Point3D inM = new Point3D(x, y, 0);

    //setup matrix for screen to world
    Screen2World = Matrix3D.Identity;
    Screen2World.Translate(new Vector3D(-200, -200, 0));
    Screen2World.Rotate(new Quaternion(new Vector3D(0, 0, 1), 90));

    //do the multiplication
    Point3D outM = Point3D.Multiply(inM, Screen2World);

    //return the transformed point
    return new Point3D(outM.X, outM.Y, m_ZVal);
}
The translation appears to be working fine; the rotation by 90 degrees, however, seems to introduce floating point inaccuracies. The Offset row of the matrix is off by a tiny amount (0.000001 either way), which is producing aliasing in the renders. Is there something I'm missing here, or do I just need to manually round the matrix?
Cheers
Even with double precision mathematics there will be rounding errors with matrix multiplication.
You are performing 4 multiplications and then summing the results for each element in the new matrix.
It might be better to set your Screen2World matrix up as the translation matrix to start with rather than multiplying an identity matrix by your translation and then by the rotation. That's two matrix multiplications rather than one and (more than) twice the rounding error.