Use of offsets for translation in Matrix3D - C#

I have an application where I want to scale, rotate, and translate some 3D points.
I'm used to seeing 3x3 rotation matrices, and storing translation in a separate array of values. But the .Net Matrix3D structure ( https://msdn.microsoft.com/en-us/library/System.Windows.Media.Media3D.Matrix3D%28v=vs.110%29.aspx) is 4x4 and has a row of "offsets" - OffsetX, OffsetY, OffsetZ, which are apparently used for translation. But how, exactly, are they intended to be applied?
Say I have a Vector3D with X, Y, Z values, say 72, 24, 4. And say my Matrix3D has
.707 0 -.707 0
0 1 0 0
.707 0 .707 0
100 100 0 1
i.e., so the OffsetX and OffsetY values are 100.
Is there any method or operator for Matrix3D that will apply this as a translation to my points? Transform() doesn't seem to. If my code has...
Vector3D v = new Vector3D(72, 24, 0);
Vector3D vectorResult = new Vector3D();
vectorResult = MyMatrix.Transform(v);
vectorResult has 8.484, 24, -8.484, and it has the same values if the offsets are 0.
Obviously I can manually apply the translation individually for each axis, but I thought since it's part of the structure there might be some method or operator where you give it a point and it applies the entire matrix including translation. Is there?

A 4x4 matrix represents a transform in 3D space using homogeneous coordinates. In this representation, a w component is added to the vector. This component differs based on what a vector should represent. Transforming a vector with the matrix is simple multiplication:
transformed = vector * matrix
(where both vectors are row-vectors).
The w component is only affected by the matrix's last row, the part where the translation is stored. So if you want to transform points, this component needs to be 1. If you want to transform directions, this component needs to be 0 (because direction vectors do not change if you translate them).
This difference is expressed with two different structures in WPF. The Vector3D represents a direction (with w component 0) and the Point3D represents a point (with w component 1). So if you make v a Point3D, everything should work as you expect.
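To make the w-component behavior concrete, here is a minimal sketch of the row-vector-times-matrix math in plain C# (not the WPF types), using the matrix from the question; the helper is hypothetical and only illustrates why w = 1 picks up the offset row while w = 0 ignores it:

```csharp
using System;

class HomogeneousDemo
{
    // Multiplies a row vector (x, y, z, w) by a 4x4 row-major matrix,
    // returning the transformed (x, y, z).
    public static double[] Transform(double x, double y, double z, double w, double[,] m)
    {
        return new[]
        {
            x * m[0, 0] + y * m[1, 0] + z * m[2, 0] + w * m[3, 0],
            x * m[0, 1] + y * m[1, 1] + z * m[2, 1] + w * m[3, 1],
            x * m[0, 2] + y * m[1, 2] + z * m[2, 2] + w * m[3, 2],
        };
    }

    static void Main()
    {
        // The matrix from the question: a rotation about Y plus offsets (100, 100, 0).
        double[,] m =
        {
            { 0.707, 0,   -0.707, 0 },
            { 0,     1,    0,     0 },
            { 0.707, 0,    0.707, 0 },
            { 100,   100,  0,     1 },
        };

        // w = 1: a point -- the offset row is applied.
        double[] p = Transform(72, 24, 0, 1, m);
        // w = 0: a direction -- the offset row is ignored.
        double[] d = Transform(72, 24, 0, 0, m);

        Console.WriteLine($"point:     ({p[0]}, {p[1]}, {p[2]})");
        Console.WriteLine($"direction: ({d[0]}, {d[1]}, {d[2]})");
    }
}
```

The two results differ by exactly the offsets (100, 100, 0), which is what Point3D vs Vector3D encodes for you in WPF.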


How do I calculate a 2D Vector from a dot product or a signed angle?

I have the following scenario:
I am working on a top-down 2-dimensional (XZ plane) game and I need to calculate the difference between the character's movement direction (moveDir) and look direction (lookDir).
I would like to calculate a 2D Vector (xy) that holds the following information:
The X value should range from -1 (character facing backwards) to 1 (facing forwards)
The Y value should also range from -1 (character facing left) to 1 (facing right).
To calculate the X value I can use the dot product of moveDir and lookDir.
However, I don't understand how to correctly calculate the Y value. I assume that I have to use the signed angle between moveDir and lookDir, as the signed angle returns a value of -90 if the character is facing left and 90 if it's facing right.
I could probably even use the signed angle between the vectors to calculate both the X and Y values, since a signed angle of 0 has the same meaning as a dot product of 1 (and likewise a signed angle of 180 corresponds to a dot product of -1).
How do I calculate the missing part?
I figured out the following solution (pseudocode):
y = 1 - absoluteValue(dotProduct(lookDir, moveDir))
if (signedAngle(lookDir, moveDir) <= 0)
    y *= -1
Assuming, as you said, that both moveDir and lookDir are normalized vectors, I think you can simply use Vector2.Dot for the first one:
var x = Vector2.Dot(moveDir, lookDir);
this will return a value between 1 (lookDir == moveDir => looking "forward") and -1 (lookDir == -moveDir => looking "back"); a value of 0 means the two vectors are perpendicular.
Then for "left" and "right" you can cheat a little bit and basically do the same but using Vector3.Cross (note that even though you only want 2D vectors Unity is a 3D engine after all) in order to first obtain a vector that supposedly is "right" like e.g.
var y = Vector2.Dot(lookDir, Vector3.Cross(moveDir, Vector3.forward).normalized);
to explain this a bit
we use Vector3.forward, which is basically (0, 0, 1) (a vector pointing into the screen, away from you), since you are dealing with Vector2 vectors, which live on the XY plane. In order to get a vector perpendicular to a Vector2, we first need a vector that is perpendicular to all of them on the third axis
Vector3.Cross uses the left-hand rule, so
Vector3.Cross(moveDir, Vector3.forward)
returns a vector pointing 90° to the "right" of moveDir
Vector3 and Vector2 are implicitly convertible to each other; the conversion simply drops the Z component or fills it in with 0
Or, as Fredy pointed out, you can also simply use
var y = Mathf.Sin(Vector2.SignedAngle(lookDir, moveDir) * Mathf.Deg2Rad);
which does basically the same thing and is probably less complicated ;)
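Outside Unity the same decomposition can be sketched in plain C#: the dot product gives the forward component, and the 2D "cross product" (the z component of the 3D cross) plays the role of the sine of the signed angle. The sign of y depends on handedness conventions, so treat the sign here as an assumption rather than matching Unity exactly:

```csharp
using System;

class FacingDemo
{
    // Returns the (forward, side) components of moveDir relative to lookDir.
    // Both inputs are assumed to be normalized.
    public static (double X, double Y) Facing(
        double lookX, double lookY, double moveX, double moveY)
    {
        double x = lookX * moveX + lookY * moveY; // dot: 1 = forward, -1 = back
        double y = lookX * moveY - lookY * moveX; // 2D cross: sine of the signed angle
        return (x, y);
    }

    static void Main()
    {
        var forward = Facing(0, 1, 0, 1); // moving along lookDir -> (1, 0)
        var side = Facing(0, 1, 1, 0);    // moving perpendicular -> (0, -1) in this convention
        Console.WriteLine(forward);
        Console.WriteLine(side);
    }
}
```

Note that x² + y² = 1 for normalized inputs, which is exactly the cos/sin pair the Mathf.Sin variant computes.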

WorldToScreen function C#

Can anyone please explain the world to screen function (c#) to me?
I want to understand what it does, how to use it and what I need to use it.
So far I know, that it converts 3d vectors to 2d vectors, that then can be shown on the monitor.
Now I want to know the steps needed to show a 3d point as a drawing on the screen.
Thx in advance :)
I am going to assume that your desired screen space runs from top to bottom so that {0, 0} is top left and {screenWidth, screenHeight} is bottom right. I am also going to assume that normalized device coordinates are in the range of [{-1, -1, 0}, {1, 1, 1}].
The first thing you want to do is convert your 3D world coordinates into what are known as Normalized Device Coordinates (NDC for short). This is an intermediate step which simplifies the mapping to screen space.
Normalized Device Coordinates are used to measure relative positions on the monitor. These are not pixel coordinates. The center of the screen is always 0, the top is always 1 and the bottom is always -1. The left side is always -1 and the right side is always 1.
Knowing the view and projection matrices, the transformation is as follows:
Vector3 pointInNdc = Vector3.Transform(pointInWorld, view * projection);
To understand how this transformation works, you could take a look at the source code of Paradox Game Engine for example.
Once the coordinate is normalized, you want to map it into Screen Space:
To find the X coordinate, move it from the range [-1, 1] to [0, 2] by adding 1, then divide by 2 to map it to [0, 1]. Multiply that by the screen width and you have the X coordinate in screen space.
float screenX = (pointInNdc.X + 1) / 2f * screenWidth;
Finding the Y coordinate is similar, but instead of adding 1, you subtract pointInNdc.Y from 1 to flip the coordinate (moving from Y running bottom-to-top to Y running top-to-bottom):
float screenY = (1 - pointInNdc.Y) / 2f * screenHeight;
There you have both X and Y coordinates of Screen Space. There are many great articles out there (such as this one) which describe this exact same process and also how to go back from ScreenToWorld.
Here is the full listing of WorldToScreen:
Vector2 WorldToScreen(Vector3 pointInWorld, Matrix view, Matrix projection, int screenWidth, int screenHeight)
{
Vector3 pointInNdc = Vector3.Transform(pointInWorld, view * projection);
float screenX = (pointInNdc.X + 1) / 2f * screenWidth;
float screenY = (1 - pointInNdc.Y) / 2f * screenHeight;
return new Vector2(screenX, screenY);
}
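The NDC-to-screen half of the listing can be isolated and checked on its own in plain C# (the view/projection step is framework-specific and omitted here; the helper name is made up for illustration):

```csharp
using System;

class NdcDemo
{
    // Maps normalized device coordinates ([-1, 1] on each axis, Y up)
    // to pixel coordinates ([0, w] x [0, h], Y down).
    public static (float X, float Y) NdcToScreen(
        float ndcX, float ndcY, int screenWidth, int screenHeight)
    {
        float screenX = (ndcX + 1) / 2f * screenWidth;
        float screenY = (1 - ndcY) / 2f * screenHeight;
        return (screenX, screenY);
    }

    static void Main()
    {
        Console.WriteLine(NdcToScreen(0, 0, 800, 600));  // center: (400, 300)
        Console.WriteLine(NdcToScreen(-1, 1, 800, 600)); // top-left: (0, 0)
        Console.WriteLine(NdcToScreen(1, -1, 800, 600)); // bottom-right: (800, 600)
    }
}
```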

What's the difference between Vector2.Transform() and Vector2.TransformNormal() in XNA (C#)

I am developing a game in XNA (C#), and I am wondering how to use the two versions of transformation. My understanding of these functions is:
(Assume that vectors are originated from Matrix.Identity)
Vector2 resultVec = Vector2.Transform(sourceVector, destinationMatrix); is used for Position vectors transformation.
Vector2 resultVec = Vector2.TransformNormal(sourceVector, destinationMatrix); used for transforming Velocity vectors.
Is that true? If anyone knows the explanation in detail, please help!
The simple answer is:
Vector2.Transform() applies the entire matrix to the vector, while
Vector2.TransformNormal() applies only the scale and rotational parts of the matrix to the vector.
In both cases, the function multiplies the source vector by the matrix.
Transform() is used for vectors representing positions in 2D or 3D space. It treats the vector as having an implicit w component of 1, so the translation stored in the matrix is applied.
In math: retVec = srcVec x M (with w = 1).
TransformNormal() is used for direction and tangent vectors. It treats w as 0, so the translation part of the matrix is ignored.
In math: retVec = srcVec x M (with w = 0).
(Strictly speaking, transforming true surface normals under non-uniform scale requires the transpose of the inverse matrix; TransformNormal does not do that, it simply ignores translation.)
To transform a vector from one coordinate system to another (say from M1 to M2): transform by M1, then by the inverse of M2:
Vector2 retVec = Vector2.Transform(vectorInSource, M1);
Matrix invertDestMatrix = Matrix.Invert(M2);
Vector2 outVec = Vector2.Transform(retVec, invertDestMatrix);
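The same pair of methods exists in System.Numerics, which makes the difference easy to verify outside XNA (a sketch; the behavior is assumed to mirror XNA's):

```csharp
using System;
using System.Numerics;

class TransformDemo
{
    static void Main()
    {
        // A pure translation: the rotation/scale part is the identity.
        Matrix3x2 m = Matrix3x2.CreateTranslation(10, 20);
        Vector2 v = new Vector2(1, 2);

        // Transform treats v as a position (w = 1): the translation applies.
        Vector2 asPoint = Vector2.Transform(v, m);     // (11, 22)

        // TransformNormal treats v as a direction (w = 0): translation is ignored.
        Vector2 asDir = Vector2.TransformNormal(v, m); // (1, 2), unchanged

        Console.WriteLine(asPoint);
        Console.WriteLine(asDir);
    }
}
```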

Vertex shading and calculating lighting vector effect

Given a triangle defined by three 3D points, how do you calculate the angle between it and a given point?
class Point3D
{
double x, y, z;
}
class Vertex
{
Point3D P1, P2, P3;
}
Point3D LightPoint;
http://www.ianquigley.com/iq/RandomImages/vextor.png
Light point in Red. Blue points - triangle with the surface normal shown.
I need to calculate the surface normal, and the angle between that and the LightPoint. I've found a few bits a pieces but nothing which puts it together.
OK, here goes...
You have points A, B, and C, each of which has coordinates x, y, and z. You want the length of the normal, as Matias said, so that you can calculate the angle that a vector between your point and the origin of the normal makes with the normal itself. It may help you to realize that your image is misleading for the purposes of our calculations; the normal (blue line) should be emanating from one of the vertices of the triangle. To turn your point into a vector it has to go somewhere, and you only know the points of the vertices (while you can interpolate any point within the triangle, the whole point of vertex shading is to not have to).
Anyway, first step is to turn your Point3Ds into Vector3Ds. This is accomplished simply by taking the difference between each of the origin and destination points' coordinates. Use one point as the origin for both vectors, and the other two points as the destination of each. So if A is your origin, subtract A from B, then A from C. You now have a vector that describes the magnitude of the movement in X, Y, and Z axes to go from Point A to point B, and likewise from A to C. It is worth noting that a theoretical vector has no start point of its own; to get to point B, you have to start at A and apply the vector.
The System.Windows.Media.Media3D namespace has a Vector3D struct you can use, and handily enough, the Point3D in the same namespace has a Subtract() function that returns a Vector3D:
Vector3D vectorAB = Point3D.Subtract(pointB, pointA);
Vector3D vectorAC = Point3D.Subtract(pointC, pointA);
Now, the normal is the cross product of the two vectors. Use the following formula:
v1 x v2 = [ y1*z2 - y2*z1 , z1*x2 - z2*x1 , x1*y2 - x2*y1 ]
This is based on matrix math you don't strictly have to know to implement it. The three terms in the matrix are the X, Y, and Z of the normal vector. Luckily enough, if you use the Media3D namespace, the Vector3D structure has a CrossProduct() method that will do this for you:
Vector3D vectorNormal = Vector3D.CrossProduct(vectorAB, vectorAC);
Now, you need a third vector, between LightPoint and A:
Vector3D vectorLight = Point3D.Subtract(pointA, LightPoint);
This is the direction that light will travel to get to PointA from your source.
Now, to find the angle between them, you compute the dot product of these two and the length of these two:
|v| = sqrt(x^2 + y^2 + z^2)
v1 * v2 = x1*x2 + y1*y2 + z1*z2
Or, if you're using Media3D, Vector3D has a Length property and a DotProduct static method:
double lengthLight = vectorLight.Length;
double lengthNormal = vectorNormal.Length;
double dotProduct = Vector3D.DotProduct(vectorNormal, vectorLight);
Finally, the formula Matias mentioned:
v1 * v2 = |v1||v2|cos(theta)
rearranging and substituting variable names:
double theta = arccos(dotProduct/(lengthNormal*lengthLight))
Or, if you were smart enough to use Media3D objects, forget all the length and dot product stuff:
double theta = Vector3D.AngleBetween(vectorNormal, vectorLight);
Theta is now the angle in degrees. Multiply it by pi/180 to get radians if you need it that way.
The moral of the story is, use what the framework gives you unless you have a good reason to do otherwise; using the Media3D namespace, all the vector algebra goes away and you can find the answer in 5 easy-to-read lines [I edited this, adding the code I used -- Ian]:
Vector3D vectorAB = Point3D.Subtract(pointB, pointA);
Vector3D vectorAC = Point3D.Subtract(pointC, pointA);
Vector3D vectorNormal = Vector3D.CrossProduct(vectorAB, vectorAC);
Vector3D vectorLight = Point3D.Subtract(pointA, LightPoint);
double lengthLight = vectorLight.Length;
double lengthNormal = vectorNormal.Length;
double dotProduct = Vector3D.DotProduct(vectorNormal, vectorLight);
double theta = Math.Acos(dotProduct / (lengthNormal * lengthLight));
// Convert to intensity between 0..255 range for 'Color.FromArgb(...
//int intensity = (120 + (Math.Cos(theta) * 90));
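If you do not have the Media3D types available, the whole pipeline fits in a few lines of plain C# (a sketch; the triangle and light position below are made up purely for illustration):

```csharp
using System;

class LightingDemo
{
    // Angle (radians) between the triangle ABC's surface normal and the
    // direction from the light source to vertex A.
    public static double AngleToLight(
        double[] a, double[] b, double[] c, double[] light)
    {
        // Edge vectors from A.
        double[] ab = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        double[] ac = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };

        // Surface normal = AB x AC.
        double[] n =
        {
            ab[1] * ac[2] - ab[2] * ac[1],
            ab[2] * ac[0] - ab[0] * ac[2],
            ab[0] * ac[1] - ab[1] * ac[0],
        };

        // Light direction: from the light source to A.
        double[] l = { a[0] - light[0], a[1] - light[1], a[2] - light[2] };

        double dot = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
        double lenN = Math.Sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        double lenL = Math.Sqrt(l[0] * l[0] + l[1] * l[1] + l[2] * l[2]);
        return Math.Acos(dot / (lenN * lenL)); // radians
    }

    static void Main()
    {
        // Triangle in the XY plane, light directly above: the light vector
        // points opposite the normal, so the angle is pi.
        double theta = AngleToLight(
            new double[] { 0, 0, 0 },
            new double[] { 1, 0, 0 },
            new double[] { 0, 1, 0 },
            new double[] { 0, 0, 5 });
        Console.WriteLine(theta); // ~3.14159
    }
}
```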
To get the normal vector, calculate the cross product between P2 - P1 and P3 - P1.
To get the angle, use the dot product between the normal and the lightPoint. Remember that dot(a, b) = |a||b| * cos(theta), so since you can calculate the length of both, you can get theta (the angle between them).
Only tangentially related, but most graphics systems also allow for surface normal interpolation, where each vertex has an associated normal. The effective normal at a given U,V then is interpolated from the vertex normals. This allows for smoother shading and much better representation of 'curvature' in lighting without having to rely on a very large number of triangles. Many systems do similar tricks with 'bump mapping' to introduce perturbations in the normal and get a sense of texture without modeling the individual small facets.
Now you can compute the angle between the interpolated normal and the light vector, as others have already described.
One other point of interest is to consider whether you have an ambient light source. If you can pretend that the point light is infinitely far away (effectively true for a source like direct sunlight on a small surface like the Earth), you can save all the trouble of calculating vector differences and simply assume the incoming angle of the light is constant. Now you have a single constant light vector for your dot product.

Is the WPF Matrix3D.Rotate() function inaccurate?

I'm currently trying to wrap my head around WPF, and I'm at the stage of converting between coordinate spaces with Matrix3D structures.
After realising WPF has differences between Point3D and Vector3D (that's two hours I'll never get back...) I can get a translation matrix set up. I'm now trying to introduce a rotation matrix, but it seems to be giving inaccurate results. Here is my code for my world coordinate transformation...
private Point3D toWorldCoords(int x, int y)
{
Point3D inM = new Point3D(x, y, 0);
//setup matrix for screen to world
Screen2World = Matrix3D.Identity;
Screen2World.Translate(new Vector3D(-200, -200, 0));
Screen2World.Rotate(new Quaternion(new Vector3D(0, 0, 1), 90));
//do the multiplication
Point3D outM = Point3D.Multiply(inM, Screen2World);
//return the transformed point
return new Point3D(outM.X, outM.Y, m_ZVal);
}
The translation appears to be working fine; the rotation by 90 degrees, however, seems to introduce floating point inaccuracies. The offset row of the matrix seems to be off by a small amount (0.000001 either way), which is producing aliasing in the renders. Is there something I'm missing here, or do I just need to manually round the matrix?
Cheers
Even with double precision mathematics there will be rounding errors with matrix multiplication.
You are performing 4 multiplications and then summing the results for each element in the new matrix.
It might be better to set your Screen2World matrix up as the translation matrix to start with, rather than multiplying an identity matrix by your translation and then by the rotation. That's two matrix multiplications instead of one, and (more than) twice the rounding error.
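A sketch of that suggestion using System.Numerics (which uses the same row-vector convention as WPF): create the translation matrix directly and append the rotation with a single multiplication. The numbers mirror the question; any residual error is down in the float noise rather than accumulated over two products:

```csharp
using System;
using System.Numerics;

class ComposeDemo
{
    static void Main()
    {
        // Start from the translation matrix directly...
        Matrix4x4 t = Matrix4x4.CreateTranslation(-200, -200, 0);

        // ...and append the rotation with one multiplication (no Identity * T * R).
        Matrix4x4 screen2World = t * Matrix4x4.CreateRotationZ((float)(Math.PI / 2));

        // Row-vector convention: v' = v * M; translation then rotation.
        Vector3 p = Vector3.Transform(new Vector3(300, 200, 0), screen2World);
        Console.WriteLine(p); // translate -> (100, 0, 0), rotate 90 deg -> ~(0, 100, 0)
    }
}
```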
