I'm currently trying to wrap my head around WPF, and I'm at the stage of converting between coordinate spaces with Matrix3D structures.
After realising WPF treats Point3D and Vector3D differently (that's two hours I'll never get back...), I can get a translation matrix set up. I'm now trying to introduce a rotation matrix, but it seems to be giving inaccurate results. Here is my code for my world coordinate transformation...
private Point3D toWorldCoords(int x, int y)
{
    Point3D inM = new Point3D(x, y, 0);
    //setup matrix for screen to world
    Screen2World = Matrix3D.Identity;
    Screen2World.Translate(new Vector3D(-200, -200, 0));
    Screen2World.Rotate(new Quaternion(new Vector3D(0, 0, 1), 90));
    //do the multiplication
    Point3D outM = Point3D.Multiply(inM, Screen2World);
    //return the transformed point
    return new Point3D(outM.X, outM.Y, m_ZVal);
}
The translation appears to be working fine; the rotation by 90 degrees, however, seems to introduce floating-point inaccuracies. The offset row of the matrix is off by a tiny amount (about 0.000001 either way), which is producing aliasing in the renders. Is there something I'm missing here, or do I just need to round the matrix manually?
Cheers
Even with double precision mathematics there will be rounding errors with matrix multiplication.
You are performing 4 multiplications and then summing the results for each element in the new matrix.
It might be better to set your Screen2World matrix up as the translation matrix to start with rather than multiplying an identity matrix by your translation and then by the rotation. That's two matrix multiplications rather than one and (more than) twice the rounding error.
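For example, a minimal sketch of that idea (reusing the -200/-200 offset and the 90-degree Z rotation from the question): build the translation directly into the matrix, so that only the rotation multiplication remains.

// Sketch: start from the translation matrix itself instead of Identity,
// saving one matrix multiplication and its rounding error.
Screen2World = new Matrix3D(
    1,    0,    0, 0,
    0,    1,    0, 0,
    0,    0,    1, 0,
    -200, -200, 0, 1);                                    // offset row = translation
Screen2World.Rotate(new Quaternion(new Vector3D(0, 0, 1), 90));
Point3D outM = Screen2World.Transform(new Point3D(x, y, 0));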
Related
I have a Vector with 3 components (X, Y, Z) and I want to find a Vector orthogonal to the given one. Since there are infinitely many Vectors orthogonal to any given Vector, I just need one chosen at random.
I've tried using an equation with the Dot Product formula, since the dot product between two orthogonal Vectors is always 0, and I managed to write a bit of code that works only when the given Vector is axis-aligned, but that is probably because the randomized components of the vectors are X and Y. I really can't get my head around this.
I wrote my code in the Unity3D engine in order to be able to easily visualize it:
Vector3 GenerateOrthogonal(Vector3 normal)
{
    float x = Random.Range(1f, -1f);
    float y = Random.Range(1f, -1f);
    float total = normal.x * x + normal.y * y;
    float z = -total / -normal.z;
    return new Vector3(x, y, z).normalized;
}
There are a few methods for doing this. I'll provide two. The first is a one-liner that uses Quaternions to generate a random vector and then rotate it into place:
Vector3 RandomTangent(Vector3 vector) {
    return Quaternion.FromToRotation(Vector3.forward, vector)
         * (Quaternion.AngleAxis(Random.Range(0f, 360f), Vector3.forward) * Vector3.right);
}
The second is longer, more mathematically rigorous, and less platform dependent:
Vector3 RandomTangent(Vector3 vector) {
    var normal = vector.normalized;
    var tangent = Vector3.Cross(normal, new Vector3(-normal.z, normal.x, normal.y));
    var bitangent = Vector3.Cross(normal, tangent);
    var angle = Random.Range(-Mathf.PI, Mathf.PI);
    return tangent * Mathf.Sin(angle) + bitangent * Mathf.Cos(angle);
}
Here are some notes on their differences:
Both of these functions generate a random perpendicular vector (or "tangent") with a uniform distribution.
You can measure the accuracy of these functions by getting the angle between the input and the output. While most of the time it will be exactly 90, there will sometimes be very minor deviations, owing mainly to floating point rounding errors.
While neither of these functions generates large errors, the second function generates them far less frequently.
Initial experimentation suggests the performance of these functions is close enough that the faster function may vary depending on platform. The first one was actually faster on my machine for standard Windows builds, which caught me off guard.
If you're prepared to assume that the input to the second function is a normalized vector, you can remove the explicit normalization of the input and get a performance increase. If you do this and then pass it a non-normalized vector, you'll still get a perpendicular vector as a result, but its length and distribution will no longer be reliably uniform.
In the degenerate case of passing a zero vector, the first function will generate random vectors on the XY plane, while the second function will propagate the error and return a zero vector itself.
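Following up on the accuracy point above, a quick way to sanity-check either function is a throwaway snippet like this (assuming it runs somewhere inside a MonoBehaviour):

// Sketch: the angle between the input and the returned tangent should be ~90 degrees,
// give or take floating-point rounding.
Vector3 input = Random.onUnitSphere * Random.Range(0.1f, 10f);
Vector3 tangent = RandomTangent(input);
Debug.Log(Vector3.Angle(input, tangent));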
I have an application where I want to scale, rotate, and translate some 3D points.
I'm used to seeing 3x3 rotation matrices, and storing translation in a separate array of values. But the .Net Matrix3D structure ( https://msdn.microsoft.com/en-us/library/System.Windows.Media.Media3D.Matrix3D%28v=vs.110%29.aspx) is 4x4 and has a row of "offsets" - OffsetX, OffsetY, OffsetZ, which are apparently used for translation. But how, exactly, are they intended to be applied?
Say I have a Vector3D with X, Y, Z values, say 72, 24, 4. And say my Matrix3D has
.707 0 -.707 0
0 1 0 0
.707 0 .707 0
100 100 0 1
i.e., so the OffsetX and OffsetY values are 100.
Is there any method or operator for Matrix3D that will apply this as a translation to my points? Transform() doesn't seem to. If my code has . . .
Vector3D v = new Vector3D(72, 24, 0);
Vector3D vectorResult = new Vector3D();
vectorResult = MyMatrix.Transform(v);
vectorResult has 8.484, 24, -8.484, and it has the same values if the offsets are 0.
Obviously I can manually apply the translation individually for each axis, but I thought since it's part of the structure there might be some method or operator where you give it a point and it applies the entire matrix including translation. Is there?
A 4x4 matrix represents a transform in 3D space using homogeneous coordinates. In this representation, a w component is added to the vector. This component differs based on what a vector should represent. Transforming a vector with the matrix is simple multiplication:
transformed = vector * matrix
(where both vectors are row-vectors).
The w component is only considered by the matrix's last row - the part where the translation is stored. So if you want to transform points, this component needs to be 1. If you want to transform directions, this component needs to be 0 (because direction vectors do not change when you translate them).
This difference is expressed with two different structures in WPF. The Vector3D represents a direction (with w component 0) and the Point3D represents a point (with w component 1). So if you make v a Point3D, everything should work as you expect.
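A small sketch of that difference, using the matrix from the question (the interesting part is only which overload picks up the offsets):

Matrix3D m = new Matrix3D(
    0.707, 0, -0.707, 0,
    0,     1,  0,     0,
    0.707, 0,  0.707, 0,
    100, 100,  0,     1);

Vector3D asDirection = m.Transform(new Vector3D(72, 24, 0)); // w = 0: offsets ignored
Point3D asPosition = m.Transform(new Point3D(72, 24, 0));    // w = 1: offsets applied
// asPosition differs from asDirection by exactly (100, 100, 0)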
My understanding of the WPF transformation classes is that the Matrix class represents a 3x3 row-major matrix of 2D homogeneous coordinates where the final column is always (0, 0, 1). I understand that the reason for this design is to allow translations to be represented as matrix multiplications, in the same way as rotate, scale and skew transformations, rather than separately as vectors, which would have to be the case if 2x2 matrices were used.
My expectation therefore is that when multiplying a Vector by a Matrix that contains a translation, the resultant vector should be translated. This does not seem to be happening for me when using the WPF matrix classes, so what am I doing wrong?
Matrix m = new Matrix();
m.Translate(12, 34);
Vector v = new Vector(100, 200);
Vector r = Vector.Multiply(v, m);
// Confirm that the matrix was translated correctly
Debug.WriteLine(m);
// Confirm that the vector has been translated
Debug.WriteLine(r);
Results:
1,0,0,1,12,34 // Matrix contains translation as expected
100,200 // Vector is unchanged - not expected
I see now. The distinction between a Vector and a Point is important. I should be using Points and Point.Multiply instead, and then the results are what I was expecting. A vector is the difference between two points, which is unaffected by a translation, whereas a point is a specific location which is affected.
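A minimal sketch of that fix, reusing the values from the snippet above:

Matrix m = new Matrix();
m.Translate(12, 34);

Point p = new Point(100, 200);
Point moved = Point.Multiply(p, m);      // (112, 234): translation applied
Vector v = new Vector(100, 200);
Vector same = Vector.Multiply(v, m);     // (100, 200): translation ignored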
Given a triangle defined by three 3D points, how do you calculate the angle between it and a given point?
class Point3D
{
    double x, y, z;
}

class Vertex
{
    Point3D P1, P2, P3;
}
Point3D LightPoint;
http://www.ianquigley.com/iq/RandomImages/vextor.png
Light point in Red. Blue points - triangle with the surface normal shown.
I need to calculate the surface normal, and then the angle between that and the LightPoint. I've found a few bits and pieces, but nothing that puts it all together.
OK, here goes...
You have points A, B, and C, each of which has coordinates x, y, and z. You want the length of the normal, as Matias said, so that you can calculate the angle that a vector between your point and the origin of the normal makes with the normal itself. It may help you to realize that your image is misleading for the purposes of our calculations; the normal (blue line) should be emanating from one of the vertices of the triangle. To turn your point into a vector it has to go somewhere, and you only know the points of the vertices (while you can interpolate any point within the triangle, the whole point of vertex shading is to not have to).
Anyway, first step is to turn your Point3Ds into Vector3Ds. This is accomplished simply by taking the difference between each of the origin and destination points' coordinates. Use one point as the origin for both vectors, and the other two points as the destination of each. So if A is your origin, subtract A from B, then A from C. You now have a vector that describes the magnitude of the movement in X, Y, and Z axes to go from Point A to point B, and likewise from A to C. It is worth noting that a theoretical vector has no start point of its own; to get to point B, you have to start at A and apply the vector.
The System.Windows.Media.Media3D namespace has a Vector3D struct you can use, and handily enough, the Point3D struct in the same namespace has a static Subtract() method that returns a Vector3D:
Vector3D vectorAB = Point3D.Subtract(pointB, pointA);
Vector3D vectorAC = Point3D.Subtract(pointC, pointA);
Now, the normal is the cross product of the two vectors. Use the following formula:
v1 x v2 = [ y1*z2 - y2*z1 , z1*x2 - z2*x1 , x1*y2 - x2*y1 ]
This is based on matrix math you don't strictly have to know in order to implement it. The three terms in the brackets are the X, Y, and Z components of the normal vector. Luckily enough, if you use the Media3D namespace, the Vector3D structure has a CrossProduct() method that will do this for you:
Vector3D vectorNormal = Vector3D.CrossProduct(vectorAB, vectorAC);
Now, you need a third vector, between LightPoint and A:
Vector3D vectorLight = Point3D.Subtract(pointA, LightPoint);
This is the direction that light will travel to get to PointA from your source.
Now, to find the angle between them, you compute the dot product of these two and the length of these two:
|v| = sqrt(x^2 + y^2 + z^2)
v1 * v2 = x1*x2 + y1*y2 + z1*z2
Or, if you're using Media3D, Vector3D has a Length property and a DotProduct static method:
double lengthLight = vectorLight.Length;
double lengthNormal = vectorNormal.Length;
double dotProduct = Vector3D.DotProduct(vectorNormal, vectorLight);
Finally, the formula Matias mentioned:
v1 * v2 = |v1||v2|cos(theta)
rearranging and substituting variable names:
double theta = arccos(dotProduct/(lengthNormal*lengthLight))
Or, if you were smart enough to use Media3D objects, forget all the length and dot product stuff:
double theta = Vector3D.AngleBetween(vectorNormal, vectorLight);
Theta is now the angle in degrees. Multiply this by the quantity 2(pi)/360 to get radians if you need it that way.
The moral of the story is, use what the framework gives you unless you have a good reason to do otherwise; using the Media3D namespace, all the vector algebra goes away and you can find the answer in 5 easy-to-read lines [I edited this, adding the code I used -- Ian]:
Vector3D vectorAB = Point3D.Subtract(pointB, pointA);
Vector3D vectorAC = Point3D.Subtract(pointC, pointA);
Vector3D vectorNormal = Vector3D.CrossProduct(vectorAB, vectorAC);
Vector3D vectorLight = Point3D.Subtract(pointA, LightPoint);
double lengthLight = vectorLight.Length;
double lengthNormal = vectorNormal.Length;
double dotProduct = Vector3D.DotProduct(vectorNormal, vectorLight);
double theta = Math.Acos(dotProduct / (lengthNormal * lengthLight));
// Convert to intensity between 0..255 range for 'Color.FromArgb(...
//int intensity = (120 + (Math.Cos(theta) * 90));
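As a hedged follow-up to that commented-out intensity line (the 120 and 90 constants come from the comment itself; the clamp to 0..255 is an added assumption):

int intensity = (int)Math.Max(0, Math.Min(255, 120 + Math.Cos(theta) * 90));
// e.g. System.Drawing.Color.FromArgb(intensity, intensity, intensity)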
To get the normal vector, calculate the cross product between P2 - P1 and P3 - P1.
To get the angle, use the dot product between the normal and the lightPoint. Remember that dot(a, b) = |a||b| * cos(theta), so since you can calculate the length of both, you can get theta (the angle between them).
Only tangentially related, but most graphics systems also allow for surface normal interpolation, where each vertex has an associated normal. The effective normal at a given U,V then is interpolated from the vertex normals. This allows for smoother shading and much better representation of 'curvature' in lighting without having to rely on a very large number of triangles. Many systems do similar tricks with 'bump mapping' to introduce perturbations in the normal and get a sense of texture without modeling the individual small facets.
Now you can compute the angle between the interpolated normal and the light vector, as others have already described.
One other point of interest is to consider whether you can treat the light as a directional source. If you can pretend that the point light is infinitely far away (effectively true for a source like direct sunlight on a small patch of the Earth), you can save all the trouble of calculating vector differences and simply assume the incoming angle of the light is constant. Now you have a single constant light vector for your dot product.
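A tiny sketch of that shortcut, reusing vectorNormal from the code above (the direction value is just a made-up example):

Vector3D lightDirection = new Vector3D(0, -1, 0);                    // assumed constant for every triangle
double theta = Vector3D.AngleBetween(vectorNormal, lightDirection);  // still in degrees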
I have been developing (for the last 3 hours) a small project in C# to help me choose a home.
Specifically, I am putting crime statistics in an overlay on Google maps, to find a nice neighborhood.
Here is an example:
http://otac0n.com/Demos/prospects.html
Now, I manually found the Lat and Lng to match the corners of the map displayed in the example, but I have a few more maps to overlay.
My new application allows me to choose a landmark and point at the image to tie the Pixel to a LatLng. Something like:
locations.Add(new LocationPoint(37.6790f, -97.3125f, "Kellogg and I-135"));
// and later...
targetPoint.Pixel = FindPixel(mouseEvent.Location);
So, I've gathered a list of pixel/latlng combinations, and now would like to transform the image (using affine or non-affine transformations).
The goal here is to make every street line up. Given a good map, the only necessary transformation would be a rotation to line the map up north-to-south (and for now I would be happy with that). But I'm not sure where to start.
Does anybody have any experience doing image transformations in C#? How would I find the proper rotation to make the map level?
After the case of well-made maps is resolved, I would eventually like to be able to overlay hand-drawn maps. This would obviously entail heavy distortion of the final image, and may be beyond the scope of this first iteration. However, I would not like to build a system that cannot be extended to handle that in the future.
I'm not sure exactly what you want to accomplish, but if you want to fit more than three points on one map to more than three points on another, there are basically two ways you can go:
You could try to create a triangular mesh over your points, and then apply a different affine transformation within each triangle, and get a piecewise linear transformation. To get the meshing right, you'll probably need to do something like a Delaunay triangulation of the points, for which qhull should probably be your preferred option.
You can go for a higher-order transform, such as quad distortion, but it will probably be hard to find a solution that works for any number of points in a generic position. Find yourself a good finite element method book, and read the chapter(s) on higher-order isoparametric elements, either Lagrangian or serendipity ones, which will provide you with well-behaved mappings of many points to many points. Here are a couple of links (1 and 2) to set you on your way. But be aware that the math content is intensive...
In 2D space, an affine transformation is fully specified by two sets of three non-collinear 2D points. In C# you can use the following routine to compute the appropriate Matrix:
public static Matrix fit(PointF[] src, PointF[] dst) {
    Matrix m1 = new Matrix(new RectangleF(0, 0, 1, 1), src);
    m1.Invert();
    Matrix m2 = new Matrix(new RectangleF(0, 0, 1, 1), dst);
    m2.Multiply(m1);
    return m2;
}
It requires both array arguments to contain exactly three elements.
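A hypothetical usage sketch (the control-point coordinates below are made up): feed three picked pixel positions and their three target positions to fit, then warp any other pixel with the resulting matrix.

PointF[] srcPixels = { new PointF(10, 20), new PointF(400, 30), new PointF(200, 500) };
PointF[] dstPoints = { new PointF(0, 0), new PointF(390, 0), new PointF(190, 480) };
Matrix warp = fit(srcPixels, dstPoints);

PointF[] others = { new PointF(100, 100) };
warp.TransformPoints(others);   // others[0] is now expressed in the destination frame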
If you only need rotation and translation, then you can use the following routine:
public static Matrix fitOrt(PointF src1, PointF src2, PointF dst1, PointF dst2) {
    return fit(new PointF[] { src1, src2, ort(src1, src2) },
               new PointF[] { dst1, dst2, ort(dst1, dst2) });
}

public static PointF ort(PointF p, PointF q) {
    return new PointF(p.X + q.Y - p.Y, p.Y - q.X + p.X);
}
If you would like to find the best approximation between two sets of multiple points then you can start with this http://elonen.iki.fi/code/misc-notes/affine-fit/
Beautiful.
So, thanks to Jamie's direction, I have found this:
Delaunay Triangulation in .NET 2.0
http://local.wasp.uwa.edu.au/~pbourke/papers/triangulate/morten.html
At this point, the problem is pretty much reduced to lerping.
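In case it helps anyone later, here is a rough sketch of that lerping step (assuming each pixel has already been assigned to a triangle of the Delaunay mesh; the helper name and parameters are my own):

// Maps pixel p, known to lie inside the source triangle (a, b, c), to the
// corresponding position in the target triangle (ta, tb, tc) by reusing
// p's barycentric weights.
static PointF LerpInTriangle(PointF p, PointF a, PointF b, PointF c,
                             PointF ta, PointF tb, PointF tc)
{
    float det = (b.Y - c.Y) * (a.X - c.X) + (c.X - b.X) * (a.Y - c.Y);
    float w1 = ((b.Y - c.Y) * (p.X - c.X) + (c.X - b.X) * (p.Y - c.Y)) / det;
    float w2 = ((c.Y - a.Y) * (p.X - c.X) + (a.X - c.X) * (p.Y - c.Y)) / det;
    float w3 = 1f - w1 - w2;
    return new PointF(w1 * ta.X + w2 * tb.X + w3 * tc.X,
                      w1 * ta.Y + w2 * tb.Y + w3 * tc.Y);
}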