Patch Normals Algorithm - C#

I'm trying hard to reproduce a MATLAB algorithm called "patchnormal" which first calculates the normal vectors of all faces, and after that calculates the vertex normals from the face normals weighted by the angles of the faces. (See illustration below)
There doesn't seem to be a free 3D mesh library for WPF C# oriented toward this kind of mathematical use. Or is there?
So the question is: how do I compute this (red) vector for all my vertices? Can it be optimized for use in a real-time simulation?
(Illustration of the angle-weighted vertex normal, originally hosted on hostingpics.net.)

You can compute the angle between two edges as follows:
given: edge vectors E and F of a face adjacent to your vertex,
E_normalized = normalize(E)
F_normalized = normalize(F)
cross_normal = cross(E_normalized, F_normalized)
sin_theta = length( cross_normal )
cos_theta = dot(E_normalized, F_normalized)
results:
face normal = normalize(cross_normal)
face angle theta = atan2(sin_theta, cos_theta)
Then, weight the normals accordingly:
total_vector = vec(0,0,0)
for(each face adjacent to a particular vertex):
[compute normal and theta as above]
total_vector += normal * theta
return normalize(total_vector)
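For concreteness, here is a minimal C# sketch of the above, using WPF's Vector3D type; the per-face edge-vector pairs (E, F) leaving the vertex are a hypothetical input, since how you gather them depends on your mesh structure:

using System;
using System.Collections.Generic;
using System.Windows.Media.Media3D;

// Angle-weighted vertex normal. 'adjacentFaces' holds, for each face around
// the vertex, the two edge vectors E and F that leave the vertex.
static Vector3D VertexNormal(IEnumerable<(Vector3D E, Vector3D F)> adjacentFaces)
{
    var total = new Vector3D();
    foreach (var (e, f) in adjacentFaces)
    {
        Vector3D en = e, fn = f;
        en.Normalize();
        fn.Normalize();
        Vector3D crossN = Vector3D.CrossProduct(en, fn);
        double theta = Math.Atan2(crossN.Length, Vector3D.DotProduct(en, fn));
        crossN.Normalize();      // face normal
        total += crossN * theta; // weight by the face's angle at this vertex
    }
    total.Normalize();
    return total;
}

(There is no guard here for degenerate faces, where the cross product is near zero; add one for production use.)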
To optimize for real-time, I would first profile to see what was actually slowing things down. I'd guess that computing atan2() several times per vertex might turn out to be expensive, but improving on that would really require finding some substitute for angle-weighted normals.


Is this Edge of a 3D solid Concave or Convex?

Given a 3D solid model and an edge, I am testing whether that edge is concave or convex.
What is the best way to do this? I'd like to minimize the assumptions regarding the input geometry. My first pass at the problem takes the average of the vertices of the two adjacent faces to generate center points, offsets one of those points by the face normal at that point, and tests whether the offset point is closer to or farther from the opposing face than the original. This works for pairs of simple faces of approximately the same size. It fails, for example, with small faces far from the center of larger faces.
I'm doing this in Revit, but I imagine the problem is the same in Rhino, Catia, any solid modeler. From the edge I can extract the adjacent faces. I know the faces are oriented correctly such that the normals point outward. I can project 3D points to the faces, calculate normals of the faces at those points, etc.
Here's the code for the naive version:
public static Boolean AreFacesConcave(Face face_0, Face face_1, Document doc)
{
    UV uvMid_0 = VertexAverageUV(face_0); // 3D average of the vertices
    UV uvMid_1 = VertexAverageUV(face_1); // approximates the center of the face
    XYZ pt_0 = face_0.Evaluate(uvMid_0);
    XYZ pt_1 = face_1.Evaluate(uvMid_1);
    // normals at those points
    XYZ normal_0 = face_0.ComputeNormal(uvMid_0);
    XYZ normal_1 = face_1.ComputeNormal(uvMid_1);
    // third point, offset from face_1 by its normal
    XYZ pt_2 = pt_1.Add(normal_1.Normalize());
    Double d0 = pt_0.DistanceTo(pt_1);
    Double d1 = pt_0.DistanceTo(pt_2);
    return (d1 < d0);
}
If you know that the normal vectors always point outward, get two points A and B inside the two adjacent faces (either the mean of three non-collinear vertices, or your 'center points').
Then check the sign of the dot (scalar) product of the vectors AB and nA (the normal of the face containing point A).
Result = DotProduct(AB, nA)
A negative sign denotes a 'convex' edge, a positive sign a 'concave' one.
2D example: nA is the outward normal; the D edge of CDF is concave, the D edge of CDE is convex.
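In Revit-flavored C# the test itself is a few lines (ptA, ptB and normalA are hypothetical names for the two interior points and the outward normal at A):

// ptA, ptB: points inside the two adjacent faces
// normalA:  outward normal of the face containing ptA
XYZ ab = ptB - ptA;
double result = ab.DotProduct(normalA);
bool isConvex = result < 0; // negative => convex, positive => concave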

Make a sphere with equidistant vertices

I'm trying to make a spherical burst of rays for the purpose of checking collisions, but with specific interactions happening based upon what or where each ray hit. Hence why I'm using rays rather than something simpler such as OverlapSphere.
The reason I'm looking for how to make a sphere is that I can use the same math for my rays, by having them go to the vertices of where the sphere would be. But every way I can find of making a sphere has the vertices get closer together near the poles, which makes sense, as it's pretty easy to do. But as you can imagine, that's not very useful for my current project.
TL;DR:
How do I make a sphere with equidistant vertices? If it's not perfectly equidistant that's fine, it just needs to be pretty close. If so, it would be great if you could say how large the difference is, and where, if applicable.
Extra notes:
I've looked at this and this, but the math is way over my head, so what I've been looking for might've just been staring me in the face this whole time.
You could use an icosphere. As the vertices are distributed on equilateral triangles, they are very nearly equidistant (exactly equidistant for the base icosahedron).
To construct the icosphere, first you make an icosahedron and then split the faces recursively in smaller triangles as explained in this article.
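A rough Unity-style C# sketch of the first step, the 12 base icosahedron vertices (the recursive subdivision from the article is omitted):

// The 12 vertices of an icosahedron, built from the golden ratio t,
// then projected onto the unit sphere.
float t = (1f + Mathf.Sqrt(5f)) / 2f;
Vector3[] verts =
{
    new Vector3(-1,  t,  0), new Vector3( 1,  t,  0),
    new Vector3(-1, -t,  0), new Vector3( 1, -t,  0),
    new Vector3( 0, -1,  t), new Vector3( 0,  1,  t),
    new Vector3( 0, -1, -t), new Vector3( 0,  1, -t),
    new Vector3( t,  0, -1), new Vector3( t,  0,  1),
    new Vector3(-t,  0, -1), new Vector3(-t,  0,  1),
};
for (int i = 0; i < verts.Length; i++)
    verts[i] = verts[i].normalized; // push each vertex onto the unit sphere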
Are you aware that the sphere given to you by Unity is in fact designed with this exact goal in mind? That is, the entire raison d'être of Unity's built-in sphere is that the points are fairly smoothly spaced -- roughly equidistant, as you phrase it.
To bring up such a sphere in Unity, just create the built-in sphere primitive (GameObject > 3D Object > Sphere). You can then instantly get access to the verts, as you know:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vv = mesh.vertices;
int kVerts = vv.Length;
for (int i = 0; i < kVerts; ++i)
    Debug.Log(vv[i]);
Note you can easily check "which part of the sphere" they are on by (for example) checking how far they are from your "cities" (or whatever), or just checking (for example) the z values to see which hemisphere they are in, et cetera.
Furthermore...
Please note. Regarding your overall reason for wanting to do this:
but having specific interactions happen based upon what or where each ray hit
Note that it could not be easier to do this using PhysX (the game physics completely built in to Unity). Indeed, I have never, ever, looked at a collision without doing something "specific" depending on "where it hit"!
You can for example get the point where the contact was with http://docs.unity3d.com/ScriptReference/RaycastHit-point.html
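For example, a minimal sketch (here 'dir' stands in for one of your per-vertex ray directions):

RaycastHit hit;
if (Physics.Raycast(transform.position, dir, out hit))
    Debug.Log("Hit " + hit.collider.name + " at " + hit.point);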
It's worth noting it is absolutely inconceivable one could write something approaching the performance of PhysX in casual programming.
I hope this makes things easier!
slice the sphere into N circles
compute the perimeter of each circle
divide it by the same angle step that created the slices
this gives you the number of vertices
and also the angle step inside each circle
cast the rays
This is how I coded it in C++ + OpenGL:
// draw unit sphere points (r=1, center=(0,0,0)) ... your ray directions
int ia, na, ib, nb;
double x, y, z, r;
double a, b, da, db;
na = 16;                    // number of slices
da = M_PI / double(na - 1); // latitude angle step
for (a = -0.5 * M_PI, ia = 0; ia < na; ia++, a += da) // slice sphere to circles in xy planes
{
    r = cos(a); // radius of actual circle in xy plane
    z = sin(a); // height of actual circle in xy plane
    nb = ceil(2.0 * M_PI * r / da);
    db = 2.0 * M_PI / double(nb); // longitude angle step
    if ((ia == 0) || (ia == na - 1)) { nb = 1; db = 0.0; } // handle the poles
    for (b = 0.0, ib = 0; ib < nb; ib++, b += db) // cut circle into vertices
    {
        x = r * cos(b); // compute x,y of vertex
        y = r * sin(b);
        // this just draws the ray direction (x,y,z) as a line in OpenGL,
        // so you can ignore it and add your own ray cast instead
        double w = 1.2;
        glBegin(GL_LINES);
        glColor3f(1.0, 1.0, 1.0); glVertex3d(x, y, z);
        glColor3f(0.0, 0.0, 0.0); glVertex3d(w * x, w * y, w * z);
        glEnd();
    }
}
This is how it looks:
The R,G,B lines are the sphere coordinate system axes X,Y,Z.
The white-ish lines are your vertices (white) + directions (gray).
[Notes]
Do not forget to include math.h,
and replace the OpenGL stuff with your own.
If you want 4, 6, 8, 12 or 20 vertices then you can have exactly equidistant vertices from a Platonic solid, all of which fit inside a sphere. The actual coordinates of these should be easy to get. For other numbers of vertices you can use other polyhedra and scale the vertices so they lie on a sphere. If you need lots of points then a geodesic dome might be a good base. The C60 bucky-ball could be a good base with 60 points. For most of these you should be able to find 3D models from which you can extract coordinates.
I think the easiest way to control points on a sphere is by using spherical coordinates. Then you can control the position of points around the sphere with two angles (rho and phi) and the radius.
Example code for filling points uniformly around a rotating sphere (for fun):
var time = 1;     // increment this every frame to see the rotation
var count = 1000;
var radius = 1f;  // sphere radius (assumed defined in the original snippet)
for (int i = 0; i < count; i++)
{
    var rho = time + i;
    var phi = 2 * Math.PI * i / count;
    var x = (float)(radius * Math.Sin(phi) * Math.Cos(rho));
    var z = (float)(radius * Math.Sin(phi) * Math.Sin(rho));
    var y = (float)(radius * Math.Cos(phi));
    Draw(x, y, z); // your drawing code for rendering the point
}
As some answers have already suggested, use an icosahedron-based solution. The source for this is quite easy to come by (and I have written my own several times), but I find the excellent Primitives Pro plugin extremely handy under many other circumstances, and always use its sphere instead of the built-in Unity one.

Rotation matrix from normal vector, warping around the poles

I have a camera that needs to orbit locally around an object. This object has an arbitrary rotation, described by a normal vector. Imagine a spherical planet, with a camera looking down at a certain triangle on that planet.
My current implementation is to use the classic vector crossing method to generate a rotation matrix from the triangle's normal, then use that matrix as the basis for the standard orbit camera. This works fine near the equator of the planet, but once it gets near the poles, it starts blowing up, with the camera behaving increasingly erratically the closer that it gets to the very center of the pole.
I've determined that this is due to the first vector cross, as the two vectors are close to one another in that case -- I'm not sure what the technical name for the phenomenon is. If the first vector is (0, 1, 0), the craziness happens when the normal is close to (0, 1, 0) or (0, -1, 0).
I've found quite a few descriptions of this problem, but no working solutions. The closest I've come was here: http://xboxforums.create.msdn.com/forums/p/13278/13278.aspx It mentions that to handle the 'singularity', use a different vector when it is detected. I can easily determine when the camera is on planet face that will cause this to happen (as my planet sphere is generated from 6 quadtrees projected to spherical coordinates), but there is a very noticeable snap when I switch to a new vector.
Here's the current code:
Vector3 triNormal; // the current normal of the target vertex
Vector3 origin = Vector3.Forward;
Matrix orientation = Matrix.Identity;
orientation.Forward = origin;
orientation.Up = triNormal;
orientation.Right = Vector3.Normalize(Vector3.Cross(orientation.Up, orientation.Forward));
orientation.Forward = Vector3.Normalize(Vector3.Cross(orientation.Right, orientation.Up));
I've experimented with detecting when triNormal is on one of the pole faces, and setting 'origin' to something else such as Right. The camera then behaves properly once it is on the face, but is immediately snapped to a new rotation as it crosses over. This makes sense, as its reference vector has just changed, but needs to be eliminated for a smooth user experience. I tried figuring out how to offset the camera's yaw for the orbit camera to counteract the new coordinate system, but it doesn't seem to be a constant value, depending on where on the sphere the camera is currently aiming. I'm not sure how I could calculate what the difference is.
Also note that as it's in XNA and C#, I'm using a right-hand coordinate system.
I don't understand why you do this:
orientation.Forward = Vector3.Cross(orientation.Right, orientation.Up);
orientation.Forward.Normalize();
when you've already used previous orientation.Forward to get orientation.Right.
(If you are "crossing" normal vectors, I don't think you'll need to normalize them.)
Anyway, if triNormal is the current normal of the target vertex, and your camera is looking down to it, I think you should have:
orientation.Forward = -triNormal
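As a minimal sketch of that idea (a hypothetical variant of the question's own code, keeping its crossing order; 'reference' can be any vector not parallel to triNormal):

Matrix orientation = Matrix.Identity;
orientation.Forward = -triNormal; // camera looks down at the vertex
Vector3 reference = Vector3.Up;   // must not be (anti)parallel to triNormal
orientation.Right = Vector3.Normalize(Vector3.Cross(reference, orientation.Forward));
orientation.Up = Vector3.Cross(orientation.Forward, orientation.Right);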

What's the difference between Vector2.Transform() and Vector2.TransformNormal() in XNA (C#)?

I am developing a game in XNA (C#), and I am wondering how to use the two versions of transformation. My idea of how these functions work is:
(Assume that the vectors originate from Matrix.Identity.)
Vector2 resultVec = Vector2.Transform(sourceVector, destinationMatrix); is used for transforming position vectors.
Vector2 resultVec = Vector2.TransformNormal(sourceVector, destinationMatrix); is used for transforming velocity vectors.
Is that true? If anyone knows the explanation in detail, please help!
The simple answer is:
Vector2.Transform() applies the entire Matrix to the vector while
Vector2.TransformNormal() only applies the Scale and Rotational parts of the Matrix to the vector.
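A small sketch to illustrate the difference (the numbers are made up):

// A matrix with both rotation and translation:
Matrix m = Matrix.CreateRotationZ(MathHelper.PiOver2)
         * Matrix.CreateTranslation(10f, 0f, 0f);

Vector2 p = Vector2.Transform(Vector2.UnitX, m);       // rotated AND translated: ~(10, 1)
Vector2 d = Vector2.TransformNormal(Vector2.UnitX, m); // rotated only:           ~(0, 1)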
With both functions, the source vector is multiplied by the given matrix.
Transform() is used for vectors that represent positions in 2D or 3D space: the full matrix is applied, including its translation part (the vector is treated as a point, w = 1). In math: retVec = srcVec x M.
TransformNormal() is used for direction and tangent vectors: only the scale/rotation part of the matrix is applied, and the translation part is ignored (w = 0). Note that for true surface normals under non-uniform scale, the correct matrix to use is the transpose of the inverse, T(M^-1).
To transform a vector from one coordinate frame to another (say M1 to M2), transform by M1, then transform by the inverse of M2:
Vector2 retVec = Vector2.Transform(vectorInSource, M1);
Matrix invertDestMatrix = Matrix.Invert(M2);
Vector2 outVec = Vector2.Transform(retVec, invertDestMatrix);

How can I calculate individual point masses?

I am working on a C# 2d soft body physics engine and I need to assign masses to an object's vertices given: a list of vertices (x,y positions), the total mass for the object, and the center of mass.
The center of mass is given as:

R = (1/M) * sum_j( m_j * r_j )

where,
R = center of mass
M = total mass
m_j = mass of vertex j
r_j = position of vertex j

I need an algorithm that can approximate each m_j given R, M, and r_j.
edit: I just want to clarify that I am aware there is an infinite set of solutions. I am looking for a quick algorithm that finds a set of m_j's, such that each is sufficiently close to m_j = M/[number of vertices], where "sufficiently" is defined by some small floating-point threshold.
Also, each object will consist of about 5 to 35 points.
You can compute the CM of a uniformly dense polygon as follows: number the N vertices from 0..N-1, and treat them cyclically, so that vertex N wraps to vertex 0:
total_area = sum[i=0..N-1]( X(p[i],p[i+1])/2 )
CM = sum[i=0..N-1]( (p[i]+p[i+1])*X(p[i],p[i+1])/6 ) / total_area
where X(p,q)= p.x*q.y - q.x*p.y [basically, a 2D cross product]
If the polygon is convex, the CM will be inside the polygon, so you can reasonably start out by slicing up the area in triangles like a pie, with the CM at the hub. You should be able to weight each vertex of a triangle with a third of its mass, without changing the CM -- however, this would still leave a third of the total mass at the CM of the entire polygon. Nonetheless, scaling the mass transfer by 3/2 should let you split the mass of each triangle between the two "external" vertices. As a result,
area[i] = X( (p[i]-CM), (p[i+1]-CM) ) / 2
(this is the area of the triangle between the CM and vertices i and i+1)
mass[i] = (total_mass/total_area) * (area[i-1] + area[i])/2
Note that this kind of mass transfer is profoundly "unphysical" -- if nothing else, if treated literally, it would screw up the moment of inertia something fierce. However, if you need to distribute the mass among the vertices (like for some kind of cheesy explosion), and you don't want to disrupt the CM in doing so, this should do the trick.
Finally, a couple of warnings:
if you don't use the actual CM for this, it won't work right
it is hazardous to use this on concave objects; you risk ending up with negative masses
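A compact C# sketch of the scheme above (a hypothetical helper using System.Numerics; it assumes a convex polygon and the true CM, per the warnings):

using System.Numerics; // Vector2

static float[] DistributeMass(Vector2[] p, float totalMass, Vector2 cm)
{
    int n = p.Length;
    float Cross(Vector2 u, Vector2 v) => u.X * v.Y - v.X * u.Y; // the X() above

    // area[i]: triangle between the CM and vertices i, i+1 (cyclic)
    var area = new float[n];
    float totalArea = 0f;
    for (int i = 0; i < n; i++)
    {
        area[i] = Cross(p[i] - cm, p[(i + 1) % n] - cm) / 2f;
        totalArea += area[i];
    }

    // each vertex receives half the mass of its two adjacent triangles
    var mass = new float[n];
    for (int i = 0; i < n; i++)
        mass[i] = totalMass / totalArea * (area[(i - 1 + n) % n] + area[i]) / 2f;
    return mass;
}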
The center of mass R will constantly be changing as the vertices move. So, if you have 10 vertices, store the values from 10 consecutive "frames" - this will give you 10 equations for your 10 unknowns (assuming that the masses don't change over time).
Count the degrees of freedom: for points in D dimensional space you have D+1 equations[+] and n unknowns for n separate particles. If n>D+1 you are sunk (unless you have more information than you have told us about: symmetry constraints, higher order moments, etc...).
edit: My earlier version assumed you had the m_is and were looking for the r_is. It is slightly better when you have the r_is and want the m_is.
[+] The one you list above (which is actually D separate equations) and M = sum(m_j).
Arriu said:
Oh sorry I misunderstood your question. I thought you were asking if I was modeling objects such as a torus, doughnut, or ring (objects with cutouts...). I am modeling bodies with just outer shells (like balloons or bubbles). I don't require anything more complex than that.
Now we are getting somewhere. You do know something more.
You can approximate the surface area of the object by breaking it into triangles between adjacent points. This total area gives you mean mass density. Now find the DoF deficit, and assign that many r_is (drawn at random, I guess) an initial mass based on the mean density and 1/3 of the area of each triangle it is a party to. Then solve the remaining system analytically. If the problem is ill-conditioned you can either draw a new set of assigned points, or attempt a random walk on the masses that you have already guessed at.
I would flip the problem around. That is, given a density and the position of the object (which is naturally still the center of mass, plus three vectors corresponding to the orientation of the object; see Euler angles), associate a volume with each vertex element (which would change with resolution, and could be fractional for positions at the edge of the object), and multiply the density d_j by the associated volume v_j: m_j = v_j * d_j. This approach naturally reproduces the center of mass of the object.
Perhaps I didn't understand your problem, but consider that this ultimately yields the correct mass ( Mass = sum(m_j) = sum(v_j * d_j) ), and at worst it yields a verification of your result.
