Given a 3D solid model and an edge, I am testing whether that edge is concave or convex.
What is the best way to do this? I'd like to minimize the assumptions regarding the input geometry. My first pass at the problem takes the average of the vertices of the two adjacent faces to generate center points, offsets one of those points by the face normal at that point, and tests whether the offset point is closer to or farther from the opposing face than the original point. This works for pairs of simple faces of approximately the same size. It fails, for example, with small faces far from the center of larger faces.
I'm doing this in Revit, but I imagine the problem is the same in Rhino, Catia, or any other solid modeler. From the edge I can extract the adjacent faces. I know the faces are oriented correctly, such that the normals point outward. I can project 3D points onto the faces, calculate normals of the faces at those points, etc.
Here's the code for the naive version:
public static bool AreFacesConcave(Face face_0, Face face_1, Document doc)
{
    UV uvMid_0 = VertexAverageUV(face_0); // 3D average of the vertices
    UV uvMid_1 = VertexAverageUV(face_1); // approximates the center of the face
    XYZ pt_0 = face_0.Evaluate(uvMid_0);
    XYZ pt_1 = face_1.Evaluate(uvMid_1);
    // normals at those points
    XYZ normal_0 = face_0.ComputeNormal(uvMid_0);
    XYZ normal_1 = face_1.ComputeNormal(uvMid_1);
    // third point: pt_1 offset along face_1's normal
    XYZ pt_2 = pt_1.Add(normal_1.Normalize());
    double d0 = pt_0.DistanceTo(pt_1);
    double d1 = pt_0.DistanceTo(pt_2);
    // if the offset point moved closer to face_0, the edge is concave
    return (d1 < d0);
}
If you know that the normal vectors always point outward, get two points A and B inside the two adjacent faces (either the mean of three non-collinear vertices, or your 'center points').
Then check the sign of the dot (scalar) product of the vectors AB and nA (the normal of the face containing point A).
Result = DotProduct(AB, nA)
A negative sign denotes a 'convex' edge, a positive one a 'concave' edge.
2D example: nA is the outward normal; the D edge of CDF is concave, the D edge of CDE is convex.
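A minimal C# sketch of that test with the Revit types used above (the IsEdgeConcave name is mine; it reuses the questioner's VertexAverageUV helper, and a dot product near zero means the faces are tangent/coplanar, so neither convex nor concave):

public static bool IsEdgeConcave(Face face_0, Face face_1)
{
    UV uvA = VertexAverageUV(face_0);   // interior point of face A (questioner's helper)
    UV uvB = VertexAverageUV(face_1);   // interior point of face B
    XYZ a  = face_0.Evaluate(uvA);
    XYZ b  = face_1.Evaluate(uvB);
    XYZ nA = face_0.ComputeNormal(uvA); // outward normal at A

    double d = (b - a).DotProduct(nA);  // sign of dot(AB, nA)
    return d > 0;                       // positive: concave; negative: convex
}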
So basically, programmatically: given four 3D coordinates for the ceiling of a room, as well as three pairs of 3D coordinates for the water pipes above the ceiling, I am to calculate how many sprinklers I can fit in the ceiling if each sprinkler has to be 2500 mm from the walls and from every other sprinkler.
I could write out the program, but the problem is that I don't know how this is calculated.
The problem:
calculate the number of sprinklers, their positions on the room’s ceiling and connect each sprinkler to the nearest water pipe.
The room has a rectangular shape. Ceiling coordinates (x, y, z) are:
(97500.00, 34000.00, 2500.00)
(85647.67, 43193.61, 2500.00)
(91776.75, 51095.16, 2500.00)
(103629.07, 41901.55, 2500.00)
Three water pipes are available:
(98242.11, 36588.29, 3000.00) to (87970.10, 44556.09, 3500.00)
(99774.38, 38563.68, 3500.00) to (89502.37, 46531.47, 3000.00)
(101306.65, 40539.07, 3000.00) to (91034.63, 48506.86, 3000.00)
Sprinklers are to be placed on the ceiling 2500mm away from the walls and from each other.
Please, calculate the number of sprinklers that can be fitted into this room, then calculate
coordinates (x, y, z) of each sprinkler.
For each sprinkler calculate coordinates (x, y, z) of the connection point to the nearest water pipe.
Now, I understand that the distance formula between two 3D points is d = sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2).
But I'm not sure how to calculate the number of sprinklers using this, or how to calculate their connection points on the water pipes. If I could calculate the connection point on each pipe and the distance from that point to each sprinkler, then the nearest pipe would be obvious, and that would be the pipe the sprinkler connects to.
I have to write this using C# and the .NET Framework. Could I please get some assistance putting this into pseudocode and understanding how to approach, tackle and calculate this problem? I'm not good at the maths side of this, but hopefully, once I understand how it is solved, I can put it into a C# function.
This question is so elaborate that, even in pseudocode, it's a big deal.
You might start with a re-calibration of your rectangle (the four coordinates you provide do indeed form a rectangle): shift the entire rectangle so that one point equals (0, 0), then rotate it so that the corners become:
(0,0)
(Xmax, 0)
(0, Ymax)
(Xmax, Ymax)
Once you have this, you can start looking for an algorithm for filling the ceiling with sprinklers. First you can base yourself on this "algorithm":
Fill your rectangle with squares whose side length equals two times the given radius.
Once you have this, you can upgrade to the following "algorithm":
Fill your rectangle with circles with the given radius.
Revert everything to the original coordinates (rotating back and shifting back).
Once you have this, you might start calculating the distances to the pipes, using basic "distance-between-point-and-line" formula.
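A short C# sketch of that re-calibration, using System.Numerics (a sketch only; it assumes the corners are listed in order around the rectangle, as in the question, and drops Z since the ceiling is flat):

using System.Numerics;

// Corners in the order given in the question (Z dropped: the ceiling is flat).
Vector3[] P = {
    new(97500.00f, 34000.00f, 0), new(85647.67f, 43193.61f, 0),
    new(91776.75f, 51095.16f, 0), new(103629.07f, 41901.55f, 0)
};

Vector3 u = Vector3.Normalize(P[1] - P[0]); // unit vector along one wall (15000 mm side)
Vector3 v = Vector3.Normalize(P[3] - P[0]); // unit vector along the adjacent wall (10000 mm side)

// Local axis-aligned coordinates of a point q on the ceiling:
float LocalX(Vector3 q) => Vector3.Dot(q - P[0], u);
float LocalY(Vector3 q) => Vector3.Dot(q - P[0], v);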
Notice that the ceiling can be tiled with 4x6 squares of side 2500 mm. That tells you the number of sprinklers.
You can find their positions by computing vectors of length 2500 mm parallel to the sides (first compute unit vectors). It is then not too difficult to obtain their coordinates using a matrix arrangement.
Next, compute the distance of every sprinkler to the three pipes and keep the shortest. (CAUTION: the distance to the line segments, not to their supporting lines.)
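A hedged C# sketch tying these steps together; it reuses P, u and v from the earlier sketch, and the Clamp call is exactly what makes DistToSegment a segment distance rather than a supporting-line distance:

using System;
using System.Collections.Generic;
using System.Numerics;

const float Clearance = 2500f;        // mm: from walls and between sprinklers
float sideU = 15000f, sideV = 10000f; // side lengths of the ceiling rectangle

// Grid placement: step by Clearance, starting Clearance away from each wall.
var sprinklers = new List<Vector3>();
for (float x = Clearance; x <= sideU - Clearance; x += Clearance)
    for (float y = Clearance; y <= sideV - Clearance; y += Clearance)
        sprinklers.Add(P[0] + x * u + y * v); // back to world coordinates

// Distance from point q to segment ab, plus the closest point on the
// segment (the candidate connection point on the pipe).
static (float Dist, Vector3 Closest) DistToSegment(Vector3 q, Vector3 a, Vector3 b)
{
    Vector3 ab = b - a;
    float t = Math.Clamp(Vector3.Dot(q - a, ab) / ab.LengthSquared(), 0f, 1f);
    Vector3 c = a + t * ab; // clamping keeps c on the segment itself
    return (Vector3.Distance(q, c), c);
}

// For each sprinkler, evaluate DistToSegment against the three pipes
// (with their true 3D endpoints) and keep the pipe with the smallest distance.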
Via raycasting one can select a point on a GameObject's Collider. In the short visual, two points are represented with small spheres denoting user selection. The goal is to calculate the depth of any spot in a model's indentation. The current thought is to use a system in which the user selects a point within the depth of the indentation and a point outside of it, then calculates the depth using vector math.
As of now, the information available to us is the two points (vectors) in space and the distance between them. How do we use that data to calculate a point perpendicular to the point in the depth? The thought being: if this point is calculable, then the depth would be the distance between those two points. How do we go about this, and is it feasible?
Visual:
I don't think that two points are enough. You, seeing the whole scene, know the planes in which both points lie, so you can see the direction of the shortest-distance (perpendicular) segment. But for two points alone there are infinitely many other valid planes going through them. The only thing you can say is that the depth is bounded by the distance between those two points.
Consider the following example: point A is (0,0,0) and point B is (2,1,0). How deep is the indentation? The answer is: you don't know.
Assume that plane a contains point A and plane b contains point B:
if a is X = 0 while b is X = 2, then the indentation depth is clearly 2.
if a is Y = 0 while b is Y = 1, then the indentation depth is clearly 1.
finally, if a is Z = 0 and b is Z = 0, then the indentation depth is clearly 0.
By changing the plane directions you can get any depth between 0 and the distance between A and B.
The only solution I see is to fix at least one of the two planes by selecting three points on that plane. Then the problem becomes trivial. Having three points, you find the plane equation in the form
a*x + b*y + c*z + d = 0
Then the distance from a point (x1,y1,z1) to that plane is
dist = |a*x1 + b*y1 + c*z1 + d| / sqrt(a^2 + b^2 + c^2)
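A minimal C# sketch of that computation with System.Numerics (the absolute value is what makes the result an unsigned depth):

using System;
using System.Numerics;

// Plane through the three selected points p1, p2, p3:
// normal n = (p2 - p1) x (p3 - p1), plane equation dot(n, x) + d = 0.
static float DepthToPlane(Vector3 q, Vector3 p1, Vector3 p2, Vector3 p3)
{
    Vector3 n = Vector3.Cross(p2 - p1, p3 - p1);
    float d = -Vector3.Dot(n, p1);
    // Unsigned distance from q to the plane.
    return Math.Abs(Vector3.Dot(n, q) + d) / n.Length();
}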
I would like to detect all the angles in a given binary image. The image contains a handwritten character (black on a white background). Is there a way to get the angles at the line junctions with 100% accuracy?
My current solution (below) does find the angles, but it sometimes finds unwanted angles, that is, angles near a junction and not exactly on it (there's an example below).
This implementation uses Magick.NET.
Because I can't post more than two links, I'll post the input letter image as a binary image with blue marks at the locations of the angles I want to detect; to get the actual input binary image, the marks would need to be deleted (sorry).
letter A
My code:
var image = new MagickImage(@"xImg.jpg");
const int Radius = 3; // radius of the circle surrounding each angle point
image.Grayscale(PixelIntensityMethod.Average); // redundant
var lineJunctionsImage = image.Clone(); // an image that will contain only the line junction points - the angle points
// detect all line junction points (black pixel points) in the image
// with the morphology method HitAndMiss + a LineJunctions kernel
lineJunctionsImage.Negate();
lineJunctionsImage.Morphology(MorphologyMethod.HitAndMiss, Kernel.LineJunctions);
lineJunctionsImage.Negate();
resulting image
The resulting points are supposed to be the points in the middle of the junctions, but some are not accurate, and that's critical for me: I want to draw a circle around each of them and then take the angle between the point and each pair of points where the strokes intersect the circle. The code that does this is too complicated and long, so I'll just write the algorithm here:
for each junction point p:
    detect all black pixels b_i (i >= 0) that intersect a circle
    of the above radius (3) surrounding p
    for each pair (b_i, b_j), calculate the angle at p between the pair
    print the angles found with the following protocol:
        {point1} {angle point} {point2} angle
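For reference, the angle in the last step can be computed from the two vectors running from p to the pair of circle points. A small C# sketch (the tuples stand in for whatever pixel-coordinate type the Magick.NET code uses):

using System;

// Angle in degrees at vertex p between points b1 and b2 on the circle.
static double AngleAt((int X, int Y) p, (int X, int Y) b1, (int X, int Y) b2)
{
    double ux = b1.X - p.X, uy = b1.Y - p.Y; // vector p -> b1
    double vx = b2.X - p.X, vy = b2.Y - p.Y; // vector p -> b2
    double cross = ux * vy - uy * vx;
    double dot   = ux * vx + uy * vy;
    // atan2 form stays robust even for near-collinear vectors.
    return Math.Atan2(Math.Abs(cross), dot) * 180.0 / Math.PI;
}

For example, AngleAt((8,17), (11,19), (5,19)) gives roughly 112.62, matching the first row below.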
The angles found (with the angle points, i.e. the middle junction points, marked):
{11,19} {8,17} {5,19} 112.619
{11,19} {8,17} {9,14} 105.255
{5,19} {8,17} {9,14} 142.12
{24,17} {21,20} {18,19} 116.56
{24,17} {21,20} {20,23} 90
{21,1} {24,0} {27,2} 127.87
{24,0} {27,2} {27,5} 123.7
{26,12} {27,9} {27,6} 161.56
I think the main problem is that the angle points are sometimes not the correct points but close neighbors.
Maybe someone has a better, more accurate idea that will find the correct angles.
I'm trying to make a spherical burst of rays for the purpose of checking collisions, with specific interactions happening based upon what or where each ray hit. Hence I'm using rays rather than something simpler such as OverlapSphere.
The reason I'm looking for how to make a sphere is that I can use the same math for my rays, by having them go to the vertices of where the sphere would be. But every way I can find of making a sphere has the lines get closer together near the poles, which makes sense, as it's easy to do; but as you can imagine, that's not very useful for my current project.
TL;DR:
How do I make a sphere with equidistant vertices? If it's not perfectly equidistant that's fine; it just needs to be pretty close. If there is a deviation, it would be great if you could say how large it is, and where it occurs, if applicable.
Extra notes:
I've looked at this and this, but the math is way over my head, so what I've been looking for might've just been staring me in the face this whole time.
You could use an icosphere. The base icosahedron's vertices are exactly equidistant from their neighbors, since they lie on equilateral triangles; after subdivision and re-projection onto the sphere they remain very nearly so.
To construct the icosphere, first make an icosahedron and then split the faces recursively into smaller triangles, as explained in this article.
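A C# sketch of the construction's starting point (the standard 12 icosahedron vertices taken from three orthogonal golden-ratio rectangles; the Midpoint helper is the subdivision step that pushes each new vertex back onto the sphere):

using System;
using System.Numerics;

// The 12 icosahedron vertices lie on three orthogonal golden-ratio rectangles.
static Vector3[] IcosahedronVertices(float radius)
{
    float t = (1f + MathF.Sqrt(5f)) / 2f; // golden ratio
    Vector3[] v = {
        new(-1,  t, 0), new( 1,  t, 0), new(-1, -t, 0), new( 1, -t, 0),
        new( 0, -1,  t), new( 0,  1,  t), new( 0, -1, -t), new( 0,  1, -t),
        new(  t, 0, -1), new(  t, 0,  1), new( -t, 0, -1), new( -t, 0,  1)
    };
    for (int i = 0; i < v.Length; i++)
        v[i] = Vector3.Normalize(v[i]) * radius; // project onto the sphere
    return v;
}

// One subdivision step per edge: midpoint, re-projected onto the sphere.
static Vector3 Midpoint(Vector3 a, Vector3 b, float radius)
    => Vector3.Normalize((a + b) * 0.5f) * radius;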
Are you aware that the sphere given to you by Unity is in fact designed with this exact goal in mind?
I.e., the entire raison d'être of the sphere built in to Unity is that its points are fairly smoothly spaced: roughly equidistant, as you phrase it.
To bring up such a sphere in Unity, just create a built-in Sphere from the GameObject menu.
You can then instantly get access to the verts, as you know:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vv = mesh.vertices;
int kVerts = vv.Length;
for (int i = 0; i < kVerts; ++i)
    Debug.Log(vv[i]);
Note you can easily check "which part of the sphere" the verts are on by (for example) checking how far they are from your "cities" (or whatever), or just checking (for example) the z values to see which hemisphere they are in, et cetera.
Furthermore...
Please note, regarding your overall reason for wanting to do this:
but having specific interactions happen based upon what or where each ray hit
Note that it could not be easier to do this using PhysX (the completely built-in game physics in Unity). Indeed, I have never, ever looked at a collision without doing something "specific" depending on "where it hit"!
You can for example get the point where the contact was with http://docs.unity3d.com/ScriptReference/RaycastHit-point.html
It's worth noting it is absolutely inconceivable one could write something approaching the performance of PhysX in casual programming.
I hope this makes things easier!
slice the sphere into N circles
compute the perimeter of each circle
divide it by the same angle that created the slices
this gives you the number of vertices
and also the angle step inside each circle
cast rays
This is how I coded it in C++ + OpenGL:
// draw unit sphere points (r=1 center=(0,0,0)) ... your ray directions
int ia,na,ib,nb;
double x,y,z,r;
double a,b,da,db;
na=16;                                  // number of slices
da=M_PI/double(na-1);                   // latitude angle step
for (a=-0.5*M_PI,ia=0;ia<na;ia++,a+=da) // slice sphere to circles in xy planes
    {
    r=cos(a);                           // radius of actual circle in xy plane
    z=sin(a);                           // height of actual circle in xy plane
    nb=ceil(2.0*M_PI*r/da);             // number of vertices on this circle
    if ((ia==0)||(ia==na-1)) nb=1;      // poles are a single vertex (avoids nb==0)
    db=2.0*M_PI/double(nb);             // longitude angle step
    for (b=0.0,ib=0;ib<nb;ib++,b+=db)   // cut circle into vertices
        {
        x=r*cos(b);                     // compute x,y of vertex
        y=r*sin(b);
        // this just draws the ray direction (x,y,z) as a line in OpenGL
        // so you can ignore it and add your ray cast instead
        double w=1.2;
        glBegin(GL_LINES);
        glColor3f(1.0,1.0,1.0); glVertex3d(x,y,z);
        glColor3f(0.0,0.0,0.0); glVertex3d(w*x,w*y,w*z);
        glEnd();
        }
    }
This is what it looks like:
The R, G, B lines are the sphere coordinate system axes X, Y, Z.
The white-ish lines are your vertices (white) + directions (gray).
[Notes]
do not forget to include math.h
and replace the OpenGL stuff with yours
If you want 4, 6, 8, 12 or 20 vertices, then you can have exactly equidistant vertices with the Platonic solids, all of which fit inside a sphere. The actual coordinates of these should be easy to get. For other numbers of vertices you can use other polyhedra and scale the vertices so they lie on a sphere. If you need lots of points, a geodesic dome might be a good base. The C60 bucky-ball could be a good base with 60 points. For most of these you should be able to find 3D models from which you can extract coordinates.
I think the easiest way to control points on a sphere is by using spherical coordinates. Then you can control position of points around the sphere by using two angles (rho and phi) and the radius.
Example code for filling points uniformly around a rotating sphere (for fun):
var time = 1.0;   // increment this variable every frame to see the rotation
var count = 1000;
var radius = 1.0; // sphere radius
for (int i = 0; i < count; i++)
{
var rho = time + i;
var phi = 2 * Math.PI * i / count;
var x = (float)(radius * Math.Sin(phi) * Math.Cos(rho));
var z = (float)(radius * Math.Sin(phi) * Math.Sin(rho));
var y = (float)(radius * Math.Cos(phi));
Draw(x, y, z); // your drawing code for rendering the point
}
As some answers have already suggested, use an icosahedron-based solution. The source for this is quite easy to come by (and I have written my own several times), but I find the excellent Primitives Pro plugin extremely handy under many other circumstances, and always use their sphere instead of the built-in Unity one.
Link to Primitives Pro component
Primitives Pro options
I'm trying hard to reproduce a MATLAB algorithm called "patchnormal", which first calculates the normal vectors of all faces, and after that calculates the vertex normals from the face normals, weighted by the angles of the faces. (See the illustration below.)
There doesn't seem to be a free library for 3D meshes in WPF / C# oriented toward such mathematical use. Or is there?
So the question is:
How do I compute this (red) vector for all my vertices? Can it be optimized for use in a real-time simulation?
(source: hostingpics.net)
You can compute the angle between two edges as follows:
given: edge vectors E and F of a given face adjacent to your vertex,
E_normalized = normalize(E)
F_normalized = normalize(F)
cross_normal = cross(E_normalized, F_normalized)
sin_theta = length( cross_normal )
cos_theta = dot(E_normalized, F_normalized)
results:
face normal = normalize(cross_normal)
face angle theta = atan2(sin_theta, cos_theta)
Then, weight the normals accordingly:
total_vector = vec(0,0,0)
for(each face adjacent to a particular vertex):
[compute normal and theta as above]
total_vector += normal * theta
return normalize(total_vector)
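A compact C# sketch of that loop using System.Numerics (how you enumerate the two edge vectors E and F leaving the vertex for each adjacent face depends on your mesh structure, so that part is assumed):

using System;
using System.Collections.Generic;
using System.Numerics;

// Angle-weighted vertex normal, following the pseudocode above.
// Each tuple holds the two face edges (E, F) leaving the vertex.
static Vector3 VertexNormal(IEnumerable<(Vector3 E, Vector3 F)> facesAroundVertex)
{
    Vector3 total = Vector3.Zero;
    foreach (var (E, F) in facesAroundVertex)
    {
        Vector3 e = Vector3.Normalize(E);
        Vector3 f = Vector3.Normalize(F);
        Vector3 c = Vector3.Cross(e, f);                          // face normal direction
        float theta = MathF.Atan2(c.Length(), Vector3.Dot(e, f)); // face angle
        total += Vector3.Normalize(c) * theta;                    // weight normal by angle
    }
    return Vector3.Normalize(total); // degenerate (zero-area) faces would need a guard
}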
To optimize for real-time, I would first profile to see what was actually slowing things down. I'd guess that computing atan2() several times per vertex might turn out to be expensive, but improving on that would really require finding some substitute for angle-weighted normals.