I'm running a marching squares (a relative of marching cubes) algorithm over an iso plane, then translating the data into a triangular mesh.
This works, but creates very complex mesh data. I would like to simplify this to the minimum triangles required, as seen illustrated below:
I have tried looping around contours (point -> segment -> point -> ...), but the contour can become inverted if a point has more than two attached segments.
Ideally the solution should be fairly fast so that it can be done at runtime. The language I'm using is C#, but could probably port it from most other C-like languages.
This is a very common problem in 3D computer graphics. Algorithms solving this problem are called mesh simplification algorithms.
The problem is, however, much simpler to solve in the 2D case, as no surface normals need to be considered.
The first important thing is to have a reliable mesh data structure that provides operations to modify the mesh. One set of operations that can produce and modify any mesh is, for instance, the set of "Euler operators".
To simplify a 2D mesh, start from a dense version of it. You can convert your quad mesh to a triangle mesh simply by splitting each quad along its diagonal.
Dense triangle mesh:
Then iteratively collapse the shortest edges that are not on the boundary.
"Edge collapse" is a typical mesh operation, depicted here:
After a few steps your mesh will look like this:
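A minimal sketch of that collapse loop on a plain indexed triangle list (no Euler operators or half-edge structure; Mesh and its members are illustrative names, and a production version would also guard against triangle flips and pin boundary vertices):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Numerics;

// Illustrative sketch, not a production simplifier.
class Mesh
{
    public List<Vector2> Vertices = new List<Vector2>();
    public List<int[]> Triangles = new List<int[]>(); // 3 vertex indices each

    // Count triangles per undirected edge; boundary edges occur exactly once.
    Dictionary<(int, int), int> EdgeUse()
    {
        var use = new Dictionary<(int, int), int>();
        foreach (var t in Triangles)
            for (int i = 0; i < 3; i++)
            {
                int a = t[i], b = t[(i + 1) % 3];
                var key = a < b ? (a, b) : (b, a);
                use[key] = use.TryGetValue(key, out int n) ? n + 1 : 1;
            }
        return use;
    }

    // Collapse the shortest interior edge; returns false when none is left.
    public bool CollapseShortestInteriorEdge()
    {
        var interior = EdgeUse().Where(kv => kv.Value == 2).Select(kv => kv.Key).ToList();
        if (interior.Count == 0) return false;
        var (a, b) = interior.OrderBy(e =>
            Vector2.DistanceSquared(Vertices[e.Item1], Vertices[e.Item2])).First();

        // Move a to the edge midpoint and redirect all references from b to a
        // (vertex b becomes unused; a compaction pass could remove it later).
        Vertices[a] = (Vertices[a] + Vertices[b]) * 0.5f;
        foreach (var t in Triangles)
            for (int i = 0; i < 3; i++)
                if (t[i] == b) t[i] = a;

        // Drop triangles that collapsed to a line or point.
        Triangles.RemoveAll(t => t[0] == t[1] || t[1] == t[2] || t[0] == t[2]);
        return true;
    }
}
```

Calling CollapseShortestInteriorEdge in a loop until it returns false, or until a target triangle count is reached, produces the progressively coarser meshes shown above.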
Simplifying a mesh generated by marching squares (as suggested in the answers above) is highly inefficient. To get a polygon out of a scalar field (e.g. a bitmap), first run a modified version of marching squares that only generates the polygon contour (i.e. in the 16 cases of marching squares you don't generate geometry, you just add points to a polygon). After that, run a triangulation algorithm (e.g. Delaunay or ear clipping) on the result.
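A sketch of that contour-only pass (edge midpoints instead of iso-value interpolation, and one hard-coded resolution of the two ambiguous saddle cases; the corner/bit convention here is an assumption, as it varies between implementations):

```csharp
using System.Collections.Generic;
using System.Numerics;

static class ContourMarchingSquares
{
    // For each of the 16 corner configurations, the cell edges
    // (0=bottom, 1=right, 2=top, 3=left) crossed by contour segments.
    // Bit 0 is the lower-left corner, bit 1 lower-right, bit 2 upper-right,
    // bit 3 upper-left. Cases 5 and 10 are saddles; one resolution is chosen.
    static readonly int[][] Segments =
    {
        new int[] {},           // 0000
        new int[] { 3, 0 },     // 0001
        new int[] { 0, 1 },     // 0010
        new int[] { 3, 1 },     // 0011
        new int[] { 1, 2 },     // 0100
        new int[] { 3, 2, 1, 0 }, // 0101 (saddle)
        new int[] { 0, 2 },     // 0110
        new int[] { 3, 2 },     // 0111
        new int[] { 2, 3 },     // 1000
        new int[] { 2, 0 },     // 1001
        new int[] { 2, 1, 0, 3 }, // 1010 (saddle)
        new int[] { 2, 1 },     // 1011
        new int[] { 1, 3 },     // 1100
        new int[] { 1, 0 },     // 1101
        new int[] { 0, 3 },     // 1110
        new int[] {},           // 1111
    };

    // Midpoint of cell edge e for the cell whose lower-left corner is (x, y).
    static Vector2 EdgePoint(int x, int y, int e) => e switch
    {
        0 => new Vector2(x + 0.5f, y),
        1 => new Vector2(x + 1, y + 0.5f),
        2 => new Vector2(x + 0.5f, y + 1),
        _ => new Vector2(x, y + 0.5f),
    };

    public static List<(Vector2, Vector2)> March(float[,] field, float iso)
    {
        var segs = new List<(Vector2, Vector2)>();
        for (int y = 0; y < field.GetLength(1) - 1; y++)
            for (int x = 0; x < field.GetLength(0) - 1; x++)
            {
                int c = (field[x, y] > iso ? 1 : 0)
                      | (field[x + 1, y] > iso ? 2 : 0)
                      | (field[x + 1, y + 1] > iso ? 4 : 0)
                      | (field[x, y + 1] > iso ? 8 : 0);
                var s = Segments[c];
                for (int i = 0; i < s.Length; i += 2)
                    segs.Add((EdgePoint(x, y, s[i]), EdgePoint(x, y, s[i + 1])));
            }
        return segs;
    }
}
```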
Do it with an intermediate step: go from your volume representation to the grid representation.
After that you can group the cells with cases 3, 6, 9, and 12 into bigger/longer versions.
You can also try to group squares into bigger squares, but it's an NP-hard problem, so an ideal optimization could take a long time.
Then you can convert it into the polygon version.
After converting it to polygons you can fuse the edges.
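For the edge fusing, dropping near-collinear vertices from each contour polygon is usually enough; a minimal sketch (assuming System.Numerics and a closed polygon given as a vertex list):

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

static class PolygonUtil
{
    // Remove vertices whose neighbors are (nearly) collinear with them,
    // fusing consecutive edges that point in the same direction.
    public static List<Vector2> FuseCollinear(List<Vector2> poly, float eps = 1e-6f)
    {
        var result = new List<Vector2>();
        int n = poly.Count;
        for (int i = 0; i < n; i++)
        {
            Vector2 prev = poly[(i + n - 1) % n], cur = poly[i], next = poly[(i + 1) % n];
            // 2D cross product near zero => prev, cur, next are collinear; drop cur.
            float cross = (cur.X - prev.X) * (next.Y - prev.Y)
                        - (cur.Y - prev.Y) * (next.X - prev.X);
            if (MathF.Abs(cross) > eps) result.Add(cur);
        }
        return result;
    }
}
```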
EDIT: Use Ear Clipping on resulting contours:
http://www.geometrictools.com/Documentation/TriangulationByEarClipping.pdf
Hello, I am creating a procedurally generated cave generation script. I have gotten all the Perlin noise garbage out of the way and am trying to transform the vertices into a mesh. I understand that I need to declare the faces for it and need some form of marching cubes algorithm. For my script to know which direction to render a face in, it needs to be aware of all the vertices around it by searching through the vertices. Is there any way my script can efficiently search through a Vector3 array to find whether a Vector3 is in there and, if so, at what place in the array?
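For the array-search part, a dictionary keyed on quantized positions avoids the linear scan entirely; a minimal sketch assuming Unity's Vector3/Vector3Int (the class name and quantum value are illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: map a vertex position to its index in the original array.
// Exact float keys are brittle, so positions are quantized first.
class VertexIndex
{
    readonly Dictionary<Vector3Int, int> lookup = new Dictionary<Vector3Int, int>();
    const float Quantum = 0.0001f; // positions closer than this are treated as equal

    static Vector3Int Key(Vector3 v) => new Vector3Int(
        Mathf.RoundToInt(v.x / Quantum),
        Mathf.RoundToInt(v.y / Quantum),
        Mathf.RoundToInt(v.z / Quantum));

    public void Add(Vector3 position, int index) => lookup[Key(position)] = index;

    // Returns the stored array index, or -1 if the position is not present.
    public int IndexOf(Vector3 position) =>
        lookup.TryGetValue(Key(position), out int i) ? i : -1;
}
```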
If you're using a triangulation lookup table based implementation of marching cubes, you could store a normal vector alongside the face in the same table entry. A video by Sebastian Lague mentions using such a table. I'm not exactly sure where he downloaded it from, but he includes it in his repo which is MIT licensed. Video, Table (EDIT: The order of a triangle's vertices alone may be sufficient to define its direction, and you may not need an explicit normal vector)
Also, heads up: old Perlin noise tends to be visibly grid-aligned. Most cases where I see it used appear to be because a library provided it or because it was mentioned in a tutorial, not because it was actually the best choice for the application. Simplex-type noises generally produce less grid-aligned results. It's straightforward to import external noise into Unity, and it looks like you might need to anyway if your implementation depends on 3D noise. Here are the noises from my repo that use an open implementation for 3D, a tailored gradient table, and rotated evaluators that are good for terrain. There are a lot of other options out there too, though they may not have these aspects in particular. Note that the range is [-1, 1] rather than [0, 1], so if your current noise is [0, 1], you might need to do noise(...) * 0.5 + 0.5 to correct for that. Choose the 2F version if you have a lot of octaves, or the 2S version if you have one octave or are doing ridged noise.
I have a planet mesh that I am generating procedurally (based on a random seed). The generator creates tectonic plates and moves them creating the land masses, mountains and water depressions you see below; ~2500 points in a planet. The textured surface is just a shader.
Each vertex will have information associated with it, and I need to know which vertex they are pointing at to relay this information.
I am looking for a way to identify which vertex they are pointing at. The current solution is to generate a cube at each vertex, then use a collider/ray to identify it. The two white cubes in the picture above are for testing.
What I want to know is if there is a better way to identify the vertex without generating cubes?
If you're doing such awesome and advanced work, surely you know about this:
http://docs.unity3d.com/ScriptReference/RaycastHit-triangleIndex.html
When working with meshes, it's totally commonplace to need the nearest vertex.
Note that, very simply, you just find the nearest one, i.e. loop over them all and keep the nearest.
(It's incredibly fast to do this; you only have a tiny number of verts, so there's no way performance will even be measurable.)
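For example, a minimal sketch combining triangleIndex with that nearest-vertex scan, restricted to the three vertices of the hit triangle (assumes the collider is a non-convex MeshCollider; the method name is illustrative):

```csharp
using UnityEngine;

static class VertexPicking
{
    // Returns an index into mesh.vertices, or -1 if unavailable.
    public static int NearestVertexOfHitTriangle(RaycastHit hit)
    {
        var meshCollider = hit.collider as MeshCollider;
        if (meshCollider == null || hit.triangleIndex < 0) return -1;

        Mesh mesh = meshCollider.sharedMesh;
        int[] tris = mesh.triangles;
        Vector3[] verts = mesh.vertices;
        Transform tf = meshCollider.transform;

        int best = -1;
        float bestSq = float.MaxValue;
        for (int i = 0; i < 3; i++)
        {
            int vi = tris[hit.triangleIndex * 3 + i];
            // Compare in world space, since hit.point is a world-space position.
            float dSq = (tf.TransformPoint(verts[vi]) - hit.point).sqrMagnitude;
            if (dSq < bestSq) { bestSq = dSq; best = vi; }
        }
        return best;
    }
}
```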
{Consider that, of course, you could break the object into, say, 8 pieces - but that's just something you have to do anyway in many cases, for example a race track, so it can occlude properly.}
I'm doing research on generating planets for a game engine I'm planning to code, and I was wondering what would be the best approach to procedurally generate a planet. (In terms of performance.) So far I've seen the Icosphere and Cubemapped Sphere pop up the most, but I was wondering which of the two is faster to generate. My question is particularly aimed at LOD, since I hope to have gameplay similar to No Man's Sky.
Thanks in advance.
I would say an octahedron sphere would be best, but since they are all Platonic solids, the results will be similar, so the premature optimization might not be worth it. (Here's a tutorial in Unity.)
The possible advantages of the octahedron are that the faces are triangles (unlike the cube) and there is one triangle for each octant of 3D space (unlike the icosphere and cube).
My rationale behind octahedrons (and icospheres) being faster than cubes lies in the fact that the face is already a triangle (whereas the cube has a square face). Adding detail for an octahedron, icosahedron, or cube usually means turning each triangle into four smaller triangles. During this generation, you create three new vertices whose positions need to be normalized so that the mesh remains properly inscribed in the unit sphere.
Tessellating a Cube
The octahedron and icosahedron can use a lookup table that fetches this normalization factor (unlike the cube), because the number is consistent for each iteration.
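A sketch of one such subdivision pass (System.Numerics; vertices are duplicated per face rather than shared, and normalization is done directly instead of via the lookup table):

```csharp
using System.Collections.Generic;
using System.Numerics;

static class SphereTessellation
{
    // One subdivision pass: each triangle becomes four, and the three new
    // edge-midpoint vertices are pushed back onto the unit sphere.
    // (A real mesh would share vertices between faces via an index buffer.)
    public static List<(Vector3 a, Vector3 b, Vector3 c)> Subdivide(
        List<(Vector3 a, Vector3 b, Vector3 c)> faces)
    {
        var next = new List<(Vector3 a, Vector3 b, Vector3 c)>(faces.Count * 4);
        foreach (var (a, b, c) in faces)
        {
            Vector3 ab = Vector3.Normalize((a + b) * 0.5f);
            Vector3 bc = Vector3.Normalize((b + c) * 0.5f);
            Vector3 ca = Vector3.Normalize((c + a) * 0.5f);
            next.Add((a, ab, ca));
            next.Add((ab, b, bc));
            next.Add((ca, bc, c));
            next.Add((ab, bc, ca)); // center triangle
        }
        return next;
    }
}
```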
Assuming you can write a custom mesh format, you might store the mesh for a given planet in an array (size 4, 8, or 20) of quadtrees (because each triangle is optionally tessellated into four smaller triangles). This is essentially an LOD system, but you need to periodically decide whether to tessellate or reduce a portion of the mesh based on its distance from the camera. This system will likely be the bottleneck, since meshes have to be recalculated at runtime.
While I was trying to parse and convert a Gerber RS274X file to a GDSII file, I encountered a certain problem.
If you stroke a solid circle along a certain path (a polyline), what you get is a solid shape, which can subsequently be converted to a closed polygon. My question is: is there a library or reliable algorithm to automate this process, where the input would be a string of points signifying the polyline and the output would be the resulting polygon?
Below is an image I uploaded to explain my problem.
The shape you seek can be calculated by placing a desired number of evenly spaced points in a circle around each input point, and then finding the convex hull for each pair of circles on a line segment.
The union of these polygons will make up the polygon you want.
There are a number of algorithms that can find the convex hull for a set of points, and also libraries which provide implementations.
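Since each pairwise hull here is just a "capsule" (two half circles joined by the segment's offset sides), you can also construct it directly rather than running a general hull algorithm; a sketch with illustrative names (System.Numerics):

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

static class StrokeOutline
{
    // Builds the counter-clockwise capsule polygon around segment p-q with
    // stroke radius r; arcSteps controls how finely the caps are approximated.
    public static List<Vector2> SegmentCapsule(Vector2 p, Vector2 q, float r, int arcSteps = 8)
    {
        Vector2 d = Vector2.Normalize(q - p);
        float baseAngle = MathF.Atan2(d.Y, d.X);
        var poly = new List<Vector2>();

        // Half circle around q, sweeping from -90° to +90° relative to d.
        for (int i = 0; i <= arcSteps; i++)
        {
            float a = baseAngle - MathF.PI / 2 + MathF.PI * i / arcSteps;
            poly.Add(q + r * new Vector2(MathF.Cos(a), MathF.Sin(a)));
        }
        // Half circle around p, sweeping from +90° to +270° relative to d.
        for (int i = 0; i <= arcSteps; i++)
        {
            float a = baseAngle + MathF.PI / 2 + MathF.PI * i / arcSteps;
            poly.Add(p + r * new Vector2(MathF.Cos(a), MathF.Sin(a)));
        }
        return poly; // union these per-segment capsules to get the full stroke
    }
}
```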
The algorithm you are talking about is called the "Minkowski sum" (in your case, of a polyline and a polygon approximating a circle). In your case the second summand (the circle) is convex, which means the Minkowski sum can be computed rather efficiently using a so-called polygon convolution.
You did not specify the language you use. In C++ Minkowski sum is available as part of Boost.Polygon or as part of CGAL.
To use them you will probably need to convert your polyline into a (degenerate) polygon by traversing it twice: forward, then backward.
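A sketch of that conversion (in C# for illustration, since the question didn't name a language):

```csharp
using System.Collections.Generic;
using System.Numerics;

static class PolylineUtil
{
    // Walk the polyline forward, then back again (skipping the two endpoints
    // to avoid duplicates); the implicit closing edge completes a
    // zero-area, degenerate polygon.
    public static List<Vector2> ToDegeneratePolygon(List<Vector2> polyline)
    {
        var poly = new List<Vector2>(polyline);
        for (int i = polyline.Count - 2; i >= 1; i--)
            poly.Add(polyline[i]);
        return poly;
    }
}
```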
The union of convex hulls proposed by @melak47 will produce a correct result, but much less efficiently.
I am creating a CAD-like program, creating ModelVisual3D objects. How do I do collision detection between my objects (ModelVisual3D) using MeshGeometry3D? Do I have to compare every triangle in the moving object against the still-standing objects?
What will be my best way to do collision detection?
It depends on how precise your collision detection needs to be.
There is no built-in collision detection in WPF's 3D library. If you need high precision, you'll need to compare every triangle.
That being said, you can start by comparing bounding boxes and/or bounding spheres. This is always a good first step, since it can quickly eliminate most cases. If you don't need precise collision detection, this alone may be fine.
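A sketch of that first pass with the WPF types (note that Geometry3D.Bounds is in mesh-local coordinates, so models with transforms need their bounds brought into a common space first):

```csharp
using System.Windows.Media.Media3D;

static class CollisionPrepass
{
    // Cheap first pass: if the axis-aligned bounds don't intersect,
    // no triangle/triangle test is needed at all.
    public static bool BoundsMayCollide(MeshGeometry3D a, MeshGeometry3D b)
    {
        Rect3D boundsA = a.Bounds; // mesh-local axis-aligned bounding box
        Rect3D boundsB = b.Bounds;
        return boundsA.IntersectsWith(boundsB);
    }
}
```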
To add to Reed's answer (based on my answer here):
After you've eliminated most of your objects via the bounding box/sphere to bounding box/sphere test you should test the triangles of your test object(s) against the other object's bounding box/sphere first before checking triangle/triangle collisions. This will eliminate a lot more cases.
To rule out a collision you'll have to check all the triangles in the test object, but to find a case where you'll need to go down to the triangle/triangle case you only need to find the first triangle that interacts with the bounding box/sphere of the other object.
Look at the SAT (Separating Axis Theorem); it's the fastest and easiest one out there.
The idea is that if you can draw a line that separates the triangles, then they're not colliding.
As said above, first do an early AABB rejection, and when two objects' boxes collide, test each polygon of object A against each polygon of object B.
Starting in 2D: to test if two polygons collide, you get their extents along the candidate axes (in the simplest case X and Y); if those extents overlap on every axis, the polygons are colliding.
On this page you can find a very good explanation on how it works and how to apply it:
http://www.metanetsoftware.com/technique/tutorialA.html
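A minimal 2D sketch of that projection test (System.Numerics; using each polygon's edge normals as the candidate axes, which suffices for convex polygons):

```csharp
using System;
using System.Numerics;

static class Sat2D
{
    public static bool Overlaps(Vector2[] a, Vector2[] b)
    {
        // Project every vertex of poly onto axis; return the extent interval.
        (float min, float max) Project(Vector2[] poly, Vector2 axis)
        {
            float lo = float.MaxValue, hi = float.MinValue;
            foreach (var v in poly)
            {
                float d = Vector2.Dot(v, axis);
                lo = MathF.Min(lo, d);
                hi = MathF.Max(hi, d);
            }
            return (lo, hi);
        }

        foreach (var poly in new[] { a, b })
            for (int i = 0; i < poly.Length; i++)
            {
                Vector2 edge = poly[(i + 1) % poly.Length] - poly[i];
                Vector2 axis = new Vector2(-edge.Y, edge.X); // edge normal

                var (minA, maxA) = Project(a, axis);
                var (minB, maxB) = Project(b, axis);
                if (maxA < minB || maxB < minA) return false; // separating axis found
            }
        return true; // no separating axis among edge normals => colliding
    }
}
```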
To apply it to 3D, use the face normals (and the cross products of edge pairs) as the candidate separating axes.
If the extents overlap on all of those axes, the polygons are colliding.
Also, this method resolves collisions for moving objects and gives you the moment of collision: compute the relative velocity by subtracting velocity B from velocity A (reducing the problem to one moving object and one static one), extend polygon A's extent along each test axis by that velocity, and if the extended extents intersect, subtract the original extent to get the moment of collision.
Another option would be to use BulletSharp, a C# wrapper of the well-known Bullet Physics Engine. In this case, you would need to write functions to create a (concave) collision shape from a MeshGeometry3D.
In my experience, it works pretty well, even though dynamic collision between concave shapes is not supported. You'll need to use convex decomposition as a workaround.