I have a planet mesh that I am generating procedurally (based on a random seed). The generator creates tectonic plates and moves them, creating the land masses, mountains, and water depressions you see below; each planet has roughly 2,500 points. The textured surface is just a shader.
Each vertex will have information associated with it, and I need to know which vertex the user is pointing at in order to relay this information.
I am looking for a way to identify which vertex the user is pointing at. The current solution is to generate a cube at each vertex, then use a collider/Ray to identify it. The two white cubes in the picture above are for testing.
What I want to know is if there is a better way to identify the vertex without generating cubes?
If you're doing such awesome and advanced work, surely you know about this:
http://docs.unity3d.com/ScriptReference/RaycastHit-triangleIndex.html
When working with meshes, it's totally commonplace to need the nearest vertex.
Note that, very simply, you just find the nearest one, i.e. loop over them all and keep the closest.
(It's incredibly fast to do this; you only have a tiny number of verts, so there's no way the cost will even be measurable.)
{Consider that, of course, you could break the object into, say, 8 pieces; but that's just something you have to do anyway in many cases, for example a race track, so it can occlude properly.}
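For concreteness, here is a minimal sketch combining the two ideas above: use triangleIndex to narrow the search to the hit triangle, then take the nearest of its three corners. It assumes the planet has a MeshCollider (which triangleIndex requires); a plain loop over all ~2,500 vertices would be just as viable. Names are illustrative.

using UnityEngine;

// Sketch: identify the vertex under the mouse without helper cubes.
public class VertexPicker : MonoBehaviour
{
    void Update()
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(ray, out RaycastHit hit)) return;

        MeshCollider meshCollider = hit.collider as MeshCollider;
        if (meshCollider == null || hit.triangleIndex < 0) return;

        Mesh mesh = meshCollider.sharedMesh;
        int[] tris = mesh.triangles;
        Vector3[] verts = mesh.vertices;

        // Compare the hit point against the three corners of the hit triangle.
        int nearest = -1;
        float best = float.MaxValue;
        for (int i = 0; i < 3; i++)
        {
            int vi = tris[hit.triangleIndex * 3 + i];
            // Work in world space, since hit.point is in world space.
            Vector3 world = meshCollider.transform.TransformPoint(verts[vi]);
            float d = (world - hit.point).sqrMagnitude;
            if (d < best) { best = d; nearest = vi; }
        }
        Debug.Log("Nearest vertex index: " + nearest);
    }
}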
Apologies for the lack of example code; I'm currently in the brainstorming phase of the problem and having trouble finding a proper solution.
As I have stated in my title, I want to find out what the intersection area of two polygons is.
To be more specific, I have two ARPlanes that may overlap each other on the x-z plane but sit at different y-levels (imagine stairs with an overhang). I can get the area boundaries of these ARPlanes easily. My first idea to simplify the process is to drop the y-component so that both boundaries lie on the same plane, turning this into a 2D problem.
From here onward, I'm unsure of how to proceed. I could not find any methods that calculate the intersection area of two polygons. I have a few solutions that look promising if I can get the planes aligned neatly (such that the +x direction points from the center of one plane to the other), but I cannot move them in any way, so I must modify what the local "forward" for a plane is. Even then, I don't think an ARPlane has a direction vector in the first place, as they are not GameObjects, so I am unsure whether this is a viable option. ARPlane class for quick reference.
One other way is to turn the planes so that they are aligned with the world x-axis. This looks more promising than the other methods, but as I previously stated, I cannot turn the actual ARPlanes; I must make copies and turn those while keeping their relative rotations and positions the same.
So far these are the methods I could come up with but could not develop fully due to Unity restrictions. My question, then, is whether there is a way around the issues in these approaches; failing that, whether there is an alternative solution that can be recommended.
Below is an example use case of the tool. As can be seen, some stair treads have an overhang that covers a portion of the previous tread's surface (second and third figures). Each stair tread will be scanned and then processed to find its usable surface. The area covered by the overhang is not a usable surface. This usable area is defined by the placement of a tread (A) and the very next tread right above it (B), so the usable area is surface_area_of_A - xz_crossSection_of_AB.
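For what it's worth, once both boundaries are projected onto the x-z plane, the intersection can be computed directly with Sutherland-Hodgman clipping, assuming both polygons are convex and wound counter-clockwise. A minimal sketch with illustrative names; projecting the ARPlane boundaries into these 2D lists is left out.

using System.Collections.Generic;
using UnityEngine;

// Sketch: clip one convex polygon by another, both as 2D points on the
// x-z plane (y dropped), then measure the resulting area.
public static class PolygonIntersection
{
    // Returns the polygon formed by clipping 'subject' against convex 'clip'.
    public static List<Vector2> Clip(List<Vector2> subject, List<Vector2> clip)
    {
        List<Vector2> output = new List<Vector2>(subject);
        for (int i = 0; i < clip.Count; i++)
        {
            Vector2 a = clip[i];
            Vector2 b = clip[(i + 1) % clip.Count];
            List<Vector2> input = output;
            if (input.Count == 0) break;   // polygons do not overlap
            output = new List<Vector2>();
            for (int j = 0; j < input.Count; j++)
            {
                Vector2 p = input[j];
                Vector2 q = input[(j + 1) % input.Count];
                bool pInside = Cross(a, b, p) >= 0f;
                bool qInside = Cross(a, b, q) >= 0f;
                if (pInside) output.Add(p);
                if (pInside != qInside) output.Add(Intersect(a, b, p, q));
            }
        }
        return output; // vertices of the intersection polygon
    }

    // Shoelace formula for the (absolute) area of a simple polygon.
    public static float Area(List<Vector2> poly)
    {
        float sum = 0f;
        for (int i = 0; i < poly.Count; i++)
        {
            Vector2 p = poly[i], q = poly[(i + 1) % poly.Count];
            sum += p.x * q.y - p.y * q.x;
        }
        return Mathf.Abs(sum) * 0.5f;
    }

    // Signed area test: > 0 when c is left of the edge a->b.
    static float Cross(Vector2 a, Vector2 b, Vector2 c)
        => (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);

    // Intersection of segment p->q with the infinite line through a and b.
    static Vector2 Intersect(Vector2 a, Vector2 b, Vector2 p, Vector2 q)
    {
        Vector2 ab = b - a, pq = q - p;
        float t = ((a.x - p.x) * ab.y - (a.y - p.y) * ab.x)
                / (pq.x * ab.y - pq.y * ab.x);
        return p + t * pq;
    }
}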
Hello, I am creating a procedurally generated cave script. I have gotten all the Perlin noise garbage out of the way and am now trying to turn the vertices into a mesh. I understand that I need to declare the faces for it and need some form of marching cubes algorithm. For my script to know which direction to render a face in, it needs to be aware of all the vertices around it by searching through the vertices. Is there any way my script can efficiently search a Vector3 array to find whether a given Vector3 is in it and, if so, at what index?
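A minimal sketch of the kind of lookup being asked for, assuming the positions should be quantized, since exact floating-point equality on Vector3 is unreliable. All names are illustrative.

using System.Collections.Generic;
using UnityEngine;

// Sketch: O(1) index lookup for vertices, instead of scanning the array.
// Positions are quantized so nearly-equal floats map to the same key.
public class VertexIndex
{
    const float CellSize = 0.0001f;   // quantization step (assumption)
    readonly Dictionary<Vector3Int, int> lookup = new Dictionary<Vector3Int, int>();

    static Vector3Int Quantize(Vector3 v) =>
        new Vector3Int(Mathf.RoundToInt(v.x / CellSize),
                       Mathf.RoundToInt(v.y / CellSize),
                       Mathf.RoundToInt(v.z / CellSize));

    public void Add(Vector3 position, int index) => lookup[Quantize(position)] = index;

    // Returns true and the array index if the position was registered.
    public bool TryGetIndex(Vector3 position, out int index) =>
        lookup.TryGetValue(Quantize(position), out index);
}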
If you're using a triangulation lookup table based implementation of marching cubes, you could store a normal vector alongside the face in the same table entry. A video by Sebastian Lague mentions using such a table. I'm not exactly sure where he downloaded it from, but he includes it in his repo, which is MIT-licensed. Video, Table (EDIT: The order of a triangle's vertices alone may be sufficient to define its direction, and you may not need an explicit normal vector)
Also, heads up: old Perlin noise tends to be visibly grid-aligned. Most times I see it used, it appears to be because a library provided it or because it was mentioned in a tutorial, not because it was actually the best choice for the application. Simplex-type noises generally produce less grid-aligned results. It's straightforward to import external noise into Unity, and it looks like you might need to anyway, if your implementation depends on 3D noise. Here are the noises from my repo that use an open implementation for 3D, a tailored gradient table, and rotated evaluators that are good for terrain. There are a lot of other options out there too, though they may not have these aspects in particular. Note that the range is [-1, 1] rather than [0, 1], so if your current noise is [0, 1], you might need to do noise(...) * 0.5 + 0.5 to correct for that. Choose the 2F version if you have a lot of octaves, or the 2S version if you have one octave or are doing ridged noise.
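A minimal sketch of the range fix and of typical octave summation; Noise3 here is just a placeholder for whatever 3D noise function you import, assumed to return values in [-1, 1].

using UnityEngine;

static class NoiseHelpers
{
    // Placeholder for whichever 3D noise you import; assumed range [-1, 1].
    public static System.Func<Vector3, float> Noise3 = p => 0f;

    // Remap [-1, 1] output to [0, 1], as mentioned above.
    public static float Remapped(Vector3 p) => Noise3(p) * 0.5f + 0.5f;

    // Typical fractal (octave) summation, normalized back into [-1, 1].
    public static float Fbm(Vector3 p, int octaves)
    {
        float sum = 0f, amp = 1f, freq = 1f, norm = 0f;
        for (int i = 0; i < octaves; i++)
        {
            sum += amp * Noise3(p * freq);   // each octave adds finer detail
            norm += amp;
            amp *= 0.5f;                     // half the amplitude...
            freq *= 2f;                      // ...at twice the frequency
        }
        return sum / norm;
    }
}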
I want to slice a 3D model against an infinite plane (in WPF). I'm checking whether edges intersect the infinite plane; if so, I create a new point at the intersection position. So I end up with a set of points that I want to generate a cap from, so that the model is closed after slicing. For example, if this is the cross-section, the result would be as follows:
Note: The triangulation ain't important. I just need triangles.
I also need to detect the holes, as follows (holes are marked in red):
If it is impossible to do it the way I think (and it seems to be so), then how should I do it? How do developers cap an object after it has been sliced?
There is also a lot of ambiguity. For example, the first picture's result may be:
What am I missing??
EDIT:
After some research, I found one thing that I was missing:
The input is now robust, and I need the exact same output. How do I accomplish that??
In the past, I have done this kind of thing using a BSP.
Sorry to be so vague, but it's not a trivial problem!
Basically you convert your triangle mesh into the BSP representation, add your clipping plane to the BSP, and then convert it back into triangles.
As code11 already said, you have too little data to solve this; the points alone are not enough.
Instead of clipping edges to produce new points you should clip entire triangles, which would give you new edges. This way, instead of a bunch of points you'd have a bunch of connected edges.
In your example with holes, with this single modification you'd get 3 polygons, which is almost what you need. Then you only need to compute the correct triangulation.
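A minimal sketch of the per-triangle step, with illustrative names; collecting the returned segments over the whole mesh gives the connected edges described above, instead of an unordered bag of points.

using System.Collections.Generic;
using System.Numerics;

// Sketch: intersect one triangle with a plane (normal n, offset d, i.e. the
// plane is { x : dot(n, x) + d = 0 }) and record the resulting cut edge.
public static class TrianglePlaneClipper
{
    public static bool CutEdge(Vector3 n, float d,
                               Vector3 a, Vector3 b, Vector3 c,
                               out Vector3 e0, out Vector3 e1)
    {
        var hits = new List<Vector3>(2);
        TryEdge(n, d, a, b, hits);
        TryEdge(n, d, b, c, hits);
        TryEdge(n, d, c, a, hits);

        if (hits.Count >= 2) { e0 = hits[0]; e1 = hits[1]; return true; }
        e0 = e1 = Vector3.Zero;
        return false;
    }

    // Adds the crossing of segment p->q with the plane, if the endpoints
    // lie on strictly opposite sides.
    static void TryEdge(Vector3 n, float d, Vector3 p, Vector3 q, List<Vector3> hits)
    {
        float dp = Vector3.Dot(n, p) + d;
        float dq = Vector3.Dot(n, q) + d;
        if (dp * dq >= 0f) return;       // same side or touching: no crossing
        float t = dp / (dp - dq);        // parameter along p->q where the distance hits 0
        hits.Add(p + (q - p) * t);
    }
}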
Look up the term CSG, or Constructive Solid Geometry.
EDIT:
If generic CSG is too slow for you and you already have the clipped edges, then I'd suggest trying an 'Ear Clipping' algorithm.
Here's a description with support for holes:
https://www.geometrictools.com/Documentation/TriangulationByEarClipping.pdf
You may also try a 'Sweep Line' approach:
http://sites-final.uclouvain.be/mema/Poly2Tri/
And a similar question on SO, with many ideas:
Polygon Triangulation with Holes
I hope it helps.
Building off of what zwcloud said, your point representation is ambiguous. You simply don't have enough points to determine where any concavities/notches actually are.
However, if you can solve that by obtaining additional points (you need the midpoints of segments, I think), you just need to throw the points into a shrinkwrap algorithm. Then at least you will have a cap.
The holes are a bit more tricky. Perhaps you can get away with just looking at the excluded points from the output of the shrinkwrap calculation and trying to find additional shapes in that, heuristically favoring points located near the centroid of your newly created polygon.
Additional thought: If you can limit yourself to convex polygons with only one similarly convex hole, the problem will be much easier to solve.
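In that convex case, "shrinkwrap" amounts to a 2D convex hull of the cut points projected onto the slicing plane. A minimal monotone-chain sketch, with illustrative names:

using System.Collections.Generic;
using System.Linq;
using System.Numerics;

// Sketch: Andrew's monotone-chain convex hull over 2D points, e.g. the cut
// points projected onto the slicing plane.
public static class ConvexHull2D
{
    public static List<Vector2> Compute(IEnumerable<Vector2> points)
    {
        List<Vector2> pts = points.OrderBy(p => p.X).ThenBy(p => p.Y).ToList();
        if (pts.Count < 3) return pts;

        var hull = new List<Vector2>();
        // Build the lower hull, then the upper hull over the reversed order;
        // points making a clockwise (or collinear) turn are popped.
        foreach (var sweep in new[] { pts, Enumerable.Reverse(pts).ToList() })
        {
            int start = hull.Count;
            foreach (Vector2 p in sweep)
            {
                while (hull.Count - start >= 2 &&
                       Cross(hull[hull.Count - 2], hull[hull.Count - 1], p) <= 0f)
                    hull.RemoveAt(hull.Count - 1);
                hull.Add(p);
            }
            hull.RemoveAt(hull.Count - 1); // endpoint repeats as the next chain's start
        }
        return hull; // counter-clockwise hull vertices
    }

    static float Cross(Vector2 o, Vector2 a, Vector2 b)
        => (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);
}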
I'm doing research on generating planets for a game engine I'm planning to code, and I was wondering what would be the best approach to procedurally generate a planet, in terms of performance. So far I've seen the icosphere and cube-mapped sphere pop up the most, but I was wondering which of the two is faster to generate. My question is particularly aimed at LOD, since I hope to have gameplay similar to No Man's Sky.
Thanks in advance.
I would say an octahedron sphere would be best, but since they are all Platonic solids, they will perform similarly, so this premature optimization might not be worth it. (Here's a tutorial in Unity.)
The possible advantages of the octahedron are that the faces are triangles (unlike the cube) and there is one triangle for each octant of 3D space (unlike the icosphere and cube).
My rationale for octahedrons (and icospheres) being faster than cubes is that each face is already a triangle (whereas the cube has square faces). Adding detail to an octahedron, icosahedron, or cube usually means turning each triangle into four smaller triangles. During this generation, you create three new vertices whose positions need to be normalized so that the mesh is still properly inscribed in a unit sphere.
(Figure: Tessellating a Cube)
The octahedron and icosahedron can use a lookup table for this normalization factor (unlike the cube), because the factor is consistent within each iteration.
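A minimal sketch of one subdivision pass; the midpoint cache stops shared edges from duplicating vertices, and the normalize call is the per-vertex step described above. Names are illustrative.

using System.Collections.Generic;
using UnityEngine;

// Sketch: split every triangle into four and push the new vertices
// back out onto the unit sphere.
public static class SphereSubdivider
{
    public static void Subdivide(List<Vector3> vertices, List<int> triangles)
    {
        var midpointCache = new Dictionary<long, int>();
        List<int> result = new List<int>(triangles.Count * 4);

        for (int i = 0; i < triangles.Count; i += 3)
        {
            int a = triangles[i], b = triangles[i + 1], c = triangles[i + 2];
            int ab = Midpoint(a, b, vertices, midpointCache);
            int bc = Midpoint(b, c, vertices, midpointCache);
            int ca = Midpoint(c, a, vertices, midpointCache);

            // One original triangle becomes four smaller ones.
            result.AddRange(new[] { a, ab, ca,  ab, b, bc,  ca, bc, c,  ab, bc, ca });
        }
        triangles.Clear();
        triangles.AddRange(result);
    }

    // Returns the index of the midpoint of edge (i, j), creating it if needed.
    static int Midpoint(int i, int j, List<Vector3> vertices, Dictionary<long, int> cache)
    {
        long key = ((long)Mathf.Min(i, j) << 32) | (uint)Mathf.Max(i, j);
        if (cache.TryGetValue(key, out int index)) return index;

        // Normalize so the new vertex lies on the unit sphere.
        vertices.Add(((vertices[i] + vertices[j]) * 0.5f).normalized);
        index = vertices.Count - 1;
        cache[key] = index;
        return index;
    }
}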
Assuming you can write a custom mesh format, you might store the mesh for a given planet as an array (of size 4, 8, or 20) of quadtrees, because each triangle is optionally tessellated into four smaller triangles. This is essentially an LOD system, but you need to periodically decide whether to tessellate or reduce a portion of the mesh based on its distance from the camera. This system will likely be the bottleneck, since meshes have to be recalculated at runtime.
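A minimal sketch of the per-face tree node, with illustrative names; the split test against camera distance is the periodic decision mentioned above.

using UnityEngine;

// Sketch: one node of the per-face quadtree. A planet is then an array of
// 4, 8, or 20 root nodes, one per face of the chosen solid.
class TriNode
{
    public int v0, v1, v2;        // indices into the planet's vertex list
    public TriNode[] children;    // null for a leaf; 4 children when tessellated

    // Tessellate when the camera is close to this triangle's center.
    public bool ShouldSplit(Vector3 cameraPos, Vector3 center, float threshold)
        => children == null && (cameraPos - center).sqrMagnitude < threshold * threshold;
}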
I'm going to make a game similar to AudioSurf for iOS, and implement route generation based on certain parameters.
I used the Extrude Mesh example from the Unity Procedural Examples and this answer to a related question - link.
But on iOS there is considerable lag when extruding the object, and if the whole route is extruded early in the game, it takes a lot of time. There are also display problems with the track, since it consists of a large number of polygons...
Can you advise me on how to display the generated route, or on the best way to generate it?
You can tremendously increase performance in procedural mesh generation:
Instead of generating new meshes, edit the existing mesh's vertices and triangles arrays, i.e. use fixed-size vertex and triangle arrays.
Create a class RoadMesh and add functions to it like UpdateRoadMesh(Vector3 playerPosition).
This function calculates and sets the vertices and triangles at the far end of the road based on the player's current position.
When you reach the end of the vertex and triangle arrays while updating their values, start reusing the beginning indices, since the player will already have passed those points. Because no new mesh is created and the vertices are only edited once every few frames, this gives tremendous performance.
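A minimal sketch of that scheme, assuming two vertices per road cross-section. RoadMesh and UpdateRoadMesh come from the suggestion above (the signature here takes the new cross-section's edge positions rather than the player position), and the seam handling for the wrap-around quad is my own addition.

using UnityEngine;

// Sketch: a road strip with fixed-size vertex and triangle arrays. The oldest
// cross-section is recycled to extend the road ahead, so no new Mesh is
// ever allocated.
[RequireComponent(typeof(MeshFilter))]
public class RoadMesh : MonoBehaviour
{
    const int Sections = 64;            // cross-sections kept alive (assumption)

    Vector3[] vertices = new Vector3[Sections * 2];
    int[] triangles = new int[Sections * 6];   // one quad per section, incl. the seam
    Mesh mesh;
    int head;                           // ring index of the oldest cross-section

    void Start()
    {
        mesh = new Mesh();
        GetComponent<MeshFilter>().mesh = mesh;
        // Triangle indices are built once; only vertex positions change later.
        for (int i = 0; i < Sections; i++) SetQuad(i, enabled: i != Sections - 1);
        mesh.vertices = vertices;
        mesh.triangles = triangles;
    }

    // Call when the player passes the oldest section; left/right are the edge
    // positions of the new far cross-section.
    public void UpdateRoadMesh(Vector3 left, Vector3 right)
    {
        int prev = (head - 1 + Sections) % Sections;
        SetQuad(prev, enabled: true);    // quad behind the new section is valid again
        vertices[head * 2] = left;
        vertices[head * 2 + 1] = right;
        SetQuad(head, enabled: false);   // hide the quad spanning far end -> near end
        head = (head + 1) % Sections;

        mesh.vertices = vertices;        // same arrays reused; no new Mesh created
        mesh.triangles = triangles;
        mesh.RecalculateBounds();
    }

    // Quad i joins cross-sections i and (i+1) % Sections; disabled quads are
    // written as degenerate triangles (all indices 0).
    void SetQuad(int i, bool enabled)
    {
        int t = i * 6, v = i * 2, w = ((i + 1) % Sections) * 2;
        if (!enabled) { for (int k = 0; k < 6; k++) triangles[t + k] = 0; return; }
        triangles[t] = v;         triangles[t + 1] = w;     triangles[t + 2] = v + 1;
        triangles[t + 3] = v + 1; triangles[t + 4] = w;     triangles[t + 5] = w + 1;
    }
}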