I'm doing research on generating planets for a game engine I'm planning to code, and I was wondering what would be the best approach to procedurally generate a planet. (In terms of performance.) So far I've seen the Icosphere and Cubemapped Sphere pop up the most, but I was wondering which of the two is faster to generate. My question is particularly aimed at LOD, since I hope to have gameplay similar to No Man's Sky.
Thanks in advance.
I would say an octahedron sphere would be best, but since they are all based on Platonic solids, the results will be similar, so the premature optimization might not be worth it. (Here's a tutorial in Unity)
The possible advantages of the octahedron are that its faces are triangles (unlike the cube) and that there is one triangle per octant of 3D space (unlike the icosphere and the cube).
My rationale for octahedrons (and icospheres) being faster than cubes is that the face is already a triangle (whereas the cube has square faces). Adding detail for an octahedron, icosahedron, or cube usually means turning each triangle into four smaller triangles. During this generation, you create three new vertices whose positions need to be normalized so that the mesh remains properly inscribed in a unit sphere.
Tessellating a Cube
The octahedron and icosahedron can use a lookup table for this normalization factor (unlike the cube), because the value is consistent at each subdivision level.
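To make that concrete, here is a rough sketch of the subdivision step in Unity-style C# (the names are my own, and sharing of midpoint vertices between neighbouring triangles is left out for brevity):

```csharp
using System.Collections.Generic;
using UnityEngine;

static class SphereSubdivision
{
    // Split one triangle (a, b, c) into four, pushing the new midpoints back
    // onto the unit sphere so the mesh stays inscribed in it.
    public static void Subdivide(Vector3 a, Vector3 b, Vector3 c,
                                 List<Vector3> outTriangles)
    {
        Vector3 ab = ((a + b) * 0.5f).normalized;
        Vector3 bc = ((b + c) * 0.5f).normalized;
        Vector3 ca = ((c + a) * 0.5f).normalized;

        // One parent triangle becomes four children.
        outTriangles.AddRange(new[] { a, ab, ca,
                                      ab, b, bc,
                                      ca, bc, c,
                                      ab, bc, ca });
    }
}
```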
Assuming you can write a custom mesh format, you might store the mesh for a given planet as an array (of size 4, 8, or 20) of quad-trees, since each triangle is optionally tessellated into four smaller triangles. (This is essentially an LOD system, but you need to periodically decide whether to tessellate or reduce a portion of the mesh based on its distance from the camera.) This system will likely be the bottleneck, since meshes have to be recalculated at runtime.
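A minimal sketch of what one node of such a quad-tree might look like, assuming the split decision is made purely from camera distance (the names, threshold, and split rule here are all made up for illustration):

```csharp
using UnityEngine;

// One triangle of the planet; children are created or dropped per frame
// depending on how close the camera is.
class TriNode
{
    public Vector3 a, b, c;      // corner vertices on the unit sphere
    public TriNode[] children;   // null while this node is a leaf
    public int depth;

    public void Update(Vector3 cameraPos, float splitFactor, int maxDepth)
    {
        Vector3 center = ((a + b + c) / 3f).normalized;
        float size = Vector3.Distance(a, b);
        bool wantSplit = depth < maxDepth &&
                         Vector3.Distance(cameraPos, center) < size * splitFactor;

        if (wantSplit && children == null)
            Split();
        else if (!wantSplit && children != null)
            children = null;     // merge: drop the finer level again

        if (children != null)
            foreach (var child in children)
                child.Update(cameraPos, splitFactor, maxDepth);
    }

    void Split()
    {
        Vector3 ab = ((a + b) * 0.5f).normalized;
        Vector3 bc = ((b + c) * 0.5f).normalized;
        Vector3 ca = ((c + a) * 0.5f).normalized;
        children = new[]
        {
            new TriNode { a = a,  b = ab, c = ca, depth = depth + 1 },
            new TriNode { a = ab, b = b,  c = bc, depth = depth + 1 },
            new TriNode { a = ca, b = bc, c = c,  depth = depth + 1 },
            new TriNode { a = ab, b = bc, c = ca, depth = depth + 1 },
        };
    }
}
```

The planet itself would then just be an array of 8 (or 4, or 20) root nodes; each frame you walk the trees, collect the leaf triangles, and rebuild whatever part of the mesh changed.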
Related
Hello, I am creating a procedurally generated cave script. I have gotten all the Perlin noise work out of the way and am now trying to turn the vertices into a mesh. I understand that I need to declare the faces for it and need some form of marching cubes algorithm. For my script to know which direction to render a face in, it needs to be aware of all the vertices around it by searching through them. Is there any way my script can efficiently search through a Vector3 array to find whether a given Vector3 is in it, and if so, at what index?
If you're using a triangulation lookup table based implementation of marching cubes, you could store a normal vector alongside the face in the same table entry. A video by Sebastian Lague mentions using such a table. I'm not exactly sure where he downloaded it from, but he includes it in his repo which is MIT licensed. Video, Table (EDIT: The order of a triangle's vertices alone may be sufficient to define its direction, and you may not need an explicit normal vector)
Also, a heads up: classic Perlin noise tends to be visibly grid-aligned. Most of the time I see it used, it appears to be because a library provided it or because it was mentioned in a tutorial, not because it was actually the best choice for the application. Simplex-type noises generally produce less grid-aligned results. It's straightforward to import external noise into Unity, and it looks like you might need to anyway if your implementation depends on 3D noise. Here are the noises from my repo that use an open implementation for 3D, a tailored gradient table, and rotated evaluators that are good for terrain. There are plenty of other options out there too, though they may not have these particular properties. Note that the range is [-1, 1] rather than [0, 1], so if your existing code expects [0, 1] you may need to do noise(...) * 0.5 + 0.5 to correct for that. Choose the 2F version if you have a lot of octaves, or the 2S version if you have one octave or are doing ridged noise.
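As a small example of that remap (and of stacking octaves), with the actual noise function left as a parameter since it depends on which library you import:

```csharp
using System;

static class NoiseUtil
{
    // noise3 stands in for whatever 3D noise function you import; it is
    // assumed to return values in roughly [-1, 1].
    public static float Fbm(Func<float, float, float, float> noise3,
                            float x, float y, float z, int octaves)
    {
        float sum = 0f, amplitude = 1f, frequency = 1f, norm = 0f;
        for (int i = 0; i < octaves; i++)
        {
            sum += noise3(x * frequency, y * frequency, z * frequency) * amplitude;
            norm += amplitude;
            amplitude *= 0.5f;
            frequency *= 2f;
        }
        // Still roughly [-1, 1]; shift into [0, 1] if the rest of your
        // pipeline expects the Mathf.PerlinNoise-style range.
        return (sum / norm) * 0.5f + 0.5f;
    }
}
```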
I have a planet mesh that I am generating procedurally (based on a random seed). The generator creates tectonic plates and moves them, creating the land masses, mountains, and water depressions you see below; there are ~2500 points in a planet. The textured surface is just a shader.
Each vertex will have information associated with it, and I need to know which vertex they are pointing at to relay this information.
I am looking for a way to identify which vertex they are pointing at. The current solution is to generate a cube at each vertex, then use a collider/ray to identify it. The two white cubes in the picture above are for testing.
What I want to know is if there is a better way to identify the vertex without generating cubes?
If you're doing such awesome and advanced work, surely you know about this:
http://docs.unity3d.com/ScriptReference/RaycastHit-triangleIndex.html
When working with a mesh, it's totally commonplace to need the nearest vertex.
Note that, very simply, you just find the nearest one, i.e. loop over them all and keep the nearest.
(It's incredibly fast to do this; you only have a tiny number of verts, so there's no way the cost will even be measurable.)
{Consider that, of course, you could break the object into, say, 8 pieces - but that's just something you have to do anyway in many cases (for example a race track), so it can occlude properly.}
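For completeness, a hedged sketch of that brute-force search (hit.triangleIndex could narrow it down to the three vertices of the hit triangle, but with ~2500 vertices the full loop is already negligible):

```csharp
using UnityEngine;

static class VertexPicker
{
    // Raycast at the planet, then return the index of the mesh vertex closest
    // to the hit point, or -1 if nothing was hit.
    public static int NearestVertex(Ray ray, MeshCollider planetCollider)
    {
        if (!planetCollider.Raycast(ray, out RaycastHit hit, Mathf.Infinity))
            return -1;

        Mesh mesh = planetCollider.sharedMesh;
        Vector3 localHit = planetCollider.transform.InverseTransformPoint(hit.point);

        Vector3[] vertices = mesh.vertices;
        int best = 0;
        float bestSqr = float.MaxValue;
        for (int i = 0; i < vertices.Length; i++)
        {
            float d = (vertices[i] - localHit).sqrMagnitude;
            if (d < bestSqr) { bestSqr = d; best = i; }
        }
        return best;   // index into mesh.vertices
    }
}
```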
I'm running a marching squares algorithm (a relative of marching cubes) over an iso plane, then translating the data into a triangular mesh.
This works, but creates very complex mesh data. I would like to simplify this to the minimum triangles required, as seen illustrated below:
I have tried looping around contours (point -> segment -> point -> ...), but the contour can become inverted if a point has more than 2 attaching segments.
Ideally the solution should be fairly fast so that it can be done at runtime. The language I'm using is C#, but could probably port it from most other C-like languages.
This is a very common problem in 3D computer graphics. Algorithms solving this problem are called mesh simplification algorithms.
The problem is however much simpler to solve in the 2D case, as no surface normals need to be considered.
The first important thing is to have a reliable mesh data structure that provides operations to modify the mesh. A set of operations that can produce and modify any mesh is, for instance, the set of "Euler operators".
To simplify a 2D mesh, start from the dense version of the 2D mesh.
You can simply convert your quad mesh to a triangle mesh by splitting each quad along its diagonal.
Dense triangle mesh:
Then iteratively collapse the shortest edges that are not at the boundary.
"Edge collapse" is a typical mesh operation, depicted here:
After some steps your mesh will look like this:
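A very rough sketch of that collapse loop, using plain vertex/index lists instead of a proper half-edge structure, so treat it as an outline rather than a finished simplifier:

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

static class Simplify2D
{
    public static void CollapseShortEdges(List<Vector3> verts, List<int> tris,
                                          float minEdgeLength)
    {
        bool collapsed = true;
        while (collapsed)
        {
            collapsed = false;

            // Count triangle uses per edge: boundary edges belong to one
            // triangle, interior edges to two.
            var edgeCount = new Dictionary<(int, int), int>();
            for (int t = 0; t < tris.Count; t += 3)
                for (int c = 0; c < 3; c++)
                {
                    int i = tris[t + c], j = tris[t + (c + 1) % 3];
                    var key = i < j ? (i, j) : (j, i);
                    edgeCount[key] = edgeCount.TryGetValue(key, out int n) ? n + 1 : 1;
                }

            // Pick the shortest interior edge below the threshold, if any.
            var shortest = edgeCount
                .Where(kv => kv.Value > 1)
                .Select(kv => kv.Key)
                .Where(edge => Vector3.Distance(verts[edge.Item1], verts[edge.Item2]) < minEdgeLength)
                .OrderBy(edge => Vector3.Distance(verts[edge.Item1], verts[edge.Item2]))
                .Take(1);

            foreach (var (a, b) in shortest)
            {
                // Merge b into a at the edge midpoint (a real implementation
                // would pin boundary vertices so the outline is not distorted).
                verts[a] = (verts[a] + verts[b]) * 0.5f;
                for (int k = 0; k < tris.Count; k++)
                    if (tris[k] == b) tris[k] = a;

                // Drop triangles that became degenerate; vertex b is left unused.
                for (int t = tris.Count - 3; t >= 0; t -= 3)
                    if (tris[t] == tris[t + 1] || tris[t + 1] == tris[t + 2] || tris[t] == tris[t + 2])
                        tris.RemoveRange(t, 3);

                collapsed = true;
            }
        }
    }
}
```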
Simplifying a mesh generated by marching squares (as suggested in the answers above) is highly inefficient. To get a polygon out of a scalar field (e.g. a bitmap), you should first run a modified version of marching squares that only generates the polygon contour (i.e. in the 16 cases of marching squares you don't generate geometry, you just add points to a polygon), and after that you run a triangulation algorithm (e.g. Delaunay or ear clipping).
Do it with an intermediate step: go from your volume representation to the grid representation.
After that you can group the areas with cases 3, 6, 9, and 12 into bigger/longer versions.
You can also try to group squares into bigger squares, but it's an NP problem, so an ideal optimization could take a long time.
Then you can convert it into the polygon version.
After converting it to polygons you can fuse the edges.
EDIT: Use Ear Clipping on resulting contours:
http://www.geometrictools.com/Documentation/TriangulationByEarClipping.pdf
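For the "fuse the edges" step above, a small sketch that drops contour vertices lying on the straight line between their neighbours, so long runs of collinear marching-squares segments become single edges:

```csharp
using System.Collections.Generic;
using UnityEngine;

static class ContourUtil
{
    public static List<Vector2> FuseCollinear(List<Vector2> polygon, float epsilon = 1e-5f)
    {
        var result = new List<Vector2>();
        int n = polygon.Count;
        for (int i = 0; i < n; i++)
        {
            Vector2 prev = polygon[(i - 1 + n) % n];
            Vector2 cur  = polygon[i];
            Vector2 next = polygon[(i + 1) % n];

            // Cross product near zero means prev, cur, next are collinear,
            // so cur adds nothing and can be skipped.
            float cross = (cur.x - prev.x) * (next.y - prev.y)
                        - (cur.y - prev.y) * (next.x - prev.x);
            if (Mathf.Abs(cross) > epsilon)
                result.Add(cur);
        }
        return result;
    }
}
```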
I'm going to make a game similar to AudioSurf for iOS, and implement in it route generation based on certain parameters.
I used the Extrude Mesh example from the Unity Procedural Examples and this answer to a related question - link
But on iOS there is considerable lag when extruding the object, and if the whole route is extruded early in the game, it takes a lot of time. There is also a display problem with the track, since it consists of a large number of polygons...
Can you advise me on how to display the generated route, or what the best way to generate the route is?
You can tremendously increase the performance of procedural mesh generation:
- Instead of generating new meshes, edit the vertices and triangles arrays of an existing mesh, i.e. use fixed-size triangle and vertex arrays.
- Create a class "RoadMesh" and add functions to it like UpdateRoadMesh(Vector3 playerPosition). This function will calculate and set the values of the vertices and triangles at the far end of the road, depending on the current position of the player.
- While updating the array values, when you reach the end of the vertices and triangles arrays, start reusing the beginning indices, since the player will already have passed those points. Because no new mesh is created and the vertices are only edited once every few frames, this gives tremendous performance.
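A loose sketch of that idea, reusing the class and method names from the answer. This simplified version rewrites the whole visible window each update instead of doing the wrap-around index trick, and RouteSample is a placeholder for whatever route generator you end up with:

```csharp
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class RoadMesh : MonoBehaviour
{
    const int Segments = 64;                          // length of the visible window
    readonly Vector3[] vertices = new Vector3[(Segments + 1) * 2];
    readonly int[] triangles = new int[Segments * 6];
    Mesh mesh;

    void Awake()
    {
        mesh = new Mesh();
        mesh.MarkDynamic();                           // hint: we rewrite it often
        GetComponent<MeshFilter>().mesh = mesh;

        // The triangle indices never change; only vertex positions are rewritten.
        for (int s = 0; s < Segments; s++)
        {
            int t = s * 6, v = s * 2;
            triangles[t]     = v;     triangles[t + 1] = v + 2; triangles[t + 2] = v + 1;
            triangles[t + 3] = v + 1; triangles[t + 4] = v + 2; triangles[t + 5] = v + 3;
        }
    }

    public void UpdateRoadMesh(Vector3 playerPosition)
    {
        // Slide the window so it always starts just behind the player.
        float start = Mathf.Floor(playerPosition.z);
        for (int s = 0; s <= Segments; s++)
        {
            Vector3 center = RouteSample(start + s);
            vertices[s * 2]     = center + Vector3.left;
            vertices[s * 2 + 1] = center + Vector3.right;
        }

        mesh.vertices = vertices;                     // same arrays every time
        mesh.triangles = triangles;
        mesh.RecalculateNormals();
    }

    Vector3 RouteSample(float z)
    {
        // Placeholder route: a gentle sine curve; swap in your generated route.
        return new Vector3(Mathf.Sin(z * 0.1f) * 3f, 0f, z);
    }
}
```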
In the Farseer Physics engine / XNA, what is ConvertUnits.ToDisplayUnits()?
Farseer (or rather Box2D, which it is derived from) is tuned to work best with objects that range from 0.1 to 10 units in weight, and 0.1 to 10 units in size (width or height). If you use objects outside this range, your simulation may not be as stable as it otherwise could be.
Most of the time this works well for "regular" sized objects you might find in a game (cars, books, etc), as measured in meters and kilograms. However this is not mandatory and you can, in fact, choose any scale. (For example: games involving marbles, or aeroplanes, might use a scale other than meters/kilograms).
Most games have various spaces. For example: "Model" space, "Projection" space, "View" space, "World" space, "Screen" space, "Client" space. Some are measured in pixels, others in plain units. And in general games use matrices to convert vertices from one space to another. Most obviously when taking a world measured in units, and displaying it on a screen measured in pixels.
XNA's SpriteBatch simplifies this a fair bit, by default, by having the world space be the same as client space. One world unit = one pixel.
Normally you should have your world space defined to be identical to the space your physics world exists in. But this would be a problem when using SpriteBatch's default space - you could then not have a physics object larger than 10 pixels without going outside the range that Farseer is tuned for.
Farseer's[1] solution is to have two different world spaces - the game space and the physics space. And use the ConvertUnits class everywhere it needs to convert between these two systems.
I personally find this solution pretty damn awful, as it is highly error-prone (as you have to get the conversion correct in multiple places spread throughout your code).
For any modestly serious game development effort, I would recommend using a unified world space, designed around what Farseer requires. And then either use a global transform to SpriteBatch.Begin, or something completely other than SpriteBatch, to render that world to screen.
However, for simple demos, ConvertUnits does the job. And it lets you keep the nice SpriteBatch property that one pixel in an unscaled sprite's texture = one pixel on screen.
[1]: last time I checked, ConvertUnits was part of the Farseer samples, and not part of the physics library itself.
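As a sketch of the global-transform option, assuming the standard XNA game template (the 64 pixels-per-metre ratio is an arbitrary choice, and note that scaling this way also scales your sprite textures, which is exactly the trade-off mentioned above):

```csharp
// Inside your Game subclass: keep everything in simulation units (metres)
// and apply one world-to-screen scale when drawing.
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    Matrix worldToScreen = Matrix.CreateScale(64f);   // 1 metre -> 64 pixels

    spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, worldToScreen);
    // ... draw sprites at positions measured in metres ...
    spriteBatch.End();

    base.Draw(gameTime);
}
```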
I haven't dealt with that particular chunk of code, but most games that have a virtual space (the game world) will have a function similar to 'ToDisplayUnits', and its function is to convert the game world's physical units to display units in XNA.
An example would be meters to pixels, or meters to x,y screen coordinates.
Having this is good, because it allows you to do all your math in physics units, keeping everything abstract, and then translate things to the game screen separately.
Farseer uses MKS (metre, kilogram, second) units of measure. It provides methods to convert display units to MKS units and vice versa: ToSimUnits() and ToDisplayUnits().
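A tiny usage illustration, assuming the ConvertUnits helper that ships with the Farseer samples and a Farseer Body called body:

```csharp
// Simulation (metres) -> display (pixels), e.g. to position a sprite.
Vector2 screenPos = ConvertUnits.ToDisplayUnits(body.Position);

// Display (pixels) -> simulation (metres), e.g. to size a fixture from a texture.
float simWidth = ConvertUnits.ToSimUnits(spriteTexture.Width);
```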