Can DirectX 3D Meshes be merged, or joined together - c#

C# programmer, beginner DirectX.
Have created 2 meshes using Mesh.Cylinder but need to combine them into a single mesh. Is that possible?

Yeah, that's doable. You have a transform matrix for both meshes, presumably?
Lock both meshes, then take the first mesh (I'll assume we add it to the second) and transform its vertices one by one by the matrix that maps from cylinder 1's local space to cylinder 2's local space (i.e. [cylinder 1 world transform] * [inverse of cylinder 2 world transform]). Set up the correct indices and you have now added mesh 1 to mesh 2.
It gets more complicated if you want both meshes to intersect properly. If you want that, I suggest you look into Constructive Solid Geometry (CSG). There are plenty of links to be found on Google on the subject.
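The vertex-transform step above can be sketched in plain C# with System.Numerics (no DirectX types needed); `world1` and `world2` stand in for the two cylinders' world transforms, and the class and method names are just illustrative:

```csharp
using System;
using System.Numerics;

// Sketch: bake mesh 1's vertices into mesh 2's local space, so they can be
// appended to mesh 2's vertex buffer. Names here are illustrative only.
static class MeshMerge
{
    public static Vector3[] ToOtherLocalSpace(Vector3[] verts1,
                                              Matrix4x4 world1,
                                              Matrix4x4 world2)
    {
        // local1 -> world -> local2, i.e. world1 * inverse(world2).
        // System.Numerics uses row vectors, so matrices compose left to right.
        Matrix4x4 invWorld2;
        if (!Matrix4x4.Invert(world2, out invWorld2))
            throw new InvalidOperationException("world2 is not invertible");
        Matrix4x4 toLocal2 = world1 * invWorld2;

        var result = new Vector3[verts1.Length];
        for (int i = 0; i < verts1.Length; i++)
            result[i] = Vector3.Transform(verts1[i], toLocal2);
        return result;
    }
}
```

After transforming, append the result to mesh 2's vertex buffer and append mesh 1's indices offset by mesh 2's original vertex count.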

Related

Icosphere versus Cubemapped Sphere

I'm doing research on generating planets for a game engine I'm planning to code, and I was wondering what would be the best approach to procedurally generate a planet. (In terms of performance.) So far I've seen the Icosphere and Cubemapped Sphere pop up the most, but I was wondering which of the two is faster to generate. My question is particularly aimed at LOD, since I hope to have gameplay similar to No Man's Sky.
Thanks in advance.
I would say an octahedron sphere would be best, but since they are all Platonic solids they will be similar, so the premature optimization might not be worth it. (Here's a tutorial in Unity)
The possible advantages of the octahedron are that the faces are triangles (unlike the cube) and there is one triangle for each quadrant in 3d space (unlike the icosphere and cube).
My rationale behind octahedrons (and icospheres) being faster than cubes lies in the fact that the face is already a triangle (whereas the cube has a square face). Adding detail for an octahedron, icosahedron, or cube usually means turning each triangle into four smaller triangles. During this generation, you create three new vertices whose positions will need to be normalized so that the mesh is still properly inscribed in a unit-sphere.
Tessellating a Cube
Octahedron and icosahedron can use a lookup table that fetches this normalization factor (as opposed to the cube) because the number is consistent for each iteration.
Assuming you can write a custom mesh format, you might store the mesh for a given planet through an array (size 4, 8, or 20) of quad-trees (because each triangle is optionally tessellated into four additional triangles). (This is essentially a LOD system, but you need to periodically determine whether or not to tessellate or reduce a portion of the mesh based on the distance from the camera.) This system will likely be the bottleneck since meshes have to be recalculated at runtime.

How to get common part of two flat meshes?

I am working on cutting meshes. What exactly I am trying to achieve:
Input - two meshes (procedurally generated, of course)
Output - three meshes (or more, depending on the given meshes)
A - which is mesh A minus mesh B,
B - which is the common part of meshes A and B,
C - which is mesh B minus mesh A.
I am using a Triangulator that creates triangles for a given set of vertices. All I have to do is give it those vertices, and here goes the question. Is there any algorithm that would help me do this? Maybe a brilliant idea came to your mind when you saw this picture? I am working in Unity (C#), so anything related to Unity tools is helpful as well. Thanks!
EDIT:
I am using this clipping library: http://sourceforge.net/projects/polyclipping/ and everything works fine until this case:
I am trying to get the difference A-B and it should look like the one from Picture 2), unfortunately the output is mesh A and mesh B as in the picture 1).
What i do:
Clipper c = new Clipper();
c.AddPath(here go the vertices of mesh A, PolyType.ptSubject, true);
c.AddPath(here go the vertices of mesh B, PolyType.ptClip, true);
c.Execute(ClipType.ctDifference, a list of lists for my output, PolyFillType.pftNonZero, PolyFillType.pftNonZero);
I've already tried changing the PolyFillTypes but it changed nothing. And here I am, asking for your advice :)
It is enough to work with the mesh boundaries. With those you have two polygons and the problem of finding their intersection. An external library can be used for this, like GPC.
After the intersection is found, just triangulate the three resulting polygons.
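For the "common part" case specifically, the boundary intersection can be sketched without any external library. This is a minimal Sutherland-Hodgman clip (not the Clipper or GPC API), which assumes the clip polygon is convex and both polygons are wound counter-clockwise:

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

// Minimal Sutherland-Hodgman sketch: clips a subject polygon against a
// CONVEX clip polygon, returning their common part (A intersect B).
static class PolyClip
{
    // True if p lies on or to the left of directed edge a -> b (CCW inside).
    static bool Inside(Vector2 p, Vector2 a, Vector2 b)
        => (b.X - a.X) * (p.Y - a.Y) - (b.Y - a.Y) * (p.X - a.X) >= 0;

    // Intersection of segment p1-p2 with the infinite line through a-b.
    static Vector2 Intersect(Vector2 p1, Vector2 p2, Vector2 a, Vector2 b)
    {
        Vector2 d1 = p2 - p1, d2 = b - a;
        float t = ((a.X - p1.X) * d2.Y - (a.Y - p1.Y) * d2.X)
                / (d1.X * d2.Y - d1.Y * d2.X);
        return p1 + t * d1;
    }

    public static List<Vector2> Intersection(List<Vector2> subject, List<Vector2> clip)
    {
        var output = new List<Vector2>(subject);
        for (int i = 0; i < clip.Count; i++)     // clip against each edge in turn
        {
            Vector2 a = clip[i], b = clip[(i + 1) % clip.Count];
            var input = output;
            output = new List<Vector2>();
            for (int j = 0; j < input.Count; j++)
            {
                Vector2 cur = input[j];
                Vector2 prev = input[(j + input.Count - 1) % input.Count];
                bool curIn = Inside(cur, a, b), prevIn = Inside(prev, a, b);
                if (curIn)
                {
                    if (!prevIn) output.Add(Intersect(prev, cur, a, b));
                    output.Add(cur);
                }
                else if (prevIn)
                {
                    output.Add(Intersect(prev, cur, a, b));
                }
            }
            if (output.Count == 0) break;        // polygons do not overlap
        }
        return output;
    }
}
```

The resulting boundary polygon can then be handed to the Triangulator mentioned in the question. For the A-minus-B and B-minus-A pieces, or for concave clip polygons, a full clipping library like Clipper or GPC is still the right tool.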

Unity3D C#: Drawing border of a hexagon

I'm fairly new to Unity, C#, and anything GUI-related, but I'm trying to make a simple hexagonal grid. I followed this tutorial: http://forum.unity3d.com/threads/procedural-hexagon-terrain-tutorial.233296/ and it created nice chunks of hexes. Now I'd like to draw a border line around each hex. I tried to use ToonShader, but it doesn't seem to work with such a structure. I also tried a LineRenderer in every hex, containing the coordinates of its edge points, but after some reading I realised that I would probably need 6 LineRenderers for each hex. Here comes my question: does using so many LineRenderers make any sense? Is there a more convenient (I'm sure there is) or prettier way to do this?
Thanks in advance.
LineRenderer supports more than one line segment, :) so you only need 1 LineRenderer per hex. You can do this:
LineRenderer lineRenderer = ... ; // Add or get the LineRenderer component on the game object
lineRenderer.SetVertexCount(7); // 6 + 1: the last vertex closes the loop back to the first
for (int i = 0; i < 7; i++) {
    Vector3 pos = ... ; // position of the i-th hex corner
    lineRenderer.SetPosition(i, pos);
}
Though this might not be very efficient depending on how many tiles you have. If you have a lot of them, but not too many, you could create a single LineRenderer for all tiles. And if you have really many, subdivide the map into regions of X x Y tiles and generate one LineRenderer per region. Also note that adjacent hexes can share lines if you go with those methods.
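The corner positions fed to the LineRenderer above can be computed as a simple sketch in plain C# (no Unity types, so it is easy to test); this assumes a flat-top hex of circumradius `radius` centered at `(cx, cy)`:

```csharp
using System;

// Sketch: the 6 corner positions of a flat-top hexagon, plus a 7th point
// repeating the first so the LineRenderer loop closes.
static class Hex
{
    public static (float x, float y)[] Corners(float cx, float cy, float radius)
    {
        var pts = new (float x, float y)[7];
        for (int i = 0; i < 7; i++)
        {
            // Flat-top orientation: corners at 0, 60, 120, ... degrees.
            double angle = Math.PI / 180.0 * (60.0 * i);
            pts[i] = (cx + radius * (float)Math.Cos(angle),
                      cy + radius * (float)Math.Sin(angle));
        }
        return pts;
    }
}
```

In Unity you would map each `(x, y)` pair into a `Vector3` in the plane of your grid before calling `SetPosition`. For a pointy-top grid, offset each angle by 30 degrees.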
If you don't mind spending about $20, Vectrosity is a nice Unity plugin that allows you to draw lines. I used it in a graph drawing application that I made with Unity to draw vertices between nodes.

How to simplify a marching squares mesh?

I'm running a marching squares (relative of the marching cubes) algorithm over an iso plane, then translating the data into a triangular mesh.
This works, but creates very complex mesh data. I would like to simplify this to the minimum triangles required, as seen illustrated below:
I have tried looping around contours (point -> segment -> point -> ...), but the contour can become inverted if a point has more than 2 attaching segments.
Ideally the solution should be fairly fast so that it can be done at runtime. The language I'm using is C#, but could probably port it from most other C-like languages.
This is a very common problem in 3D computer graphics. Algorithms solving this problem are called mesh simplification algorithms.
The problem is however much simpler to solve in the 2D case, as no surface normals need to be considered.
The first important thing is to have a reliable mesh data structure that provides operations to modify the mesh. A set of operations that can produce and modify any mesh is, for instance, the set of "Euler operators".
To simplify the 2D mesh, start from a dense version of it.
You can convert your quad mesh to a triangle mesh simply by
splitting each quad along its diagonal.
Dense triangle mesh:
Then iteratively collapse the shortest edges that are not on the boundary.
"Edge collapse" is a typical mesh operation, depicted here:
After some steps your mesh will look like this:
Simplifying a mesh generated by marching squares (as suggested in the other answers) is highly inefficient. To get a polygon out of a scalar field (e.g. a bitmap), you should first run a modified version of marching squares that only generates the polygon contour (i.e., in the 16 marching-squares cases you don't emit geometry, you just add points to a polygon), and then run a triangulation algorithm (e.g. Delaunay or ear clipping).
Do it with an intermediate step: go from your volume representation to the grid representation.
After that you can group the areas with cases 3, 6, 9, 12 into bigger/longer versions.
You can also try to group squares into bigger squares, but that is an NP-hard problem in general, so an ideal optimization could take a long time.
Then you can convert it to the polygon version.
After converting it to polygons you can fuse the edges.
EDIT: Use Ear Clipping on resulting contours:
http://www.geometrictools.com/Documentation/TriangulationByEarClipping.pdf
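The ear-clipping step linked above can be sketched as follows. This is a minimal version for a simple counter-clockwise polygon without holes (not the Geometric Tools implementation, and O(n^3) rather than optimized):

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

// Minimal ear clipping: triangulates a simple CCW polygon, returning
// index triples into 'pts'.
static class EarClip
{
    static float Cross(Vector2 o, Vector2 a, Vector2 b)
        => (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);

    static bool PointInTri(Vector2 p, Vector2 a, Vector2 b, Vector2 c)
        => Cross(a, b, p) >= 0 && Cross(b, c, p) >= 0 && Cross(c, a, p) >= 0;

    public static List<(int, int, int)> Triangulate(IList<Vector2> pts)
    {
        var idx = new List<int>();
        for (int i = 0; i < pts.Count; i++) idx.Add(i);
        var tris = new List<(int, int, int)>();

        while (idx.Count > 3)
        {
            bool clipped = false;
            for (int i = 0; i < idx.Count; i++)
            {
                int i0 = idx[(i + idx.Count - 1) % idx.Count];
                int i1 = idx[i];
                int i2 = idx[(i + 1) % idx.Count];
                Vector2 a = pts[i0], b = pts[i1], c = pts[i2];
                if (Cross(a, b, c) <= 0) continue;        // reflex corner, not an ear
                bool ear = true;
                foreach (int j in idx)                    // no other vertex may lie inside
                {
                    if (j == i0 || j == i1 || j == i2) continue;
                    if (PointInTri(pts[j], a, b, c)) { ear = false; break; }
                }
                if (!ear) continue;
                tris.Add((i0, i1, i2));                   // clip the ear off
                idx.RemoveAt(i);
                clipped = true;
                break;
            }
            if (!clipped) break;                          // degenerate input; bail out
        }
        if (idx.Count == 3) tris.Add((idx[0], idx[1], idx[2]));
        return tris;
    }
}
```

A simple polygon with n vertices always yields exactly n - 2 triangles, which is the "minimum triangles" result the question asks for along its contour.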

How to improve performance while generating (extruding) mesh in unity3d on iOS?

I'm going to make a game similar to AudioSurf for iOS, and implement "generation of a route to certain parameters" in it.
I used the Extrude Mesh example from the Unity Procedural Examples and this answer to a question - link
But on iOS there is considerable lag when extruding the object, and if the whole route is extruded early in the game it takes a lot of time. There are also display problems with the track, which consists of a large number of polygons...
Can you advise me on how to display the generated route, or what the best way to generate the route is?
You can tremendously increase performance in procedural mesh generation:
- Instead of generating new meshes, edit the vertices and triangles arrays of the existing mesh, i.e. use fixed-size triangle and vertex arrays.
Create a class "RoadMesh" and add functions to it like UpdateRoadMesh(Vector3 playerPosition).
This function will calculate and set the values of the vertices and triangles at the far end of the road depending on the current position of the player.
When you reach the end of the vertices and triangles arrays while updating, start reusing the beginning indices, since the player will already have passed those points. As no new mesh is created and the vertices are only edited once every few frames, this gives tremendous performance.
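The wrap-around buffer described above can be sketched in plain C#. The `RoadMesh` and `UpdateRoadMesh` names are the answer's own suggestions; the segment spacing and ring size here are illustrative assumptions, and the actual vertex rewriting is only indicated in a comment:

```csharp
using System;

// Sketch of the fixed-size, wrap-around segment buffer: 'SegmentCount'
// road cross-sections are reused in a ring, so no new mesh is allocated.
class RoadMesh
{
    const int SegmentCount = 64;            // fixed ring size (assumption)
    public readonly float[] SegmentZ = new float[SegmentCount];
    int head;                               // index of the oldest segment
    float nextZ;                            // z of the next segment to place

    public float Spacing { get; }

    public RoadMesh(float spacing)
    {
        Spacing = spacing;
        for (int i = 0; i < SegmentCount; i++)
        { SegmentZ[i] = nextZ; nextZ += spacing; }
    }

    // Recycle segments the player has passed: overwrite the oldest slot
    // with a new cross-section at the far end of the road.
    public void UpdateRoadMesh(float playerZ)
    {
        while (SegmentZ[head] < playerZ - Spacing)
        {
            SegmentZ[head] = nextZ;
            nextZ += Spacing;
            head = (head + 1) % SegmentCount;   // ring buffer wrap-around
        }
        // In Unity you would now rewrite only the matching slices of the
        // mesh.vertices array and reassign it, instead of building a new Mesh.
    }
}
```

Because only a handful of slots change per call, the per-frame cost stays constant no matter how long the route gets.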
