I imported a generated 3D model of an MRI scan into Unity. Unfortunately the model is split into a lot of slices which make up the whole model.
The hierarchy is like this:
As you can imagine this takes a lot of resources and slows down further processing. So I want to combine the children (default_MeshPart0, default_MeshPart1, ...) of "default" into one mesh. I found Mesh Combine Wizard, which works perfectly for objects with only a few children, but it doesn't work with a larger number of child meshes. I tried moving some children into separate parent objects, combining those separately, and then combining the new parents later on, but that doesn't work either: I got a random-looking mesh.
Do you have an idea how to fix this or do you recommend me another process?
Perhaps you need to do some post-scan processing.
Try:
Open the scan model in 3D software such as Maya, 3ds Max or Blender.
Attach or merge all the parts together.
Export it as .fbx.
Import it into Unity.
In Unity it will be a single GameObject.
Combining the meshes is actually pretty straightforward (I'll run you through it in a minute). There is a gotcha, however: while it's easy to do at runtime, I've had some problems trying to reliably save the resulting mesh as an asset file on disk. I did get there in the end, but it was trickier than it seemed. You don't need to save the asset if you're okay with this being done in Start(); you can also do it in edit mode, but the resulting mesh will not be automatically saved with the scene. Just something to keep in mind.
Back to geometry - a mesh consists of three arrays:
Vector3[] verts;
Vector2[] uv;
int[] triangles;
The only thing to wrap your head around is the indexing: if you have n vertices, verts[i] and uv[i] describe the i-th vertex (obviously), while the triangles array holds three vertex indices per triangle, so a mesh with m triangles has a triangles array of length 3*m. It goes in sequence, so if you want to find triangle k you read
x=triangles[k*3]
y=triangles[k*3+1]
z=triangles[k*3+2]
and then you have your triangle defined by the three vectors verts[x], verts[y], verts[z].
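In code, the lookup described above comes out like this (a minimal sketch; `mesh` is assumed to be any Unity Mesh you already have, and `GetTriangle` is a hypothetical helper name):

```csharp
using UnityEngine;

public static class TriangleReader
{
    // Returns the three corner positions of triangle k.
    public static Vector3[] GetTriangle(Mesh mesh, int k)
    {
        int[] triangles = mesh.triangles; // 3 vertex indices per triangle
        Vector3[] verts = mesh.vertices;

        int x = triangles[k * 3];
        int y = triangles[k * 3 + 1];
        int z = triangles[k * 3 + 2];

        return new[] { verts[x], verts[y], verts[z] };
    }
}
```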
There are also submeshes, used when a mesh contains multiple materials, but they're not complicated once you know how the above structure is laid out.
To create the new mesh, first call GetComponentsInChildren, grab a list of all the child meshes, and count the total vertex count j.
Create a new Mesh object, initialise it with verts = new Vector3[j] and a triangle array sized to the total triangle count, then just copy all the vertex and index data across (offsetting each child's indices by the number of vertices copied so far). Once you have the mesh, create a new object with a MeshFilter and a MeshRenderer, feed the mesh to it, and you're done. The problem is that this will disappear when you leave play mode unless you save the asset, but that's a separate question.
If you have submeshes, you need to list all the materials that are present and group triangles by the material they use (one submesh per material). Aside from managing another list this isn't much harder. Nothing terrible happens if you ignore that step; you'll just get a single solid object, which is great for performance in most cases, as your object becomes a single draw call without depending on dynamic batching.
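As a shortcut for the manual array copying described above, Unity's built-in Mesh.CombineMeshes can do the merge for you. Here is a minimal sketch (the class name is made up; it assumes the script sits on the "default" parent and that you assign a material yourself):

```csharp
using UnityEngine;

public class ChildMeshCombiner : MonoBehaviour
{
    public void Combine()
    {
        MeshFilter[] filters = GetComponentsInChildren<MeshFilter>();
        var combine = new CombineInstance[filters.Length];

        for (int i = 0; i < filters.Length; i++)
        {
            combine[i].mesh = filters[i].sharedMesh;
            // Bake each child's transform so the pieces keep their placement.
            combine[i].transform = filters[i].transform.localToWorldMatrix;
            filters[i].gameObject.SetActive(false);
        }

        var merged = new Mesh();
        // A scan with many slices can easily exceed 65,535 vertices,
        // so switch to a 32-bit index buffer before combining.
        merged.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
        merged.CombineMeshes(combine);

        gameObject.AddComponent<MeshFilter>().sharedMesh = merged;
        // Assumes you add a MeshRenderer and assign a material yourself.
    }
}
```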
It shouldn't be a problem if it's created for Unity3D. Anyway, if you want a single mesh, you will need a 3D modelling tool to combine it, then export in FBX format according to the Unity guidelines as explained here. I hope this helps you.
I'm making a program where a large and complicated mesh is procedurally generated. At first I generated a large number of smaller meshes, which works just fine, but then I decided to merge them together and some triangles got corrupted.
After a bit of looking I managed to pinpoint the problem:
Debug.Log(fullTriangles == fullTriangles);
boxMesh.triangles = fullTriangles;
Debug.Log(boxMesh.triangles == fullTriangles);
This only occurs when I use the large mesh. For the many smaller meshes both debugs return true.
Here's a picture of the mesh. It loops on itself in many places, is in no way convex, has several floating islands, and is in general very difficult from a rendering perspective.
Other information that might matter:
The many small meshes within the large mesh do not share any vertices, but some vertices have the same positions
Each small mesh is made of one or several triangles, which do share vertices
The small meshes are not submeshes
Why does this happen, and what can I do to fix it?
I'm guessing that your mesh has too many vertices; take a look at the Mesh index format. Unity's default 16-bit index buffer limits a mesh to 65,535 vertices.
To avoid this problem, you could either run a mesh welder or batch smaller meshes together. You could also merge smaller meshes into a bigger one while keeping count of the merged mesh's vertices: if it exceeds the vertex count limit, start another merged mesh (and so on).
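The "keep count and start another mesh" idea above can be sketched like this (a hypothetical helper, assuming Unity's default 16-bit index limit; the class and method names are made up):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class MeshBatcher
{
    // Unity's default (UInt16) index format caps a mesh at 65,535 vertices.
    const int MaxVerts = 65535;

    // Greedily group source meshes so each merged mesh stays under the limit.
    public static List<List<Mesh>> GroupByVertexBudget(IEnumerable<Mesh> meshes)
    {
        var groups = new List<List<Mesh>>();
        var current = new List<Mesh>();
        int count = 0;

        foreach (Mesh m in meshes)
        {
            if (count + m.vertexCount > MaxVerts && current.Count > 0)
            {
                groups.Add(current);        // this batch is full; start a new one
                current = new List<Mesh>();
                count = 0;
            }
            current.Add(m);
            count += m.vertexCount;
        }
        if (current.Count > 0) groups.Add(current);
        return groups;
    }
}
```

Each resulting group can then be merged into one mesh safely.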
As a side note, keep in mind that comparing arrays with "==" will not compare the values in the arrays, but the array references. You could use Enumerable.SequenceEqual, or run a simple for loop.
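A quick plain-C# illustration of that reference-vs-value distinction:

```csharp
using System;
using System.Linq;

class ArrayCompareDemo
{
    static void Main()
    {
        int[] a = { 0, 1, 2 };
        int[] b = { 0, 1, 2 };

        Console.WriteLine(a == b);             // False: two different references
        Console.WriteLine(a.SequenceEqual(b)); // True: element-by-element comparison
    }
}
```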
I have a planet mesh that I am generating procedurally (based on a random seed). The generator creates tectonic plates and moves them creating the land masses, mountains and water depressions you see below; ~2500 points in a planet. The textured surface is just a shader.
Each vertex will have information associated with it, and I need to know which vertex the player is pointing at to relay this information.
I am looking for a way to identify which vertex they are pointing at. The current solution is to generate a cube at each vertex, then use a collider/ray to identify it. The two white cubes in the picture above are for testing.
What I want to know is if there is a better way to identify the vertex without generating cubes?
if you're doing such awesome and advanced work, surely you know about this
http://docs.unity3d.com/ScriptReference/RaycastHit-triangleIndex.html
when working with meshes, it's totally commonplace to need the nearest vertex.
note that very simply, you just find the nearest one, ie look over them all for the nearest.
(it's incredibly fast to do this, you only have a tiny number of verts so there's no way performance will even be measurable.)
{Consider that, of course, you could break the object into say 8 pieces - but that's just something you have to do anyway in many cases, for example a race track, so it can occlude properly.}
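Putting the two ideas above together, a minimal sketch might look like this (assumes the planet has a MeshCollider, which RaycastHit.triangleIndex requires; the class and method names are made up):

```csharp
using UnityEngine;

public static class NearestVertex
{
    // Returns the index (into mesh.vertices) of the hit triangle's
    // corner that is closest to the ray's hit point.
    public static int Find(RaycastHit hit)
    {
        var filter = hit.collider.GetComponent<MeshFilter>();
        Mesh mesh = filter.sharedMesh;
        int[] tris = mesh.triangles;
        Vector3[] verts = mesh.vertices;

        // The ray hit one specific triangle, so only its three
        // corners are candidates for the nearest vertex.
        int best = -1;
        float bestDist = float.MaxValue;
        for (int i = 0; i < 3; i++)
        {
            int v = tris[hit.triangleIndex * 3 + i];
            // Compare in world space, since hit.point is in world space.
            float d = (filter.transform.TransformPoint(verts[v]) - hit.point).sqrMagnitude;
            if (d < bestDist) { bestDist = d; best = v; }
        }
        return best;
    }
}
```

With ~2500 vertices, even a brute-force scan over all of them would be fine; the triangleIndex trick just narrows it to three candidates.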
I'm doing research on generating planets for a game engine I'm planning to code, and I was wondering what would be the best approach to procedurally generate a planet. (In terms of performance.) So far I've seen the Icosphere and Cubemapped Sphere pop up the most, but I was wondering which of the two is faster to generate. My question is particularly aimed at LOD, since I hope to have gameplay similar to No Man's Sky.
Thanks in advance.
I would say an octahedron sphere would be best, but since they are all Platonic solids they will perform similarly, so the premature optimization might not be worth it. (Here's a tutorial in Unity.)
The possible advantages of the octahedron are that the faces are triangles (unlike the cube) and there is one triangle for each quadrant in 3d space (unlike the icosphere and cube).
My rationale behind octahedrons (and icospheres) being faster than cubes lies in the fact that the face is already a triangle (whereas the cube has a square face). Adding detail for an octahedron, icosahedron, or cube usually means turning each triangle into four smaller triangles. During this generation, you create three new vertices whose positions will need to be normalized so that the mesh is still properly inscribed in a unit-sphere.
Tessellating a Cube
The octahedron and icosahedron can use a lookup table for this normalization factor (as opposed to the cube), because the number is consistent for each iteration.
Assuming you can write a custom mesh format, you might store the mesh for a given planet as an array (of size 4, 8, or 20 - one entry per base face) of quadtrees, because each triangle is optionally tessellated into four smaller triangles. This is essentially an LOD system, but you need to periodically decide whether to tessellate or reduce a portion of the mesh based on distance from the camera. This system will likely be the bottleneck, since meshes have to be recalculated at runtime.
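The quadtree-per-face idea above might be sketched like this (hypothetical names; each node is one triangle on the sphere, and subdividing pushes the new midpoints back onto the unit sphere as described earlier):

```csharp
using UnityEngine;

public class TriNode
{
    public Vector3 A, B, C;    // corners, all on the unit sphere
    public TriNode[] Children; // null until this node is tessellated

    public TriNode(Vector3 a, Vector3 b, Vector3 c) { A = a; B = b; C = c; }

    public void Subdivide()
    {
        // Midpoints are normalized so the refined mesh stays
        // inscribed in the unit sphere.
        Vector3 ab = ((A + B) * 0.5f).normalized;
        Vector3 bc = ((B + C) * 0.5f).normalized;
        Vector3 ca = ((C + A) * 0.5f).normalized;

        Children = new[]
        {
            new TriNode(A, ab, ca),
            new TriNode(ab, B, bc),
            new TriNode(ca, bc, C),
            new TriNode(ab, bc, ca), // center triangle
        };
    }
}
```

An LOD pass would walk each of the 4/8/20 root nodes, calling Subdivide() on nodes near the camera and dropping Children on nodes far away.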
I'm making a 2D top down survival game, maximum sprite count for the top level is 100 sprites.
When I use a random generator for their vector positions, occasionally I'll get some overlap between sprites.
So to combat this I'm going to store some pre-defined positions.
The question
So my question is: what would be the best way of storing these? Initially I was going to store them in an array, but I'm thinking that storing them in a text file and reading them in at the start of the game would be a better solution.
I'm a beginner so if anyone could give any advice on this it would be much appreciated.
Many thanks
Yes, store them in a CSV (comma-separated values) text file, or if you want, use a database, although I would recommend the former; database storage sounds like overkill in your situation. On start-up you would load the values into an array. If you read from the file on demand instead, the game will lag every time it fetches a value. You just need the text file for persistent storage and then the array for usage.
Hope this clears your issue up for you!
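A minimal sketch of the load-once approach (the file name and "x,y per line" format are assumptions, not a standard):

```csharp
using System.IO;
using UnityEngine;

public static class SpawnPositions
{
    // Reads one "x,y" pair per line into an array, once at start-up.
    public static Vector2[] LoadFromCsv(string path)
    {
        string[] lines = File.ReadAllLines(path);
        var positions = new Vector2[lines.Length];
        for (int i = 0; i < lines.Length; i++)
        {
            string[] parts = lines[i].Split(',');
            positions[i] = new Vector2(
                float.Parse(parts[0]),
                float.Parse(parts[1]));
        }
        return positions;
    }
}
```

After this runs once, the game only ever touches the in-memory array.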
Why don't you just check whether the positions of the sprites overlap? If the sprites don't overlap often, this shouldn't add too many calculations, and it grants more randomness than a fixed position template.
What you do in the entity class that holds the sprite is add a public Rectangle; you probably already use a rectangle to draw it to the screen. Making this public allows you to do something like the following in the class that generates the entities.
if (addSprite)
{
    Entity newEntity = new Entity(/* random position */);
    bool overlaps;
    do
    {
        overlaps = false;
        foreach (Entity e in entityList)
        {
            if (newEntity.rectangle.Intersects(e.rectangle))
            {
                // give a new position to newEntity and run this loop again
                overlaps = true;
                break;
            }
        }
    } while (overlaps);
    entityList.Add(newEntity);
}
I'm going to make a game similar to AudioSurf for iOS, and implement "generation of a route according to certain parameters" in it.
I used the example of Extrude Mesh from Unity Procedural Example and this answer on question - link
But on iOS there is considerable lag when extruding the object, and if the whole route is extruded early in the game, it takes a lot of time. It also has a display problem with the track, which consists of a large number of polygons...
Can you advise me how to display the generated route or what is the best way to generate route?
You can tremendously increase performance in procedural mesh generation:
- Instead of generating new meshes, edit the vertices and triangles arrays of an existing mesh, i.e. use fixed-size triangle and vertex arrays.
Create a class "RoadMesh" and add functions to it like UpdateRoadMesh(Vector3 playerPosition).
This function will calculate and set the values of the vertices and triangles at the far end of the road, depending on the current position of the player.
While updating the array values, when you reach the end of the vertices and triangles arrays, start reusing the beginning indexes, since the player will already have crossed those points. As no new mesh is created and the vertices are only edited once every few frames, this gives tremendous performance.
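A minimal sketch of that ring-buffer idea ("RoadMesh" and UpdateRoadMesh are the names suggested above; the segment geometry and sizes are assumptions, and the triangle index setup is omitted since those indices never change):

```csharp
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class RoadMesh : MonoBehaviour
{
    const int Segments = 64;                            // fixed road length kept in the mesh
    Vector3[] vertices = new Vector3[Segments * 2];     // 2 verts per road cross-section
    Mesh mesh;
    int head;                                           // next segment slot to overwrite

    void Start()
    {
        mesh = new Mesh();
        mesh.MarkDynamic();                             // hint: this mesh updates often
        GetComponent<MeshFilter>().mesh = mesh;
    }

    public void UpdateRoadMesh(Vector3 playerPosition)
    {
        // Write the new far-end cross-section into the slot of a segment
        // the player has already crossed, instead of growing the arrays.
        int i = (head % Segments) * 2;
        Vector3 farEnd = playerPosition + Vector3.forward * Segments;
        vertices[i]     = farEnd + Vector3.left;
        vertices[i + 1] = farEnd + Vector3.right;
        head++;

        mesh.vertices = vertices;   // triangles array stays fixed-size
        mesh.RecalculateBounds();
    }
}
```

The key point is that no allocation happens per update: the same mesh and the same arrays are reused for the whole run.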