I am a Bullet rookie, so I apologise in advance if my questions sound trivial to you.
I need to load a set of concave triangle meshes from .stl files and perform collision detection. Objects can be moved by the user. From the user manual, I read:
"CONCAVE TRIANGLE MESHES:
For static world environment, a very efficient way to represent static triangle meshes is to use a btBvhTriangleMeshShape."
Hence, my questions are:
- can Bullet detect collisions between concave mesh objects modelled using the BvhTriangleMeshShape?
- what is the real difference between contactTest and CollisionWorld::PerformDiscreteCollisionDetection()?
- do I need to specify a different collision algorithm for concave collision detection?
I am working with BulletSharp, a maintained C# wrapper of Bullet. What I did was set up my Bullet environment:
// Collision-only setup: default configuration + dispatcher, a sweep-and-prune
// broadphase bounded to the scene size, and a CollisionWorld (no dynamics).
CollisionConfiguration bt_collision_configuration;
CollisionDispatcher bt_dispatcher;
BroadphaseInterface bt_broadphase;
CollisionWorld bt_collision_world;

double scene_size = 500;
uint max_objects = 16000;

bt_collision_configuration = new DefaultCollisionConfiguration();
bt_dispatcher = new CollisionDispatcher(bt_collision_configuration);

// The broadphase needs the world bounds and the maximum number of objects.
float sscene_size = (float)scene_size;
Vector3 worldAabbMin = new Vector3(-sscene_size, -sscene_size, -sscene_size);
Vector3 worldAabbMax = new Vector3(sscene_size, sscene_size, sscene_size);
bt_broadphase = new AxisSweep3_32Bit(worldAabbMin, worldAabbMax, max_objects);

bt_collision_world = new CollisionWorld(bt_dispatcher, bt_broadphase, bt_collision_configuration);
And I load my CollisionObjects as BvhTriangleMeshShape instances.
To detect collisions, I used:
contactTest(object, callback)
The result with this code is that the ContactResultCallback::NeedsCollision function is called every time two bounding boxes intersect, but no contact manifolds are generated when two meshes actually intersect.
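A simplified sketch of the kind of callback I pass to ContactTest (the exact AddSingleResult signature, including its return type, may differ between BulletSharp versions):

class MyContactCallback : ContactResultCallback
{
    public bool HasHit { get; private set; }

    // Called once for every contact point found between the tested object
    // and another object in the world.
    public override float AddSingleResult(ManifoldPoint cp,
        CollisionObjectWrapper colObj0Wrap, int partId0, int index0,
        CollisionObjectWrapper colObj1Wrap, int partId1, int index1)
    {
        HasHit = true;
        return 0;
    }
}

// Usage (someCollisionObject stands in for one of my loaded objects):
var callback = new MyContactCallback();
bt_collision_world.ContactTest(someCollisionObject, callback);
if (callback.HasHit) { /* narrow-phase contact found */ }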
And secondly:
bt_collision_world.PerformDiscreteCollisionDetection();
int numManifolds = bt_collision_world.Dispatcher.NumManifolds;
With this second approach, no manifolds are found at all.
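A sketch of how the manifolds would be read back if any were generated (method names as I understand the BulletSharp Dispatcher API):

bt_collision_world.PerformDiscreteCollisionDetection();
int numManifolds = bt_collision_world.Dispatcher.NumManifolds;
for (int i = 0; i < numManifolds; i++)
{
    PersistentManifold manifold = bt_collision_world.Dispatcher.GetManifoldByIndexInternal(i);
    // Each manifold holds the contact points between one pair of objects.
    for (int j = 0; j < manifold.NumContacts; j++)
    {
        ManifoldPoint pt = manifold.GetContactPoint(j);
        if (pt.Distance < 0) { /* the objects are penetrating at this point */ }
    }
}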
As you've already read in the manual, btBvhTriangleMeshShape can be used for static objects only. This means that there is no collision algorithm for two objects of this type (because if all of them are static, they cannot collide). As you noticed, you can test the intersection of their bounding boxes, but no collision manifold will ever be created.
If you wonder what the use of btBvhTriangleMeshShape is, it is meant for detecting precise collisions between convex dynamic objects and a static concave environment.
The official solution suggested by Bullet is to perform convex decomposition of every concave model that is used for moving objects. The preferred method for convex decomposition is HACD (Hierarchical Approximate Convex Decomposition). An example can be found in Bullet Examples.
Another algorithm worth trying is V-HACD (Volumetric Hierarchical Approximate Convex Decomposition), which produces shapes with better topology than the previous one, but the convex hulls are less precise: the resulting collision shape is bigger than the original mesh. The error can be controlled by the resolution, but higher resolution slows down the process.
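Once the decomposition produces a set of convex pieces, they are typically assembled into a single compound shape. A rough BulletSharp sketch (decomposedHulls is a hypothetical List<List<Vector3>> holding the vertices of each convex piece coming out of HACD/V-HACD):

var compound = new CompoundShape();
foreach (var hullPoints in decomposedHulls)
{
    var hull = new ConvexHullShape();
    foreach (Vector3 p in hullPoints)
        hull.AddPoint(p);

    // Identity child transform: the hulls are assumed to already be expressed
    // in the object's local frame.
    compound.AddChildShape(Matrix.Identity, hull);
}
// 'compound' can now be used as the collision shape of a movable object.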
Related
Hello, I am creating a procedurally generated cave script. I have gotten all the Perlin noise work out of the way and am now trying to transform the vertices into a mesh. I understand that I need to declare the faces for it and need some form of marching cubes algorithm. For my script to know which direction to render a face in, it needs to be aware of all the vertices around it by searching through the vertices. Is there any way my script can efficiently search through a Vector3 array to find whether a Vector3 is in there and, if so, at what index?
If you're using a triangulation lookup table based implementation of marching cubes, you could store a normal vector alongside the face in the same table entry. A video by Sebastian Lague mentions using such a table. I'm not exactly sure where he downloaded it from, but he includes it in his repo which is MIT licensed. Video, Table (EDIT: The order of a triangle's vertices alone may be sufficient to define its direction, and you may not need an explicit normal vector)
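Just to illustrate the data layout (a hypothetical sketch, Unity-style Vector3, not the actual table from the linked repo):

// One entry of a marching-cubes triangulation table extended with a
// per-triangle normal, as suggested above. The real table only stores edge
// indices; the Normal field is a hypothetical addition.
struct TriangleEntry
{
    public int[] EdgeIndices;   // which cube edges the triangle's vertices lie on
    public Vector3 Normal;      // precomputed facing direction for this triangle
}

// table[caseIndex] lists the triangles emitted for one of the 256 cube configurations.
TriangleEntry[][] table;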
Also heads up: old Perlin noise tends to be visibly grid aligned. Most times I see it used, it appears to be because a library provided it or because it was mentioned in a tutorial, not because it was actually the best choice for the application. Simplex-type noises generally produce less grid-aligned results. It's straightforward to import external noise into Unity. It looks like you might need to anyway, if your implementation depends on 3D noise. Here are the noises from my repo that use an open implementation for 3D, a tailored gradient table, and rotated evaluators that are good for terrain. There are a lot of other options out there too, though they may not have these aspects in particular. Note that the range is [-1, 1] not [0, 1], so if your current noise is [0, 1], you might need to do noise(...) * 0.5 + 0.5 to correct that. Choose the 2F version if you have a lot of octaves, or the 2S version if you have one octave or are doing ridged noise.
I'm doing research on generating planets for a game engine I'm planning to code, and I was wondering what would be the best approach to procedurally generate a planet. (In terms of performance.) So far I've seen the Icosphere and Cubemapped Sphere pop up the most, but I was wondering which of the two is faster to generate. My question is particularly aimed at LOD, since I hope to have gameplay similar to No Man's Sky.
Thanks in advance.
I would say an octahedron sphere would be best, but since they are all Platonic solids, the results will be similar, so this premature optimization might not be worth it. (Here's a tutorial in Unity.)
The possible advantages of the octahedron are that the faces are triangles (unlike the cube) and there is one triangle for each octant of 3D space (unlike the icosphere and cube).
My rationale behind octahedrons (and icospheres) being faster than cubes lies in the fact that the face is already a triangle (whereas the cube has square faces). Adding detail to an octahedron, icosahedron, or cube usually means turning each triangle into four smaller triangles. During this step you create three new vertices whose positions need to be normalized so that the mesh is still properly inscribed in the unit sphere.
Tessellating a Cube
The octahedron and icosahedron can use a lookup table for this normalization factor (unlike the cube), because the value is consistent at each subdivision iteration.
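A rough sketch of the subdivision step described above (hypothetical helper, Unity-style Vector3):

// Split one triangle of an inscribed polyhedron into four, pushing the three
// new edge midpoints back onto the unit sphere by normalizing them.
void Subdivide(Vector3 a, Vector3 b, Vector3 c, List<Vector3[]> outTriangles)
{
    Vector3 ab = ((a + b) * 0.5f).normalized;  // new vertex on edge a-b
    Vector3 bc = ((b + c) * 0.5f).normalized;  // new vertex on edge b-c
    Vector3 ca = ((c + a) * 0.5f).normalized;  // new vertex on edge c-a

    outTriangles.Add(new[] { a, ab, ca });
    outTriangles.Add(new[] { ab, b, bc });
    outTriangles.Add(new[] { ca, bc, c });
    outTriangles.Add(new[] { ab, bc, ca });    // the center triangle
}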
Assuming you can write a custom mesh format, you might store the mesh for a given planet as an array (of size 4, 8, or 20) of quad-trees (because each triangle is optionally tessellated into four smaller triangles). This is essentially an LOD system, but you need to periodically decide whether to tessellate or reduce a portion of the mesh based on the distance from the camera. This system will likely be the bottleneck, since meshes have to be recalculated at runtime.
I'm running a marching squares (relative of the marching cubes) algorithm over an iso plane, then translating the data into a triangular mesh.
This works, but creates very complex mesh data. I would like to simplify this to the minimum triangles required, as illustrated below:
I have tried looping around contours (point -> segment -> point -> ...), but the contour can become inverted if a point has more than 2 attaching segments.
Ideally the solution should be fairly fast so that it can be done at runtime. The language I'm using is C#, but could probably port it from most other C-like languages.
This is a very common problem in 3D computer graphics. Algorithms solving this problem are called mesh simplification algorithms.
The problem is however much simpler to solve in the 2D case, as no surface normals need to be considered.
The first important thing is to have a reliable mesh data structure that provides operations to modify the mesh. A set of operations that can produce and modify any mesh is, for instance, the set of "Euler operators".
To simplify a 2D mesh, start from a dense version of the 2D mesh.
You can simply convert your quad mesh to a triangle mesh by splitting each quad at its diagonal.
Dense triangle mesh:
Then iteratively collapse the shortest edges, which are not at the boundary.
"Edge collapse" is a typical mesh operation, depicted here:
After some steps your mesh will look like this:
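A minimal sketch of a single edge collapse in C# (System.Numerics.Vector2; boundary checks and vertex compaction omitted):

// Vertex 'b' is merged into vertex 'a' (here at their midpoint); triangles
// that used the edge a-b become degenerate and are dropped. 'vertices' and
// 'triangles' are the usual flat index lists (three indices per triangle).
static void CollapseEdge(List<Vector2> vertices, List<int> triangles, int a, int b)
{
    // Move the surviving vertex to the midpoint of the collapsed edge.
    vertices[a] = (vertices[a] + vertices[b]) * 0.5f;

    for (int i = triangles.Count - 3; i >= 0; i -= 3)
    {
        // Redirect references to 'b' towards 'a'.
        for (int k = 0; k < 3; k++)
            if (triangles[i + k] == b) triangles[i + k] = a;

        // Drop triangles that collapsed to a line or a point.
        int v0 = triangles[i], v1 = triangles[i + 1], v2 = triangles[i + 2];
        if (v0 == v1 || v1 == v2 || v2 == v0)
            triangles.RemoveRange(i, 3);
    }
    // Note: vertex 'b' is left unused; a real implementation would compact the
    // vertex list and skip collapses that touch the boundary.
}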
Simplifying a mesh generated by marching squares (as suggested in the answers above) is highly inefficient. To get a polygon out of a scalar field (e.g. a bitmap) you should first run a modified version of marching squares that only generates the polygon contour (i.e. in the 16 cases of marching squares you don't generate geometry, you just add points to a polygon), and after that you run a triangulation algorithm (e.g. Delaunay or Ear Clipping).
Do it with an intermediate step: go from your volume representation to the grid representation.
After that you can group the areas with cases 3, 6, 9, and 12 into bigger/longer versions.
You can also try to group squares into bigger squares, but it's an NP-hard problem, so an ideal optimization could take a long time.
Then you can convert it into the polygon version.
After converting it to polygons you can fuse the edges.
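A small sketch of that edge-fusing step (System.Numerics.Vector2; epsilon is a hypothetical tolerance):

// "Fuse" consecutive collinear edges of a closed contour by removing the
// middle vertex whenever prev->cur and cur->next point the same way.
static List<Vector2> FuseCollinearEdges(List<Vector2> contour, float epsilon = 1e-6f)
{
    var result = new List<Vector2>();
    int n = contour.Count;
    for (int i = 0; i < n; i++)
    {
        Vector2 prev = contour[(i - 1 + n) % n];
        Vector2 cur = contour[i];
        Vector2 next = contour[(i + 1) % n];

        // 2D cross product of the two edge directions; ~0 means collinear.
        Vector2 d1 = cur - prev, d2 = next - cur;
        float cross = d1.X * d2.Y - d1.Y * d2.X;
        if (Math.Abs(cross) > epsilon)
            result.Add(cur);   // keep only vertices where the contour turns
    }
    return result;
}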
EDIT: Use Ear Clipping on resulting contours:
http://www.geometrictools.com/Documentation/TriangulationByEarClipping.pdf
I've been working on a simple space shooter, and have gotten to the point in the project where my terribly written code is actually slowing things down. After running EQATEC, I can see that the majority of the problem is in the icky check-everything-against-everything collision detection. I was considering putting in a QuadTree, but the majority of the collisions involve asteroids, which move around a lot (and would require a lot of updating). My next alternative was micro-optimizing the collision check itself. Here it is:
public bool IsCircleColliding(GameObject obj) //simple bounding circle collision detection check
{
float distance = Vector2.Distance(this.WorldCenter, obj.WorldCenter);
int totalradii = CollisionRadius + obj.CollisionRadius;
if (distance < totalradii)
return true;
return false;
}
I've heard that Vector2.Distance involves a costly Sqrt, so is there any way to avoid that? Or is there a way to approximate the distance using some fancy calculation? Anything for more speed, essentially.
Also, slightly unrelated to the actual question, is there a good method (other than QuadTrees) for spatial partitioning of fast-moving objects?
Compute the square of the distance instead of the actual distance, and compare that to the square of the collision threshold. That should be a bit faster. If most asteroids are the same size, you can reuse the same value for the collision threshold without recomputing it.
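A minimal sketch of the squared-distance version, assuming the XNA/MonoGame Vector2 (which provides DistanceSquared); adapt the field types to your GameObject:

// Same test as IsCircleColliding, but comparing squared values to avoid the Sqrt.
public bool IsCircleCollidingFast(GameObject obj)
{
    float distanceSquared = Vector2.DistanceSquared(this.WorldCenter, obj.WorldCenter);
    float totalRadii = CollisionRadius + obj.CollisionRadius;
    return distanceSquared < totalRadii * totalRadii;
}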
Another useful trick is to first do a simple check based on bounding boxes, and compute the distance only if the bounding boxes intersect. If they don't, then you know (for cheap) that the two objects aren't colliding.
I am creating a CAD-like program, creating ModelVisual3D objects. How do I do collision detection between my objects (ModelVisual3D) using MeshGeometry3D? Do I have to compare every triangle in the moving object against the stationary objects?
What will be my best way to do collision detection?
It depends on how precise your collision detection needs to be.
There is no built-in collision detection in WPF's 3D library. If you need high precision, you'll need to compare every triangle.
That being said, you can start by comparing bounding boxes and/or bounding spheres. This is always a good first step, since it can quickly eliminate most cases. If you don't need precise collision detection, this alone may be fine.
To add to Reed's answer (based on my answer here):
After you've eliminated most of your objects via the bounding box/sphere to bounding box/sphere test you should test the triangles of your test object(s) against the other object's bounding box/sphere first before checking triangle/triangle collisions. This will eliminate a lot more cases.
To rule out a collision you'll have to check all the triangles in the test object, but to find a case where you'll need to go down to the triangle/triangle case you only need to find the first triangle that interacts with the bounding box/sphere of the other object.
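For the bounding-box part, WPF already gives you an axis-aligned box per geometry via Geometry3D.Bounds. A rough sketch (transforms are ignored here; in practice you would bring both bounds into a common space first):

// Cheap first-pass test using the Rect3D bounds of two MeshGeometry3D objects.
static bool BoundsOverlap(MeshGeometry3D meshA, MeshGeometry3D meshB)
{
    Rect3D a = meshA.Bounds;
    Rect3D b = meshB.Bounds;

    // Overlap on all three axes means the boxes intersect.
    return a.X <= b.X + b.SizeX && b.X <= a.X + a.SizeX
        && a.Y <= b.Y + b.SizeY && b.Y <= a.Y + a.SizeY
        && a.Z <= b.Z + b.SizeZ && b.Z <= a.Z + a.SizeZ;
}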
Look at the SAT (Separating Axis Theorem); it's one of the fastest and easiest approaches out there.
The theory about this is that if you can draw a line which separates the triangles, then they're not colliding.
As was said, first do an early AABB rejection test, and when two objects' boxes overlap, test each polygon of object A against each polygon of object B.
Starting in 2D: to test whether two polygons collide, you get their extents along the candidate axes (in this case X and Y); if those extents overlap on every axis, the polygons are colliding.
On this page you can find a very good explanation on how it works and how to apply it:
http://www.metanetsoftware.com/technique/tutorialA.html
To apply it to 3D simply use the edges of each polygon as the separating axes.
If the extents overlap on all of those axes, then the polygons are colliding.
Also, this method works for moving objects and gives you the moment of collision: compute the relative velocity by subtracting velocity B from velocity A (this reduces the problem to one moving object and one static object), add that velocity along the axis you are testing to the extent of polygon A, and if the extents then intersect, subtract the original extent of the polygon to get the moment of collision.
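A minimal sketch of the static 2D SAT test described above (System.Numerics.Vector2; convex polygons given as vertex arrays):

// For every edge of both polygons, project both shapes onto the edge normal;
// a single axis with a gap proves the polygons do not collide.
static bool PolygonsCollide(Vector2[] a, Vector2[] b)
{
    return !HasSeparatingAxis(a, b) && !HasSeparatingAxis(b, a);
}

static bool HasSeparatingAxis(Vector2[] edgesFrom, Vector2[] other)
{
    for (int i = 0; i < edgesFrom.Length; i++)
    {
        Vector2 edge = edgesFrom[(i + 1) % edgesFrom.Length] - edgesFrom[i];
        Vector2 axis = new Vector2(-edge.Y, edge.X);   // edge normal

        Project(edgesFrom, axis, out float minA, out float maxA);
        Project(other, axis, out float minB, out float maxB);

        if (maxA < minB || maxB < minA)
            return true;   // found a gap: this axis separates the polygons
    }
    return false;
}

static void Project(Vector2[] poly, Vector2 axis, out float min, out float max)
{
    min = max = Vector2.Dot(poly[0], axis);
    foreach (Vector2 p in poly)
    {
        float d = Vector2.Dot(p, axis);
        if (d < min) min = d;
        if (d > max) max = d;
    }
}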
Another option would be to use BulletSharp, a C# wrapper of the well-known Bullet Physics Engine. In this case, you would need to write functions to create a (concave) collision shape from a MeshGeometry3D.
In my experience, it works pretty well, even though dynamic collision between concave shapes is not supported. You'll need to use convex decomposition as a workaround.
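A rough sketch of that conversion, assuming the BulletSharp TriangleMesh and BvhTriangleMeshShape types (suitable for static geometry only):

// Build a static concave BulletSharp shape from a WPF MeshGeometry3D.
static BvhTriangleMeshShape CreateConcaveShape(MeshGeometry3D mesh)
{
    var triangleMesh = new TriangleMesh();
    var positions = mesh.Positions;
    var indices = mesh.TriangleIndices;

    for (int i = 0; i + 2 < indices.Count; i += 3)
    {
        Point3D p0 = positions[indices[i]];
        Point3D p1 = positions[indices[i + 1]];
        Point3D p2 = positions[indices[i + 2]];

        triangleMesh.AddTriangle(
            new Vector3((float)p0.X, (float)p0.Y, (float)p0.Z),
            new Vector3((float)p1.X, (float)p1.Y, (float)p1.Z),
            new Vector3((float)p2.X, (float)p2.Y, (float)p2.Z));
    }

    // true = use quantized AABB compression for the internal BVH.
    return new BvhTriangleMeshShape(triangleMesh, true);
}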