I have been searching the web for quite some time about this, but I couldn't find anything concrete enough to help me out. I know XNA is going to die, but there is still use for it (in my heart, at least, until I port the game to SharpDX later).
I'm making a 3D FPS shooter in XNA 4.0 and I am having serious issues on setting up my collision detection.
First of all, I am making models in Blender, and I have a high-polygon and a low-polygon version of each model. I would like to use the low-polygon model for collision detection, but I'm baffled as to how to do it. I want to use JigLibX, but I'm not sure how to set my project up in order to do so.
In a nutshell: I want to accomplish this one simple goal:
Make a complicated map in Blender, have bounding boxes generated from it, and then use a quadtree to split it up. Then my main character and his gun can run around it shooting stuff!
Any help would be greatly appreciated.
I don't understand exactly what your concrete question is, but I assume you want to know how to implement collision detection efficiently in principle:
For characters: use (several) bounding boxes and bounding spheres (e.g. a sphere for the head and nine boxes for the torso, legs and arms).
For terrain: use the height-map data for Y (up/down) collision detection, and bounding boxes/spheres for objects on the terrain (trees, walls, bushes, ...).
For particles, like gunfire: use points, small bounding spheres or, even better because it is frame-rate independent, ray casting.
In almost no case do you want to do collision detection on a per-polygon basis, as you suggested in your post (quote: "low-poly model for collision detection").
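To make the broad-phase tests above concrete, here is a minimal language-agnostic sketch in Python (in XNA itself you would use the built-in `BoundingSphere` and `BoundingBox` types, which already provide `Intersects`):

```python
# Illustrative broad-phase collision tests; port to XNA's BoundingSphere/
# BoundingBox in the real project.
import math

def spheres_intersect(c1, r1, c2, r2):
    """Two spheres overlap if the distance between centers is <= sum of radii."""
    dx, dy, dz = c1[0] - c2[0], c1[1] - c2[1], c1[2] - c2[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= r1 + r2

def aabbs_intersect(min1, max1, min2, max2):
    """Axis-aligned boxes overlap only if their extents overlap on every axis."""
    return all(min1[i] <= max2[i] and min2[i] <= max1[i] for i in range(3))
```

These cheap checks are what you run against every object per frame; only pairs that pass them warrant anything finer.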
I hope that points you in the right direction.
cheers
So I have been tasked with making a 2D top-down racing game over the summer for college, and I have been dreading doing the AI, but it is finally time. I have googled many variations of the same question trying to find someone asking the same thing, but it seems everyone uses Unity over MonoGame.
So I have an "enemy" car which can accelerate (as in, slowly speeds up to top speed), decelerate and steer left and right. I have the actual car the player drives working fine, but the game is boring when the player isn't racing against anyone. All I need is a very basic AI which will follow a path around the course and readjust if it gets knocked or something happens to it. I don't even know where to start. Please help! Let me know if you need any more details.
I may be misunderstanding your question, but it does not seem like you are looking for AI capabilities in your enemy car: "All I need is a very basic AI which will follow a path around the course and will readjust if it gets knocked or something happens to it." AI typically implies learning, but nowhere does it seem that you need your car to learn from past mistakes or "experiences". It sounds like you can use a pathfinding algorithm to solve your problem, since you have no requirement for the car to actually learn from previous interactions with other cars, the track, etc. A very popular algorithm you can look into is A*. You can set up your game as a graph where edges representing "boosts" are weighted lower than the common "road". Obstacles (or the pathfinding-equivalent term, walls) can be represented as high-weight edges, which would cause your car to avoid them automatically, by nature of A* finding the cheapest path to a point.
A* explanation with pseudocode: https://en.wikipedia.org/wiki/A*_search_algorithm
Great visualizer tool: https://qiao.github.io/PathFinding.js/visual/
Accelerating/Decelerating
As for accelerating/decelerating, that can be separate logic, e.g. random decisions about whether or not to speed up.
If it gets knocked or something happens to it
You can re-calculate the A* path when the car is hit, to ensure your car gets the new fastest route back on course. The actual collision logic is up to you (it is not part of the A* algorithm).
Note that if you are only planning to have a fairly straight path the cars can steer along (meaning no crazy bends or turns), the A* implementation should not need much variation from the textbook algorithm. If you are planning to support that kind of track, you may need to look into slightly different algorithms, because you will need to keep track of the car's rotation angle as well.
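The A* idea above can be sketched in a few lines. This is an illustrative Python version (the game itself is C#/MonoGame, so treat it as portable pseudocode); cell weights stand in for the road/boost/wall edge costs mentioned above:

```python
# Minimal A* on a 4-connected grid. grid[y][x] is the cost to enter a cell;
# None marks a wall. Cheaper cells (e.g. "boosts") attract the path.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]                 # priority queue of (priority, cell)
    came_from = {start: None}
    cost_so_far = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            break
        cx, cy = current
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if not (0 <= nx < cols and 0 <= ny < rows) or grid[ny][nx] is None:
                continue
            new_cost = cost_so_far[current] + grid[ny][nx]
            if (nx, ny) not in cost_so_far or new_cost < cost_so_far[(nx, ny)]:
                cost_so_far[(nx, ny)] = new_cost
                # Manhattan distance: admissible heuristic on a 4-connected grid.
                priority = new_cost + abs(goal[0] - nx) + abs(goal[1] - ny)
                heapq.heappush(frontier, (priority, (nx, ny)))
                came_from[(nx, ny)] = current
    if goal not in came_from:
        return []                           # goal unreachable
    path, node = [], goal
    while node is not None:                 # walk parents back to the start
        path.append(node)
        node = came_from[node]
    return path[::-1]
```

Re-running this after a collision is exactly the "recalculate when hit" step: feed in the car's current cell as the new start.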
What you need to implement depends, of course, on how complex your AI needs to be. If all it needs to do is readjust its steering and monitor its speed, a basic AI car could, at a given time step:
Accelerate if not at top speed
Decelerate if cooling down from a boost
Steer away from the track boundaries
Decide whether or not to boost
(1) and (2) are easy enough to implement at a given time interval, something like if (speed < maxSpeed) { accel(); } else if (speed > maxSpeed) { decel(); }, with a separate double maxBoostSpeed to cap speed during a boost.
(3) and (4) could be achieved by drawing a trajectory in front of the car, something like [x + speed * Math.Cos(angle), y + speed * Math.Sin(angle)]. Then (3) becomes steering toward the center of the track, and (4) follows from extending the trajectory into a line and finding the distance to the next track boundary, i.e. the next turn. If the distance to that intersection is large, it may be time to boost.
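The look-ahead idea in (3) and (4) can be sketched like this (illustrative Python, with the boundary approximated by sampled points rather than a true line intersection):

```python
# Project the car's heading forward and measure how much clear track lies ahead.
import math

def lookahead_point(x, y, speed, angle):
    """Point the car will reach in one step at its current speed and heading."""
    return (x + speed * math.cos(angle), y + speed * math.sin(angle))

def distance_to_boundary(x, y, angle, boundary_points):
    """Smallest distance to a boundary sample lying roughly ahead of the car
    (positive projection onto the heading vector)."""
    hx, hy = math.cos(angle), math.sin(angle)
    best = float("inf")
    for bx, by in boundary_points:
        dx, dy = bx - x, by - y
        if dx * hx + dy * hy > 0:      # only consider points in front of the car
            best = min(best, math.hypot(dx, dy))
    return best
```

A simple boost rule then falls out naturally: boost whenever `distance_to_boundary` exceeds some threshold, i.e. there is a long straight ahead.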
I have a planet mesh that I am generating procedurally (based on a random seed). The generator creates tectonic plates and moves them creating the land masses, mountains and water depressions you see below; ~2500 points in a planet. The textured surface is just a shader.
Each vertex will have information associated with it, and I need to know which vertex the player is pointing at in order to display this information.
I am looking for a way to identify which vertex they are pointing at. The current solution is to generate a cube at each vertex and then use a collider/ray to identify it. The two white cubes in the picture above are for testing.
What I want to know is if there is a better way to identify the vertex without generating cubes?
If you're doing such awesome and advanced work, surely you know about this:
http://docs.unity3d.com/ScriptReference/RaycastHit-triangleIndex.html
When working with a mesh, it's totally commonplace to need the nearest vertex.
Note that, very simply, you just find the nearest one, i.e. loop over them all and keep the closest.
(It's incredibly fast to do this; you have only a tiny number of verts, so there's no way the performance will even be measurable.)
{Consider that, of course, you could break the object into say 8 pieces, but that's just something you have to do anyway in many cases (for example a race track) so it can occlude properly.}
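Combining the two ideas above: the raycast hit gives you a triangle index and a hit point, so the pointed-at vertex is simply whichever of that triangle's three vertices is closest to the hit point. An illustrative sketch (in Unity this maps to `RaycastHit.triangleIndex`, `Mesh.triangles` and `Mesh.vertices`; Python here for brevity):

```python
# Pick the nearest vertex of the hit triangle, no helper cubes needed.
def nearest_vertex(vertices, triangles, triangle_index, hit_point):
    """vertices: list of (x, y, z) tuples; triangles: flat index list,
    three entries per triangle, as in Unity's Mesh.triangles."""
    def dist_sq(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3))
    candidates = triangles[3 * triangle_index : 3 * triangle_index + 3]
    return min(candidates, key=lambda i: dist_sq(vertices[i], hit_point))
```

This runs over exactly three candidates per click, so it is effectively free even on large meshes.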
I'm doing research on generating planets for a game engine I'm planning to code, and I was wondering what would be the best approach to procedurally generate a planet. (In terms of performance.) So far I've seen the Icosphere and Cubemapped Sphere pop up the most, but I was wondering which of the two is faster to generate. My question is particularly aimed at LOD, since I hope to have gameplay similar to No Man's Sky.
Thanks in advance.
I would say an octahedron sphere would be best, but since they are all Platonic solids the results will be similar, so the premature optimization might not be worth it. (Here's a tutorial in Unity.)
The possible advantages of the octahedron are that its faces are triangles (unlike the cube) and there is one triangle per octant of 3D space (unlike the icosphere and cube).
My rationale for octahedrons (and icospheres) being faster than cubes is that each face is already a triangle (whereas the cube has square faces). Adding detail to an octahedron, icosahedron, or cube usually means turning each triangle into four smaller triangles. During this generation you create three new vertices whose positions must be normalized so that the mesh remains properly inscribed in the unit sphere.
Tessellating a Cube
The octahedron and icosahedron can use a lookup table for this normalization factor (unlike the cube), because the factor is the same for every triangle at a given subdivision level.
Assuming you can write a custom mesh format, you might store the mesh for a given planet as an array (of size 4, 8, or 20) of quadtrees, because each triangle is optionally tessellated into four smaller triangles. (This is essentially an LOD system, but you need to periodically decide whether to tessellate or reduce a portion of the mesh based on its distance from the camera.) This system will likely be the bottleneck, since meshes have to be recalculated at runtime.
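A single subdivision pass of the kind described above looks like this (illustrative Python; vertex tuples instead of a real mesh structure): each triangle becomes four, and the three new edge-midpoint vertices are pushed back onto the unit sphere.

```python
# One tessellation step: split each spherical triangle into four and
# re-project the new midpoint vertices onto the unit sphere.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(triangles):
    """triangles: list of (v0, v1, v2) vertex tuples on the unit sphere."""
    out = []
    for a, b, c in triangles:
        ab = normalize(midpoint(a, b))
        bc = normalize(midpoint(b, c))
        ca = normalize(midpoint(c, a))
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out
```

Starting from the 8 faces of an octahedron (or 20 of an icosahedron) and applying this repeatedly gives the progressively finer sphere; in a quadtree LOD scheme, each call corresponds to descending one tree level.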
I've been following many of the questions people post and all the answers you guys give, and I've followed several tutorials. Since all the links in my Google search are marked as "already visited", I've decided to put my pride aside and post a question myself.
This is my first post, so I don't know if I'm doing this right; sorry if not. Anyway, the problem is this:
I'm working on a C# planetary exploration game in Unity 5. I've already built a sphere out of an octahedron following some tutorials mentioned here, and I could also build the Perlin textures and heightmaps. The problem comes in applying them to the sphere to produce the terrain on the mesh. I know I have to map the vertices and UVs of the sphere to do that, but I really struggle with the math and I couldn't find any step-by-step guide to follow. I've heard about tessellation shaders, LOD, Voronoi noise, Perlin noise, and got lost in the process. To simplify:
What I have:
I have the spherical mesh
I have the heightmaps
I’ve assigned them to a material along with the proper normal maps
What I think I need assistance with (since, honestly, I don't know if this is the correct path anymore):
the code to produce the spherical mesh deformation based on the heightmaps
How to use those tessellation/LOD-based shaders and such to make a real-size procedural planet
Thank you very much for your attention, and sorry if I was rude or asked for too much, but any kind of help you could provide would be tremendous for me.
I don't really think I have the ability to give you code specific information but here are a few additions/suggestions for your checklist.
If you want to set up an LOD mesh and texture system, the only way I know how is the bare-bones approach: physically create lower-poly versions of your mesh and texture, then in Unity write a script with an array of distances; once the player crosses a given distance to the object, switch to the mesh and texture appropriate for that distance. I presume you could do the same thing in a shader, but the basic idea remains the same. Here's some pseudocode as an example (I don't know the Unity library too well):
int[] distances = { 10, 100, 1000 };
Mesh[] meshes = { hiResMesh, midResMesh, lowResMesh };
Texture[] textures = { hiResTexture, midResTexture, lowResTexture };

void Update()
{
    if (playerDistance < distances[0])
    {
        gameObject.Mesh = meshes[0];
        gameObject.Texture = textures[0];
    }
    else
    {
        for (int i = 1; i < distances.Length; i++)
        {
            // Pick the assets for the distance band the player falls into.
            if (playerDistance <= distances[i] && playerDistance >= distances[i - 1])
            {
                gameObject.Mesh = meshes[i];
                gameObject.Texture = textures[i];
            }
        }
    }
}
If you want real tessellation-based LOD, which is more difficult to code, here are some links:
https://developer.nvidia.com/content/opengl-sdk-simple-tessellation-shader
http://docs.unity3d.com/Manual/SL-SurfaceShaderTessellation.html
Essentially the same concept applies, but instead of physically swapping the mesh and texture, you change them procedurally inside your shader code, again using set distances at which you change the resolution.
As for your issue with spherical mesh deformation: you can do it in 3D editing software such as 3ds Max, Maya, or Blender by importing your mesh, applying a mesh-deform/displacement modifier with your texture as the deform map, and adjusting the amount of deformation to your liking. But if you want something more procedural in real time, you are going to have to alter the vertices of your mesh directly via its vertex arrays and then re-triangulate the mesh. Sorry I'm less helpful on this topic, as I'm less knowledgeable about it. Here are the links I could find related to your problem:
http://answers.unity3d.com/questions/274269/mesh-deformation.html
http://forum.unity3d.com/threads/deform-ground-mesh-terrain-with-dynamically-modified-displacement-map.284612/
http://blog.almostlogical.com/2010/06/10/real-time-terrain-deformation-in-unity3d/
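The vertex-displacement approach boils down to: sample the heightmap in the direction of each vertex and push the vertex outward along its own normal (which, on a unit sphere, is the vertex direction itself). An illustrative Python sketch, assuming an equirectangular heightmap (in Unity you would do this over `Mesh.vertices` and call `RecalculateNormals` afterwards):

```python
# Displace unit-sphere vertices outward according to a heightmap.
import math

def sphere_uv(v):
    """Equirectangular mapping of a unit-sphere direction to (u, v) in [0, 1]."""
    x, y, z = v
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    w = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi
    return u, w

def displace(vertices, heightmap, radius=1.0, amplitude=0.1):
    """heightmap: 2D list of values in [0, 1]; returns displaced vertices."""
    rows = len(heightmap)
    cols = len(heightmap[0])
    out = []
    for v in vertices:
        u, w = sphere_uv(v)
        # Nearest-neighbour sample; bilinear filtering would look smoother.
        height = heightmap[min(int(w * rows), rows - 1)][min(int(u * cols), cols - 1)]
        r = radius + amplitude * height
        out.append(tuple(c * r for c in v))
    return out
```

The `amplitude` parameter is the hypothetical knob for how mountainous the planet looks relative to its radius.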
Anyways, good luck and please let me know if I have been unclear about something or you need more explanation.
I am creating a CAD-like program that creates ModelVisual3D objects. How do I do collision detection between my objects (ModelVisual3D) using MeshGeometry3D? Do I have to compare every triangle in the moving object against the stationary objects?
What will be my best way to do collision detection?
It depends on how precise your collision detection needs to be.
There is no built-in collision detection in WPF's 3D library. If you need high precision, you'll need to compare every triangle.
That being said, you can start by comparing bounding boxes and/or bounding spheres. This is always a good first step, since it quickly eliminates most cases. If you don't need precise collision detection, this alone may be enough.
To add to Reed's answer (based on my answer here):
After you've eliminated most of your objects via the bounding box/sphere vs. bounding box/sphere test, you should test the triangles of your test object against the other object's bounding box/sphere before checking triangle/triangle collisions. This eliminates many more cases.
To rule out a collision you'll have to check all the triangles in the test object, but to find a case where you need to go down to the triangle/triangle test, you only need to find the first triangle that intersects the other object's bounding box/sphere.
Look at the SAT (Separating Axis Theorem); it's one of the fastest and easiest methods out there.
The idea is that if you can find an axis (a line, in 2D) that separates the two shapes, then they are not colliding.
As was said, first do an AABB early-out test, and only when two objects' boxes collide, test each polygon of object A against each polygon of object B.
Starting in 2D: to test whether two polygons collide, you get their extents along each candidate axis; only if those extents overlap on every axis are the polygons colliding.
On this page you can find a very good explanation on how it works and how to apply it:
http://www.metanetsoftware.com/technique/tutorialA.html
To apply it in 3D, use the face normals of the polygons, plus the cross products of their edge pairs, as the candidate separating axes.
If the extents on all of those axes intersect, then the polygons are colliding.
Also, this method can resolve collisions for moving objects and give you the moment of collision: subtract velocity B from velocity A to get the relative velocity (reducing the problem to one moving object and one static one), extend polygon A's extent along the axis you are testing by that velocity, and if the extents then intersect, subtract the original extent and you recover the moment of collision.
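The core SAT step described above can be sketched as follows (illustrative Python, 2D case; the 3D version only changes which candidate axes you generate):

```python
# Separating Axis Theorem: a gap between projections on any axis means no hit.
def project(points, axis):
    """Interval (min, max) of the points' dot products with the axis."""
    dots = [sum(p[i] * axis[i] for i in range(len(axis))) for p in points]
    return min(dots), max(dots)

def overlap_on_axis(poly_a, poly_b, axis):
    a_min, a_max = project(poly_a, axis)
    b_min, b_max = project(poly_b, axis)
    return a_min <= b_max and b_min <= a_max

def polygons_collide_2d(poly_a, poly_b):
    """2D SAT: the candidate axes are the perpendiculars of every edge."""
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            ex = poly[(i + 1) % n][0] - poly[i][0]
            ey = poly[(i + 1) % n][1] - poly[i][1]
            if not overlap_on_axis(poly_a, poly_b, (-ey, ex)):
                return False        # found a separating axis: no collision
    return True
```

Note the early exit: the first separating axis found ends the test, which is what makes SAT cheap in the common no-collision case.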
Another option would be to use BulletSharp, a C# wrapper for the well-known Bullet physics engine. In that case, you would need to write functions to create a (concave) collision shape from a MeshGeometry3D.
In my experience it works pretty well, even though dynamic collision between concave shapes is not supported; you'll need to use convex decomposition as a workaround.