I was wondering where I could find some good resource material on how to write clean collision detection code.
I'm pretty new to XNA programming, and have a general understanding of how I want to write my game, but I am having serious trouble with the idea of collision detection. It boggles my mind.
I know you can use the 2D bounding box class, but after that I'm stuck. I don't want to have to check whether an object is colliding with EVERY single object in the game, so I was wondering if someone could point me in the right direction, perhaps to some literature on the matter.
It depends a great deal on the scale and implementation of your game. How are the objects organized? Is the game organized into maps, or is there only one screen? Which objects will be colliding with which other objects?
If the game is a small enough scale, you might not need to worry about it at all.
If not, consider:
splitting the game into distinct maps, where the objects on one map will only collide with other objects on the same map
organizing lists of enemies by type, so that you only check the right kinds of object against each other (i.e. projectiles aren't checked against other projectiles, etc.). For example, I use the following dictionaries so that I can check against only objects of a certain type, or only creatures that belong to a certain faction:
private readonly Dictionary<Type, List<MapObject>> mTypeObjects =
new Dictionary<Type, List<MapObject>>();
private readonly Dictionary<FACTION, List<MapObject>> mFactionCreatures =
new Dictionary<FACTION, List<MapObject>>();
for maximum efficiency, though with a more challenging implementation, you can use spatial partitioning, where objects are organized by 'sector', allowing you to rule out far-away objects instantly.
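As a rough illustration of that last point, here is a minimal sketch of a uniform-grid partition. The CellSize value, the mGrid field and the ObjectsNear helper are just names I made up for the example, not part of the code above:
private const int CellSize = 128; // assumed cell size in world units

private readonly Dictionary<Point, List<MapObject>> mGrid =
    new Dictionary<Point, List<MapObject>>();

private Point CellOf(Vector2 position)
{
    return new Point((int)Math.Floor(position.X / CellSize),
                     (int)Math.Floor(position.Y / CellSize));
}

// Yields only objects in the same cell or a neighbouring cell,
// so far-away objects are never even considered.
public IEnumerable<MapObject> ObjectsNear(Vector2 position)
{
    Point cell = CellOf(position);
    for (int dx = -1; dx <= 1; dx++)
    {
        for (int dy = -1; dy <= 1; dy++)
        {
            List<MapObject> bucket;
            if (mGrid.TryGetValue(new Point(cell.X + dx, cell.Y + dy), out bucket))
            {
                foreach (MapObject obj in bucket)
                    yield return obj;
            }
        }
    }
}
When an object moves, you remove it from its old cell's list and add it to the new one; the collision test then only loops over the handful of objects returned by ObjectsNear.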
If you are just trying to minimize the CPU detection work, check out QuadTree implementations. They basically break up your "scene" into smaller sections to optimize detection (example in C#):
http://www.codeproject.com/Articles/30535/A-Simple-QuadTree-Implementation-in-C
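To give a feel for how such a tree is typically used (this is a hypothetical QuadTree API for illustration only, not the one from the linked article):
// Build the tree once per frame (or update it incrementally).
QuadTree<Sprite> tree = new QuadTree<Sprite>(new Rectangle(0, 0, worldWidth, worldHeight));
foreach (Sprite sprite in sprites)
    tree.Insert(sprite);

// Query only the region around the player instead of testing every object.
foreach (Sprite nearby in tree.Query(player.Bounds))
{
    if (player.Bounds.Intersects(nearby.Bounds))
        HandleCollision(player, nearby);
}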
If you are talking more about actual physics, check out the tutorials from the devs of N-Game:
http://www.metanetsoftware.com/technique.html
Or just use an existing XNA physics engine, such as Farseer or Bullet.
I'm currently working on my weather system for a 2D Unity game project. What I am trying to accomplish is randomising which layer a particle will spawn on, so that sometimes it will appear behind other sprites such as the player or buildings, sometimes in front, and sometimes collide with them (on the same layer).
I'm relatively new to the particle system in general, but after much playing around I have accomplished my goals apart from this one.
I basically want to be able to randomise each particle to have, for instance, a Z value of -1, 0 or 1, with the player/other objects potentially being on 0, where the particles will collide with them. To visualise, here is the behaviour I currently have, where every particle collides with the objects, when I only want roughly a third of them to:
https://i.imgur.com/vplFcye.gifv
Perhaps rather than working with Z positional values, I should be thinking more in terms of layers: randomising the layer the particles spawn on and therefore what they can interact with. Either way, I cannot find a way, through scripting or in the inspector, to change this layer/Z value.
I have considered using three particle systems, which I believe would accomplish my goal, but that is not ideal and, I'd assume, potentially detrimental to performance.
If anyone has any insight, I'd appreciate it; perhaps I've just missed a really obvious variable/setting in the inspector.
Thanks.
Perhaps not the most elegant solution for this, but you could set your emission Shape to Mesh, its Type to Edge, and use a custom mesh that's 3 separate rectangles, one each for the front/back/middle layer. I tested this and it seems to work well.
The only issue I had with this is I originally tried this with a model that was just 3 edges, but found that the Unity importer strips edges not attached to faces, so I had to make them three near-zero-width faces in my mesh instead.
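For reference, the same setup can also be done from a script, roughly like this. The component and field names are my own; threeLayerMesh is assumed to be the custom mesh with the three thin faces:
using UnityEngine;

public class LayeredParticleShape : MonoBehaviour
{
    public Mesh threeLayerMesh; // assumed: custom mesh with three near-zero-width faces at z = -1, 0 and 1

    void Start()
    {
        var shape = GetComponent<ParticleSystem>().shape;
        shape.shapeType = ParticleSystemShapeType.Mesh;
        shape.meshShapeType = ParticleSystemMeshShapeType.Edge; // emit from the edges of the mesh
        shape.mesh = threeLayerMesh;
    }
}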
I have the following method in my code:
private void OnMouseDown()
{
cube.transform.position = new Vector3(0, 1, 0);
}
When the gameObject is clicked, this method is called. It takes a long time to change the position of the cube (not more than a second, but still noticeable, at least half a second). It does affect the playing experience.
If I don't change the position of the cube but, for example, its material, it is still slow. So I know this isn't caused by the specific line in my OnMouseDown() method.
Note however that I do have around 35 imported 3D meshes in my Scene, as well as over 50 primitive game objects. Does this have anything to do with the method being slow? Because when I make an empty Unity project with just 2 gameObjects, the method is fast.
Is it because of my full scene? Any ideas? Thanks!
If someone knows how to speed it up as well, I'd love to know!
EDIT: If I remove all my 3D meshes, the method is fast...
Unless your transform tree is hundreds of children deep, this should be pretty much instant; it's one of the things Unity has been designed to do quickly. So I think this has to do with the meshes you import: they should not have too many vertices, too many materials, or too many objects.
You can try to go through your objects with a 3D editor package (Blender?) and combine the chunks into single objects.
If Unity lags when you move an object, it may mean it has too many individual transforms to process. If your scene is heavy you can gain a lot of performance by 'baking' it, combining individual objects into larger meshes (those live on the GPU and cost GPU time, not CPU time).
It is also possible that one or more of the meshes you imported has too many verts/triangles. If you are currently at 10 fps you may want to target lighter meshes and as few separate GameObjects as possible (ideally you want the environment to be just a few combined meshes (to enable culling), not thousands of objects).
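If it helps, combining child meshes inside Unity can be as simple as something like this. It is only a rough sketch, assuming all children share a single material (assigned on the parent's MeshRenderer); MeshCombiner is my own name:
using System.Collections.Generic;
using UnityEngine;

// Rough sketch: merge all child meshes into one, assuming they share a single material.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class MeshCombiner : MonoBehaviour
{
    void Start()
    {
        var combine = new List<CombineInstance>();

        foreach (MeshFilter filter in GetComponentsInChildren<MeshFilter>())
        {
            if (filter.sharedMesh == null || filter.gameObject == gameObject)
                continue;

            combine.Add(new CombineInstance
            {
                mesh = filter.sharedMesh,
                transform = filter.transform.localToWorldMatrix
            });
            filter.gameObject.SetActive(false); // hide the original pieces
        }

        var combined = new Mesh();
        combined.CombineMeshes(combine.ToArray()); // one mesh, one transform to move around
        GetComponent<MeshFilter>().sharedMesh = combined;
    }
}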
Check whether any heavy materials are being used that could slow things down. The game becomes slow as such objects are added to the project, and it destroys the overall efficiency of the game. It happened to me when I used very heavy models in a project and the overall speed of the game dropped; I replaced them with lightweight truck models and the issue was resolved.
I'm creating a tool that converts hi-poly meshes to low-poly meshes and I have some best practice questions on how I want to approach some of the problems.
I have some experience with C++ and DirectX but I prefer to use C#/WPF to create this tool, I'm also hoping that C# has some rich libraries for opening, displaying and saving 3d models. This brings me to my first question:
Best approach for reading, viewing and saving 3d models
To display 3D models in my WPF application, I'm thinking about using the Helix 3D toolkit.
To read vertex data from my 3D models I'm going to write my own .OBJ reader, because I'll have to optimize the vertices and write everything back out anyway.
Best approach for optimizing the 3d model
For optimization things will get tricky, especially when dealing with tons of vertices and tons of changes. I guess I'll keep it simple at the start and try to detect whether an edge lies on the same slope as its adjacent edges, then remove that redundant edge and retriangulate everything.
In later stages I also want to create LODs to simplify the model by doing the opposite of what a TurboSmooth modifier does in Max (inverse interpolation). I have no real clue how to start on this right now, but I'll look around online and experiment a little.
And at last I have to save the model, and make sure everything still works.
For viewing 3D objects you can also consider the Ab3d.PowerToys library - it is not free, but greatly simplifies work with WPF 3D and also comes with many samples.
OBJ is a good format because it is very commonly used and has a very simple structure that is easy to read and write. But it does not support object hierarchies, transformations, animations, bones, etc. If you need any of those, then you will need to use some other format.
I do not have any experience in optimizing hi-poly meshes, so I cannot give you any advice here. Here I can only say that you may also consider combining the meshes with the same material into one mesh - this can reduce the number of draw calls and also improve performance.
My main advice is on how to write your code to make it perform better in WPF 3D. Because you will need to check and compare many vertices, you need to avoid getting data from the MeshGeometry3D.Positions and MeshGeometry3D.TriangleIndices collections - accessing a single value from those collections is very slow (you may check the .Net source and see how many lines of code are behind each get).
Therefore I would recommend having your own mesh structure with Lists (e.g. List<Point3D>, List<int>) for Positions and TriangleIndices. In my observations, Lists of structs are faster than simple arrays of structs (but the lists must be presized - their size needs to be set in the constructor). This way you can access the data much faster. When an extra boost is needed, you may also use unsafe blocks with pointers. You may also add other data to your mesh classes - for example the adjacent edges you mentioned.
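For example, a minimal container along these lines (the class and field names are just illustrative):
using System.Collections.Generic;
using System.Windows.Media.Media3D;

// Illustrative working copy of a mesh; the lists are presized so they never reallocate.
public class WorkingMesh
{
    public readonly List<Point3D> Positions;
    public readonly List<int> TriangleIndices;

    public WorkingMesh(int vertexCount, int triangleIndexCount)
    {
        Positions = new List<Point3D>(vertexCount);
        TriangleIndices = new List<int>(triangleIndexCount);
    }
}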
Once you have your positions and triangle indices set, you can create the WPF's MeshGeometry3D object with the following code:
var wpfMesh = new MeshGeometry3D()
{
Positions = new Point3DCollection(optimizedPositions),
TriangleIndices = new Int32Collection(optimizedTriangleIndices)
};
This is faster than adding each Point3D to the Positions collection.
Because you will not change that instance of wpfMesh (for each change you will create a new MeshGeometry3D), you can freeze it - call Freeze() on it. This allows WPF to optimize the meshes (combine them into vertex buffers) to reduce the number of draw calls. What is more, after you freeze a MeshGeometry3D (or any other WPF object), you can pass it from one thread to another. This means that you can parallelize your work and create the MeshGeometry3D objects in worker threads and then pass them to UI thread as frozen objects.
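In practice that can look something like this; BuildOptimizedMesh is only a placeholder for your own code that fills the Positions and TriangleIndices:
// Running on a worker thread:
MeshGeometry3D wpfMesh = BuildOptimizedMesh(); // placeholder for your own mesh-building code
wpfMesh.Freeze(); // frozen objects may be passed between threads

// Hand the finished mesh over to the UI thread:
Application.Current.Dispatcher.Invoke(() =>
{
    parentGeometryModel3D.Geometry = wpfMesh;
});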
The same applies to changing the Positions (and other data) of a MeshGeometry3D object. It is faster to copy the existing positions to an array or List, change the data there and then recreate the Positions collection from your array, than to change each individual position. Before making any change to a MeshGeometry3D you also need to disconnect it from the parent GeometryModel3D to prevent triggering many change events. This is done with the following:
var mesh = parentGeometryModel3D.Geometry; // Save MeshGeometry3D to mesh
parentGeometryModel3D.Geometry = null; // Disconnect
// modify the mesh here ...
parentGeometryModel3D.Geometry = mesh; // Connect the mesh back
So I'm working on a space game called Star Commander.
Progress was going beautifully until I decided I needed to implement some sort of physics. I mainly need Farseer Physics for collision detection.
Anyway, since it's a space game, when I am declaring my 'World' object:
private World world;
this.world = new World(Vector2.Zero);
I have no gravity. This causes a weird result. I can collide with objects, but once I stop colliding with them, that's it. I can no longer collide with them and will just go straight through them. However, with gravity:
private World world;
this.world = new World(new Vector2(0F, 1F));
Collision works beautifully.
I've tried looking around for help with Farseer, but a lot of the posts are dated and there are no really good sources of information; sadly, I'm pretty sure I'm not going to get the help I need here either.
The only thing I found whilst looking around was that with objects called "Geoms" I need to disable a property called "CollisionResponeEnabled" or something similar.
However the Geom object is no longer present in Farseer Physics 3 and has been totally replaced by Fixtures. Fixtures do not seem to have this property, however.
I can provide any source code that may help, but keep in mind I am still implementing the physics engine into my project and a lot of the code isn't final and kind of messy.
IMPORTANT EDIT:
After recording a short gif to demonstrate my issue, I found out that I can only collide with an object once; to collide with it again, I have to collide with a different object first, but then I cannot collide with that one until I collide with yet another object.
Example:
It seems to me that your bodies might be "sleeping" after the collision. Have you tried setting SleepingAllowed = false on the bodies to see if this is the problem?
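Something along these lines when you create the body - the shape, sizes and density are just placeholders, the important line is SleepingAllowed:
// Placeholder sizes/density; the important line is SleepingAllowed.
Body shipBody = BodyFactory.CreateRectangle(world, 1f, 1f, 1f);
shipBody.BodyType = BodyType.Dynamic;
shipBody.SleepingAllowed = false; // keep the body awake so it never stops responding to contacts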
I want to optimize my basic XNA engine. The structure is somewhat like this: I have a GameWorld instance with several GameObjects attached to it. Every frame I loop over the GameObjects and call each one's draw method. The downside of this implementation is that the GraphicsDevice draw call is made multiple times, once for every object.
Now, I want to reduce the number of draw calls by implementing a structure that, before drawing, transfers all the geometry into one big array containing all the vertex data and performs a single draw call to render it all.
Is that an efficient approach? Can someone suggest a way to optimize this?
Thanks
The first step is to reduce the number of objects you are drawing. There are many ways to do this, most commonly:
Frustum culling - i.e. cull all objects outside of the view frustum
Scene queries - e.g. organise your scene using a BSP tree or a QuadTree - some data structure that gives you the ability to reduce the potentially visible set of objects
Occlusion culling - a more advanced topic, but in some cases you can determine an object is not visible because it is occluded by other geometry.
There are loads of tutorials on the web covering all these. I would attack them in the order above, and probably ignore occlusion culling for now. The most important optimisation in any graphics engine is that the fastest primitive to draw is the one you don't have to draw.
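As a rough illustration of frustum culling in XNA - assuming each of your GameObjects exposes a BoundingSphere and a Draw method, which are names taken from your description rather than a real API:
// Build the frustum from the current camera matrices.
BoundingFrustum frustum = new BoundingFrustum(view * projection);

foreach (GameObject obj in gameObjects)
{
    // Skip anything entirely outside the view frustum.
    if (frustum.Intersects(obj.BoundingSphere))
        obj.Draw(gameTime);
}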
Once you have your potentially visible set of objects it is fine to send them all to the GPU individually, but you must ensure that you do so in a way that minimises state changes on the GPU - e.g. group all objects that use the same texture/material properties together.
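For example, sorting the visible objects by texture before drawing keeps objects sharing a texture next to each other (visibleObjects and the Texture property are assumed names here):
// Group objects that use the same texture so the GPU changes state as rarely as possible.
visibleObjects.Sort((a, b) =>
    a.Texture.GetHashCode().CompareTo(b.Texture.GetHashCode()));

foreach (GameObject obj in visibleObjects)
    obj.Draw(gameTime);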
Once that is done you should find everything is pretty fast. Of course you can always take it further but the above steps are probably the best way to start.
Just to make the point clear - don't just assume that fewer draw calls = faster. Of course it depends on many factors, including hardware, but generally the XNA/DirectX API is pretty good at queueing geometry through the pipeline - this is what it's for, after all. The key is not minimising calls but minimising the amount of state changes (textures/shaders etc.) required across the scene.