XNA Collision Detection with BoundingBoxes (C#)

We're building an FPS in XNA, and we're in the process of implementing collision detection. Our walls are rectangular, so the BoundingBox class seemed like a good option, but it's axis-aligned, and we'd really like to have non-axis-aligned walls. Most of the discussion we've seen says to use BoundingSpheres, but that doesn't seem like the best option because the walls are really just rotated rectangles.
We've managed to model the character's movement as rays, and we know that we should be able to translate these rays from world space into the axis-aligned box's space using a rotation matrix. Unfortunately, we think something is amiss with this transformation, because we seem to be colliding with invisible (or somehow larger) walls, and our ray generation works for floor intersections (which don't rotate [yet, anyway]). The only reason we're using rays is that they're easy to generate, should be easy to transform into the box's space, and we can use XNA classes like the axis-aligned BoundingBox for the actual collision detection.
We've implemented it like this:
internal float? collide(Ray ray) {
    Matrix transform = Matrix.Invert(Transform);
    ray.Direction = Vector3.Transform(ray.Direction, transform);
    ray.Position = Vector3.Transform(ray.Position, transform);
    return new BoundingBox(new Vector3(-.5f, -.5f, -.5f), new Vector3(.5f, .5f, .5f)).Intersects(ray);
}
public Matrix Transform { get { return Matrix.CreateScale(size) * rotate * Matrix.CreateTranslation(pos); } }
Rotate is passed in the constructor and is formed with a call to Matrix.CreateRotationY(theta). The ray passed into collide is in world space. Transform is also the matrix applied to our model for rendering. This model is a cube which goes from (-.5, -.5, -.5) to (.5, .5, .5).
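For reference, the intended math can be sketched language-agnostically (Python here, with the scale factor omitted for brevity). One detail worth flagging: XNA's Vector3.Transform applies the matrix's translation, which is correct for the ray's position but not for its direction; directions are normally run through Vector3.TransformNormal instead. The sketch below assumes a Y-rotated unit cube:

```python
import math

def rotate_y(theta):
    """Row-major 3x3 rotation about the Y axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def transpose(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

def mat_vec(m, v):
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

def ray_vs_rotated_unit_box(origin, direction, theta, box_pos):
    """Distance to a unit cube at box_pos rotated by theta about Y, or None."""
    inv_rot = transpose(rotate_y(theta))  # inverse of a rotation = transpose
    # The position gets the full inverse transform (untranslate, then unrotate)...
    local_o = mat_vec(inv_rot, [origin[i] - box_pos[i] for i in range(3)])
    # ...but the direction must be transformed WITHOUT the translation part.
    local_d = mat_vec(inv_rot, direction)
    # Standard slab test against the AABB from (-.5,-.5,-.5) to (.5,.5,.5).
    tmin, tmax = -math.inf, math.inf
    for o, d in zip(local_o, local_d):
        if abs(d) < 1e-12:
            if not -0.5 <= o <= 0.5:
                return None
            continue
        t1, t2 = (-0.5 - o) / d, (0.5 - o) / d
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return max(tmin, 0.0) if tmin <= tmax and tmax >= 0 else None
```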
In any event, our question boils down to a few things: Is there a better way to do collision detection for non-axis-aligned boxes against rays? Is there a better way to do it than using rays? Are we just idiots, with something obviously wrong in our code? If possible, we'd like to rely as much as possible on XNA's classes (or at least on code that's already been written), and not have to write much more than a wrapper class.

Well, if you're trying to rotate rectangles (2D) and keep collision detection, then I have an answer. When I learned XNA, I used rectangles. You could place lots of little rectangles inside your texture rectangle and test each one for a collision.

Related

Sphere vs Rotation Box Custom Collision Problem (C#, Unity)

I don't really like to post questions about problems without doing the research, but I'm close to giving up, so I thought I'd give it a shot and ask you about my problem.
I want to create custom collision detection in Unity (so please don't advise me to "use rigidbodies and/or colliders"; I'm avoiding them on purpose).
The main idea: I want to detect a collision between a basic sphere and a basic box. I already found the AABB-vs-sphere approach with the following solution:
bool intersect(sphere, box) {
    var x = Math.max(box.minX, Math.min(sphere.x, box.maxX));
    var y = Math.max(box.minY, Math.min(sphere.y, box.maxY));
    var z = Math.max(box.minZ, Math.min(sphere.z, box.maxZ));
    var distance = Math.sqrt((x - sphere.x) * (x - sphere.x) +
                             (y - sphere.y) * (y - sphere.y) +
                             (z - sphere.z) * (z - sphere.z));
    return distance < sphere.radius;
}
And this code does the job: with the box's bounds and the sphere's center point and radius, I can detect when the sphere collides with the box.
The problem is that I want to rotate the cube at runtime, and that screws everything up: the bounds drift away from the cube and the collision is gone (or happens in random places). I've read comments saying that bounding boxes don't work with rotation, but I'm not sure what else I can use to solve this problem.
Can you help me with this topic, please? I'll take any advice I can get (except colliders and rigidbodies, of course).
Thank you very much.
You might try using the separating axis theorem. Essentially, for a polyhedron, you use the normal of each face to create an axis. Project the two shapes you are comparing onto each axis and look for an intersection. If there is no intersection along any of the axes, there is no intersection of shapes. For a sphere, you will just need to project onto the polyhedron's axes. There is a great 2D intro to this from metanet.
Edit: hey, check it out-- a Unity implementation.
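To make the projection step concrete, here is a minimal sketch (Python with made-up names, not Unity API) of the face-axis tests plus one extra center-to-closest-corner axis, which handles the corner case the metanet article discusses:

```python
import math

def project_box(corners, axis):
    """Project the box's corners onto an axis; return the (min, max) interval."""
    dots = [sum(c[i] * axis[i] for i in range(3)) for c in corners]
    return min(dots), max(dots)

def sat_box_sphere(corners, face_axes, center, radius):
    # The sphere projects onto every axis as [d - r, d + r], so only the
    # box's face axes (plus one sphere-to-corner axis) need to be tested.
    closest = min(corners,
                  key=lambda c: sum((c[i] - center[i]) ** 2 for i in range(3)))
    diff = [closest[i] - center[i] for i in range(3)]
    n = math.sqrt(sum(e * e for e in diff))
    axes = list(face_axes) + ([[e / n for e in diff]] if n > 1e-12 else [])
    for axis in axes:
        lo, hi = project_box(corners, axis)
        d = sum(center[i] * axis[i] for i in range(3))
        if d + radius < lo or d - radius > hi:
            return False  # found a separating axis: no intersection
    return True           # no gap on any axis: the shapes intersect
```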
A good method to find if an AABB (axis aligned bounding box) and sphere are intersecting is to find the closest point on the box to the sphere's center and determine if that point is within the sphere's radius. If so, then they are intersecting, if not then not.
I believe you can do the same thing with this more complicated scenario. You can represent a rotated AABB with a geometrical shape called a parallelepiped. You would then find the closest point on the parallelepiped to the center of the sphere and again check if that point exists within the sphere's radius. If so, then they intersect. If not, then not.
The difficult part is finding the closest point on the parallelepiped. You can represent a parallelepiped in code with 4 3d vectors: center, extentRight, extentUp, and extentForward. This is similar to how you can represent an AABB with a 3d vector for center along with 3 floats: extentRight, extentUp, and extentForward. The difference is that for the parallelepiped those 3 extents are not 1 dimensional scalars, but are full vectors.
When finding the closest point on an AABB surface to a given point, you are basically taking that given point and clamping it to the AABB's volume. You would, for example, call Math.Clamp(point.x, AABB.Min.x, AABB.Max.x) and so on for Y and Z.
The resulting X,Y,Z would be the closest point on the AABB surface to the given point.
To do this for a parallelepiped you need to solve the "linear combination" (math keyword) of extentRight(ER), extentUp(EU), and extentForward(EF) to get the given point. In other words, what scalars do you have to multiply ER, EU, and EF by to get to the given point? When you find those scalars you need to clamp them between 0 and 1 and then multiply them again by ER, EU, and EF respectively to get that closest point on the surface of the parallelepiped. Be sure to offset the given point by the Parallelepiped's min position so that the whole calculation is done in its local space.
I didn't want to spend any extra time learning how to solve for a linear combination (it seems to involve things like an "augmented matrix" and "Gaussian elimination"), otherwise I'd include that here too. Hopefully this sets you, or anyone else reading this, off on the right track.
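For completeness, the linear-combination step left open above can be done with Cramer's rule (equivalent to the Gaussian elimination mentioned). A Python sketch with illustrative names; note that clamping the coefficients gives the exact closest point when ER, EU, and EF are mutually orthogonal (i.e. a rotated box), and only an approximation for a sheared parallelepiped:

```python
def det3(m):
    """Determinant of a row-major 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def closest_point_on_parallelepiped(p, min_corner, er, eu, ef):
    # Solve s*ER + t*EU + u*EF = p - min for (s, t, u) via Cramer's rule.
    rel = [p[i] - min_corner[i] for i in range(3)]
    cols = [er, eu, ef]
    a = [[cols[c][r] for c in range(3)] for r in range(3)]  # extents as columns
    d = det3(a)
    coeffs = []
    for c in range(3):
        ac = [row[:] for row in a]   # replace column c with the target vector
        for r in range(3):
            ac[r][c] = rel[r]
        coeffs.append(det3(ac) / d)
    # Clamp each coordinate into the volume, then map back to world space.
    coeffs = [min(1.0, max(0.0, s)) for s in coeffs]
    return [min_corner[i] + sum(coeffs[k] * cols[k][i] for k in range(3))
            for i in range(3)]
```

The sphere test then checks whether the returned point lies within the sphere's radius, exactly as in the AABB case.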
Edit:
Actually, I think it's a lot simpler and you don't need a parallelepiped. If you have access to the rotation (Vector3 or Quaternion) that rotated the cube, you can take its inverse and use that inverse rotation to orbit the sphere around the cube, so that the new scenario is just the normal axis-aligned cube and the orbited sphere. Then you can do a normal AABB-sphere collision detection.
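That simpler idea can be sketched like so (Python, with a Y-axis rotation standing in for the general quaternion case): rotate the sphere's center by the inverse rotation about the cube's center, then reuse the plain AABB test from the question.

```python
import math

def aabb_sphere(minb, maxb, c, r):
    """The question's AABB-vs-sphere test: clamp the center, compare distance."""
    closest = [max(minb[i], min(c[i], maxb[i])) for i in range(3)]
    return math.dist(closest, c) < r

def rotated_cube_vs_sphere(cube_center, half_extent, theta, sphere_c, r):
    # The inverse of a rotation by theta about Y is a rotation by -theta.
    cs, sn = math.cos(-theta), math.sin(-theta)
    rel = [sphere_c[i] - cube_center[i] for i in range(3)]
    local = [cs * rel[0] + sn * rel[2], rel[1], -sn * rel[0] + cs * rel[2]]
    # In the cube's local space the cube is axis-aligned again.
    return aabb_sphere([-half_extent] * 3, [half_extent] * 3, local, r)
```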

How to calculate the 3D vector perpendicular to a polygon surface in a specific direction

I'm creating an asteroid mining game in Unity 3D, where "down" is the center of the planet you are on (So I am mostly using local rotations when dealing with vectors). I have a working physics-based character controller, but it has problems dealing with slopes. This is because when I add a force to the character, currently it pushes along the player's forward vector (see picture). What I want to do is calculate the vector that is parallel to this terrain surface, but in the direction that the player is facing (so that the player can move up the slope).
I originally thought that I could just find the vector perpendicular to the normal, but then how would I know which direction it points? Also complicating matters is the fact that the player could be oriented in any way relative to the global x, y, and z axes.
Either way, I have the surface normals of the terrain, I have all of the player's directional vectors, but I just can't figure out how to put them all together. I can upload code or screenshots of the editor if necessary. Thanks.
The usual way is to factor out the component of the forward vector that is parallel to the surface normal:
fp = f - dot(f, n) * n
See vector rejection.
This formulation will also make fp shorter the steeper the slope is. If you don't want that, you can re-scale fp afterwards to have the same length as f.
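The two lines above, as a runnable sketch (plain Python tuples standing in for Unity's Vector3):

```python
import math

def slope_forward(f, n, keep_length=True):
    """fp = f - dot(f, n) * n, optionally re-scaled to f's original length."""
    d = sum(a * b for a, b in zip(f, n))       # dot(f, n)
    fp = [a - d * b for a, b in zip(f, n)]     # reject f from the normal n
    if keep_length:
        flen, fplen = math.hypot(*f), math.hypot(*fp)
        if fplen > 1e-9:                       # degenerate when f is parallel to n
            fp = [a * flen / fplen for a in fp]
    return fp
```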

Getting a Model to Clamp to the Camera in 3D

I'm trying to get a 3D model that I've imported into MonoGame (XNA 4.0) to stay in the bottom right of the screen. It's a gun, a basic pistol model, nothing fancy; it doesn't even have textures! But I can't figure it out. I've tried some maths to keep it there, but the problem is that the X and Z offset I set is hard-coded, so when the player rotates the camera, the pistol rotates to face the new LookAt but the model itself doesn't move position, so it goes out of sight very quickly.
It's been a long road for a pretty pointless 3D game for an A-Level (high-school equivalent, I suppose) computer science course, which I'm making in MonoGame to rake in those complexity marks, but I've got to the last stretch: I need my FBX model to sit in the bottom right of the screen as if the player were holding it. Right now, if it sits in the centre of the screen it's not very noticeable; however, if I increase my X or Z offset to position it where I want, then when you rotate the camera it falls behind the camera or swings in a massive circle around it, which breaks immersion (despite how silly it sounds, it really bugs me).
After the simple way didn't work, I tried attaching it to the BoundingBox the current camera uses: still left a gap. Then I tried a BoundingSphere, to see if I could get it to move around the sphere's edge and follow the player that way: no luck yet.
tl;dr I've tried attaching gun to BoundingBox, BoundingSphere and regular Vector Positioning to no success.
if (pausedGame == false)
{
    camera.UpdateCamera(gameTime);
    if (carryingWeapon)
    {
        weapons[weaponToCarry].SetRotationMatrix(Matrix.CreateRotationX(camera.GetCameraRotation().X), Matrix.CreateRotationY(camera.GetCameraRotation().Y));
        weapons[weaponToCarry].SetPosition(camera.weaponSlot.Center + camera.GetLookAtOffSet());
        //Debug.WriteLine(weapons[weaponToCarry].Position);
        //Debug.WriteLine(camera.Position);
    }
    CheckCollision();
    AIMovement();
    UpdateVisitedNodes(updatedNodes);
}
That was my update for it: simply setting the position and rotation of the model. The 'weaponSlot.Center' is there because I last left off using a BoundingSphere, so that piece of code is still in there.
else if (_state == ItemState.Dynamic)
{
    foreach (var mesh in model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.PreferPerPixelLighting = true;
            effect.World = Matrix.CreateScale(SCALE, SCALE, SCALE) * rotationMatrix *
                Matrix.CreateTranslation(position);
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }
}
This is my draw for the actual item; it's nothing fancy.
As you can imagine, I want it locked to the bottom right of the frustum, moving according to where the player is looking. Right now the positioning of the item is really messed up. Any conceptual help is welcome! Thanks.
What I would do is parent the gun to the camera (it's done that way in engines like Unity as well).
You'd need to do that yourself in MonoGame (I think). The basic principle is to first transform the gun relative to the camera and multiply by the camera's world matrix afterwards; this also means storing the gun's position relative to the camera:
gun.worldmatrix = Matrix.CreateTranslation(gun.position) * camera.world;
Now you just move and rotate the camera, and the gun keeps its position relative to it. Remember: the gun's position (and rotation) has to be relative to the camera in that case.
The important thing is just the order in which you multiply the matrices (I haven't tried it, so hopefully I got it right above).
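The matrix order can be sanity-checked with a small sketch (Python, using XNA's row-vector convention, illustrative names): build the gun's world matrix as its local translation times the camera's world matrix, and the gun stays at a fixed offset from the camera under any rotation.

```python
import math

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def translation(x, y, z):
    # Row-vector convention: the translation lives in the bottom row, as in XNA.
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [x, y, z, 1]]

def rotation_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, -s, 0], [0, 1, 0, 0], [s, 0, c, 0], [0, 0, 0, 1]]

def transform_point(p, m):
    v = [p[0], p[1], p[2], 1.0]  # v' = v * M (row vector times matrix)
    return [sum(v[k] * m[k][c] for k in range(4)) for c in range(3)]

def gun_world(gun_local_pos, camera_world):
    # gun.worldmatrix = Matrix.CreateTranslation(gun.position) * camera.world
    return mat_mul(translation(*gun_local_pos), camera_world)
```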

Extending textured gameObject smoothly in Unity

I'm creating a simple game and I need to create an extending cylinder. I know how to do it normally (by changing the object's pivot point and scaling it up), but now I have a texture on it and I don't want it to stretch.
I thought about adding small segments to the end of the cylinder when it has to grow, but that won't be smooth and might affect performance. Do you know any solution to this?
This is the rope texture that I'm using right now (but it might change in the future):
Before scaling
After scaling
I don't want it to stretch
This is not easy to accomplish but it is possible.
You have two ways to do this.
1. One is to procedurally generate the rope mesh and assign the material at runtime. This is complicated for beginners; I can't help with this one.
2. The other solution doesn't require procedural mesh generation: change the texture tiling while the object changes size.
For this to work, your texture must be tileable. You can't just use any random texture from online. You also need a normal map to actually make it look like a rope. Here is a tutorial for making a rope texture with a normal map in Maya. (There are other parts of the video you have to watch too.)
Select the texture, change Texture Type to Texture, change Wrap Mode to Repeat, then click Apply.
Get the MeshRenderer of the mesh, then get the Material of the 3D object from the MeshRenderer. Use ropeMat.SetTextureScale to change the tiling of the texture. For example, when you change the xTile and yTile values in the code below, the texture on the mesh will be re-tiled.
public float xTile, yTile;
public GameObject rope;
Material ropeMat;

void Start()
{
    ropeMat = rope.GetComponent<MeshRenderer>().material;
}

void Update()
{
    ropeMat.SetTextureScale("_MainTex", new Vector2(xTile, yTile));
}
Now you have to find a way to map the xTile and yTile values to the size of the mesh. It's not simple. Here is a complete method to calculate what the xTile and yTile values should be when re-sizing the mesh/rope.
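One plausible mapping, as a sketch with made-up parameters (not the full method referenced above): if one repeat of the texture should cover a fixed world-space length of rope, the y tiling factor is just the current rope length divided by that, and the length of a scaled cylinder is its base height times its y scale.

```python
def rope_tiling(scale_y, base_height, tile_world_length, x_tile=1.0):
    """Return (xTile, yTile) so the texture repeats instead of stretching."""
    length = base_height * scale_y            # current world-space rope length
    return (x_tile, length / tile_world_length)
```

Feeding these values into SetTextureScale each frame keeps the repeat density constant as the rope grows.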

What kind of projection and camera to use for 3D/2D top-down view?

I'll probably smack myself in the face once the answer becomes clear, but I'm afraid I can't figure out what to do with my camera in order to effectively mix scrolling 2D and 3D objects.
Currently I'm using a displacement camera, which I implemented when this was still just a 2D project. The camera displaces the draw position of all 2D objects based on where its position is in the world. In case that's not clear, here's the code:
public void DrawSprite(Sprite sprite)
{
    Vector2 drawtopLeftPosition = ApplyTransformation(sprite.Position);
    //TODO: add culling logic here
    sprite.Draw(spriteBatch, drawtopLeftPosition);
}

private Vector2 ApplyTransformation(Vector2 spritetopLeftPosition)
{
    return (spritetopLeftPosition - topLeftPosition);
}
Simple enough. This worked effectively until I tried to bring 3D into the equation. I have several spheres that I want to display alongside the 2D game objects. I've already figured out how to Z-order everything properly, but I cannot get the spheres to project correctly: they appear and disappear depending on where the camera is, and they often fly around erratically with even the slightest camera movement. Here are my camera matrices:
viewMatrix = Matrix.CreateLookAt(new Vector3(Position, 1f), new Vector3(Position, 0), Vector3.Up);
projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, graphics.Viewport.AspectRatio, 1f, 1000f);
Note that the Vector2 Position is the absolute center of the camera viewport, not the top left or anything like that. I have also tried using OrthographicOffCenter and Orthographic projections. The viewMatrix updates each frame with the same CreateLookAt function based on the current camera position. Here is the Camera method which draws a 3D object:
public void DrawModel(Sprite3D sprite, GraphicsDevice device)
{
    device.BlendState = BlendState.Opaque;
    device.DepthStencilState = DepthStencilState.Default;
    device.SamplerStates[0] = SamplerState.LinearWrap;
    sprite.DrawBasic(this, device);
}
My main point of confusion is whether I should displace the 3D objects as well as the 2D ones. I have tried doing so, but I've had more success actually glimpsing the 3D objects on screen when I do not.
Here is the Sprite3D section for drawing:
public Matrix GenerateWorld()
{
    return Matrix.CreateScale(Scale) * Matrix.CreateTranslation(new Vector3(Position, -ZPosition)) * Matrix.CreateFromYawPitchRoll(Rotation3D.X, Rotation3D.Y, Rotation3D.Z);
}

private Matrix GenerateDEBUGWorld()
{
    return Matrix.CreateScale(Scale) * Matrix.CreateTranslation(Vector3.Zero);
}

public void DrawBasic(Camera camera, GraphicsDevice device)
{
    Matrix[] transforms = new Matrix[Model.Bones.Count];
    Model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in Model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.World = (transforms[mesh.ParentBone.Index] * GenerateDEBUGWorld());
            effect.View = camera.ViewMatrix;
            effect.Projection = camera.ProjectionMatrix;
        }
        mesh.Draw();
    }
}
GenerateDEBUGWorld simply places the sphere at 0,0 so that I always know where it should be.
A few things I've noticed:
If I set the Z position of the camera to a larger number (say, 10), the spheres move less erratically, but still wrongly.
Using OrthographicOffCenter projection displays tiny little spheres at the center of the screen which do not change in position or scale, even if I multiply the Sprite3D's Scale variable by 1000.
So, the main question: should I even use a displacement camera, or is there a better way to mix 2D and 3D effectively (billboarding, perhaps? but I don't want to add unnecessary complexity to this project)? And, if I do use displacement, should I displace the 3D objects as well as the 2D? Finally, what sort of projection should be used?
Update: if I move the camera back to a Z position of 100 and scale the spheres up by 100%, they behave somewhat normally: when the camera moves left they move right, when it moves up they move down, etc., as you'd expect for a perspective projection. However, they move far too much in relation to the rest of the 2D objects. Is this a matter of scale? I feel like it's very difficult to reconcile the way sprites are shown with the way 3D objects are shown onscreen...
It sounds like what you're looking for is Unproject - this will take a 2D point and "cast it" into 3D space. If all of your objects are in 3D space, your typical Ortho/LookAt camera should work as you'd expect...I'm also going completely from memory here, since I can't be arsed to look it up. ;)
Just a thought, but the Z in your view matrix is constrained between 1 and 0. Update the Z components of those Vector3s to be the actual Z of the camera and see what that does for you.
<2 cents/>
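To make the orthographic route concrete: if the off-center projection's left/right/top/bottom are set to the 2D camera's view rectangle, a 3D object at a sprite's world position lands on exactly the pixel the displacement camera draws the sprite at. A sketch (Python, illustrative names):

```python
def ortho_pixel(world_xy, cam_top_left, viewport):
    """Project a world point through an off-center ortho and return the pixel."""
    (x, y), (cx, cy), (w, h) = world_xy, cam_top_left, viewport
    l, r, t, b = cx, cx + w, cy, cy + h    # note b > t: world y grows downward
    x_ndc = (2 * x - (r + l)) / (r - l)
    y_ndc = (2 * y - (t + b)) / (t - b)
    # NDC -> pixel with a top-left origin, matching SpriteBatch coordinates.
    return ((x_ndc + 1) / 2 * w, (1 - y_ndc) / 2 * h)
```

This reproduces the ApplyTransformation displacement exactly, so 2D sprites and 3D models stay registered as the camera scrolls.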
