Figuring out which direction my object is facing on a 2D plane? - c#

My friend and I were messing around in XNA 4.0 making a 2D racing game, kind of like Ivan “Ironman” Stewart’s Super Off Road. The problem we are having is knowing which direction our car is facing so we can move it appropriately. We could track the direction with an enum value (North, South, East, West), but we don't want to do that for a number of reasons.
We were wondering if there is a way to accomplish this with math. Maybe by designating an anchor point at the hood of the car, having the car always move towards that spot, and then moving the anchor point. Or maybe there is a way using a 2D vector. We aren't sure.
I figured since we hit a hard spot, we should ask the coding community for help!
Just to be clear: I'm not looking for code; I just want to discuss some concepts of 2D movement in all directions without having to track a direction enum. I know that can't be the only way to do it.

Car physics is actually a very difficult subject. I wish you well, but you're embarking on a difficult quest.
As for the answer: you can store the direction angle in radians and use the atan2 function to get the relation between angles.
Or you can use a Vector2D and vector math to determine the angles; atan2 will be your friend there as well.
A snippet of my code on the issue (it's Java, but the idea carries over):
public class RotatingImage implements RotatingImageVO {

    protected double x, y; // center position
    protected double facingx, facingy;
    protected double angle;

    protected void calculateAngle() {
        angle = Math.atan2(x - facingx, facingy - y);
    }
}
Remember that calculating atan2 is expensive. When I did it on every draw iteration for every object (a tower defense, the towers were rotating) it took ~30% of my computing power. Do it only when you detect a noticeable angle change, like this:
public void setFacingPoint(double facingx, double facingy) {
    if (Math.abs((facingx - this.facingx) / (facingx + this.facingx)) > 0.002
            || Math.abs((facingy - this.facingy) / (facingy + this.facingy)) > 0.002) {
        this.facingx = facingx;
        this.facingy = facingy;
        calculateAngle();
    }
}
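In C#/XNA terms, the same idea looks roughly like this (a minimal sketch, not from the answer above; the Car class and field names are made up for illustration):

public class Car
{
    public Vector2 Position;
    public float Heading;   // facing angle in radians, 0 = facing along +X

    public void Update(float dt, float speed, float steering)
    {
        Heading += steering * dt;   // turn the car

        // Derive the facing direction from the stored angle...
        Vector2 forward = new Vector2((float)Math.Cos(Heading), (float)Math.Sin(Heading));

        // ...and move along it.
        Position += forward * speed * dt;
    }
}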

You can represent the direction by using a normalized vector. This is just a hard-coded example, but you could easily use the input from a gamepad thumbstick.
Vector2 north = new Vector2(0, -1);
And later in your code move your sprite like this (assuming there is a position vector).
float speed = 100; // 100 pixels per second.
Vector2 direction = north;
position += direction * ((float)gameTime.ElapsedGameTime.TotalSeconds * speed);
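For instance, reading the direction from the left thumbstick instead of hard-coding it could look something like this (a rough sketch; note that the XNA thumbstick Y axis points up while screen Y points down):

GamePadState pad = GamePad.GetState(PlayerIndex.One);
Vector2 direction = new Vector2(pad.ThumbSticks.Left.X, -pad.ThumbSticks.Left.Y);   // flip Y for screen space
if (direction != Vector2.Zero)
    direction.Normalize();

float speed = 100f;   // pixels per second
position += direction * ((float)gameTime.ElapsedGameTime.TotalSeconds * speed);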

Describing the direction with a unit vector and the position with a point makes sense. Then, when the car moves forward, you do
currentPosition = currentPosition + distance * directionVector; // (pseudocode)
When changing the direction, it is good to use a (rotation) matrix to transform the direction vector.
(I'm not familiar with XNA)
I wrote some dummy code to illustrate:
[Test]
public void MoveForwardTest()
{
    var position = new Point(0, 0);
    var direction = new Vector(1, 0);
    double distance = 5;

    // Update position, moving forward 'distance' along 'direction'.
    position = position + distance * direction;

    Assert.AreEqual(5.0, position.X, 1e-3);
    Assert.AreEqual(0, position.Y, 1e-3);
}

[Test]
public void RotateThenMoveTest()
{
    var position = new Point(0, 0);
    var direction = new Vector(1, 0);
    double distance = 5;

    // Create the rotation matrix.
    var rotateTransform = new RotateTransform(90);

    // Apply the rotation to the direction.
    direction = Vector.Multiply(direction, rotateTransform.Value);

    // Update position, moving forward 'distance' along 'direction'.
    position = position + distance * direction;

    Assert.AreEqual(0, position.X, 1e-3);
    Assert.AreEqual(5.0, position.Y, 1e-3);
}

[Test]
public void CheckIfOtherCarIsInFrontTest()
{
    var position = new Point(0, 0);
    var direction = new Vector(1, 0);
    var otherCarPosition = new Point(1, 0);

    // Create a vector from the current car to the other car.
    Vector vectorTo = Point.Subtract(otherCarPosition, position);

    // If the dot product is > 0, the other car is in front.
    Assert.IsTrue(Vector.Multiply(direction, vectorTo) > 0);
}

[Test]
public void AngleToNorthTest()
{
    var direction = new Vector(1, 0);
    var northDirection = new Vector(0, 1);
    Assert.AreEqual(90, Vector.AngleBetween(direction, northDirection), 1e-3);
}
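The tests above appear to use the WPF Vector/Point types; in XNA the rotate-then-move step would look roughly like this (a sketch only; Matrix.CreateRotationZ expects radians, so the 90° becomes MathHelper.PiOver2):

Vector2 position = Vector2.Zero;
Vector2 direction = Vector2.UnitX;
float distance = 5f;

// Rotate the direction vector 90 degrees around the Z axis.
Matrix rotation = Matrix.CreateRotationZ(MathHelper.PiOver2);
direction = Vector2.Transform(direction, rotation);

// Move forward along the rotated direction; position ends up at roughly (0, 5).
position += direction * distance;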

Related

Rotate faces so normals align with an axis in C#

I have an array of faces; each face has an array of points in 3D space. I want to fill an array of unfolded faces that contains the faces with their normals all pointing along the z axis. DirectionA is the z axis, directionB is the normal of the face. I work out the angle and axis, then apply the rotation. Since I have points, myPoint is a point rather than a vector; could that be a problem? My logic is not right somewhere...
Here is my current code:
public void UnfoldAll()
{
    Vector3d directionA = new Vector3d(0, 0, 1); // z axis
    int iii = 0;
    foreach (Face f in faces)
    {
        Vector3d directionB = normals[f.normal - 1]; // normal from face
        float rotationAngle = (float)Math.Acos(directionA.DotProduct(directionB));
        Vector3d rotationAxis = directionA.CrossProduct(directionB);
        // rotate all points around axis by angle
        for (int i = 0; i < f.Corners3D.Length; i++)
        {
            Vector3d myPoint;
            myPoint.X = f.Corners3D[i].X;
            myPoint.Y = f.Corners3D[i].Y;
            myPoint.Z = f.Corners3D[i].Z;
            myPoint = Vector3d.Normalize(myPoint);
            Vector3d vxp = Vector3d.CrossProduct(rotationAxis, myPoint);
            Vector3d vxvxp = Vector3d.CrossProduct(rotationAxis, vxp);
            Vector3d final = directionB;
            var angle = Math.Sin(rotationAngle);
            var angle2 = 1 - Math.Cos(rotationAngle);
            final.X += (angle * vxp.X) + (angle2 * vxvxp.X);
            final.Y += (angle * vxp.Y) + (angle2 * vxvxp.Y);
            final.Z += (angle * vxp.Z) + (angle2 * vxvxp.Z);
            unfoldedFaces[iii].Corners3D[i].X = final.X;
            unfoldedFaces[iii].Corners3D[i].Y = final.Y;
            unfoldedFaces[iii].Corners3D[i].Z = final.Z;
        }
    }
    iii++;
}
Any suggestions would be great. Thank you.
When doing any kind of 3D transformation, it is usually a good idea to stay away from angles if you can. Things tend to be easier if you stick to matrices, quaternions and vectors as much as possible.
If you want to rotate a face you should find a transform that describes the rotation, and then simply apply this transform to each of the vertices to get the rotated triangle. You could use either a matrix or a quaternion to describe a rotational transform.
The exact method will depend a bit on what library you are using for transforms. For Unity3D you have the Quaternion.FromToRotation that should do what you want, just input the current normal as the from vector, and the desired normal as the toDirection.
If you are using System.Numerics you can use Quaternion.CreateFromAxisAngle. Just take the cross product of your two normals to get the axis, and take the arccos of their dot product to get the angle. Don't forget to ensure the normals are normalized.
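A minimal sketch of that recipe with System.Numerics (the AlignToZ helper is made up for illustration; it assumes the face normal and vertex are plain Vector3 values):

using System;
using System.Numerics;

static Vector3 AlignToZ(Vector3 vertex, Vector3 faceNormal)
{
    Vector3 to = new Vector3(0, 0, 1);          // target normal: the z axis
    Vector3 from = Vector3.Normalize(faceNormal);

    Vector3 cross = Vector3.Cross(from, to);
    if (cross.LengthSquared() < 1e-12f)
        return vertex;                           // normals already (anti)parallel; handle that case separately

    Vector3 axis = Vector3.Normalize(cross);
    float dot = Vector3.Dot(from, to);
    dot = Math.Max(-1f, Math.Min(1f, dot));      // guard against rounding slightly outside [-1, 1]
    float angle = (float)Math.Acos(dot);

    Quaternion rotation = Quaternion.CreateFromAxisAngle(axis, angle);
    return Vector3.Transform(vertex, rotation);  // rotate each vertex of the face by this quaternion
}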
Thank you that was helpful, here is my code if anyone else needs help:
public void UnfoldAll()
{
    Vector3d directionA = new Vector3d(0, 0, 1); // z axis
    unfoldedFaces = new UnfoldedFace[faces.Length];
    int iii = 0;
    foreach (Face f in faces)
    {
        unfoldedFaces[iii].Corners3D = f.Corners3D;
        Vector3d directionB = normals[f.normal - 1]; // normal from face
        directionB = Vector3d.Normalize(directionB);
        Vector3d vxp = Vector3d.CrossProduct(directionA, directionB);
        float rotationAngle = (float)Math.Acos(directionA.DotProduct(directionB));
        Quaternion q = Quaternion.FromAxisAngleQ(vxp, rotationAngle);
        q.Rotate(unfoldedFaces[iii].Corners3D);
        iii++;
    }
}

Get random vector in the general direction of another?

I'm trying to create a wander to target behavior for my enemies so that they wander, but with a purpose. After some research and effort, I determined that I would need to accomplish three things to get it to work:
Create a basic wander behavior.
Determine general direction to target.
Generate random destination in general direction of target.
Now, the first step is complete for sure. I believe the second step is complete, but let me explain what I'm doing, as it may need adjustments to achieve the desired result. Currently, I calculate the dot product between the target's location and each directional vector (i.e. left, right, forward, backward); whichever calculation results in the highest dot product determines the general direction:
private readonly Vector3[] _compass = { Vector3.left, Vector3.right, Vector3.forward, Vector3.back };

private Vector3 GetDirectionOfTarget() {
    var result = Vector3.zero;
    var maximumDotProduct = _negativeInfinity;
    foreach (var direction in _compass) {
        var dotProduct = Vector3.Dot(Target.transform.position, direction);
        if (dotProduct > maximumDotProduct)
        {
            result = direction;
            maximumDotProduct = dotProduct;
        }
    }
    return result;
}
After calculating the general direction, I then get a random point within a unit sphere, multiply that vector by a user specified scalar value for range, and then add that to the general direction. However, after breaking that down mentally, it doesn't sound correct, because I'm going to get a random vector, then scale it out, and finally add a normalized direction to it, which just increases the range in the added direction. My code to get the new destination by applying this is:
var directionOfTarget = GetDirectionOfTarget();
var randomOffset = directionOfTarget + UnityEngine.Random.insideUnitSphere * MaximumRadiusFromSpawnPoint;
NavMesh.SamplePosition(transform.position + randomOffset, out NavMeshHit navHit, MaximumRadiusFromSpawnPoint, SampleNavigationArea);
One thought I have is to get the general direction, then manually create a random vector using that direction:
var directionOfTarget = GetDirectionOfTarget();
// This only works because directionOfTarget is normalized.
var x = _random.Next(0, Math.Abs(directionOfTarget.x)) * directionOfTarget.x;
var y = // same as x for y.
var z = // same as x for z.
var randomInDirection = new Vector3(x, y, z);
However, this isn't it either because this will create movement in a box pattern instead of the more realistic approach of including movement along the x and z axes together.
Update 1: I managed to create a more accurate general direction through brute force:
for (float x = -1.0f; x < 1.0f; x += 0.1f) {
    for (float z = -1.0f; z < 1.0f; z += 0.1f) {
        var direction = new Vector3(x, 0, z);
        var dotProduct = Vector3.Dot(Target.transform.position, direction);
        if (dotProduct > maximumDotProduct) {
            result = direction;
            maximumDotProduct = dotProduct;
        }
    }
}
However, even though this produces a better result, it's expensive, and it still doesn't focus the wander in the direction of the target (i.e. there's still backtracking going on).
Update 2: After some more tinkering, I removed the brute force attempt and updated my compass to include bidirectional vectors. This significantly reduced the cost of calculating the general direction of the target. Adding to this change, I also ran with the idea of generating a random vector based on the general direction and though it's not a graceful solution in my opinion, it works:
private readonly Vector3[] _compass = {
new Vector3(-1, 0, -1), new Vector3(-1, 0, 0), new Vector3(-1, 0, 1),
new Vector3(0, 0, -1), new Vector3(0, 0, 1),
new Vector3(1, 0, 0), new Vector3(1, 0, 1)
};
var directionOfTarget = GetDirectionOfTarget();
var x = UnityEngine.Random.Range(0, Math.Abs(directionOfTarget.x)) * directionOfTarget.x;
var z = UnityEngine.Random.Range(0, Math.Abs(directionOfTarget.z)) * directionOfTarget.z;
var randomOffset = new Vector3(x, 0, z) * AreaSize;
NavMesh.SamplePosition(transform.position + randomOffset, out NavMeshHit navHit, AreaSize, SampleNavigationArea);
Note that I simply reverted back to the original state described above for the GetDirectionOfTarget method.
How do I get a random vector in the general direction of another?
Note: If there's anything I can do to clarify my question, or otherwise improve it for future readers, please leave a comment and let me know. I will gladly adjust my question as needed.
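One common approach that keeps the randomness aimed at the target is to rotate the exact to-target direction by a bounded random angle, rather than assembling each component separately. A rough Unity sketch, reusing the Target and AreaSize names from the code above (maxAngleDegrees is made up for illustration):

Vector3 toTarget = (Target.transform.position - transform.position).normalized;
float maxAngleDegrees = 45f;   // half-angle of the allowed cone around the target direction

// Spin the to-target direction by a random yaw around the up axis, limited to the cone.
Quaternion wobble = Quaternion.AngleAxis(UnityEngine.Random.Range(-maxAngleDegrees, maxAngleDegrees), Vector3.up);
Vector3 wanderDirection = wobble * toTarget;

// Pick a point a random distance along that direction.
Vector3 destination = transform.position + wanderDirection * UnityEngine.Random.Range(0f, AreaSize);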

Triangle.NET - How to add vertex to existing triangulation?

I've looked through what seems like every question and resource there is for Triangle.NET trying to find an answer to how to insert a vertex into an existing triangulation. The closest I've gotten was in the discussion archives for Triangle.Net where someone asked a similar question (discussion id 632458), but unfortunately the answer was not what I was looking for.
My goal here is to make a destructible wall in Unity where, when the player shoots the wall, it will create a hole in the wall (like in Rainbow Six Siege).
Here's what I did for my original implementation:
Create initial triangulation using the four corners of the wall.
When the player shoots, perform a raycast, if the raycast intersects with the wall then add the point of intersection to the polygon variable and re-triangulate the entire mesh using that variable.
Draw new triangulation on the wall as a texture to visualise what's happening.
Repeat.
As you can see, step 2 is the problem.
Because I re-triangulate the entire mesh every time the player hits the wall, the more times the player hits the wall the slower the triangulation gets as the number of vertices rises. This could be fine I guess, but I want destructible walls to play a major role in my game so this is not acceptable.
So, digging through the Triangle.Net source code, I find an internal method called InsertVertex. The summary for this method states:
Insert a vertex into a Delaunay triangulation, performing flips as necessary to maintain the Delaunay property.
This would mean I wouldn't have to re-triangulate every time the player shoots!
So I get to implementing this method, and...it doesn't work. I get an error like the one below:
NullReferenceException: Object reference not set to an instance of an object
TriangleNet.TriangleLocator.PreciseLocate (TriangleNet.Geometry.Point searchpoint, TriangleNet.Topology.Otri& searchtri, System.Boolean stopatsubsegment) (at Assets/Triangle.NET/TriangleLocator.cs:146)
I have been stuck on this problem for days and I cannot solve it for the life of me! If anyone who is knowledgeable enough with the Triangle.NET library would be willing to help me I would be so grateful! Along with that, if there is a better alternative to either the implementation or library I'm using (for my purpose which I outlined above) that would also be awesome!
Currently, how I've set up the scene is really simple: I just have a quad which I scaled up and added the script below to as a component. I then linked that component to a shoot-raycast script attached to the Main Camera:
(Screenshot: how the scene is set up.)
(Screenshot: what it looks like in Play Mode.)
The exact Triangle.Net repo I cloned is this one.
My code is posted below:
using UnityEngine;
using TriangleNet.Geometry;
using TriangleNet.Topology;
using TriangleNet.Meshing;

public class Delaunay : MonoBehaviour
{
    [SerializeField]
    private int randomPoints = 150;
    [SerializeField]
    private int width = 512;
    [SerializeField]
    private int height = 512;

    private TriangleNet.Mesh mesh;
    Polygon polygon = new Polygon();
    Otri otri = default(Otri);
    Osub osub = default(Osub);
    ConstraintOptions constraintOptions = new ConstraintOptions() { ConformingDelaunay = true };
    QualityOptions qualityOptions = new QualityOptions() { MinimumAngle = 25 };

    void Start()
    {
        osub.seg = null;
        Mesh objMesh = GetComponent<MeshFilter>().mesh;

        // Add four corners of wall (quad in this case) to polygon.
        //foreach (Vector3 vert in objMesh.vertices)
        //{
        //    Vector2 temp = new Vector2();
        //    temp.x = map(vert.x, -0.5f, 0.5f, 0, 512);
        //    temp.y = map(vert.y, -0.5f, 0.5f, 0, 512);
        //    polygon.Add(new Vertex(temp.x, temp.y));
        //}

        // Generate random points and add to polygon.
        for (int i = 0; i < randomPoints; i++)
        {
            polygon.Add(new Vertex(Random.Range(0.0f, width), Random.Range(0.0f, height)));
        }

        // Triangulate polygon.
        delaunayTriangulation();
    }

    // When left click is pressed, a raycast is sent out. If that raycast hits the wall, updatePoints() is called and is passed in the location of the hit (hit.point).
    public void updatePoints(Vector3 pos)
    {
        // Convert pos to local coords of wall.
        pos = transform.InverseTransformPoint(pos);
        Vertex newVert = new Vertex(pos.x, pos.y);

        //// Give new vertex a unique id.
        //if (mesh != null)
        //{
        //    newVert.id = mesh.NumberOfInputPoints;
        //}

        // Insert new vertex into existing triangulation.
        otri.tri = mesh.dummytri;
        mesh.InsertVertex(newVert, ref otri, ref osub, false, false);

        // Draw result as a texture onto the wall so to visualise what is happening.
        draw();
    }

    private void delaunayTriangulation()
    {
        mesh = (TriangleNet.Mesh)polygon.Triangulate(constraintOptions, qualityOptions);
        draw();
    }

    void draw()
    {
        Texture2D tx = new Texture2D(width, height);

        // Draw triangulation.
        if (mesh.Edges != null)
        {
            foreach (Edge edge in mesh.Edges)
            {
                Vertex v0 = mesh.vertices[edge.P0];
                Vertex v1 = mesh.vertices[edge.P1];
                DrawLine(new Vector2((float)v0.x, (float)v0.y), new Vector2((float)v1.x, (float)v1.y), tx, Color.black);
            }
        }
        tx.Apply();
        this.GetComponent<Renderer>().sharedMaterial.mainTexture = tx;
    }

    // Bresenham line algorithm
    private void DrawLine(Vector2 p0, Vector2 p1, Texture2D tx, Color c, int offset = 0)
    {
        int x0 = (int)p0.x;
        int y0 = (int)p0.y;
        int x1 = (int)p1.x;
        int y1 = (int)p1.y;
        int dx = Mathf.Abs(x1 - x0);
        int dy = Mathf.Abs(y1 - y0);
        int sx = x0 < x1 ? 1 : -1;
        int sy = y0 < y1 ? 1 : -1;
        int err = dx - dy;
        while (true)
        {
            tx.SetPixel(x0 + offset, y0 + offset, c);
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 > -dy)
            {
                err -= dy;
                x0 += sx;
            }
            if (e2 < dx)
            {
                err += dx;
                y0 += sy;
            }
        }
    }

    private float map(float from, float fromMin, float fromMax, float toMin, float toMax)
    {
        float fromAbs = from - fromMin;
        float fromMaxAbs = fromMax - fromMin;
        float normal = fromAbs / fromMaxAbs;
        float toMaxAbs = toMax - toMin;
        float toAbs = toMaxAbs * normal;
        float to = toAbs + toMin;
        return to;
    }
}
Great news! I've managed to fix the issue. InsertVertex() doesn't actually add the new vertex to the list of vertices! So this means that when it tried to triangulate, it was trying to point to the new vertex but it couldn't (because that vertex wasn't in the list). So, to solve this, I just manually add my new vertex to the list of vertices in the mesh, before calling InsertVertex(). Note: When you do this, you also need to manually set the vertex's id. I set the id to the size of the list of vertices because I was adding all new vertices to the end of the list.
// When left click is pressed, a raycast is sent out. If that raycast hits the wall, updatePoints() is called and is passed in the location of the hit (hit.point).
public void updatePoints(Vector3 pos)
{
    // Convert pos to local coords of wall. You don't need to do this; I do it because of my draw() method, where I map everything out onto a texture and display it.
    pos = transform.InverseTransformPoint(pos);
    pos.x = map(pos.x, -0.5f, 0.5f, 0, 512);
    pos.y = map(pos.y, -0.5f, 0.5f, 0, 512);
    Vertex newVert = new Vertex(pos.x, pos.y);

    // Manually add new vertex to list of vertices.
    newVert.id = mesh.vertices.Count;
    mesh.vertices.Add(newVert.id, newVert);

    // Doing just the first line gave me a null pointer exception. Adding the two extra lines below it fixed it for me.
    otri.tri = mesh.dummytri;
    otri.orient = 0;
    otri.Sym();

    // Insert new vertex into existing triangulation.
    mesh.InsertVertex(newVert, ref otri, ref osub, false, false);

    // Draw result as a texture onto the wall so to visualise what is happening.
    draw();
}
Hope this will help someone down the road!

Can't seem to get smooth 3D collision XNA

I am making a 3D game. The player is the "camera". I want it not to go through walls, which is achieved. But now I want it to be able to "glide" along the wall as in any other FPS. Here is the code (thanks in advance):
protected override void Update(GameTime gameTime)
{
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape))
        Exit();

    float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
    keyState = Keyboard.GetState();
    camera.Update(gameTime);

    if (keyState.IsKeyDown(Keys.W)) camera.moveVector.Z = 1;
    if (keyState.IsKeyDown(Keys.S)) camera.moveVector.Z = -1;
    if (keyState.IsKeyDown(Keys.A)) camera.moveVector.X = 1;
    if (keyState.IsKeyDown(Keys.D)) camera.moveVector.X = -1;
    if (keyState.IsKeyDown(Keys.Space) && camera.Position.Y >= 0.5f) camera.moveVector.Y = 0.5f;

    if (camera.moveVector != Vector3.Zero)
    {
        // We don't want to make the player move faster when it is going diagonally.
        camera.moveVector.Normalize();
        // Now we add the smoothing factor and speed factor.
        camera.moveVector *= (dt * camera.cameraSpeed);

        Vector3 newPosition = camera.PreviewMove(camera.moveVector);
        bool moveTrue = true;
        if (newPosition.X < 0 || newPosition.X > Map.mazeWidth) moveTrue = false;
        if (newPosition.Z < 0 || newPosition.Z > Map.mazeHeight) moveTrue = false;
        foreach (BoundingBox boxes in map.GetBoundsForCell((int)newPosition.X, (int)newPosition.Z))
        {
            if (boxes.Contains(newPosition) == ContainmentType.Contains)
            {
                moveTrue = false;
            }
        }
        if (moveTrue) camera.Move(camera.moveVector);
    }

    base.Update(gameTime);
}
And here is the code for excecuting the movement:
// Updating the look-at vector
public void UpdateLookAt()
{
    // Build a rotation matrix to rotate the direction we are looking
    Matrix rotationMatrix = Matrix.CreateRotationX(cameraRotation.X) * Matrix.CreateRotationY(cameraRotation.Y);
    // Build a look-at offset vector
    Vector3 lookAtOffset = Vector3.Transform(Vector3.UnitZ, rotationMatrix);
    // Update our camera's look-at vector
    cameraLookAt = (cameraPosition + lookAtOffset);
}

// Method to create movement and to check if it can move :)
public Vector3 PreviewMove(Vector3 amount)
{
    // Create a rotation matrix to move the camera
    Matrix rotate = Matrix.CreateRotationY(cameraRotation.Y);
    // Create the vector for movement
    Vector3 movement = new Vector3(amount.X, amount.Y, amount.Z);
    movement = Vector3.Transform(movement, rotate);
    // Return the camera position plus the movement
    return (cameraPosition + movement);
}

// Method that moves the camera when it hasn't collided with anything
public void Move(Vector3 scale)
{
    // Move to the location
    MoveTo(PreviewMove(scale), Rotation);
}
I already thought of using the invert method given by XNA, but I can't seem to find the normal. I have also tried to move the camera parallel to the wall, but I was unable to achieve that. Any help is appreciated.
If you have found that a point intersects a bounding box, you have to check which of the six faces the entry point lies in. This can be done as follows: construct a line segment between the old camera position and the new one:
p = (1 - t) * oldPos + t * newPos
where you use only the dimension of oldPos and newPos that is relevant for the face (e.g. for the left/right faces, take the x-coordinate), and p is the corresponding coordinate of the face plane. Calculate t for every face and find the face for which t is maximal, ignoring faces behind which the point already lies (i.e. where the dot product of the face normal and the direction from the face to the point is negative). That is the face your entry point lies in. All you then need to do is adapt the relevant coordinate (again the x-coordinate for the left/right faces, etc.) so that it no longer lies within the bounds (e.g. set newPos.x = boundingBox.MaxX for the right face). This is a projection of the point onto the bounding box surface, and it is equivalent to using only the component of the movement vector that is parallel to the box when an intersection would occur.
Btw, the solution of the above formula is:
t = (p - oldPos) / (newPos - oldPos)
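As a loose C# sketch of the t computation above for one axis of an axis-aligned box (x shown; y and z work the same way; the helper name is made up for illustration):

// t is how far along the segment oldPos -> newPos the path crosses the given face plane.
static float TForPlane(float plane, float oldCoord, float newCoord)
{
    float delta = newCoord - oldCoord;
    if (Math.Abs(delta) < 1e-6f)
        return float.NegativeInfinity;   // moving parallel to this face plane, so it can't be the entry face
    return (plane - oldCoord) / delta;
}

// Usage idea: compute t for all six planes of the BoundingBox, keep the largest t among the faces
// the old position was still in front of, and clamp only that coordinate, e.g. for the +X face:
//     newPosition.X = box.Max.X;
// The untouched components of the movement are what make the camera glide along the wall.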

Meshes starting to jump on camera rotation/movement

Hi everyone,
first time posting here, because I'm damn stuck...
The further away a mesh is from the origin at (0, 0, 0), the more it "jumps"/"flickers" when rotating or moving the camera. It's somewhat hard to describe this effect: it is as if the mesh is jittering/shivering/trembling a little bit, and this trembling gets bigger and bigger as it gains distance from the origin.
For me, it begins to be observable at around 100,000 units distance from the origin, so at (0, 0, 100000) for example. Neither the axis of the translation nor the type of the mesh (a default mesh created from Mesh.Create... or a 3ds mesh imported with assimp.NET) has any influence on this effect. The value of the mesh's position doesn't change when this effect occurs; I checked this by logging the position.
If I'm not missing something, this narrows it down to two possibilities:
My camera code
The DirectX-Device
As for the DirectX-Device, this is my device initialization code:
private void InitializeDevice()
{
    // Initialize D3D
    _d3dObj = new D3D9.Direct3D();

    // Set presentation parameters
    _presParams = new D3D9.PresentParameters();
    _presParams.Windowed = true;
    _presParams.SwapEffect = D3D9.SwapEffect.Discard;
    _presParams.AutoDepthStencilFormat = D3D9.Format.D16;
    _presParams.EnableAutoDepthStencil = true;
    _presParams.PresentationInterval = D3D9.PresentInterval.One;
    _presParams.BackBufferFormat = _d3dObj.Adapters.DefaultAdapter.CurrentDisplayMode.Format;
    _presParams.BackBufferHeight = _d3dObj.Adapters.DefaultAdapter.CurrentDisplayMode.Height;
    _presParams.BackBufferWidth = _d3dObj.Adapters.DefaultAdapter.CurrentDisplayMode.Width;

    // Set form width and height to the current backbuffer width and height
    this.Width = _presParams.BackBufferWidth;
    this.Height = _presParams.BackBufferHeight;

    // Checking device capabilities
    D3D9.Capabilities caps = _d3dObj.GetDeviceCaps(0, D3D9.DeviceType.Hardware);
    D3D9.CreateFlags devFlags = D3D9.CreateFlags.SoftwareVertexProcessing;
    D3D9.DeviceType devType = D3D9.DeviceType.Reference;

    // Setting device flags according to device capabilities
    if ((caps.VertexShaderVersion >= new Version(2, 0)) && (caps.PixelShaderVersion >= new Version(2, 0)))
    {
        // If the device supports vertex shaders and pixel shaders >= 2.0,
        // then use the hardware device
        devType = D3D9.DeviceType.Hardware;
        if (caps.DeviceCaps.HasFlag(D3D9.DeviceCaps.HWTransformAndLight))
        {
            devFlags = D3D9.CreateFlags.HardwareVertexProcessing;
        }
        if (caps.DeviceCaps.HasFlag(D3D9.DeviceCaps.PureDevice))
        {
            devFlags |= D3D9.CreateFlags.PureDevice;
        }
    }

    // Initialize the device
    _device = new D3D9.Device(_d3dObj, 0, devType, this.Handle, devFlags, _presParams);

    // Set culling
    _device.SetRenderState(D3D9.RenderState.CullMode, D3D9.Cull.Counterclockwise);
    // Set texture wrapping (needed for seamless sphere mapping)
    _device.SetRenderState(D3D9.RenderState.Wrap0, D3D9.TextureWrapping.All);
    // Set lighting
    _device.SetRenderState(D3D9.RenderState.Lighting, false);
    // Enable the z-buffer
    _device.SetRenderState(D3D9.RenderState.ZEnable, D3D9.ZBufferType.UseZBuffer);
    // ...and set write access explicitly to true.
    // I'm a little paranoid about this since I had to struggle for a few days with weirdly overlapping meshes.
    _device.SetRenderState(D3D9.RenderState.ZWriteEnable, true);
}
Am I missing a flag or renderstate? Is there something that could cause such a weird/distorted behaviour?
My camera class is based on Michael Silverman's C++ Quaternion Camera:
// Every variable prefixed with an underscore is
// a private static variable initialized beforehand
public static class Camera
{
    // Gets called every frame
    public static void Update()
    {
        if (_filter)
        {
            _filteredPos = Vector3.Lerp(_filteredPos, _pos, _filterAlpha);
            _filteredRot = Quaternion.Slerp(_filteredRot, _rot, _filterAlpha);
        }
        _device.SetTransform(D3D9.TransformState.Projection, Matrix.PerspectiveFovLH(_fov, _screenAspect, _nearClippingPlane, _farClippingPlane));
        _device.SetTransform(D3D9.TransformState.View, GetViewMatrix());
    }

    public static void Move(Vector3 delta)
    {
        _pos += delta;
    }

    public static void RotationYaw(float theta)
    {
        _rot = Quaternion.Multiply(Quaternion.RotationAxis(_up, -theta), _rot);
    }

    public static void RotationPitch(float theta)
    {
        _rot = Quaternion.Multiply(_rot, Quaternion.RotationAxis(_right, theta));
    }

    public static void SetTarget(Vector3 target, Vector3 up)
    {
        SetPositionAndTarget(_pos, target, up);
    }

    public static void SetPositionAndTarget(Vector3 position, Vector3 target, Vector3 upVec)
    {
        _pos = position;
        Vector3 up, right, lookAt = target - _pos;
        lookAt = Vector3.Normalize(lookAt);
        right = Vector3.Cross(upVec, lookAt);
        right = Vector3.Normalize(right);
        up = Vector3.Cross(lookAt, right);
        up = Vector3.Normalize(up);
        SetAxis(lookAt, up, right);
    }

    public static void SetAxis(Vector3 lookAt, Vector3 up, Vector3 right)
    {
        Matrix rot = Matrix.Identity;
        rot.M11 = right.X;
        rot.M12 = up.X;
        rot.M13 = lookAt.X;
        rot.M21 = right.Y;
        rot.M22 = up.Y;
        rot.M23 = lookAt.Y;
        rot.M31 = right.Z;
        rot.M32 = up.Z;
        rot.M33 = lookAt.Z;
        _rot = Quaternion.RotationMatrix(rot);
    }

    public static void ViewScene(BoundingSphere sphere)
    {
        SetPositionAndTarget(sphere.Center - new Vector3((sphere.Radius + 150) / (float)Math.Sin(_fov / 2), 0, 0), sphere.Center, new Vector3(0, 1, 0));
    }

    public static Vector3 GetLookAt()
    {
        Matrix rot = Matrix.RotationQuaternion(_rot);
        return new Vector3(rot.M13, rot.M23, rot.M33);
    }

    public static Vector3 GetRight()
    {
        Matrix rot = Matrix.RotationQuaternion(_rot);
        return new Vector3(rot.M11, rot.M21, rot.M31);
    }

    public static Vector3 GetUp()
    {
        Matrix rot = Matrix.RotationQuaternion(_rot);
        return new Vector3(rot.M12, rot.M22, rot.M32);
    }

    public static Matrix GetViewMatrix()
    {
        Matrix viewMatrix, translation = Matrix.Identity;
        Vector3 position;
        Quaternion rotation;
        if (_filter)
        {
            position = _filteredPos;
            rotation = _filteredRot;
        }
        else
        {
            position = _pos;
            rotation = _rot;
        }
        translation = Matrix.Translation(-position.X, -position.Y, -position.Z);
        viewMatrix = Matrix.Multiply(translation, Matrix.RotationQuaternion(rotation));
        return viewMatrix;
    }
}
Do you spot anything in the camera code which could cause this behaviour?
I just can't imagine that DirectX can't handle distances greater than 100k. I am supposed to render solar systems and I'm using 1 unit = 1 km, so the Earth would be rendered at its maximum distance from the sun at (0, 0, 152100000), just as an example. This is becoming impossible if these "jumps" keep occurring.
Finally, I thought about scaling everything down so that a system never goes beyond 100k/-100k distance from the origin, but I don't think this will work: the "jittering" gets bigger as the distance from the origin gets bigger, so scaling everything down would, I think, just scale down the jumping behaviour too.
Just so as not to leave this question unanswered (credits to #jcoder, see the comments on the question):
The weird behaviour of the meshes comes from the floating-point precision of DX. The bigger your world gets, the less precision there is to calculate positions accurately.
There are two possibilities to solve this problem:
Downscaling the whole world: this may be problematic in a "galactic-style" world, where you have really big position offsets as well as really small ones (i.e. the distance from a planet to its sun is really big, but the distance of a spaceship in orbit of a planet may be really small).
Dividing the world into smaller chunks: this way you either have to express all positions relative to something else (see stackoverflow.com/questions/1930421) or make multiple worlds and somehow move between them.
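A common variant of the "relative positions" idea is a camera-relative (floating) origin: keep the logical positions in doubles and only convert the offset from the camera to floats right before building the world/view matrices, so the values the GPU sees stay small. A minimal sketch, not tied to the poster's D3D9 wrapper (Vector3 here stands for whatever float vector type the renderer uses):

// Logical world positions stay in doubles so precision does not degrade far from the origin.
struct WorldPosition
{
    public double X, Y, Z;
}

// Right before rendering, express the object relative to the camera and only then drop to float.
static Vector3 ToRenderSpace(WorldPosition obj, WorldPosition camera)
{
    return new Vector3(
        (float)(obj.X - camera.X),
        (float)(obj.Y - camera.Y),
        (float)(obj.Z - camera.Z));
}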
