Triangle.NET - How to add vertex to existing triangulation? - c#

I've looked through what seems like every question and resource there is for Triangle.NET trying to find an answer to how to insert a vertex into an existing triangulation. The closest I've gotten was in the discussion archives for Triangle.Net, where someone asked a similar question (discussion id 632458), but unfortunately the answer was not what I was looking for.
My goal here is to make a destructible wall in Unity where, when the player shoots the wall, it will create a hole in the wall (like in Rainbow Six Siege).
Here's what I did for my original implementation:
Create initial triangulation using the four corners of the wall.
When the player shoots, perform a raycast; if the raycast intersects the wall, add the point of intersection to the polygon variable and re-triangulate the entire mesh from that variable.
Draw new triangulation on the wall as a texture to visualise what's happening.
Repeat.
As you can see, step 2 is the problem.
Because I re-triangulate the entire mesh every time the player hits the wall, triangulation gets slower with every hit as the number of vertices grows: a full re-triangulation costs roughly O(n log n) per shot, so the total work over many shots grows much faster than linearly. That might be fine for an occasional effect, but I want destructible walls to play a major role in my game, so this is not acceptable.
So, digging through the Triangle.Net source code, I find an internal method called InsertVertex. The summary for this method states:
Insert a vertex into a Delaunay triangulation, performing flips as necessary to maintain the Delaunay property.
This would mean I wouldn't have to re-triangulate every time the player shoots!
So I got to implementing this method, and... it doesn't work. I get an error like the one below:
NullReferenceException: Object reference not set to an instance of an object
TriangleNet.TriangleLocator.PreciseLocate (TriangleNet.Geometry.Point searchpoint, TriangleNet.Topology.Otri& searchtri, System.Boolean stopatsubsegment) (at Assets/Triangle.NET/TriangleLocator.cs:146)
I have been stuck on this problem for days and cannot solve it for the life of me! If anyone knowledgeable about the Triangle.NET library would be willing to help, I would be so grateful! And if there is a better alternative, either to my implementation or to the library I'm using (for the purpose I outlined above), that would also be awesome!
Currently, how I've set up the scene is really simple: I just have a quad which I scaled up, with the script below added to it as a component. I then linked that component to a shoot-raycast script attached to the Main Camera:
[Screenshot: how the scene is set up.]
[Screenshot: what it looks like in Play Mode.]
The exact Triangle.Net repo I cloned is this one.
My code is posted below:
using UnityEngine;
using TriangleNet.Geometry;
using TriangleNet.Topology;
using TriangleNet.Meshing;

public class Delaunay : MonoBehaviour
{
    [SerializeField]
    private int randomPoints = 150;
    [SerializeField]
    private int width = 512;
    [SerializeField]
    private int height = 512;

    private TriangleNet.Mesh mesh;
    Polygon polygon = new Polygon();
    Otri otri = default(Otri);
    Osub osub = default(Osub);
    ConstraintOptions constraintOptions = new ConstraintOptions() { ConformingDelaunay = true };
    QualityOptions qualityOptions = new QualityOptions() { MinimumAngle = 25 };

    void Start()
    {
        osub.seg = null;
        Mesh objMesh = GetComponent<MeshFilter>().mesh;

        // Add four corners of wall (quad in this case) to polygon.
        //foreach (Vector3 vert in objMesh.vertices)
        //{
        //    Vector2 temp = new Vector2();
        //    temp.x = map(vert.x, -0.5f, 0.5f, 0, 512);
        //    temp.y = map(vert.y, -0.5f, 0.5f, 0, 512);
        //    polygon.Add(new Vertex(temp.x, temp.y));
        //}

        // Generate random points and add to polygon.
        for (int i = 0; i < randomPoints; i++)
        {
            polygon.Add(new Vertex(Random.Range(0.0f, width), Random.Range(0.0f, height)));
        }

        // Triangulate polygon.
        delaunayTriangulation();
    }

    // When left click is pressed, a raycast is sent out. If that raycast hits the wall,
    // updatePoints() is called and is passed the location of the hit (hit.point).
    public void updatePoints(Vector3 pos)
    {
        // Convert pos to local coords of wall.
        pos = transform.InverseTransformPoint(pos);
        Vertex newVert = new Vertex(pos.x, pos.y);

        //// Give new vertex a unique id.
        //if (mesh != null)
        //{
        //    newVert.id = mesh.NumberOfInputPoints;
        //}

        // Insert new vertex into existing triangulation.
        otri.tri = mesh.dummytri;
        mesh.InsertVertex(newVert, ref otri, ref osub, false, false);

        // Draw result as a texture onto the wall to visualise what is happening.
        draw();
    }

    private void delaunayTriangulation()
    {
        mesh = (TriangleNet.Mesh)polygon.Triangulate(constraintOptions, qualityOptions);
        draw();
    }

    void draw()
    {
        Texture2D tx = new Texture2D(width, height);

        // Draw triangulation.
        if (mesh.Edges != null)
        {
            foreach (Edge edge in mesh.Edges)
            {
                Vertex v0 = mesh.vertices[edge.P0];
                Vertex v1 = mesh.vertices[edge.P1];
                DrawLine(new Vector2((float)v0.x, (float)v0.y), new Vector2((float)v1.x, (float)v1.y), tx, Color.black);
            }
        }
        tx.Apply();
        this.GetComponent<Renderer>().sharedMaterial.mainTexture = tx;
    }

    // Bresenham line algorithm
    private void DrawLine(Vector2 p0, Vector2 p1, Texture2D tx, Color c, int offset = 0)
    {
        int x0 = (int)p0.x;
        int y0 = (int)p0.y;
        int x1 = (int)p1.x;
        int y1 = (int)p1.y;
        int dx = Mathf.Abs(x1 - x0);
        int dy = Mathf.Abs(y1 - y0);
        int sx = x0 < x1 ? 1 : -1;
        int sy = y0 < y1 ? 1 : -1;
        int err = dx - dy;

        while (true)
        {
            tx.SetPixel(x0 + offset, y0 + offset, c);
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 > -dy)
            {
                err -= dy;
                x0 += sx;
            }
            if (e2 < dx)
            {
                err += dx;
                y0 += sy;
            }
        }
    }

    // Linearly remaps a value from one range to another.
    private float map(float from, float fromMin, float fromMax, float toMin, float toMax)
    {
        float fromAbs = from - fromMin;
        float fromMaxAbs = fromMax - fromMin;
        float normal = fromAbs / fromMaxAbs;
        float toMaxAbs = toMax - toMin;
        float toAbs = toMaxAbs * normal;
        float to = toAbs + toMin;
        return to;
    }
}

Great news! I've managed to fix the issue. InsertVertex() doesn't actually add the new vertex to the mesh's list of vertices! This means that when the library tried to triangulate, it referenced the new vertex but couldn't find it (because that vertex wasn't in the list). To solve this, I manually add my new vertex to the mesh's vertex list before calling InsertVertex(). Note: when you do this, you also need to manually set the vertex's id. I set the id to the size of the vertex list, because I always add new vertices to the end of the list.
// When left click is pressed, a raycast is sent out. If that raycast hits the wall,
// updatePoints() is called and is passed the location of the hit (hit.point).
public void updatePoints(Vector3 pos)
{
    // Convert pos to local coords of wall. You don't need to do this; I do it because of my
    // draw() method, where I map everything onto a texture and display it.
    pos = transform.InverseTransformPoint(pos);
    pos.x = map(pos.x, -0.5f, 0.5f, 0, 512);
    pos.y = map(pos.y, -0.5f, 0.5f, 0, 512);
    Vertex newVert = new Vertex(pos.x, pos.y);

    // Manually add the new vertex to the mesh's list of vertices.
    newVert.id = mesh.vertices.Count;
    mesh.vertices.Add(newVert.id, newVert);

    // Doing just the first line gave me a NullReferenceException.
    // Adding the two extra lines below it fixed it for me.
    otri.tri = mesh.dummytri;
    otri.orient = 0;
    otri.Sym();

    // Insert new vertex into existing triangulation.
    mesh.InsertVertex(newVert, ref otri, ref osub, false, false);

    // Draw result as a texture onto the wall to visualise what is happening.
    draw();
}
Hope this helps someone down the road!
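For completeness, here is the kind of shooter script the question references (a minimal reconstruction on my part; the original raycast script is not shown in the post, so the class and field names are placeholders):

using UnityEngine;

// Attached to the Main Camera; on left click, raycasts into the scene and
// forwards the hit point to the wall's Delaunay component.
public class ShootRaycast : MonoBehaviour
{
    [SerializeField] private Delaunay wall; // linked in the Inspector

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                // Only forward hits on the wall itself.
                if (hit.collider.gameObject == wall.gameObject)
                {
                    wall.updatePoints(hit.point);
                }
            }
        }
    }
}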

Related

K-means centroids end up the same

I am trying to implement K-means in Unity to cluster randomly spawned assets around a terrain. Once the K-means operation completes, I instantiate the final centroid positions, instantiate a Capsule at every location belonging to a cluster, and parent those Capsules to the centroid so I can see and understand the final clustering result. The problem is that once it has spawned all the relevant locations, it begins to spawn more and more capsules that do not parent to any cluster, and I'm not sure why.
Also, the centroids do not seem to select unique random positions from my list of data points; when inspecting the positions of the final centroids, they all seem to be in the same place. If I set k = 4, the 4th centroid never seems to spawn. I am struggling to find out where I am going wrong and would appreciate any insight.
Script for K-means clustering + Spawning assets (done in same script)
K-means.cs
Calculating initial centroid position
Vector3 CentroidPos()
{
    var random = new System.Random();
    var pos = assetSpawnLocations[random.Next(assetSpawnLocations.Count)];
    if (centroidsInUse.Contains(pos))
    {
        return CentroidPos();
    }
    else
    {
        return pos;
    }
}
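Two things stand out in CentroidPos() above (my observations, not from the original thread): the chosen position is never added to centroidsInUse, so the Contains check can never reject anything and several centroids can start at the same point; and a new System.Random is constructed on every call, which on older .NET runtimes is seeded from the clock and can return the same value when called in quick succession. A minimal sketch of the fix, assuming centroidsInUse is a List<Vector3> on the same script:

// A single shared RNG: creating a new System.Random per call can repeat values
// when the calls fall within the same clock tick (pre-.NET Core behaviour).
static readonly System.Random random = new System.Random();

Vector3 CentroidPos()
{
    var pos = assetSpawnLocations[random.Next(assetSpawnLocations.Count)];
    if (centroidsInUse.Contains(pos))
    {
        return CentroidPos(); // retry; assumes there are more unique locations than centroids
    }
    centroidsInUse.Add(pos); // record the choice so later centroids cannot pick it again
    return pos;
}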
// Using the function in the Clustering method
void Clustering()
{
    Vector3 centroid1 = CentroidPos();
    Vector3 centroid2 = CentroidPos();
    Vector3 centroid3 = CentroidPos();
    Vector3 centroid4 = CentroidPos();
    //...
}
Recalculating the centroid position after data points have been assigned to a cluster
Vector3 RecalculateCentroid(List<Vector3> Data)
{
    float x = 0.0f;
    float y = 0.0f;
    float z = 0.0f;
    for (int i = 0; i < Data.Count; i++)
    {
        x += Data[i].x;
        y += Data[i].y;
        z += Data[i].z;
    }
    return new Vector3(x / Data.Count, y / Data.Count, z / Data.Count);
}
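One defensive tweak worth considering (my suggestion, also not from the thread): if a cluster receives no points in an iteration, Data.Count is 0 and the division above produces NaN components, which would silently place that centroid nowhere visible. A guarded variant, assuming the caller can supply the previous centroid as a fallback:

Vector3 RecalculateCentroid(List<Vector3> data, Vector3 previousCentroid)
{
    // Keep the old centroid if no points were assigned this iteration;
    // otherwise 0 / 0 yields a Vector3 full of NaNs.
    if (data == null || data.Count == 0)
    {
        return previousCentroid;
    }
    Vector3 sum = Vector3.zero;
    for (int i = 0; i < data.Count; i++)
    {
        sum += data[i];
    }
    return sum / data.Count;
}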
Spawning the last centroid after K iterations, plus the relevant data point positions, as Capsules
// Spawn the last cluster centre and all its points.
void SpawnResult(List<Vector3> cluster, Vector3 lastCentroid)
{
    if (cluster != null && lastCentroid != null && cluster.Count > 0)
    {
        var c = Instantiate(GameObject.CreatePrimitive(PrimitiveType.Cylinder), lastCentroid, Quaternion.identity);
        c.name = "Centroid";
        for (int i = 0; i < cluster.Count; i++)
        {
            var item = Instantiate(GameObject.CreatePrimitive(PrimitiveType.Capsule), cluster[i], Quaternion.identity);
            item.transform.parent = c.transform;
        }
    }
}
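The stray capsules are likely explained by this method (my reading of the snippet, not from the original post): GameObject.CreatePrimitive() already places a new object in the scene, and passing that object to Instantiate() then spawns a second copy, so every call leaves one unparented primitive behind at its default position. Creating the primitive once and positioning it directly avoids the duplicates; a sketch:

// Spawn the last cluster centre and all its points, without the duplicate
// objects that Instantiate(GameObject.CreatePrimitive(...)) produces.
void SpawnResult(List<Vector3> cluster, Vector3 lastCentroid)
{
    if (cluster != null && cluster.Count > 0)
    {
        var c = GameObject.CreatePrimitive(PrimitiveType.Cylinder); // already in the scene
        c.name = "Centroid";
        c.transform.position = lastCentroid;
        for (int i = 0; i < cluster.Count; i++)
        {
            var item = GameObject.CreatePrimitive(PrimitiveType.Capsule);
            item.transform.position = cluster[i];
            item.transform.parent = c.transform;
        }
    }
}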
I am using DrawGizmos to place cubes with a certain colour depending on the cluster they belong to, but it only seems to display yellow cubes, which is the colour for cluster3 (see the full script for the DrawGizmos function implementation). [A small section of the map is shown so the yellow cubes are clearly visible, but it is the same across the whole map, with no colour change for the other clusters.]
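The DrawGizmos implementation itself is not included in the excerpt; for reference, such a routine usually looks roughly like the sketch below (cluster1..cluster4 are my placeholder fields). Note that if every point ends up assigned to cluster3, which is exactly what identical centroids would cause, only the yellow cubes would ever draw:

void OnDrawGizmos()
{
    // One colour per cluster; a cube is drawn at each data point's position.
    DrawCluster(cluster1, Color.red);
    DrawCluster(cluster2, Color.green);
    DrawCluster(cluster3, Color.yellow);
    DrawCluster(cluster4, Color.blue);
}

void DrawCluster(List<Vector3> cluster, Color colour)
{
    if (cluster == null) return;
    Gizmos.color = colour;
    foreach (var p in cluster)
    {
        Gizmos.DrawCube(p, Vector3.one);
    }
}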

How to Spawn Objects in Unity3D with a Minimum Distance between

I am programming a random "Stone" spawner and have a big problem at the moment. I have some ideas for how to fix it, but I want to know a performance-friendly way to do it.
This is how I currently spawn the objects on the sphere's surface:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class SpawnObjects : MonoBehaviour
{
    public Vector3 centerOfSphere = new Vector3(0, 5000, 0);
    public float radiusOfSphere = 5000.0f;
    public List<GameObject> stones;
    public int stonesToSpawn = 1;

    void Start()
    {
        Vector3 center = transform.position + centerOfSphere; // Setting the center position of the Sphere

        // This for loop spawns the Stones at random positions on the Sphere Surface
        for (int i = 0; i < stonesToSpawn; i++)
        {
            Vector3 pos = RandomCircle(center, radiusOfSphere);
            Quaternion rot = Quaternion.FromToRotation(Vector3.forward, center - pos);
            Instantiate(stones[Random.Range(0, stones.Count)], pos, rot);
        }
    }

    // Method returns a random position on a sphere.
    // (Note: alpha and beta are in degrees here, but Mathf.Cos/Sin expect radians,
    // and the center parameter is never applied to the result.)
    Vector3 RandomCircle(Vector3 center, float radius)
    {
        float alpha = UnityEngine.Random.value * 360;
        float beta = UnityEngine.Random.value * 360;
        Vector3 pos;
        pos.x = radius * Mathf.Cos(beta) * Mathf.Cos(alpha);
        pos.y = radius * Mathf.Cos(beta) * Mathf.Sin(alpha);
        pos.z = radius * Mathf.Sin(beta);
        return pos;
    }
}
So, thank you in advance for your explanations! :)
As said, to make your life one step easier, simply use Random.onUnitSphere; your entire RandomCircle method (change that name, btw!) can then be shrunk to
private Vector3 RandomOnSphere(Vector3 center, float radius)
{
    return center + Random.onUnitSphere * radius;
}
And then, in order to have a minimum distance between them, there are probably multiple ways, but I guess the simplest - brute force - way would be:
store the already used positions
when you get a new random position, check the distance to the already existing ones
keep getting new random positions until you find one that is not too close to an already existing one
This of course depends a lot on your use case, the amount of objects, the minimum distance, etc. - in other words, I leave it up to you to make sure that the requested amount and minimum distance are even achievable with the given sphere radius.
You could always leave an "emergency exit" and give up after e.g. 100 attempts.
Something like e.g.
// Linq offers some handy query shorthands that allow you to shorten
// long foreach loops into single calls
using System;
using System.Linq;
...
private const int MAX_ATTEMPTS = 100;
public float minimumDistance = 1f;

void Start()
{
    var center = transform.position + centerOfSphere;

    // It is cheaper to work with square magnitudes
    var minDistanceSqr = minimumDistance * minimumDistance;

    // For storing the already used positions.
    // Already initialize with the correct capacity; this saves resources
    var usedPositions = new List<Vector3>(stonesToSpawn);

    for (int i = 0; i < stonesToSpawn; i++)
    {
        // Keep track of the attempts (for the emergency break)
        var attempts = 0;
        Vector3 pos = Vector3.zero;
        do
        {
            // Get a new random position
            pos = RandomOnSphere(center, radiusOfSphere);
            // Increase the attempts
            attempts++;
            // We couldn't find a "free" position within the 100 attempts :(
            if (attempts >= MAX_ATTEMPTS)
            {
                throw new Exception("Unable to find a free spot! :'(");
            }
        }
        // As the name suggests, this checks if any "p" in "usedPositions" is too close to the given "pos"
        while (usedPositions.Any(p => (p - pos).sqrMagnitude <= minDistanceSqr));

        var rot = Quaternion.FromToRotation(Vector3.forward, center - pos);
        Instantiate(stones[Random.Range(0, stones.Count)], pos, rot);

        // Finally add this position to the used ones so the next iteration
        // also checks against this position
        usedPositions.Add(pos);
    }
}
Where
usedPositions.Any(p => (p - pos).sqrMagnitude <= minDistanceSqr)
basically equals doing something like
private bool AnyPointTooClose(Vector3 pos, List<Vector3> usedPositions, float minDistanceSqr)
{
    foreach (var p in usedPositions)
    {
        if ((p - pos).sqrMagnitude <= minDistanceSqr)
        {
            return true;
        }
    }
    return false;
}
in case that's easier for you to follow.

2D Projectile Trajectory Prediction (Unity 2D)

Using Unity 2019.3.7f1, 2D.
I have a player that moves around using a pullback mechanic and has a max power (like in Angry Birds).
I'm trying to draw a line (using a line renderer) that shows the exact path the player will take. I want the line to curve just like the player's path will. So far I've only managed to make a straight line, in a pretty scuffed way.
The known variables are the jump power and the player's position; there is no friction, and gravity is a constant (-9.81). Also, I would like a variable that lets me control the line's length. And, if possible, the line should not go through objects and would act as if it has a collider.
// Edit
This is my current code. I changed the function so it returns the list of points, because I want to access it in Update() so the line only draws while I hold the mouse button.
My problem is that the trajectory line doesn't curve: it draws in the right direction and at the right angle, but it stays straight, so my initial issue of the line not curving remains unchanged. If you could come back to me with an answer I would appreciate it.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class TrajectoryShower : MonoBehaviour
{
    LineRenderer lr;
    public int Points;
    public GameObject Player;
    private float collisionCheckRadius = 0.1f;
    public float TimeOfSimulation;

    private void Awake()
    {
        lr = GetComponent<LineRenderer>();
        lr.startColor = Color.white;
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetButton("Fire1"))
        {
            lr.positionCount = SimulateArc().Count;
            for (int a = 0; a < lr.positionCount; a++)
            {
                lr.SetPosition(a, SimulateArc()[a]);
            }
        }
        if (Input.GetButtonUp("Fire1"))
        {
            lr.positionCount = 0;
        }
    }

    private List<Vector2> SimulateArc()
    {
        float simulateForDuration = TimeOfSimulation;
        float simulationStep = 0.1f; // Will add a point every 0.1 secs.
        int steps = (int)(simulateForDuration / simulationStep);
        List<Vector2> lineRendererPoints = new List<Vector2>();
        Vector2 calculatedPosition;
        Vector2 directionVector = Player.GetComponent<DragAndShoot>().Direction; // The direction it should go
        Vector2 launchPosition = transform.position; // Position where you launch from
        float launchSpeed = 5f; // The initial power applied on the player

        for (int i = 0; i < steps; ++i)
        {
            calculatedPosition = launchPosition + (directionVector * (launchSpeed * i * simulationStep));
            // Calculate gravity
            calculatedPosition.y += Physics2D.gravity.y * (i * simulationStep);
            lineRendererPoints.Add(calculatedPosition);
            if (CheckForCollision(calculatedPosition)) // if you hit something
            {
                break; // stop adding positions
            }
        }
        return lineRendererPoints;
    }

    private bool CheckForCollision(Vector2 position)
    {
        Collider2D[] hits = Physics2D.OverlapCircleAll(position, collisionCheckRadius);
        if (hits.Length > 0)
        {
            for (int x = 0; x < hits.Length; x++)
            {
                if (hits[x].tag != "Player" && hits[x].tag != "Floor")
                {
                    return true;
                }
            }
        }
        return false;
    }
}
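For what it's worth, the straight line has a simple explanation (my observation on the snippet above, not part of the original thread): the gravity term in SimulateArc() multiplies Physics2D.gravity.y by the elapsed time only once, so both terms of the sum are linear in time and all sampled points lie on a straight line. Gravity's displacement grows with the square of time (½·g·t²), so the curve comes back once the elapsed time is squared, for example via a helper like this:

// Position along the arc t seconds after launch, for initial velocity = direction * speed.
// The gravity contribution must grow with t squared, otherwise the path stays straight.
Vector2 ArcPoint(Vector2 launchPosition, Vector2 directionVector, float launchSpeed, float t)
{
    Vector2 p = launchPosition + directionVector * (launchSpeed * t);
    p.y += 0.5f * Physics2D.gravity.y * t * t;
    return p;
}

Inside the loop this would replace the two position assignments: calculatedPosition = ArcPoint(launchPosition, directionVector, launchSpeed, i * simulationStep);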
Here's a simple way to visualize this.
To create your line you want a bunch of points.
The points represent the player's position after being fired, after X amount of time.
The position of each point is going to be: DirectionVector * (launchSpeed * timeElapsed) + (GravityDirection * timeElapsed²)
You can decide in advance how far ahead you pre-calculate the points by choosing a total simulation duration and a simulation step (calculate a point every X amount of time).
To detect collisions, each time you calculate a point you can do a small circle cast at that location. If it hits something, you can stop adding new points.
private float collisionCheckRadius = 0.1f;

private void SimulateArc()
{
    float simulateForDuration = 5f; // simulate for 5 secs into the future
    float simulationStep = 0.1f;    // will add a point every 0.1 secs.
    int steps = (int)(simulateForDuration / simulationStep); // 50 in this example
    List<Vector2> lineRendererPoints = new List<Vector2>();
    Vector2 calculatedPosition;
    Vector2 directionVector = new Vector2(0.5f, 0.5f); // plug your own direction in here, this is just an example
    Vector2 launchPosition = Vector2.zero; // position where you launch from
    float launchSpeed = 10f; // example speed per sec.

    for (int i = 0; i < steps; ++i)
    {
        calculatedPosition = launchPosition + (directionVector * (launchSpeed * i * simulationStep));
        // Calculate gravity
        calculatedPosition.y += Physics2D.gravity.y * (i * simulationStep) * (i * simulationStep);
        lineRendererPoints.Add(calculatedPosition);
        if (CheckForCollision(calculatedPosition)) // if you hit something
        {
            break; // stop adding positions
        }
    }
    // Assign all the positions to the line renderer.
}

private bool CheckForCollision(Vector2 position)
{
    Collider2D[] hits = Physics2D.OverlapCircleAll(position, collisionCheckRadius);
    if (hits.Length > 0)
    {
        // We hit something.
        // Check if it's a wall or something;
        // if it's a valid hit, then return true.
        return true;
    }
    return false;
}
This is basically a sum of two vectors over time.
You have your initial position (x0, y0), an initial speed vector (x, y), and the gravity vector (0, -9.81) being applied over time. You can build a function that gives you the position over time:
f(t) = (x0 + x*t, y0 + y*t - 9.81t²/2)
translating to Unity:
Vector2 positionInTime(float time, Vector2 initialPosition, Vector2 initialSpeed)
{
    return initialPosition +
        new Vector2(initialSpeed.x * time, initialSpeed.y * time - 4.905f * (time * time));
}
Now, choose a little delta time, say dt = 0.25.
Time | Position
0) 0.00 | f(0.00) = (x0, y0)
1) 0.25 | f(0.25) = (x1, y1)
2) 0.50 | f(0.50) = (x2, y2)
3) 0.75 | f(0.75) = (x3, y3)
4) 1.00 | f(1.00) = (x4, y4)
... | ...
Over time, you get a lot of points the line will pass through. Choose a time interval (say 3 seconds), evaluate all the points between 0 and 3 seconds (using f), and feed them to your line renderer one by one.
The line renderer has properties like width, width over time, color, etc. That part is up to you.
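Putting it together, a short sketch of that sampling loop (my illustration, assuming the corrected positionInTime above lives in the same MonoBehaviour and a LineRenderer sits on the same GameObject):

// Sample the trajectory at fixed time steps and hand the points to a LineRenderer.
void DrawTrajectory(Vector2 initialPosition, Vector2 initialSpeed, float duration = 3f, float dt = 0.25f)
{
    var lr = GetComponent<LineRenderer>();
    int count = Mathf.CeilToInt(duration / dt) + 1;
    lr.positionCount = count;
    for (int i = 0; i < count; i++)
    {
        Vector2 p = positionInTime(i * dt, initialPosition, initialSpeed);
        lr.SetPosition(i, p); // Vector2 implicitly converts to Vector3 (z = 0)
    }
}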

Algorithm for generating a "ramp" object in Unity

I'm creating a basic simulator in Unity for my A-level Computer Science project. At the moment the user is able to draw a box (crate) object by selecting the associated tool and clicking and dragging to determine two opposite corners of the box, thus determining its dimensions.
The box consists of a single prefab which is instantiated and has its size changed accordingly. The code for it is as follows:
void Start()
{
    boxAnim = boxButton.GetComponent<Animator>();
}

// Update is called once per frame
void Update()
{
    // Sets the mouseDown and mouseHeld bools and the mouse position Vector3
    mouseDown = Input.GetMouseButtonDown(0);
    mouseHeld = Input.GetMouseButton(0);
    mousePosition = Input.mousePosition;

    // Checks if the user has started to draw
    if (mouseDown && !draw)
    {
        draw = true;
        originalMousePosition = mousePosition;
    }

    // Checks if the user has released the mouse
    if (draw && !mouseHeld)
    {
        finalMousePosition = mousePosition;
        draw = false;
        if (boxAnim.GetBool("Pressed") == true) // if the box draw button is pressed
        {
            boxDraw(originalMousePosition, finalMousePosition); // draws crate
        }
    }
}

void boxDraw(Vector3 start, Vector3 end)
{
    // Assigns world coordinates for the start and end mouse positions
    worldStart = Camera.main.ScreenToWorldPoint(start);
    worldEnd = Camera.main.ScreenToWorldPoint(end);

    if (worldStart.y >= -3.2f && worldEnd.y >= -3.2f)
    {
        // Determines the size of box to be drawn
        boxSize.x = Mathf.Abs(worldStart.x - worldEnd.x);
        boxSize.y = Mathf.Abs(worldStart.y - worldEnd.y);

        // Crate sprite is 175px wide, 175/50 = 3.5 (50px per unit),
        // so the scale factor must be the size divided by 3.5
        boxScaleFactor.x = boxSize.x / 3.5f;
        boxScaleFactor.y = boxSize.y / 3.5f;

        // Initial scale of the box is 1 (this isn't necessary but makes reading the program easier)
        boxScale.x = 1 * boxScaleFactor.x;
        boxScale.y = 1 * boxScaleFactor.y;

        // Creates a new crate under the name newBox and alters its size
        GameObject newBox = Instantiate(box, normalCoords(start, end), box.transform.rotation) as GameObject;
        newBox.transform.localScale = boxScale;
    }
}

Vector3 normalCoords(Vector3 start, Vector3 end)
{
    // Takes start and end screen positions and returns a world coordinate for the box's centre
    if (end.x > start.x)
    {
        start.x = start.x + (Mathf.Abs(start.x - end.x) / 2f);
    }
    else
    {
        start.x = start.x - (Mathf.Abs(start.x - end.x) / 2f);
    }
    if (end.y > start.y)
    {
        start.y = start.y + (Mathf.Abs(start.y - end.y) / 2f);
    }
    else
    {
        start.y = start.y - (Mathf.Abs(start.y - end.y) / 2f);
    }
    start = Camera.main.ScreenToWorldPoint(new Vector3(start.x, start.y, 0f));
    return start;
}
In a similar manner, I want the user to be able to create a 'ramp' object: click and drag to determine the base width, then click again to determine the angle of elevation/height (the ramp will always be a right-angled triangle). The problem is that I want the ramp to use a sprite I have created, rather than just a basic block colour. A single sprite, however, would only have a single angle of elevation, and no transform would be able to alter this (as far as I'm aware). Obviously I don't want to have to create a different sprite for each angle, so is there anything I can do?
The solution I was thinking of was some sort of feature whereby I could alter the nodes of a vector image in code, but I'm pretty sure that doesn't exist.
EDIT: Just to clarify, this is a 2D environment; the code includes Vector3s just because that's what I'm used to.
You mention Sprite, which is a 2D object (well, it's actually very much like a Quad, which counts as 3D), but you reference full 3D in other parts of your question and in your code, which I think was confusing people, because creating a texture for a sprite is a very different problem. I am assuming you mentioned Sprite by mistake and you actually want a 3D object (Unity is 3D internally most of the time anyway); it can have only one side if you want.
You can create 3D shapes from code, no problem, although you do need to get familiar with the Mesh class, and mastering creating triangles on the fly takes some practice.
Here are a couple of good starting points:
https://docs.unity3d.com/Manual/Example-CreatingaBillboardPlane.html
https://docs.unity3d.com/ScriptReference/Mesh.html
I have a solution to part of the problem using meshes and a polygon collider. I now have a function that will create a right-angled triangle with a given width and height, and a collider in the shape of that triangle:
using UnityEngine;
using System.Collections;

public class createMesh : MonoBehaviour
{
    public float width = 5f;
    public float height = 5f;
    public PolygonCollider2D polyCollider;

    void Start()
    {
        polyCollider = GetComponent<PolygonCollider2D>();
    }

    // Update is called once per frame
    void Update()
    {
        TriangleMesh(width, height);
    }

    void TriangleMesh(float width, float height)
    {
        MeshFilter mf = GetComponent<MeshFilter>();
        Mesh mesh = new Mesh();
        mf.mesh = mesh;

        // Vertices
        Vector3[] vertices = new Vector3[3]
        {
            new Vector3(0, 0, 0), new Vector3(width, 0, 0), new Vector3(0, height, 0)
        };

        // Triangles
        int[] tri = new int[3];
        tri[0] = 0;
        tri[1] = 2;
        tri[2] = 1;

        // Normals
        Vector3[] normals = new Vector3[3];
        normals[0] = -Vector3.forward;
        normals[1] = -Vector3.forward;
        normals[2] = -Vector3.forward;

        // UVs
        Vector2[] uv = new Vector2[3];
        uv[0] = new Vector2(0, 0);
        uv[1] = new Vector2(1, 0);
        uv[2] = new Vector2(0, 1);

        // Initialise
        mesh.vertices = vertices;
        mesh.triangles = tri;
        mesh.normals = normals;
        mesh.uv = uv;

        // Setting up the collider
        polyCollider.pathCount = 1;
        Vector2[] path = new Vector2[3]
        {
            new Vector2(0, 0), new Vector2(0, height), new Vector2(width, 0)
        };
        polyCollider.SetPath(0, path);
    }
}
I just need to put this function into code very similar to my box-drawing code, so that the user can specify the width and height.
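A rough sketch of that wiring, under the same assumptions as the box-drawing code above (the existing mouse fields and normalCoords helper, plus a rampAnim button check and a ramp prefab carrying the createMesh component; rampAnim and ramp are my placeholder names):

// In Update(), next to the existing box check:
if (rampAnim.GetBool("Pressed") == true) // if the ramp draw button is pressed
{
    rampDraw(originalMousePosition, finalMousePosition);
}

// Instantiates a ramp whose triangle matches the dragged rectangle.
void rampDraw(Vector3 start, Vector3 end)
{
    Vector3 worldStart = Camera.main.ScreenToWorldPoint(start);
    Vector3 worldEnd = Camera.main.ScreenToWorldPoint(end);
    GameObject newRamp = Instantiate(ramp, normalCoords(start, end), ramp.transform.rotation) as GameObject;
    createMesh cm = newRamp.GetComponent<createMesh>();
    cm.width = Mathf.Abs(worldStart.x - worldEnd.x);   // base width from the drag
    cm.height = Mathf.Abs(worldStart.y - worldEnd.y);  // elevation from the drag
}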

Meshes starting to jump on camera rotation/movement

Hi everyone,
first time posting here, because I'm damn stuck...
The further away a mesh is from the origin at (0, 0, 0), the more it "jumps"/"flickers" when rotating or moving the camera. The effect is hard to describe: it is as if the mesh is jittering/shivering/trembling a little, and the trembling gets bigger and bigger as the distance from the origin grows.
For me, it becomes observable at around 100,000 units from the origin, e.g. at (0, 0, 100000). Neither the axis of the translation nor the type of mesh (a default mesh created with Mesh.Create... or a 3ds mesh imported via assimp.NET) has any influence on the effect. The value of the mesh's position does not change when the effect occurs; I checked this by logging the position.
If I'm not missing something, this narrows it down to two possibilities:
My camera code
The DirectX-Device
As for the DirectX-Device, this is my device initialization code:
private void InitializeDevice()
{
    // Initialize D3D
    _d3dObj = new D3D9.Direct3D();

    // Set presentation parameters
    _presParams = new D3D9.PresentParameters();
    _presParams.Windowed = true;
    _presParams.SwapEffect = D3D9.SwapEffect.Discard;
    _presParams.AutoDepthStencilFormat = D3D9.Format.D16;
    _presParams.EnableAutoDepthStencil = true;
    _presParams.PresentationInterval = D3D9.PresentInterval.One;
    _presParams.BackBufferFormat = _d3dObj.Adapters.DefaultAdapter.CurrentDisplayMode.Format;
    _presParams.BackBufferHeight = _d3dObj.Adapters.DefaultAdapter.CurrentDisplayMode.Height;
    _presParams.BackBufferWidth = _d3dObj.Adapters.DefaultAdapter.CurrentDisplayMode.Width;

    // Set form width and height to the current backbuffer width and height
    this.Width = _presParams.BackBufferWidth;
    this.Height = _presParams.BackBufferHeight;

    // Checking device capabilities
    D3D9.Capabilities caps = _d3dObj.GetDeviceCaps(0, D3D9.DeviceType.Hardware);
    D3D9.CreateFlags devFlags = D3D9.CreateFlags.SoftwareVertexProcessing;
    D3D9.DeviceType devType = D3D9.DeviceType.Reference;

    // Setting device flags according to device capabilities
    if ((caps.VertexShaderVersion >= new Version(2, 0)) && (caps.PixelShaderVersion >= new Version(2, 0)))
    {
        // If the device supports vertex and pixel shaders >= 2.0,
        // then use the hardware device
        devType = D3D9.DeviceType.Hardware;
        if (caps.DeviceCaps.HasFlag(D3D9.DeviceCaps.HWTransformAndLight))
        {
            devFlags = D3D9.CreateFlags.HardwareVertexProcessing;
        }
        if (caps.DeviceCaps.HasFlag(D3D9.DeviceCaps.PureDevice))
        {
            devFlags |= D3D9.CreateFlags.PureDevice;
        }
    }

    // Initialize the device
    _device = new D3D9.Device(_d3dObj, 0, devType, this.Handle, devFlags, _presParams);

    // Set culling
    _device.SetRenderState(D3D9.RenderState.CullMode, D3D9.Cull.Counterclockwise);

    // Set texture wrapping (needed for seamless sphere mapping)
    _device.SetRenderState(D3D9.RenderState.Wrap0, D3D9.TextureWrapping.All);

    // Set lighting
    _device.SetRenderState(D3D9.RenderState.Lighting, false);

    // Enabling the z-buffer
    _device.SetRenderState(D3D9.RenderState.ZEnable, D3D9.ZBufferType.UseZBuffer);

    // ... and setting write access explicitly to true.
    // I'm a little paranoid about this since I had to struggle for a few days with weirdly overlapping meshes
    _device.SetRenderState(D3D9.RenderState.ZWriteEnable, true);
}
Am I missing a flag or render state? Is there something that could cause such weird/distorted behaviour?
My camera class is based on Michael Silverman's C++ quaternion camera:
// Every variable prefixed with an underscore is
// a private static variable initialized beforehand
public static class Camera
{
    // Gets called every frame
    public static void Update()
    {
        if (_filter)
        {
            _filteredPos = Vector3.Lerp(_filteredPos, _pos, _filterAlpha);
            _filteredRot = Quaternion.Slerp(_filteredRot, _rot, _filterAlpha);
        }
        _device.SetTransform(D3D9.TransformState.Projection, Matrix.PerspectiveFovLH(_fov, _screenAspect, _nearClippingPlane, _farClippingPlane));
        _device.SetTransform(D3D9.TransformState.View, GetViewMatrix());
    }

    public static void Move(Vector3 delta)
    {
        _pos += delta;
    }

    public static void RotationYaw(float theta)
    {
        _rot = Quaternion.Multiply(Quaternion.RotationAxis(_up, -theta), _rot);
    }

    public static void RotationPitch(float theta)
    {
        _rot = Quaternion.Multiply(_rot, Quaternion.RotationAxis(_right, theta));
    }

    public static void SetTarget(Vector3 target, Vector3 up)
    {
        SetPositionAndTarget(_pos, target, up);
    }

    public static void SetPositionAndTarget(Vector3 position, Vector3 target, Vector3 upVec)
    {
        _pos = position;
        Vector3 up, right, lookAt = target - _pos;
        lookAt = Vector3.Normalize(lookAt);
        right = Vector3.Cross(upVec, lookAt);
        right = Vector3.Normalize(right);
        up = Vector3.Cross(lookAt, right);
        up = Vector3.Normalize(up);
        SetAxis(lookAt, up, right);
    }

    public static void SetAxis(Vector3 lookAt, Vector3 up, Vector3 right)
    {
        Matrix rot = Matrix.Identity;
        rot.M11 = right.X;
        rot.M12 = up.X;
        rot.M13 = lookAt.X;
        rot.M21 = right.Y;
        rot.M22 = up.Y;
        rot.M23 = lookAt.Y;
        rot.M31 = right.Z;
        rot.M32 = up.Z;
        rot.M33 = lookAt.Z;
        _rot = Quaternion.RotationMatrix(rot);
    }

    public static void ViewScene(BoundingSphere sphere)
    {
        SetPositionAndTarget(sphere.Center - new Vector3((sphere.Radius + 150) / (float)Math.Sin(_fov / 2), 0, 0), sphere.Center, new Vector3(0, 1, 0));
    }

    public static Vector3 GetLookAt()
    {
        Matrix rot = Matrix.RotationQuaternion(_rot);
        return new Vector3(rot.M13, rot.M23, rot.M33);
    }

    public static Vector3 GetRight()
    {
        Matrix rot = Matrix.RotationQuaternion(_rot);
        return new Vector3(rot.M11, rot.M21, rot.M31);
    }

    public static Vector3 GetUp()
    {
        Matrix rot = Matrix.RotationQuaternion(_rot);
        return new Vector3(rot.M12, rot.M22, rot.M32);
    }

    public static Matrix GetViewMatrix()
    {
        Matrix viewMatrix, translation = Matrix.Identity;
        Vector3 position;
        Quaternion rotation;
        if (_filter)
        {
            position = _filteredPos;
            rotation = _filteredRot;
        }
        else
        {
            position = _pos;
            rotation = _rot;
        }
        translation = Matrix.Translation(-position.X, -position.Y, -position.Z);
        viewMatrix = Matrix.Multiply(translation, Matrix.RotationQuaternion(rotation));
        return viewMatrix;
    }
}
Do you spot anything in the camera code which could cause this behaviour?
I just can't imagine that DirectX can't handle distances greater than 100k. I'm supposed to render solar systems, and I'm using 1 unit = 1 km, so the Earth at its maximum distance from the Sun would be rendered at (0, 0, 152100000) (just as an example). This becomes impossible if these "jumps" keep occurring.
Finally, I thought about scaling everything down so that a system never goes beyond 100k/-100k units from the origin, but I don't think this will work: the "jittering" gets bigger as the distance from the origin gets bigger, so scaling everything down would - I think - scale down the jumping behaviour along with it.
Just so as not to leave this question unanswered (credits to #jcoder, see the comments on the question):
The weird behaviour of the meshes comes from floating-point precision. The bigger your world gets, the less precision there is to calculate positions accurately; a 32-bit float carries a fixed number of significant bits, so near 100,000 units the smallest representable step is already about 0.008 units.
There are two possibilities to solve this problem:
Downscaling the whole world
this may be problematic in a "galactic-style" world, where you have really big position offsets as well as really small ones (i.e. the distance from a planet to its sun is really big, but the distance of a spaceship in orbit around a planet may be really small)
Dividing the world into smaller chunks
this way you either have to express all positions relative to something else (see stackoverflow.com/questions/1930421) or make multiple worlds and somehow move between them
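A minimal sketch of that second idea, camera-relative rendering (my illustration, not from the thread; Matrix.Translation is the same SlimDX-style helper already used in the camera code above): keep full-precision positions in doubles on the CPU and convert only the camera-to-object offset to float, so the values the GPU ever sees stay small.

// CPU-side positions stay in double precision (e.g. kilometres, heliocentric).
struct BodyState
{
    public double X, Y, Z;
}

// Build a world translation relative to the camera each frame. cameraPos is the
// camera's position in the same double-precision space; the view matrix is then
// built as if the camera sat at the origin.
Matrix RelativeTranslation(BodyState body, BodyState cameraPos)
{
    // The difference is small for anything near the camera, so converting it
    // to float loses far less precision than the absolute coordinates would.
    float dx = (float)(body.X - cameraPos.X);
    float dy = (float)(body.Y - cameraPos.Y);
    float dz = (float)(body.Z - cameraPos.Z);
    return Matrix.Translation(dx, dy, dz);
}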
