I'm creating a basic simulator in Unity for my A-level Computer Science project. At the moment the user can draw a box (crate) object by selecting the associated tool and clicking and dragging to set two opposite corners of the box, which determine its dimensions.
The box consists of a single prefab which is instantiated and has its size changed accordingly. The code for it is as follows:
void Start () {
boxAnim = boxButton.GetComponent<Animator>();
}
// Update is called once per frame
void Update()
{
//sets the mouseDown and mouseHeld bools and the mouse position Vector3
mouseDown = Input.GetMouseButtonDown(0);
mouseHeld = Input.GetMouseButton(0);
mousePosition = Input.mousePosition;
//checks if the user has started to draw
if (mouseDown && !draw)
{
draw = true;
originalMousePosition = mousePosition;
}
//checking if the user has released the mouse
if (draw && !mouseHeld)
{
finalMousePosition = mousePosition;
draw = false;
if (boxAnim.GetBool("Pressed") == true) //if the box draw button is pressed
{
boxDraw(originalMousePosition, finalMousePosition); //draws crate
}
}
}
void boxDraw(Vector3 start, Vector3 end)
{
//assigns world coordinates for the start and end mouse positions
worldStart = Camera.main.ScreenToWorldPoint(start);
worldEnd = Camera.main.ScreenToWorldPoint(end);
if (worldStart.y >= -3.2f && worldEnd.y >= -3.2f)
{
//determines the size of box to be drawn
boxSize.x = Mathf.Abs(worldStart.x - worldEnd.x);
boxSize.y = Mathf.Abs(worldStart.y - worldEnd.y);
//crate sprite is 175px wide, 175/50 = 3.5 (50px per unit) so the scale factor must be the size, divided by 3.5
boxScaleFactor.x = boxSize.x / 3.5f;
boxScaleFactor.y = boxSize.y / 3.5f;
//initial scale of the box is 1 (this isn't necessary but makes reading program easier)
boxScale.x = 1 * boxScaleFactor.x;
boxScale.y = 1 * boxScaleFactor.y;
//creates a new crate under the name newBox and alters its size
GameObject newBox = Instantiate(box, normalCoords(start, end), box.transform.rotation) as GameObject;
newBox.transform.localScale = boxScale;
}
}
Vector3 normalCoords(Vector3 start, Vector3 end)
{
//takes the start and end screen positions and returns a world-space coordinate for the centre of the box
if(end.x > start.x)
{
start.x = start.x + (Mathf.Abs(start.x - end.x) / 2f);
}
else
{
start.x = start.x - (Mathf.Abs(start.x - end.x) / 2f);
}
if(end.y > start.y)
{
start.y = start.y + (Mathf.Abs(start.y - end.y) / 2f);
}
else
{
start.y = start.y - (Mathf.Abs(start.y - end.y) / 2f);
}
start = Camera.main.ScreenToWorldPoint(new Vector3(start.x, start.y, 0f));
return start;
}
In a similar manner, I want the user to be able to create a 'ramp' object: click and drag to determine the base width, then click again to determine the angle of elevation/height (the ramp will always be a right-angled triangle). The problem is that I want the ramp to use a sprite I have created, rather than just a basic block colour. A single sprite, however, would only have a single angle of elevation, and no transform would be able to alter this (as far as I'm aware). Obviously I don't want to have to create a different sprite for each angle, so is there anything I can do?
The solution I was thinking of was some sort of feature whereby I could alter the nodes of a vector image in code, but I'm pretty sure this doesn't exist.
EDIT: Just to clarify, this is a 2D environment; the code includes Vector3s just because that's what I'm used to.
You mention Sprite, which is a 2D object (well, it's actually very much like a Quad, which counts as 3D), but you reference full 3D in other parts of your question and in your code, which I think was confusing people, because creating a texture for a sprite is a very different problem. I am assuming you mentioned Sprite by mistake and you actually want a 3D object (Unity is 3D internally most of the time anyway); it can have only one side if you want.
You can create 3D shapes from code without problems, although you do need to get familiar with the Mesh class, and mastering creating triangles on the fly takes some practice.
Here's a couple of good starting points:
https://docs.unity3d.com/Manual/Example-CreatingaBillboardPlane.html
https://docs.unity3d.com/ScriptReference/Mesh.html
I have a solution to part of the problem using meshes and a polygon collider. I now have a function that will create a right angled triangle with a given width and height and a collider in the shape of that triangle:
using UnityEngine;
using System.Collections;
public class createMesh : MonoBehaviour {
public float width = 5f;
public float height = 5f;
public PolygonCollider2D polyCollider;
void Start()
{
polyCollider = GetComponent<PolygonCollider2D>();
}
// Update is called once per frame
void Update () {
TriangleMesh(width, height);
}
void TriangleMesh(float width, float height)
{
MeshFilter mf = GetComponent<MeshFilter>();
Mesh mesh = new Mesh();
mf.mesh = mesh;
//Vertices
Vector3[] vertices = new Vector3[3]
{
new Vector3(0, 0, 0), new Vector3(width, 0, 0), new Vector3(0, height, 0)
};
//Triangles
int[] tri = new int[3];
tri[0] = 0;
tri[1] = 2;
tri[2] = 1;
//normals
Vector3[] normals = new Vector3[3];
normals[0] = -Vector3.forward;
normals[1] = -Vector3.forward;
normals[2] = -Vector3.forward;
//UVs
Vector2[] uv = new Vector2[3];
uv[0] = new Vector2(0, 0);
uv[1] = new Vector2(1, 0);
uv[2] = new Vector2(0, 1);
//initialise
mesh.vertices = vertices;
mesh.triangles = tri;
mesh.normals = normals;
mesh.uv = uv;
//setting up collider
polyCollider.pathCount = 1;
Vector2[] path = new Vector2[3]
{
new Vector2(0,0), new Vector2(0, height), new Vector2(width, 0)
};
polyCollider.SetPath(0, path);
}
}
I just need to put this function into code very similar to my code for drawing a box so that the user can specify width and height.
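Not the final code, just a minimal sketch of how TriangleMesh() could be wired into the same click-and-drag flow as boxDraw(); rampPrefab and rampDraw are assumed names, TriangleMesh() would need to be made public, and the per-frame call to it in Update() would need to be removed:
//Hypothetical field on the same script as boxDraw(): a prefab carrying a
//MeshFilter, MeshRenderer, PolygonCollider2D and the createMesh component
public GameObject rampPrefab;
void rampDraw(Vector3 start, Vector3 end)
{
//same screen-to-world conversion and floor check as boxDraw()
Vector3 worldStart = Camera.main.ScreenToWorldPoint(start);
Vector3 worldEnd = Camera.main.ScreenToWorldPoint(end);
if (worldStart.y < -3.2f || worldEnd.y < -3.2f) return;
//base width and height of the right-angled triangle
float rampWidth = Mathf.Abs(worldStart.x - worldEnd.x);
float rampHeight = Mathf.Abs(worldStart.y - worldEnd.y);
//spawn at the drag's lower-left corner, since the triangle mesh grows up and to the right from its local origin
Vector3 corner = new Vector3(Mathf.Min(worldStart.x, worldEnd.x), Mathf.Min(worldStart.y, worldEnd.y), 0f);
GameObject newRamp = Instantiate(rampPrefab, corner, Quaternion.identity) as GameObject;
newRamp.GetComponent<createMesh>().TriangleMesh(rampWidth, rampHeight);
}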
So I am trying to make a script for zooming in and out on a UI element (an Image), and so far I am doing it by scaling the image based on differences in magnitude between the touches from frame to frame. The only problem is that it zooms from where the pivot is.
The solution would be to move the pivot to the midpoint of the line that connects the two touches. I tried, but it puts my pivot way off the screen, because the formula I used gives values greater than 1.
Basically, I don't know how to move the pivot so it matches the midpoint of the line that connects the touches.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
public class Zoom : MonoBehaviour
{
public GameObject image1, image2;
public RectTransform tran1, tran2;
public float zoomSpeed = 0.0090f;
public bool startZooming = false;
void Start()
{
tran1 = image1.GetComponent<RectTransform>();
tran2 = image2.GetComponent<RectTransform>();
}
void Update()
{
if(Input.touchCount == 0)
{
//Remember if the player is zooming to be able to change pivot point
startZooming = false;
}
// If there are two touches on the device...
if (Input.touchCount == 2)
{
// Store both touches.
Touch touchZero = Input.GetTouch(0);
Touch touchOne = Input.GetTouch(1);
// Find the position in the previous frame of each touch.
Vector2 touchZeroPrevPos = touchZero.position - touchZero.deltaPosition;
Vector2 touchOnePrevPos = touchOne.position - touchOne.deltaPosition;
// Find the magnitude of the vector (the distance) between the touches in each frame.
float prevTouchDeltaMag = (touchZeroPrevPos - touchOnePrevPos).magnitude;
float touchDeltaMag = (touchZero.position - touchOne.position).magnitude;
// Find the difference in the distances between each frame.
float deltaMagnitudeDiff = prevTouchDeltaMag - touchDeltaMag;
//Find pivot point, the middle of the line that connects the touch points
if (deltaMagnitudeDiff < 0 && startZooming == false)
{
float xpivot, ypivot;
xpivot = (touchZero.position.x + touchOne.position.x) / 2;
ypivot = (touchOne.position.y + touchZero.position.y) / 2;
tran1.pivot = new Vector2(xpivot, ypivot);
tran2.pivot = new Vector2(xpivot, ypivot);
startZooming = true; // player is currently zooming, don't change the pivot point
}
float x, y;
x = tran1.localScale.x - deltaMagnitudeDiff * zoomSpeed;
y = tran1.localScale.y - deltaMagnitudeDiff * zoomSpeed;
// Make sure the localScale size never goes below 1 or above 5
x = Mathf.Clamp(x, 1.0f, 5.0f);
y = Mathf.Clamp(y, 1.0f, 5.0f);
// ... change the localScale size based on the change in distance between the touches.
tran1.localScale = new Vector3(x, y, tran1.localScale.z);
tran2.localScale = new Vector3(x, y, tran2.localScale.z);
}
}
}
The RectTransform.pivot you are trying to set is
The normalized position in this RectTransform that it rotates around.
so it should be a vector between (0,0) and (1,1)!
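If you did want to keep setting the pivot directly, the touch midpoint (which is in pixels) would first have to be normalized into that 0..1 range, for example with a helper along these lines (my sketch, assuming a Screen Space - Overlay canvas; ScreenPointToNormalizedPivot is not an existing Unity method):
//Converts a screen-space point into the 0..1 pivot space of a RectTransform
Vector2 ScreenPointToNormalizedPivot(RectTransform rect, Vector2 screenPoint)
{
Vector2 local;
//pass the canvas camera instead of null for a Screen Space - Camera canvas
RectTransformUtility.ScreenPointToLocalPointInRectangle(rect, screenPoint, null, out local);
//local is measured from the current pivot, so shift by it before normalizing
Vector2 size = rect.rect.size;
return new Vector2(local.x / size.x + rect.pivot.x, local.y / size.y + rect.pivot.y);
}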
Alternatively, you can use the ScaleAround method (adapted from ScaleAroundRelative in this post):
public static void ScaleAround(Transform target, Vector3 pivotInWorldSpace, Vector3 newScale)
{
// pivot
var pivot = target.InverseTransformPoint(pivotInWorldSpace);
// diff from object pivot to desired pivot/origin
Vector3 pivotDelta = target.transform.localPosition - pivot;
Vector3 scaleFactor = new Vector3(
newScale.x / target.transform.localScale.x,
newScale.y / target.transform.localScale.y,
newScale.z / target.transform.localScale.z );
pivotDelta.Scale(scaleFactor);
target.transform.localPosition = pivot + pivotDelta;
//scale
target.transform.localScale = newScale;
}
then I would implement it using something like
private float initialTouchDistance;
private Vector3 pivot1;
private Vector3 pivot2;
private Vector2 initialScale1;
private Vector2 initialScale2;
void Update()
{
// If there are two touches on the device...
if (Input.touchCount == 2)
{
// Store both touches.
var touchZero = Input.GetTouch(0);
var touchOne = Input.GetTouch(1);
var touchZeroPosition = touchZero.position;
var touchOnePosition = touchOne.position;
var currentDistance = Vector2.Distance(touchZeroPosition, touchOnePosition);
// Is this a new zoom process? => One of the two touches was added this frame
if(touchZero.phase == TouchPhase.Began || touchOne.phase == TouchPhase.Began)
{
// initialize values
initialTouchDistance = currentDistance;
initialScale1 = tran1.localScale;
initialScale2 = tran2.localScale;
// get center between the touches
// THIS IS STILL IN PIXEL SPACE
var zoomPivot = (touchZeroPosition + touchOnePosition) / 2f;
// Get the position on the RectTransforms planes in WORLD SPACE
RectTransformUtility.ScreenPointToWorldPointInRectangle(tran1, zoomPivot, Camera.main, out pivot1);
RectTransformUtility.ScreenPointToWorldPointInRectangle(tran2, zoomPivot, Camera.main, out pivot2);
}
// Is this an already running zoom process and at least one of the touches was moved?
else if(touchZero.phase == TouchPhase.Moved || touchOne.phase == TouchPhase.Moved)
{
// Scale factor
var factor = (currentDistance / initialTouchDistance) * zoomSpeed;
factor = Mathf.Clamp(factor, 1, 5);
var scaleVector1 = Vector3.Scale(new Vector3(factor, factor, 1), initialScale1);
var scaleVector2 = Vector3.Scale(new Vector3(factor, factor, 1), initialScale2);
// apply scaled around pivot
ScaleAround(tran1, pivot1, scaleVector1);
ScaleAround(tran2, pivot2, scaleVector2);
}
}
}
Note: Typed on smartphone but I hope the idea gets clear
I am implementing a pan function in a 3D perspective view using OpenTK and C#. The idea is to have an intuitive 'click and drag' functionality with the right mouse button. One obvious complication is which depth within the scene to use for the click/drag, since the amount of displacement is depth dependent.
I have got mostly blank space in the scene so expecting to get a depth from an object under the pointer doesn't seem a good solution. Therefore my idea is to use the model space origin (which is also the centre of rotation) as the reference, and have that move with the mouse.
I have used the code below. It works as intended with the original zoom level, but once I zoom in or out the displacement becomes too much (when zoomed in) or too little (when zoomed out).
The method is this, when the right button is pressed:
Get the screen coordinates at the start and end of the move.
Get the screen z (depth within the scene) for the world space origin (0,0,0).
Apply that z value to the start/end screen coordinates and convert them to model space.
Get the vector between the two.
Apply that vector to the model matrix as a translation, send to vertex shader and render scene.
My questions are:
Am I overdoing the calculations here? Could the method be simplified?
Why is my mouse pointer not staying a fixed distance from the origin, as intended, once the zoom level changes?
private Vector3 pan = new Vector3();
private float zoom = -3;
private Point mouseStartDrag;
private void GlControl1_MouseDown(object sender, MouseEventArgs e)
{
if (e.Button == MouseButtons.Right)
{
mouseStartDrag = new Point(e.X, e.Y);
}
}
private void GlControl1_MouseMove(object sender, MouseEventArgs e)
{
if (e.Button == MouseButtons.Right) {
Vector3 origin_screen = Project( new Vector3(0, 0, 0) ); // get the screen z for the world space origin
Vector4 screen1 = ScreenToViewSpace( mouseStartDrag, origin_screen.Z ); // start
Vector4 screen2 = ScreenToViewSpace( new Point(e.X, e.Y), origin_screen.Z ); // end
pan = new Vector3(screen2 - screen1);
ApplyPanZoom();
}
}
private Vector4 ScreenToViewSpace(Point MousePos, float ScreenZ) {
int[] viewport = new int[4];
OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);
Vector4 pos = new Vector4();
// Map x and y from window coordinates, map to range -1 to 1
pos.X = (MousePos.X - (float)viewport[0]) / (float)viewport[2] * 2.0f - 1.0f;
pos.Y = 1 - (MousePos.Y - (float)viewport[1]) / (float)viewport[3] * 2.0f;
pos.Z = ScreenZ * 2.0f - 1.0f;
pos.W = 1.0f;
return pos * view;
}
private Vector3 Project(Vector3 p)
{
Vector4 clipSpace = new Vector4(p.X, p.Y, p.Z, 1.0f) * model * view * projection; // clip space coordinates
Vector4 ndc = Vector4.Divide(clipSpace, clipSpace.W); // normalised device coordinates
return new Vector3(
glControl1.Width * (ndc.X + 1) / 2,
glControl1.Height * (ndc.Y + 1) / 2,
(ndc.Z + 1) / 2
);
}
private void ApplyPanZoom() {
view = Matrix4.CreateTranslation(pan.X, pan.Y, zoom);
SetMatrix4(Handle, "view", view);
glControl1.Invalidate();
}
Answering my own question. Got it eventually through trial, error and Googling. The problem was that my 'screen to view space' method wasn't quite right. I had to multiply by the inverse projection matrix. This works, and it results in the sort of click-and-drag pan that I was trying to achieve:
private Vector3 ScreenToViewSpace(Point MousePos, float ScreenZ) {
int[] viewport = new int[4];
OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);
Vector4 pos = new Vector4();
// Map x and y from window coordinates, map to range -1 to 1
pos.X = (MousePos.X - (float)viewport[0]) / (float)viewport[2] * 2.0f - 1.0f;
pos.Y = 1 - (MousePos.Y - (float)viewport[1]) / (float)viewport[3] * 2.0f;
pos.Z = ScreenZ * 2.0f - 1.0f;
pos.W = 1.0f;
Vector4 a = Vector4.Transform(pos, Matrix4.Invert(projection));
Vector3 b = new Vector3(a.X, a.Y, a.Z);
return b / a.W;
}
I've looked through what seems like every question and resource there is for Triangle.NET, trying to find an answer to how to insert a vertex into an existing triangulation. The closest I've gotten was in the discussion archives for Triangle.NET, where someone asked a similar question (discussion id 632458), but unfortunately the answer was not what I was looking for.
My goal here is to make a destructible wall in Unity where, when the player shoots the wall, it will create a hole in the wall (like in Rainbow Six Siege).
Here's what I did for my original implementation:
Create initial triangulation using the four corners of the wall.
When the player shoots, perform a raycast, if the raycast intersects with the wall then add the point of intersection to the polygon variable and re-triangulate the entire mesh using that variable.
Draw new triangulation on the wall as a texture to visualise what's happening.
Repeat.
As you can see, step 2 is the problem.
Because I re-triangulate the entire mesh every time the player hits the wall, the more times the player hits the wall the slower the triangulation gets as the number of vertices rises. This could be fine I guess, but I want destructible walls to play a major role in my game so this is not acceptable.
So, digging through the Triangle.Net source code, I find an internal method called InsertVertex. The summary for this method states:
Insert a vertex into a Delaunay triangulation, performing flips as necessary to maintain the Delaunay property.
This would mean I wouldn't have to re-triangulate every time the player shoots!
So I get to implementing this method, and...it doesn't work. I get an error like the one below:
NullReferenceException: Object reference not set to an instance of an object
TriangleNet.TriangleLocator.PreciseLocate (TriangleNet.Geometry.Point searchpoint, TriangleNet.Topology.Otri& searchtri, System.Boolean stopatsubsegment) (at Assets/Triangle.NET/TriangleLocator.cs:146)
I have been stuck on this problem for days and I cannot solve it for the life of me! If anyone who is knowledgeable enough with the Triangle.NET library would be willing to help me I would be so grateful! Along with that, if there is a better alternative to either the implementation or library I'm using (for my purpose which I outlined above) that would also be awesome!
Currently, how I've set up the scene is really simple, I just have a quad which I scaled up and added the script below to it as a component. I then linked that component to a shoot raycast script attached to the Main Camera:
How the scene is setup.
What it looks like in Play Mode.
The exact Triangle.Net repo I cloned is this one.
My code is posted below:
using UnityEngine;
using TriangleNet.Geometry;
using TriangleNet.Topology;
using TriangleNet.Meshing;
public class Delaunay : MonoBehaviour
{
[SerializeField]
private int randomPoints = 150;
[SerializeField]
private int width = 512;
[SerializeField]
private int height = 512;
private TriangleNet.Mesh mesh;
Polygon polygon = new Polygon();
Otri otri = default(Otri);
Osub osub = default(Osub);
ConstraintOptions constraintOptions = new ConstraintOptions() { ConformingDelaunay = true };
QualityOptions qualityOptions = new QualityOptions() { MinimumAngle = 25 };
void Start()
{
osub.seg = null;
Mesh objMesh = GetComponent<MeshFilter>().mesh;
// Add four corners of wall (quad in this case) to polygon.
//foreach (Vector3 vert in objMesh.vertices)
//{
// Vector2 temp = new Vector2();
// temp.x = map(vert.x, -0.5f, 0.5f, 0, 512);
// temp.y = map(vert.y, -0.5f, 0.5f, 0, 512);
// polygon.Add(new Vertex(temp.x, temp.y));
//}
// Generate random points and add to polygon.
for (int i = 0; i < randomPoints; i++)
{
polygon.Add(new Vertex(Random.Range(0.0f, width), Random.Range(0.0f, height)));
}
// Triangulate polygon.
delaunayTriangulation();
}
// When left click is pressed, a raycast is sent out. If that raycast hits the wall, updatePoints() is called and is passed in the location of the hit (hit.point).
public void updatePoints(Vector3 pos)
{
// Convert pos to local coords of wall.
pos = transform.InverseTransformPoint(pos);
Vertex newVert = new Vertex(pos.x, pos.y);
//// Give new vertex a unique id.
//if (mesh != null)
//{
// newVert.id = mesh.NumberOfInputPoints;
//}
// Insert new vertex into existing triangulation.
otri.tri = mesh.dummytri;
mesh.InsertVertex(newVert, ref otri, ref osub, false, false);
// Draw result as a texture onto the wall so to visualise what is happening.
draw();
}
private void delaunayTriangulation()
{
mesh = (TriangleNet.Mesh)polygon.Triangulate(constraintOptions, qualityOptions);
draw();
}
void draw()
{
Texture2D tx = new Texture2D(width, height);
// Draw triangulation.
if (mesh.Edges != null)
{
foreach (Edge edge in mesh.Edges)
{
Vertex v0 = mesh.vertices[edge.P0];
Vertex v1 = mesh.vertices[edge.P1];
DrawLine(new Vector2((float)v0.x, (float)v0.y), new Vector2((float)v1.x, (float)v1.y), tx, Color.black);
}
}
tx.Apply();
this.GetComponent<Renderer>().sharedMaterial.mainTexture = tx;
}
// Bresenham line algorithm
private void DrawLine(Vector2 p0, Vector2 p1, Texture2D tx, Color c, int offset = 0)
{
int x0 = (int)p0.x;
int y0 = (int)p0.y;
int x1 = (int)p1.x;
int y1 = (int)p1.y;
int dx = Mathf.Abs(x1 - x0);
int dy = Mathf.Abs(y1 - y0);
int sx = x0 < x1 ? 1 : -1;
int sy = y0 < y1 ? 1 : -1;
int err = dx - dy;
while (true)
{
tx.SetPixel(x0 + offset, y0 + offset, c);
if (x0 == x1 && y0 == y1) break;
int e2 = 2 * err;
if (e2 > -dy)
{
err -= dy;
x0 += sx;
}
if (e2 < dx)
{
err += dx;
y0 += sy;
}
}
}
private float map(float from, float fromMin, float fromMax, float toMin, float toMax)
{
float fromAbs = from - fromMin;
float fromMaxAbs = fromMax - fromMin;
float normal = fromAbs / fromMaxAbs;
float toMaxAbs = toMax - toMin;
float toAbs = toMaxAbs * normal;
float to = toAbs + toMin;
return to;
}
}
Great news! I've managed to fix the issue. InsertVertex() doesn't actually add the new vertex to the list of vertices! So this means that when it tried to triangulate, it was trying to point to the new vertex but it couldn't (because that vertex wasn't in the list). So, to solve this, I just manually add my new vertex to the list of vertices in the mesh, before calling InsertVertex(). Note: When you do this, you also need to manually set the vertex's id. I set the id to the size of the list of vertices because I was adding all new vertices to the end of the list.
// When left click is pressed, a raycast is sent out. If that raycast hits the wall, updatePoints() is called and is passed in the location of the hit (hit.point).
public void updatePoints(Vector3 pos)
{
// Convert pos to local coords of wall. You don't need to do this; I do it because of my draw() method, where I map everything out onto a texture and display it.
pos = transform.InverseTransformPoint(pos);
pos.x = map(pos.x, -0.5f, 0.5f, 0, 512);
pos.y = map(pos.y, -0.5f, 0.5f, 0, 512);
Vertex newVert = new Vertex(pos.x, pos.y);
// Manually add new vertex to list of vertices.
newVert.id = mesh.vertices.Count;
mesh.vertices.Add(newVert.id, newVert);
//Doing just the first line gave me a null pointer exception. Adding the two extra lines below it fixed it for me.
otri.tri = mesh.dummytri;
otri.orient = 0;
otri.Sym();
// Insert new vertex into existing triangulation.
mesh.InsertVertex(newVert, ref otri, ref osub, false, false);
// Draw result as a texture onto the wall so to visualise what is happening.
draw();
}
Hope this will help someone down the road!
I have vertices that each have a color value.
I'd like to make a mesh using the vertices with the same color values.
This picture is an example.
I took pictures with my Android phone and did image segmentation on the object, so I got a color value corresponding to each coordinate value.
I succeeded in just making textures; please check the image.
But I want a mesh object.
Below is the texture-making code.
var pixel = await this.segmentation.SegmentAsync(rotated, scaled.width, scaled.height);
// int pixel[][]; // image segmentation using tensorflow
Color transparentColor = new Color32(255, 255, 255, 0); // transparent
for (int y = 0; y < texture.height; y++)
{
for (int x = 0; x < texture.width; x++)
{
int class_output = pixel[y][x];
texture.SetPixel(x, y, pixel[y][x] == 0 ? transparentColor : colors[class_output]);
}
}
texture.Apply();
How can I make a mesh object?
1- Set a prefab with a MeshFilter and a MeshRenderer.
2- Variables inside the script that you will need to fill.
// This first list contains every vertex of the mesh that we are going to render
public List<Vector3> newVertices = new List<Vector3>();
// The triangles tell Unity how to build each section of the mesh joining
// the vertices
public List<int> newTriangles = new List<int>();
// The UV list is unimportant right now but it tells Unity how the texture is
// aligned on each polygon
public List<Vector2> newUV = new List<Vector2>();
// A mesh is made up of the vertices, triangles and UVs we are going to define,
// after we make them up we'll save them as this mesh
private Mesh mesh;
3- Initialize the mesh
void Start () {
mesh = GetComponent<MeshFilter> ().mesh;
float x = transform.position.x;
float y = transform.position.y;
float z = transform.position.z;
newVertices.Add( new Vector3 (x , y , z ));
newVertices.Add( new Vector3 (x + 1 , y , z ));
newVertices.Add( new Vector3 (x + 1 , y-1 , z ));
newVertices.Add( new Vector3 (x , y-1 , z ));
newTriangles.Add(0);
newTriangles.Add(1);
newTriangles.Add(3);
newTriangles.Add(1);
newTriangles.Add(2);
newTriangles.Add(3);
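//NOTE (not in the original answer): tUnit and tStone come from the linked
//tutorial; tUnit is the fraction of the texture atlas that one tile covers and
//tStone is the (x, y) index of the stone tile within that atlas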
newUV.Add(new Vector2 (tUnit * tStone.x, tUnit * tStone.y + tUnit));
newUV.Add(new Vector2 (tUnit * tStone.x + tUnit, tUnit * tStone.y + tUnit));
newUV.Add(new Vector2 (tUnit * tStone.x + tUnit, tUnit * tStone.y));
newUV.Add(new Vector2 (tUnit * tStone.x, tUnit * tStone.y));
mesh.Clear ();
mesh.vertices = newVertices.ToArray();
mesh.triangles = newTriangles.ToArray();
mesh.uv = newUV.ToArray(); // add this line to the code here
mesh.Optimize ();
mesh.RecalculateNormals ();
}
This code will draw a square at the position of the prefab; if you keep adding vertices you can generate a more complex mesh.
The source of this information is a tutorial on generating a mesh for Minecraft-like terrain; check the link for more information.
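To illustrate the "keep adding vertices" idea (my sketch, not from the linked tutorial), the quad-building lines above can be wrapped in a hypothetical AddQuad helper and called once per cell, offsetting the triangle indices by the vertices already in the list:
//Appends one unit quad (two triangles) at the given position to the lists above
void AddQuad(float x, float y, float z)
{
int index = newVertices.Count; //offset indices by the vertices already added
newVertices.Add(new Vector3(x, y, z));
newVertices.Add(new Vector3(x + 1, y, z));
newVertices.Add(new Vector3(x + 1, y - 1, z));
newVertices.Add(new Vector3(x, y - 1, z));
newTriangles.Add(index + 0);
newTriangles.Add(index + 1);
newTriangles.Add(index + 3);
newTriangles.Add(index + 1);
newTriangles.Add(index + 2);
newTriangles.Add(index + 3);
//one full texture per quad just to keep the sketch simple
newUV.Add(new Vector2(0, 1));
newUV.Add(new Vector2(1, 1));
newUV.Add(new Vector2(1, 0));
newUV.Add(new Vector2(0, 0));
}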
The answer which has been selected as best is, in my opinion, faulty for four reasons. First, it is deprecated. Second, it is more complex than necessary. Third, it offers little explanation, and finally, it is mostly just a copy from someone else's blog post. For that reason, I offer a new suggestion. For more info, view the documentation here.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class meshmaker : MonoBehaviour {
Mesh mesh;
MeshFilter meshFilter;
Vector3[] newVertices;
int[] newTriangles;
// Use this for initialization
void Start () {
//First, we create an array of vector3's. Each vector3 will
//represent one vertex in our mesh. Our shape will be a half
//cube (probably the simplest 3D shape we can make).
newVertices = new Vector3[4];
newVertices [0] = new Vector3 (0, 0, 0);
newVertices [1] = new Vector3 (1, 0, 0);
newVertices [2] = new Vector3 (0, 1, 0);
newVertices [3] = new Vector3 (0, 0, 1);
//Next, we create an array of integers which will represent
//triangles. Triangles are built by taking integers in groups of
//three, with each integer representing a vertex from our array of
//vertices. Note that the integers are in a certain order. The order
//of integers determines the normal of the triangle. In this case,
//connecting 021 faces the triangle out, while 012 faces the
//triangle in.
newTriangles = new int[12];
newTriangles[0] = 0;
newTriangles[1] = 2;
newTriangles[2] = 1;
newTriangles[3] = 0;
newTriangles[4] = 1;
newTriangles[5] = 3;
newTriangles[6] = 0;
newTriangles[7] = 3;
newTriangles[8] = 2;
newTriangles[9] = 1;
newTriangles[10] = 2;
newTriangles[11] = 3;
//We instantiate our mesh object and attach it to our mesh filter
mesh = new Mesh ();
meshFilter = gameObject.GetComponent<MeshFilter> ();
meshFilter.mesh = mesh;
//We assign our vertices and triangles to the mesh.
mesh.vertices = newVertices;
mesh.triangles = newTriangles;
}
}
Ta da! Your very own half-cube.
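One small addition, not part of the answer above: if the half-cube shows up black or lit oddly, recalculating normals (and bounds) after assigning the triangles usually helps:
//recompute lighting normals and culling bounds for the newly built mesh
mesh.RecalculateNormals();
mesh.RecalculateBounds();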
Hey everyone,
first time posting here, because I'm damn stuck...
The further away a mesh is from the origin at (0, 0, 0), the more it "jumps"/"flickers" when rotating or moving the camera. It's somehow hard to describe this effect: it is like the mesh is jittering/shivering/trembling a little bit and this trembling gets bigger and bigger as you gain distance to the origin.
For me, it begins to be observable at around 100000 units distance from the origin, so at (0, 0, 100000) for example. Neither the axis of the translation nor the type of mesh (a default mesh created from Mesh.Create... or a 3ds mesh imported with assimp.NET) has any influence on this effect. The value of the mesh's position doesn't change when this effect occurs; I checked this by logging the position.
If I'm not missing something, this narrows it down to two possibilities:
My camera code
The DirectX-Device
As for the DirectX-Device, this is my device initialization code:
private void InitializeDevice()
{
//Initialize D3D
_d3dObj = new D3D9.Direct3D();
//Set presentation parameters
_presParams = new D3D9.PresentParameters();
_presParams.Windowed = true;
_presParams.SwapEffect = D3D9.SwapEffect.Discard;
_presParams.AutoDepthStencilFormat = D3D9.Format.D16;
_presParams.EnableAutoDepthStencil = true;
_presParams.PresentationInterval = D3D9.PresentInterval.One;
_presParams.BackBufferFormat = _d3dObj.Adapters.DefaultAdapter.CurrentDisplayMode.Format;
_presParams.BackBufferHeight = _d3dObj.Adapters.DefaultAdapter.CurrentDisplayMode.Height;
_presParams.BackBufferWidth = _d3dObj.Adapters.DefaultAdapter.CurrentDisplayMode.Width;
//Set form width and height to current backbuffer width und height
this.Width = _presParams.BackBufferWidth;
this.Height = _presParams.BackBufferHeight;
//Checking device capabilities
D3D9.Capabilities caps = _d3dObj.GetDeviceCaps(0, D3D9.DeviceType.Hardware);
D3D9.CreateFlags devFlags = D3D9.CreateFlags.SoftwareVertexProcessing;
D3D9.DeviceType devType = D3D9.DeviceType.Reference;
//setting device flags according to device capabilities
if ((caps.VertexShaderVersion >= new Version(2, 0)) && (caps.PixelShaderVersion >= new Version(2, 0)))
{
//if device supports vertexshader and pixelshader >= 2.0
//then use the hardware device
devType = D3D9.DeviceType.Hardware;
if (caps.DeviceCaps.HasFlag(D3D9.DeviceCaps.HWTransformAndLight))
{
devFlags = D3D9.CreateFlags.HardwareVertexProcessing;
}
if (caps.DeviceCaps.HasFlag(D3D9.DeviceCaps.PureDevice))
{
devFlags |= D3D9.CreateFlags.PureDevice;
}
}
//initialize the device
_device = new D3D9.Device(_d3dObj, 0, devType, this.Handle, devFlags, _presParams);
//set culling
_device.SetRenderState(D3D9.RenderState.CullMode, D3D9.Cull.Counterclockwise);
//set texturewrapping (needed for seamless spheremapping)
_device.SetRenderState(D3D9.RenderState.Wrap0, D3D9.TextureWrapping.All);
//set lighting
_device.SetRenderState(D3D9.RenderState.Lighting, false);
//enabling the z-buffer
_device.SetRenderState(D3D9.RenderState.ZEnable, D3D9.ZBufferType.UseZBuffer);
//and setting write-access explicitly to true...
//I'm a little paranoid about this since I had to struggle for a few days with weirdly overlapping meshes
_device.SetRenderState(D3D9.RenderState.ZWriteEnable, true);
}
Am I missing a flag or renderstate? Is there something that could cause such a weird/distorted behaviour?
My camera class is based on Michael Silverman's C++ quaternion camera:
//every variable prefixed with an underscore is
//a private static variable initialized beforehand
public static class Camera
{
//gets called every frame
public static void Update()
{
if (_filter)
{
_filteredPos = Vector3.Lerp(_filteredPos, _pos, _filterAlpha);
_filteredRot = Quaternion.Slerp(_filteredRot, _rot, _filterAlpha);
}
_device.SetTransform(D3D9.TransformState.Projection, Matrix.PerspectiveFovLH(_fov, _screenAspect, _nearClippingPlane, _farClippingPlane));
_device.SetTransform(D3D9.TransformState.View, GetViewMatrix());
}
public static void Move(Vector3 delta)
{
_pos += delta;
}
public static void RotationYaw(float theta)
{
_rot = Quaternion.Multiply(Quaternion.RotationAxis(_up, -theta), _rot);
}
public static void RotationPitch(float theta)
{
_rot = Quaternion.Multiply(_rot, Quaternion.RotationAxis(_right, theta));
}
public static void SetTarget(Vector3 target, Vector3 up)
{
SetPositionAndTarget(_pos, target, up);
}
public static void SetPositionAndTarget(Vector3 position, Vector3 target, Vector3 upVec)
{
_pos = position;
Vector3 up, right, lookAt = target - _pos;
lookAt = Vector3.Normalize(lookAt);
right = Vector3.Cross(upVec, lookAt);
right = Vector3.Normalize(right);
up = Vector3.Cross(lookAt, right);
up = Vector3.Normalize(up);
SetAxis(lookAt, up, right);
}
public static void SetAxis(Vector3 lookAt, Vector3 up, Vector3 right)
{
Matrix rot = Matrix.Identity;
rot.M11 = right.X;
rot.M12 = up.X;
rot.M13 = lookAt.X;
rot.M21 = right.Y;
rot.M22 = up.Y;
rot.M23 = lookAt.Y;
rot.M31 = right.Z;
rot.M32 = up.Z;
rot.M33 = lookAt.Z;
_rot = Quaternion.RotationMatrix(rot);
}
public static void ViewScene(BoundingSphere sphere)
{
SetPositionAndTarget(sphere.Center - new Vector3((sphere.Radius + 150) / (float)Math.Sin(_fov / 2), 0, 0), sphere.Center, new Vector3(0, 1, 0));
}
public static Vector3 GetLookAt()
{
Matrix rot = Matrix.RotationQuaternion(_rot);
return new Vector3(rot.M13, rot.M23, rot.M33);
}
public static Vector3 GetRight()
{
Matrix rot = Matrix.RotationQuaternion(_rot);
return new Vector3(rot.M11, rot.M21, rot.M31);
}
public static Vector3 GetUp()
{
Matrix rot = Matrix.RotationQuaternion(_rot);
return new Vector3(rot.M12, rot.M22, rot.M32);
}
public static Matrix GetViewMatrix()
{
Matrix viewMatrix, translation = Matrix.Identity;
Vector3 position;
Quaternion rotation;
if (_filter)
{
position = _filteredPos;
rotation = _filteredRot;
}
else
{
position = _pos;
rotation = _rot;
}
translation = Matrix.Translation(-position.X, -position.Y, -position.Z);
viewMatrix = Matrix.Multiply(translation, Matrix.RotationQuaternion(rotation));
return viewMatrix;
}
}
Do you spot anything in the camera code which could cause this behaviour?
I just can't imagine that DirectX can't handle distances greater than 100k. I am supposed to render solar systems and I'm using 1 unit = 1 km, so the Earth, at its maximum distance from the sun, would be rendered at (0, 0, 152100000) (just as an example). This is becoming impossible if these "jumps" keep occurring.
Finally, I thought about scaling everything down so that a system never goes beyond 100k/-100k distance from the origin, but I think this won't work, because the "jittering" gets bigger as the distance from the origin gets bigger; scaling everything down would, I think, scale down the jumping behaviour too.
Just to not leave this question unanswered (credit to @jcoder, see the comments on the question):
The weird behaviour of the meshes comes from the floating-point precision of DirectX. The bigger your world gets, the less precision there is to calculate positions accurately.
There are two possibilities to solve this problem:
Downscaling the whole world
This may be problematic in a "galactic-style" world, where you have really big position offsets as well as really small ones (e.g. the distance from a planet to its sun is really big, but the distance of a spaceship in orbit around a planet may be really small).
Dividing the world into smaller chunks
This way you either have to express all positions relative to something else (see stackoverflow.com/questions/1930421) or make multiple worlds and somehow move between them.
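A minimal sketch (my addition, not from the original answer) of the "express all positions relative to something else" idea, often called camera-relative rendering, assuming SlimDX-style Matrix/Vector3 types like those used in the question:
//Build the world matrix relative to the camera instead of the true world
//origin, so the float values the GPU sees stay small even in a
//solar-system-sized scene; the view matrix is then built as if the camera
//sat at the origin (rotation only, no translation)
Matrix GetCameraRelativeWorld(Vector3 objectWorldPos, Vector3 cameraWorldPos, Matrix objectRotationScale)
{
Vector3 relativePos = objectWorldPos - cameraWorldPos; //ideally computed in double precision
return objectRotationScale * Matrix.Translation(relativePos);
}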