Best way to display a 2D shape from x,y positions (Unity, C#)

I would like to find the best way (or at least a working way) to display a custom 2D shape in my UI from the positions of all its points.
I have tried:
- creating a mesh (using a triangulator script to convert my (x,y) points to vertices + triangles) with a MeshRenderer and a MeshFilter. It seems to work, but it is not rendered in my UI even though its parent is a Canvas (Screen Space - Overlay);
- creating a CanvasRenderer and setting my mesh on it using SetMesh, but nothing seems to be displayed. Here is my code, in case this is the right way to do it:
GameObject shapeChild = new GameObject();
var shapeChildMeshRenderer = shapeChild.AddComponent<CanvasRenderer>();
shapeChildMeshRenderer.SetMaterial(material, null);
// Use the triangulator to get indices for creating triangles
Vector2[] vertices2D = part.Points;
Triangulator tr = new Triangulator(vertices2D);
int[] indices = tr.Triangulate();
// Create the Vector3 vertices
Vector3[] vertices = new Vector3[vertices2D.Length];
for (int i = 0; i < vertices.Length; i++)
{
    vertices[i] = new Vector3(vertices2D[i].x, vertices2D[i].y, 0);
}
// Create the mesh
Mesh _mesh = new Mesh();
_mesh.vertices = vertices;
_mesh.triangles = indices;
_mesh.RecalculateNormals();
_mesh.RecalculateBounds();
shapeChildMeshRenderer.SetMesh(_mesh);
shapeChild.transform.SetParent(this.transform);
I would like to keep the Screen Space - Overlay setting.
What would be the best way to render my polygon in my overlay UI?
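One approach commonly used for custom UI geometry is to subclass UnityEngine.UI.Graphic and fill the mesh in OnPopulateMesh, which keeps the shape inside the normal Canvas pipeline (including Screen Space - Overlay). A minimal sketch, assuming the same Triangulator script as above:
using UnityEngine;
using UnityEngine.UI;

// Draws an arbitrary polygon as a UI element on the Canvas.
public class UIPolygon : Graphic
{
    public Vector2[] points; // polygon outline, in local RectTransform space

    protected override void OnPopulateMesh(VertexHelper vh)
    {
        vh.Clear();
        if (points == null || points.Length < 3)
            return;

        // One UI vertex per polygon point, tinted with the Graphic's color.
        foreach (Vector2 p in points)
            vh.AddVert(p, color, Vector2.zero);

        // Reuse the triangulator to turn the outline into triangles.
        int[] indices = new Triangulator(points).Triangulate();
        for (int i = 0; i < indices.Length; i += 3)
            vh.AddTriangle(indices[i], indices[i + 1], indices[i + 2]);
    }
}
Attached to a child of the Canvas, this renders with the usual UI material; after changing points at runtime, call SetVerticesDirty() so the mesh is rebuilt.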

Related

Unity - Colliding canvas elements with viewport elements using WorldToScreenPoint per vertex

I wish to make a UI element collide with a viewport element. To achieve this, I figured I would take the EdgeCollider2D collision data of the viewport element, convert each of its vertices to screen space, then assign the points to a canvas element which also has an EdgeCollider2D on it.
The problem is that the generated collision is way off, and I can't figure out why that is.
Here's my short code:
[RequireComponent(typeof(EdgeCollider2D))]
public class ScreenSpaceEdgeCollisionGenerator : MonoBehaviour, IScreenSpaceCollisionGenerator {

    public EdgeCollider2D target;

    private void Update() {
        GenerateCollision();
    }

    public void GenerateCollision() {
        var mEdge = GetComponent<EdgeCollider2D>();
        var viewportPts = target.points;
        Vector2[] n_Pts = new Vector2[viewportPts.Length];
        for (int i = 0; i < viewportPts.Length; i++) {
            n_Pts[i] = Camera.main.WorldToScreenPoint(viewportPts[i]);
        }
        mEdge.points = n_Pts;
    }
}
Here's the result:
The first image shows how far off the collision data is.
In the second image, the red rect is the viewport element, while the white square is the UI element. I use only one camera, the main camera, the same one used for the point transformation. The viewport element's EdgeCollider2D is placed on top of the red rect, as it should be.
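For later readers, a hedged guess at the cause: EdgeCollider2D.points are expressed in the collider's local space, so they usually need converting to world space before the screen-space projection, roughly:
// Hypothetical fix: collider points are local, so convert them to world
// space before projecting to screen space.
for (int i = 0; i < viewportPts.Length; i++) {
    Vector3 world = target.transform.TransformPoint(viewportPts[i]);
    n_Pts[i] = Camera.main.WorldToScreenPoint(world);
}
Note also that the receiving collider expects points in its own local space, so raw screen coordinates only line up when the canvas is Screen Space - Overlay at its default scale.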

Apply Textures to Only Certain Faces of a Cube in Unity

I am trying to make realistic-looking procedurally generated buildings. So far I have made a texture generation program for them. I would like to apply that texture to a simple cube in Unity; however, I don't want to apply the texture to all of the faces. Currently, when I apply the texture to the cube's material, it applies the same texture to all of the faces, and on some of the faces the texture is upside down. Do you recommend that I make plane objects and apply the textures to each of those (and form a cube that way)? I know this would work, but is it efficient at a large scale? Or is there a way to apply different textures to individual faces of a cube in C#?
Considering that you are trying to create buildings, you should procedurally generate your own 'cube'/building mesh data.
For example:
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    public Vector3[] newVertices;
    public Vector2[] newUV;
    public int[] newTriangles;

    void Start() {
        Mesh mesh = new Mesh();
        mesh.vertices = newVertices;
        mesh.uv = newUV;
        mesh.triangles = newTriangles;
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
Then you populate the vertices and tris with data.
For example, you could create a small cube centered on the origin with:
int size = 1;
newVertices = new Vector3[] {
    new Vector3(-size, -size, -size),
    new Vector3(-size,  size, -size),
    new Vector3( size,  size, -size),
    new Vector3( size, -size, -size),
    new Vector3( size, -size,  size),
    new Vector3( size,  size,  size),
    new Vector3(-size,  size,  size),
    new Vector3(-size, -size,  size)
};
Then, because you only want to render the texture on one of the mesh's faces:
newUV = new Vector2[] {
    new Vector2(0, 0), // vertex 0: bottom-left of the textured face
    new Vector2(0, 1), // vertex 1: top-left
    new Vector2(1, 1), // vertex 2: top-right
    new Vector2(1, 0), // vertex 3: bottom-right
    new Vector2(0, 0), // vertices 4-7: collapsed to a single texel
    new Vector2(0, 0),
    new Vector2(0, 0),
    new Vector2(0, 0)
};
Note that by only allocating meaningful UV coordinates to some of the vertices, this will effectively place your desired texture on one of the faces; then, depending on how you allocate the tris, the rest of the UVs can be altered as you see fit to appear as though they are untextured.
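For completeness, a hedged sketch of a matching triangle layout, assuming the vertex order above (the first four vertices form the z = -size face):
// Illustrative triangles for the textured front face only (vertices 0-3),
// wound clockwise so the face is visible from outside the cube.
newTriangles = new int[] {
    0, 1, 2,  // bottom-left, top-left, top-right
    0, 2, 3   // bottom-left, top-right, bottom-right
};
// ...the remaining five faces would be appended here in the same way.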
Please note that I wrote this without an IDE, so there may be some syntax errors. Also, this isn't the most straightforward process, but I promise that it far exceeds the use of a series of quads as a building, because with this you can make a whole range of shapes 'easily'.
Resources:
http://docs.unity3d.com/ScriptReference/Mesh.html
https://msdn.microsoft.com/en-us/library/windows/desktop/bb205592%28v=vs.85%29.aspx
http://docs.unity3d.com/Manual/GeneratingMeshGeometryProcedurally.html
AFAIK Unity does not natively support this.
There are, however, some easy ways to work around it. Your example of using a plane is a good thought, but using a quad would make more sense. Or, even easier, create a multi-faced cube as a model and export it as, for example, an .fbx.

How to draw a circle around an object in Unity3D?

I want to draw a cube in the scene. I created a project in Unity3D. It has a main camera and a directional light. I added an empty GameObject using the Unity GUI, created a .cs file, and attached it to the GameObject. The content of the C# file is:
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

/**
 * Simple example of creating a procedural 6 sided cube
 */
[RequireComponent (typeof (MeshFilter))]
[RequireComponent (typeof (MeshRenderer))]
public class test : MonoBehaviour {

    void Start () {
        MeshFilter meshFilter = gameObject.GetComponent<MeshFilter>();
        Mesh mesh = new Mesh ();
        meshFilter.mesh = mesh;

        mesh.vertices = new Vector3[]{
            // face 1 (xy plane, z=0)
            new Vector3(0,0,0),
            new Vector3(1,0,0),
            new Vector3(1,1,0),
            new Vector3(0,1,0),
            // face 2 (zy plane, x=1)
            new Vector3(1,0,0),
            new Vector3(1,0,1),
            new Vector3(1,1,1),
            new Vector3(1,1,0),
            // face 3 (xy plane, z=1)
            new Vector3(1,0,1),
            new Vector3(0,0,1),
            new Vector3(0,1,1),
            new Vector3(1,1,1),
            // face 4 (zy plane, x=0)
            new Vector3(0,0,1),
            new Vector3(0,0,0),
            new Vector3(0,1,0),
            new Vector3(0,1,1),
            // face 5 (zx plane, y=1)
            new Vector3(0,1,0),
            new Vector3(1,1,0),
            new Vector3(1,1,1),
            new Vector3(0,1,1),
            // face 6 (zx plane, y=0)
            new Vector3(0,0,0),
            new Vector3(0,0,1),
            new Vector3(1,0,1),
            new Vector3(1,0,0),
        };

        int faces = 6; // here a face = 2 triangles
        List<int> triangles = new List<int>();
        List<Vector2> uvs = new List<Vector2>();

        for (int i = 0; i < faces; i++) {
            int triangleOffset = i * 4;
            triangles.Add(0 + triangleOffset);
            triangles.Add(2 + triangleOffset);
            triangles.Add(1 + triangleOffset);

            triangles.Add(0 + triangleOffset);
            triangles.Add(3 + triangleOffset);
            triangles.Add(2 + triangleOffset);

            // same uvs for all faces
            uvs.Add(new Vector2(0,0));
            uvs.Add(new Vector2(1,0));
            uvs.Add(new Vector2(1,1));
            uvs.Add(new Vector2(0,1));
        }

        mesh.triangles = triangles.ToArray();
        mesh.uv = uvs.ToArray();

        GetComponent<Renderer>().material = new Material(Shader.Find("Diffuse"));

        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
        mesh.Optimize();
    }
}
This code works. Now I want to draw a circle with a perspective effect around this cube, using the SetPixel function. How can I do this? I want to create a view like the one below.
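For the SetPixel part specifically, a minimal sketch of stamping a circle outline into a Texture2D might look like this (texture, a Texture2D; center, a Vector2Int; and radius, a float, are all assumed to exist already; this plots a flat circle, not a perspective one):
// Plot a circle outline into an existing Texture2D, pixel by pixel.
for (float t = 0f; t < 2f * Mathf.PI; t += 0.01f) {
    int x = center.x + Mathf.RoundToInt(radius * Mathf.Cos(t));
    int y = center.y + Mathf.RoundToInt(radius * Mathf.Sin(t));
    texture.SetPixel(x, y, Color.white);
}
texture.Apply(); // upload the modified pixels to the GPU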
1 - Instead of creating a hand-made cube, why don't you use a primitive box and just set the size?
2 - One of the problems is that your "D" can be different depending on the box rotation.
For example, if your box is at 0°, the D in the 0° direction will be 0.5 (from the center to the boundary of a 1-unit cube). The same box, if you calculate the D in the 45° direction, will give 0.7 (the hypotenuse).
Even if you try to calculate the boundaries first, these boundaries will differ depending on rotation: 0° = 0.5, 45° = 0.7, and so on (the same problem).
The simplest approach (that I can think of now) is:
Create a primitive cube and set the desired scale.
Create a plane that will represent the circle and add a transparent texture of a circle to it.
Add the plane (circle) as a child of the box; while you resize the box, the circle will resize with it. A sketch of this follows below.
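A hedged sketch of those steps (the scale and offset values are illustrative, and circleMaterial is assumed to hold the transparent circle texture):
// Primitive cube, scaled as desired.
GameObject box = GameObject.CreatePrimitive(PrimitiveType.Cube);
box.transform.localScale = new Vector3(2f, 2f, 2f);

// Quad lying flat under the cube, carrying the transparent circle texture.
GameObject circle = GameObject.CreatePrimitive(PrimitiveType.Quad);
circle.transform.SetParent(box.transform, false); // resizes with the box
circle.transform.localRotation = Quaternion.Euler(90f, 0f, 0f); // face up
circle.transform.localPosition = new Vector3(0f, -0.5f, 0f);    // at the base
circle.transform.localScale = new Vector3(1.5f, 1.5f, 1f);      // wider than the box
circle.GetComponent<Renderer>().material = circleMaterial;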
Sorry for the grammar, English is not my native language.

WPF Image Collision Detection

I have some code which detects collisions:
public bool DetectCollision(ContentControl ctrl1, ContentControl ctrl2)
{
    Rect ctrl1Rect = new Rect(
        new Point(Convert.ToDouble(ctrl1.GetValue(Canvas.LeftProperty)),
                  Convert.ToDouble(ctrl1.GetValue(Canvas.TopProperty))),
        new Point(Convert.ToDouble(ctrl1.GetValue(Canvas.LeftProperty)) + ctrl1.ActualWidth,
                  Convert.ToDouble(ctrl1.GetValue(Canvas.TopProperty)) + ctrl1.ActualHeight));

    Rect ctrl2Rect = new Rect(
        new Point(Convert.ToDouble(ctrl2.GetValue(Canvas.LeftProperty)),
                  Convert.ToDouble(ctrl2.GetValue(Canvas.TopProperty))),
        new Point(Convert.ToDouble(ctrl2.GetValue(Canvas.LeftProperty)) + ctrl2.ActualWidth,
                  Convert.ToDouble(ctrl2.GetValue(Canvas.TopProperty)) + ctrl2.ActualHeight));

    ctrl1Rect.Intersect(ctrl2Rect); // becomes Rect.Empty if they don't overlap
    return !(ctrl1Rect == Rect.Empty);
}
It detects when two rectangles overlap. The given ContentControl parameters contain images. I want to be able to detect whether those images intersect, not the rectangles. The following images show what I want:
Then you are not looking for rectangular collision detection but actually pixel-level collision detection, and that is going to be much more processing-intensive.
On top of the rectangular collision detection that you already have implemented, you will have to examine each pixel of both images in the overlapping rectangular region.
In the simplest case, if two overlapping pixels both have a non-transparent color, then you have a collision.
If you want to go further, you may want to add thresholds, such as requiring a percentage of overlapping pixels in order to trigger a collision, or setting a threshold for the combined alpha level of the pixels instead of using any non-zero value.
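A hedged sketch of that per-pixel test, assuming both images are BitmapSources in Bgra32 format already scaled to their on-screen rectangles (PixelsCollide and AlphaAt are hypothetical helpers, not framework methods):
public static bool PixelsCollide(BitmapSource img1, Rect r1, BitmapSource img2, Rect r2)
{
    Rect overlap = Rect.Intersect(r1, r2);
    if (overlap.IsEmpty) return false;

    for (int y = 0; y < (int)overlap.Height; y++)
    {
        for (int x = 0; x < (int)overlap.Width; x++)
        {
            // Map the overlap pixel back into each image's own coordinates.
            byte a1 = AlphaAt(img1, x + (int)(overlap.X - r1.X), y + (int)(overlap.Y - r1.Y));
            byte a2 = AlphaAt(img2, x + (int)(overlap.X - r2.X), y + (int)(overlap.Y - r2.Y));
            if (a1 > 0 && a2 > 0) return true; // both pixels non-transparent
        }
    }
    return false;
}

private static byte AlphaAt(BitmapSource img, int x, int y)
{
    // Read a single Bgra32 pixel; byte 3 is the alpha channel.
    byte[] pixel = new byte[4];
    img.CopyPixels(new Int32Rect(x, y, 1, 1), pixel, 4, 0);
    return pixel[3];
}
Reading pixels one at a time is slow; in practice you would copy each overlapping region into a byte array with a single CopyPixels call and scan the arrays.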
You can try converting your images to Geometry objects and then check whether they collide. But the images need to be vector images; to convert them to vector images, you can check this open source project.
public static Point[] GetIntersectionPoints(Geometry g1, Geometry g2)
{
    // Widen both geometries so that bare lines also produce an area to intersect.
    Geometry og1 = g1.GetWidenedPathGeometry(new Pen(Brushes.Black, 1.0));
    Geometry og2 = g2.GetWidenedPathGeometry(new Pen(Brushes.Black, 1.0));

    CombinedGeometry cg = new CombinedGeometry(GeometryCombineMode.Intersect, og1, og2);
    PathGeometry pg = cg.GetFlattenedPathGeometry();

    // Return the center of each intersection figure's bounding box.
    Point[] result = new Point[pg.Figures.Count];
    for (int i = 0; i < pg.Figures.Count; i++)
    {
        Rect fig = new PathGeometry(new PathFigure[] { pg.Figures[i] }).Bounds;
        result[i] = new Point(fig.Left + fig.Width / 2.0, fig.Top + fig.Height / 2.0);
    }
    return result;
}

3D graphics in WPF C# on Windows

I am new to 3D graphics and also to WPF, and I need to combine the two in my current project. I add points and normals to a MeshGeometry3D, add the MeshGeometry3D to a GeometryModel3D, add the GeometryModel3D to a ModelVisual3D, and finally add the ModelVisual3D to a Viewport3D. Now, if I need to rotate, I perform the required transform on either the GeometryModel3D or the ModelVisual3D and add it again to the Viewport3D. I'm running into a problem:
objViewPort3D.Children.Remove(objModelVisual3D);
objGeometryModel3D.Transform = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), angle += 15));
objModelVisual3D.Content = objGeometryModel3D;
objViewPort3D.Children.Add(objModelVisual3D);
To rotate it by 15 degrees more each time, why must I do angle += 15 and not just 15? It seems the stored model is not transformed by the Transform operation; the transformation is only applied when the Viewport3D displays it. I want the transformation to actually change the coordinates in the stored MeshGeometry3D object, so that the next transform operates on the previously transformed model and not on the original one. How do I obtain this behaviour?
I think you can use Animation.
Some pseudo-code:
angle = 0

function onClick:
    new_angle = angle + 30
    Animate(angle, new_angle)
    angle = new_angle
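For reference, a hedged C# sketch of that pseudo-code, using a DoubleAnimation on the Angle property (rotation is assumed to be the AxisAngleRotation3D that drives the model's RotateTransform3D):
// One-time setup: keep a reference to the rotation driving the transform.
var rotation = new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0);
objGeometryModel3D.Transform = new RotateTransform3D(rotation);

// On each click: animate from the current angle to the new one.
void OnClick()
{
    double newAngle = rotation.Angle + 30;
    var anim = new DoubleAnimation(newAngle, new Duration(TimeSpan.FromMilliseconds(300)));
    rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, anim);
}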
You have to do angle += 15 because you're applying a new RotateTransform3D each time.
This might help:
public RotateTransform3D MyRotationTransform { get; set; }
...
//constructor
public MyClass()
{
    MyRotationTransform = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0));
}

//in your method: increment the angle on the stored rotation
((AxisAngleRotation3D)MyRotationTransform.Rotation).Angle += 15;
objGeometryModel3D.Transform = MyRotationTransform;
Correct, the position of the mesh is not transformed by the "Transform" operation. Instead, the Transform property defines the world transform of the mesh during rendering.
In 3D graphics, the world transform transforms the points of the mesh from object space to world space during the render of the object.
(Image from World, View and Projection Matrix Unveiled)
It's much faster to set the world transform and let the renderer draw the mesh in a single transform than to transform each vertex of the mesh, as you want to do.
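That said, if you really do want the stored coordinates updated, as the question asks, a hedged sketch would be to rewrite the mesh's Positions directly instead of setting a Transform:
// Bake a 15-degree Y rotation into the mesh itself by moving every vertex.
var bake = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), 15));
var mesh = (MeshGeometry3D)objGeometryModel3D.Geometry;
for (int i = 0; i < mesh.Positions.Count; i++)
{
    mesh.Positions[i] = bake.Transform(mesh.Positions[i]);
}
// Normals would need the same rotation, and per-vertex updates on a live
// Point3DCollection are slow; this trades performance for baked-in data.
Successive calls then rotate the already-rotated mesh, which is exactly the behaviour asked about, at the cost noted in the answer above.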
