I want to draw a cube in the scene. I created a project in Unity3D; it has a main camera and a directional light. I added an empty GameObject using the Unity GUI, then created a .cs file and attached it to the GameObject. The content of the C# file is:
using UnityEngine;
using System.Collections;
using System.Collections.Generic;
/**
* Simple example of creating a procedural 6 sided cube
*/
[RequireComponent (typeof (MeshFilter))]
[RequireComponent (typeof (MeshRenderer))]
public class test : MonoBehaviour {
    void Start () {
        MeshFilter meshFilter = gameObject.GetComponent<MeshFilter>();
        Mesh mesh = new Mesh();
        meshFilter.mesh = mesh;
        mesh.vertices = new Vector3[]{
            // face 1 (xy plane, z=0)
            new Vector3(0,0,0),
            new Vector3(1,0,0),
            new Vector3(1,1,0),
            new Vector3(0,1,0),
            // face 2 (zy plane, x=1)
            new Vector3(1,0,0),
            new Vector3(1,0,1),
            new Vector3(1,1,1),
            new Vector3(1,1,0),
            // face 3 (xy plane, z=1)
            new Vector3(1,0,1),
            new Vector3(0,0,1),
            new Vector3(0,1,1),
            new Vector3(1,1,1),
            // face 4 (zy plane, x=0)
            new Vector3(0,0,1),
            new Vector3(0,0,0),
            new Vector3(0,1,0),
            new Vector3(0,1,1),
            // face 5 (zx plane, y=1)
            new Vector3(0,1,0),
            new Vector3(1,1,0),
            new Vector3(1,1,1),
            new Vector3(0,1,1),
            // face 6 (zx plane, y=0)
            new Vector3(0,0,0),
            new Vector3(0,0,1),
            new Vector3(1,0,1),
            new Vector3(1,0,0),
        };
        int faces = 6; // here a face = 2 triangles
        List<int> triangles = new List<int>();
        List<Vector2> uvs = new List<Vector2>();
        for (int i = 0; i < faces; i++) {
            int triangleOffset = i * 4;
            triangles.Add(0 + triangleOffset);
            triangles.Add(2 + triangleOffset);
            triangles.Add(1 + triangleOffset);
            triangles.Add(0 + triangleOffset);
            triangles.Add(3 + triangleOffset);
            triangles.Add(2 + triangleOffset);
            // same uvs for all faces
            uvs.Add(new Vector2(0,0));
            uvs.Add(new Vector2(1,0));
            uvs.Add(new Vector2(1,1));
            uvs.Add(new Vector2(0,1));
        }
        mesh.triangles = triangles.ToArray();
        mesh.uv = uvs.ToArray();
        GetComponent<Renderer>().material = new Material(Shader.Find("Diffuse"));
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
        mesh.Optimize();
    }
}
This code works. Now I want to draw a circle with a perspective effect around this cube, using the SetPixel function. How can I do this? I want to create a view like the one below:
1 - Instead of creating a hand-made cube, why don't you use a primitive cube and just set its size?
2 - One of the problems is that your "D" (the distance from the center to the boundary) differs depending on the box's rotation.
For example, if your box is at 0º, D in the 0º direction will be 0.5 (from the center to the boundary of a 1-unit cube). For the same box, D in the 45º direction will be about 0.7 (the hypotenuse).
Even if you try to calculate the boundaries first, those boundaries will still differ depending on rotation: 0º = 0.5, 45º ≈ 0.7, and so on (the same problem).
The simplest approach (that I can think of right now) is:
Create a primitive cube and set the desired scale.
Create a plane that will represent the circle and give it a transparent texture of a circle.
Add the plane (circle) as a child of the cube; when you resize the cube, the circle will resize with it.
Sorry for the grammar, English is not my native language.
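A minimal sketch of this approach, generating the circle texture with SetPixel as the question asked. The texture size, ring thickness, scales, and the "Unlit/Transparent" shader choice are all assumptions of this sketch, not part of the original answer:
using UnityEngine;
public class CubeWithCircle : MonoBehaviour {
    void Start () {
        // Primitive cube instead of a hand-built mesh.
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.localScale = Vector3.one * 2f;
        // Transparent texture with a white circle outline, drawn via SetPixel.
        int size = 256;
        Texture2D tex = new Texture2D(size, size, TextureFormat.ARGB32, false);
        float radius = size * 0.45f, thickness = 4f, center = size / 2f;
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                float d = Mathf.Sqrt((x - center) * (x - center) + (y - center) * (y - center));
                tex.SetPixel(x, y, Mathf.Abs(d - radius) < thickness ? Color.white : Color.clear);
            }
        }
        tex.Apply();
        // Quad child: lies flat around the cube and rotates/scales with it.
        GameObject circle = GameObject.CreatePrimitive(PrimitiveType.Quad);
        circle.transform.SetParent(cube.transform, false);
        circle.transform.localRotation = Quaternion.Euler(90, 0, 0);
        circle.transform.localScale = Vector3.one * 1.5f; // larger than the cube
        Material mat = new Material(Shader.Find("Unlit/Transparent"));
        mat.mainTexture = tex;
        circle.GetComponent<Renderer>().material = mat;
    }
}
Because the quad is a child of the cube, a perspective camera foreshortens the circle exactly as it does the cube, which produces the perspective effect without any per-pixel screen math.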
Related
I'm making a top-down shooter game! Every 3-5 seconds enemies spawn at random positions. I've managed this with Random.Range and pixel coordinates, but sometimes enemies spawn near the player or at the player's exact position. Is there any way to keep bots from spawning near the player? This problem is critical for my project, because if an enemy even touches the player the game is over. Here's my script:
IEnumerator SpawnNext()
{
    float randX = Random.Range(-8.638889f, 8.638889f);
    float randY = Random.Range(-4.5f, 4.75f);
    GameObject plt = Instantiate(enemy);
    plt.transform.position = new Vector3(randX, randY, 0);
    yield return new WaitForSeconds(1f);
}
Use a circle-centric random value: normalize a random point inside a circle to get a direction, then multiply it by a random distance from a range. Adding that offset to the player's position gives the desired result.
var direction = Random.insideUnitCircle.normalized;
var distance = Random.Range(7f, 15f); // e.g. 7 is min and 15 is max (the float overload includes the max)
var pos = direction * distance;
GameObject plt = Instantiate(enemy);
plt.transform.position = player.transform.position + new Vector3(pos.x, pos.y);
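An alternative sketch that keeps the question's original arena bounds and simply rejects positions too close to the player (the 3-unit minDistance is an assumption):
// Resample until the point is far enough from the player; the bounds
// are the ones from the original script.
Vector3 pos;
float minDistance = 3f;
do {
    pos = new Vector3(Random.Range(-8.638889f, 8.638889f),
                      Random.Range(-4.5f, 4.75f), 0);
} while (Vector3.Distance(pos, player.transform.position) < minDistance);
GameObject plt = Instantiate(enemy);
plt.transform.position = pos;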
I want to add a line to my game separating the left and right regions. I added a GameObject to my Canvas and attached the following script to it:
public class DrawLine : MonoBehaviour
{
    public Color c1 = Color.red;
    public Color c2 = Color.white;
    Vector3 topPoint;
    Vector3 bottomPoint;

    // Start is called before the first frame update
    void Start()
    {
        topPoint = new Vector3(Screen.width / 4, Screen.height);
        bottomPoint = new Vector3(Screen.width / 4, 0);
        LineRenderer lineRenderer = gameObject.AddComponent<LineRenderer>();
        lineRenderer.material = new Material(Shader.Find("Sprites/Default"));
        lineRenderer.widthMultiplier = 2.2f;
        lineRenderer.positionCount = 40;
        // A simple 2 color gradient with a fixed alpha of 1.0f.
        float alpha = 1.0f;
        Gradient gradient = new Gradient();
        gradient.SetKeys(
            new GradientColorKey[] { new GradientColorKey(c1, 0.0f), new GradientColorKey(c2, 1.0f) },
            new GradientAlphaKey[] { new GradientAlphaKey(alpha, 0.0f), new GradientAlphaKey(alpha, 1.0f) }
        );
        lineRenderer.colorGradient = gradient;
        lineRenderer.SetPosition(0, topPoint);
        lineRenderer.SetPosition(1, bottomPoint);
    }
}
When I run the game, a LineRenderer is added to the GameObject with the required colours and width, but no line is drawn.
What am I doing wrong?
Thanks
Just tested the code you posted and it's working fine. I was originally thinking the line was being drawn too thin and was going to recommend enlarging it at the start and end points, but after testing I do see the line drawn in both the editor and the game.
If the LineRenderer component is being created and you are not able to see the line in either the editor or the game view, then post an image of the inspector for the GameObject that has the LineRenderer component at runtime.
Response to your comments
The LineRenderer draws a line at runtime in 3D space, based on the positions you pass in your SetPosition calls.
When talking about a Canvas, I assume you are talking about a UI Canvas.
The LineRenderer draws based on the positions you set, not based on the Canvas, unless you program it that way.
So, if your camera is not positioned where it can see the line, the line won't be displayed in the game view.
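A minimal sketch of one fix: convert the screen-space points into world space in front of the camera. The 10-unit zDistance and the use of Camera.main are assumptions of this sketch:
// Place the line zDistance units in front of the main camera by
// converting the screen coordinates to world coordinates.
float zDistance = 10f;
Camera cam = Camera.main;
Vector3 top = cam.ScreenToWorldPoint(new Vector3(Screen.width / 4f, Screen.height, zDistance));
Vector3 bottom = cam.ScreenToWorldPoint(new Vector3(Screen.width / 4f, 0f, zDistance));
lineRenderer.positionCount = 2; // only two points are ever set
lineRenderer.SetPosition(0, top);
lineRenderer.SetPosition(1, bottom);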
I would like to find the best way (or at least a working way) to display a custom 2D shape in my UI from the positions of all its points.
I have tried:
- creating a mesh (using a triangulator script to convert my (x,y) points to vertices + triangles), with a MeshRenderer and a MeshFilter. It seems to work, but it is not rendered in my UI even though its parent is a canvas (Screen Space - Overlay).
- creating a CanvasRenderer and setting my mesh on it using SetMesh, but nothing seems to be displayed. Here is my code, in case this is the right way to do it:
GameObject shapeChild = new GameObject();
var shapeChildMeshRenderer = shapeChild.AddComponent<CanvasRenderer>();
shapeChildMeshRenderer.SetMaterial(material, null);

// Use the triangulator to get indices for creating triangles
Vector2[] vertices2D = part.Points;
Triangulator tr = new Triangulator(vertices2D);
int[] indices = tr.Triangulate();

// Create the Vector3 vertices
Vector3[] vertices = new Vector3[vertices2D.Length];
for (int i = 0; i < vertices.Length; i++)
{
    vertices[i] = new Vector3(vertices2D[i].x, vertices2D[i].y, 0);
}

// Create the mesh
Mesh _mesh = new Mesh();
_mesh.vertices = vertices;
_mesh.triangles = indices;
_mesh.RecalculateNormals();
_mesh.RecalculateBounds();
shapeChildMeshRenderer.SetMesh(_mesh);
shapeChild.transform.SetParent(this.transform);
I would like to keep the Screen Space - Overlay setting.
What would be the best way to render my polygon in my overlay UI?
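For reference, the usual way to draw custom geometry under a Screen Space - Overlay canvas is to subclass Graphic (or MaskableGraphic) and fill the mesh in OnPopulateMesh; the UI system then drives the CanvasRenderer for you. A hedged sketch, where the simple fan triangulation only works for convex shapes (your Triangulator output would replace it):
using UnityEngine;
using UnityEngine.UI;

public class PolygonGraphic : MaskableGraphic
{
    public Vector2[] points; // polygon vertices in local RectTransform space

    protected override void OnPopulateMesh(VertexHelper vh)
    {
        vh.Clear();
        if (points == null || points.Length < 3) return;
        foreach (var p in points)
            vh.AddVert(p, color, Vector2.zero);
        // Fan triangulation: fine for convex polygons.
        for (int i = 1; i < points.Length - 1; i++)
            vh.AddTriangle(0, i, i + 1);
    }
}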
I have some code which detects collisions:
public bool DetectCollision(ContentControl ctrl1, ContentControl ctrl2)
{
    Rect ctrl1Rect = new Rect(
        new Point(Convert.ToDouble(ctrl1.GetValue(Canvas.LeftProperty)),
                  Convert.ToDouble(ctrl1.GetValue(Canvas.TopProperty))),
        new Point(Convert.ToDouble(ctrl1.GetValue(Canvas.LeftProperty)) + ctrl1.ActualWidth,
                  Convert.ToDouble(ctrl1.GetValue(Canvas.TopProperty)) + ctrl1.ActualHeight));
    Rect ctrl2Rect = new Rect(
        new Point(Convert.ToDouble(ctrl2.GetValue(Canvas.LeftProperty)),
                  Convert.ToDouble(ctrl2.GetValue(Canvas.TopProperty))),
        new Point(Convert.ToDouble(ctrl2.GetValue(Canvas.LeftProperty)) + ctrl2.ActualWidth,
                  Convert.ToDouble(ctrl2.GetValue(Canvas.TopProperty)) + ctrl2.ActualHeight));
    ctrl1Rect.Intersect(ctrl2Rect); // mutates ctrl1Rect; becomes Rect.Empty if no overlap
    return ctrl1Rect != Rect.Empty;
}
It detects when two rectangles overlap. The ContentControl parameters contain images. I want to detect whether those images intersect, not the rectangles. The following image shows what I want:
Then you are not looking for rectangular collision detection but for pixel-level collision detection, and that is going to be much more processing-intensive.
On top of the rectangular collision detection that you already have implemented, you will have to examine each pixel of both images in the overlapping rectangular region.
In the simplest case, if two overlapping pixels both have a non-transparent color, you have a collision.
If you want to refine things, you may add thresholds, such as requiring a certain percentage of overlapping pixels to trigger a collision, or testing the combined alpha level of the pixels against a threshold instead of any nonzero value.
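A hedged sketch of that per-pixel test, assuming both images are available as BitmapSource and that region1/region2 are the same-sized overlap rectangle expressed in each image's local pixel coordinates:
// Returns true if any pair of overlapping pixels is non-transparent in
// both images. Converts to Bgra32 so the alpha byte is at a known offset.
public static bool PixelsCollide(BitmapSource img1, BitmapSource img2,
                                 Int32Rect region1, Int32Rect region2)
{
    int w = region1.Width, h = region1.Height;
    byte[] px1 = new byte[w * h * 4];
    byte[] px2 = new byte[w * h * 4];
    new FormatConvertedBitmap(img1, PixelFormats.Bgra32, null, 0).CopyPixels(region1, px1, w * 4, 0);
    new FormatConvertedBitmap(img2, PixelFormats.Bgra32, null, 0).CopyPixels(region2, px2, w * 4, 0);
    for (int i = 3; i < px1.Length; i += 4) // every 4th byte is alpha in BGRA
        if (px1[i] > 0 && px2[i] > 0)
            return true;
    return false;
}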
You can try converting your images to Geometry objects and then checking whether those geometries collide. But the images need to be vector images; to convert an image to a vector image, you can check this open-source project.
public static Point[] GetIntersectionPoints(Geometry g1, Geometry g2)
{
    Geometry og1 = g1.GetWidenedPathGeometry(new Pen(Brushes.Black, 1.0));
    Geometry og2 = g2.GetWidenedPathGeometry(new Pen(Brushes.Black, 1.0));
    CombinedGeometry cg = new CombinedGeometry(GeometryCombineMode.Intersect, og1, og2);
    PathGeometry pg = cg.GetFlattenedPathGeometry();
    Point[] result = new Point[pg.Figures.Count];
    for (int i = 0; i < pg.Figures.Count; i++)
    {
        Rect fig = new PathGeometry(new PathFigure[] { pg.Figures[i] }).Bounds;
        result[i] = new Point(fig.Left + fig.Width / 2.0, fig.Top + fig.Height / 2.0);
    }
    return result;
}
I am new to 3D graphics and also to WPF, and I need to combine the two in my current project. I add points and normals to a MeshGeometry3D, add the MeshGeometry3D to a GeometryModel3D, add the GeometryModel3D to a ModelVisual3D, and finally add the ModelVisual3D to the Viewport3D. Now if I need to rotate, I perform the required Transform on either the GeometryModel3D or the ModelVisual3D and add it to the Viewport3D again. I'm running into a problem:
objViewPort3D.Remove(objModelVisual3D);
objGeometryModel3D.Transform = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), angle += 15));
objModelVisual3D.Content = objGeometryModel3D;
objViewPort3D.Children.Add(objModelVisual3D);
To rotate it by 15 degrees each time, why must I do angle += 15 and not just 15? It seems the stored model is not changed by the Transform operation; the transformation is only applied when the Viewport3D renders. I want the transformation to actually change the coordinates in the stored MeshGeometry3D object, so that the next transform applies to the previously transformed model rather than the original one. How do I obtain this behaviour?
I think you can use an animation.
Some pseudo-code:
angle = 0
function onClick:
    new_angle = angle + 30
    Animate(angle, new_angle)
    angle = new_angle
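A hedged C# sketch of that idea with WPF's animation system, assuming the model's rotation is an AxisAngleRotation3D you keep a reference to (named rotation here; the 300 ms duration is also an assumption):
// Animate from the current angle to the target; the accumulated angle
// lives on the AxisAngleRotation3D, so no manual "angle += 15" bookkeeping.
double newAngle = rotation.Angle + 30;
var anim = new DoubleAnimation(rotation.Angle, newAngle, TimeSpan.FromMilliseconds(300));
rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, anim);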
You have to do angle += 15 because you're applying a new RotateTransform3D each time.
This might help (keep a reference to the AxisAngleRotation3D and increment its Angle):
public RotateTransform3D MyRotationTransform { get; set; }
private AxisAngleRotation3D _myRotation;
...
// constructor
public MyClass()
{
    _myRotation = new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0);
    MyRotationTransform = new RotateTransform3D(_myRotation);
}
// in your method
_myRotation.Angle += 15; // accumulates on the stored rotation
objGeometryModel3D.Transform = MyRotationTransform;
Correct, the vertices of the mesh are not changed by the Transform operation. Instead, the Transform property defines the world transform of the mesh during rendering.
In 3D graphics, the world transform maps the points of the mesh from object space to world space while the object is rendered.
(Image from World, View and Projection Matrix Unveiled)
It's much faster to set the world transform and let the renderer apply one transform while drawing than to transform every vertex of the mesh yourself, which is what you are asking for.
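That said, if you really do want to bake a rotation into the stored geometry, a hedged sketch (Transform3D can transform points directly; this rewrites the mesh's Positions in place):
// Bakes a transform into the mesh so the next rotation starts from the
// already-rotated coordinates. Usually slower than letting the renderer
// apply the world transform at draw time.
public static void BakeTransform(MeshGeometry3D mesh, Transform3D transform)
{
    Point3DCollection positions = mesh.Positions;
    for (int i = 0; i < positions.Count; i++)
        positions[i] = transform.Transform(positions[i]);
}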