I'm planning to create a square inside a UI Image using a LineRenderer, but the size is so small that you need to zoom in. If it's outside the UI Image it works. Please see the attached image below.
The LineRenderer component is attached to the redkey1spawn object.
I tried derHugo's code and it works, but somehow it overshoots on the screen.
Your problem is that the LineRenderer works with coordinates in Unity Units.
A Screen Space - Overlay canvas uses pixel-size scaling, so its width and height in Unity units match the width and height of the window in pixels.
Since you add the 4 points
0, 0, 0
2, 0, 0
2, -2, 0
0, -2, 0
in world space, they actually end up on the canvas at e.g. 2 px, -2 px, 0 px → very small.
You could e.g. multiply the sizes by the height or width of the image/canvas.
private void Start()
{
    var lineRenderer = GetComponent<LineRenderer>();
    var image = GetComponentInParent<RectTransform>();

    // get the Unity worldspace coordinates of the image's corners
    // note: deriving the scale like this of course only works
    // if the image is never rotated!
    var worldCorners = new Vector3[4];
    image.GetWorldCorners(worldCorners);
    var imageWorldSize = new Vector2(
        Mathf.Abs(worldCorners[0].x - worldCorners[2].x),
        Mathf.Abs(worldCorners[1].y - worldCorners[3].y));

    var positions = new Vector3[lineRenderer.positionCount];
    var pointCount = lineRenderer.GetPositions(positions);
    for (var i = 0; i < pointCount; i++)
    {
        // scale the local points up to the image's worldspace size
        positions[i] *= imageWorldSize.x;
    }

    lineRenderer.SetPositions(positions);
}
Note, however, that I'm actually not even sure you will see this LineRenderer: since it is not a UI component, I'm pretty sure a Screen Space - Overlay canvas will render every Image etc. on top of it.
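If that does turn out to be the case, one possible workaround (not part of the answer above, just a sketch under that assumption) is to switch the canvas to Screen Space - Camera so that a non-UI renderer like the LineRenderer can be depth-sorted against the UI. The class name and the serialized references are purely illustrative:
using UnityEngine;

public class CanvasModeSwitcher : MonoBehaviour
{
    [SerializeField] private Canvas canvas;   // assumed reference, assign in the Inspector
    [SerializeField] private Camera uiCamera; // assumed reference, assign in the Inspector

    private void Awake()
    {
        // render the UI through a camera instead of as a pure overlay,
        // so 3D renderers can appear in front of it
        canvas.renderMode = RenderMode.ScreenSpaceCamera;
        canvas.worldCamera = uiCamera;
        canvas.planeDistance = 10f; // how far in front of the camera the UI plane sits
    }
}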
I recently started using Unity and C# and am currently working on a vertical 2D mobile game. I'm struggling to get my background to scale with different aspect ratios. The background sprite is 19.5:9 and the playable area is 16:9. At the moment the background scales to fit the top and bottom of the screen, but the idea is to have the background anchored to the sides and bottom and have the view extend upwards if needed (hence the tall sprite). Any ideas? Thanks in advance.
Here is the code I'm trying; it's attached to the camera.
public SpriteRenderer background;

private void Start()
{
    float screenRatio = (float)Screen.width / (float)Screen.height;
    float targetRatio = background.bounds.size.x / background.bounds.size.y;

    if (screenRatio >= targetRatio)
    {
        Camera.main.orthographicSize = background.bounds.size.y / 2;
    }
    else
    {
        float differenceInSize = targetRatio / screenRatio;
        Camera.main.orthographicSize = background.bounds.size.y / 2 * differenceInSize;
    }
}
My solution is to use a worldspace UI canvas.
Set the canvas to world space, place it at the desired depth in your scene (the dimensions don't matter since the script sets them), and either add an Image object as a child or add an Image component to the canvas object. Add your sprite as the source for the image, like so:
void Awake()
{
    RectTransform rt = GetComponent<RectTransform>();
    rt.position = new Vector3(0, 0, rt.position.z);

    float camHeight = Camera.main.orthographicSize * 2;
    rt.SetSizeWithCurrentAnchors(RectTransform.Axis.Vertical, camHeight);

    float targetRectWidth = camHeight * Camera.main.aspect;
    rt.SetSizeWithCurrentAnchors(RectTransform.Axis.Horizontal, targetRectWidth);
}
The steps are:
Position the rect transform at the centre of the screen, at whatever depth you set it to in the editor.
Get twice the camera's orthographic size (that is the distance from the top to the bottom of the view).
Set the vertical size (with the current anchors) so the UI object lines up with the top and bottom of the screen.
Get the target width by multiplying that height by the camera's aspect ratio.
Set the horizontal size based on the target width.
This can be done in Update instead of Awake or Start to dynamically resize the background while playing, if necessary.
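A minimal sketch of that Update-based variant, assuming the script sits on the worldspace canvas/image object and the main camera is orthographic as in the question (the class name is just illustrative):
using UnityEngine;

public class BackgroundResizer : MonoBehaviour
{
    private RectTransform rt;

    void Awake()
    {
        rt = GetComponent<RectTransform>();
    }

    void Update()
    {
        // re-fit the background every frame so it follows aspect-ratio changes at runtime
        float camHeight = Camera.main.orthographicSize * 2f;
        rt.SetSizeWithCurrentAnchors(RectTransform.Axis.Vertical, camHeight);
        rt.SetSizeWithCurrentAnchors(RectTransform.Axis.Horizontal, camHeight * Camera.main.aspect);
    }
}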
Here it is at 1080p 16:9 and at 5:4. The red cube is there to show that it sits in the background, behind objects in the scene.
My scene is 2048 x 1152, and the camera never moves. When I create a rectangle with the following:
timeBarRect = new Rect(220, 185, Screen.width / 3, Screen.height / 50);
Its position changes depending on the resolution of my game, so I can't figure out how to get it to always land where I want it on the screen. To clarify, if I set the resolution to 16:9, and change the size of the preview window, the game will resize at ratios of 16:9, but the bar will move out from where it's supposed to be.
I have two related questions:
Is it possible to place the Rect at a global coordinate? Since the screen is always 2048 x 1152, if I could just place it at a certain coordinate, it'd be perfect.
Is the Rect a UI element? When it's created, I can't find it in the hierarchy. If it's a UI element, I feel like it should be created relative to a canvas/camera, but I can't figure out a way to do that either.
Update:
I am realizing now that I was unclear about what is actually being visualized, so here is that information: once the Rect is created, I create a texture, update the size of that texture in Update(), and draw it to the Rect in OnGUI():
timeTexture = new Texture2D (1, 1);
timeTexture.SetPixel(0,0, Color.green);
timeTexture.Apply();
The texture size being changed:
void Update()
{
    if (time < timerMax)
    {
        playerCanAttack = false;
        time = time + (10 * Time.deltaTime);
    }
    else
    {
        time = timerMax;
        playerCanAttack = true;
    }
}
The actual visualization of the Rect, which is being drawn in a different spot depending on the size of the screen:
void OnGUI()
{
    float ratio = time / 500;
    float rectWidth = ratio * Screen.width / 1.6f;
    timeBarRect.width = rectWidth;
    GUI.DrawTexture(timeBarRect, timeTexture);
}
I don't know that I completely understand either of the two questions I posed, but I did discover that the way to get the Rect's coordinates to match the screen at any resolution was not to use global coordinates, but to use the camera's coordinates and place code in Update() so that the Rect's coordinates are refreshed:
timeBarRect.x = cam.pixelWidth / timerWidth;
timeBarRect.y = cam.pixelHeight / timerHeight;
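For context, a rough sketch of how that might fit together; cam, timerWidth, timerHeight and timeBarRect come from the original snippets, and the declarations here are assumptions added only so the sketch is self-contained:
Camera cam;
Rect timeBarRect;
float timerWidth = 4f;    // assumed divisors that pick the bar's relative position
float timerHeight = 10f;

void Start()
{
    cam = Camera.main;
    timeBarRect = new Rect(0, 0, Screen.width / 3f, Screen.height / 50f);
}

void Update()
{
    // reposition the rect from the camera's pixel size every frame so it
    // lands in the same relative spot at any resolution
    timeBarRect.x = cam.pixelWidth / timerWidth;
    timeBarRect.y = cam.pixelHeight / timerHeight;
}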
I'm trying to draw some polygons and lines using e.Graphics.DrawPolygon (or DrawLine), but I have a little problem specifying the coordinates to draw at. I am drawing onto a PictureBox in its Paint event. The elements draw correctly relative to each other (creating the required final picture), but they always end up in the upper-left corner of the PictureBox. When creating the points to draw, if I just multiply the coordinates, the drawing stays in the same place but gets bigger (the size is multiplied instead of the location).
Here is my code:
// some for loop
{
    // getting the coordinates
    Point toAdd = new Point((int)xCoord, (int)yCoord); // creating the point from what was originally a double; this is where I tried to multiply
    tmpPoints.Add(toAdd); // tmpPoints is a List<Point>
}
points.Add(tmpPoints.ToArray()); // List<Point[]>
drawBuffer = points;             // saving to a public List<Point[]>
points.Clear();
this.Invalidate();
Here is part of the pictureBox1_Paint method:
for (int i = 0; i < drawBuffer.Count; i++)
{
    // some other stuff like deciding which color to use, not very important
    Brush br = new SolidBrush(polyColor);
    e.Graphics.FillPolygon(br, drawBuffer[i]);
    br.Dispose();
}
I have checked with a breakpoint: the coordinates keep the same ratio (what was 100 pixels wide is still 100 pixels wide), and they are at coordinates like x = 3000 and y = 1500, but the drawing still ends up in the upper-left corner. When I multiply the coordinates by 3 (see the code for where I multiplied), it draws in the same place but 3 times bigger (which doesn't make sense after checking the coords...).
So, my question is: how do I set the location correctly, or is there another way to do this?
Like this (I know, this is nonsense, just an example)
foreach (Polygon poly in e.Graphics)
{
    poly.Location = new Point(poly.Location.X * 2, poly.Location.Y * 2);
}
When you multiply the coordinates of the points, they're scaled around the point (0, 0), the top-left corner of the canvas.
In order to scale the polygon around its center (and I suppose you expected it to work that way), you need to calculate some kind of center for the polygon. For simplicity it can even be the arithmetic mean of the coordinates on the X and Y axes respectively. Once you have the coordinates of the center, translate every point by the reversed vector of the center coordinates, so that the polygon's center ends up at the origin of the coordinate system.
Now do your scaling,
and finally move everything back by the vector of the polygon's center coordinates.
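A minimal sketch of that translate-scale-translate idea applied directly to the point array; the method name and the use of the arithmetic mean as the "center" are just illustrative:
using System.Drawing;
using System.Linq;

// Scale a polygon around its (approximate) center: move the center to the
// origin, scale, then move it back.
static Point[] ScaleAroundCenter(Point[] polygon, float factor)
{
    float cx = (float)polygon.Average(p => p.X);  // arithmetic mean as a simple "center"
    float cy = (float)polygon.Average(p => p.Y);

    return polygon
        .Select(p => new Point(
            (int)((p.X - cx) * factor + cx),
            (int)((p.Y - cy) * factor + cy)))
        .ToArray();
}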
When you multiply, as in
poly.Location = new Point(poly.Location.X * 2, poly.Location.Y * 2);
you are doing a scaling (stretch) operation. When you add, as in
poly.Location = new Point(poly.Location.X + 50, poly.Location.Y + 50);
you are doing a translation operation.
If you want to shift everything without modifying the stored coords then just translate the graphics before drawing:
private void pictureBox1_Paint(object sender, PaintEventArgs e)
{
    e.Graphics.TranslateTransform(100, 100); // shift the origin somehow
    // ... draw the polygons as before ...
}
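The same transform idea can also handle scaling around a chosen center without touching the stored points; a rough sketch, where cx and cy are an assumed center you compute yourself (GDI+ prepends transforms by default, so the last call is applied to the points first):
private void pictureBox1_Paint(object sender, PaintEventArgs e)
{
    // scale everything by 2 around (cx, cy) without modifying the stored coords
    e.Graphics.TranslateTransform(cx, cy);   // move back from the origin
    e.Graphics.ScaleTransform(2f, 2f);       // scale
    e.Graphics.TranslateTransform(-cx, -cy); // move the center to the origin (applied first)
    // ... draw the polygons as before ...
}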
I have to write an application for tiling in C#. The tiles will have some shape, and my app should be able to modify that shape. I will have a shape, a polygon made from vertices, for example a field of 16 vertices, and then I draw the polygon.
What I need to know is how to move a vertex using drag and drop. I will also have to recalculate the other vertices so that one tile fits the next tile, but that's just some math.
To conclude:
I have a polygon defined by 16 vertices in a field of vertices. I move one vertex (with the mouse), recalculate the coordinates of the other vertices, and draw the new polygon. My problem is moving one vertex from the field of vertices (probably using drag & drop).
This is part of my previous code, without drag & drop, just to show what tools I'm using for drawing one tile:
private Bitmap canvasBitmap; // canvas for drawing
private Graphics g;          // graphics object used for drawing

Bitmap b = (Bitmap)Bitmap.FromFile("obr.bmp");
TextureBrush brush = new TextureBrush(b);
Pen pen = new Pen(Color.Black, 1);

hexaVertices[0] = new PointF(-40 + 40, 0 + 30);
hexaVertices[1] = new PointF(-20 + 40, 30 + 30);
hexaVertices[2] = new PointF(20 + 40, 30 + 30);
hexaVertices[3] = new PointF(40 + 40, 0 + 30);
hexaVertices[4] = new PointF(20 + 40, -30 + 30);
hexaVertices[5] = new PointF(-20 + 40, -30 + 30);

g.FillPolygon(brush, hexaVertices);
g.DrawPolygon(pen, hexaVertices);
Thanks for any advice.
I can only give you a rough outline for Windows Forms here. In WPF you could use Adorners and there are tutorials out there for how to do it. Here we go for a manual process in Windows Forms:
First, the array of vertices should be a member variable of the class and should be initialized only once at the start of the program.
Then, draw the polygon with the current set of vertices as you're doing right now. Also, draw some "handles" if you want, so you know that the vertices can be grabbed (this could be rectangles around the actual PointF).
Now for the magic :-) Hook up handlers for the MouseDown, MouseMove and MouseUp events of the control you're using to display the image. Also create a new member variable bool m_draggingVertex, and another one that holds the index into the vertex array of the vertex you're currently dragging.
In MouseDown:
Check whether the current mouse position is within range of a vertex (I would use about a 5x5 rectangle around each vertex so that it is easier to hit with the cursor). If the button was pressed on a vertex, set m_draggingVertex to true and store the index of the vertex in the other variable.
In MouseMove:
If m_draggingVertex is true, change the vertex at the stored index to the new coordinates, recalculate your positions and repaint so that the current position of the vertex is shown.
In MouseUp:
If m_draggingVertex is true, set it to false and do final work.
This is how I'd do it ...
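A rough sketch of those handlers for a PictureBox, assumed to live in the Form class alongside the drawing code from the question (it reuses the hexaVertices array; the 5-pixel hit radius and the recalculation hook are placeholders):
private bool m_draggingVertex;
private int m_draggedIndex;

private void pictureBox1_MouseDown(object sender, MouseEventArgs e)
{
    for (int i = 0; i < hexaVertices.Length; i++)
    {
        // treat a small rectangle around each vertex as its "handle"
        if (Math.Abs(hexaVertices[i].X - e.X) <= 5 && Math.Abs(hexaVertices[i].Y - e.Y) <= 5)
        {
            m_draggingVertex = true;
            m_draggedIndex = i;
            return;
        }
    }
}

private void pictureBox1_MouseMove(object sender, MouseEventArgs e)
{
    if (!m_draggingVertex) return;
    hexaVertices[m_draggedIndex] = new PointF(e.X, e.Y);
    // recalculate the dependent vertices here if needed, then repaint
    pictureBox1.Invalidate();
}

private void pictureBox1_MouseUp(object sender, MouseEventArgs e)
{
    m_draggingVertex = false;
}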
I'm not sure why this code isn't simply drawing a triangle to screen (orthographically). I'm using OpenTK 1.1 which is the same thing as OpenGL 1.1.
List<Vector3> simpleVertices = new List<Vector3>();
simpleVertices.Add(new Vector3(0, 0, 0));
simpleVertices.Add(new Vector3(100, 0, 0));
simpleVertices.Add(new Vector3(100, 100, 0));

GL.MatrixMode(All.Projection);
GL.LoadIdentity();
GL.MatrixMode(All.Projection);
GL.Ortho(0, 480, 320, 0, 0, 1000);

GL.MatrixMode(All.Modelview);
GL.LoadIdentity();
GL.Translate(0, 0, 10);

unsafe
{
    Vector3* data = (Vector3*)Marshal.AllocHGlobal(
        Marshal.SizeOf(typeof(Vector3)) * simpleVertices.Count);

    for (int i = 0; i < simpleVertices.Count; i++)
    {
        ((Vector3*)data)[i] = simpleVertices[i];
    }

    GL.VertexPointer(3, All.Float, sizeof(Vector3), new IntPtr(data));
    GL.DrawArrays(All.Triangles, 0, simpleVertices.Count);
}
The code executes once every update cycle in a draw function. What I think I'm doing (but evidently am not) is creating a set of position vertices to form a triangle and drawing it 10 units in front of the camera.
Why is this code not drawing a triangle?
In OpenGL, the Z axis points out of the screen, so when you write
GL.Translate(0,0,10);
it actually translates the object "in front of" the screen, towards the viewer.
Now, the last two parameters to GL.Ortho are 0 and 1000. This means that everything between 0 and 1000 units in the negative Z direction (= in front of the camera) will be displayed.
In other words, GL.Translate(0, 0, -10); will put your object in front of the camera, while GL.Translate(0, 0, 10); will put it behind.
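So a minimal sketch of the corrected matrix setup might look like this (only the relevant lines; the vertex upload stays as in the question):
GL.MatrixMode(All.Projection);
GL.LoadIdentity();
GL.Ortho(0, 480, 320, 0, 0, 1000);   // the view volume covers z = 0 .. -1000 in eye space

GL.MatrixMode(All.Modelview);
GL.LoadIdentity();
GL.Translate(0, 0, -10);             // negative Z moves the triangle into the view volume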