In a project I'm currently working on, users click on an area to mark where they want to place things into the environment later on. I'd like to visualize what they're placing with simple markers on the canvas, so that as they add and remove points the markers come and go as well.
I've found some resources on how to start, showing how to instantiate prefabs into the canvas, but it never seems to work for me. I feel it must have something to do with how I'm using coordinates, but I'm not entirely sure.
public GameObject markerPrefab;
Then later on in another function
GameObject boatMarker = Instantiate(markerPrefab, Input.mousePosition, Quaternion.identity);
boatMarker.transform.SetParent(GameObject.FindGameObjectWithTag("Canvas").transform, false);
The code runs, and the prefabs do spawn into the scene, but they all appear in the top right-hand corner of the canvas, stacked more or less one on top of the other. Any ideas what I've done wrong here? Also, while I don't want to ask you guys to write my code for me, any suggestions for a jumping-off point on how to remove specific instances of the prefab later down the line?
The main issue, I'd say, is that you are using SetParent with the second parameter set to false.
If true, the parent-relative position, scale and rotation are modified such that the object keeps the same world space position, rotation and scale as before.
In your case you want to keep the same world space position.
Since your canvas is Screen Space - Overlay, its width and height (in Unity units) exactly match the display/window pixel width and height. Therefore, when you do
GameObject boatMarker = Instantiate(markerPrefab, Input.mousePosition, Quaternion.identity);
The object is already in the correct position. To visualize that, I just gave it a cube as a child so you can see it already spawns where I clicked (you can't see the image yet because it's not a child of a Canvas):
What happens if you pass that false parameter to SetParent is that it doesn't keep its current world-space position but instead keeps its current localPosition and moves to that relative position within its parent. Since the parent is a Canvas and your prefab is probably also using a RectTransform, the resulting position depends on a lot of things, like e.g. the pivot and anchor settings of the prefab's RectTransform, but also e.g. the Canvas Scaler -> Scale Factor.
If your prefab is e.g. anchored to the center (usually the default) and you click exactly on the center of your window, it will appear in the upper right corner instead.
Why?
You click at (windowWidth / 2, windowHeight / 2), so the prefab is originally spawned there.
Then you use SetParent with false, so it keeps that position, but this time relative to the center of the Canvas.
=> The center of the canvas is at (windowWidth / 2, windowHeight / 2), so adding the prefab's local coordinates (windowWidth / 2, windowHeight / 2) results in the final position (windowWidth, windowHeight) == upper right corner.
So you could fix that either by anchoring the prefab to the lower left corner (a sketch of this option follows below),
or by not passing false as the parameter to SetParent:
boatMarker.transform.SetParent(_canvas.transform);
You could then actually also do it in one single call:
Instantiate(markerPrefab, Input.mousePosition, Quaternion.identity, _canvas.transform);
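If you instead go with the first option (anchoring to the lower left corner), here is a minimal sketch of one way to do it - assuming your marker prefab's root has a RectTransform and the Canvas Scaler's Scale Factor is 1, so pixels map 1:1 to canvas units:

// Sketch for the anchoring option: with the anchors on the lower left corner
// of the canvas, anchoredPosition can directly be the clicked pixel position.
var marker = Instantiate(markerPrefab, _canvas.transform);
var rect = (RectTransform)marker.transform;
rect.anchorMin = Vector2.zero;
rect.anchorMax = Vector2.zero;
rect.anchoredPosition = Input.mousePosition;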
Additionally, you should not use FindGameObjectWithTag again and again. I would rather get the canvas only once, or even reference it via the Inspector if possible:
public GameObject markerPrefab;
[SerializeField] private Canvas _canvas;
private void Awake()
{
    // If no canvas is provided, get it by tag
    if (!_canvas) _canvas = GameObject.FindGameObjectWithTag("Canvas").GetComponent<Canvas>();
}

// Update is called once per frame
private void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        Instantiate(markerPrefab, Input.mousePosition, Quaternion.identity, _canvas.transform);
    }
}
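Regarding the second part of your question (removing specific instances later): a minimal jumping-off point - just a sketch, the list and method names here are made up for illustration - is to keep the spawned markers in a list so you can destroy a specific one again later:

// Requires: using System.Collections.Generic;
private readonly List<GameObject> _markers = new List<GameObject>();

private void SpawnMarker()
{
    var marker = Instantiate(markerPrefab, Input.mousePosition, Quaternion.identity, _canvas.transform);
    _markers.Add(marker);
}

private void RemoveMarker(int index)
{
    if (index < 0 || index >= _markers.Count) return;

    Destroy(_markers[index]);
    _markers.RemoveAt(index);
}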
Try this:
1- Insert the button into the Canvas.
2- Get a GameObject reference for this button.
3- Try this code after assigning your button to the refButton variable and the game Canvas to the canvas variable, and it will work fine:
public GameObject refButton;
public GameObject canvas;

// Start is called before the first frame update
void Start()
{
}

// Update is called once per frame
void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        GameObject button = Instantiate(refButton, Input.mousePosition, Quaternion.identity);
        button.transform.SetParent(canvas.transform);
    }
}
I have this project where I click on an object and a canvas shows up for the player to select an option inside it. I needed the canvas to be world space so the player can move their head and the canvas stays static in front of them. The problem is, there is a ton of objects around the scene and I need to update the position of the canvas every time the player clicks an object.
I've tried to use "transform.position" but it doesn't work the way I wanted.
Note:
painel_escolha = canvas with panel;
transform_tela = camera.
painel_escolha.transform.position = transform_tela.transform.position;
Use this to move the canvas/panel in front of the camera and make it face the camera
// move the canvas distance meters in front of the camera
painel_escolha.transform.position = transform_tela.position + transform_tela.transform.forward * distance;
// make the canvas point in the same direction as the camera
painel_escolha.transform.rotation = transform_tela.transform.rotation;
Doing this in LateUpdate (so after user input and position/rotation changes are processed) makes your panel absolutely head-stable, meaning it always stays in front of the user and they basically can't look away from it.
Often you rather want to use some kind of smoothed lerping instead, e.g. doing

[Range(0, 1)]
public float interpolationRatePosition = 0.5f;
[Range(0, 1)]
public float interpolationRateRotation = 0.5f;

private Vector3 lastPosition;
private Quaternion lastRotation;

private void LateUpdate()
{
    painel_escolha.transform.position = Vector3.Lerp(painel_escolha.transform.position, transform_tela.position, interpolationRatePosition);
    painel_escolha.transform.rotation = Quaternion.Lerp(painel_escolha.transform.rotation, transform_tela.rotation, interpolationRateRotation);
}

This results in basically the same end position but makes it look a bit smoother.
Canvas positions are basically 2D screen positions. You could get the clicked object's world position with transform.position, but for a canvas element to point to that (i.e. be on top of it) you need to transform that world position into a screen position via https://docs.unity3d.com/ScriptReference/Camera.WorldToScreenPoint.html
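A minimal sketch of that idea (assuming a Screen Space - Overlay canvas; uiMarker and target are placeholder references, not names from your project):

public RectTransform uiMarker; // a UI element that is a child of a Screen Space - Overlay canvas
public Transform target;       // the clicked world object

private void LateUpdate()
{
    // Convert the target's world position to a screen position;
    // for a Screen Space - Overlay canvas this can be assigned directly.
    Vector3 screenPos = Camera.main.WorldToScreenPoint(target.position);
    uiMarker.position = new Vector3(screenPos.x, screenPos.y, 0f);
}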
The main camera's output is set to a render texture, which is applied to a material, which is applied to a quad that's scaled up to 128x72. The secondary camera is set to only see what is rendered to the child quad, which has the material with the render texture on it.
However, Camera.main.ScreenToWorldPoint(Input.mousePosition) is returning values that aren't even close to the GameObject. E.g. the GameObject is instantiated at (0, 0, 0), and hovering over it shows the mouse at (307, 174). Moving the Rotating Object to the right edge of the screen only returns an x position of 64 (half of the 128px wide quad), so I'm not sure where the 300+ is coming from. I'm not sure if the quad/camera setup is responsible for this.
EDIT: Using a single orthographic camera, with all properties the same except that it does not use a render texture, instead of the setup I have now results in accurate ScreenToWorldPoint output.
The Input.mousePosition property only returns the x and y coordinates of the mouse position in pixels.
ScreenToWorldPoint requires a z value too, which Input.mousePosition doesn't provide. The z value is supposed to be the nearClipPlane of the camera; it gives you a position that's right in front of the camera.
Depending on the size of the 3D object you want to instantiate where the mouse button is pressed, you will need to apply an offset to it to make it fully visible on screen. For a simple cube created in Unity, an offset of 2 is fine. For anything bigger than that, you will need to increase the offset.
Below is a complete example of how to properly use ScreenToWorldPoint with Camera.nearClipPlane and an offset to instantiate a 3D object where mouse is clicked:
public GameObject prefab;
public float offset = 2f;

void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        Camera cam = Camera.main;
        Vector2 mousePos = Vector2.zero;
        mousePos.x = Input.mousePosition.x;
        mousePos.y = Input.mousePosition.y;
        Vector3 worldPoint = cam.ScreenToWorldPoint(new Vector3(mousePos.x, mousePos.y, cam.nearClipPlane + offset));
        Instantiate(prefab, worldPoint, Quaternion.identity);
    }
}
You may not be calling the Camera.ScreenToWorldPoint method correctly. In particular, the z position of the screen position parameter that's passed to this method should be defined as world units from the camera. See the Unity documentation on Camera.ScreenToWorldPoint.
Instead of Camera.main.ScreenToWorldPoint(Input.mousePosition), I think this is the correct way to call Camera.ScreenToWorldPoint:
var cameraPosition = Camera.main.transform.position;

// assuming `transform` is the transform of the "Virtual Screen Quad"...
float zWorldDistanceFromCamera = transform.position.z - cameraPosition.z;

var screenPoint = new Vector3(Input.mousePosition.x, Input.mousePosition.y, zWorldDistanceFromCamera);
var worldPoint = Camera.main.ScreenToWorldPoint(screenPoint);

Debug.LogFormat("mousePosition: {0} | zWorldDistanceFromCamera: {1} | worldPoint: {2}",
    Input.mousePosition,
    zWorldDistanceFromCamera,
    worldPoint.ToString("F3"));
(If this isn't working, could you update your question or reply to this post with a comment with details showing the values that are logged at each step?)
I was just struggling with this problem and this question helped me find the answer, so thank you for posting it!
The issue has nothing to do with the z axis or how you're calling Camera.ScreenToWorldPoint. The issue is that the camera you're calling it on is rendering to a RenderTexture, and the dimensions of the RT don't match the dimensions of your game window. I wasn't able to find the implementation of the method in the reference source, but whatever it's doing is dependent on the resolution of the RenderTexture.
To test this, click the stats button in the game window to display the game window's screen size. The coordinates you get will match the ratio between that and the RenderTexture resolution.
Solutions:
1- Don't call this method on a camera targeting a RenderTexture; either target the screen (None) or create a child camera that matches the position of the camera you need.
2- Match the RT resolution to the screen. Obviously this may have performance implications, or cause issues if the screen size changes.
3- Don't use Camera.ScreenToWorldPoint. Depending on the use case, using a raycast may be simpler or more reliable (see the sketch after this list).
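As a rough illustration of the raycast alternative in point 3 (just a sketch; it assumes you cast from a camera that renders to the screen and that the objects you want to hit have colliders):

if (Input.GetMouseButtonDown(0))
{
    // Cast a ray from the screen-rendering camera through the mouse position
    // instead of converting the mouse position to a world point directly.
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    if (Physics.Raycast(ray, out RaycastHit hit))
    {
        Debug.Log("Clicked world position: " + hit.point);
    }
}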
Since using a default camera was returning the correct values, I simply added another one to detect the mouse position independent of the render texture/quad setup.
Super basic question that I can't seem to find an answer to:
All I've done is create a new 2D Unity project and add a single sprite to the scene. I add a C# script to the sprite and set its position to a new Vector2(0,0). When run, the sprite moves its center point to the center of the screen, as opposed to the bottom left corner. This is also the case when I set the position of the sprite to 0,0 in the editor. I've tried making the sprite a child of the camera, and I've tried using WorldToScreenPoint(). It's been a while since I've used Unity so I must not be understanding something.
Thanks for the help
Edit:
As suggested, here's the single line of code in question; I don't think this will add much clarity.
public class NewBehaviourScript : MonoBehaviour {

    // Use this for initialization
    void Start () {
        this.transform.position = new Vector2 (0, 0);
    }

    // Update is called once per frame
    void Update () {
    }
}
Unity renders based on the camera's frustum, not an arbitrary screen pixel position (unless you translate such a position into the frustum through a script).
By default, the camera "looks" at the world-space origin (0, 0), and if you put your sprite there, of course it will be centered on screen in play mode.
[EDIT]
BTW, from...
"I've tried making the sprite a child of the camera, and I've tried using WorldToScreenPoint()."
... it seems you're trying to put a sprite somewhere in relation to the camera. If that's for GUI, the correct way to do it is not as a child of the camera. Instead, use a screen-space Canvas. Screen-space canvases, unlike cameras, work based on pixel position, too! =)
I have an object in my scene's sky and want to prevent it from getting cut out of the view, no matter the screen ratio. So I set up a canvas and a PlaceHolder RectTransform on it that - being a UI element - gets scaled and repositioned with respect to the screen aspect ratio. My goal is to use this position to determine where to put my scene object, using the following script:
Camera mainCam;
RectTransform placeHolder;

private void Start()
{
    mainCam = GameObject.FindGameObjectWithTag("MainCamera").GetComponent<Camera>();
    placeHolder = GameObject.FindGameObjectWithTag("SkyObjectPlaceholder").GetComponent<RectTransform>();

    var screenPos = mainCam.ScreenToWorldPoint(new Vector3(placeHolder.rect.x, placeHolder.rect.y, 0));
    transform.position = new Vector3(screenPos.x, screenPos.y, transform.position.z);
}
The issue is that my sky object is always placed in the center of the screen. It is not a child of anything in the scene; I just put it there. The placeholder is to the upper right in the canvas. Note that I tried playing with the anchors of the UI element - set them to bottom left and to top right - but no change: the scene object is always in the center of the screen.
What am I doing wrong? How to make a Scene element match up the position of a Canvas element, so it gets right behind it?
Note: I'm using a perspective camera. I don't know if it matters, but I thought I mention it, just in case.
Yet another edit: after messing around with the issue a bit, I found that the problem is possibly linked to the Z input I give the ScreenToWorldPoint, in my case it's 0. I couldn't get a viable Z to input though.
Hmmm... I think you should change the sorting layer of that object.
I am creating a 3D game which is going to be a strategy game. I want to create a panel (a UI group with a bunch of texts and buttons) in code (C#) and then move it so it is on top of a GameObject in the Canvas layout. So I am doing this:
if (ThisBuilding == null)
    ThisBuilding = this.gameObject;

Panel = Canvas.Instantiate(Panel);
Panel.transform.SetParent(canvas.transform, false); // false to prevent stupid scaling issues created by the parent change

// the building GameObject contains a MeshRenderer component in its only child
Vector3 centerPosition = ThisBuilding.GetComponentInChildren<MeshRenderer>().bounds.center;
panelRect.anchoredPosition = Vector3.Lerp(panelRect.anchoredPosition, centerPosition, 1.0f);
The thing is that this script runs (in Start()) once per building, the idea being that the panel gets instantiated 3 times and each panel corresponds to a building, but for some strange reason this does not work.
EXPECTED RESULT: each time the panel gets instantiated, its position is the same as that of the building that instantiated it (the building GameObject holds the script and activates/deactivates the panel).
ACTUAL RESULT: even though the panel does get instantiated 3 times as it should, the panels are all in the same position and do not move even when my Update() function changes their position. What am I doing wrong?
Using the world position of the buildings directly results in the panels being positioned in basically the same location on screen. What is needed is to convert the world position to a screen position and use the calculated screen position to place the panels.
Try something like this:
panel.transform.position = Camera.main.WorldToScreenPoint(this.gameObject.transform.position);
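Put into context, a rough sketch (assuming the Panel and ThisBuilding fields from the question and a Screen Space - Overlay canvas) that keeps each panel on top of its building could look like:

private void LateUpdate()
{
    // Convert the building's world position to a screen position every frame
    // so the panel keeps following its building.
    Vector3 worldCenter = ThisBuilding.GetComponentInChildren<MeshRenderer>().bounds.center;
    Panel.transform.position = Camera.main.WorldToScreenPoint(worldCenter);
}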