The main camera's output is set to a render texture, which is applied to a material, which is applied to a quad that's scaled up to 128x72. The secondary camera is set to only see what is rendered to the child quad, which has the material with the render texture on it.
However, Camera.main.ScreenToWorldPoint(Input.mousePosition) is returning values that aren't even close to the GameObject. For example, the GameObject is instantiated at (0, 0, 0), yet hovering over it shows the mouse at (307, 174). Moving the rotating object to the right edge of the screen only returns an x position of 64 (half of the 128px-wide quad), so I'm not sure where the 300+ is coming from. I'm not sure whether the quad/camera setup is responsible for this.
EDIT: Using a single orthographic camera (all properties the same, except that it does not render to a render texture) instead of the setup I have now results in accurate ScreenToWorldPoint output.
The Input.mousePosition property only returns the x and y coordinates of the mouse position in pixels.
ScreenToWorldPoint requires a z value too, which Input.mousePosition doesn't provide. That z value is supposed to be the camera's nearClipPlane; it gives you a position that's right in front of the camera.
Depending on the size of the 3D object you want to instantiate where the mouse button is pressed, you will need to apply an offset to make it fully visible on the screen. For a simple cube created in Unity, an offset of 2 is fine. For anything bigger than that, you will need to increase the offset.
Below is a complete example of how to properly use ScreenToWorldPoint with Camera.nearClipPlane and an offset to instantiate a 3D object where the mouse is clicked:
public GameObject prefab;
public float offset = 2f;

void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        Camera cam = Camera.main;

        // Input.mousePosition only supplies x and y, so the z
        // distance from the camera has to be provided manually.
        Vector3 mousePos = Vector3.zero;
        mousePos.x = Input.mousePosition.x;
        mousePos.y = Input.mousePosition.y;
        mousePos.z = cam.nearClipPlane + offset;

        Vector3 worldPoint = cam.ScreenToWorldPoint(mousePos);
        Instantiate(prefab, worldPoint, Quaternion.identity);
    }
}
You may not be calling the Camera.ScreenToWorldPoint method correctly. In particular, the z position of the screen position parameter that's passed to this method should be defined as world units from the camera. See the Unity documentation on Camera.ScreenToWorldPoint.
Instead of Camera.main.ScreenToWorldPoint(Input.mousePosition), I think this is the correct way to call Camera.ScreenToWorldPoint:
var cameraPosition = Camera.main.transform.position;

// assuming `transform` is the transform of the "Virtual Screen Quad"...
float zWorldDistanceFromCamera = transform.position.z - cameraPosition.z;

var screenPoint = new Vector3(Input.mousePosition.x, Input.mousePosition.y, zWorldDistanceFromCamera);
var worldPoint = Camera.main.ScreenToWorldPoint(screenPoint);

Debug.LogFormat("mousePosition: {0} | zWorldDistanceFromCamera: {1} | worldPoint: {2}",
    Input.mousePosition,
    zWorldDistanceFromCamera,
    worldPoint.ToString("F3"));
(If this isn't working, could you update your question or reply to this post with a comment with details showing the values that are logged at each step?)
I was just struggling with this problem and this question helped me find the answer, so thank you for posting it!
The issue has nothing to do with the z axis or how you're calling Camera.ScreenToWorldPoint. The issue is that the camera you're calling it on is rendering to a RenderTexture, and the dimensions of the RT don't match the dimensions of your game window. I wasn't able to find the implementation of the method in the reference source, but whatever it's doing is dependent on the resolution of the RenderTexture.
To test this, click the stats button in the game window to display the game window's screen size. The coordinates you get will match the ratio between that and the RenderTexture resolution.
Solutions:
1. Don't call this method on a camera that targets a render texture; either target the screen (Target Texture set to None) or create a child camera that matches the position of the camera you need.
2. Match the RT resolution to the screen. Obviously this may have performance implications, or cause issues if the screen size changes.
3. Don't use Camera.ScreenToWorldPoint at all. Depending on the use case, a raycast may be simpler or more reliable.
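If you do keep the render-texture camera, here is a rough sketch of compensating for the ratio described above (not from the original answer; rtCamera, rt, and zDistanceFromCamera are assumed names for the camera, its RenderTexture, and whatever z depth you would normally pass):
// Rescale the game-window mouse position into the RenderTexture's pixel space,
// since ScreenToWorldPoint on this camera works in the RT's resolution.
Vector3 mouse = Input.mousePosition;
mouse.x *= (float)rt.width / Screen.width;
mouse.y *= (float)rt.height / Screen.height;
mouse.z = zDistanceFromCamera; // the z depth you would normally use
Vector3 worldPoint = rtCamera.ScreenToWorldPoint(mouse);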
Since using a default camera was returning the correct values, I simply added another one to detect the mouse position independent of the render texture/quad setup.
I have a Cinemachine FreeLook camera and I want it to behave like Skyrim, where the character's rotation follows where the camera is pointing, but only on the Y axis, so that I can move my mouse and look at the character from above and below.
This video demonstrates how MrKrabs can move in 3 dimensions, but won't turn.
I already tried creating a blank object and putting it at the center of the character, then using transform.LookAt(Camera) to make it point to where the camera is, and finally getting the Y value of the object's rotation, inverting it with -transform.position.y and applying it to MrKrabs, but it didn't work: it was jittery and overall just a bad user experience. Is there a way to turn the GameObject based on the camera's rotation (having the camera always watching his back)?
Something like this should do the trick.
transform.rotation = Quaternion.Euler(0, Camera.main.transform.eulerAngles.y, 0);
Beware that if your camera is a child of your object it will rotate with it.
I'm trying to move the player to the position of the mouse in my game.
Right now movement along the x axis works fine, but I want the mouse y axis to control the character's movement along the z axis (because my top-down camera's y is world z).
Right now mouse y controls player y, which looks like this in game.
And the code for it looks like this:
public Vector2 mousePos;
private Vector3 playerPos;

void Update()
{
    // Get mouse position from player
    mousePos = Input.mousePosition;

    // Move player with mouse
    playerPos = new Vector3(mousePos.x, 0, mousePos.y);
    transform.position = Camera.main.ScreenToWorldPoint(playerPos);
}
I then tried to just swap the y and z like this
playerPos = new Vector3(mousePos.x, mousePos.y, 0);
But instead of letting me control the z axis, this snippet causes the player to lose all movement.
I'm very new to coding so I might be missing something completely obvious. What am I doing wrong?
Unity version: 2018.4.21f1
The example you gave isn't working because you're providing a z coordinate of 0. When called on a perspective camera, Camera.ScreenToWorldPoint does some vector math, so in your scene view your player is probably floating right on top of your camera. I can't explain the actual math because I don't understand it, but luckily for both of us, we don't need to! Essentially, the z coordinate says how far away from the camera to place the point, and since the view frustum of a perspective camera gets narrower closer to the camera, placing it at 0 means there's nowhere for the point to go. Note also that the farther the object is from the camera, the farther it has to move to match the mouse position.
The bigger problem is that your method is wrong and there's a better way. Camera.ScreenToWorldPoint converts a mouse coordinate to a world-space coordinate depending on what the camera is looking at, so you're essentially flipping the y and z coordinates yourself, then feeding them into a method that already figures out which coordinates need to be flipped.
It looks like this script is attached to your player gameObject, so it should look like this:
Camera cam;

[Range(1, 10)] // this just creates a slider in the inspector
public int distanceFromCamera;

void Start()
{
    // Calling Camera.main is a shortcut for
    // FindGameObjectsWithTag("MainCamera"),
    // so you should avoid calling it every frame.
    cam = Camera.main;
}

void Update()
{
    // This will work for both perspective and orthographic cameras.
    transform.position = cam.ScreenToWorldPoint(
        Input.mousePosition + new Vector3(0, 0, distanceFromCamera));
}
If your camera is going to move closer or farther from the plane, make sure to add something that keeps the distance from camera updated.
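One way to do that (just a sketch, assuming a top-down camera looking straight down at a ground plane whose height is stored in an assumed planeY field):
public float planeY = 0f; // assumed: height of the plane the player moves on

void Update()
{
    // Keep the conversion distance in sync with the camera's current height.
    distanceFromCamera = Mathf.RoundToInt(cam.transform.position.y - planeY);
    transform.position = cam.ScreenToWorldPoint(
        Input.mousePosition + new Vector3(0, 0, distanceFromCamera));
}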
That said, for most games having the player tied to the mouse cursor isn't usually what you're looking for. Generally you want some kind of input to move the player in some direction a certain amount. If that's what you're looking for, this part of the space shooter tutorial is a good introduction to player movement (though the code itself may be outdated).
I have this project where I click on an object and a canvas shows up for the player to select an option inside it. I needed the canvas to be world space so the player can move their head and the canvas stays static in front of them. The problem is, there are a ton of objects around the scene and I need to update the position of the canvas every time the player clicks an object.
I've tried to use "transform.position" but it doesn't work the way I wanted.
Note:
painel_escolha = canvas with panel;
transform_tela = camera.
painel_escolha.transform.position = transform_tela.transform.position;
Use this to move the canvas/panel in front of the camera and make it face the camera
// move the canvas distance meters in front of the camera
painel_escolha.transform.position = transform_tela.position + transform_tela.transform.forward * distance;
// make the canvas point in the same direction as the camera
painel_escolha.transform.rotation = transform_tela.transform.rotation;
Doing this in LateUpdate (so after user input and position and rotation changes are processed) makes your panel completely head-stable, meaning it always stays in front of the user and they basically can't look away from it.
Often you'd rather use some kind of smoothed lerping instead, e.g.:
[Range(0, 1)]
public float interpolationRatePosition = 0.5f;

[Range(0, 1)]
public float interpolationRateRotation = 0.5f;

private void LateUpdate()
{
    painel_escolha.transform.position = Vector3.Lerp(painel_escolha.transform.position, transform_tela.position, interpolationRatePosition);
    painel_escolha.transform.rotation = Quaternion.Lerp(painel_escolha.transform.rotation, transform_tela.rotation, interpolationRateRotation);
}
This results in basically the same end position but makes it look a bit smoother.
Canvas positions are basically 2D screen positions. You could get the clicked object's world position with transform.position, but for a canvas element to point to that (be on top of it) you need to transform that world position into a screen position via https://docs.unity3d.com/ScriptReference/Camera.WorldToScreenPoint.html
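As a small sketch of that last step (assuming a Screen Space - Overlay canvas, an assumed clickedObject reference, and an assumed uiMarker RectTransform that should sit on top of it):
// Convert the clicked object's world position to screen space and
// place the UI element there (works for a Screen Space - Overlay canvas).
Vector3 screenPos = Camera.main.WorldToScreenPoint(clickedObject.transform.position);
uiMarker.position = screenPos;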
I have different kinds of objects with variable dimensions placed at different positions in a scene. I want to focus/display each object with the camera using a hard-coded rotation (facing north). That is, with that specific camera rotation I want to frame the object so that it is completely visible to the camera and centered on the screen. For this reason, I have written this code snippet that:
1. Gets the position behind the specific focus object using TransformPoint.
2. Provides the elevation using the largest extent of the bounds so that it shows the maximum area of the object.
3. Assigns the position and the fixed rotation.
Vector3 dest = destination.transform.TransformPoint(0, 0, behindPositionDistance);// GetBehindPosition(destination.transform, behindPositionDistance, elevation);
Debug.Log(dest);
float eleveMax = Mathf.Max(destination.GetComponent<MeshRenderer>().bounds.extents.x, destination.GetComponent<MeshRenderer>().bounds.extents.z);
dest = new Vector3(dest.x, eleveMax, dest.z);
camera.transform.position = dest;
camera.transform.rotation = lookNorth;
But the problem is that it doesn't work accurately for all objects, since every object has a different position and dimensions. I want to frame the full object with the camera without changing the rotation.
Maybe create an empty GameObject as a child of your objects. Use that as your camera position at runtime.
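A minimal sketch of that idea (assuming each object has a pre-placed child named "CameraAnchor", which is a made-up name, and reusing destination, camera and lookNorth from the snippet above):
// Move the camera to the object's hand-placed anchor, keeping the fixed north rotation.
Transform cameraAnchor = destination.transform.Find("CameraAnchor"); // assumed child name
camera.transform.position = cameraAnchor.position;
camera.transform.rotation = lookNorth;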
If you can use an orthographic camera, you can set the camera's orthographicSize dynamically based on the size of the bounds of the GameObject. This is probably the easiest solution.
If you need a perspective camera, you can get the Planes or corners of the camera frustum via GeometryUtility.CalculateFrustumPlanes(Camera.main) or Camera.CalculateFrustumCorners and use the results from either to manually test that your Bounds is entirely inside.
With a perspective camera, you could also compute the necessary distance from the object based on the size of the object's bounds and the FOV of the camera. If I'm not mistaken, this would be something along the lines of distance = 0.5 * size / tan(0.5 * fov), but the function doesn't have to be precise; it just needs to work for your camera. Alternatively, you could keep distance constant and compute FOV from the size of the object, although I wouldn't recommend that because frequent FOV changes sound disorienting for the viewer; my point is that there are many options.
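A rough sketch combining the orthographic and distance-based ideas (not from the original answers; FrameObject and padding are made-up names, and the object's size is taken from its renderer bounds):
// Frame `target` while keeping the camera's current (fixed) rotation.
void FrameObject(Camera cam, Renderer target, float padding = 1.1f)
{
    Bounds b = target.bounds;
    float size = Mathf.Max(b.extents.x, b.extents.y, b.extents.z) * padding;

    if (cam.orthographic)
    {
        // Orthographic: grow the view until the bounds fit.
        cam.orthographicSize = size;
    }
    else
    {
        // Perspective: back away along the camera's fixed view direction until
        // the bounds fit the vertical field of view (distance = extents / tan(fov/2)).
        float distance = size / Mathf.Tan(0.5f * cam.fieldOfView * Mathf.Deg2Rad);
        cam.transform.position = b.center - cam.transform.forward * distance;
    }
}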
I am coding a game and I have it running fine, but I ran into the issue of projectiles. I have no idea how to calculate the mouse position and send a projectile toward it. I've looked through many Stack Overflow posts and many YouTube videos, but they are either too vague or very complex. Could someone please explain the process? I have been coding in C# for 2 years now, so I do have background knowledge. Please help!
Well, you don't say what kind of game you are working on or what kind of projectile you want to "send over", whatever that means, but here are a few pointers; maybe one will help:
#1: If you are making a 2D game and you want to send a projectile from some point toward the mouse cursor, first get the position of the mouse by calling Mouse.GetState() and reading the X and Y properties of the returned MouseState. Then get the direction of the projectile by subtracting the projectile's origin from the mouse position. You can then move the projectile along that line: store the time you launched it, subtract that from the current time each frame to get how long it has been on its way, and add that many times its speed times the normalized direction to its origin coordinates.
Example:
For this I assume you have a Projectile class with Vector2 fields called origin, direction, and position, a float called speed, and a double called startTime, and that you have an instance of it with origin and speed already set. Modify as necessary for your actual code.
In your Update method:
MouseState ms = Mouse.GetState();
projectile.direction = new Vector2(ms.X, ms.Y) - projectile.origin;
projectile.direction.Normalize();

float elapsed = (float)(gameTime.TotalGameTime.TotalMilliseconds - projectile.startTime);
projectile.position = projectile.origin + projectile.speed * elapsed * projectile.direction;
And then in the Draw method you just draw the projectile at the coordinates in projectile.position.
#2: You are doing a 3D game and want to shoot a projectile originating from your camera and directed toward whatever your mouse is pointing at. You will need to generate a Ray with something like this:
private Ray getClickRay(Vector2 clickLocation, Viewport viewport, Matrix view, Matrix projection)
{
Vector3 nearPoint = viewport.Unproject(new Vector3(clickLocation.X, clickLocation.Y, 0.0f), projection, view, Matrix.Identity);
Vector3 farPoint = viewport.Unproject(new Vector3(clickLocation.X, clickLocation.Y, 1.0f), projection, view, Matrix.Identity);
Vector3 direction = farPoint - nearPoint;
direction.Normalize();
return new Ray(nearPoint, direction);
}
From there you can get the position of your projectile the same way as above, only using Vector3-s instead of Vector2-s.
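For illustration, a sketch of that 3D version (assuming a Projectile variant with Vector3 origin/direction/position fields and a double startTime, none of which are defined in the answer):
// Launch: aim along the click ray returned by getClickRay above.
projectile.origin = ray.Position;
projectile.direction = ray.Direction; // already normalized
projectile.startTime = gameTime.TotalGameTime.TotalMilliseconds;

// Each Update: advance along the ray, same formula as the 2D case.
float elapsed = (float)(gameTime.TotalGameTime.TotalMilliseconds - projectile.startTime);
projectile.position = projectile.origin + projectile.speed * elapsed * projectile.direction;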
#3: You want to simulate some sort of launcher and do a physical simulation of the projectile's trajectory. Since I find this less likely than the first two, I'll just describe the idea: set a minimum and maximum rotation for your launcher along two axes, map each rotation to the ratio of the mouse position to the full width and height of your screen, then use the angles and whatever other variables you have to calculate the direction and speed of the projectile at launch, and simulate physics each frame.
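A very rough sketch of that mapping (everything here is made up rather than taken from the answer: minYaw/maxYaw, minPitch/maxPitch, launchSpeed, a velocity field on the projectile, and an available viewport):
// Map the mouse position to yaw/pitch between configured limits,
// then build a launch velocity and integrate simple gravity each frame.
MouseState ms = Mouse.GetState();
float tx = MathHelper.Clamp(ms.X / (float)viewport.Width, 0f, 1f);
float ty = MathHelper.Clamp(ms.Y / (float)viewport.Height, 0f, 1f);
float yaw = MathHelper.Lerp(minYaw, maxYaw, tx);
float pitch = MathHelper.Lerp(minPitch, maxPitch, ty);

Vector3 launchDir = Vector3.Transform(Vector3.Forward, Matrix.CreateFromYawPitchRoll(yaw, pitch, 0f));
projectile.velocity = launchDir * launchSpeed;

// Per frame: v += g * dt; p += v * dt
float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
projectile.velocity += new Vector3(0f, -9.81f, 0f) * dt;
projectile.position += projectile.velocity * dt;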
I hope one of these will help.