Display complete object from camera with specific rotation in Unity C#

I have different kinds of objects with variable dimensions, placed at different positions in a scene. I want to focus/display each object with the camera using a hard-coded rotation (facing north). That is, with a specific camera rotation I want to frame the object so that it is completely visible and centered on the screen. For this I have written a code snippet that:
1. Gets the position behind the focused object using TransformPoint.
2. Sets the elevation using the largest extent of the bounds so that the maximum area of the object is visible.
3. Assigns that position and the fixed rotation to the camera.
Vector3 dest = destination.transform.TransformPoint(0, 0, behindPositionDistance);// GetBehindPosition(destination.transform, behindPositionDistance, elevation);
Debug.Log(dest);
float eleveMax = Mathf.Max(destination.GetComponent<MeshRenderer>().bounds.extents.x, destination.GetComponent<MeshRenderer>().bounds.extents.z);
dest = new Vector3(dest.x, eleveMax, dest.z);
camera.transform.position = dest;
camera.transform.rotation = lookNorth;
But the problem is that it does not work accurately for all objects, since every object has a different position and dimensions. I want the camera to frame the full object without changing its rotation.

Maybe create an empty GameObject as a child of your objects and use that as your camera position at runtime.

If you can use an orthographic camera, you can set the camera's orthographicSize dynamically based on the size of the GameObject's bounds. This is probably the easiest solution.
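For instance, a minimal sketch of the orthographic approach might look like this (the method and variable names are my own, and it assumes the camera looks at the object roughly horizontally along its forward axis):

```csharp
// Sketch: size an orthographic camera so the renderer's bounds fit the view.
// "cam" and "target" are placeholder references, not from the question.
void FrameWithOrthographicCamera(Camera cam, Renderer target)
{
    Bounds b = target.bounds;
    // orthographicSize is half the vertical extent of the view in world units.
    float halfHeight = b.extents.y;
    // Convert the horizontal half-extent into "vertical units" via the aspect ratio.
    float halfWidthAsHeight = Mathf.Max(b.extents.x, b.extents.z) / cam.aspect;
    cam.orthographicSize = Mathf.Max(halfHeight, halfWidthAsHeight);
}
```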
If you need a perspective camera, you can get the Planes or corners of the camera frustum via GeometryUtility.CalculateFrustumPlanes(Camera.main) or Camera.CalculateFrustumCorners and use the results from either to manually test that your Bounds is entirely inside.
With a perspective camera, you could also compute the necessary distance from the object based on the size of the object's bounds and the FOV of the camera. If I'm not mistaken, this would be something along the lines of distance = 0.5 * size / tan(0.5 * fov). The function doesn't have to be precise; it just needs to work for your camera. Alternatively, you could keep the distance constant and compute the FOV from the size of the object, although I wouldn't recommend that, because frequent FOV changes sound disorienting for the viewer. My point is that there are many options.
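The distance formula above could be sketched like this (a hedged illustration, not a drop-in solution; "size" is assumed to be the largest extent of the object's bounds):

```csharp
// Sketch: distance at which an object of the given size fills the vertical FOV.
// fieldOfView is the camera's vertical FOV in degrees, so convert to radians.
float DistanceToFitObject(Camera cam, float size)
{
    float halfFovRad = 0.5f * cam.fieldOfView * Mathf.Deg2Rad;
    // distance = 0.5 * size / tan(0.5 * fov)
    return 0.5f * size / Mathf.Tan(halfFovRad);
}
```

You would then place the camera that far from the focus point along its fixed facing direction, e.g. `camera.transform.position = bounds.center - camera.transform.forward * distance;`.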

Related

Force an AR object to always stand upright in Unity / Vuforia

I have a Unity AR project using the Vuforia engine. What I am trying to achieve is to have the AR object always stand upright in the view, whether the image target is horizontal on a table or vertical on a wall.
Currently the object sits flat on the image target no matter which orientation the target has.
Hope that makes sense,
Thanks
I always use Vector3.ProjectOnPlane for this; then you can simply assign axis directions to Transform.up and Transform.right (below I explain why right and not, say, forward).
public void AlignObject(Transform obj, Transform imageTarget)
{
    obj.position = imageTarget.position;

    // Get your target's right vector in world space
    var right = imageTarget.right;

    // If it isn't already the case, ensure that your object's up vector
    // equals the world up vector
    obj.up = Vector3.up;

    // Align your object's right vector with the image target's right vector
    // projected down onto the global XZ plane => erasing its Y component
    obj.right = Vector3.ProjectOnPlane(right, Vector3.up);
}
The assumption for this is: the target is usually never rotated about its Z axis. If you want it upright on a wall, you would usually rotate it about its X axis.
Therefore we can assume that the image target will never be rotated more than 90° about the Z axis (in which case the projected vector would flip by about 180°), and thus if we project the right vector down onto the global XZ plane, it still points in the correct direction regardless of any rotation about the Y and X axes.
If we used forward instead, we would take the risk that, due to tracking inaccuracies, the vertical target's forward vector actually points a tiny bit towards us; when we project it down onto the XZ plane, it then points backwards instead of forwards and the object is flipped by 180°.
So using right works for both a horizontal and a vertical target.

How to rotate parent to move child in Unity3D?

I have a relatively complicated math problem I need to solve for a game I'm working on in Unity. I've tried a couple things but nothing has worked.
Basically, I need to apply an offset rotation (Quaternion) to a parent, where the result of this rotation is to move its child in a given direction.
To explain it better, the problem would be simple if it could be guaranteed that the parent's forward vector was pointed at the child. Then I would simply create a ghost position by adding the desired direction to the child, and then use a LookAt rotation to rotate the parent to look at that ghost position. (The child doesn't need to be put in a specific position, it just needs to move generally in that direction)
What makes this complicated is that (a) the parent could be at any rotation, and (b) the child could be at any position relative to the parent.
For context, I'm working on a procedural animation system and I'd like to have the bones bend in the direction of the Agent's velocity. With the IK'd bones this is easy, just move the IK. But for the actual bones, I need a way to move a bone in a direction by rotating its parent's bone.
Thanks for any help!
First, we need the child's current position and the target position in the coordinate system of the parent. It sounds as if the child's position is already expressed in this coordinate system. If the target position is in world coordinates, you simply transform it with the inverse parent world transform:
pTargetLocal = parent.worldMatrix^-1 * pTarget
Once we have this, we want to find a rotation R, such that pCurrentLocal is rotated towards pTargetLocal. Assuming unit vectors (as rotations preserve lengths), this equals:
parent.worldMatrix * pTargetLocal = parent.worldMatrix * R * pCurrentLocal
pTargetLocal = R * pCurrentLocal
Once we have R, we just need to update parent.worldMatrix = parent.worldMatrix * R.
This can be solved by representing R in axis-angle format. The axis would be axis = Vector3.Cross(pCurrentLocal, pTargetLocal) and the angle is angle = Vector3.Angle(pCurrentLocal, pTargetLocal). You can either calculate a quaternion or a matrix from these parameters and multiply it with the current parent transform.
I assumed that the rotation center is the origin of the parent's local coordinate system. You could also rotate about another center to incorporate a translation component.
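In Unity terms, the axis-angle construction above can be sketched with Quaternion.FromToRotation, which builds exactly that rotation from the two vectors (all names here are illustrative, and pCurrentLocal/pTargetLocal are assumed to be the child's current and desired positions in the parent's local space):

```csharp
// Sketch: rotate the parent so that the child (at pCurrentLocal in parent
// space) moves towards pTargetLocal. Assumes uniform (or no) parent scale.
void RotateParentTowards(Transform parent, Vector3 pCurrentLocal, Vector3 pTargetLocal)
{
    // R such that R * pCurrentLocal points along pTargetLocal
    // (axis = cross(pCurrentLocal, pTargetLocal), angle = Vector3.Angle(...)).
    Quaternion R = Quaternion.FromToRotation(pCurrentLocal, pTargetLocal);

    // Equivalent to parent.worldMatrix = parent.worldMatrix * R:
    // post-multiplying composes R in the parent's local space.
    parent.localRotation = parent.localRotation * R;
}
```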

Unity - Camera ScreenToWorldPoint returning odd values

The main camera's output is set to a render texture, which is applied to a material, which is applied to a quad that's scaled up to 128x72. The secondary camera is set to only see what is rendered to the child quad, who has the material with the render texture on it.
However Camera.main.ScreenToWorldPoint(Input.mousePosition) is returning values that aren't even close to the GameObject. I.E. The GameObject is instantiated at (0, 0, 0), and hovering over it shows the mouse at (307, 174). Moving the Rotating Object to the right edge of the screen will only return an x position of 64 (half of the 128px wide quad) so I'm not sure where the 300+ is coming from. Not sure if the quad/camera set up is responsible for this.
EDIT: Using a single orthographic camera, all properties the same except for using a render texture, instead of the setup I have now results in accurate ScreenToWorldPoint output.
The Input.mousePosition property only returns the x and y axes of the mouse position in pixels.
ScreenToWorldPoint requires a z value too, which Input.mousePosition doesn't provide. The z value is supposed to be a distance from the camera; using the camera's nearClipPlane gives you a position right in front of the camera.
Depending on the size of the 3D object you want to instantiate where the mouse button is pressed, you will need to apply an offset to push it far enough away to be fully visible on screen. For a simple cube created in Unity, an offset of 2 is fine; for anything bigger than that, you will need to increase the offset.
Below is a complete example of how to properly use ScreenToWorldPoint with Camera.nearClipPlane and an offset to instantiate a 3D object where mouse is clicked:
public GameObject prefab;
public float offset = 2f;

void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        Camera cam = Camera.main;
        Vector2 mousePos = Vector2.zero;
        mousePos.x = Input.mousePosition.x;
        mousePos.y = Input.mousePosition.y;
        Vector3 worldPoint = cam.ScreenToWorldPoint(new Vector3(mousePos.x, mousePos.y, cam.nearClipPlane + offset));
        Instantiate(prefab, worldPoint, Quaternion.identity);
    }
}
You may not be calling the Camera.ScreenToWorldPoint method correctly. In particular, the z position of the screen position parameter that's passed to this method should be defined as world units from the camera. See the Unity documentation on Camera.ScreenToWorldPoint.
Instead of Camera.main.ScreenToWorldPoint(Input.mousePosition), I think this is the correct way to call Camera.ScreenToWorldPoint:
var cameraPosition = Camera.main.transform.position;
// assuming `transform` is the transform of the "Virtual Screen Quad"...
float zWorldDistanceFromCamera = transform.position.z - cameraPosition.z;
var screenPoint = new Vector3(Input.mousePosition.x, Input.mousePosition.y, zWorldDistanceFromCamera);
var worldPoint = Camera.main.ScreenToWorldPoint(screenPoint);
Debug.LogFormat("mousePosition: {0} | zWorldDistanceFromCamera: {1} | worldPoint: {2}",
    Input.mousePosition,
    zWorldDistanceFromCamera,
    worldPoint.ToString("F3"));
(If this isn't working, could you update your question or reply to this post with a comment with details showing the values that are logged at each step?)
I was just struggling with this problem and this question helped me find the answer, so thank you for posting it!
The issue has nothing to do with the z axis or how you're calling Camera.ScreenToWorldPoint. The issue is that the camera you're calling it on is rendering to a RenderTexture, and the dimensions of the RT don't match the dimensions of your game window. I wasn't able to find the implementation of the method in the reference source, but whatever it's doing is dependent on the resolution of the RenderTexture.
To test this, click the stats button in the game window to display the game window's screen size. The coordinates you get will match the ratio between that and the RenderTexture resolution.
Solutions:
Don't call this method on a camera targeting a RenderTexture; either target the screen (Target Texture set to None) or create a second camera that matches the position of the camera you need.
Match the RT resolution to the screen. Obviously this may have performance implications, or cause issues if the screen size changes.
Don't use Camera.ScreenToWorldPoint. Depending on the use case, using a raycast may be simpler or more reliable.
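The raycast alternative mentioned above could look something like this (a sketch for a camera that renders directly to the screen; the logging is illustrative):

```csharp
// Sketch: instead of converting the mouse position to a world point at a fixed
// depth, cast a ray through the mouse position and use the physics hit point.
void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            Debug.Log("Clicked world point: " + hit.point);
        }
    }
}
```

Note that ScreenPointToRay on a camera rendering to a RenderTexture is subject to the same resolution mismatch, so this fits best with the first solution (a camera targeting the screen).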
Since using a default camera was returning the correct values, I simply added another one to detect the mouse position independent of the render texture/quad setup.

Unity, how to get Camera angle Width

I'm making a game that must set a target number on every enemy on screen.
I fail to find a function like IsInsideCameraView(gameObject).
For now I'm trying to scan the area that the camera sees. For this, I need the camera's horizontal and vertical view angles,
as described in this image (my reputation is not enough to show the image directly).
My question is similar to this one, "How to get angles value of perspective camera in Three.js?", but for Unity.
If you know
how to calculate the camera width & height (following the screen size when Unity starts), or
whether there are good functions like IsInsideCameraView(Camera, gameObject),
any answers are welcome.
[Untested, JavaScript] You can use Renderer.isVisible to determine whether the object is within the camera frustum.
"I fail to find a function like IsInsideCameraView(gameObject)"
You can use:
object.GetComponent<Renderer>().isVisible
...to check if the object is visible in any camera.
To test for a specific camera, use:
Vector3 screenPos = camera.WorldToScreenPoint(target.position);
Then check that the X and Y coordinates are between (0, 0) and (Screen.width, Screen.height), and that screenPos.z is positive (the object is in front of the camera).
"How to calculate the camera width & height"
That's what Camera.pixelWidth and Camera.pixelHeight are for, or you can use Screen.width and Screen.height for cameras that render to the entire screen.
Anyway, that's arguably an XY problem; refer to the beginning of my answer for the direct solution.
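Putting the WorldToScreenPoint check together, an IsInsideCameraView-style helper could be sketched like this (the helper name mirrors the question; it tests only the object's pivot point, not its full bounds):

```csharp
// Sketch: true if the target's position projects inside the camera's viewport
// and lies in front of the camera (screenPos.z > 0).
bool IsInsideCameraView(Camera cam, Transform target)
{
    Vector3 screenPos = cam.WorldToScreenPoint(target.position);
    return screenPos.z > 0f
        && screenPos.x >= 0f && screenPos.x <= cam.pixelWidth
        && screenPos.y >= 0f && screenPos.y <= cam.pixelHeight;
}
```

For testing an object's whole bounds rather than a single point, GeometryUtility.CalculateFrustumPlanes combined with GeometryUtility.TestPlanesAABB is the usual approach.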

Why are Unity Prefab values wrong when instantiated?

I have a C# script on a character that holds a reference to a Prefab.
During initialization, the script runs :
weaponSlot = Instantiate(weaponPrefab) as Transform;
and sets
weaponSlot.parent = rightHand;
The prefab contains scaling information for the weapon, as well as some small rotation and position offsets for it to look correct.
When the game runs, the weapon's actual position is offset from the rightHand by a massive amount, although the rotation is preserved. The scaling is also a bit off, smaller than the prefab's size by roughly 40%.
Any insight on why this is happening, or even hints on what to check would be appreciated!
Make sure to wrap any models in an empty GameObject. The size, position, and orientation need to be correct under the root GameObject. When you instantiate a GameObject under a parent, you need to zero out its localPosition and localEulerAngles (set them to Vector3.zero) and set its localScale to Vector3.one.
It should look like this in the project:
Prefab (zero position, zero rotation, one scale)
->Model (correct scaling, rotation, and position)
Then you parent it.
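The instantiate-then-zero pattern above could be sketched like this (weaponPrefab and rightHand are from the question; the rest is illustrative):

```csharp
// Sketch: parent the instantiated weapon, then reset its local transform so the
// offsets baked into the prefab's child model are the only ones that remain.
Transform weaponSlot = Instantiate(weaponPrefab) as Transform;
weaponSlot.parent = rightHand;              // or weaponSlot.SetParent(rightHand, false)
weaponSlot.localPosition = Vector3.zero;    // discard the inherited world offset
weaponSlot.localEulerAngles = Vector3.zero; // align with the hand bone
weaponSlot.localScale = Vector3.one;        // undo the parent chain's scale influence
```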
