Finding the Final Position from Original Position + Offset and Rotation - C#

I have my Building System almost finished; the problem appeared when I debugged my Physics.OverlapBox() using a Gizmos.DrawWireCube() with the code shown below:
void OnDrawGizmos()
{
    // Draw a unit cube transformed by the bounding center, rotation, and extents
    Gizmos.matrix = Matrix4x4.TRS(transform.position + boundingOffset, transform.rotation, boundingExtents);
    Gizmos.DrawWireCube(Vector3.zero, Vector3.one);
}
I saw the problem, but I will show it in an image since I am struggling to explain it.
I have thought of a solution, but I could not find the equations or what the method is called. Basically, there is an original vector to which an offset is added (if defined), and the resulting final vector should then be affected by the rotation. It is explained further in another image.
Maybe it is a trigonometry-based calculation. I hope you can figure out how to do it.
Thanks in advance.

OK, if I understand you correctly this time, the issue is that your offset only works as long as the transform isn't scaled or rotated.
The reason is that
transform.position + boundingOffset
uses plain world-space coordinates. As you can see, the offset from the orange dot to the red dot is exactly the same (in world coordinates) in both pictures.
What you rather want is an offset relative to the transform's position, rotation, and scale. You can convert your local boundingOffset into world space relative to the transform by using TransformPoint, which:
Transforms position from local space to world space.
So instead use
Collider[] colliders = Physics.OverlapBox(transform.TransformPoint(boundingOffset), boundingExtents / 2, blueprint.transform.rotation);
What it does is basically
transform.position
+ transform.right * boundingOffset.x * transform.lossyScale.x
+ transform.up * boundingOffset.y * transform.lossyScale.y
+ transform.forward * boundingOffset.z * transform.lossyScale.z;
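For debugging, the gizmo can be drawn around the same transformed center so the wireframe matches the overlap box. A minimal sketch, assuming the same boundingOffset and boundingExtents fields as above:
void OnDrawGizmos()
{
    // Use the transformed offset as the cube's center so the gizmo and the OverlapBox agree
    Gizmos.matrix = Matrix4x4.TRS(transform.TransformPoint(boundingOffset), transform.rotation, boundingExtents);
    Gizmos.DrawWireCube(Vector3.zero, Vector3.one);
}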
Alternatively
A slightly more maintainable approach I often use in such a case is not providing boundingOffset via a hardcoded/serialized vector, but instead simply using a child GameObject, e.g. called OffsetPivot, and placing it in the scene however you want. Since it is a child of your transform, it will always keep the correct offset when you rotate/scale/move the parent.
Then in the code I would simply do
[SerializeField] private Transform boundingOffsetPivot;
...
Collider[] colliders = Physics.OverlapBox(boundingOffsetPivot.position, boundingExtents / 2, blueprint.transform.rotation);

Related

Unity: Can't get object's z axis to lock when facing camera

I want a sprite to always face the camera, except in the Z-Axis. My sprites keep tipping left or right when I move the camera, and I can't have that.
I have been googling this for hours. I've tried transform.LookAt or Quaternion.LookRotation and manually setting the z to 0, but for whatever reason the z keeps adjusting. I've seen and tried so many solutions that feel like they should work but just don't. If it matters, my sprite is a child of another object, but trying localRotation doesn't work either. Freezing rigidbody constraints also has no effect.
The most accurate I can get it is this:
public class Billboard : MonoBehaviour
{
    GameObject cam;
    float minDist;

    // Start is called before the first frame update
    void Start()
    {
        cam = GameObject.Find("Main Camera");
    }

    // Update is called once per frame
    void LateUpdate()
    {
        // Scale
        minDist = cam.GetComponent<CameraOrbit>().distanceMin;
        transform.localScale = new Vector3(1f, 1f, 1f) * (cam.GetComponent<CameraOrbit>().distance - minDist) * 1.01f / 3;

        // Direction
        transform.LookAt(cam.transform.position);
        Vector3 rot = transform.rotation.eulerAngles;
        transform.rotation = Quaternion.Euler(rot.x, rot.y, 0);
    }
}
With this I can get the sprite to face the camera, but the z axis refuses to stay at zero.
Special thanks to StarManta for answering this question on the Unity Forums.
"OK, I think I see what's happening. Them looking like they're rotated is, I think, an illusion; I think they really are accurately pointing at the camera and right-side-up. But, you're treating the camera as if it has a round/fisheye projection, when it really has a rectilinear projection. Usually this doesn't matter but in this case it does. It's hard to explain exactly how this affects this, but the upshot is that, when things that are on either side of the center of the screen are set to "look towards" the camera, they actually appear to be rotated around their Y axes.
The solution to this is actually annoyingly simple: don't set the rotation to look at the camera, set it to be the same as the camera."
transform.rotation = cam.transform.rotation;
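For reference, a minimal sketch of how that fix could slot into the original LateUpdate, keeping the existing CameraOrbit-based scaling (untested, field names taken from the question):
void LateUpdate()
{
    // Scale (unchanged from the original script)
    CameraOrbit orbit = cam.GetComponent<CameraOrbit>();
    minDist = orbit.distanceMin;
    transform.localScale = new Vector3(1f, 1f, 1f) * (orbit.distance - minDist) * 1.01f / 3;

    // Direction: copy the camera's rotation instead of looking at its position
    transform.rotation = cam.transform.rotation;
}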
Have you tried rotation constraints?
https://docs.unity3d.com/Manual/class-RotationConstraint.html
I had a similar problem, where I wanted some 3D text to always point up but rotate toward the camera.
What worked for me was setting the undesired euler components to zero after applying the lookat rotation.
In your case, it would be something like this.
transform.LookAt(cam.transform.position);
var rot = transform.rotation.eulerAngles;
transform.rotation = Quaternion.Euler(rot.x, rot.y, 0);
In case you need another z value, just replace the 0.

Unity3D: NewtonVR. Translating a point and rotation inline with changes from another point/rotation

I make NewtonVR, a free, open-source, physics-based VR interaction framework plugin for Unity3D. The meat of NewtonVR is being able to hover your VR controller over a virtual object, hold a button, and "pick up" that item. Once the item is "held", any changes to the controller's position and rotation are duplicated on the item. The way we currently do this is by creating a transform (PickupTransform) at the point the item is at and then setting the parent of that transform to the controller (NVRHand).
See here: https://github.com/TomorrowTodayLabs/NewtonVR/blob/master/Assets/NewtonVR/NVRInteractableItem.cs#L180
Then, each FixedUpdate, we set the velocity and angular velocity equal to (distance / time) to get the item's position to match the controller's position plus the initial offset of position and rotation. To see this in action, check out this old blog post: http://www.vrinflux.com/newton-vr-physics-based-interaction-on-the-vive/
However, we've had a variety of issues with simply distance/time, which I suspect is due to the position of the transform not being updated quite in line with the position of the rigidbody.
See here: https://github.com/TomorrowTodayLabs/NewtonVR/blob/master/Assets/NewtonVR/NVRInteractableItem.cs#L88
So what I'm trying to do is translate the "create a transform" solution to a more math based approach that solely relies on Rigidbody.position and Rigidbody.rotation. But I'm having some significant issues with that. On pickup I get a bunch of variables that might be helpful:
PickupPointItem = this.Rigidbody.position;
PickupRotationItem = this.Rigidbody.rotation;
PickupPointHand = hand.Rigidbody.position;
PickupRotationHand = hand.Rigidbody.rotation;
PickupPointDiff = PickupPointHand - PickupPointItem;
PickupRotationDelta = Quaternion.Inverse(PickupRotationHand) * PickupRotationItem;
Here's my current thought process for getting the deltas I need to apply the velocity:
For rotation I want to get the delta from the hand's initial rotation to the hands current rotation (currentHandRotationDelta). Then I want to apply that rotation to the initial rotation of the item (PickupRotationItem) to get the target rotation. Then the delta between the current rotation and that target rotation should be correct (but doesn't seem to be). Here's my code:
Quaternion currentHandRotationDelta = Quaternion.Inverse(PickupRotationHand) * AttachedHand.Rigidbody.rotation;
Quaternion targetRotation = PickupRotationItem * currentHandRotationDelta;
rotationDelta = Quaternion.Inverse(this.Rigidbody.rotation) * targetRotation;
Position is more complicated. I get currentPickupPoint, which is the position on the object that the hand should have been over when the item was first picked up. I calculate that by taking the current item position plus the initial position delta and rotating that point around the current item position as the pivot. Then I take the difference between that calculated point and the current rigidbody position to get the global delta. Then I rotate (current hand position + that delta) around the current hand position as the pivot, by the delta from the initial hand rotation to the current hand rotation. Here's the relevant code for that:
Vector3 currentPickupPoint = RotatePointAroundPivot(PickupPointDiff + this.Rigidbody.position, this.Rigidbody.position, Quaternion.Inverse(PickupRotationItem) * this.Rigidbody.rotation);
Vector3 currentDiff = this.Rigidbody.position - currentPickupPoint;
Vector3 targetPosition = RotatePointAroundPivot(AttachedHand.Rigidbody.position + currentDiff, AttachedHand.Rigidbody.position, Quaternion.Inverse(PickupRotationHand) * AttachedHand.Rigidbody.rotation);
positionDelta = targetPosition - this.Rigidbody.position;
Here's the RotatePointAroundPivot method:
private Vector3 RotatePointAroundPivot(Vector3 point, Vector3 pivot, Quaternion rotation)
{
    Vector3 dir = point - pivot; // get point direction relative to pivot
    dir = rotation * dir;        // rotate it
    point = dir + pivot;         // calculate rotated point
    return point;                // return it
}
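For reference, a quick hypothetical check of what this helper returns (the values here are assumed, not from the original post):
// Rotate (1, 0, 0) by 90 degrees around the Y axis, pivoting on the origin
Vector3 rotated = RotatePointAroundPivot(new Vector3(1f, 0f, 0f), Vector3.zero, Quaternion.Euler(0f, 90f, 0f));
// rotated is approximately (0, 0, -1)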
Here's a link to the nonworking code on the development branch of our github repo: https://github.com/TomorrowTodayLabs/NewtonVR/blob/development/Assets/NewtonVR/NVRInteractableItem.cs
This doesn't seem to be working at all :D But I'm not super sure why. I'm pretty new to quaternion math and pretty bad at vector math. Any help would be greatly appreciated.

Can't throw an object aligned with the vertical angle of the camera with .velocity = transform.TransformDirection(Vector3.forward)

I made a script for throwing a grabbed object, but I don't know how to make it so the object is launched depending on the vertical angle of the camera, not just forward. What parameter should I give to TransformDirection? Here is the code:
void throwObject(float pushForce)
{
    carrying = false;
    carriedObject.GetComponent<Rigidbody>().velocity = transform.TransformDirection(Vector3.forward * pushForce);
    carriedObject.gameObject.GetComponent<Rigidbody>().useGravity = true;
    carriedObject = null;
}
You would want to take into account the camera's X rotation. So, something like:
carriedObject.GetComponent<Rigidbody>().AddForce(
    (Vector3.forward * pushForce) +
    (Vector3.up * -Camera.main.transform.rotation.eulerAngles.x));
Firstly, how this works: X, in most cases, is the axis on which the camera looks up and down. If it's not, change it to whichever axis your camera rotates on to look up and down. By checking the X rotation of the camera, we get its angle. Then we make it negative, because by default a positive X rotation on a camera actually looks down. You also might want to encapsulate
(Vector3.forward * pushForce) + (Vector3.up * -Camera.main.transform.rotation.eulerAngles.x)
with a transform.TransformDirection() statement so that you can be at any rotation. It might not be necessary, but I'm not quite sure.
Secondly, it's a better idea to use Rigidbody.AddForce() than directly setting Rigidbody.velocity, or so I hear.
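Putting both suggestions together, a minimal sketch of how throwObject could look (untested; whether the up component needs extra scaling is something to tune):
void throwObject(float pushForce)
{
    carrying = false;

    Rigidbody rb = carriedObject.GetComponent<Rigidbody>();
    rb.useGravity = true;

    // Forward push plus an upward component derived from the camera's pitch,
    // wrapped in TransformDirection so it works at any rotation
    rb.AddForce(transform.TransformDirection(
        (Vector3.forward * pushForce) +
        (Vector3.up * -Camera.main.transform.rotation.eulerAngles.x)));

    carriedObject = null;
}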
I hope this helped you out :)

How do I transform Vector3s using a world Matrix properly?

I thought it would be as simple as:
Vector3 point = Vector3.Transform(originalPoint, worldMatrix);
But apparently not... It makes the points' numbers shoot into the thousands.
Basically, what I'm trying to do is create a collision system where every two points form a line, so essentially I want to collide lines. I want the lines to be able to scale, rotate, and translate based on a world matrix (so that the collision lines stay in tune with the object's scale, rotation, and translation).
I've been trying for hours now and I can't seem to figure it out. I've also tried multiplying by the View matrix, and while that is the closest to what I want, it seems to switch between two sets of numbers! It would be perfect if it stayed with one set; I have no idea why it keeps changing...
Any help, please? :(
Edit: To add a little, I'm constantly updating the points in an Update call. I don't know if that changes anything; either way, the points are set to the original points first.
Steve H:
One line would have two points, so:
originalPoint[0] = new Vector3(-42.5f, 0f, 0f);
originalPoint[1] = new Vector3(42.5f, 0f, 0f);
point[0] = Vector3.Transform(originalPoint[0], worldMatrix);
point[1] = Vector3.Transform(originalPoint[1], worldMatrix);
At first, point[0] and point[1] are equal to originalPoint[0] and originalPoint[1]. But the moment I move my player even just a few pixels...
point[0] = (-5782.5f, 0f, 0f)
point[1] = (-5697.5, 0f, 0f)
The player's position is -56.0f.
My worldMatrix goes as:
_world = Matrix.Identity                          // ISROT
    * Matrix.CreateScale(_scale)                  // This object's scale
    * Matrix.CreateFromQuaternion(_rotation)      // Its rotation
    * Matrix.CreateTranslation(_offset)           // The offset from the centre
    * Matrix.CreateFromQuaternion(_orbitRotation) // Its orbit around an object
    * _orbitObjectWorld                           // The object to base this world from
    * Matrix.CreateTranslation(_position);        // This object's position
The objects display properly in graphics. They scale, rotate, and translate completely fine. They follow the orbit's scale, rotation, and translation too, but I haven't tested the orbit much yet.
I hope this is enough detail...
Edit: Upon further research, the original points are also being changed... :| I don't get why that's happening. They're the exact same as the new points...
I figured out my problem... -_-
So, after I create the line points, I do this at the end:
originalLines = collisionLines;
collisionLines & originalLines are both Vector3[] arrays.
I guess that by making one equal the other, I'm only copying the array reference, so they end up being the exact same array and changing one changes the other... that is something I did not know.
So I made this function:
void CreateOriginalPoints()
{
    _originalPoints = new Vector3[_collisionPoints.Length];

    for (int i = 0; i < _collisionPoints.Length; i++)
        _originalPoints[i] = _collisionPoints[i];
}
And this solves the problem completely. It now makes complete sense to me why this problem was happening in the first place.
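For what it's worth, since Vector3 is a value type, a shallow array clone copies the element values too, so an equivalent one-liner would be (a sketch; either approach works):
_originalPoints = (Vector3[])_collisionPoints.Clone();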
Thanks a lot Donnie & Steve H. I know you two didn't answer my question but it got me to poke around even deeper until I found the answer.

Determine Up vector and transform Vector in Matrix.CreateLookAt

I have
Vector3 PlayerPosition = new Vector3(0,0,0);
Matrix JetztMatrix = Matrix.CreateRotationX(pitch) * Matrix.CreateRotationY(yaw);
ef.View = Matrix.CreateLookAt(PlayerPosition, Vector3.Transform(new Vector3(?1,?2,?3), JetztMatrix) + PlayerPosition, new Vector3(?4,?5,?6));
Somehow I always hit some threshold so the screen rolls instead of rotating.
So, for instance, I give -180° to +180° as yaw. This leads to one full turn, but not along a straight "line". As I said, it moves, then the screen rolls, and it moves again.
How do I determine ?1 to ?6?
ef.View = Matrix.CreateLookAt(PlayerPosition, PlayerPosition + JetztMatrix.Forward, Vector3.Up);
The Vector3.Up will keep the camera from rolling. It's hard to understand exactly what you want to end up with based on your question. If this doesn't solve your issue or creates some other undesirable motion, please describe your desired functionality in more detail.
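Assuming that JetztMatrix.Forward formulation, the placeholders resolve to the untransformed forward vector (?1 to ?3) and the world up vector (?4 to ?6); a sketch using the original names:
ef.View = Matrix.CreateLookAt(
    PlayerPosition,
    PlayerPosition + Vector3.Transform(Vector3.Forward, JetztMatrix), // equivalent to JetztMatrix.Forward here
    Vector3.Up);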
