I have two GameObjects inside my character, say right_obj and left_obj, and I want them to ignore the parent's rotation: when the player changes direction (left, right), these two GameObjects should keep their own position and rotation. So far I have tried two approaches and both fail. Here is my code; I put this script on the two GameObjects, but they ignore these lines:
// First approach
Quaternion rotation;

void Awake()
{
    rotation = transform.rotation;
}

void LateUpdate()
{
    transform.rotation = rotation;
}

// Second approach
void LateUpdate()
{
    transform.rotation = Quaternion.identity;
}
Do you really need this type of relationship?
If you don't really need the father/child relationship, you can spawn an empty GameObject at 0,0,0 and make the actual parent and child both children of that GameObject in the middle of the scene. This way they are still grouped in some sort of relationship, but instead of father/child you get a brother/brother relationship, which makes the first independent from the second.
If you need the two objects to depend on each other, but not in every single way as a full father/child relationship implies, try using Parent Constraints.
This way you'll obtain this type of hierarchy:
- Father GO (at 0,0,0)
-- Child1 (your actual father)
-- Child2 (your actual child)
Child1 and Child2 sync their position through the ParentConstraint, but not their rotation.
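The same Parent Constraint setup can also be wired up from code. A minimal sketch (the class and field names are mine, and it assumes Unity 2018.1+ where constraints exist): attach it to Child2 and assign Child1 as the source, with the rotation axes disabled.

using UnityEngine;
using UnityEngine.Animations;

public class FollowPositionOnly : MonoBehaviour
{
    public Transform source; // Child1, the "actual father"

    void Start()
    {
        var constraint = gameObject.AddComponent<ParentConstraint>();
        constraint.AddSource(new ConstraintSource { sourceTransform = source, weight = 1f });
        constraint.translationAxes = Axis.X | Axis.Y | Axis.Z; // follow position on all axes
        constraint.rotationAxes = Axis.None;                   // but never copy rotation
        constraint.constraintActive = true;
    }
}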
Otherwise, there is no other way than applying the opposite rotation to the child.
If you really don't want to destroy the parent-child hierarchy, you can rotate the child every frame to counteract the parent's rotation.
Let's say we have the following scene.
Add the following RestoreOriginRotation.cs to the child object, and we get a stable child rotation.
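The original script isn't reproduced here, but a minimal sketch of the idea (cache the child's world rotation once, then cancel out whatever rotation the parent applies each frame) could look like this:

using UnityEngine;

public class RestoreOriginRotation : MonoBehaviour
{
    Quaternion originRotation;

    void Awake()
    {
        originRotation = transform.rotation; // world rotation at startup
    }

    // LateUpdate runs after the parent has rotated this frame.
    void LateUpdate()
    {
        // Undo the parent's current rotation, then re-apply the cached one.
        // (Equivalent to simply assigning transform.rotation = originRotation.)
        transform.localRotation = Quaternion.Inverse(transform.parent.rotation) * originRotation;
    }
}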
Here is the result.
If you don't care about the underlying math, you can stop here.
LeviathanCode's "Parent Constraints" solution only works with Unity 2018.1+, because constraints were added in that version.
My suggestion isn't a "pretty" solution, but it also works in older versions if you want to stop the objects from imitating the rotation while still copying the movement:
Get rid of the parent/child relationship, add a new parent GameObject (as LeviathanCode suggested) and a script to it that simply copies the player's movement each frame:
public GameObject player; // Drag the "player" GO here in the Inspector

public void LateUpdate()
{
    transform.position = player.transform.position;
}
This way your "right_obj" and "left_obj" won't rotate like the player but still move "with" it.
You could also use a Rotation Constraint component. This is similar to the Parent Constraint mentioned above, but it's simpler to understand in my opinion: it uses the rotation, and only the rotation, of whichever GameObjects you slot in.
You just have to make an empty game object, set its rotation to 0,0,0, and then set this empty game object as the source of the Rotation Constraint. I have this exact setup in my game, and it overrides the rotation of the actual parent, which is what I want.
Perhaps not the absolute cleanest solution, but it's something to consider.
Here is the docs link - https://docs.unity3d.com/Manual/class-RotationConstraint.html
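For reference, the component can also be added from a script. A minimal sketch (class and field names are mine): the constrained object takes its rotation from an empty anchor object that never rotates.

using UnityEngine;
using UnityEngine.Animations;

public class LockRotationToAnchor : MonoBehaviour
{
    public Transform rotationAnchor; // the empty GameObject left at rotation 0,0,0

    void Start()
    {
        var constraint = gameObject.AddComponent<RotationConstraint>();
        constraint.AddSource(new ConstraintSource { sourceTransform = rotationAnchor, weight = 1f });
        constraint.constraintActive = true; // rotation now comes from the anchor
    }
}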
As #aaronrums stated, a Rotation Constraint is an easy way to achieve this.
Here, I have 3 game objects. The main parent container ("Player Seat"), which holds all the children, is always set to a rotation of 0 on the Z axis. The GO I'm trying to keep upright (so the text and avatar don't rotate) is "DrewPlayer"; it is offset by 100 on the X axis inside its parent, "Player Position". I added a Rotation Constraint to the item I didn't want to rotate (a.k.a. "DrewPlayer"), dragged "Player Seat" into its source list, and when I rotated "Player Seat" I got the desired effect shown in the gif below.
Thanks #aaronrums! Saved me some effort here. :)
Related
Hello, I am having a problem and I don't know how to solve it.
I want to create a 3D puzzle where the player needs to move multiple 3D objects to fit them together. I already know how to implement the movement (LeanTouch), but I need a way to recognize when two objects touch each other in specific places. Then I would use the transform to combine them. Does anyone have an idea how to solve this?
One way to solve this is to create child objects with the colliders in the specific places you want to detect collision at. Then in OnCollisionEnter() you can practically combine them by creating a new parent of the two objects. Here's an example of an approach I took:
I set up my colliders like this:
Then on the individual colliders I added this code. Front is just the tag attached to the collider shown in the hierarchy.
void OnCollisionEnter(Collision collision)
{
    if (collision.gameObject.CompareTag("Front"))
    {
        var newParent = new GameObject();
        newParent.transform.SetParent(collision.transform.parent.parent); // attach the new container where the other piece's root used to sit
        collision.transform.parent.SetParent(newParent.transform); // .parent because the collider is a child of the piece
        transform.parent.SetParent(newParent.transform); // re-parent this piece's root as well
    }
}
It's not perfect, but the result is that the objects were combined under the same parent.
Alternatively, you could simply make one cube the child of the other by removing newParent and replacing it with collision.transform.parent, as sketched below.
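That alternative could look like this (a sketch based on the description above, not code from the project):

void OnCollisionEnter(Collision collision)
{
    if (collision.gameObject.CompareTag("Front"))
    {
        // collision.transform.parent is the other piece's root (the collider is its child);
        // transform.parent is this piece's root, which gets attached directly under it.
        transform.parent.SetParent(collision.transform.parent);
    }
}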
You should probably use colliders for that, and then use the collision event.
I have a game I am creating in Unity. It has a table with 30 cubes on it. I want a user to be able to shuffle the cubes on the table using their mouse/touch.
I am currently using a ray to get the initial table/cube hit point, then accessing the Rigidbody component on the cube to apply a force using AddForceAtPosition, all inside OnMouseDrag. I am also pulling out my hair trying to figure out how to apply the force in the direction from the mouse's last position to the hit point on the cube's rigidbody.
Can someone please help me? An example would be great. I would share my code but I am a spaghetti code monster and fear criticism... Thanks much!
If I understood correctly, is something like this what you want to achieve? https://www.youtube.com/watch?v=5Zli2CJGAtU&feature=youtu.be
If so, first add your cubes and table to a new layer to use as a layer mask; this step is only necessary if you don't want your ray to hit other colliders that might mess with your result.
After this, add a tag to all the cubes you want to apply the force to; in my case I added a new tag called "Cubes".
You can search YouTube if you don't know how to do this; there are plenty of tutorials to help you.
After that, create a new script and attach it to the camera. Here is my script:
using UnityEngine;

public class CameraForceMouse : MonoBehaviour
{
    public LayerMask layerMask;

    RaycastHit hit;
    Vector3 lastPosition;

    // Start is called before the first frame update
    void Start()
    {
        lastPosition = new Vector3();
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetMouseButton(0))
        {
            if (Physics.Raycast(Camera.main.ScreenPointToRay(Input.mousePosition), out hit, 200f, layerMask))
            {
                if (hit.transform.CompareTag("Cubes"))
                {
                    Rigidbody rb = hit.transform.GetComponent<Rigidbody>();
                    Vector3 force = (hit.point - lastPosition).normalized * 55f;
                    rb.AddForceAtPosition(force, hit.point);
                }

                lastPosition = hit.point;
            }
        }
    }
}
Just want to say that there are a lot of ways to achieve the same thing; this is the way I decided to do it, but there are probably other (maybe even better) ways of doing this.
Explaining the code:
The first thing is the public LayerMask: before you play the game, select your camera in the scene, find the script in the Inspector, and choose the layer you added your cubes and table to.
If you don't know how raycasting works, here is the documentation: https://docs.unity3d.com/ScriptReference/Physics.Raycast.html (you can also search YouTube).
The important thing here is the new vector being created to define the direction of the force.
First we declare a field called lastPosition. Every frame while the mouse button is held, we check for a hit; if there is one, we build a new vector going from the last position of the mouse to its current position, Vector3 force = (hit.point - lastPosition). Then we normalize it, because we only care about the direction, and multiply it by the amount of force we want (you could make that a nice variable, but I just put the value there directly).
Then, after the force has been applied at the point where the ray hit the object, you have to record that point as the new last position, since you are moving on to the next frame. Remember to put it outside of the tag check: that check decides whether the object the ray hit is a cube we want to push or the table. If the line lastPosition = hit.point; were inside the check, it would only assign a new lastPosition value when we hit a cube, which would lead to bugs.
I'm not saying my answer is perfect; I can think of a few bugs that can happen (when you click outside the table, for example), but it's a good start, and I think you can fix the rest.
In Unity3D I have a GameObject with a box collider and a physic material attached. The hand controller model also has a box collider and a physic material. When the GameObject collides with the hand controller, "CollideWithController" is logged to the console. However, the GameObject does not change direction.
if (other.CompareTag("HandController"))
{
    Debug.Log("CollideWithController");
    var magnitude = 1000;
    var force = transform.position - other.transform.position;
    force.Normalize();
    gameObject.GetComponent<Rigidbody>().AddForce(force * magnitude);
}
Without seeing/knowing what other is, it's hard to say, but generally there could be two problems:
transform.position - other.transform.position doesn't actually result in the direction you expect. To determine this, print the value or display it using Debug.DrawRay.
The force you're adding might not be enough to change direction, or there are other forces canceling it out.
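A quick way to check the direction (a sketch meant to be dropped into the same block as the code in the question): draw the vector in the Scene view for a couple of seconds.

if (other.CompareTag("HandController"))
{
    Vector3 force = transform.position - other.transform.position;
    // Red line from this object along the normalized direction, visible for 2 seconds.
    Debug.DrawRay(transform.position, force.normalized, Color.red, 2f);
}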
var force = transform.position - other.transform.position;
This gives you the direction vector that would move other to the position of this object. So you are pushing into the collision; reversing the vector should fix your issue.
When you modify physics values, like here with AddForce, you should do it in FixedUpdate to minimize the risk of bugs. (It can work outside of FixedUpdate, but in most cases it will create unwanted behavior.)
In your code you use AddForce, but if the GameObject's current momentum is greater than the force you add, the added force will just slow its current movement rather than reverse it.
I think the best way to change the movement of your GameObject is to take its velocity vector and reflect it.
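A minimal sketch of that idea (mine, not the asker's code; it assumes a regular, non-trigger collision so a contact normal is available, and the tag name comes from the question):

using UnityEngine;

public class BounceOffHand : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        if (collision.gameObject.CompareTag("HandController"))
        {
            Rigidbody rb = GetComponent<Rigidbody>();
            Vector3 normal = collision.contacts[0].normal;      // surface normal at the contact point
            rb.velocity = Vector3.Reflect(rb.velocity, normal); // mirror the current velocity
        }
    }
}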
One more piece of advice: don't rely on physics for gameplay systems. Physics is very powerful and can bring a lot of emergent behavior, but it's a chaotic system, and when it comes to gameplay you don't want chaos; you want control and consistency.
Unity docs:
Velocity : https://docs.unity3d.com/ScriptReference/Rigidbody-velocity.html
Reflect : https://docs.unity3d.com/ScriptReference/Vector3.Reflect.html
Context for the problem: I've got multiple sorts of cubes set up, placed in an array. I've made it so every cube type has its own parent, so all the cubes are grouped together with the other cubes of the same type.
I'm trying to move all the cubes of the same type up in the air, away from the others, so it becomes more visible how many cubes of each type there are. My thought was that when any of the child objects gets clicked, all of the objects under that parent move, but so far no success.
Anyone got any tips as to how this might be able to work?
Note that I don't intend to use a raycast, as it seems overly complicated for this purpose.
If your cubes have colliders you can use OnMouseDown, in a script attached to each cube:
void OnMouseDown()
{
    // Move the clicked cube's parent, which takes the whole group with it.
    transform.parent.Translate(Vector3.up * WhateverDistance);
}
If your cubes don't have colliders, you'll have to use a raycast to detect which cube was clicked.
I am instantiating an object with the following code:
public Object ball;
Instantiate(ball, hit.point, Quaternion.identity);
where hit.point is a position and ball is a prefab I created. Unfortunately this prefab has a position animation (a jumping ball), and because of that it stays at the animation's position as it animates. I can't even Translate it.
How can I move it, or change the animation somehow?
There is more than one way to solve this problem, depending on your other goals/constraints if any.
One simple approach is to separate this problem into two spaces, via the introduction of an empty parent node.
If you construct..
[Empty Parent Node] (created dynamically)
|- [Ball] (created from your prefab)
..then you can still apply the animation to [Ball], now within a "local space" defined by [Empty Parent Node]. Further, you can now specify an arbitrary position for the parent node, which acts to place the ball overall.
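A minimal sketch of that construction (the spawner class and method names are mine): create the empty parent at the target point and instantiate the animated ball under it.

using UnityEngine;

public class BallSpawner : MonoBehaviour
{
    public GameObject ball; // the animated ball prefab, as in the question

    public void SpawnAt(Vector3 point)
    {
        var parentNode = new GameObject("Empty Parent Node"); // created dynamically
        parentNode.transform.position = point;                // placing the parent places the ball overall

        // The ball is created as a child, so its jump animation plays in the parent's local space.
        Instantiate(ball, parentNode.transform.position, Quaternion.identity, parentNode.transform);
    }
}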
Another possible solution is to change your animation from a position animation to a localPosition animation. Then, you'll be able to change its Transform.position attribute with scripts.