Rotation with a limit in Unity 3D - C#

I am building an application in Unity.
When the user taps a GameObject, the model's position is changed.
Now I need the rotation to change as well when the position changes, limited to a given axis/angle.
Here is my code that changes the position when the GameObject is tapped:
using UnityEngine;
using System.Collections;

public class centertheroombox : MonoBehaviour
{
    public GameObject box345;
    public int speed = 1;

    // Update is called once per frame
    void Update()
    {
        foreach (Touch touch in Input.touches)
        {
            if (touch.phase == TouchPhase.Began || touch.phase == TouchPhase.Ended)
            {
                Ray ray = Camera.main.ScreenPointToRay(touch.position);
                RaycastHit hit;
                if (Physics.Raycast(ray, out hit, 1000.0f))
                {
                    if (hit.collider.gameObject.name == "Box345")
                    {
                        Debug.Log("yupiee pressed");
                        box345.transform.position = new Vector3(3, 0, -9);
                        //box345.transform.Rotate(Vector3.right, Time.deltaTime);
                        //box345.transform.Rotate = new Vector3(353,0,0);
                    }
                }
            }
        }
    }
}
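The commented-out lines suggest the goal is to apply a fixed rotation, limited to the X axis, whenever the object is repositioned. A minimal sketch of one way to do that; the helper name, the clamp range of -45 to 45 degrees, and the -7 degree angle are illustrative assumptions, not values from the question:

    // Hypothetical helper: snap the object to a target position and apply a clamped X rotation.
    void MoveAndRotate(GameObject obj, Vector3 targetPosition, float xAngle)
    {
        // Limit the rotation to an assumed allowed range of -45..45 degrees on the X axis.
        float clampedX = Mathf.Clamp(xAngle, -45f, 45f);
        obj.transform.position = targetPosition;
        // Quaternion.Euler sets an absolute rotation; Transform.Rotate (as in the commented
        // line above) would instead rotate incrementally on every call.
        obj.transform.rotation = Quaternion.Euler(clampedX, 0f, 0f);
    }

Inside the hit check this could be called as MoveAndRotate(box345, new Vector3(3, 0, -9), -7f); a rotation of -7 degrees is the same orientation as the 353 degrees hinted at in the second commented line.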

Related

Moving objects independently of each other, with the ability to move at the same time in Unity 2D

I have two objects. I need to move one object up and down (hold and drag) and, at the same time, move the other object up and down (hold and drag) independently. Something like this:
Example
After searching the Internet, I found a small script, but it only works for one touch at a time and only drags one object. Also, if I touch with a second finger, the object jumps to the second finger's position:
public class Move : MonoBehaviour
{
    private Vector3 offset;

    void OnMouseDown()
    {
        offset = gameObject.transform.position - Camera.main.ScreenToWorldPoint(new Vector3(-2.527f, Input.mousePosition.y));
    }

    void OnMouseDrag()
    {
        Vector3 curScreenPoint = new Vector3(-2.527f, Input.mousePosition.y);
        Vector3 curPosition = Camera.main.ScreenToWorldPoint(curScreenPoint) + offset;
        transform.position = curPosition;
    }
}
I just can't understand how I can make two objects drag and drop at the same time. I am new to Unity and, honestly, I have problems with the controls.
For touch screens I suggest using Touch instead of mouse events. The Touch class supports multiple simultaneous touches and exposes useful information such as Touch.phase (Began, Moved, Stationary, etc.). I've written a script that you can attach to multiple objects tagged "Draggable" so that they move independently. Drag the other object (the one that should also move when you use one finger) into the otherObj field and it should work. This version only works for two objects.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Serialization;

public class PlayerActionHandler : MonoBehaviour
{
    private Camera _cam;
    public bool beingDragged;
    public Vector3 offset;
    public Vector3 currPos;
    public int fingerIndex;
    public GameObject otherObj;

    // Start is called before the first frame update
    void Start()
    {
        _cam = Camera.main;
    }

    // Update is called once per frame
    void Update()
    {
        // ONLY WORKS FOR 2 OBJECTS
        if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            var raycast = _cam.ScreenPointToRay(Input.GetTouch(0).position);
            RaycastHit hit;
            if (Physics.Raycast(raycast, out hit))
            {
                // Use this tag if you want only chosen objects to be draggable
                if (hit.transform.CompareTag("Draggable"))
                {
                    if (hit.transform.name == name)
                    {
                        // Set beingDragged and the finger index so we can move the object
                        beingDragged = true;
                        offset = gameObject.transform.position - _cam.ScreenToWorldPoint(Input.GetTouch(0).position);
                        offset = new Vector3(offset.x, offset.y, 0);
                        fingerIndex = 0;
                    }
                }
            }
        }
        else if (Input.touchCount == 1 && beingDragged)
        {
            otherObj.transform.SetParent(transform);
            if (Input.GetTouch(fingerIndex).phase == TouchPhase.Moved)
            {
                Vector3 pos = _cam.ScreenToWorldPoint(Input.GetTouch(fingerIndex).position);
                currPos = new Vector3(pos.x, pos.y, 0) + offset;
                transform.position = currPos;
            }
        }
        // ONLY WORKS FOR 2 OBJECTS_END
        else if (beingDragged && Input.touchCount > 1)
        {
            // We tie each finger to an object so the object only moves when its finger moves
            if (Input.GetTouch(fingerIndex).phase == TouchPhase.Moved)
            {
                Vector3 pos = _cam.ScreenToWorldPoint(Input.GetTouch(fingerIndex).position);
                currPos = new Vector3(pos.x, pos.y, 0) + offset;
                transform.position = currPos;
            }
        }
        else if (Input.touchCount > 0)
        {
            for (var i = 0; i < Input.touchCount; i++)
            {
                // Find the fingers which just touched the screen
                if (Input.GetTouch(i).phase == TouchPhase.Began)
                {
                    var raycast = _cam.ScreenPointToRay(Input.GetTouch(i).position);
                    RaycastHit hit;
                    if (Physics.Raycast(raycast, out hit))
                    {
                        // Use this tag if you want only chosen objects to be draggable
                        if (hit.transform.CompareTag("Draggable"))
                        {
                            if (hit.transform.name == name)
                            {
                                // Set beingDragged and the finger index so we can move the object
                                beingDragged = true;
                                offset = gameObject.transform.position - _cam.ScreenToWorldPoint(Input.GetTouch(i).position);
                                offset = new Vector3(offset.x, offset.y, 0);
                                fingerIndex = i;
                            }
                        }
                    }
                }
            }
        }
        else if (Input.touchCount == 0)
        {
            // If all fingers are lifted, no object is being dragged
            beingDragged = false;
        }
    }
}

Rotating an object at which I am looking (ARFoundation)

Two items are placed in AR. I would like to rotate one of them while it is in the middle of the screen (using touch controls and a raycast).
I would like the rotation part of the code to run only when the camera is looking at the object.
This is the script I use on an object placed in the scene. The Move function works, since it is only activated when you tap the screen where the object is and drag it away (raycast to a collider). However, the rotation currently affects all objects in the scene whenever I swipe anywhere, because swiping on the collider moves the object, so to rotate I have to swipe outside that collider.
public class MoveObject : MonoBehaviour
{
    public bool holding;
    private Component rotator;
    int doubleTap = 0;

    void Start()
    {
        holding = false;
    }

    void Update()
    {
        if (holding)
        {
            Move();
        }
        // One finger
        if (Input.touchCount == 1)
        {
            // Tap on object
            if (Input.GetTouch(0).phase == TouchPhase.Began)
            {
                Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
                RaycastHit hitTouch;
                if (Physics.Raycast(ray, out hitTouch, 100f))
                {
                    if (hitTouch.transform == transform)
                    {
                        holding = true;
                    }
                }
            }
            // Release
            if (Input.GetTouch(0).phase == TouchPhase.Ended)
            {
                holding = false;
            }
        }
        if (Input.touchCount == 1 && !holding) // THIS is the rotation part
        {
            // Get touch 0
            Touch touch0 = Input.GetTouch(0);
            // Apply rotation
            if (touch0.phase == TouchPhase.Moved)
            {
                transform.Rotate(0f, touch0.deltaPosition.x * 0.5f, 0f);
            }
        }
    }

    void Move()
    {
        RaycastHit hit;
        Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
        // The GameObject this script is attached to should be on layer "Surface"
        if (Physics.Raycast(ray, out hit, 30.0f, LayerMask.GetMask("Surface")))
        {
            transform.position = new Vector3(hit.point.x,
                                             transform.position.y,
                                             hit.point.z);
        }
    }
}
You'll probably want another flag, similar to holding, so that you don't rotate an object that merely passes through the center of the screen during a swipe, for example while the camera is moving.
You can use Camera.ViewportPointToRay with input of new Vector3(0.5f, 0.5f, 0) to create a ray coming from the center of the screen. From there, the process is similar to your holding code.
Also, with a little rearranging, you can reuse the raycast that already checks whether this object was touched, since it also tells you whether any object was touched. Only testing for the start of a rotation when that first raycast hits nothing avoids rotating (and raycasting a second time unnecessarily) when this object sits in the center of the screen but another object was touched.
As a sidenote, it's recommended to cache the result of Camera.main because it does a FindGameObjectsWithTag internally every time you reference it and that can add up to extra computation time.
Altogether, it could look like this:
private bool rotating;
private Camera cam;

void Start()
{
    holding = false;
    rotating = false;
    cam = Camera.main;
}

void Update()
{
    // One finger
    if (Input.touchCount == 1)
    {
        Touch touch0 = Input.GetTouch(0);
        if (touch0.phase == TouchPhase.Began)
        {
            Ray ray;
            RaycastHit hitTouch;
            // Test hold start
            ray = cam.ScreenPointToRay(touch0.position);
            if (Physics.Raycast(ray, out hitTouch, 100f))
            {
                if (hitTouch.transform == transform)
                {
                    holding = true;
                }
            }
            else // Avoid rotating/raycasting again in the situation where this object
                 // may be in the center of the screen but this or another object was touched.
            {
                // Test rotate start
                ray = cam.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0));
                if (Physics.Raycast(ray, out hitTouch, 100f))
                {
                    if (hitTouch.transform == transform)
                    {
                        rotating = true;
                    }
                }
            }
        }
        else if (touch0.phase == TouchPhase.Moved)
        {
            if (holding)
            {
                Move();
            }
            else if (rotating)
            {
                transform.Rotate(0f, touch0.deltaPosition.x * 0.5f, 0f);
            }
        }
        // Release
        else if (touch0.phase == TouchPhase.Ended)
        {
            holding = false;
            rotating = false;
        }
    }
}

How can I make the player idle when not moving the mouse?

When I move the mouse around, the player walks, facing the mouse cursor.
Now I want the player to go idle when I am not moving the mouse.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class WorldInteraction : MonoBehaviour
{
    public int speed = 5; // Determines how quickly the object moves towards a position
    public float rotationSpeed = 5f;
    private Animator _animator;
    private bool toMove = true;
    private Vector3 tmpMousePosition;
    UnityEngine.AI.NavMeshAgent playerAgent;

    private void Start()
    {
        tmpMousePosition = Input.mousePosition;
        _animator = GetComponent<Animator>();
        _animator.CrossFade("Idle", 0);
        playerAgent = GetComponent<UnityEngine.AI.NavMeshAgent>();
    }

    private void Update()
    {
        if (Input.GetMouseButtonDown(0) && toMove == false &&
            !UnityEngine.EventSystems.EventSystem.current.IsPointerOverGameObject())
        {
            toMove = true;
        }
        if (Input.GetMouseButtonDown(1) && toMove == true)
        {
            toMove = false;
        }
        if (toMove == true)
        {
            GetInteraction();
        }
    }

    void GetInteraction()
    {
        if (tmpMousePosition != Input.mousePosition)
        {
            tmpMousePosition = Input.mousePosition;
            _animator.CrossFade("Walk", 0);
            Ray interactionRay = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit interactionInfo;
            if (Physics.Raycast(interactionRay, out interactionInfo, Mathf.Infinity))
            {
                GameObject interactedObject = interactionInfo.collider.gameObject;
                if (interactedObject.tag == "Interactable Object")
                {
                    interactedObject.GetComponent<Interactable>().MoveToInteraction(playerAgent);
                }
                else
                {
                    playerAgent.destination = interactionInfo.point;
                }
            }
        }
        else
        {
            _animator.CrossFade("Idle", 0);
        }
    }
}
I'm using the variable tmpMousePosition to check whether the mouse is moving. The problem is that while the mouse is moving and the player is in "Walk" mode, the player stutters every second or so.
The idea is: when the mouse is moving, move the player; when the mouse is not moving, put the player in Idle.
In the Update function I'm using a bool to stop/continue the interaction, like a switch, with the left/right mouse buttons. But now I want to use the mouse movement itself to toggle the player between Walk and Idle.
Just read the movement through Input.GetAxis("Mouse X") and, if the mouse isn't moving, play Idle.
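A minimal sketch of how that check could look inside GetInteraction; checking both axes and the 0.01f threshold are assumptions for illustration, the answer itself only mentions "Mouse X":

    void GetInteraction()
    {
        // Mouse delta since the last frame; near zero on both axes means the mouse is idle.
        float mouseDeltaX = Input.GetAxis("Mouse X");
        float mouseDeltaY = Input.GetAxis("Mouse Y");

        if (Mathf.Abs(mouseDeltaX) > 0.01f || Mathf.Abs(mouseDeltaY) > 0.01f)
        {
            _animator.CrossFade("Walk", 0);
            // ... raycast and set playerAgent.destination as in the original script ...
        }
        else
        {
            _animator.CrossFade("Idle", 0);
        }
    }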

How to destroy a moving object after collision with a falling Rigidbody object in Unity

Scenario
I have a sphere with a SphereCollider and another prefab with a Rigidbody. The problem is that while I am moving the sphere with touch, the sphere is not destroyed when it collides with the falling Rigidbody object. The sphere is only destroyed when I am not moving it and it collides with the falling Rigidbody object.
The code attached to the sphere:
using UnityEngine;
using System.Collections;

public class moving : MonoBehaviour
{
    float speed = 100f;
    public GameObject hero;

    void Start()
    {
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.touchCount > 0)
        {
            // The screen has been touched, so store the touch
            Touch touch = Input.GetTouch(0);
            if (touch.phase == TouchPhase.Stationary || touch.phase == TouchPhase.Moved)
            {
                // If the finger is on the screen, move the object smoothly to the touch position
                Vector3 touchPosition = Camera.main.ScreenToWorldPoint(new Vector3(touch.position.x, touch.position.y, 100));
                transform.position = Vector3.Lerp(transform.position, touchPosition, Time.deltaTime * speed);
            }
        }
    }

    void OnCollisionEnter(Collision coll)
    {
        Debug.Log(coll.gameObject.name);
        if (coll.gameObject.name == "Cube(Clone)")
        {
            Debug.Log(coll.gameObject.name);
            Destroy(hero);
        }
    }
}
The code attached to the falling Rigidbody object:
// Fields used by this snippet (assumed to be declared as public fields on the spawner class):
// public GameObject cube; public float delay; public float clone_delay;

void Start()
{
    InvokeRepeating("spawn", delay, clone_delay);
}

void spawn()
{
    Instantiate(cube, new Vector3(Random.Range(6, -6), 10, 0), Quaternion.identity);
}
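A common approach when a touch-dragged object misses collisions is to move it through the physics engine rather than by writing transform.position directly. The sketch below illustrates that idea under an assumption not stated in the question: the sphere is given a kinematic Rigidbody and moved with Rigidbody.MovePosition in FixedUpdate, so the physics engine can track the motion between frames.

    using UnityEngine;

    // Sketch only: assumes a kinematic Rigidbody is attached to the sphere.
    public class MovingWithRigidbody : MonoBehaviour
    {
        public float speed = 100f;
        private Rigidbody rb;

        void Start()
        {
            rb = GetComponent<Rigidbody>();
        }

        void FixedUpdate()
        {
            if (Input.touchCount > 0)
            {
                Touch touch = Input.GetTouch(0);
                if (touch.phase == TouchPhase.Stationary || touch.phase == TouchPhase.Moved)
                {
                    // Same screen-to-world conversion as the original script.
                    Vector3 touchPosition = Camera.main.ScreenToWorldPoint(
                        new Vector3(touch.position.x, touch.position.y, 100));
                    // MovePosition lets the physics engine register the movement and detect collisions.
                    rb.MovePosition(Vector3.Lerp(transform.position, touchPosition, Time.deltaTime * speed));
                }
            }
        }
    }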

The hit script runs for 3D objects but not 2D

This is the script for my game, but it only works for 3D objects like cubes, not for 2D images in the game. How can I fix it?
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class touchinput : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                Destroy(hit.collider.gameObject);
            }
        }
    }
}
I tried changing it to this, but I get lots of errors and don't know how to fix them:
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class touchinput : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Ray2D ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            RaycastHit2D hit;
            if (Physics2D.Raycast(Ray2D, out hit))
            {
                Destroy(hit.collider.gameObject);
            }
        }
    }
}
Physics.Raycast indeed doesn't work on 2D colliders.
I found this method the other day; you can try it:
if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
{
    Vector3 wp = Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position);
    Vector2 touchPos = new Vector2(wp.x, wp.y);
    // collider2D is the Collider2D attached to this GameObject (newer Unity versions removed
    // this shortcut property, so use GetComponent<Collider2D>() there instead).
    if (collider2D == Physics2D.OverlapPoint(touchPos))
    {
        // your code
    }
}
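As a usage sketch, that snippet could be folded back into the asker's script roughly like this. The class name touchinput2d and the cached myCollider field are illustrative assumptions; Physics2D.OverlapPoint returns the Collider2D at that world point, or null if there is none, so comparing it to this object's own collider tells you whether this object was touched.

    using UnityEngine;

    public class touchinput2d : MonoBehaviour // hypothetical name for the 2D variant
    {
        private Collider2D myCollider;

        void Start()
        {
            // Fetch the attached 2D collider once, since the collider2D shortcut no longer exists.
            myCollider = GetComponent<Collider2D>();
        }

        void Update()
        {
            if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
            {
                // Convert the screen touch to a world point and test it against 2D colliders.
                Vector3 wp = Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position);
                Vector2 touchPos = new Vector2(wp.x, wp.y);
                if (Physics2D.OverlapPoint(touchPos) == myCollider)
                {
                    Destroy(gameObject);
                }
            }
        }
    }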
