I am developing a mobile game with touch controls. I can select a non-moving GameObject very easily and it responds; however, it's very difficult to select it while it's moving, since it's kind of small (it's falling, for example; everything is physics based). Is there a way to increase the touch radius of the GameObject so that it can be pressed more easily, or is there another solution?
private void Update() {
    // User input (touches) will select cubes
    if (Input.touchCount > 0) {
        Touch touch = Input.GetTouch(0);
    }
    Touch[] touches = Input.touches;
    foreach (var touchInput in touches) {
        RaycastHit2D hit = Physics2D.Raycast(Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position), Vector2.zero);
        if (hit.collider != null) {
            selectedCube = hit.collider.gameObject;
            selectedCube.GetComponent<SpriteRenderer>().color = Color32.Lerp(defaultColor, darkerColor, 1);
        }
    }
}
Rather than increasing the object's collider size (which you discussed in the comments), how about approaching this the opposite way? Check in an area around the touch for a collision, instead of just the single point, using Physics2D.CircleCast:
private void Update() {
    // User input (touches) will select cubes
    Touch[] touches = Input.touches;
    foreach (var touchInput in touches) {
        float radius = 1.0f; // Change as needed based on testing
        // Cast a circle around this touch's position instead of a single-point ray
        RaycastHit2D hit = Physics2D.CircleCast(Camera.main.ScreenToWorldPoint(touchInput.position), radius, Vector2.zero);
        if (hit.collider != null) {
            selectedCube = hit.collider.gameObject;
            selectedCube.GetComponent<SpriteRenderer>().color = Color32.Lerp(defaultColor, darkerColor, 1);
        }
    }
}
Note that this won't be great if you've got tons of selectable objects flush against each other... but then again, increasing the collider size in that case wouldn't help either. (In that case I'd say just increase the object size, since there's no other way to improve the user's accuracy, or allow multi-select and use Physics2D.CircleCastAll.)
Hope this helps! Let me know if you have any questions.
EDIT: For better accuracy, since the "first" result returned by Physics2D.CircleCast may be arbitrarily selected, you can instead use Physics2D.CircleCastAll to get all objects within the touch radius, and only select the one which is closest to the original touch point:
private void Update() {
    // User input (touches) will select cubes
    Touch[] touches = Input.touches;
    foreach (var touchInput in touches) {
        float radius = 1.0f; // Change as needed based on testing
        Vector2 worldTouchPoint = Camera.main.ScreenToWorldPoint(touchInput.position);
        RaycastHit2D[] allHits = Physics2D.CircleCastAll(worldTouchPoint, radius, Vector2.zero);

        // Find the hit collider closest to the touch point
        float closestDist = Mathf.Infinity;
        GameObject closestObject = null;
        foreach (RaycastHit2D hit in allHits) {
            // Record the object if it's the first one we check,
            // or if it's closer to the touch point than the previous best
            float dist = Vector2.Distance(hit.collider.transform.position, worldTouchPoint);
            if (closestObject == null || dist < closestDist) {
                closestObject = hit.collider.gameObject;
                closestDist = dist;
            }
        }

        // Finally, select the object we chose based on the criteria
        if (closestObject != null) {
            selectedCube = closestObject;
            selectedCube.GetComponent<SpriteRenderer>().color = Color32.Lerp(defaultColor, darkerColor, 1);
        }
    }
}
I'm creating a touch-controlled game that rotates platforms as you touch and drag. However, I can't get it to ignore touches over the jump button, which makes it think you've very quickly moved the same finger over the button as one long swipe. I've tried tracking the start point and the touch id, but I've realised that none of these will work without a way to detect whether the touch is over the jump button.
My code for detecting touches:
int x = 0;
while (x < Input.touchCount)
{
    Touch touch = Input.GetTouch(x); // get the touch
    if (touch.phase == TouchPhase.Began) // check for the first touch
    {
        fp = touch.position;
        lp = touch.position;
    }
    else if (touch.phase == TouchPhase.Moved) // update the last position based on where they moved
    {
        lp = touch.position;
    }
    else if (touch.phase == TouchPhase.Ended) // check if the finger is removed from the screen
    {
        lp = touch.position; // last touch position. Omitted if you use a list
    }
    if (!EventSystem.current.IsPointerOverGameObject(x))
    {
        int movement = ((int)(fp.x - lp.x));
        if (previous_fp != fp ^ x != previous_id)
        {
            previous_rotate = 0;
        }
        previous_fp = fp;
        previous_id = x;
        if (!game_over) { Platform_Control.transform.Rotate(0f, 0f, -((movement - previous_rotate) * 0.15f), Space.World); }
        previous_rotate = movement;
    }
    x++;
}
Afaik one issue here is that you are using x, the index of the touch. IsPointerOverGameObject expects a fingerId as the parameter!
The index x of the touch you pass to GetTouch is NOT necessarily equal to Touch.fingerId!
It should rather be
if(!EventSystem.current.IsPointerOverGameObject(touch.fingerId))
Btw, in general, if you are going to iterate through all touches anyway, it is easier to just use Input.touches:
Returns list of objects representing status of all touches during last frame. (Read Only) (Allocates temporary variables).
var touches = Input.touches;
which allows you to filter touches right away, e.g.
using System.Linq;

...

var touches = Input.touches;
var validTouches = touches.Where(touch => !EventSystem.current.IsPointerOverGameObject(touch.fingerId));
foreach (var touch in validTouches)
{
    ...
}
This uses Linq Where and is basically equivalent to doing
var touches = Input.touches;
foreach (var touch in touches)
{
    // Skip any touch that is over a UI element
    if (EventSystem.current.IsPointerOverGameObject(touch.fingerId)) continue;

    ...
}
In general, it is still confusing why the same shared variables fp and lp are used for all possible touches.
You should probably rather use
var moved = touch.deltaPosition;
and work with that value instead.
Or, in general, use only one of these touches by storing the first valid touch.fingerId and later checking whether a touch still has that id, ignoring all other touches in the meantime.
private int currentFingerId = int.MinValue;

...

// If you check that one first you can avoid a lot of unnecessary overhead ;)
if (!gameOver)
{
    var touches = Input.touches;
    var validTouches = touches.Where(touch => !EventSystem.current.IsPointerOverGameObject(touch.fingerId));
    foreach (var touch in validTouches)
    {
        if (currentFingerId != int.MinValue && touch.fingerId != currentFingerId) continue;

        switch (touch.phase)
        {
            case TouchPhase.Began:
                currentFingerId = touch.fingerId;
                break;

            case TouchPhase.Moved:
                // "touch.deltaPosition.x" is basically exactly what you calculated
                // yourself using the "fp", "lp", "previous_rotate" and "movement"
                var moved = touch.deltaPosition.x * 0.15f;
                Platform_Control.transform.Rotate(0f, 0f, moved, Space.World);
                break;

            case TouchPhase.Ended:
                currentFingerId = int.MinValue;
                break;
        }
    }
}
Check whether the EventSystem currently considers any GameObject selected. If it does, the touch was performed over the button; otherwise the touch was not over the button.
int x = 0;
while (x < Input.touchCount)
{
    Touch touch = Input.GetTouch(x); // get the touch
    if (touch.phase == TouchPhase.Began) // check for the first touch
    {
        fp = touch.position;
        lp = touch.position;
    }
    else if (touch.phase == TouchPhase.Moved) // update the last position based on where they moved
    {
        lp = touch.position;
    }
    else if (touch.phase == TouchPhase.Ended) // check if the finger is removed from the screen
    {
        lp = touch.position; // last touch position. Omitted if you use a list
    }
    if (EventSystem.current.currentSelectedGameObject == null)
    {
        int movement = ((int)(fp.x - lp.x));
        if (previous_fp != fp ^ x != previous_id)
        {
            previous_rotate = 0;
        }
        previous_fp = fp;
        previous_id = x;
        if (!game_over) { Platform_Control.transform.Rotate(0f, 0f, -((movement - previous_rotate) * 0.15f), Space.World); }
        previous_rotate = movement;
    }
    x++;
}
EventSystem.current.IsPointerOverGameObject() is what you need. The EventSystem is UI related, so it detects UI elements. If you want to ignore UI elements you can use this:
if (!EventSystem.current.IsPointerOverGameObject())
{
    // Your code goes here; it only runs when the pointer is not over a UI element
}
If you want to ignore the UI object completely and don't need it to be clickable, you should disable raycasting on that UI element.
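For example (a minimal sketch, not part of the original question; it assumes the element has a UI Graphic such as an Image attached), you can untick Raycast Target on the component in the inspector, or turn it off from code:

using UnityEngine;
using UnityEngine.UI;

public class DisableUiRaycast : MonoBehaviour // illustrative name
{
    void Awake()
    {
        // Stops this Image from receiving/blocking pointer raycasts,
        // so touches pass straight through to whatever is behind it.
        GetComponent<Image>().raycastTarget = false;
    }
}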
I have two objects. The point is that I need to move the object up and down (hold and drag), and at the same time I need to move the other object up and down (hold and drag) independently. Something like this:
Example
After searching the Internet, I found a small script, but it only works for one touch at a time and only drags one object. Plus, if I touch with a second finger, the object jumps to the second finger's position:
public class Move : MonoBehaviour {

    private Vector3 offset;

    void OnMouseDown() {
        offset = gameObject.transform.position - Camera.main.ScreenToWorldPoint(new Vector3(-2.527f, Input.mousePosition.y));
    }

    void OnMouseDrag() {
        Vector3 curScreenPoint = new Vector3(-2.527f, Input.mousePosition.y);
        Vector3 curPosition = Camera.main.ScreenToWorldPoint(curScreenPoint) + offset;
        transform.position = curPosition;
    }
}
I just can’t understand how I can drag and drop two objects at the same time. I am new to Unity and honestly I have problems with controllers.
For touch screens I suggest you use Touch instead of mouse events. The Touch class supports multiple simultaneous touches and provides useful information such as Touch.phase (Began, Moved, Stationary, etc.). I've written a script which you can attach to multiple objects tagged "Draggable" that makes the objects move independently. Drag and drop the other object (the one that should also move when you use one finger) into the otherObj field and it should work. It only works for 2 objects in this version.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Serialization;

public class PlayerActionHandler : MonoBehaviour
{
    private Camera _cam;

    public bool beingDragged;
    public Vector3 offset;
    public Vector3 currPos;
    public int fingerIndex;
    public GameObject otherObj;

    // Start is called before the first frame update
    void Start()
    {
        _cam = Camera.main;
    }

    // Update is called once per frame
    void Update()
    {
        //ONLY WORKS FOR 2 OBJECTS
        if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            var raycast = _cam.ScreenPointToRay(Input.GetTouch(0).position);
            RaycastHit hit;
            if (Physics.Raycast(raycast, out hit))
            {
                //use this tag if you want only chosen objects to be draggable
                if (hit.transform.CompareTag("Draggable"))
                {
                    if (hit.transform.name == name)
                    {
                        //set being dragged and finger index so we can move the object
                        beingDragged = true;
                        offset = gameObject.transform.position - _cam.ScreenToWorldPoint(Input.GetTouch(0).position);
                        offset = new Vector3(offset.x, offset.y, 0);
                        fingerIndex = 0;
                    }
                }
            }
        }
        else if (Input.touchCount == 1 && beingDragged)
        {
            otherObj.transform.SetParent(transform);
            if (Input.GetTouch(fingerIndex).phase == TouchPhase.Moved)
            {
                Vector3 pos = _cam.ScreenToWorldPoint(Input.GetTouch(fingerIndex).position);
                currPos = new Vector3(pos.x, pos.y, 0) + offset;
                transform.position = currPos;
            }
        }
        //ONLY WORKS FOR 2 OBJECTS_END
        else if (beingDragged && Input.touchCount > 1)
        {
            //We tie each finger to an object so the object only moves when the tied finger moves
            if (Input.GetTouch(fingerIndex).phase == TouchPhase.Moved)
            {
                Vector3 pos = _cam.ScreenToWorldPoint(Input.GetTouch(fingerIndex).position);
                currPos = new Vector3(pos.x, pos.y, 0) + offset;
                transform.position = currPos;
            }
        }
        else if (Input.touchCount > 0)
        {
            for (var i = 0; i < Input.touchCount; i++)
            {
                //We find the fingers which just touched the screen
                if (Input.GetTouch(i).phase == TouchPhase.Began)
                {
                    var raycast = _cam.ScreenPointToRay(Input.GetTouch(i).position);
                    RaycastHit hit;
                    if (Physics.Raycast(raycast, out hit))
                    {
                        //use this tag if you want only chosen objects to be draggable
                        if (hit.transform.CompareTag("Draggable"))
                        {
                            if (hit.transform.name == name)
                            {
                                //set being dragged and finger index so we can move the object
                                beingDragged = true;
                                offset = gameObject.transform.position - _cam.ScreenToWorldPoint(Input.GetTouch(i).position);
                                offset = new Vector3(offset.x, offset.y, 0);
                                fingerIndex = i;
                            }
                        }
                    }
                }
            }
        }
        else if (Input.touchCount == 0)
        {
            //if all fingers are lifted no object is being dragged
            beingDragged = false;
        }
    }
}
I am trying to figure out how to detect when a touch is being moved back and forth in Unity.
So, as the image shows, I am looking for a way to detect when the touch moves from its starting position (1) to the second X (2), then back to somewhere near the starting position (3), all with a certain speed and within a time frame; essentially something like a shaking gesture. I'm really stuck on how to do this. Any help will be greatly appreciated.
an illustration of what I mean
So far I just know how to get the starting position of a touch.
Vector2 startingPos;
float shakeTime = 2f;

void Update()
{
    foreach (Touch touch in Input.touches)
    {
        if (touch.phase == TouchPhase.Began)
        {
            startingPos = touch.position;
        }
    }
}
float touchTimer = 0f;
float shakeTime = 2f;
float moveAwayLimit = 1f;   // distance in screen pixels
float moveBackLimit = .5f;  // distance in screen pixels
bool awayReached = false;
Vector2 startingPos;

void Update()
{
    foreach (Touch touch in Input.touches)
    {
        if (touch.phase == TouchPhase.Began)
        {
            startingPos = touch.position;
            touchTimer = 0;
            awayReached = false;
        }
        if (Vector2.Distance(touch.position, startingPos) > moveAwayLimit && touchTimer < shakeTime && !awayReached)
        {
            awayReached = true;
        }
        else if (Vector2.Distance(touch.position, startingPos) < moveBackLimit && touchTimer < shakeTime && awayReached)
        {
            Shake();
            awayReached = false; // reset so Shake() isn't called again every frame
        }
        touchTimer += Time.deltaTime;
    }
}
Something like this should work: it checks whether you move further than moveAwayLimit away from the start, and after that whether you come back within moveBackLimit of startingPos. Note that touch.position is measured in screen pixels, so in practice you will probably want much larger limits than 1 and 0.5.
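If you want the thresholds to feel similar across devices, one option (just a sketch, and not part of the answer above; it assumes Screen.dpi reports a usable value on your target devices, and the inch values are assumptions to tune) is to define the limits in inches and convert them to pixels once:

float moveAwayLimitInches = 0.4f; // assumed value, tune by testing
float moveBackLimitInches = 0.2f; // assumed value, tune by testing

void Start()
{
    // Screen.dpi can return 0 on some platforms, so fall back to a common density.
    float dpi = Screen.dpi > 0 ? Screen.dpi : 160f;
    moveAwayLimit = moveAwayLimitInches * dpi;
    moveBackLimit = moveBackLimitInches * dpi;
}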
Newbie...
Learning Unity to make mobile games. I have a small hex grid map whose tiles change to a random color when touched. I have a script for the camera that moves it around with a finger press and zooms in and out with two fingers.
I want the tiles to only change color on a single touch, and not change when I pan or zoom the camera. I have been trying for two days, have googled myself out, and can't seem to figure it out. It seems pretty simple.
Here is the code that is called from Update() when a raycast hits an object. This just changes the hex's color but I want it to only do it when not panning or zooming the camera.
void Touch_Hex(GameObject ourHitObject)
{
    Touch[] touches = Input.touches;
    if (touches[0].phase == TouchPhase.Ended)
    {
        MeshRenderer mr = ourHitObject.GetComponentInChildren<MeshRenderer>();
        mr.material.color = color[Random.Range(0, color.Length)];
    }
}
I've tried all different combos of phases etc. but just cannot get it to work the way I want. I'm thinking deltaTime or deltaPosition is the answer but I can't work it out.
Cheers
You need to use a combination of a boolean variable and the TouchPhase.Moved phase. When Input.GetTouch(0).phase == TouchPhase.Moved, the input is considered invalid. Set the boolean variable accordingly and use it later to decide whether or not to change the color.
bool onlyTouched;

void Update()
{
    if (Input.touchCount > 0)
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
        RaycastHit hit;

        if (Input.GetTouch(0).phase == TouchPhase.Began)
        {
            //Touched
            onlyTouched = true;
        }

        if (Input.GetTouch(0).phase == TouchPhase.Moved)
        {
            //Moved Finger (Now Invalid!)
            onlyTouched = false;
        }

        if (Input.GetTouch(0).phase == TouchPhase.Ended)
        {
            //Check if only touched then change color
            if (onlyTouched)
            {
                Debug.Log("Only touched Finger");
                //MeshRenderer mr = ourHitObject.GetComponentInChildren<MeshRenderer>();
                //mr.material.color = color[Random.Range(0, color.Length)];
                if (Physics.Raycast(ray, out hit))
                {
                    if (hit.transform != null)
                    {
                        MeshRenderer mr = hit.transform.gameObject.GetComponentInChildren<MeshRenderer>();
                        mr.material.color = color[Random.Range(0, color.Length)];
                        //hit.transform.GetComponent<MeshRenderer>().material.color = Color.red;
                    }
                }
            }
            else
            {
                Debug.Log("Finger was moved while touching");
            }

            //Reset onlyTouched for next read
            onlyTouched = false;
        }
    }
}
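If you find that even a careful tap occasionally registers as Moved, one variation (just a sketch, not part of the answer above; it reuses the onlyTouched flag, and the 10-pixel tolerance is an assumed value) is to only invalidate the tap once the finger has travelled more than a small distance, using touch.deltaPosition:

float movedPixels = 0f;
const float tapTolerancePixels = 10f; // assumed tolerance, tune for your devices

void Update()
{
    if (Input.touchCount > 0)
    {
        Touch touch = Input.GetTouch(0);

        if (touch.phase == TouchPhase.Began)
        {
            movedPixels = 0f;
            onlyTouched = true;
        }
        else if (touch.phase == TouchPhase.Moved)
        {
            // Accumulate how far the finger has travelled and only
            // invalidate the tap once it exceeds the tolerance.
            movedPixels += touch.deltaPosition.magnitude;
            if (movedPixels > tapTolerancePixels)
            {
                onlyTouched = false;
            }
        }
        // The TouchPhase.Ended handling stays the same as in the answer above.
    }
}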
Hello and thanks for reading this.
I made a little game in Unity and I finally got the movement controls with touch input to work.
But right now I face a little problem with combining the movement and jumping parts. I can't jump if I'm moving, BUT I can move if I'm jumping.
Each of my arrow keys contains a script that later calls the "RobotController" script to start the movement.
The ArrowRight and ArrowLeft scripts look very much alike, so I'll only post one:
private RobotController PlayermoveRight;

// Use this for initialization
void Start () {
    PlayermoveRight = GameObject.Find("Player").GetComponent<RobotController>();
}

void OnMouseOver()
{
    if (Input.touchCount >= 1)
    {
        var touchr = Input.touches[0];
        if (touchr.phase != TouchPhase.Ended && touchr.phase != TouchPhase.Canceled)
        {
            PlayermoveRight.MoveRight();
        }
        else
        {
        }
    }
}
The ArrowUp script:
void OnMouseOver()
{
    GameObject Go = GameObject.Find("Player");
    if ((Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began) || (Input.GetMouseButtonDown(0)))
    {
        Go.GetComponent<RobotController>().Jump();
    }
}
And the RobotController script:
public double moveTime = 0.1;
public double moveTimeR = 0.1;
private double lastPressedTime = 0.0;
private double PressRight = 0.0;

public void MoveLeft() // If ArrowLeft is clicked or pressed
{
    lastPressedTime = Time.timeSinceLevelLoad;
}

public void MoveRight() // If ArrowRight is clicked or pressed
{
    PressRight = Time.timeSinceLevelLoad;
}

void FixedUpdate () {
    if (PressRight + moveTimeR > Time.timeSinceLevelLoad)
    {
        rigidbody2D.velocity = new Vector2(maxSpeed, rigidbody2D.velocity.y);
    }
    else if (lastPressedTime + moveTime > Time.timeSinceLevelLoad)
    {
        rigidbody2D.velocity = new Vector2(maxSpeed - maxSpeed - maxSpeed, rigidbody2D.velocity.y);
    }
    else
    {
        rigidbody2D.velocity = new Vector2(0.0f, rigidbody2D.velocity.y);
    }
}

public void Jump()
{
    if (isOnGround == true) {
        anim.SetBool("Ground", false);
        rigidbody2D.AddForce(new Vector2(0, jumpForce));
    }
}
How can I make it so I can jump and move at the same time?
from the up arrow code:
(Input.GetTouch(0).phase == TouchPhase.Began) || (Input.GetMouseButtonDown(0))
you're checking whether the first touch has just started. If you are holding down a move arrow and then tap up to jump, the "jump touch" isn't the first touch, and the first touch (the movement one) isn't in its Began phase.
The reason it works when hitting jump and then moving is that the first touch in that case is the jump touch (by coincidence rather than by code).
You don't want to check against the first touch here; you want to check against the touch that is over the up arrow.
(not sure how you'd actually do that, but I can't comment until I get 50 rep :( )
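One way to do that (a minimal sketch, not from the answer above; it assumes the up arrow has its own collider, as the OnMouseOver approach already requires, and that it is a regular 3D collider; swap in Physics2D.Raycast or Physics2D.OverlapPoint if you use 2D colliders) is to loop over all touches in Update and only jump for a touch that began over this arrow:

void Update()
{
    for (int i = 0; i < Input.touchCount; i++)
    {
        Touch touch = Input.GetTouch(i);
        if (touch.phase != TouchPhase.Began)
            continue;

        // Raycast this particular touch against the scene and check whether it hit
        // THIS arrow, instead of only ever looking at the first touch.
        Ray ray = Camera.main.ScreenPointToRay(touch.position);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit) && hit.collider.gameObject == gameObject)
        {
            GameObject.Find("Player").GetComponent<RobotController>().Jump();
        }
    }
}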
I am also a newbie and was facing this problem a few hours ago, but I fixed it as I was told to do in a video tutorial. The fix was: add some drag to your player. In the inspector there is a "Linear Drag" option in your Rigidbody 2D component; increase it by 0.3 (0 is the default value). This really fixed my problem and I hope it will also help you. (I know it is really late, but I just found your question while googling my problem.)
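If you'd rather set it from code (a tiny sketch; it assumes the player has the Rigidbody2D that the RobotController above already uses), the same value can be assigned to the drag property:

void Start()
{
    // Equivalent to setting "Linear Drag" to 0.3 in the Rigidbody 2D inspector.
    GetComponent<Rigidbody2D>().drag = 0.3f;
}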