I'm creating a touch-controlled game that rotates platforms as you touch and drag. However, I can't get it to ignore touches over the jump button, so it treats a tap on the button and the drag as one long swipe by the same finger. I've tried tracking the start point and the id, but I've realised that none of these will work without a way to detect whether the touch is over the jump button.
My code for detecting touches:
int x = 0;
while (x < Input.touchCount)
{
    Touch touch = Input.GetTouch(x); // get the touch
    if (touch.phase == TouchPhase.Began) // check for the first touch
    {
        fp = touch.position;
        lp = touch.position;
    }
    else if (touch.phase == TouchPhase.Moved) // update the last position based on where they moved
    {
        lp = touch.position;
    }
    else if (touch.phase == TouchPhase.Ended) // check if the finger is removed from the screen
    {
        lp = touch.position; // last touch position. Omitted if you use a list
    }

    if (!EventSystem.current.IsPointerOverGameObject(x))
    {
        int movement = ((int)(fp.x - lp.x));
        if (previous_fp != fp ^ x != previous_id)
        {
            previous_rotate = 0;
        }
        previous_fp = fp;
        previous_id = x;
        if (!game_over) { Platform_Control.transform.Rotate(0f, 0f, -((movement - previous_rotate) * 0.15f), Space.World); }
        previous_rotate = movement;
    }
    x++;
}
Afaik one issue here is that you are using x, the index of the touch. IsPointerOverGameObject expects a fingerId as its parameter!
The index x of the touch you use for GetTouch is NOT necessarily equal to the Touch.fingerId!
It should rather be
if(!EventSystem.current.IsPointerOverGameObject(touch.fingerId))
Btw, in general, if you are going to iterate through all the touches anyway, it is easier to just use Input.touches:
Returns list of objects representing status of all touches during last frame. (Read Only) (Allocates temporary variables).
var touches = Input.touches;
which allows you to filter touches out right away like e.g.
using System.Linq;
...
var touches = Input.touches;
var validTouches = touches.Where(touch => !EventSystem.current.IsPointerOverGameObject(touch.fingerId));
foreach(var touch in validTouches)
{
...
}
This is using Linq Where and is basically equal to doing
var touches = Input.touches;
foreach (var touch in touches)
{
    if (EventSystem.current.IsPointerOverGameObject(touch.fingerId)) continue;
    ...
}
In general it still confuses me how/why your code is supposed to use the same shared variables fp and lp for all possible touches.
You should probably rather use
var moved = touch.deltaPosition;
and work with that value instead.
Or in general use only one of these touches by storing the first valid touch.fingerId and later check if the touch is still of that id. Ignore all other touches meanwhile.
private int currentFingerId = int.MinValue;

...

// If you check that one first you can avoid a lot of unnecessary overhead ;)
if (!gameOver)
{
    var touches = Input.touches;
    var validTouches = touches.Where(touch => !EventSystem.current.IsPointerOverGameObject(touch.fingerId));
    foreach (var touch in validTouches)
    {
        // ignore any touch that is not the one we started tracking
        if (currentFingerId != int.MinValue && touch.fingerId != currentFingerId) continue;

        switch (touch.phase)
        {
            case TouchPhase.Began:
                currentFingerId = touch.fingerId;
                break;

            case TouchPhase.Moved:
                // "touch.deltaPosition.x" is basically exactly what you calculated
                // yourself using the "fp", "lp", "previous_rotate" and "movement"
                var moved = touch.deltaPosition.x * 0.15f;
                Platform_Control.transform.Rotate(0f, 0f, moved, Space.World);
                break;

            case TouchPhase.Ended:
                currentFingerId = int.MinValue;
                break;
        }
    }
}
Check whether the EventSystem currently has any GameObject selected. If it does, the touch was performed over the button; otherwise it was not.
int x = 0;
while (x < Input.touchCount)
{
    Touch touch = Input.GetTouch(x); // get the touch
    if (touch.phase == TouchPhase.Began) // check for the first touch
    {
        fp = touch.position;
        lp = touch.position;
    }
    else if (touch.phase == TouchPhase.Moved) // update the last position based on where they moved
    {
        lp = touch.position;
    }
    else if (touch.phase == TouchPhase.Ended) // check if the finger is removed from the screen
    {
        lp = touch.position; // last touch position. Omitted if you use a list
    }

    if (EventSystem.current.currentSelectedGameObject == null)
    {
        int movement = ((int)(fp.x - lp.x));
        if (previous_fp != fp ^ x != previous_id)
        {
            previous_rotate = 0;
        }
        previous_fp = fp;
        previous_id = x;
        if (!game_over) { Platform_Control.transform.Rotate(0f, 0f, -((movement - previous_rotate) * 0.15f), Space.World); }
        previous_rotate = movement;
    }
    x++;
}
EventSystem.current.IsPointerOverGameObject is what you need. The EventSystem is UI related, so it detects UI elements. If you want to ignore UI elements you can use this:
if (!EventSystem.current.IsPointerOverGameObject())
{
    // Your code goes here; it only runs if the pointer is not over a UI element
}
If you want to ignore the UI element completely and don't need to click it, you should disable raycasting on that UI element.
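As a sketch of that last suggestion: UI Graphics such as Image expose a raycastTarget flag, which you can also just untick in the Inspector on the Image component. Assuming the button's background Image is its only raycast target, something like this hypothetical helper (not part of the original project) would stop it from blocking or receiving UI raycasts:
using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper component, names are illustrative
public class DisableUiRaycast : MonoBehaviour
{
    [SerializeField] private Image targetImage; // assign the UI element's Image in the Inspector

    private void Awake()
    {
        // The element no longer blocks (or receives) UI raycasts,
        // so EventSystem.current.IsPointerOverGameObject ignores it
        targetImage.raycastTarget = false;
    }
}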
Related
I want to make a 2D GameObject tagged "paddle" follow my finger around the screen, but only if I touched it first. Nothing happens; it just stands still.
void Update()
{
    // detect if there was a touch
    if (Input.touchCount > 0)
    {
        switch (touch.phase)
        {
            // When a touch begins:
            case TouchPhase.Began:
                // get the position of the touch
                touch = Input.GetTouch(0);
                touchPosition = Camera.main.ScreenToWorldPoint(touch.position);
                // detect if the touch is on the paddle at the beginning of the touch (if YES then we can move the paddle)
                touchHit = Physics2D.Raycast(touchPosition, Camera.main.transform.forward);
                if (touchHit.collider.tag == "paddle")
                {
                    paddleTouched = true;
                }
                break;

            // When the touch is moving, if it has hit the paddle at the beginning the paddle will move
            case TouchPhase.Moved:
                if (paddleTouched == true)
                {
                    gameObject.transform.position = touchPosition;
                }
                break;

            // When a touch has ended everything ends :3
            case TouchPhase.Ended:
                paddleTouched = false;
                break;
        }
    }
}
I should've put these
touch = Input.GetTouch(0);
touchPosition = Camera.main.ScreenToWorldPoint(touch.position);
before the switch statement.
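For reference, this is roughly what the corrected Update() looks like with those two lines moved up. The field declarations for touch, touchPosition, touchHit and paddleTouched are assumed to be the same as in the original script; I've also added a null check on the collider, which was not in the original:
void Update()
{
    if (Input.touchCount > 0)
    {
        // Read the touch and its world position once, before branching on the phase
        touch = Input.GetTouch(0);
        touchPosition = Camera.main.ScreenToWorldPoint(touch.position);

        switch (touch.phase)
        {
            case TouchPhase.Began:
                touchHit = Physics2D.Raycast(touchPosition, Camera.main.transform.forward);
                if (touchHit.collider != null && touchHit.collider.tag == "paddle")
                {
                    paddleTouched = true;
                }
                break;

            case TouchPhase.Moved:
                if (paddleTouched)
                {
                    gameObject.transform.position = touchPosition;
                }
                break;

            case TouchPhase.Ended:
                paddleTouched = false;
                break;
        }
    }
}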
Every time I touch the screen, the same model appears on screen. If I touch the screen at five different positions, the model gets instantiated at the five touched positions. The map gets saved in the cloud later. When I load the saved map and scan the same place, it recognizes the scene, but only one model shows up on the screen. I am adding an "arrow" as the model. I will provide the code below.
So how does the position get saved here? Does only a single model position get saved? Why are the models at the other positions not showing? Is it because I am using the same model? How do I save the other models' positions?
If I add the code below the lines marked "//if line below added", the implementation works: many shapes are instantiated on touch and can later be loaded even after the app is closed.
bool HitTestWithResultType(ARPoint point, ARHitTestResultType resultTypes)
{
    List<ARHitTestResult> hitResults = UnityARSessionNativeInterface.GetARSessionNativeInterface().HitTest(point, resultTypes);
    if (hitResults.Count > 0)
    {
        foreach (var hitResult in hitResults)
        {
            Debug.Log("Got hit!");
            Vector3 position = UnityARMatrixOps.GetPosition(hitResult.worldTransform);
            Quaternion rotation = UnityARMatrixOps.GetRotation(hitResult.worldTransform);

            // Transform to placenote frame of reference (planes are detected in ARKit frame of reference)
            Matrix4x4 worldTransform = Matrix4x4.TRS(position, rotation, Vector3.one);
            Matrix4x4? placenoteTransform = LibPlacenote.Instance.ProcessPose(worldTransform);
            Vector3 hitPosition = PNUtility.MatrixOps.GetPosition(placenoteTransform.Value);
            Quaternion hitRotation = PNUtility.MatrixOps.GetRotation(placenoteTransform.Value);

            // add shape
            AddShape(hitPosition, hitRotation);
            return true;
        }
    }
    return false;
}
//-----------------------------------
// Update function checks for hittest
//-----------------------------------
void Update()
{
    // Check if the screen is touched
    //-----------------------------------
    if (Input.touchCount > 0)
    {
        var touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Began)
        {
            if (EventSystem.current.currentSelectedGameObject == null)
            {
                Debug.Log("Not touching a UI button. Moving on.");

                // add new shape
                var screenPosition = Camera.main.ScreenToViewportPoint(touch.position);
                ARPoint point = new ARPoint
                {
                    x = screenPosition.x,
                    y = screenPosition.y
                };

                // prioritize result types
                ARHitTestResultType[] resultTypes = {
                    //ARHitTestResultType.ARHitTestResultTypeExistingPlaneUsingExtent,
                    //ARHitTestResultType.ARHitTestResultTypeExistingPlane,
                    //ARHitTestResultType.ARHitTestResultTypeEstimatedHorizontalPlane,
                    //ARHitTestResultType.ARHitTestResultTypeEstimatedVerticalPlane,
                    ARHitTestResultType.ARHitTestResultTypeFeaturePoint
                };

                foreach (ARHitTestResultType resultType in resultTypes)
                {
                    if (HitTestWithResultType(point, resultType))
                    {
                        Debug.Log("Found a hit test result");
                        return;
                    }
                }
            }
        }
    }
}
public void OnSimulatorDropShape()
{
    Vector3 dropPosition = Camera.main.transform.position + Camera.main.transform.forward * 0.3f;
    Quaternion dropRotation = Camera.main.transform.rotation;
    AddShape(dropPosition, dropRotation);
}

//-------------------------------------------------
// All shape management functions (add shapes, save shapes to metadata etc.)
//-------------------------------------------------
public void AddShape(Vector3 shapePosition, Quaternion shapeRotation)
{
    //if line below added
    //System.Random rnd = new System.Random();
    //if line below added
    //PrimitiveType type = (PrimitiveType)rnd.Next(0, 3);

    ShapeInfo shapeInfo = new ShapeInfo();
    shapeInfo.px = shapePosition.x;
    shapeInfo.py = shapePosition.y;
    shapeInfo.pz = shapePosition.z;
    shapeInfo.qx = shapeRotation.x;
    shapeInfo.qy = shapeRotation.y;
    shapeInfo.qz = shapeRotation.z;
    shapeInfo.qw = shapeRotation.w;
    //if line below added
    //shapeInfo.shapeType = type.GetHashCode();
    shapeInfoList.Add(shapeInfo);

    GameObject shape = ShapeFromInfo(shapeInfo);
    shapeObjList.Add(shape);
}

public GameObject ShapeFromInfo(ShapeInfo info)
{
    //if line below added
    //GameObject shape = GameObject.CreatePrimitive((PrimitiveType)info.shapeType);
    //Instead of
    GameObject shape = Instantiate(arrow);
    shape.transform.position = new Vector3(info.px, info.py, info.pz);
    shape.transform.rotation = new Quaternion(info.qx, info.qy, info.qz, info.qw);
    shape.transform.localScale = new Vector3(0.05f, 0.05f, 0.05f);
    shape.GetComponent<MeshRenderer>().material = mShapeMaterial;
    shape.GetComponent<MeshRenderer>().material.color = Color.yellow;
    return shape;
}
In HitTestWithResultType() you have a return in the foreach, so when one iteration is done, the code exits the method.
Basically you do only one iteration in the foreach every time you call the method.
Move the return two lines down, after the closing bracket of the foreach.
I guess everybody needs another pair of eyes :)
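For clarity, here is roughly what the loop inside HitTestWithResultType would look like with the return moved as suggested (everything else unchanged):
foreach (var hitResult in hitResults)
{
    Debug.Log("Got hit!");
    Vector3 position = UnityARMatrixOps.GetPosition(hitResult.worldTransform);
    Quaternion rotation = UnityARMatrixOps.GetRotation(hitResult.worldTransform);

    // Transform to placenote frame of reference (planes are detected in ARKit frame of reference)
    Matrix4x4 worldTransform = Matrix4x4.TRS(position, rotation, Vector3.one);
    Matrix4x4? placenoteTransform = LibPlacenote.Instance.ProcessPose(worldTransform);
    Vector3 hitPosition = PNUtility.MatrixOps.GetPosition(placenoteTransform.Value);
    Quaternion hitRotation = PNUtility.MatrixOps.GetRotation(placenoteTransform.Value);

    // add a shape for every hit result instead of returning after the first one
    AddShape(hitPosition, hitRotation);
}
// return true only after the whole loop has run
return true;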
I am trying to figure out how to detect when a touch is being moved back and forth in Unity.
So, as the image shows, I am looking for a way to detect when the touch moves from its starting position (1) to the second x (2), then back to somewhere near the starting position (3), all with a certain speed and within a time frame; essentially something like a shaking gesture. I'm really stuck as to how to do this. Any help will be greatly appreciated.
an illustration of what I mean
So far I just know how to get the starting position of a touch:
Vector2 startingPos;
float shakeTime = 2f;

void Update()
{
    foreach (Touch touch in Input.touches)
    {
        if (touch.phase == TouchPhase.Began)
        {
            startingPos = touch.position;
        }
    }
}
float touchTimer = 0f;
float shakeTime = 2f;
float moveAwayLimit = 1f;
float moveBackLimit = .5f;
bool awayReached = false;
Vector2 startingPos;

void Update()
{
    foreach (Touch touch in Input.touches)
    {
        if (touch.phase == TouchPhase.Began)
        {
            // remember where the gesture started and restart the timer
            startingPos = touch.position;
            touchTimer = 0;
            awayReached = false;
        }

        if (Vector2.Distance(touch.position, startingPos) > moveAwayLimit && touchTimer < shakeTime && !awayReached)
        {
            // the finger has moved far enough away from the start
            awayReached = true;
        }
        else if (Vector2.Distance(touch.position, startingPos) < moveBackLimit && touchTimer < shakeTime && awayReached)
        {
            // the finger came back close to the start within the time limit
            Shake();
        }

        touchTimer += Time.deltaTime;
    }
}
Something like this should work: it checks whether you move further than moveAwayLimit away from the start, and after that whether you come back to within moveBackLimit of startingPos.
Newbie...
I'm learning Unity to make mobile games and I have a small hex grid map whose tiles change to a random color when touched. I have a script for the camera that moves it around with a finger press and zooms in and out with two fingers.
I want the tiles to only change color on a single touch, and not change when I pan or zoom the camera. I have been trying for two days and googled myself out and can't seem to figure it out. It seems pretty simple.
Here is the code that is called from Update() when a raycast hits an object. This just changes the hex's color but I want it to only do it when not panning or zooming the camera.
void Touch_Hex(GameObject ourHitObject)
{
    Touch[] touches = Input.touches;
    if (touches[0].phase == TouchPhase.Ended)
    {
        MeshRenderer mr = ourHitObject.GetComponentInChildren<MeshRenderer>();
        mr.material.color = color[Random.Range(0, color.Length)];
    }
}
I've tried all different combos of phases etc. but just cannot get it to work as I want. I'm thinking deltaTime or deltaPosition is the answer but I can't work it out.
Cheers
You need to use a combination of a boolean variable and touches[0].phase == TouchPhase.Moved. When touches[0].phase == TouchPhase.Moved is true, the input is considered invalid. You can set the boolean variable accordingly and use it later on to decide whether to change the color or not.
bool onlyTouched;

void Update()
{
    if (Input.touchCount > 0)
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
        RaycastHit hit;

        if (Input.GetTouch(0).phase == TouchPhase.Began)
        {
            //Touched
            onlyTouched = true;
        }

        if (Input.GetTouch(0).phase == TouchPhase.Moved)
        {
            //Moved Finger (Now Invalid!)
            onlyTouched = false;
        }

        if (Input.GetTouch(0).phase == TouchPhase.Ended)
        {
            //Check if only touched then change color
            if (onlyTouched)
            {
                Debug.Log("Only touched Finger");
                //MeshRenderer mr = ourHitObject.GetComponentInChildren<MeshRenderer>();
                //mr.material.color = color[Random.Range(0, color.Length)];
                if (Physics.Raycast(ray, out hit))
                {
                    if (hit.transform != null)
                    {
                        MeshRenderer mr = hit.transform.gameObject.GetComponentInChildren<MeshRenderer>();
                        mr.material.color = color[Random.Range(0, color.Length)];
                        //hit.transform.GetComponent<MeshRenderer>().material.color = Color.red;
                    }
                }
            }
            else
            {
                Debug.Log("Finger was moved while touching");
            }

            //Reset onlyTouched for next read
            onlyTouched = false;
        }
    }
}
I am developing a mobile game with touch controls. I can select a non-moving GameObject very easily and it responds; however, it's very difficult to select when it's moving, since it's kind of small (falling, for example; it's all physics based). Is there a way to increase the touch radius of the GameObject so that it can be pressed more easily, or is there another solution?
private void Update() {
    //User input (touches) will select cubes
    if (Input.touchCount > 0) {
        Touch touch = Input.GetTouch(0);
    }
    Touch[] touches = Input.touches;
    foreach (var touchInput in touches) {
        RaycastHit2D hit = Physics2D.Raycast(Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position), Vector2.zero);
        if (hit.collider != null) {
            selectedCube = hit.collider.gameObject;
            selectedCube.GetComponent<SpriteRenderer>().color = Color32.Lerp(defaultColor, darkerColor, 1);
        }
    }
}
Rather than increasing the object's collider size (which you discussed in the comments), how about approaching this the opposite way? Check in an area around the touch for a collision, instead of just the single point, using Physics2D.CircleCast:
private void Update() {
    //User input (touches) will select cubes
    if (Input.touchCount > 0) {
        Touch touch = Input.GetTouch(0);
    }
    Touch[] touches = Input.touches;
    foreach (var touchInput in touches) {
        float radius = 1.0f; // Change as needed based on testing
        RaycastHit2D hit = Physics2D.CircleCast(Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position), radius, Vector2.zero);
        if (hit.collider != null) {
            selectedCube = hit.collider.gameObject;
            selectedCube.GetComponent<SpriteRenderer>().color = Color32.Lerp(defaultColor, darkerColor, 1);
        }
    }
}
Note that this won't be great if you've got tons of selectable objects flush against each other... but then again, increasing the collider size in that case wouldn't help either. (I'd say just increase the object size; there's no other way to improve user accuracy. Or allow multi-select and use Physics2D.CircleCastAll; a rough sketch of that is at the end of this answer.)
Hope this helps! Let me know if you have any questions.
EDIT: For better accuracy, since the "first" result returned by Physics2D.CircleCast may be arbitrarily selected, you can instead use Physics2D.CircleCastAll to get all objects within the touch radius, and only select the one which is closest to the original touch point:
private void Update() {
    //User input (touches) will select cubes
    if (Input.touchCount > 0) {
        Touch touch = Input.GetTouch(0);
    }
    Touch[] touches = Input.touches;
    foreach (var touchInput in touches) {
        float radius = 1.0f; // Change as needed based on testing
        Vector2 worldTouchPoint = Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position);
        RaycastHit2D[] allHits = Physics2D.CircleCastAll(worldTouchPoint, radius, Vector2.zero);

        // Find the closest collider that was hit
        float closestDist = Mathf.Infinity;
        GameObject closestObject = null;
        foreach (RaycastHit2D hit in allHits) {
            // Record the object if it's the first one we check,
            // or if it is closer to the touch point than the previous one
            float dist = Vector2.Distance(hit.collider.transform.position, worldTouchPoint);
            if (closestObject == null || dist < closestDist) {
                closestObject = hit.collider.gameObject;
                closestDist = dist;
            }
        }

        // Finally, select the object we chose based on the criteria
        if (closestObject != null) {
            selectedCube = closestObject;
            selectedCube.GetComponent<SpriteRenderer>().color = Color32.Lerp(defaultColor, darkerColor, 1);
        }
    }
}
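Regarding the multi-select option mentioned above, here is a rough sketch. The selectedCubes list is an assumed new field (not in the original script), and defaultColor/darkerColor are the same fields used in the question's code:
using System.Collections.Generic;
...
// Assumed new field, not in the original script
private List<GameObject> selectedCubes = new List<GameObject>();

private void SelectAllAtTouch(Vector2 worldTouchPoint, float radius) {
    selectedCubes.Clear();
    // CircleCastAll returns every collider within the radius, not just the first one
    RaycastHit2D[] allHits = Physics2D.CircleCastAll(worldTouchPoint, radius, Vector2.zero);
    foreach (RaycastHit2D hit in allHits) {
        GameObject cube = hit.collider.gameObject;
        selectedCubes.Add(cube);
        cube.GetComponent<SpriteRenderer>().color = Color32.Lerp(defaultColor, darkerColor, 1);
    }
}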