On-Screen Joystick is causing a held on-screen button to cancel mistakenly in Unity - C#

I'm making some on-screen controls for a mobile game. Movement is handled by an on-screen joystick, and attacking is done via an on-screen button. I want the player to keep attacking for as long as the attack button is held down.
This works; however, if I'm holding down the joystick at the same time, the hold is canceled prematurely (before the attack button is released). I can see that the context did in fact get canceled - most likely the joystick is triggering OnPointerUp.
My expectation is that no other input should be able to interrupt/break the HoldInteraction.
This issue doesn't happen on PC, so it may be because the on-screen joystick and button both use the same OnPointerDown/OnPointerUp events.
The closest to this issue I've found is: https://forum.unity.com/threads/on-screen-button-holds-inadvertently-calling-cancel.998021/
But creating my own on-screen button and adding an OnDrag handler doesn't solve the issue.
So how is one supposed to handle multiple on-screen buttons being pressed more or less at the same time? Shouldn't each button react independently to being pressed down and released?
MyControllerScript
public void OnAttack(InputAction.CallbackContext value)
{
    // BasicAttackSideEffects(0);
    if (value.interaction is HoldInteraction)
    {
        Debug.Log(value.interaction);
        basicAttackHeld = true;
    }
    else
    {
        Debug.Log("Press only " + value.interaction);
        StartCoroutine(mPlayerAnimOrchestrator.AttackAnim());
    }
    if (value.canceled)
    {
        Debug.Log("Hold canceled");
        basicAttackHeld = false;
        // BasicAttackSideEffects();
    }
}

void Update()
{
    CalculateMovementInputSmoothing();
    UpdatePlayerMovement();
    UpdatePlayerAnimationMovement();
    if (basicAttackHeld)
    {
        StartCoroutine(mPlayerAnimOrchestrator.AttackAnim());
    }
}
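For reference, hold-style actions are usually gated on the callback phase (started / performed / canceled) rather than only on the interaction type. Below is a minimal, hedged sketch of a phase-aware version of the callback above; it is illustrative only and does not by itself stop the joystick from cancelling the hold.

// Hedged, phase-aware variant of OnAttack (illustrative). basicAttackHeld and
// mPlayerAnimOrchestrator are the same fields as in the script above.
public void OnAttack(InputAction.CallbackContext value)
{
    if (value.interaction is HoldInteraction)
    {
        if (value.performed)          // hold threshold reached: start attacking
            basicAttackHeld = true;
        else if (value.canceled)      // hold interrupted or released: stop attacking
            basicAttackHeld = false;
    }
    else if (value.performed)         // plain press/tap interaction
    {
        StartCoroutine(mPlayerAnimOrchestrator.AttackAnim());
    }
}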
MyButtonScript
namespace UnityEngine.InputSystem.OnScreen
{
    public class Action : OnScreenControl, IPointerDownHandler, IPointerUpHandler, IDragHandler
    {
        public void OnPointerUp(PointerEventData eventData)
        {
            SendValueToControl(0.0f);
        }

        public void OnPointerDown(PointerEventData eventData)
        {
            SendValueToControl(1.0f);
        }

        public void OnDrag(PointerEventData eventData)
        {
            Debug.Log("Dragging " + gameObject.name);
        }

        [InputControl(layout = "Button")]
        [SerializeField]
        private string m_ControlPath;

        protected override string controlPathInternal
        {
            get => m_ControlPath;
            set => m_ControlPath = value;
        }
    }
}
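One idea that is sometimes suggested for this kind of problem is to make the on-screen button latch onto the pointerId that pressed it, so that an OnPointerUp coming from a different finger (for example the one on the joystick) cannot release it. Below is a hedged sketch under that assumption; the class and field names are illustrative, not from the original project, and this is not a confirmed fix.

using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem.Layouts;
using UnityEngine.InputSystem.OnScreen;

// Hedged sketch: an on-screen button that remembers which pointerId pressed it
// and ignores pointer-up events from any other finger.
public class HoldSafeOnScreenButton : OnScreenControl, IPointerDownHandler, IPointerUpHandler
{
    [InputControl(layout = "Button")]
    [SerializeField]
    private string m_ControlPath;

    // -1 means "no finger currently owns this button"
    private int m_OwningPointerId = -1;

    protected override string controlPathInternal
    {
        get => m_ControlPath;
        set => m_ControlPath = value;
    }

    public void OnPointerDown(PointerEventData eventData)
    {
        if (m_OwningPointerId != -1)
            return;                               // already held by another finger
        m_OwningPointerId = eventData.pointerId;  // latch onto this finger
        SendValueToControl(1.0f);
    }

    public void OnPointerUp(PointerEventData eventData)
    {
        if (eventData.pointerId != m_OwningPointerId)
            return;                               // a different finger was lifted; ignore it
        m_OwningPointerId = -1;
        SendValueToControl(0.0f);
    }
}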

Related

Character jumps only when player touch and release finger Unity 2D

So I have a 2D platformer mobile game I'm about to release soon, but there is this issue that bothers me and, apparently, my testers.
In the game, you must touch the screen to jump, but the character only jumps when you touch and then release your finger. This causes a delay and affects gameplay badly. I need it to jump immediately when the player touches, not after releasing their finger.
Here is the code block that makes the character jump:
public void Jump()
{
    if (currentJumpRights > 0)
    {
        gameObject.GetComponent<Rigidbody2D>().AddForce(Vector2.up * jumpForce, ForceMode2D.Impulse);
        soundManager.PlaySound("jumpSound");
        currentJumpRights--;
    }
}
I have an invisible jump button that covers most of the screen to prevent jumping when the player clicks the pause button, and this function is assigned to that invisible button.
How can I achieve this? Thanks for the help!
Instead of using Button (since you don't want any visual effects anyway) you can use IPointerDownHandler interface and do e.g.
public class MyButton : MonoBehaviour, IPointerDownHandler, IPointerUpHandler
{
    public UnityEvent onClick;

    public void OnPointerDown(PointerEventData pointerEventData)
    {
        onClick.Invoke();
    }

    public void OnPointerUp(PointerEventData pointerEventData)
    {
        // not doing anything, but from the documentation I'm never sure if it's needed:
        // "Note: In order to receive OnPointerUp callbacks, you must also implement the IPointerDownHandler interface"
        // Not sure if this also goes for the other direction ;)
    }
}
This way you receive the callback immediately when pressing down instead of "clicking".
If you'd rather have the button trigger continuously for as long as it stays pressed, you can extend this and do e.g.
public class MyButton : MonoBehaviour, IPointerDownHandler, IPointerUpHandler, IPointerEnterHandler, IPointerExitHandler
{
    public UnityEvent onPointerDown;
    public UnityEvent onPointerUp;
    public UnityEvent whilePressed;

    public void OnPointerDown(PointerEventData pointerEventData)
    {
        onPointerDown.Invoke();
        StartCoroutine(PressedRoutine());
    }

    public void OnPointerUp(PointerEventData pointerEventData)
    {
        StopAllCoroutines();
        onPointerUp.Invoke();
    }

    public void OnPointerEnter(PointerEventData pointerEventData)
    {
        // nothing, but needed for the exit event
    }

    public void OnPointerExit(PointerEventData pointerEventData)
    {
        // stop firing if the finger is dragged off the button while still pressed
        StopAllCoroutines();
        onPointerUp.Invoke();
    }

    private IEnumerator PressedRoutine()
    {
        while (true)
        {
            yield return null;
            whilePressed.Invoke();
        }
    }
}
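As a usage sketch (hedged): the events can be wired up in the Inspector, or from code as below. PlayerController here is a hypothetical script that owns the Jump() method from the question.

using UnityEngine;

// Illustrative wiring only: forward the button's pressed event to the existing Jump method.
public class JumpButtonSetup : MonoBehaviour
{
    public MyButton jumpButton;        // the component defined above
    public PlayerController player;    // hypothetical owner of Jump()

    void Awake()
    {
        // jump immediately when the finger goes down
        jumpButton.onPointerDown.AddListener(player.Jump);
    }
}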

Unity Mirror: inbuilt buttons not working on client side of the server

I am very new to Unity networking and I am trying to make a game where the user can click multiple buttons in order to spawn pieces onto a board. I've hit a lot of roadblocks while doing this and I've managed to figure out a lot of them, but I seem to be stuck on this one. The buttons click as they should on the host computer; however, on the client's computer nothing happens. The button doesn't even highlight when clicked, which is a built-in Unity feature.
I have used this code to spawn all the button prefabs. The prefabs have a pre-built Network Identity and a Network Transform component; I have tried with and without client authority and nothing changes.
public class BoardSetupScript : NetworkBehaviour
{
public GameObject[] SpawnList;
public bool isPlayer1; //used to determine which set of objects need to be placed.
public GameObject self; //makes sure the set up only happens once
// Start is called before the first frame update
void Start()
{
if (self.transform.position.x < 0) //the two players are spawned on either ends, to tell which one is which we use the position of object
{
isPlayer1 = true;
}
else
{
isPlayer1 = false;
}
}
// Update is called once per frame
void Update()
{
if (isLocalPlayer) //checks that only the player that has authority over this object can execute this function
{
if (Input.GetKeyDown(KeyCode.Space) && oncespawned == false)
{
oncespawned = true;
SpawnAllObjects();
}
}
}
void SpawnAllObjects()
{
Cmd_SpawnAllObjects();
}
[Command]
void Cmd_SpawnAllObjects() //object is positioned so that this spawns all the buttons in the correct place, depending on which player the box belongs to. used for loops for efficiency
{
if (isPlayer1 == true)
{
for (int Index=0; Index<10; Index++)
{
Rpc_SpawnObject(Index,Index);
}
}
if (isPlayer1 == false)
{
for (int Index=10; Index<20; Index++)
{
Rpc_SpawnObject(Index, Index - 10);
}
}
}
[ClientRpc]
void Rpc_SpawnObject(int indexOfObject,int relativePos) //instantiate adds the object to the game and then spawn sets it up as part of the network.
{
SpawnList[indexOfObject] = (GameObject)Instantiate(SpawnList[indexOfObject], new Vector3(this.transform.position.x,this.transform.position.y - relativePos, 0), Quaternion.identity);
NetworkIdentity SpawnableIdentity = SpawnList[indexOfObject].GetComponent<NetworkIdentity>();
NetworkServer.Spawn(SpawnList[indexOfObject]);
if (connectionToClient != null) //to make sure that Assign Client authority doesnt crash the server
{
SpawnableIdentity.AssignClientAuthority(connectionToClient); //gives the player who owns the player object control of this object
SpawnList[indexOfObject].transform.SetParent(self.transform,true);
}
}
}
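As an aside on the spawning code (hedged - this is not necessarily the cause of the button issue): in Mirror, objects are normally Instantiated and spawned on the server inside the [Command], and the spawn is then replicated to clients automatically, so calling NetworkServer.Spawn from inside a [ClientRpc] generally only works on a host. A minimal sketch, with names mirroring the code above but purely illustrative:

// Hedged sketch of the server-side spawn pattern; the prefab must also be registered
// as spawnable with the NetworkManager.
[Command]
void Cmd_SpawnObject(int indexOfObject, int relativePos)
{
    GameObject obj = Instantiate(
        SpawnList[indexOfObject],
        new Vector3(transform.position.x, transform.position.y - relativePos, 0f),
        Quaternion.identity);

    // Passing the owning connection assigns client authority in the same call.
    NetworkServer.Spawn(obj, connectionToClient);
}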
Here is the code for the functions the buttons are supposed to execute.
public class NetworkBoardStats : NetworkBehaviour
{
[SyncVar]
public bool p1turn=true;
[SyncVar]
public int p1gold;
[SyncVar]
public int p2gold;
void Start()
{
}
// Update is called once per frame
void Update()
{
}
public void p22EndTurn()
{
if (isLocalPlayer)
{
Rpc_p2EndTurn();
}
}
public void p1EndTurn()
{
if (isLocalPlayer)
{
Rpc_p1EndTurn();
}
}
[ClientRpc]
public void Rpc_p1EndTurn()
{
if (p1turn == true)
{
p1turn = false;
p2gold += 5;
}
}
[ClientRpc]
public void Rpc_p2EndTurn() //ends player 2's turn over the network.
{
Debug.Log("p2EndTurn has been activated ");
if (p1turn == false)
{
p1turn = true;
p1gold += 5;
Debug.Log("p2EndTurn has been executed");
}
}
}
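In the same hedged spirit: [SyncVar] fields are meant to be written on the server and replicated down to clients, so a turn change is more commonly routed through a [Command]. A minimal sketch (names mirror the code above but are illustrative):

// Hedged sketch: change the SyncVars on the server and let the [SyncVar] attribute
// replicate them to every client, instead of mutating them inside a [ClientRpc].
[Command]
public void Cmd_p1EndTurn()
{
    if (p1turn)
    {
        p1turn = false;   // written on the server, synced to all clients
        p2gold += 5;
    }
}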
I have put all the buttons I want to spawn in the spawnable prefabs list.
My only theory as to why this happens is that I set up the prefabs wrong: I gave the Network Identity and Network Transform to the canvas the button is attached to rather than to the button itself, but you can't have multiple identities on the same object, so I don't know how you would do that.
Any help would be appreciated, or if you have a better way of making a bunch of buttons that spawn pieces to fight across a network, that would be appreciated too. Thanks!
Apologies for the weird formatting; Stack Overflow was being finicky about my code.

Pressing UI buttons alongside processing Input.GetMouseDown() in Update

The game I'm making is meant for mobile devices.
I have a PlayerInput class in which I check for mouse events in Update():
void Update()
{
if (Input.GetMouseButtonDown(0))
{
//hide UI elements
}
}
I have a button which I hide when I detect mouse input in the PlayerInput class, but I don't want to hide it if the player presses the button itself.
I've managed to solve this issue by adding this component to my UI elements:
public class UiPointerHandler : MonoBehaviour, IPointerEnterHandler, IPointerExitHandler
{
public void OnPointerEnter(PointerEventData eventData)
{
//disable mouse checks
}
public void OnPointerExit(PointerEventData eventData)
{
//enable mouse checks
}
}
It allows me to disable processing mouse events in PlayerInput's Update() to interact with certain UI elements.
This does its job fairly well on PC when I'm testing/prototyping the game, but when I build the game for mobile it doesn't work at all and I can't press the buttons.
I'm looking for a solution that would work on mobile as well.
You can solve this fairly easily by adding the following check
void Update()
{
if (Input.GetMouseButtonDown(0) && !EventSystem.current.IsPointerOverGameObject())
{
//hide UI elements
}
}
What this does is check whether your pointer (mouse or finger) is over a UI element. By checking that it's not, you get your desired behaviour.
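One mobile-specific caveat worth hedging: the parameterless IsPointerOverGameObject() is not always reliable for touch input; passing the touch's fingerId is the commonly suggested variant. A minimal sketch, with an illustrative class name:

using UnityEngine;
using UnityEngine.EventSystems;

public class PlayerInputTouchAware : MonoBehaviour
{
    void Update()
    {
        // Touch devices: test each new touch against the UI using its fingerId.
        if (Input.touchCount > 0)
        {
            Touch touch = Input.GetTouch(0);
            if (touch.phase == TouchPhase.Began &&
                !EventSystem.current.IsPointerOverGameObject(touch.fingerId))
            {
                // hide UI elements
            }
        }
        // Mouse (Editor / desktop): the parameterless overload works here.
        else if (Input.GetMouseButtonDown(0) &&
                 !EventSystem.current.IsPointerOverGameObject())
        {
            // hide UI elements
        }
    }
}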
See "Input.GetTouch" on 'Input' class on Unity, GetMouseButtonDown doesn't work on mobile. Unity designed 'Touch' event on mobile instead of mouse event so try it. it has to work with your code.
Thanks to user Immorality I've managed to solve my problem.
I've added the EventSystem.current.IsPointerOverGameObject() check to the Update() inside of my PlayerInput class.
void Update()
{
if(EventSystem.current.IsPointerOverGameObject()) return;
if (Input.GetMouseButtonDown(0))
{
//hide UI elements
}
}
I've tested this solution on its own and it did not help in my case; I still wasn't able to press the buttons.
I've introduced a bool in PlayerInput which controls whether mouse input is processed, and two static methods that allow changing this variable.
public class PlayerInput : MonoBehaviour
{
private static bool _transmitInput;
void Update()
{
if(!_transmitInput || EventSystem.current.IsPointerOverGameObject()) return;
if (Input.GetMouseButtonDown(0))
{
//hide UI elements
}
}
public static void DisableInput()
{
if (!_transmitInput) return;
_transmitInput = false;
}
public static void EnableInput()
{
if (_transmitInput) return;
_transmitInput = true;
}
}
These are called from the UiPointerHandler:
public class UiPointerHandler : MonoBehaviour, IPointerEnterHandler, IPointerExitHandler
{
public void OnPointerEnter(PointerEventData eventData)
{
PlayerInput.DisableInput();
}
public void OnPointerExit(PointerEventData eventData)
{
PlayerInput.EnableInput();
}
}
At this point I was still having problems on mobile. When pressing the button, the OnPointerEnter() would be called but OnPointerExit() would not.
I assume it is because of how my scene is set up. The 2 buttons I'm working with are in the same position and are the same size. I just disable one and enable the other.
I solved this issue by calling PlayerInput.EnableInput() in the onClick of the buttons.
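For completeness, a hedged sketch of that last step - the same hookup could be done in the Inspector; the component name is illustrative:

using UnityEngine;
using UnityEngine.UI;

// Attach to each of the two buttons: re-enables input processing when the button is
// clicked, compensating for the missed OnPointerExit described above.
[RequireComponent(typeof(Button))]
public class ButtonInputUnlock : MonoBehaviour
{
    void Awake()
    {
        GetComponent<Button>().onClick.AddListener(PlayerInput.EnableInput);
    }
}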

Using Unity3D's IPointerDownHandler approach, but with "the whole screen"

In Unity say you need to detect finger touch (finger drawing) on something in the scene.
The only way to do this in modern Unity is very simple:
Step 1. Put a collider on that object. ("The ground" or whatever it may be.) 1
Step 2. On your camera, Inspector panel, click to add a Physics Raycaster (2D or 3D as relevant).
Step 3. Simply use code as in Example A below.
(Tip - don't forget to ensure there's an EventSystem ... sometimes Unity adds one automatically, sometimes not!)
Fantastic, couldn't be easier. Unity finally handles un/propagation correctly through the UI layer. Works uniformly and flawlessly on desktop, devices, Editor, etc etc. Hooray Unity.
All good. But what if you want to draw just "on the screen"?
So you are wanting, quite simply, swipes/touches/drawing from the user "on the screen". (Example, simply for operating an orbit camera, say.) So just as in any ordinary 3D game where the camera runs around and moves.
You don't want the position of the finger on some object in world space, you simply want abstract "finger motions" (i.e. position on the glass).
What collider do you then use? Can you do it with no collider? It seems fatuous to add a collider just for that reason.
What we do is this:
I just make a flat collider of some sort, and actually attach it under the camera. So it simply sits in the camera frustum and completely covers the screen.
(For the code, there is then no need to use ScreenToWorldPoint, so just use code as in Example B - extremely simple, works perfectly.)
My question: it seems a bit odd to have to use the "under-camera collider" I describe, just to get touches on the glass.
What's the deal here?
(Note - please don't answer involving Unity's ancient "Touches" system, which is unusable today for real projects, you can't ignore .UI using the legacy approach.)
Code sample A - drawing on a scene object. Use ScreenToWorldPoint.
using UnityEngine;
using System.Collections;
using UnityEngine.EventSystems;
public class FingerMove:MonoBehaviour, IPointerDownHandler, IDragHandler, IPointerUpHandler
{
public void OnPointerDown (PointerEventData data)
{
Debug.Log("FINGER DOWN");
prevPointWorldSpace =
theCam.ScreenToWorldPoint( data.position );
}
public void OnDrag (PointerEventData data)
{
thisPointWorldSpace =
theCam.ScreenToWorldPoint( data.position );
realWorldTravel =
thisPointWorldSpace - prevPointWorldSpace;
_processRealWorldtravel();
prevPointWorldSpace = thisPointWorldSpace;
}
public void OnPointerUp (PointerEventData data)
{
Debug.Log("clear finger...");
}
Code sample B ... you only care about what the user does on the glass screen of the device. Even easier here:
using UnityEngine;
using System.Collections;
using UnityEngine.EventSystems;
public class FingerMove:MonoBehaviour, IPointerDownHandler, IDragHandler, IPointerUpHandler
{
private Vector2 prevPoint;
private Vector2 newPoint;
private Vector2 screenTravel;
public void OnPointerDown (PointerEventData data)
{
Debug.Log("FINGER DOWN");
prevPoint = data.position;
}
public void OnDrag (PointerEventData data)
{
newPoint = data.position;
screenTravel = newPoint - prevPoint;
prevPoint = newPoint;
_processSwipe();
}
public void OnPointerUp (PointerEventData data)
{
Debug.Log("FINEGR UP...");
}
private void _processSwipe()
{
// your code here
Debug.Log("screenTravel left-right.. " + screenTravel.x.ToString("f2"));
}
}
1 If you're just new to Unity: at that step very likely, make it a layer called say "Draw"; in physics settings make "Draw" interact with nothing; in step two with the Raycaster just set the layer to "Draw".
First of all, you need to understand that there are just 3 ways to detect a click on an object with the OnPointerDown function:
1. You need a UI component in order to detect a click with the OnPointerDown function. This applies to the other similar UI events.
2. Another way to detect a click with the OnPointerDown function on a 2D/Sprite GameObject is to attach a Physics2DRaycaster to the camera; OnPointerDown will then be called when the object is clicked. Note that a 2D collider must be attached to it.
3. If this is a 3D object with a (non-2D) Collider, you must have a PhysicsRaycaster attached to the camera in order for the OnPointerDown function to be called.
Doing this with the first method seems more reasonable than having a large collider or 2D collider covering the screen. All you do is create a Canvas and a Panel GameObject, and attach an Image component to it that stretches across the whole screen.
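A minimal, hedged sketch of that first method set up by hand (assuming a Screen Space Canvas already exists; the class name is illustrative): stretch an invisible Image across the Canvas as its first child and put the pointer handlers from Example B on it.

using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

// Attach to a full-screen UI Image (color alpha 0, Raycast Target enabled) that is the
// first child of a Screen Space Canvas, so real UI drawn after it still sits on top.
[RequireComponent(typeof(Image))]
public class GlassTouchCatcher : MonoBehaviour, IPointerDownHandler, IDragHandler, IPointerUpHandler
{
    public void OnPointerDown(PointerEventData data) { Debug.Log("down " + data.position); }
    public void OnDrag(PointerEventData data)        { Debug.Log("drag " + data.delta);    }
    public void OnPointerUp(PointerEventData data)   { Debug.Log("up " + data.position);   }
}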
Dude I just don't see using .UI as a serious solution: imagine we're doing a big game and you're leading a team that is doing all the UI (I mean buttons, menus and all). I'm leading a team doing the walking robots. I suddenly say to you "oh, by the way, I can't handle touch ("!"), could you drop in a UI.Panel, don't forget to keep it under everything you're doing, oh and put one on any/all canvasses or cameras you swap between - and pass that info back to me OK!" :) I mean it's just silly. One can't essentially say: "oh, Unity doesn't handle touch"
It's not really as hard as you describe. You can write code that will create a Canvas, a Panel and an Image, and change the Image's alpha to 0. All you have to do is attach that code to the Camera or an empty GameObject and it will perform all of this for you automatically in play mode.
Make every GameObject that wants to receive events from the screen subscribe to it, then use ExecuteEvents.Execute to send the event to all the interfaces in the script attached to that GameObject.
For example, the sample code below will send OnPointerDown event to the GameObject called target.
ExecuteEvents.Execute<IPointerDownHandler>(target,
eventData,
ExecuteEvents.pointerDownHandler);
Problem you will run into:
The hidden Image component will block other UI or GameObject from receiving raycast. This is the biggest problem here.
Solution:
Since it will cause some blocking problems, it is better to make the Canvas of the Image render on top of everything. This makes sure that it is now 100% blocking all other UI/GameObjects; Canvas.sortingOrder = 12; should help us do this.
Each time we detect an event such as OnPointerDown from the Image, we will manually resend the OnPointerDown event to all the other UI/GameObjects beneath the Image.
First of all, we throw a raycast with GraphicRaycaster(UI), Physics2DRaycaster(2D collider), PhysicsRaycaster(3D Collider) and store the result in a List.
Now, we loop over the result in the List and resend the event we received by sending artificial event to the results with:
ExecuteEvents.Execute<IPointerDownHandler>(currentListLoop,
eventData,
ExecuteEvents.pointerDownHandler);
Other problems you will run into:
You won't be able to emulate events on the Toggle component with GraphicRaycaster. This is a bug in Unity; it took me 2 days to realize this.
I also was not able to send a fake slider-move event to the Slider component. I can't tell if this is a bug or not.
Other than the problems mentioned above, I was able to implement this. It comes in 3 parts. Just create a folder and put all the scripts in it.
SCRIPTS:
1.WholeScreenPointer.cs - The main part of the script that creates the Canvas, GameObject, and hidden Image. It does all the complicated stuff to make sure that the Image always covers the screen. It also sends events to all the subscribed GameObjects.
public class WholeScreenPointer : MonoBehaviour
{
//////////////////////////////// SINGLETON BEGIN ////////////////////////////////
private static WholeScreenPointer localInstance;
public static WholeScreenPointer Instance { get { return localInstance; } }
public EventUnBlocker eventRouter;
private void Awake()
{
if (localInstance != null && localInstance != this)
{
Destroy(this.gameObject);
}
else
{
localInstance = this;
}
}
//////////////////////////////// SINGLETON END ////////////////////////////////
//////////////////////////////// SETTINGS BEGIN ////////////////////////////////
public bool simulateUIEvent = true;
public bool simulateColliderEvent = true;
public bool simulateCollider2DEvent = true;
public bool hideWholeScreenInTheEditor = false;
//////////////////////////////// SETTINGS END ////////////////////////////////
private GameObject hiddenCanvas;
private List<GameObject> registeredGameobjects = new List<GameObject>();
//////////////////////////////// USEFUL FUNCTIONS BEGIN ////////////////////////////////
public void registerGameObject(GameObject objToRegister)
{
if (!isRegistered(objToRegister))
{
registeredGameobjects.Add(objToRegister);
}
}
public void unRegisterGameObject(GameObject objToRegister)
{
if (isRegistered(objToRegister))
{
registeredGameobjects.Remove(objToRegister);
}
}
public bool isRegistered(GameObject objToRegister)
{
return registeredGameobjects.Contains(objToRegister);
}
public void enablewholeScreenPointer(bool enable)
{
hiddenCanvas.SetActive(enable);
}
//////////////////////////////// USEFUL FUNCTIONS END ////////////////////////////////
// Use this for initialization
private void Start()
{
makeAndConfigWholeScreenPointer(hideWholeScreenInTheEditor);
}
private void makeAndConfigWholeScreenPointer(bool hide = true)
{
//Create and Add Canvas Component
createCanvas(hide);
//Add Rect Transform Component
//addRectTransform();
//Add Canvas Scaler Component
addCanvasScaler();
//Add Graphics Raycaster Component
addGraphicsRaycaster();
//Create Hidden Panel GameObject
GameObject panel = createHiddenPanel(hide);
//Make the Image to be positioned in the middle of the screen then fix its anchor
stretchImageAndConfigAnchor(panel);
//Add EventForwarder script
addEventForwarder(panel);
//Add EventUnBlocker
addEventRouter(panel);
//Add EventSystem and Input Module
addEventSystemAndInputModule();
}
//Creates Hidden GameObject and attaches Canvas component to it
private void createCanvas(bool hide)
{
//Create Canvas GameObject
hiddenCanvas = new GameObject("___HiddenCanvas");
if (hide)
{
hiddenCanvas.hideFlags = HideFlags.HideAndDontSave;
}
//Create and Add Canvas Component
Canvas cnvs = hiddenCanvas.AddComponent<Canvas>();
cnvs.renderMode = RenderMode.ScreenSpaceOverlay;
cnvs.pixelPerfect = false;
//Set Cavas sorting order to be above other Canvas sorting order
cnvs.sortingOrder = 12;
cnvs.targetDisplay = 0;
//Make it child of the GameObject this script is attached to
hiddenCanvas.transform.SetParent(gameObject.transform);
}
private void addRectTransform()
{
RectTransform rctrfm = hiddenCanvas.AddComponent<RectTransform>();
}
//Adds CanvasScaler component to the Canvas GameObject
private void addCanvasScaler()
{
CanvasScaler cvsl = hiddenCanvas.AddComponent<CanvasScaler>();
cvsl.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
cvsl.referenceResolution = new Vector2(800f, 600f);
cvsl.matchWidthOrHeight = 0.5f;
cvsl.screenMatchMode = CanvasScaler.ScreenMatchMode.MatchWidthOrHeight;
cvsl.referencePixelsPerUnit = 100f;
}
//Adds GraphicRaycaster component to the Canvas GameObject
private void addGraphicsRaycaster()
{
GraphicRaycaster grcter = hiddenCanvas.AddComponent<GraphicRaycaster>();
grcter.ignoreReversedGraphics = true;
grcter.blockingObjects = GraphicRaycaster.BlockingObjects.None;
}
//Creates Hidden Panel and attaches Image component to it
private GameObject createHiddenPanel(bool hide)
{
//Create Hidden Panel GameObject
GameObject hiddenPanel = new GameObject("___HiddenPanel");
if (hide)
{
hiddenPanel.hideFlags = HideFlags.HideAndDontSave;
}
//Add Image Component to the hidden panel
Image pnlImg = hiddenPanel.AddComponent<Image>();
pnlImg.sprite = null;
pnlImg.color = new Color(1, 1, 1, 0); //Invisible
pnlImg.material = null;
pnlImg.raycastTarget = true;
//Make it child of HiddenCanvas GameObject
hiddenPanel.transform.SetParent(hiddenCanvas.transform);
return hiddenPanel;
}
//Stretch the Image to match the screen width and height, then set its anchor points to the corners of the canvas.
private void stretchImageAndConfigAnchor(GameObject panel)
{
Image pnlImg = panel.GetComponent<Image>();
//Reset postion to middle of the screen
pnlImg.rectTransform.anchoredPosition3D = new Vector3(0, 0, 0);
//Stretch the Image so that the whole screen is totally covered
pnlImg.rectTransform.anchorMin = new Vector2(0, 0);
pnlImg.rectTransform.anchorMax = new Vector2(1, 1);
pnlImg.rectTransform.pivot = new Vector2(0.5f, 0.5f);
}
//Adds EventForwarder script to the Hidden Panel GameObject
private void addEventForwarder(GameObject panel)
{
EventForwarder evnfwdr = panel.AddComponent<EventForwarder>();
}
//Adds EventUnBlocker script to the Hidden Panel GameObject
private void addEventRouter(GameObject panel)
{
EventUnBlocker evtrtr = panel.AddComponent<EventUnBlocker>();
eventRouter = evtrtr;
}
//Add EventSystem
private void addEventSystemAndInputModule()
{
//Check if EventSystem exist. If it does not create and add it
EventSystem eventSys = FindObjectOfType<EventSystem>();
if (eventSys == null)
{
GameObject evObj = new GameObject("EventSystem");
EventSystem evs = evObj.AddComponent<EventSystem>();
evs.firstSelectedGameObject = null;
evs.sendNavigationEvents = true;
evs.pixelDragThreshold = 5;
eventSys = evs;
}
//Check if StandaloneInputModule exist. If it does not create and add it
StandaloneInputModule sdlIpModl = FindObjectOfType<StandaloneInputModule>();
if (sdlIpModl == null)
{
sdlIpModl = eventSys.gameObject.AddComponent<StandaloneInputModule>();
sdlIpModl.horizontalAxis = "Horizontal";
sdlIpModl.verticalAxis = "Vertical";
sdlIpModl.submitButton = "Submit";
sdlIpModl.cancelButton = "Cancel";
sdlIpModl.inputActionsPerSecond = 10f;
sdlIpModl.repeatDelay = 0.5f;
sdlIpModl.forceModuleActive = false;
}
}
/*
Forwards Handler Event to every GameObject that implements IDragHandler, IPointerDownHandler, IPointerUpHandler interface
*/
public void forwardDragEvent(PointerEventData eventData)
{
//Route and send the event to UI and Colliders
for (int i = 0; i < registeredGameobjects.Count; i++)
{
ExecuteEvents.Execute<IDragHandler>(registeredGameobjects[i],
eventData,
ExecuteEvents.dragHandler);
}
//Route and send the event to UI and Colliders
eventRouter.routeDragEvent(eventData);
}
public void forwardPointerDownEvent(PointerEventData eventData)
{
//Send the event to all subscribed scripts
for (int i = 0; i < registeredGameobjects.Count; i++)
{
ExecuteEvents.Execute<IPointerDownHandler>(registeredGameobjects[i],
eventData,
ExecuteEvents.pointerDownHandler);
}
//Route and send the event to UI and Colliders
eventRouter.routePointerDownEvent(eventData);
}
public void forwardPointerUpEvent(PointerEventData eventData)
{
//Send the event to all subscribed scripts
for (int i = 0; i < registeredGameobjects.Count; i++)
{
ExecuteEvents.Execute<IPointerUpHandler>(registeredGameobjects[i],
eventData,
ExecuteEvents.pointerUpHandler);
}
//Route and send the event to UI and Colliders
eventRouter.routePointerUpEvent(eventData);
}
}
2.EventForwarder.cs - It simply receives any event from the hidden Image and passes it to the WholeScreenPointer.cs script for processing.
public class EventForwarder : MonoBehaviour, IDragHandler, IPointerDownHandler, IPointerUpHandler
{
WholeScreenPointer wcp = null;
void Start()
{
wcp = WholeScreenPointer.Instance;
}
public void OnDrag(PointerEventData eventData)
{
wcp.forwardDragEvent(eventData);
}
public void OnPointerDown(PointerEventData eventData)
{
wcp.forwardPointerDownEvent(eventData);
}
public void OnPointerUp(PointerEventData eventData)
{
wcp.forwardPointerUpEvent(eventData);
}
}
3.EventUnBlocker.cs - It unblocks the rays the hidden Image is blocking by sending fake events to any object beneath it, be it UI, a 2D collider or a 3D collider.
public class EventUnBlocker : MonoBehaviour
{
List<GraphicRaycaster> grRayCast = new List<GraphicRaycaster>(); //UI
List<Physics2DRaycaster> phy2dRayCast = new List<Physics2DRaycaster>(); //Collider 2D (Sprite Renderer)
List<PhysicsRaycaster> phyRayCast = new List<PhysicsRaycaster>(); //Normal Collider(3D/Mesh Renderer)
List<RaycastResult> resultList = new List<RaycastResult>();
//For Detecting button click and sending fake Button Click to UI Buttons
Dictionary<int, GameObject> pointerIdToGameObject = new Dictionary<int, GameObject>();
// Use this for initialization
void Start()
{
}
public void sendArtificialUIEvent(Component grRayCast, PointerEventData eventData, PointerEventType evType)
{
//Route to all Object in the RaycastResult
for (int i = 0; i < resultList.Count; i++)
{
/*Do something if it is NOT this GameObject.
We don't want any other detection on this GameObject
*/
if (resultList[i].gameObject != this.gameObject)
{
//Check if this is UI
if (grRayCast is GraphicRaycaster)
{
//Debug.Log("UI");
routeEvent(resultList[i], eventData, evType, true);
}
//Check if this is Collider 2D/SpriteRenderer
if (grRayCast is Physics2DRaycaster)
{
//Debug.Log("Collider 2D/SpriteRenderer");
routeEvent(resultList[i], eventData, evType, false);
}
//Check if this is Collider/MeshRender
if (grRayCast is PhysicsRaycaster)
{
//Debug.Log("Collider 3D/Mesh");
routeEvent(resultList[i], eventData, evType, false);
}
}
}
}
//Creates fake PointerEventData that will be used to make PointerEventData for the callback functions
PointerEventData createEventData(RaycastResult rayResult)
{
PointerEventData fakeEventData = new PointerEventData(EventSystem.current);
fakeEventData.pointerCurrentRaycast = rayResult;
return fakeEventData;
}
private void routeEvent(RaycastResult rayResult, PointerEventData eventData, PointerEventType evType, bool isUI = false)
{
bool foundKeyAndValue = false;
GameObject target = rayResult.gameObject;
//Make fake GameObject target
PointerEventData fakeEventData = createEventData(rayResult);
switch (evType)
{
case PointerEventType.Drag:
//Send/Simulate Fake OnDrag event
ExecuteEvents.Execute<IDragHandler>(target, fakeEventData,
ExecuteEvents.dragHandler);
break;
case PointerEventType.Down:
//Send/Simulate Fake OnPointerDown event
ExecuteEvents.Execute<IPointerDownHandler>(target,
fakeEventData,
ExecuteEvents.pointerDownHandler);
//Code Below is for UI. break out of case if this is not UI
if (!isUI)
{
break;
}
//Prepare Button Click. Should be sent in the if PointerEventType.Up statement
Button buttonFound = target.GetComponent<Button>();
//If pointerId is not in the dictionary add it
if (buttonFound != null)
{
if (!dictContains(eventData.pointerId))
{
dictAdd(eventData.pointerId, target);
}
}
//Bug in Unity with GraphicRaycaster and Toggle. Have to use a hack below
//Toggle Toggle component
Toggle toggle = null;
if ((target.name == "Checkmark" || target.name == "Label") && toggle == null)
{
toggle = target.GetComponentInParent<Toggle>();
}
if (toggle != null)
{
//Debug.LogWarning("Toggled!: " + target.name);
toggle.isOn = !toggle.isOn;
//Destroy(toggle.gameObject);
}
break;
case PointerEventType.Up:
//Send/Simulate Fake OnPointerUp event
ExecuteEvents.Execute<IPointerUpHandler>(target,
fakeEventData,
ExecuteEvents.pointerUpHandler);
//Code Below is for UI. break out of case if this is not UI
if (!isUI)
{
break;
}
//Send Fake Button Click if requirement is met
Button buttonPress = target.GetComponent<Button>();
/*If pointerId is in the dictionary, check
*/
if (buttonPress != null)
{
if (dictContains(eventData.pointerId))
{
//Check if GameObject matches too. If so then this is a valid Click
for (int i = 0; i < resultList.Count; i++)
{
GameObject tempButton = resultList[i].gameObject;
if (tempButton != this.gameObject && dictContains(eventData.pointerId, tempButton))
{
foundKeyAndValue = true;
//Debug.Log("Button ID and GameObject Match! Sending Click Event");
//Send/Simulate Fake Click event to the Button
ExecuteEvents.Execute<IPointerClickHandler>(tempButton,
new PointerEventData(EventSystem.current),
ExecuteEvents.pointerClickHandler);
}
}
}
}
break;
}
//Remove pointerId since it exist
if (foundKeyAndValue)
{
dictRemove(eventData.pointerId);
}
}
void routeOption(PointerEventData eventData, PointerEventType evType)
{
UpdateRaycaster();
if (WholeScreenPointer.Instance.simulateUIEvent)
{
//Loop Through All GraphicRaycaster(UI) and throw Raycast to each one
for (int i = 0; i < grRayCast.Count; i++)
{
//Throw Raycast to all UI elements in the position(eventData)
grRayCast[i].Raycast(eventData, resultList);
sendArtificialUIEvent(grRayCast[i], eventData, evType);
}
//Reset Result
resultList.Clear();
}
if (WholeScreenPointer.Instance.simulateCollider2DEvent)
{
//Loop Through All Collider 2D (Sprite Renderer) and throw Raycast to each one
for (int i = 0; i < phy2dRayCast.Count; i++)
{
//Throw Raycast to all UI elements in the position(eventData)
phy2dRayCast[i].Raycast(eventData, resultList);
sendArtificialUIEvent(phy2dRayCast[i], eventData, evType);
}
//Reset Result
resultList.Clear();
}
if (WholeScreenPointer.Instance.simulateColliderEvent)
{
//Loop Through All Normal Collider(3D/Mesh Renderer) and throw Raycast to each one
for (int i = 0; i < phyRayCast.Count; i++)
{
//Throw Raycast to all UI elements in the position(eventData)
phyRayCast[i].Raycast(eventData, resultList);
sendArtificialUIEvent(phyRayCast[i], eventData, evType);
}
//Reset Result
resultList.Clear();
}
}
public void routeDragEvent(PointerEventData eventData)
{
routeOption(eventData, PointerEventType.Drag);
}
public void routePointerDownEvent(PointerEventData eventData)
{
routeOption(eventData, PointerEventType.Down);
}
public void routePointerUpEvent(PointerEventData eventData)
{
routeOption(eventData, PointerEventType.Up);
}
public void UpdateRaycaster()
{
convertToList(FindObjectsOfType<GraphicRaycaster>(), grRayCast);
convertToList(FindObjectsOfType<Physics2DRaycaster>(), phy2dRayCast);
convertToList(FindObjectsOfType<PhysicsRaycaster>(), phyRayCast);
}
//To avoid ToList() function
void convertToList(GraphicRaycaster[] fromComponent, List<GraphicRaycaster> toComponent)
{
//Clear and copy new Data
toComponent.Clear();
for (int i = 0; i < fromComponent.Length; i++)
{
toComponent.Add(fromComponent[i]);
}
}
//To avoid ToList() function
void convertToList(Physics2DRaycaster[] fromComponent, List<Physics2DRaycaster> toComponent)
{
//Clear and copy new Data
toComponent.Clear();
for (int i = 0; i < fromComponent.Length; i++)
{
toComponent.Add(fromComponent[i]);
}
}
//To avoid ToList() function
void convertToList(PhysicsRaycaster[] fromComponent, List<PhysicsRaycaster> toComponent)
{
//Clear and copy new Data
toComponent.Clear();
for (int i = 0; i < fromComponent.Length; i++)
{
toComponent.Add(fromComponent[i]);
}
}
//Checks if object is in the dictionary
private bool dictContains(GameObject obj)
{
return pointerIdToGameObject.ContainsValue(obj);
}
//Checks if int is in the dictionary
private bool dictContains(int pointerId)
{
return pointerIdToGameObject.ContainsKey(pointerId);
}
//Checks if int and object is in the dictionary
private bool dictContains(int pointerId, GameObject obj)
{
return (pointerIdToGameObject.ContainsKey(pointerId) && pointerIdToGameObject.ContainsValue(obj));
}
//Adds pointerId and its value to dictionary
private void dictAdd(int pointerId, GameObject obj)
{
pointerIdToGameObject.Add(pointerId, obj);
}
//Removes pointerId and its value from dictionary
private void dictRemove(int pointerId)
{
pointerIdToGameObject.Remove(pointerId);
}
public enum PointerEventType
{
Drag, Down, Up
}
}
Usage:
1.Attach the WholeScreenPointer script to an empty GameObject or the Camera.
2.To receive any event in the scene, simply implement IDragHandler, IPointerDownHandler, IPointerUpHandler in any script then call WholeScreenPointer.Instance.registerGameObject(this.gameObject); once. Any event from the screen will now be sent to that script. Don't forget to unregister in the OnDisable() function.
For example, attach Test to any GameObject you want to receive touch events:
public class Test : MonoBehaviour, IDragHandler, IPointerDownHandler, IPointerUpHandler
{
void Start()
{
//Register this GameObject so that it will receive events from WholeScreenPointer script
WholeScreenPointer.Instance.registerGameObject(this.gameObject);
}
public void OnDrag(PointerEventData eventData)
{
Debug.Log("Dragging: ");
}
public void OnPointerDown(PointerEventData eventData)
{
Debug.Log("Pointer Down: ");
}
public void OnPointerUp(PointerEventData eventData)
{
Debug.Log("Pointer Up: ");
}
void OnDisable()
{
WholeScreenPointer.Instance.unRegisterGameObject(this.gameObject);
}
}
NOTE:
You only need to call WholeScreenPointer.Instance.registerGameObject(this.gameObject); if you want to receive event anywhere on the screen. If you just want to receive event from current Object, then you don't have to call this. If you do, you will receive multiple events.
Other Important functions:
Enable WholeScreen Event - WholeScreenPointer.Instance.enablewholeScreenPointer(true);
Disable WholeScreen Event - WholeScreenPointer.Instance.enablewholeScreenPointer(false);
Finally, this can be improved more.
The question and the answer I am going to post seem pretty much opinion-based. Nevertheless, I am going to answer as best as I can.
If you are trying to detect pointer events on the screen, there is nothing wrong with representing the screen with an object. In your case, you use a 3D collider to cover the entire frustum of the camera. However, there is a native way to do this in Unity, using a 2D UI object that covers the entire screen. The screen can be best represented by a 2D object. For me, this seems like a natural way to do it.
I use a generic code for this purpose:
public class Screen : MonoSingleton<Screen>, IPointerClickHandler, IDragHandler, IBeginDragHandler, IEndDragHandler, IPointerDownHandler, IPointerUpHandler, IScrollHandler {
private bool holding = false;
private PointerEventData lastPointerEventData;
#region Events
public delegate void PointerEventHandler(PointerEventData data);
static public event PointerEventHandler OnPointerClick = delegate { };
static public event PointerEventHandler OnPointerDown = delegate { };
/// <summary> Dont use delta data as it will be wrong. If you are going to use delta, use OnDrag instead. </summary>
static public event PointerEventHandler OnPointerHold = delegate { };
static public event PointerEventHandler OnPointerUp = delegate { };
static public event PointerEventHandler OnBeginDrag = delegate { };
static public event PointerEventHandler OnDrag = delegate { };
static public event PointerEventHandler OnEndDrag = delegate { };
static public event PointerEventHandler OnScroll = delegate { };
#endregion
#region Interface Implementations
void IPointerClickHandler.OnPointerClick(PointerEventData e) {
lastPointerEventData = e;
OnPointerClick(e);
}
// And other interface implementations, you get the point
#endregion
void Update() {
if (holding) {
OnPointerHold(lastPointerEventData);
}
}
}
The Screen is a singleton, because there is only one screen in the context of the game. Objects (like the camera) subscribe to its pointer events and arrange themselves accordingly. This also keeps single responsibility intact.
You would use this by attaching it to an object that represents the so-called glass (the surface of the screen). If you think of the UI buttons as popping out of the screen, the glass would be under them. For this, the glass has to be the first child of the Canvas. Of course, the Canvas has to be rendered in screen space for this to make sense.
One hack here, which admittedly doesn't feel clean, is to add an invisible Image component to the glass so it will receive events. This acts as the raycast target of the glass.
You could also use Input (Input.touches etc.) to implement this glass object. It would work by checking whether the input changed in every Update call. That seems like a polling-based approach to me, whereas the above is event-based.
Your question reads as if you are looking for a way to justify using the Input class. IMHO, don't make it harder for yourself. Use what works, and accept the fact that Unity is not perfect.

multi-touch in the unity game not working

I am following this video here
The tutorial and everything works fine except the multi-touch parts. I have two buttons, which are actually two different images on the canvas, but when I use one I can't use the other touch button. Each of my buttons has this script.
Here is the code:
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.EventSystems;
using System.Collections;
public class SimpleTouchShoot : MonoBehaviour, IPointerDownHandler, IPointerUpHandler
{
private bool touched;
private int pointerID;
private bool canFire;
void Awake()
{
touched = false;
canFire = false;
}
public void OnPointerDown (PointerEventData data)
{
if(!touched)
{
touched = true;
pointerID = data.pointerId;
canFire = true;
}
}
public void OnPointerUp (PointerEventData data)
{
if(data.pointerId == pointerID)
{
touched = false;
canFire = false;
}
}
public bool CanFire ()
{
return canFire;
}
}
The version of Unity is 4.6.
You're using the interfaces for the event handlers. They don't implement their own multi-touch handling, so you would have to write your own touch logic. In most cases that is a hacky way of doing it: making an array or a list of touch inputs and handling the cases for each one.
Unity3D Forum - Event System
Click on the link and check answer number 5. That answer is a hacky way of doing it, because the event interface adds the event handler to whatever GameObject the script is on, but you still need to tell the game engine that you accept more than one input - put simply, catch the second touch input that is otherwise being ignored.
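A hedged sketch of that "own touch logic" idea - polling Input.touches each frame and handling each finger separately (the class name is illustrative):

using UnityEngine;

// Illustrative per-finger tracking: each touch is identified by its fingerId,
// so two buttons can be driven by two different fingers at the same time.
public class MultiTouchTracker : MonoBehaviour
{
    void Update()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);
            switch (touch.phase)
            {
                case TouchPhase.Began:
                    Debug.Log("Finger " + touch.fingerId + " down at " + touch.position);
                    break;
                case TouchPhase.Ended:
                case TouchPhase.Canceled:
                    Debug.Log("Finger " + touch.fingerId + " up");
                    break;
            }
        }
    }
}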
