Using the InputManager prefab from the HoloToolkit asset and the implementation code below, the user can tap and hold on a given object and then move their hand left or right (along the x-plane) to rotate the object around the Y axis, or up and down (along the y-plane) to rotate it around the X axis.
However, there appears to be a bug: if the user's gaze comes off the object, the rotation stops immediately until their gaze returns to it. Is this the intended functionality? If so, how does one preserve the current object being changed via the navigation gesture and allow it to continue being manipulated until the user's hand leaves the FOV or the user releases the tap-and-hold gesture?
The goal is to utilize the tap-and-hold gesture without requiring the user's gaze to stay locked onto the object for the entire rotation. This is quite difficult with small or awkwardly shaped objects.
Implementation code:
[Tooltip("Controls speed of rotation.")]
public float RotationSensitivity = 2.0f;

private float rotationFactorX, rotationFactorY;

public void OnNavigationStarted(NavigationEventData eventData)
{
    Debug.Log("Navigation started");
}

public void OnNavigationUpdated(NavigationEventData eventData)
{
    rotationFactorX = eventData.CumulativeDelta.x * RotationSensitivity;
    rotationFactorY = eventData.CumulativeDelta.y * RotationSensitivity;

    // Control structure to prevent dual-axis movement
    if (System.Math.Abs(eventData.CumulativeDelta.x) > System.Math.Abs(eventData.CumulativeDelta.y))
    {
        // Rotate focusedObject around the Y axis
        transform.Rotate(new Vector3(0, -1 * rotationFactorX, 0));
    }
    else
    {
        // Rotate focusedObject around the X axis
        transform.Rotate(new Vector3(-1 * rotationFactorY, 0, 0));
    }
}

public void OnNavigationCompleted(NavigationEventData eventData)
{
    Debug.Log("Navigation completed");
}

public void OnNavigationCanceled(NavigationEventData eventData)
{
    Debug.Log("Navigation canceled");
}
You need to set up a GestureRecognizer and register for its events:
NavigationRecognizer = new GestureRecognizer();
NavigationRecognizer.SetRecognizableGestures(GestureSettings.Tap);
NavigationRecognizer.TappedEvent += NavigationRecognizer_TappedEvent;
ResetGestureRecognizers();
This is for a tapped event, but handling the other gestures is as simple as adding the event callback for them and combining flags with the bitwise OR (|) in the SetRecognizableGestures() call, e.g.
NavigationRecognizer.SetRecognizableGestures(GestureSettings.Tap | GestureSettings.NavigationX);
Draco18s answer is safer, but this solution works as well because
the InputManager prefab implements a stack for us.
On navigation started, clear the stack and push the object being 'navigated' onto the stack:
InputManager.Instance.ClearModalInputStack();
InputManager.Instance.PushModalInputHandler(gameObject);
On navigation completed or canceled, pop it off the stack:
InputManager.Instance.PopModalInputHandler();
Add this to your own implementation script, no need to adjust any pre-existing scripts on the InputManager.
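Putting both pieces together, a minimal sketch might look like the following (assuming HoloToolkit's InputManager prefab is in the scene and its INavigationHandler interface has these four callbacks; the class name is illustrative):

```csharp
// Sketch: keep receiving navigation events even after gaze leaves the object,
// by routing input modally to this object for the duration of the gesture.
public class RotatableObject : MonoBehaviour, INavigationHandler
{
    public float RotationSensitivity = 2.0f;

    public void OnNavigationStarted(NavigationEventData eventData)
    {
        // Route all input to this object until navigation ends.
        InputManager.Instance.ClearModalInputStack();
        InputManager.Instance.PushModalInputHandler(gameObject);
    }

    public void OnNavigationUpdated(NavigationEventData eventData)
    {
        float x = eventData.CumulativeDelta.x * RotationSensitivity;
        float y = eventData.CumulativeDelta.y * RotationSensitivity;

        // Prevent dual-axis movement: rotate around whichever axis dominates.
        if (Mathf.Abs(eventData.CumulativeDelta.x) > Mathf.Abs(eventData.CumulativeDelta.y))
            transform.Rotate(0, -x, 0);
        else
            transform.Rotate(-y, 0, 0);
    }

    public void OnNavigationCompleted(NavigationEventData eventData)
    {
        InputManager.Instance.PopModalInputHandler();
    }

    public void OnNavigationCanceled(NavigationEventData eventData)
    {
        InputManager.Instance.PopModalInputHandler();
    }
}
```

With the object pushed as a modal input handler, the navigation updates keep arriving even when gaze drifts off the mesh, which is exactly what the question asks for.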
I am using the Unity Engine with C#.
I have a 1x1 cube which moves forward on a grid of 49, 1x1 cubes (screenshot below) - when I press the start button on the controller.
The movement code for the cube is below.
void MovePlayerCube()
{
    transform.Translate(direction * moveSpeed * Time.deltaTime);
}
When this cube passes over a cube with an arrow on it, the cube will change direction to where the arrow is pointing (staying on the same Y axis).
I need to detect the exact point at which the cube is directly over the cube with the arrow on it, and run the 'change direction' code at that point.
I'm currently using Vector3.Distance to check if the X and Z coordinates of the 2 cubes are close enough together (if they are less than 0.03f in distance), I can't check if they are equal due to floating point imprecision.
However, this is really ineffective: half the time the check doesn't register, probably for the same reason, and if I increase the 0.03f to a point where it never misses, it becomes really noticeable that the cube isn't aligned with the grid anymore.
There has to be a proper solution to this and hopefully I've clarified the situation enough?
Any advice is appreciated.
You are moving your cube via
transform.Translate(direction * moveSpeed * Time.deltaTime);
which will never be exact and might overshoot your target positions.
=> I would rather implement a coroutine that moves the cube exactly one field at a time, ensuring that after each step it fully aligns with the grid, and run your checks once at that moment.
It doesn't even have to match exactly then; you only need to check whether you are hitting a cube somewhere below you.
So something like e.g.
private Vector3Int direction = Vector3Int.left;

private IEnumerator MoveRoutine()
{
    // Depends on your needs whether this runs forever, for a certain
    // number of steps, or until some exit condition
    while (true)
    {
        // Calculate the next position.
        // Optionally round it to int => 1x1 grid ensured on arrival
        // (again depends a bit on your needs)
        var nextPosition = Vector3Int.RoundToInt(transform.position) + direction;

        // Move until reaching the target position
        // (Vector3 == Vector3 uses a precision of 1e-5)
        while (transform.position != nextPosition)
        {
            transform.position = Vector3.MoveTowards(transform.position, nextPosition, moveSpeed * Time.deltaTime);
            yield return null;
        }

        // Set the target position in one frame just to be sure
        transform.position = nextPosition;

        // Run your check here ONCE and adjust direction
    }
}
start this routine only ONCE via
StartCoroutine(MoveRoutine());
or if you have certain exit conditions at least only run one routine at a time.
A coroutine is basically just a temporary Update routine with slightly different syntax => of course you could implement the same thing in Update as well if you prefer that:
private Vector3Int direction = Vector3Int.left;
private Vector3 nextPosition;

private void Start()
{
    nextPosition = transform.position;
}

private void Update()
{
    if (transform.position != nextPosition)
    {
        transform.position = Vector3.MoveTowards(transform.position, nextPosition, moveSpeed * Time.deltaTime);
    }
    else
    {
        transform.position = nextPosition;

        // Run your check here ONCE and adjust direction,
        // then set the next position
        nextPosition = Vector3Int.RoundToInt(transform.position) + direction;
    }
}
Then regarding the check you can have a simple raycast since you only run it in a specific moment:
if (Physics.Raycast(transform.position, Vector3.down, out var hit))
{
    direction = Vector3Int.RoundToInt(hit.transform.forward);
}
assuming, of course, that your targets have colliders attached, that your moving cube's position (pivot) is above those colliders (assumed from your image), and that your targets' forward actually points in the desired new direction.
I would do it this way. First I would split the ability of certain objects to be "moving with certain speed" and "moving in a certain direction", this can be done with C# interfaces. Why? Because then your "arrow" cube could affect not only your current moving cube, but anything that implements the interfaces (maybe in the future you'll have some enemy cube, and it will also be affected by the arrow modifier).
IMovementSpeed.cs
public interface IMovementSpeed
{
    float MovementSpeed { get; set; }
}
IMovementDirection3D.cs
public interface IMovementDirection3D
{
    Vector3 MovementDirection { get; set; }
}
Then you implement the logic of your cube that moves automatically in a certain direction. Put this component on your player cube.
public class MovingStraight : MonoBehaviour, IMovementSpeed, IMovementDirection3D
{
    private float _movementSpeed;
    public float MovementSpeed
    {
        get { return _movementSpeed; }
        set { _movementSpeed = value; }
    }

    private Vector3 _movementDirection;
    public Vector3 MovementDirection
    {
        get { return _movementDirection; }
        set { _movementDirection = value; }
    }

    void Update()
    {
        // Use MovementSpeed and MovementDirection to advance the object's position
        transform.Translate(MovementDirection * MovementSpeed * Time.deltaTime);
    }
}
Now, to implement how the arrow cube modifies other objects, I would attach a (trigger) collider to both the moving cube and the arrow cube.
In the arrow cube's component, you can implement an action for when something enters the trigger zone: in our case, we check whether the object "has a direction that we can change", and if so, we change the direction and keep the object aligned by forcing the other object's position to match the arrow cube's position on the grid.
public class DirectionModifier : MonoBehaviour
{
    [SerializeField] private Vector3 _newDirection;

    private void OnTriggerEnter(Collider collider)
    {
        // A Collider itself does not implement our interface,
        // so look the interface up on the entering object.
        var objectWithDirection = collider.GetComponent<IMovementDirection3D>();
        if (objectWithDirection != null)
        {
            objectWithDirection.MovementDirection = _newDirection;

            // To make sure the object continues moving exactly from where
            // the arrow cube is, snap it onto the arrow cube's grid cell
            // (keeping its own height, since movement stays on the same Y).
            collider.transform.position = new Vector3(
                transform.position.x,
                collider.transform.position.y,
                transform.position.z);
        }
    }
}
If you made your trigger zones too large, however, then the moving cube will "jump" abruptly when it enters the arrow cube's trigger zone. You can fix it by either starting a coroutine as other answers suggested, or you could make the trigger zones pretty small, so that the jump is not noticeable (just make sure not to make them too small, or they may not intersect each other)
You could then similarly create many other modifier blocks that change speed or other properties.
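For instance, a speed modifier following the same pattern might look like this (the class name and `_speedMultiplier` field are illustrative):

```csharp
public class SpeedModifier : MonoBehaviour
{
    [SerializeField] private float _speedMultiplier = 2f;

    private void OnTriggerEnter(Collider collider)
    {
        // Works on anything that exposes a movement speed via the interface.
        var movable = collider.GetComponent<IMovementSpeed>();
        if (movable != null)
        {
            movable.MovementSpeed *= _speedMultiplier;
        }
    }
}
```

This is the payoff of splitting the abilities into interfaces: the modifier doesn't care whether the entering object is the player cube, an enemy, or anything else.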
I think it is enough for you to check whether the X and Z coordinates match, since the movement occurs only along them.
Example
if (_player.transform.position.x == _gameSquare.transform.position.x && _player.transform.position.z == _gameSquare.transform.position.z)
{
    Debug.Log("Rotate!");
}
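As the question notes, exact `==` comparisons on floats will usually fail once the cube moves in `Time.deltaTime`-sized steps, so in practice a tolerance-based variant of the same check is needed. A sketch using Unity's `Mathf.Approximately`:

```csharp
Vector3 p = _player.transform.position;
Vector3 s = _gameSquare.transform.position;

// Mathf.Approximately compares within a very small epsilon;
// for a grid game an explicit tolerance is often more practical,
// e.g. Mathf.Abs(p.x - s.x) < 0.01f.
if (Mathf.Approximately(p.x, s.x) && Mathf.Approximately(p.z, s.z))
{
    Debug.Log("Rotate!");
}
```

Even with a tolerance, this check can still be skipped entirely if the cube overshoots the grid point between frames, which is why the stepwise-movement answers above are the more robust fix.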
This is what I have tried so far:
I create a raycast and if it hits an object on layer 8 (the layer in which objects need to be launched to the player), I call the SlerpToHand() function.
private void Update()
{
    // Note: to limit the ray's length, pass raycastLength as the maxDistance
    // argument; scaling the direction vector does not cap the distance.
    if (Physics.Raycast(transform.position, transform.forward, out hit, raycastLength))
    {
        if (hit.collider.gameObject.layer == 8)
        {
            // Launch object to player
            SlerpToHand(hit.collider.transform);
        }
    }
}
Inside of SlerpToHand(), I set the object's position to Vector3.Slerp(), that vector being created from values in the hit object.
private void SlerpToHand(Transform hitObj)
{
    Vector3 hitObjVector = new Vector3(hitObj.transform.position.x, hitObj.transform.position.y, hitObj.transform.position.z);
    hitObj.position = Vector3.Slerp(hitObjVector, transform.position, speed);
}
But the result of this is all wrong, the object just gets teleported to the player's hands. Is Vector3.Slerp() not a good way to curve an object to the player? For context I am trying to recreate Half-Life: Alyx's grabbity gloves. There is still some work to do with the hand gestures but I am just trying to get the object curve down. Help is much appreciated, let me know if more info is needed.
See unity docs:
public static Vector3 Slerp(Vector3 a, Vector3 b, float t);
Here, t is a normalized position between the two input values: if t = 0, the result is exactly the first value; if t = 1, exactly the second; if t = 0.5, the midpoint between the two.
So, usually, you need to call Slerp every Update, increasing t step by step from 0 to 1. For this, Time.deltaTime (the time between updates) is typically used; for speed control, multiply it by your speed.
void Update()
{
    if (t < 1)
    {
        t += Time.deltaTime * speed;
        hitObj.position = Vector3.Slerp(startPosition, endPosition, t);
    }
}
...and in this case, to start moving, you just need to set t = 0. You will probably have to implement your own logic here, but this should show the idea.
In addition:
Slerp is meant for interpolating between directions; for positions, use Lerp.
Consider using the DOTween plugin - it's free and powerful for such cases.
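Applying that to the grab mechanic, a minimal coroutine sketch might look like this (the name `PullToHand` and the `duration` parameter are illustrative; an upward arc like Half-Life: Alyx's could be added later by offsetting the midpoint):

```csharp
// Pulls an object to this transform (the hand) over `duration` seconds.
private IEnumerator PullToHand(Transform obj, float duration)
{
    Vector3 startPosition = obj.position;
    float t = 0f;

    while (t < 1f)
    {
        t += Time.deltaTime / duration;
        // Re-read transform.position each frame so the object
        // tracks the hand even if the hand moves mid-flight.
        obj.position = Vector3.Lerp(startPosition, transform.position, t);
        yield return null;
    }

    obj.position = transform.position;
}
```

Start it once per grab, e.g. `StartCoroutine(PullToHand(hit.collider.transform, 0.5f));`, rather than calling it every Update, otherwise multiple routines will fight over the object's position.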
I'm having trouble getting my Player object to continue doing what it was doing after invoking an event.
I am working on a little platformer using Monogame, and have run into a bit of an issue. The way I'm handling collisions is I have a container that holds the game area's solid collision masks (simple Rectangles) and my Player has a mask. When the player moves, it invokes an event proposing its new position. The GameController takes that event and uses it to compare the player's proposed position and collision mask against the solids, and if it finds any collisions, passes those collisions into the player, where the player processes them to adjust its Proposed Position and then finally moves.
The problem is it seems as though once the event is invoked, the GameController does its thing and I never return to the code in my Player until I purposely try to move. I'm relatively new to events and event handling, and it sounded like this was a good way to have the GameController and Player communicate, but is it interrupting the Player's code flow and never returning to it? I guess my thought process was that the game would process its event handling method, then return to where we left off in the Player's code, but that either is not how it works, or I'm missing a step.
ActorPlayer Properties
Creates the Event to be fired off when the Player indicates any movement.
public delegate void PlayerMoveEventHandler(object source, EventArgs e);
public event PlayerMoveEventHandler PlayerMoved;
GameController Initialize
Subscribes to the Player's PlayerMoved event, assigning the OnPlayerMove method to it (see that method below)
Player = new ActorPlayer(0, PlayerIndex.One, new Vector2(112, 104));
Player.PlayerMoved += OnPlayerMove;
ActorPlayer Update
At Line 23, the ActorPlayer calls the OnPlayerMoved() method, which fires off the event for any subscribers
public void Update(GameTime gameTime)
{
    float deltaTime = (float)gameTime.ElapsedGameTime.TotalSeconds;
    ProcessInput(deltaTime);

    if (IsGrounded == false)
    {
        // Add Gravity
        FallVector = FallVector + new Vector2(0, Global.GRAVITY_RATE * deltaTime);
        if (FallVector.Y > 4f)
        {
            FallVector = new Vector2(0, 6f);
        }
        Velocity = Velocity + FallVector;
    }

    if (Velocity != new Vector2(0))
    {
        ProposedPosition = Position + Velocity;
        CreateProposalMask(16, 24);
        OnPlayerMoved();
        if (IsColliding == true)
        {
            for (int i = Collisions.Count - 1; i >= 0; i--)
            {
                HandleCollision(Collisions.ElementAt(i));
                Collisions.RemoveAt(i);
            }
            IsColliding = false;
        }
        Position = ProposedPosition;
        Velocity = new Vector2(0);
        CreateCollisionMask(16, 24);
    }
}
ActorPlayer OnPlayerMoved
Triggers the event if any subscribers are listening
protected virtual void OnPlayerMoved()
{
    PlayerMoved?.Invoke(this, EventArgs.Empty);
}
GameController OnPlayerMove
The method GameController runs if my Player's PlayerMoved event is triggered
public void OnPlayerMove(object source, EventArgs e)
{
    // Check for Collisions against ProposedPosition on Player
    foreach (EntityCollisionSolid solid in MapContainer.CollisionSolids)
    {
        if (Player.CollisionMask.Intersects(solid.CollisionMask) == true)
        {
            Rectangle _collision = Rectangle.Intersect(Player.CollisionMask, solid.CollisionMask);
            Player.AddCollision(_collision);
        }
    }

    if (Player.IsGrounded == true)
    {
        Rectangle belowPlayerMask = new Rectangle(Player.CollisionMask.X, Player.CollisionMask.Bottom, Player.CollisionMask.Width, 1);
        foreach (EntityCollisionSolid solid in MapContainer.CollisionSolids)
        {
            if (belowPlayerMask.Intersects(solid.CollisionMask) == true)
            {
                return;
            }
        }
        Player.PlayerFalling();
    }
}
My assumption is that it would flow like this:
Player Checks for Input / Gravity. Creates a Velocity.
Player Combines their current Position with Velocity to create a movement Proposal.
If any Velocity exists, Player invokes its OnPlayerMoved event.
GameController is subscribed to that event, so it jumps in and uses the ProposalPosition to check for collisions that will occur if the player moves.
If the GameController finds any collisions, it adds Collision Rectangles to the Player's list to adjust their Proposal and avoid going into the wall/floor.
The Player continues after this, checking if any Collisions have been added to its list. If so, it adjusts its ProposalPosition.
Finally, the Position is updated to ProposalPosition.
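That flow is in fact how C# events behave: Invoke runs every subscriber synchronously, and control then returns to the statement right after the Invoke call, all within the same Update. A tiny standalone illustration of that ordering (names are illustrative):

```csharp
using System;

class EventFlowDemo
{
    static event Action Moved;

    static void Main()
    {
        Moved += () => Console.WriteLine("1: subscriber runs");
        Moved?.Invoke();
        Console.WriteLine("2: control returns here immediately afterwards");
    }
}
```

So the event itself cannot be what stops the player's Update mid-stream; the lines after OnPlayerMoved() do execute in the same frame.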
What is actually happening is that when the OnPlayerMoved event triggers, the GameController does exactly as it was asked, but then nothing happens until I provide input again. It is as though it leaves the player's Update method during the event call and never returns to it. That may be how Events are supposed to work and I'm just unaware.
EDIT Thanks for everyone's suggestions! I'm still new to this, so I apologize for the mess of code shared before :) I have updated the code snippets above to properly show the delegate and EventHandler creation, subscription, triggering, and handling, as suggested below! The setup I have above works for what I was trying to accomplish.
Wow, so I had my GameController subscribed to the event, but the GameController hadn't been finished on my end, so portions of my collision detection were still being handled in my Main() loop. Moving those over to my GameController and adjusting the Update methods fixed the issue. Many thanks to Peter for helping me understand Events much better! I'd have just given up on the idea altogether soon without his input :)
I am spawning a prefab object at runtime (actually, in the Start() method of another object), and I need to apply a scaling to the object. I made a little component to handle this:
public class Spawner : MonoBehaviour {
    public Transform SpawnPrefab;
    public Vector3 Scale;

    void Start () {
        var spawn = Instantiate(SpawnPrefab, Vector3.zero, Quaternion.identity);
        spawn.localScale = Vector3.Scale(spawn.localScale, Scale);
        // spawn.GetComponent<Rigidbody>().ResetCenterOfMass(); // Has no effect
    }
}
The pivot point of the prefab I am spawning does not coincide with the centre of mass of the object. Therefore, the rescaling means that the centre of mass location relative to the pivot will change. However, it's not being updated automatically, so my spawned object has unexpected physics.
I tried adding a call to GetComponent<Rigidbody>().ResetCenterOfMass() immediately after the call to Scale() (the commented-out line above), but this has no effect.
However, if I put the call to ResetCenterOfMass() in the Start() method of a separate little component added to the spawned object, e.g.
public class COMReset : MonoBehaviour {
    void Start() {
        GetComponent<Rigidbody>().ResetCenterOfMass();
    }
}
this does cause the centre of mass to be recalculated correctly. However, the spawned object appears to have already been through at least one physics update with the wrong COM by this time, and so has already acquired some unexpected momentum.
Why isn't the COM being automatically recalculated, without me having to call ResetCenterOfMass() explicitly? And if I must trigger it manually, can I do that immediately after the calls to Instantiate() and Scale(), rather than deferring like this?
With thanks to @DMGregory on GameDev for the suggestion, a call to Physics.SyncTransforms before invoking Rigidbody.ResetCenterOfMass fixes the problem:
public class Spawner : MonoBehaviour {
    public Transform SpawnPrefab;
    public Vector3 Scale;

    void Start () {
        var spawn = Instantiate(SpawnPrefab, Vector3.zero, Quaternion.identity);
        spawn.localScale = Vector3.Scale(spawn.localScale, Scale);
        Physics.SyncTransforms();
        spawn.GetComponent<Rigidbody>().ResetCenterOfMass();
    }
}
Evidently this direct modification of the transform scale isn't being automatically passed through to the physics engine, but Physics.SyncTransforms lets us manually flush those changes down to PhysX, so that the ResetCenterOfMass computation is then based on the correctly scaled transform.
I have been trying to add little things on top of the tutorials Unity supplies and I am confused about how to get this certain mechanic to work.
When my player shoots, it shoots in the direction of the mouse. When I run a client and a host on my computer, so that two players are connected for testing, I get weird results.
Both of my players shoot in the correct direction, but when I move the mouse in one client, I can see the blue circle (which represents where the bullets will be spawned) moving even while that client is not focused, i.e. while I am not currently clicked into it. I'm not sure whether this would cause errors if a friend down the street were to test it with me.
Here are 2 screenshots of my Scene/Game view to get a better visual : Pic 1 - Pic 2
I ended up using Network Transform Child on my parent GameObject for one of the child GameObjects that helps position the spawned bullets, but the visual behavior in my Scene tab still makes me worry about the accuracy of where bullets are spawned.
Here is my shooting code:
public class PlayerShooting : NetworkBehaviour
{
    public GameObject bulletPrefab;
    public GameObject fireSpot;
    public float bulletSpeed;
    public Transform rotater;
    public Camera cam;

    void Update()
    {
        // Only run the below code if this is the local player.
        if (!isLocalPlayer)
        {
            return;
        }

        // Rotate our bullet-spawning spot based on the location of the mouse.
        Vector3 dir = cam.ScreenToWorldPoint (Input.mousePosition) - rotater.position;
        float angle = Mathf.Atan2 (dir.y, dir.x) * Mathf.Rad2Deg;
        rotater.rotation = Quaternion.AngleAxis (angle, Vector3.forward);

        // When we hit spacebar
        if (Input.GetKeyDown(KeyCode.Space))
        {
            // Fire some bullets.
            CmdFire ();
        }
    }

    // Something the client wants done; it sends this "command" to the server to be processed
    [Command]
    public void CmdFire ()
    {
        // Create the Bullet from the Bullet Prefab
        var bullet = (GameObject)Instantiate (bulletPrefab, fireSpot.transform.position, Quaternion.identity);

        // Add velocity to the bullet
        bullet.GetComponent<Rigidbody2D>().velocity = (fireSpot.transform.position - rotater.position).normalized * bulletSpeed;

        // Spawn the bullet on the Clients.
        NetworkServer.Spawn (bullet);

        // Destroy the bullet after 4 seconds
        Destroy(bullet, 4.0f);
    }
}
My movement script :
public class Movement : NetworkBehaviour {
    void Update()
    {
        if (!isLocalPlayer)
        {
            return;
        }
        var x = Input.GetAxis("Horizontal") * Time.deltaTime * 3.0f;
        var y = Input.GetAxis("Vertical") * Time.deltaTime * 3.0f;
        transform.Translate(x, y, 0f);
    }
}
This will continue to be a problem if you'd like to run multiple instances of unity games on the same machine. If you go into Edit > Project Settings > Player, there is a Run in Background option under Resolution and Presentation. If you have this checked, both instances of the game will receive updated mouse positions per frame from Input.mousePosition (Keep in mind Input.GetKeyDown will only register to the focused instance of the game so there's some inconsistency here). If you don't have that check box selected, your game instance will pause when not focused (I'm guessing you don't want this if you're working with a client/host networking paradigm).
There are a few ways you can work around this. The ideal way to test a networked game would be to have a unique PC for each instance of the game. If that's not an option, you can add some checks to ensure the mouse is over the window. As long as the two windows don't overlap you can do that by checking the position against the screen info:
bool IsMouseOverWindow()
{
    return !(Input.mousePosition.x < 0 ||
             Input.mousePosition.y < 0 ||
             Input.mousePosition.x > Screen.width ||
             Input.mousePosition.y > Screen.height);
}
You can then use this to decide, in your Update(), whether or not you want to update rotater.rotation.
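Applied to the Update() from the question, that guard might look like this (a sketch; IsMouseOverWindow is the helper above):

```csharp
void Update()
{
    if (!isLocalPlayer)
    {
        return;
    }

    // Only track the mouse while it is actually over this game window.
    if (IsMouseOverWindow())
    {
        Vector3 dir = cam.ScreenToWorldPoint(Input.mousePosition) - rotater.position;
        float angle = Mathf.Atan2(dir.y, dir.x) * Mathf.Rad2Deg;
        rotater.rotation = Quaternion.AngleAxis(angle, Vector3.forward);
    }

    if (Input.GetKeyDown(KeyCode.Space))
    {
        CmdFire();
    }
}
```

Note this only works reliably while the two test windows don't overlap, since the check is purely positional.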
Another option would be to implement MonoBehaviour.OnApplicationFocus. You can then enable/disable things (like updating rotations based on mousePosition) in response to that event. The cleanest solution would be to have a clear way for your systems to ask "is my window focused". You can make a class like this:
public class FocusListener : MonoBehaviour
{
public static bool isFocused = true;
void OnApplicationFocus (bool hasFocus) {
isFocused = hasFocus;
}
}
Make sure you have one of these somewhere in your game and then anything can check by looking at FocusListener.isFocused.
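Any script can then guard its input handling with that flag, e.g. (a sketch):

```csharp
void Update()
{
    // Ignore input entirely while this instance is unfocused.
    if (!isLocalPlayer || !FocusListener.isFocused)
    {
        return;
    }

    // ... rotate toward the mouse and shoot as before ...
}
```

Unlike the positional check, this keeps working even when the two test windows overlap, since it relies on the OS focus event rather than the cursor location.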