I'm building an RTS building placement system in Unity and have run into a small problem. I'm using the mouse position and a sphere cast in a state machine to place buildings. While in the "Placing" state, the building prefab follows the mouse position until the player clicks the left mouse button to place it. When the player then places another building, I use colliders on the already placed buildings to detect whether a building is already there, and I tint the building being placed red (for testing purposes).

The issue is that the building being placed rotates around the building it collides with and moves over the top of it. How do I stop rotation and Y-axis movement on an object that follows a mouse-position raycast? I have already tried constraining movement and rotation on the Rigidbody in the inspector, and I have tried removing the Rigidbody altogether, but I can't quite figure it out. Would it work to record the Y position and rotation of the placing object at the start and, whenever either of them changes, set it back to the starting value? Or is there an easier way to do it?
Neil
private void MoveCurrentPlaceableObjectToMouse()
{
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    RaycastHit hitInfo;

    if (Physics.SphereCast(ray, 2f, out hitInfo))
    {
        // Tint the preview red while the sphere cast hits an existing building.
        if (hitInfo.collider.tag == "Building")
        {
            isOkayToBuild = false;
            currentPlaceableObject.GetComponentInChildren<MeshRenderer>().material.color = Color.red;
        }
        else
        {
            isOkayToBuild = true;
            currentPlaceableObject.GetComponentInChildren<MeshRenderer>().material.color = Color.white;
        }

        currentPlaceableObject.GetComponent<BoxCollider>().isTrigger = false;
        state = BuildingState.Placing;

        // The preview follows the hit point and is aligned to the surface normal of whatever was hit.
        currentPlaceableObject.transform.position = hitInfo.point;
        currentPlaceableObject.transform.rotation = Quaternion.FromToRotation(Vector3.up, hitInfo.normal);
    }
}
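A minimal sketch of the idea floated in the question: ignore the hit normal, clamp the preview to a fixed ground height, and keep a fixed rotation. The groundY field is an assumption for flat terrain; adjust it to your scene. If a Rigidbody has to stay on the preview for overlap detection, marking it kinematic while placing also stops the physics engine from pushing or rotating it.

[SerializeField] private float groundY = 0f; // assumed flat-ground height

private void MoveCurrentPlaceableObjectToMouse()
{
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    if (Physics.SphereCast(ray, 2f, out RaycastHit hitInfo))
    {
        // Take only the X/Z of the hit point and clamp Y to the ground height.
        currentPlaceableObject.transform.position = new Vector3(hitInfo.point.x, groundY, hitInfo.point.z);

        // Keep the preview upright instead of aligning it to hitInfo.normal.
        currentPlaceableObject.transform.rotation = Quaternion.identity;
    }
}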
Related
I'm attempting to detect when a player wants to open or close a door, so I created an empty GameObject called Door Hinge with the tag "Door". I then created a cube named Door Body as a child of Door Hinge, gave it no tag, but set its layer to Ignore Raycast. The parent's scale is (1, 1, 1), but I changed the scale of the child as well as its X position slightly.
I'm not sure why, but the raycast seems to only be detecting the child cube and not the parent empty object. Could anyone let me know if I'm missing anything or doing something wrong? My detection code is below.
void CheckInteraction()
{
    // origin starts from the camera
    Vector3 origin = cam.transform.position;
    // direction of the camera
    Vector3 direction = cam.transform.forward;
    // The distance for the raycast
    float distance = 4f;
    // Used to store info about the object that the raycast hits
    RaycastHit hit;

    if (Physics.Raycast(origin, direction, out hit, distance))
    {
        Debug.Log(hit.transform.tag);
        if (hit.transform.tag == "Door")
        {
            Debug.Log("HIT");
            if (Input.GetKeyDown(KeyCode.E))
            {
                hit.transform.gameObject.GetComponent<DoorOpen>().enabled = true;
            }
        }
    }
}
The solution I was missing was that the parent empty object needed a Box Collider of its own; the raycast was only ever hitting the child's collider, which doesn't carry the "Door" tag.
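An alternative sketch, if you would rather keep the collider only on the child cube: walk up from whatever collider was hit and look for the script on a parent. DoorOpen is the component named in the question; the rest is illustrative.

if (Physics.Raycast(origin, direction, out hit, distance))
{
    // Look for DoorOpen on the hit object or any of its parents,
    // so the child's collider is enough and no tag check is needed.
    DoorOpen door = hit.collider.GetComponentInParent<DoorOpen>();
    if (door != null && Input.GetKeyDown(KeyCode.E))
    {
        door.enabled = true;
    }
}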
So, I am dragging 2D sprites from the canvas and instantiating a 3D object, but the prefabs are spawning at their default location. This is the code (I removed everything I had already tried for positioning the spawn):
You are trying to instantiate a prefab without specifying the position to instantiate it at. Try something like this.
public void OnEndDrag(PointerEventData eventData)
{
    // Instantiate with an explicit position (and a rotation, which this overload requires).
    Instantiate(prefab, eventData.position, Quaternion.identity);
}
Reference:
https://docs.unity3d.com/ScriptReference/Object.Instantiate.html
https://docs.unity3d.com/2017.3/Documentation/ScriptReference/EventSystems.PointerEventData-position.html
You don't do anything with the object's position, so it will spawn at the origin.
Without seeing your project it's hard to guess what your game requires for placing behaviour. I'm assuming you have an environment you want to place the object on, and that can easily be done with Physics.Raycast(). But if you're not over any environment, you can fire a ray from the mouse position, project it an arbitrary distance forward from the camera, and place the object there for a good effect.
This snippet should be enough for that:
public float raycastDistance = 10f;
public float projectDistance = 4f;

public void OnEndDrag(PointerEventData eventData)
{
    var spawned = Instantiate(prefab);
    var ray = Camera.main.ScreenPointToRay(eventData.position);

    if (Physics.Raycast(ray, out RaycastHit hit, raycastDistance))
    {
        // Place the object where the ray hit the environment.
        spawned.transform.position = hit.point;
    }
    else
    {
        // No environment under the cursor: project the ray forward from the camera.
        spawned.transform.position = ray.GetPoint(projectDistance);
    }
}
If you prefer, you can update the position of the spawned object in OnDrag() instead of only setting it in OnEndDrag(), which can look quite nice if you interpolate the object's position slightly as the mouse moves.
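A rough sketch of that variant, assuming the class also implements IDragHandler and reuses the prefab, raycastDistance and projectDistance fields from the snippet above (the spawned field and the 10f smoothing factor are illustrative):

private GameObject spawned;

public void OnDrag(PointerEventData eventData)
{
    if (spawned == null)
        spawned = Instantiate(prefab);

    var ray = Camera.main.ScreenPointToRay(eventData.position);
    Vector3 target = Physics.Raycast(ray, out RaycastHit hit, raycastDistance)
        ? hit.point
        : ray.GetPoint(projectDistance);

    // Ease towards the target instead of snapping, so the object trails the cursor smoothly.
    spawned.transform.position = Vector3.Lerp(spawned.transform.position, target, 10f * Time.deltaTime);
}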
I want to be able to interact with my GameObjects while organising my sprites with Sorting Layers.
Right now, Sorting Layers do not affect my raycast/collider situation; they only change the visual draw order.
In my test scene I have two sprites: a Background and one Object.
Both have a 2D Box Collider component.
My raycast script is supposed to detect which object my mouse is hovering over: either the Background (BG) or the Object, which lies on top of the BG.
As long as the BG collider is enabled in the inspector, my script will ONLY recognise the BG, even if the other object lies on top of my BG, visually and according to the Sorting Layer (both Z positions are 0; Sorting Layer Default, with Order in Layer 0 for the BG and 1 for the Object).
The colliders have the correct size, and if I disable or remove the collider from the BG, my script detects the other Object just fine.
The only way to make sure my Object gets detected while the BG collider is active is to change the Object's Z position to somewhere between my BG and my camera (less than 0 and greater than -10).
This is a fresh 2D project, so my camera's Projection is set to Orthographic, and my BG and Object both use a Sprite Renderer and 2D colliders.
I tried to:
change the "Order in Layer" back and forth, including negative numbers;
add a Foreground (and a Background) Sorting Layer;
(both options only had a visual effect and did not affect the raycast output)
change the positions in the hierarchy;
remove and re-add the colliders. (!) I observed that the 2D Box Collider I added last always ends up behind every other collider. This is something I can work with for now, but it will become a problem later when the project gets bigger.
I want to be able to interact with items and doors etc. (clicks and reactions to mouse hovering) in my 2D click adventure. If a 2D raycast is not the way to achieve this, I would be glad to know the proper technique to use in this situation.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class RayCast : MonoBehaviour
{
    [SerializeField] private Vector2 mousePosWorld2D;
    RaycastHit2D hit;

    void Update()
    {
        mousePosWorld2D = (Vector2)Camera.main.ScreenToWorldPoint(Input.mousePosition);
        hit = Physics2D.Raycast(mousePosWorld2D, Vector2.zero);

        if (hit.collider)
        {
            Debug.Log("Found collider: " + hit.collider.name);
        }
        else
        {
            Debug.Log("No collider found.");
        }
    }
}
I expected "Sorting Layer" or "Order in Layer" to have an impact on my raycast, and changing the Z position to have none, but it is the other way around: Sorting Layers only have a visual effect, while moving my Object closer to the camera (on the Z axis) helps detect the right collider.
How do I solve the problem?
What gets hit first by a ray depends only on how close the collider is to the camera, and not on the order in the hierarchy or the Sorting Layers.
To find the "highest" raycast target based on the Sorting Layer, I created the script below.
We fire a raycast and iterate over all the colliders that were hit using a foreach loop. The function then returns the GameObject with the highest sorting order within the relevant Sorting Layer. Note: if more than one GameObject shares the highest sorting order, the function returns the last one it found.
using UnityEngine;

public class Raycast : MonoBehaviour
{
    // The name of the relevant Sorting Layer
    [SerializeField] string layerToCheck = "Default";

    void Update()
    {
        Vector2 mousePosWorld2D = (Vector2)Camera.main.ScreenToWorldPoint(Input.mousePosition);
        GameObject result = GetHighestRaycastTarget(mousePosWorld2D);

        // Do stuff, for example:
        Debug.Log($"Highest Layer: {result}");
        if (Input.GetMouseButtonDown(0) && result != null)
        {
            Destroy(result);
        }
    }

    // Get the highest raycast target based on the sorting order within the relevant Sorting Layer.
    // Note: if multiple objects share the highest sorting order (e.g. 42), the function returns the last one it found.
    private GameObject GetHighestRaycastTarget(Vector2 mousePos)
    {
        GameObject topLayer = null;
        RaycastHit2D[] hits = Physics2D.RaycastAll(mousePos, Vector2.zero);

        foreach (RaycastHit2D hit in hits)
        {
            SpriteRenderer spriteRenderer = hit.transform.GetComponent<SpriteRenderer>();

            // Check that the target has a SpriteRenderer and
            // that the SpriteRenderer uses the relevant Sorting Layer
            if (spriteRenderer != null
                && spriteRenderer.sortingLayerName == layerToCheck)
            {
                if (topLayer == null)
                {
                    topLayer = spriteRenderer.transform.gameObject;
                }
                if (spriteRenderer.sortingOrder >= topLayer.GetComponent<SpriteRenderer>().sortingOrder)
                {
                    topLayer = hit.transform.gameObject;
                }
            }
        }
        return topLayer;
    }
}
Have you tried the script that's in the official Unity documentation?
https://docs.unity3d.com/ScriptReference/Physics.Raycast.html
using UnityEngine;

// C# example.
public class ExampleClass : MonoBehaviour
{
    void Update()
    {
        // Bit shift the index of the layer (8) to get a bit mask
        int layerMask = 1 << 8;

        // This would cast rays only against colliders in layer 8.
        // But instead we want to collide against everything except layer 8. The ~ operator does this, it inverts a bitmask.
        layerMask = ~layerMask;

        RaycastHit hit;
        // Does the ray intersect any objects excluding the player layer
        if (Physics.Raycast(transform.position, transform.TransformDirection(Vector3.forward), out hit, Mathf.Infinity, layerMask))
        {
            Debug.DrawRay(transform.position, transform.TransformDirection(Vector3.forward) * hit.distance, Color.yellow);
            Debug.Log("Did Hit");
        }
        else
        {
            Debug.DrawRay(transform.position, transform.TransformDirection(Vector3.forward) * 1000, Color.white);
            Debug.Log("Did not Hit");
        }
    }
}
I'm currently integrating a real-time battleships game I wrote with the Oculus Rift.
The raycasting worked fine when I was doing it from the regular camera; however, I am now trying to raycast from the LeftEyeAnchor of the OVRCameraRig to the mouse, with no success. This is the code's current state.
IEnumerator setupPlayer()
{
    int i = 0;
    while (i < 3)
    {
        if (Input.GetKeyDown("s"))
        {
            RaycastHit hit;
            Ray ray = oculusCamera.camera.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out hit, 2000))
            {
                Debug.Log("Yay");
                Vector3 shipSpawn = new Vector3(hit.point.x, hit.point.y + 0.6f, hit.point.z);
                if (validPlayerPosition(shipSpawn)) // call test function here
                {
                    shipClone = Instantiate(ship, shipSpawn, Quaternion.identity) as GameObject;
                    i++;
                }
            }
        }
        yield return null;
    }
}
I spammed the 's' key in an attempt to find somewhere it would hit, but to no avail.
I then wrote a short script and attached it to the LeftEyeAnchor camera in an attempt to track its movements, which reads as follows.
using UnityEngine;
using System.Collections;

public class mousetrack : MonoBehaviour
{
    public Camera oculusCamera;

    // Use this for initialization
    void Start()
    {
    }

    // Update is called once per frame
    void Update()
    {
        Ray ray = oculusCamera.camera.ScreenPointToRay(Input.mousePosition);
        Debug.DrawRay(ray.origin, ray.direction * 100, Color.red);
    }
}
Upon running and moving the headset, it did nothing but project a line straight outwards along the Z axis; by removing the headset I could see the frustum changing in the Unity Scene view, yet the line remained stationary. I searched for issues like my own and came across this thread.
https://forums.oculus.com/viewtopic.php?t=2754
I attempted caliberMengsk's fix, but that made no difference. On every run I am left with multiple, slightly varying instances of the error message:
Screen position out of view frustum (screen pos -1061.000000, 837.000000) (Camera rect 0 0 1036 502)
UnityEngine.Camera:ScreenPointToRay(Vector3)
mousetrack:Update() (at Assets/mousetrack.cs:13)
which points to this line:
Ray ray = oculusCamera.camera.ScreenPointToRay(Input.mousePosition);
Well I am at a loss as to what to do now. Any help on where I'm going wrong would be much appreciated.
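One detail worth noting from that error text: the reported screen position (-1061, 837) lies outside the camera's pixel rect (1036 x 502), which is exactly what makes ScreenPointToRay throw. Below is a sketch of a guard that at least avoids the exception, not a confirmed fix for the Rift setup; oculusCamera is the same field as in the script above.

void Update()
{
    // Only build a ray while the mouse is inside this camera's pixel rect,
    // which is what "Screen position out of view frustum" complains about.
    if (oculusCamera.pixelRect.Contains(Input.mousePosition))
    {
        Ray ray = oculusCamera.ScreenPointToRay(Input.mousePosition);
        Debug.DrawRay(ray.origin, ray.direction * 100, Color.red);
    }
}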
I am new to Unity and I am trying to figure out how to move the camera over a map/terrain using touch input. The camera looks down at the terrain with a rotation of (90, 0, 0), and the terrain is on layer 8. I had no problem getting it moving with the keyboard; now I am moving to touch, and that is very different if you want to keep the expected behaviour on iOS.
The best built-in iOS example I can think of is Maps, where the user touches the screen and that point on the map stays under the finger for as long as the finger stays on the screen, so as the user moves their finger the map appears to move with it. I have not been able to find examples that show how to do it this way. I have seen many examples of moving the camera or a character with the mouse, but they don't seem to translate well to this style.
Also posted on Unity3D Answers:
http://answers.unity3d.com/questions/283159/move-camera-over-terrain-using-touch-input.html
Below should be what you need. Note that it's tricky to get a one-to-one correspondence between the finger/cursor and the terrain when using a perspective camera. If you change your camera to orthographic, the script below should give you a perfect mapping between finger/cursor position and map movement; with perspective you'll notice a slight offset.
You could also do this with raycasting, but I've found that route to be sloppy and not as intuitive.
Camera settings for testing (values are pulled from the inspector so apply them there):
Position: 0,20,0
Orientation: 90,0,0
Projection: Perspective/Orthographic
using UnityEngine;
using System.Collections;

public class ViewDrag : MonoBehaviour
{
    Vector3 hit_position = Vector3.zero;
    Vector3 current_position = Vector3.zero;
    Vector3 camera_position = Vector3.zero;
    float z = 0.0f;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            hit_position = Input.mousePosition;
            camera_position = transform.position;
        }
        if (Input.GetMouseButton(0))
        {
            current_position = Input.mousePosition;
            LeftMouseDrag();
        }
    }

    void LeftMouseDrag()
    {
        // From the Unity3D docs: "The z position is in world units from the camera." In my case I'm using the y-axis as height,
        // with my camera facing back down the y-axis. You can ignore this when the camera is orthographic.
        current_position.z = hit_position.z = camera_position.y;

        // Get the direction of movement. (Note: don't normalize; the magnitude of the change is going to be
        // Vector3.Distance(current_position - hit_position) anyway.)
        Vector3 direction = Camera.main.ScreenToWorldPoint(current_position) - Camera.main.ScreenToWorldPoint(hit_position);

        // Invert the direction so that the terrain appears to move with the mouse.
        direction = direction * -1;

        Vector3 position = camera_position + direction;
        transform.position = position;
    }
}
I've come up with this script (I've attached it to the camera):
private Vector2 worldStartPoint;

void Update()
{
    // only work with one touch
    if (Input.touchCount == 1)
    {
        Touch currentTouch = Input.GetTouch(0);

        if (currentTouch.phase == TouchPhase.Began)
        {
            this.worldStartPoint = this.getWorldPoint(currentTouch.position);
        }

        if (currentTouch.phase == TouchPhase.Moved)
        {
            Vector2 worldDelta = this.getWorldPoint(currentTouch.position) - this.worldStartPoint;

            Camera.main.transform.Translate(
                -worldDelta.x,
                -worldDelta.y,
                0
            );
        }
    }
}

// convert screen point to world point
private Vector2 getWorldPoint(Vector2 screenPoint)
{
    RaycastHit hit;
    Physics.Raycast(Camera.main.ScreenPointToRay(screenPoint), out hit);
    return hit.point;
}
Pavel's answer helped me a lot, so I wanted to share my solution with the community in case it helps others. My scenario is a 3D world with an orthographic camera, a top-down style RTS I am working on. I want pan and zoom to work like Google Maps, where the mouse always stays at the same spot on the map as you pan and zoom. This script achieves that for me and is hopefully robust enough to work for others' needs. I haven't tested it a ton, but I commented the heck out of it for beginners to learn from.
using UnityEngine;

// I usually attach this to my main camera, but in theory you can attach it to any object in the scene, since it uses Camera.main instead of "this".
public class CameraMovement : MonoBehaviour
{
    private Vector3 MouseDownPosition;

    void Update()
    {
        // If the mouse wheel scrolled vertically, apply zoom...
        // TODO: Add pinch-to-zoom support (touch input)
        if (Input.mouseScrollDelta.y != 0)
        {
            // Save the location of the mouse prior to zooming
            var preZoomPosition = getWorldPoint(Input.mousePosition);

            // Apply zoom (you might want to multiply Input.mouseScrollDelta.y by some speed factor for faster/slower zooming)
            Camera.main.orthographicSize = Mathf.Clamp(Camera.main.orthographicSize + Input.mouseScrollDelta.y, 5, 80);

            // How much did the mouse move when we zoomed?
            var delta = getWorldPoint(Input.mousePosition) - preZoomPosition;

            // Rotate the camera to top-down (right angle = 90) before applying the adjustment (otherwise we get "slide" in the direction of the camera angle).
            // TODO: If we allow the camera to rotate on other axes we probably need to adjust those as well. At any rate, the camera must point "straight down" for this part to work.
            var rot = Camera.main.transform.localEulerAngles;
            Camera.main.transform.localEulerAngles = new Vector3(90, rot.y, rot.z);

            // Move the camera by the amount the mouse moved, so the mouse is back in the same position now.
            Camera.main.transform.Translate(delta.x, delta.z, 0);

            // Restore the camera rotation
            Camera.main.transform.localEulerAngles = rot;
        }

        // When the mouse is first pressed, just save the location of the mouse/finger.
        if (Input.GetMouseButtonDown(0))
        {
            MouseDownPosition = getWorldPoint(Input.mousePosition);
        }

        // While the mouse button/finger is down...
        if (Input.GetMouseButton(0))
        {
            // Total distance the finger/mouse has moved while the button is down
            var delta = getWorldPoint(Input.mousePosition) - MouseDownPosition;

            // Adjust the camera by the distance moved, so the mouse/finger stays at the exact location (in the world, since we are using getWorldPoint for everything).
            Camera.main.transform.Translate(delta.x, delta.z, 0);
        }
    }

    // This works by casting a ray. For this to work well, the ray should always hit your "ground". Set up ignore layers if you need to ignore other colliders.
    // Only tested with a simple box collider as the ground (just one flat ground).
    private Vector3 getWorldPoint(Vector2 screenPoint)
    {
        RaycastHit hit;
        Physics.Raycast(Camera.main.ScreenPointToRay(screenPoint), out hit);
        return hit.point;
    }
}
Based on the answer from Pavel, I simplified the script and removed the ugly "jump" that happens when you touch with more than one finger and then release the second finger:
private bool moreThenOneTouch = false;
private Vector3 worldStartPoint;

void Update()
{
    Touch currentTouch;

    // only work with one touch
    if (Input.touchCount == 1 && !moreThenOneTouch)
    {
        currentTouch = Input.GetTouch(0);

        if (currentTouch.phase == TouchPhase.Began)
        {
            this.worldStartPoint = Camera.main.ScreenToWorldPoint(currentTouch.position);
        }

        if (currentTouch.phase == TouchPhase.Moved)
        {
            Vector3 worldDelta = Camera.main.ScreenToWorldPoint(currentTouch.position) - this.worldStartPoint;

            Camera.main.transform.Translate(
                -worldDelta.x,
                -worldDelta.y,
                0
            );
        }
    }

    if (Input.touchCount > 1)
    {
        moreThenOneTouch = true;
    }
    else
    {
        moreThenOneTouch = false;
        if (Input.touchCount == 1)
            this.worldStartPoint = Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position);
    }
}