I created a raycast that shoots from an object (not the main camera) and I want it to hit UI elements. Example below.
In the example image, the UI is outlined in orange, the big grey plane is a test object, the red head is placed at the user's head, and the green line is the DrawRay of my raycast attached to an eye. I am trying to make it so the user looks at a UI button and the raycast can hit that button.
I've been playing around with GraphicRaycaster but nothing seems to work: the grey plane is still being hit by the raycast. I also tried OnPointerEnter, but my eye raycast is not a mouse/finger touch; it would have to act as a second pointer.
Any ideas how to make this work with IsPointerOverGameObject, GraphicRaycaster, or any other method? Or how to create a second pointer that acts as the eye raycast?
Current code below.
private void FixedUpdate()
{
    GraphicRaycaster gr = this.GetComponent<GraphicRaycaster>();
    PointerEventData pointerData = new PointerEventData(EventSystem.current);
    pointerData.position = Input.mousePosition;
    List<RaycastResult> results = new List<RaycastResult>();
    gr.Raycast(pointerData, results);
    //EventSystem.current.RaycastAll(pointerData, results);
    if (results.Count > 0)
    {
        if (results[0].gameObject.tag == "Test tag")
        {
            Debug.Log("test");
        }
    }

    //----- Failed Graphic raycast experiments above; working RaycastHit below --------
    RaycastHit hit;
    Debug.DrawRay(transform.position, transform.forward * -10000f, Color.green);
    if (Physics.Raycast(transform.position, transform.forward * -1, out hit))
    {
        Debug.Log(hit.transform.name);
    }
}
This is actually much more complicated if you want to do proper interaction with the canvas elements.
I suggest you do not try to roll your own solution, but instead look at some already-made solutions. Most VR toolkits have their own implementations; you could look into VRTK's example scene 007 - Interactions. You don't have to use VR to use it - just use the fallback SDK.
For more details on how this works, I might point you to the article from Oculus: Unity Sample Framework (a bit out of date).
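That said, if you do want to experiment with driving a GraphicRaycaster from the eye's ray yourself before adopting a toolkit, here is a rough, untested sketch of one common approach: project a point along the gaze ray into the screen space of the camera that renders the canvas and use that as the pointer position. The field names (eventCamera, eyeTransform, raycaster) are placeholders, and this only behaves sensibly when the gaze origin sits close to that camera (e.g. an eye on the user's head):
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

public class EyeUIRaycaster : MonoBehaviour
{
    public Camera eventCamera;          // the camera assigned to the canvas
    public Transform eyeTransform;      // origin/direction of the gaze ray
    public GraphicRaycaster raycaster;  // GraphicRaycaster on the target canvas

    void Update()
    {
        // Project a point far along the gaze ray into screen space.
        Vector3 worldPoint = eyeTransform.position + eyeTransform.forward * 100f;
        Vector3 screenPoint = eventCamera.WorldToScreenPoint(worldPoint);

        var pointerData = new PointerEventData(EventSystem.current) { position = screenPoint };
        var results = new List<RaycastResult>();
        raycaster.Raycast(pointerData, results);

        if (results.Count > 0)
        {
            Debug.Log("Gaze is over UI element: " + results[0].gameObject.name);
        }
    }
}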
Related
I am trying to learn Unity. I made my first own game and got stuck right at the beginning. The first idea was to drop a box (cube) at the mouse position. There are many videos and posts about getting the mouse position, and I tried to use them. My problem is that the mouse position I get is at the camera's position instead of on the plane.
As you can see, it kind of works, but the cube doesn't fall onto the plane.
https://prnt.sc/lmrmcl
My code:
void Update()
{
    Wall();
}

void Wall()
{
    if (Input.GetMouseButtonDown(0))
    {
        wall = GameObject.CreatePrimitive(PrimitiveType.Cube);
        Rigidbody wallsRigidbody = wall.AddComponent<Rigidbody>();
        wall.transform.localScale = new Vector3(0.6f, 0.6f, 0.6f);
        wallsRigidbody.mass = 1f;
        wallsRigidbody.angularDrag = 0.05f;
        wallsRigidbody.useGravity = true;
        wallsRigidbody.constraints = RigidbodyConstraints.FreezeRotation;
        wall.transform.position = Camera.main.ScreenToWorldPoint(Input.mousePosition);

        Debug.Log(Camera.main.ScreenToWorldPoint(Input.mousePosition));
        BoxCollider wallsCollider = wall.AddComponent<BoxCollider>();
        wallsCollider.size = new Vector3(1f, 1f, 1f);
    }
}
How should I change my code to get the right position?
This isn't a direct answer to your question, but I'm hoping it'll still get you where you need to go.
Prefabs are your friends! I'd highly recommend leveraging them here instead of constructing a cube directly in code.
But first, make sure everything else is set up right. Go ahead and construct a cube by hand in the Editor and make sure that when you hit Play, it falls as you expect. It should, provided it has a Rigidbody, collider, and you have gravity enabled (true by default).
If that works, drag that cube from your Hierarchy view into a folder in the Project view. This creates a prefab. You can now delete the cube from the Hierarchy view.
Update your script to have a public GameObject field, e.g.
public GameObject cubeToCreate;
Then, in the Inspector pane for whatever gameobject has that script attached, you should get a new field, "Cube To Create". Drag your prefab cube into that slot.
Lastly...update your code to wall = Instantiate(cubeToCreate). You'll still need to update the position, but you should be able to drop the rest of the initialization logic you have above, e.g. setting mass and drag.
As for the actual problem, the first thing that concerns me is how do you plan on turning a 2d mouse click into a 3d point? For the axis going "into" the screen...how should the game determine the value for that?
Camera.main.ScreenToWorldPoint accepts a Vector3, and Input.mousePosition is a Vector3 whose z is 0, so the resulting point is 0 units from the camera -- in other words, it lies in a plane that passes through the camera itself.
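For example (the 10f here is an arbitrary, hypothetical distance), giving the screen point an explicit z places the resulting world point that many units in front of the camera:
Vector3 mouse = Input.mousePosition;
// z is interpreted as the distance from the camera, in world units
Vector3 tenUnitsOut = Camera.main.ScreenToWorldPoint(new Vector3(mouse.x, mouse.y, 10f));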
I haven't done this, but I think you'll need to do raycasting of some sort. Maybe create an invisible 2d plane with a collider on it, and cast a physics ray from the camera, and wherever it hits that plane, that's the point where you want to create your cube. This post has a couple hints, though it's geared toward 2D. That might all be overkill, too -- if you create a new Vector3 and initialize it with your mouse position, you can set the z coordinate to whatever you want, but then your cube creation will be in terms of distance from the camera, which is not the best idea.
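If you do go the raycasting route, a minimal sketch might look like the following (assuming the ground plane has a collider, and reusing the cubeToCreate prefab field from above):
void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        // Cast a ray from the camera through the mouse position.
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            // Spawn a little above the hit point so the cube can fall onto the plane.
            Instantiate(cubeToCreate, hit.point + Vector3.up * 2f, Quaternion.identity);
        }
    }
}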
Hope this helps.
So I have this mesh that I generate using Perlin noise.
What I want to do is be able to click to place an object in the game. My ultimate goal is to have a menu so you can place different objects, but for now I just want to get a cube to work. I tried raycasting:
public class Raycast : MonoBehaviour
{
    Ray myRay;       // initializing the ray
    RaycastHit hit;  // initializing the raycasthit
    public GameObject objectToinstantiate;

    // Update is called once per frame
    void Update()
    {
        myRay = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(myRay, out hit))
        {
            if (Input.GetMouseButtonDown(0))
            {
                Instantiate(objectToinstantiate, hit.point, Quaternion.identity);
                Debug.Log(hit.point);
            }
        }
    }
}
I could not get this to work... I had no real idea how to use this script either (like what object to place it on). I tried just making a cube and putting the script on that, but it did not work. I got no errors; it just did not work.
I also had a mesh collider on my mesh and a box collider on my cube.
I see that you are following the "Procedural Landmass Generation" tutorial on YouTube, which I would say is an excellent way to understand the basics of procedural content generation.
For your question, I would suggest you add a camera on top of the terrain, from which you will be able to point with your mouse at where you want to instantiate the object (the cube, for example). You need a function which raycasts to your land tiles, checks whether the hit gameobject's tag is "Terrain", and checks that there is no water at that location. If the terrain check returns true and the water check returns false, instantiate your desired object at the "hit" position of the raycast.
You can put the script on any object you want; I'd propose the camera. After adding this script component to the camera, drag the mesh that you want to instantiate into the "objectToinstantiate" slot you see in the script.
Then see if it works, and for performance move the if (Input.GetMouseButtonDown(0)) up in your code, before you do the raycast, like #Basile said.
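Putting that together, a rough sketch of the suggested structure could look like this (the "Terrain" tag and the water check are assumptions carried over from above; adapt the water test to however your generator marks water):
void Update()
{
    // Check input first so the raycast only runs on the frame the mouse is clicked.
    if (Input.GetMouseButtonDown(0))
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit) && hit.collider.CompareTag("Terrain"))
        {
            // TODO: also skip positions covered by water before placing anything.
            Instantiate(objectToinstantiate, hit.point, Quaternion.identity);
        }
    }
}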
Note: This script will work in play-mode on the game-view - not in the editor scene-view.
If you'd like this to work too, you should add [ExecuteInEditMode] above the public class ... line. But be careful with those, it's sometimes hard to stop those scripts :P
I am attempting to do something seemingly simple using the Oculus VR packages OVRAvatar, OVRCameraRig, and OVRInput in Unity 5.6.1 and Visual Studio C#.
Goal: I want to take the raycast from my Oculus Touch controller's pointer finger and intersect a plane, in order to apply pixels to the plane's texture.
I was able to get the camera's raycast to intersect the plane using OVRCameraRig like so:
void Update()
{
    RaycastHit hit;
    Vector3 fwd = ovrCamera.transform.TransformDirection(Vector3.forward);
    if (Physics.Raycast(ovrCamera.transform.position, fwd, out hit, 10000))
    {
        if (hit.collider.tag == "Plane")
        {
            print("Hit the plane!");
            // apply some pixels to plane texture
            // setPixels(plane.texture)
        }
    }

    // EDIT: I changed the unit length from 1 to 100,000 and now I am able to see the rays drawn in the scene view only.
    // NOTE: this line below does not work in Unity's scene view or game view
    Debug.DrawRay(ovrCamera.transform.position, ovrCamera.transform.forward, Color.black, 1);
}
Aside from the Debug.DrawRay call not working, the code which checks whether the OVRCamera's raycast hits the plane does in fact work.
However, now that I am trying to test the raycast hit from OVRInput.Controller.RTouch, I am unable to see the drawn ray, and the hit-collider check is not triggered.
I have added the script below as a component to OVRCameraRig's RightHandAnchor:
void Update()
{
    RaycastHit hit;
    Vector3 fwd = transform.TransformDirection(Vector3.forward);
    if (Physics.Raycast(transform.position, fwd, out hit, 10000))
    {
        if (hit.collider.tag == "Plane")
        {
            print("Hit plane with touch controller's raycast!");
        }
    }

    // EDIT: I changed the unit length from 1 to 100,000 and now I am able to see the rays drawn in the scene view only.
    // NOTE: this line below does not work in Unity's scene view or game view
    Debug.DrawRay(transform.position, transform.forward, Color.black, 1);
}
Needless to say, the Debug.DrawRay call does not work here either. I can't tell whether the raycast from the touch controller is actually being tracked.
Any help or insight is appreciated, thanks!
Thanks to Draco in the comments referring me to the LineRenderer class.
Here is a link to a script that helped me draw ray lines in the game world space: http://answers.unity3d.com/questions/1142318/making-raycast-line-visibld.html
I changed the unit length from 1 to 100,000 and the color from black to blue in the Debug.DrawRay call, and now I am able to see the rays drawn, in the scene view only.
Debug.DrawRay(transform.position, transform.forward, Color.blue, 100000);
Also, I am now able to track the touch controller's ray by applying the second script from my question (the one using transform.position and transform.forward) to the game object LocalAvatar > controller_right.
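For reference, here is a rough sketch of the LineRenderer idea (note that the fourth argument to Debug.DrawRay is a duration in seconds, not a length, which is why the huge value only makes the rays pile up in the scene view). It assumes a LineRenderer component has been added to the same GameObject, e.g. the right hand anchor or controller_right:
using UnityEngine;

public class ControllerRayVisual : MonoBehaviour
{
    public float rayLength = 100f;
    private LineRenderer line;

    void Start()
    {
        line = GetComponent<LineRenderer>();
        line.positionCount = 2;
        line.startWidth = 0.01f;
        line.endWidth = 0.01f;
    }

    void Update()
    {
        // Draw the controller's forward ray in world space so it is
        // visible in the built game, not just the editor scene view.
        line.SetPosition(0, transform.position);
        line.SetPosition(1, transform.position + transform.forward * rayLength);
    }
}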
I am making a minimap using a render texture to render my overhead camera to the screen, and I draw that texture using NGUI 3.9. Everything is fine, apart from this: when I click on my minimap and try to destroy an object, or to get the point on my game floor (which is in world space, rendered by the main camera), the raycast is not reaching it, and I don't know what I am doing wrong.
Edit:
I am using NGUI because the whole GUI of my game is in NGUI, so it is a requirement.
I have changed my code to a new approach; with this I am able to cast a ray onto my floor, but the position of the ray is not accurate.
Here is my code:
public void TestWithTextureCoords()
{
    Camera main = UICamera.mainCamera;
    Camera mapCam = transform.GetComponent<Camera>();

    Ray firstRay = main.ScreenPointToRay(Input.mousePosition);
    Debug.DrawLine(firstRay.origin, firstRay.direction);

    RaycastHit textHit;
    if (Physics.Raycast(firstRay, out textHit, main.farClipPlane))
    {
        var hitPoint = textHit.point;
        Ray scondRay = mapCam.ViewportPointToRay(hitPoint);

        RaycastHit worldHit;
        if (Physics.Raycast(scondRay, out worldHit, mapCam.farClipPlane))
        {
            Debug.DrawLine(scondRay.origin, worldHit.point, Color.red);
            Debug.Log(" world hit point " + worldHit.point);
            Debug.Log(worldHit.transform.name, worldHit.transform);
        }
    }
}
Please look into it and let me know a workaround, or how this is usually done. Maybe I am missing some small thing I have no idea about. I have tried different kinds of approaches, but all in vain. Help!
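One thing worth double-checking: since the method is named TestWithTextureCoords, the second ray probably needs to be built from the texture coordinate of the first hit rather than from its world-space point. A rough, untested sketch of that idea, assuming the minimap quad has a MeshCollider (RaycastHit.textureCoord requires one) and its UVs map 1:1 onto the minimap camera's render texture:
public void TestWithTextureCoords()
{
    Camera main = UICamera.mainCamera;
    Camera mapCam = GetComponent<Camera>();

    Ray firstRay = main.ScreenPointToRay(Input.mousePosition);
    RaycastHit textHit;
    if (Physics.Raycast(firstRay, out textHit, main.farClipPlane))
    {
        // textureCoord is a 0..1 UV on the minimap quad, which can be used
        // directly as a viewport point on the minimap camera.
        Vector2 uv = textHit.textureCoord;
        Ray secondRay = mapCam.ViewportPointToRay(new Vector3(uv.x, uv.y, 0f));

        RaycastHit worldHit;
        if (Physics.Raycast(secondRay, out worldHit, mapCam.farClipPlane))
        {
            Debug.Log("World hit point: " + worldHit.point);
        }
    }
}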
I am new to Vuforia.
The gameobject to which the script is added is a 3D object which is made visible on a user-defined target image.
I know this is not a new question, and I have gone through every thread/post on the official Vuforia discussion board on this matter, but the problem still persists. And the problem seems very fundamental.
I have the following script attached to my gameobject:
void Update()
{
    if (Input.touchCount == 1)
    {
        // Touches performed on screen
        Ray ray;
        RaycastHit hit;
        Debug.Log("2");
        if (Camera.main != null)
        {
            Debug.Log("3");
            ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            hit = new RaycastHit();
            Debug.Log("33");
            if (Physics.Raycast(ray, out hit))
            {
                Debug.Log("4");
            }
        }
    }
}
When I run the scene and touch on the gameobject, the Debug Console shows
2
3
33
BUT NOT 4. Somehow this ray doesn't hit the object.
This script works fine with a normal camera. Could anyone please shed some light on this?
Thanks
(As far as I can tell) Vuforia doesn't use the ARCamera for collision detection. Instead there is another 'Background Camera' (you can see it if you run your app in Unity and pause it; you'll find it in the Hierarchy pane). To access it, use
Camera.allCameras[0]
instead of
Camera.main
Hope that helps
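For example, a minimal sketch of that change applied to the Update method from the question (only the camera lookup differs; everything else stays the same):
void Update()
{
    if (Input.touchCount == 1 && Camera.allCameras.Length > 0)
    {
        // Use the first camera in the scene instead of Camera.main.
        Camera cam = Camera.allCameras[0];
        Ray ray = cam.ScreenPointToRay(Input.GetTouch(0).position);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            Debug.Log("Hit " + hit.transform.name);
        }
    }
}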
I think this is a bug between the Collider class and the ARCamera, but the workaround is this:
Create a new scene.
Create a Cube or any gameobject with a collider component.
Do NOT delete the Cube for any reason.
Test with any hit method (touch or mouse):
using System.Collections;
using UnityEngine;

public class rayoPrueba : MonoBehaviour
{
    void Start() { print("entered"); }

    void Update()
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, 100))
            print("It hit!");
    }
}
Replace the main camera with the ARCamera.
Test again.
Put your Cube inside the ImageTarget, next to the real model.
Delete the Cube and let's dance!! I don't know why, but the bug is killed by this... (Y)
The trick is... never let the scene be without a gameObject that has a Collider component.
If you are trying to register a hit with a raycast on a 3D model, make sure you add a Box Collider component to the 3D model.
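As a small, hypothetical illustration of that last point (assuming model is the GameObject of your 3D model), you can add the collider at runtime if it is missing:
if (model.GetComponent<Collider>() == null)
{
    // Unity sizes a new BoxCollider to the attached mesh's bounds;
    // adjust center/size afterwards if the fit isn't right.
    model.AddComponent<BoxCollider>();
}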